Llama Cookbook: The Official Guide to building with Llama Models

Welcome to the official repository for helping you get started with inference, fine-tuning, and end-to-end use cases for the Llama model family.

This repository covers the most popular community approaches, use cases, and the latest recipes for Llama text and vision models.

[!TIP] Popular getting-started links:

[!TIP] Popular end-to-end recipes:

Note: We recently refactored the repo; archive-main is a snapshot branch from before the refactor.

Repository Structure:

  • 3P Integrations: getting-started recipes and end-to-end use cases from various Llama providers
  • End to End Use Cases: as the name suggests, complete applications spanning various domains
  • Getting Started: reference examples for inference, fine-tuning, and RAG
  • src: source for the original llama-recipes library, along with some FAQs on fine-tuning.

FAQ:

  • Q: What happened to llama-recipes?

A: We recently renamed llama-recipes to llama-cookbook.

  • Q: Are there prompt template changes for multimodality?

A: Llama 3.2 follows the same prompt template as Llama 3.1, with a new special token <|image|> representing the input image for the multimodal models.

More details on the prompt templates for image reasoning, tool-calling and code interpreter can be found on the documentation website.
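For illustration, here is a minimal sketch of what an image-reasoning prompt might look like, assuming the Llama 3.1 header/<|eot_id|> structure described above with the <|image|> token placed at the start of the user turn; the documentation website remains the authoritative reference for the exact template.

```python
# Sketch only: shows where the <|image|> token sits in a Llama 3.2 vision prompt,
# assuming the same chat format as Llama 3.1. Check the official docs before use.
user_question = "Describe this image."

prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "<|image|>"          # special token standing in for the input image
    f"{user_question}"
    "<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(prompt)
```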

  • Q: I have some questions about fine-tuning; is there a section that addresses these?

A: Check out the fine-tuning FAQ here.

  • Q: Why are some links broken or folders missing?

A: We recently refactored the repo; archive-main is a snapshot branch from before the refactor.

  • Q: Where can we find details about the latest models?

A: Official Llama models website

Contributing

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests to us.

License

See the License file for Meta Llama 3.2 here and Acceptable Use Policy here

See the License file for Meta Llama 3.1 here and Acceptable Use Policy here

See the License file for Meta Llama 3 here and Acceptable Use Policy here

See the License file for Meta Llama 2 here and Acceptable Use Policy here