Welcome to the official repository for getting started with inference, fine-tuning, and end-to-end use cases built with the Llama model family.
This repository covers the most popular community approaches, use cases, and the latest recipes for Llama text and vision models.
> [!TIP]
> Popular getting started links:
> - Build with Llama Tutorial
> - Multimodal Inference with Llama 3.2 Vision
> - Inference using Llama Guard (Safety Model)
> [!TIP]
> Popular end-to-end recipes:
Note: We recently refactored the repo; archive-main is a snapshot branch from before the refactor.
A: We recently renamed llama-recipes to llama-cookbook.
A: Llama 3.2 follows the same prompt template as Llama 3.1, with a new special token `<|image|>` representing the input image for the multimodal models.
More details on the prompt templates for image reasoning, tool-calling and code interpreter can be found on the documentation website.
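As a rough illustration, the text side of a multimodal turn mirrors Llama 3.1's header/eot structure, with `<|image|>` marking where the processor injects the image. The sketch below is a plain string-assembly example based on the published Llama 3.x chat format, not code from this repo; verify the exact tokens against the model card and documentation before relying on it.

```python
def build_vision_prompt(user_text: str) -> str:
    """Assemble a single-turn multimodal prompt in the Llama 3.x chat format.

    The `<|image|>` special token marks where the image embedding is injected;
    the surrounding header/eot tokens follow the Llama 3.1 text template.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"<|image|>{user_text}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_vision_prompt("Describe this image in two sentences.")
print(prompt)
```

In practice you would let the model's processor or tokenizer apply the chat template for you; building the string by hand is mainly useful for understanding what the template produces.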
A: Check out the Fine-Tuning FAQ here.
A: We recently refactored the repo; archive-main is a snapshot branch from before the refactor.
A: See the official Llama models website.
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests to us.
See the License file for Meta Llama 3.2 here and Acceptable Use Policy here
See the License file for Meta Llama 3.1 here and Acceptable Use Policy here
See the License file for Meta Llama 3 here and Acceptable Use Policy here
See the License file for Meta Llama 2 here and Acceptable Use Policy here