Welcome to the official repository for helping you get started with inference, fine-tuning, and end-to-end use cases for building with the Llama model family.
The examples in this repository cover the most popular community approaches and use cases, including the latest Llama 3.2 Vision and Llama 3.2 Text models.
> [!TIP]
> Popular getting started links:
> - Build with Llama Notebook
> - Multimodal Inference with Llama 3.2 Vision (a minimal inference sketch follows this list)
> - Inference on Llama Guard 1B + Multimodal inference on Llama Guard 11B-Vision
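
As a quick orientation to the multimodal inference link above, here is a minimal sketch of Llama 3.2 Vision inference using the Hugging Face `transformers` `MllamaForConditionalGeneration` API. It assumes you have been granted access to the gated `meta-llama/Llama-3.2-11B-Vision-Instruct` checkpoint; the local image path and question text are hypothetical placeholders, and the linked notebook remains the authoritative walkthrough.

```python
# Minimal sketch: multimodal inference with Llama 3.2 Vision via Hugging Face transformers.
# Assumes access to the gated meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint;
# "example.jpg" is a hypothetical local image.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]}
]

# The processor's chat template inserts the <|image|> special token for the attached image.
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```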
> [!TIP]
> Popular end-to-end recipes:

Note: We recently refactored the repo; `archive-main` is a snapshot branch from before the refactor.
A: Llama 3.2 follows the same prompt template as Llama 3.1, with a new special token `<|image|>` representing the input image for the multimodal models.
More details on the prompt templates for image reasoning, tool calling, and code interpreter can be found on the documentation website.
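
For orientation, here is a minimal sketch of what a single-turn multimodal prompt string looks like with the `<|image|>` token. The question text is a hypothetical example; the documentation website linked above remains the authoritative reference for the full templates.

```python
# Sketch of a single-turn Llama 3.2 Vision prompt string.
# The header and <|eot_id|> tokens follow the Llama 3.1 chat format; the <|image|>
# token marks where the input image is consumed. The question text is hypothetical.
prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "<|image|>What does this chart show?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
```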
A: Check out the Fine-Tuning FAQ here.
A: We recently refactored the repo; `archive-main` is a snapshot branch from before the refactor.
A: Official Llama models website
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.
See the License file for Meta Llama 3.2 here and Acceptable Use Policy here
See the License file for Meta Llama 3.1 here and Acceptable Use Policy here
See the License file for Meta Llama 3 here and Acceptable Use Policy here
See the License file for Meta Llama 2 here and Acceptable Use Policy here