Llama-Recipes Quickstart

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks covering a range of techniques for working with Meta Llama.

  • The Build_with_Llama 3.2 notebook provides a comprehensive walkthrough of the new capabilities of Llama 3.2 models, including multimodal use cases, function/tool calling, Llama Stack, and Llama on edge.
  • The Running_Llama3_Anywhere notebooks demonstrate how to run Llama inference on Linux, Mac, and Windows using the appropriate tooling for each platform.
  • The Prompt_Engineering_with_Llama notebook showcases the various ways to elicit appropriate outputs from Llama. Take it for a spin to get a feel for how Llama responds to different inputs and generation parameters; a minimal prompting sketch appears after this list.
  • The inference folder contains scripts to deploy Llama for inference on servers and on mobile. See also 3p_integrations/vllm and 3p_integrations/tgi for hosting Llama on open-source model servers; a short vLLM sketch also appears below.
  • The RAG folder contains a simple Retrieval-Augmented Generation application built with Llama; the retrieve-then-prompt core of such an application is sketched below.
  • The finetuning folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-recipes finetuning code found in finetuning.py, which supports the features described below; a minimal LoRA sketch is also included after this list.
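
To give a flavor of what the Prompt_Engineering_with_Llama notebook covers, here is a minimal prompting sketch using the Hugging Face transformers pipeline. The checkpoint name is an assumption (any Llama 3 instruct checkpoint you have access to will work), and the chat-style input requires a recent transformers version.

```python
# Minimal prompting sketch: ask a question and vary the sampling settings to
# see how the answer changes. Requires `pip install transformers torch` and
# access to a Llama 3 checkpoint on the Hugging Face Hub (assumed name below).
import torch
import transformers

pipe = transformers.pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Answer in one short sentence."},
    {"role": "user", "content": "What is a context window?"},
]

# Higher temperature / top_p -> more varied output; lower -> more deterministic.
out = pipe(messages, max_new_tokens=64, do_sample=True, temperature=0.6, top_p=0.9)
print(out[0]["generated_text"][-1]["content"])  # last message is the model's reply
```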
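
For the server-side inference path, the sketch below uses vLLM's offline batch API, one of the open-source model servers referenced above (see 3p_integrations/vllm for complete hosting examples). The model name is again an assumption.

```python
# Offline batch inference with vLLM. Requires `pip install vllm` and a GPU.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # assumed checkpoint
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

# vLLM batches prompts internally for high-throughput generation.
outputs = llm.generate(["Write a haiku about GPUs."], params)
for out in outputs:
    print(out.outputs[0].text)
```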
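
The RAG application itself lives in the RAG folder; the toy sketch below only illustrates the retrieve-then-prompt loop at its core. The embedding model, documents, and query are illustrative assumptions, not taken from the notebook.

```python
# Toy retrieve-then-prompt loop: embed documents, find the closest match to
# the query, and build a grounded prompt. Requires `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
docs = [
    "Llama 3 ships in 8B and 70B parameter sizes.",
    "RAG grounds model answers in retrieved documents.",
]
doc_emb = embedder.encode(docs, convert_to_tensor=True)

query = "What sizes does Llama 3 come in?"
q_emb = embedder.encode(query, convert_to_tensor=True)
best = util.cos_sim(q_emb, doc_emb).argmax().item()  # index of most similar doc

# The retrieved passage becomes context for generation (e.g. with the
# pipeline from the prompting sketch above).
prompt = f"Context: {docs[best]}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```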
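
Finally, a rough sketch of the parameter-efficient (LoRA) finetuning idea the finetuning scripts support, written directly against transformers and peft rather than through the repo's finetuning.py entry point; the model name and hyperparameters are assumptions.

```python
# Rough LoRA finetuning sketch with transformers + peft: keep the base model
# frozen and train small adapter matrices only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

lora_config = LoraConfig(
    r=8,                                   # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights is trainable

# From here, train with your own loop or transformers.Trainer on a custom dataset.
```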