## Llama-Recipes Quickstart

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.

* The Build_with_Llama 3.2 notebook provides a comprehensive walkthrough of the new capabilities of Llama 3.2 models, including multimodal use cases, function/tool calling, Llama Stack, and Llama on edge devices.
* The Running_Llama_Anywhere notebooks demonstrate how to run Llama inference on Linux, Mac, and Windows using the appropriate tooling for each platform.
* The Prompt_Engineering_with_Llama notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
* The inference folder contains scripts to deploy Llama for inference on servers and mobile devices. See also 3p_integrations/vllm and 3p_integrations/tgi for hosting Llama on open-source model servers.
* The RAG folder contains a simple Retrieval-Augmented Generation application built with Llama.
* The finetuning folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-recipes finetuning code found in finetuning.py, which supports these features: