Llama-Recipes Quickstart

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks covering different techniques for working with Meta Llama.

  • The Running_Llama3_Anywhere notebooks demonstrate how to run Llama inference on Linux, macOS, and Windows using the appropriate tooling for each platform.
  • The Prompt_Engineering_with_Llama notebook showcases ways to elicit the outputs you want from Llama. Take it for a spin to get a feel for how Llama responds to different inputs and generation parameters; a minimal inference-and-sampling sketch follows this list.
  • The inference folder contains scripts to deploy Llama for inference on servers and mobile devices. See also 3p_integrations/vllm and 3p_integrations/tgi for hosting Llama on open-source model servers; a short vLLM sketch appears after this list.
  • The RAG folder contains a simple Retrieval-Augmented Generation application using Llama; the core retrieve-then-generate pattern is sketched after this list.
  • The finetuning folder contains resources to help you finetune Llama on your custom datasets, on both single- and multi-GPU setups. The scripts use the native llama-recipes finetuning code found in finetuning.py, which supports these features (see the feature table in the main repository README); a minimal single-GPU LoRA invocation is sketched after this list.
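
The sketch below is a minimal, illustrative example of local Llama inference with Hugging Face transformers, in the spirit of the Running_Llama3_Anywhere and Prompt_Engineering_with_Llama notebooks. The model name, prompt, and sampling values are assumptions rather than values taken from the notebooks, and the gated meta-llama checkpoint requires approved access on Hugging Face.

```python
# Minimal sketch: local inference with the transformers pipeline, varying
# generation parameters. Model name and sampling values are illustrative.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed gated checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain retrieval-augmented generation in one sentence."}
]

# Lower temperature makes outputs more deterministic; top_p restricts sampling
# to the most probable tokens. Re-run with different values to compare outputs.
result = generator(
    messages,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(result[0]["generated_text"][-1]["content"])
```

Re-running the same prompt with different temperature and top_p values is a quick way to see how generation parameters change Llama's responses.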
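
For server-side deployment, here is a hedged sketch of batched offline inference with vLLM, one of the open-source model servers referenced above; the checkpoint name and sampling settings are assumptions, not values from the recipes.

```python
# Minimal sketch: batched offline inference with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # assumed checkpoint
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

prompts = [
    "Summarize what retrieval-augmented generation is.",
    "List three on-device use cases for small language models.",
]

# vLLM batches and schedules the prompts internally.
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```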
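
As a complement to the RAG application, the following self-contained sketch shows the retrieve-then-generate pattern. It assumes sentence-transformers is installed; the toy documents are made up for illustration, and the final generation step is left as a placeholder for whichever Llama inference path you use.

```python
# Minimal RAG sketch: embed documents, retrieve the closest one for a
# question, and build a grounded prompt for Llama.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Retrieval-Augmented Generation grounds model answers in retrieved text.",
    "LoRA finetuning updates a small set of low-rank adapter weights.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_embeddings = embedder.encode(docs, convert_to_tensor=True)

def retrieve(question, k=1):
    # Embed the question and return the k most similar documents.
    query_embedding = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, doc_embeddings, top_k=k)[0]
    return [docs[hit["corpus_id"]] for hit in hits]

question = "What does RAG do?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Placeholder: pass `prompt` to any Llama inference path,
# e.g. the transformers pipeline sketched above.
print(prompt)
```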
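
Finally, a hedged sketch of a single-GPU LoRA run through the llama-recipes Python entry point. The keyword arguments mirror the package's training config, but treat the exact names, dataset, and paths as assumptions and check finetuning.py and the configs for the authoritative options.

```python
# Minimal sketch: single-GPU parameter-efficient (LoRA) finetuning via the
# llama-recipes entry point. Argument names follow the package's training
# config; values are illustrative.
from llama_recipes.finetuning import main

main(
    model_name="meta-llama/Meta-Llama-3-8B",  # assumed checkpoint or local path
    use_peft=True,               # parameter-efficient finetuning
    peft_method="lora",
    dataset="samsum_dataset",    # one of the bundled example datasets
    output_dir="./peft-output",
    num_epochs=1,
)
```

Multi-GPU setups are launched differently (for example with torchrun and FSDP-related options); see the finetuning folder for those recipes.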