Llama-Recipes Quickstart

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks covering different techniques for working with Meta Llama.

  • The Running_Llama3_Anywhere notebooks demonstrate how to run Llama inference on Linux, Mac, and Windows using the appropriate tooling for each platform; a minimal sketch of local inference follows this list.
  • The Prompt_Engineering_with_Llama_3 notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters; a sketch of varying those parameters appears after this list.
  • The inference folder contains scripts to deploy Llama for inference on servers and mobile devices. See also 3p_integrations/vllm and 3p_integrations/tgi for hosting Llama on open-source model servers; a sample client request against such a server is sketched below.
  • The RAG folder contains a simple Retrieval-Augmented Generation application built with Llama; a toy end-to-end RAG sketch follows this list.
  • The finetuning folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups (example launch commands are sketched after this list). The scripts use the native llama-recipes finetuning code found in finetuning.py, which supports these features:
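
The snippet below is a minimal sketch of running Llama locally with Hugging Face transformers, in the spirit of the Running_Llama3_Anywhere notebooks (it does not reproduce their code). It assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint, a prior `huggingface-cli login`, and a recent transformers release that accepts chat messages in the text-generation pipeline:

```python
# Minimal sketch: run Llama 3 locally via Hugging Face transformers.
# Assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct repo.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,  # fall back to float16/float32 if bf16 is unsupported
    device_map="auto",           # selects CUDA, MPS (Mac), or CPU automatically
)

messages = [{"role": "user", "content": "Explain retrieval-augmented generation in one sentence."}]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # assistant reply is the last chat turn
```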
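
To get a feel for generation parameters, as the Prompt_Engineering_with_Llama_3 notebook explores in far more depth, this sketch reuses the `generator` pipeline from the previous example and varies only the temperature:

```python
# Minimal sketch: same prompt, different sampling parameters.
# Reuses the `generator` pipeline defined in the previous example.
prompt = [{"role": "user", "content": "Suggest a name for a coffee shop."}]

for temperature in (0.2, 1.0):
    result = generator(
        prompt,
        do_sample=True,
        temperature=temperature,  # lower = more focused, higher = more varied
        top_p=0.9,                # nucleus sampling: keep the top 90% probability mass
        max_new_tokens=32,
    )
    print(f"temperature={temperature}: {result[0]['generated_text'][-1]['content']}")
```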
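
For the server-hosting route, a vLLM deployment exposes an OpenAI-compatible API. The sketch below assumes a server started with `vllm serve meta-llama/Meta-Llama-3-8B-Instruct` on the default port; adjust the base URL and model name to match your deployment:

```python
# Minimal sketch: query a Llama model served by vLLM's OpenAI-compatible server.
# Base URL and model name are assumptions; match them to your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Hello, Llama!"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```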
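
The essence of the RAG folder's approach (retrieve relevant context, then generate an answer grounded in it) can be sketched in a few lines. The embedding model and the two-document "store" below are illustrative assumptions, not the folder's actual code:

```python
# Toy RAG sketch: embed documents, retrieve the closest one to the question,
# and prompt Llama (the `generator` pipeline from above) to answer from it.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Llama 3 was released by Meta in April 2024.",
    "Retrieval-Augmented Generation pairs a retriever with a generator model.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(docs, convert_to_tensor=True)

question = "Who released Llama 3?"
query_embedding = embedder.encode(question, convert_to_tensor=True)
best = int(util.cos_sim(query_embedding, doc_embeddings).argmax())  # cosine-similarity retrieval

rag_prompt = [{
    "role": "user",
    "content": f"Answer using only this context:\n{docs[best]}\n\nQuestion: {question}",
}]
print(generator(rag_prompt, max_new_tokens=64)[0]["generated_text"][-1]["content"])
```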
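
As a rough guide to launching finetuning.py, the commands below follow the patterns documented in the llama-recipes finetuning README; flag names can differ between versions, so check finetuning.py and its configs in your checkout before running:

```bash
# Single-GPU PEFT (LoRA) run; model name and output dir are placeholders.
python -m llama_recipes.finetuning \
    --use_peft --peft_method lora \
    --model_name meta-llama/Meta-Llama-3-8B \
    --output_dir ./peft-checkpoint

# Multi-GPU run with FSDP, assuming 4 GPUs on one node.
torchrun --nnodes 1 --nproc_per_node 4 -m llama_recipes.finetuning \
    --enable_fsdp --use_peft --peft_method lora \
    --model_name meta-llama/Meta-Llama-3-8B \
    --output_dir ./peft-checkpoint
```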