Llama-Recipes Quickstart

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory notebooks covering different techniques for working with Meta Llama.

  • The Running_Llama3_Anywhere notebooks demonstrate how to run Llama inference across Linux, Mac and Windows platforms using the appropriate tooling.
  • The Prompt_Engineering_with_Llama_3 notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters (a minimal inference sketch follows this list).
  • The inference folder contains scripts to deploy Llama for inference on server and mobile. See also 3p_integrations/vllm and 3p_integrations/tgi for hosting Llama on open-source model servers.
  • The RAG folder contains a simple Retrieval-Augmented Generation application using Llama (a minimal retrieve-then-generate sketch follows this list).
  • The finetuning folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-recipes finetuning code found in finetuning.py, which supports the features listed in the main README (a minimal LoRA sketch follows this list).
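
To give a feel for basic inference and how generation parameters shape the output, here is a minimal sketch using the Hugging Face transformers pipeline. It is not the notebooks' exact code; it assumes you have been granted access to the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint and have logged in with the Hugging Face CLI.

```python
# Minimal Llama 3 chat inference sketch (assumes access to the gated model
# on Hugging Face and a prior `huggingface-cli login`).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain retrieval-augmented generation in two sentences."},
]

# Generation parameters such as temperature, top_p and max_new_tokens change
# the character of the output -- the prompt engineering notebook explores this.
outputs = generator(
    messages,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][-1]["content"])
```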
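
The retrieve-then-generate pattern behind the RAG application can be sketched in a few lines: embed a small corpus, pick the closest document to the query, and pass it to Llama as context. This is only an illustration under assumed choices (the sentence-transformers all-MiniLM-L6-v2 embedder and a hand-rolled dot-product search), not the recipe's actual pipeline.

```python
# Minimal RAG sketch: embed documents, retrieve the best match, then generate
# an answer grounded in that context. Illustrative only.
import numpy as np
import torch
from sentence_transformers import SentenceTransformer
from transformers import pipeline

documents = [
    "Llama 3 was released by Meta in 8B and 70B parameter sizes.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
    "LoRA fine-tuning trains a small set of low-rank adapter weights.",
]

# 1. Embed the corpus and the query; with normalized vectors, cosine
#    similarity reduces to a dot product.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(documents, normalize_embeddings=True)
query = "What parameter sizes does Llama 3 come in?"
query_vec = embedder.encode([query], normalize_embeddings=True)[0]
context = documents[int(np.argmax(doc_vecs @ query_vec))]

# 2. Ask Llama to answer using only the retrieved context.
generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {"role": "system", "content": "Answer using only the provided context."},
    {"role": "user", "content": f"Context: {context}\n\nQuestion: {query}"},
]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])
```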
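
Finally, a sketch of the idea behind parameter-efficient (LoRA) fine-tuning, which the finetuning recipes support. This uses the generic Hugging Face peft API and assumed hyperparameters; it is not the llama-recipes finetuning.py entry point, which additionally handles datasets, quantization and multi-GPU (FSDP) training for you.

```python
# LoRA sketch with peft: only small low-rank adapters on the attention
# projections are trained while the base Llama weights stay frozen.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=8,                                  # adapter rank (assumed value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here the wrapped model plugs into a standard training loop or
# transformers.Trainer; the recipes in the finetuning folder do this for you.
```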