# Llama-Recipes Quickstart

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks covering different techniques for working with Meta Llama.

* The [Running_Llama3_Anywhere](./Running_Llama3_Anywhere/) notebooks demonstrate how to run Llama inference on Linux, Mac, and Windows using the appropriate tooling for each platform.
* The [Prompt_Engineering_with_Llama_3](./Prompt_Engineering_with_Llama_3.ipynb) notebook showcases various ways to elicit the outputs you want from Llama. Take it for a spin to get a feel for how Llama responds to different inputs and generation parameters (a minimal local-inference sketch follows this list).
* The [inference](./inference/) folder contains scripts to deploy Llama for inference on servers and on mobile devices. See also 3p_integrations/vllm and 3p_integrations/tgi for hosting Llama on open-source model servers.
* The [RAG](./RAG/) folder contains a simple Retrieval-Augmented Generation application built with Llama (a short sketch of the pattern appears below).
* The [finetuning](./finetuning/) folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-recipes finetuning code found in finetuning.py; the table of supported features now lives in the main repository README. Example launch commands are sketched at the end of this section.
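
Before opening the notebooks, here is a minimal sketch of running Llama locally with Hugging Face `transformers` and experimenting with generation parameters. It is an illustration rather than the notebooks' exact code, and it assumes you have been granted access to the gated `meta-llama/Meta-Llama-3.1-8B-Instruct` checkpoint and have `transformers`, `accelerate`, and a suitable GPU available.

```python
# Minimal illustration (not the notebooks' exact code) of local Llama inference.
# Assumes access to the gated meta-llama/Meta-Llama-3.1-8B-Instruct checkpoint
# and that transformers + accelerate are installed.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate; places weights on available devices
)

messages = [
    {"role": "system", "content": "You are a concise, helpful assistant."},
    {"role": "user", "content": "Explain retrieval-augmented generation in two sentences."},
]

# temperature and top_p are among the knobs explored in the prompt engineering notebook.
outputs = generator(messages, max_new_tokens=128, do_sample=True, temperature=0.6, top_p=0.9)
print(outputs[0]["generated_text"][-1]["content"])
```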
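
The RAG application has its own stack; purely to illustrate the pattern it implements (embed documents, retrieve the most relevant one, and prepend it to the prompt), here is a tiny self-contained sketch. The `sentence-transformers` embedding model and the in-memory "index" are illustrative choices, not necessarily what the application uses.

```python
# Tiny illustration of the RAG pattern: embed, retrieve, augment the prompt.
# The embedding model and the in-memory corpus here are illustrative choices.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Llama 3.1 is available in 8B, 70B, and 405B parameter sizes.",
    "llama-recipes provides finetuning scripts for single- and multi-GPU setups.",
    "The quickstart folder contains introductory notebooks for Meta Llama.",
]
doc_embeddings = embedder.encode(documents, normalize_embeddings=True)

query = "What model sizes does Llama 3.1 come in?"
query_embedding = embedder.encode([query], normalize_embeddings=True)[0]

# Cosine similarity (dot product of normalized vectors) ranks the documents.
top_doc = documents[int(np.argmax(doc_embeddings @ query_embedding))]

# The retrieved context is injected into the prompt sent to Llama
# (e.g. via the generator from the previous sketch).
prompt = f"Answer using only this context:\n{top_doc}\n\nQuestion: {query}"
print(prompt)
```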
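
Finally, a hedged sketch of launching the finetuning script. The flag names below follow the llama-recipes documentation but can differ between versions, and the model name and output directory are placeholders, so treat the finetuning folder's README as the source of truth.

```bash
# Single GPU: parameter-efficient finetuning (PEFT/LoRA).
python finetuning/finetuning.py \
    --use_peft --peft_method lora \
    --model_name meta-llama/Meta-Llama-3.1-8B-Instruct \
    --output_dir ./peft-output

# Multiple GPUs: enable FSDP and launch with torchrun.
torchrun --nnodes 1 --nproc_per_node 4 finetuning/finetuning.py \
    --enable_fsdp --use_peft --peft_method lora \
    --model_name meta-llama/Meta-Llama-3.1-8B-Instruct \
    --output_dir ./peft-output
```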