Llama-cookbook Getting Started

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks covering different techniques for working with Meta Llama.

  • The build_with_Llama_3_2 notebook provides a comprehensive walkthrough of the new capabilities of Llama 3.2 models, including multimodal use cases, function/tool calling, Llama Stack, and Llama on edge devices.
  • The Prompt_Engineering_with_Llama notebook demonstrates various ways to elicit appropriate outputs from Llama. Take it for a spin to get a feel for how Llama responds to different inputs and generation parameters.
  • The inference folder contains scripts to deploy Llama for inference on servers and mobile devices. See also 3p_integrations/vllm and 3p_integrations/tgi for hosting Llama on open-source model servers.
  • The RAG folder contains a simple Retrieval-Augmented Generation application using Llama.
  • The finetuning folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-cookbook finetuning code found in finetuning.py, which supports these features: