Llama-cookbook Getting Started

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory notebooks covering different techniques for working with Meta Llama.

  • The Build_with_Llama 3.2 notebook provides a comprehensive walkthrough of the new capabilities of Llama 3.2 models, including multimodal use cases, function/tool calling, Llama Stack, and Llama on edge devices.
  • The Prompt_Engineering_with_Llama notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
  • The inference folder contains scripts to deploy Llama for inference on servers and mobile devices. See also 3p_integrations/vllm and 3p_integrations/tgi for hosting Llama on open-source model servers.
  • The RAG folder contains a simple Retrieval-Augmented Generation application using Llama.
  • The finetuning folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-cookbook finetuning code found in finetuning.py, which supports these features: