Llama-Recipes Quickstart

If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.
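Several of these notebooks build on the Llama 3 instruct (chat) prompt format. As a quick orientation, here is a minimal sketch of assembling a single-turn prompt by hand, using the special tokens documented in the Llama 3 model card. In practice the notebooks typically let the tokenizer's chat template do this for you; the model interaction itself is omitted here.

```python
def format_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 chat prompt string.

    Special tokens follow the Llama 3 prompt format: each turn is wrapped
    in header tokens naming the role, and ends with <|eot_id|>. The prompt
    ends with an open assistant header so the model generates the reply.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt(
    "You are a helpful assistant.",
    "What is Llama 3?",
)
print(prompt)
```

When using Hugging Face `transformers`, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` over hand-built strings, since it stays in sync with the model's own template.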

  • The build_with_Llama_3_2 notebook offers a comprehensive walkthrough of the new capabilities of Llama 3.2 models, including multimodal use cases, function/tool calling, Llama Stack, and running Llama on edge devices.
  • The Running_Llama_Anywhere notebooks demonstrate how to run Llama inference on Linux, macOS, and Windows using the appropriate tooling for each platform.
  • The Prompt_Engineering_with_Llama notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
  • The inference folder contains scripts to deploy Llama for inference on server and mobile. See also 3p_integrations/vllm and 3p_integrations/tgi for hosting Llama on open-source model servers.
  • The RAG folder contains a simple Retrieval-Augmented Generation application using Llama.
  • The finetuning folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-recipes finetuning code found in finetuning.py which supports these features: