
Examples

This folder contains finetuning and inference examples for Llama 2. For the full documentation on these examples, please refer to docs/inference.md.

Finetuning

Please refer to the main README.md for information on how to use the finetuning.py script. After installing the llama-recipes package through pip, you can also invoke finetuning in two ways:

python -m llama_recipes.finetuning <parameters>

python examples/finetuning.py <parameters>

Please see README.md for details.
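For illustration, a typical parameter-efficient finetuning run might look like the command below. The model name and output directory are placeholders, and the flag names are taken from the main README.md, which remains the authoritative list of supported options:

python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --model_name meta-llama/Llama-2-7b-hf --output_dir ./peft_checkpoint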

Inference

So far, we have provided the following inference examples:

  1. The inference.py script provides support for Hugging Face accelerate, PEFT, and FSDP fine-tuned models. It also demonstrates safety features to protect the user from toxic or harmful content (an example invocation is shown after this list).

  2. The vllm/inference.py script takes advantage of vLLM's paged attention for low-latency generation.

  3. The hf_text_generation_inference folder contains information on Hugging Face Text Generation Inference (TGI).

  4. A chat completion example highlighting the handling of chat dialogs.

  5. The code_llama folder provides examples for code completion and code infilling.
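As a sketch, the main inference example can be run on a fine-tuned model with a prompt file such as the included samsum_prompt.txt. The argument names below reflect the scripts at the time of writing and the model path is a placeholder; check each script for the full set of options:

python examples/inference.py --model_name <model_dir_or_hf_id> --prompt_file examples/samsum_prompt.txt

python examples/vllm/inference.py --model_name <model_dir_or_hf_id>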

For more in-depth information on inference, including inference safety checks and examples, see the inference documentation in docs/inference.md.

Note: The sensitive topics safety checker utilizes AuditNLG, which is an optional dependency. Please refer to the installation section of the main README.md for details.

Note: The vLLM example requires additional dependencies. Please refer to the installation section of the main README.md for details.
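For reference, these optional dependencies can typically be pulled in through pip extras when installing the package. The extras names shown here (vllm, auditnlg) are assumptions and should be verified against the installation section of the main README.md:

pip install llama-recipes[vllm]

pip install llama-recipes[auditnlg]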