This folder contains examples organized by topic:

| Subfolder | Description |
|---|---|
| [quickstart](./quickstart) | The "Hello World" of using Llama 3; start here if you are new to Llama 3 |
| [multilingual](./multilingual) | Scripts to add a new language to Llama |
| [finetuning](./quickstart/finetuning) | Scripts to finetune Llama 3 on single-GPU and multi-GPU setups |
| [inference](./quickstart/inference) | Scripts to deploy Llama 3 for inference [locally](./quickstart/inference/local_inference/), on mobile ([Android](./quickstart/inference/mobile_inference/android_inference/)), and using [model servers](./quickstart/inference/mobile_inference/) |
| [use_cases](./use_cases) | Scripts showing common applications of Llama 3 |
| [responsible_ai](./responsible_ai) | Scripts to use PurpleLlama for safeguarding model outputs |
| [llama_api_providers](./llama_api_providers) | Scripts to run inference on Llama via hosted endpoints |
| [benchmarks](./benchmarks) | Scripts to benchmark inference of Llama 3 models on various backends |
| [code_llama](./code_llama) | Scripts to run inference with the Code Llama models |