http://llama.meta.com/
http://llama.meta.com/use-policy/
http://llama.meta.com/responsible-use-guide/
http://llama.meta.com/llama2/
http://llama.meta.com/llama2/license/
http://llama.meta.com/llama2/use-policy/
http://llama.meta.com/license/
http://llama.meta.com/code-llama/
http://llama.meta.com/llama3/
http://llama.meta.com/llama3/license/
http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3
http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-guard-2
http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-code-llama-70b
http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-guard-1
http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-code-llama
http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-2
http://llama.meta.com/docs/getting_the_models
http://llama.meta.com/docs/getting-the-models/hugging-face
http://llama.meta.com/docs/getting-the-models/kaggle
http://llama.meta.com/docs/llama-everywhere
http://llama.meta.com/docs/llama-everywhere/running-meta-llama-on-linux/
http://llama.meta.com/docs/llama-everywhere/running-meta-llama-on-windows/
http://llama.meta.com/docs/llama-everywhere/running-meta-llama-on-mac/
http://llama.meta.com/docs/llama-everywhere/running-meta-llama-in-the-cloud/
http://llama.meta.com/docs/how-to-guides/fine-tuning
http://llama.meta.com/docs/how-to-guides/quantization
http://llama.meta.com/docs/how-to-guides/prompting
http://llama.meta.com/docs/how-to-guides/validation
http://llama.meta.com/docs/integration-guides/meta-code-llama
http://llama.meta.com/docs/integration-guides/langchain
http://llama.meta.com/docs/integration-guides/llamaindex
http://raw.githubusercontent.com/meta-llama/llama-recipes/main/README.md
http://raw.githubusercontent.com/meta-llama/llama/main/MODEL_CARD.md
http://raw.githubusercontent.com/meta-llama/llama/main/README.md
http://raw.githubusercontent.com/meta-llama/llama3/main/MODEL_CARD.md
http://raw.githubusercontent.com/meta-llama/llama3/main/README.md
http://raw.githubusercontent.com/meta-llama/codellama/main/MODEL_CARD.md
http://raw.githubusercontent.com/meta-llama/codellama/main/README.md
http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/README.md
http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard2/MODEL_CARD.md
http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard2/README.md
http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard/MODEL_CARD.md
https://hamel.dev/notes/llm/inference/03_inference.html
https://www.anyscale.com/blog/continuous-batching-llm-inference
https://github.com/huggingface/peft
https://github.com/facebookresearch/llama-recipes/blob/main/docs/LLM_finetuning.md
https://github.com/meta-llama/llama-recipes/blob/main/recipes/finetuning/datasets/README.md
https://www.databricks.com/blog/efficient-fine-tuning-lora-guide-llms
https://www.wandb.courses/courses/training-fine-tuning-LLMs
https://www.snowflake.com/blog/meta-code-llama-testing/
https://www.phind.com/blog/code-llama-beats-gpt4
https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper
https://ragntune.com/blog/gpt3.5-vs-llama2-finetuning
https://deci.ai/blog/fine-tune-llama-2-with-lora-for-question-answering/
https://replicate.com/blog/fine-tune-translation-model-axolotl
https://huyenchip.com/2023/04/11/llm-engineering.html