<a href="https://llama.developer.meta.com/join_waitlist?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=3p_integrations"><img src="https://img.shields.io/badge/Llama_API-Join_Waitlist-brightgreen?logo=meta" /></a>
<a href="https://llama.developer.meta.com/docs?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=3p_integrations"><img src="https://img.shields.io/badge/Llama_API-Documentation-4BA9FE?logo=meta" /></a>
<a href="https://github.com/meta-llama/llama-models/blob/main/models/?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=3p_integrations"><img alt="Llama Model cards" src="https://img.shields.io/badge/Llama_OSS-Model_cards-green?logo=meta" /></a>
<a href="https://www.llama.com/docs/overview/?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=3p_integrations"><img alt="Llama Documentation" src="https://img.shields.io/badge/Llama_OSS-Documentation-4BA9FE?logo=meta" /></a>
<a href="https://huggingface.co/meta-llama"><img alt="Hugging Face meta-llama" src="https://img.shields.io/badge/Hugging_Face-meta--llama-yellow?logo=huggingface" /></a>
<a href="https://github.com/meta-llama/synthetic-data-kit"><img alt="Llama Tools Syntethic Data Kit" src="https://img.shields.io/badge/Llama_Tools-synthetic--data--kit-orange?logo=meta" /></a>
<a href="https://github.com/meta-llama/llama-prompt-ops"><img alt="Llama Tools Syntethic Data Kit" src="https://img.shields.io/badge/Llama_Tools-llama--prompt--ops-orange?logo=meta" /></a>
This folder contains example scripts and tutorials showcasing the integration of Meta Llama models with popular platforms, frameworks, and tools in the LLM ecosystem. These integrations demonstrate how to leverage Llama's capabilities across different environments and use cases.
Each folder is maintained by the respective platform owner and contains examples, tutorials, and documentation specific to using Llama with that platform.
> [!NOTE]
> If you'd like to add your platform here, please open a new issue with details of your examples.
### [aws](./aws/)
Examples for using Llama 3 on Amazon Bedrock, including getting started guides, prompt engineering, and React integration.
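For orientation before opening the folder, here is a minimal sketch of calling a Llama model through Bedrock's Converse API. The model ID and region are assumptions; substitute whatever is enabled in your AWS account.

```python
import boto3

# Assumed model ID and region; replace with the Llama model enabled in your Bedrock account.
MODEL_ID = "meta.llama3-8b-instruct-v1:0"
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": "Explain prompt engineering in one paragraph."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.6},
)
print(response["output"]["message"]["content"][0]["text"])
```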
### [azure](./azure/)
Recipes for running Llama model inference on Azure's serverless API offerings (MaaS).
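A minimal sketch of querying a serverless Llama deployment with the `azure-ai-inference` client; the endpoint URL and API key below are placeholders for your own MaaS deployment.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key; use the values from your own serverless (MaaS) deployment.
client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.inference.ai.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What does 'serverless inference' mean?"),
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```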
### [crusoe](./crusoe/)
Recipes for deploying Llama workflows on Crusoe's high-performance, sustainable cloud, including serving Llama 3.1 in FP8 with vLLM.
### [e2b-ai-analyst](./e2b-ai-analyst/)
AI-powered code and data analysis tool using Meta Llama and the E2B SDK, supporting data analysis, CSV uploads, and interactive charts.
### [groq](./groq/)
Examples and templates for using Llama models with Groq's high-performance inference API.
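As a quick starting point, a minimal chat completion with Groq's Python SDK; the model name is an assumption, so check Groq's current catalog.

```python
from groq import Groq  # expects GROQ_API_KEY in the environment

client = Groq()
chat = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # assumed model name; check Groq's model list
    messages=[{"role": "user", "content": "Give me three prompt-engineering tips."}],
    temperature=0.5,
)
print(chat.choices[0].message.content)
```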
### [lamini](./lamini/)
Integration examples with Lamini's platform, including text2sql with memory tuning.
### [langchain](./langchain/)
Cookbooks for building agents with Llama 3 and LangChain, including tool-calling agents and RAG agents using LangGraph.
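The cookbooks go much further, but the sketch below shows the tool-calling primitive they build on: binding a tool to a Llama-backed chat model. `langchain_groq.ChatGroq` and the model name are assumptions; any tool-capable Llama chat model works.

```python
from langchain_core.tools import tool
from langchain_groq import ChatGroq  # assumption: any tool-capable, Llama-backed chat model works

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

llm = ChatGroq(model="llama-3.1-8b-instant")  # assumed model name
msg = llm.bind_tools([multiply]).invoke("What is 12 times 7? Use the tool.")
print(msg.tool_calls)  # the agent loops in the recipes execute these calls
```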
### [llamaindex](./llamaindex/)
Examples of using Llama with LlamaIndex for advanced RAG applications and agentic RAG.
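A bare-bones RAG sketch with LlamaIndex. It assumes a local `data/` directory of documents and that `Settings` has already been pointed at a Llama LLM and an embedding model, as the notebooks in the folder demonstrate.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Assumes ./data holds your documents and Settings.llm / Settings.embed_model
# have been configured with a Llama model and an embedding model of your choice.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What are the key points in these documents?"))
```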
### [modal](./modal/)
Integration with Modal's cloud platform for running Llama models, including human evaluation examples.
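A rough sketch of running Llama inference as a Modal GPU function, here wrapping vLLM; the GPU type, image, and checkpoint name are assumptions rather than the recipe's exact configuration.

```python
import modal

app = modal.App("llama-inference-sketch")
image = modal.Image.debian_slim().pip_install("vllm")

@app.function(gpu="A100", image=image, timeout=600)
def generate(prompt: str) -> str:
    # Loading the model per call keeps the sketch short; real recipes cache it across calls.
    from vllm import LLM, SamplingParams
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # assumed (gated) checkpoint
    out = llm.generate([prompt], SamplingParams(max_tokens=128))
    return out[0].outputs[0].text

@app.local_entrypoint()
def main():
    print(generate.remote("Write a haiku about GPUs."))
```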
### [tgi](./tgi/)
Guide for serving fine-tuned Llama models with Hugging Face's text-generation-inference (TGI) server, including weight merging for LoRA models.
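Once a TGI server is running (the guide covers launching it and merging LoRA weights first), querying it from Python can be as simple as the sketch below; the local port is an assumption.

```python
from huggingface_hub import InferenceClient

# Assumes a TGI server is already serving your (merged) Llama model on localhost:8080.
client = InferenceClient("http://localhost:8080")
print(client.text_generation("Explain LoRA weight merging in two sentences.", max_new_tokens=128))
```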
### [togetherai](./togetherai/)
Comprehensive demos for building LLM applications using Llama on Together AI, including multimodal RAG, contextual RAG, PDF-to-podcast conversion, knowledge graphs, and structured text extraction.
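As a starting point for those demos, a minimal chat completion with the `together` Python SDK; the model name is an assumption, so consult Together's model catalog.

```python
from together import Together  # expects TOGETHER_API_KEY in the environment

client = Together()
response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",  # assumed model name
    messages=[{"role": "user", "content": "Outline a contextual RAG pipeline in three bullet points."}],
)
print(response.choices[0].message.content)
```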
### [vllm](./vllm/)
Examples for high-throughput and memory-efficient inference using vLLM with Llama models.
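A minimal offline-inference sketch with vLLM's Python API; the checkpoint name is an assumption, and any Llama model you have access to will do.

```python
from vllm import LLM, SamplingParams

# Assumed (gated) checkpoint; swap in any Llama model you can download.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Why is paged KV-cache management memory-efficient?"], params)
print(outputs[0].outputs[0].text)
```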
### [using_externally_hosted_llms.ipynb](./using_externally_hosted_llms.ipynb)
Guide for working with Llama models hosted on external platforms.
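Many hosting providers expose an OpenAI-compatible endpoint, so a sketch like the one below (base URL, key, and model name are placeholders) covers the common case; the notebook walks through provider-specific details.

```python
from openai import OpenAI

# Placeholder endpoint, key, and model name; fill in the values from your provider.
client = OpenAI(base_url="https://<provider-endpoint>/v1", api_key="<provider-api-key>")
resp = client.chat.completions.create(
    model="<provider-llama-model>",
    messages=[{"role": "user", "content": "Hello, Llama!"}],
)
print(resp.choices[0].message.content)
```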
### [llama_on_prem.md](./llama_on_prem.md)
Information about on-premises deployment of Llama models.