
Adding badges to top level readmes in the Llama Recipes repo.

Beto de Paola · 2 months ago
parent
commit
30bcf9be78
3 files changed, 108 additions and 16 deletions
  1. 3p-integrations/README.md (+65, -3)
  2. end-to-end-use-cases/README.md (+26, -11)
  3. getting-started/README.md (+17, -2)

3p-integrations/README.md (+65, -3)

@@ -1,8 +1,70 @@
-## Llama-Recipes 3P Integrations
+<h1 align="center"> Llama 3P Integrations </h1>
+<p align="center">
+	<a href="https://llama.developer.meta.com/join_waitlist?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=3p_integrations"><img src="https://img.shields.io/badge/Llama_API-Join_Waitlist-brightgreen?logo=meta" /></a>
+	<a href="https://llama.developer.meta.com/docs?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=3p_integrations"><img src="https://img.shields.io/badge/Llama_API-Documentation-4BA9FE?logo=meta" /></a>
 
-This folder contains example scripts showcasing the use of Meta Llama with popular platforms and tooling in the LLM ecosystem. 
+</p>
+<p align="center">
+	<a href="https://github.com/meta-llama/llama-models/blob/main/models/?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=3p_integrations"><img alt="Llama Model cards" src="https://img.shields.io/badge/Llama_OSS-Model_cards-green?logo=meta" /></a>
+	<a href="https://www.llama.com/docs/overview/?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=3p_integrations"><img alt="Llama Documentation" src="https://img.shields.io/badge/Llama_OSS-Documentation-4BA9FE?logo=meta" /></a>
+	<a href="https://huggingface.co/meta-llama"><img alt="Hugging Face meta-llama" src="https://img.shields.io/badge/Hugging_Face-meta--llama-yellow?logo=huggingface" /></a>
 
-Each folder is maintained by the platform-owner. 
+</p>
+<p align="center">
+	<a href="https://github.com/meta-llama/synthetic-data-kit"><img alt="Llama Tools Synthetic Data Kit" src="https://img.shields.io/badge/Llama_Tools-synthetic--data--kit-orange?logo=meta" /></a>
+	<a href="https://github.com/meta-llama/llama-prompt-ops"><img alt="Llama Tools llama-prompt-ops" src="https://img.shields.io/badge/Llama_Tools-llama--prompt--ops-orange?logo=meta" /></a>
+</p>
+
+
+This folder contains example scripts and tutorials showcasing the integration of Meta Llama models with popular platforms, frameworks, and tools in the LLM ecosystem. These integrations demonstrate how to leverage Llama's capabilities across different environments and use cases.
+
+Each folder is maintained by the respective platform owner and contains specific examples, tutorials, and documentation for using Llama with that platform.
 
 > [!NOTE]
 > If you'd like to add your platform here, please open a new issue with details of your examples.
+
+## Available Integrations
+
+### [AWS](./aws)
+Examples for using Llama 3 on Amazon Bedrock, including getting started guides, prompt engineering, and React integration.
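
As a quick orientation (not part of the folder's own examples), a minimal sketch of calling a Llama model on Amazon Bedrock with `boto3`'s Converse API might look like the following; the region and model ID are assumptions and may differ from what the AWS examples actually use.

```python
# Minimal sketch: invoke a Llama model on Amazon Bedrock via the Converse API.
# Assumes AWS credentials are configured and the account has access to the
# model ID below (an assumption; check the Bedrock console for the exact ID).
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = client.converse(
    modelId="meta.llama3-8b-instruct-v1:0",  # assumed model ID
    messages=[{"role": "user", "content": [{"text": "Summarize Amazon Bedrock in one sentence."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.6},
)
print(response["output"]["message"]["content"][0]["text"])
```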
+
+### [Azure](./azure)
+Recipes for running Llama model inference on Azure's serverless API offerings (MaaS).
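
For a rough idea of what a serverless (MaaS) call can look like, the sketch below uses the `azure-ai-inference` package; the endpoint URL and key are placeholders, and the exact setup in the recipes may differ.

```python
# Minimal sketch: chat completion against an Azure serverless (MaaS) Llama endpoint.
# The endpoint URL and API key below are placeholders for your own deployment.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-llama-deployment>.inference.ai.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                    # placeholder
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What is Azure Models-as-a-Service?"),
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```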
+
+### [Crusoe](./crusoe)
+Recipes for deploying Llama workflows on Crusoe's high-performance, sustainable cloud, including serving Llama 3.1 in FP8 with vLLM.
+
+### [E2B AI Analyst](./e2b-ai-analyst)
+AI-powered code and data analysis tool using Meta Llama and the E2B SDK, supporting data analysis, CSV uploads, and interactive charts.
+
+### [Groq](./groq)
+Examples and templates for using Llama models with Groq's high-performance inference API.
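
By way of illustration, a minimal Groq chat-completion call looks roughly like this; the model name is an assumption, and the folder's own templates are the authoritative reference.

```python
# Minimal sketch: chat completion through Groq's API with a Llama model.
# Assumes GROQ_API_KEY is set in the environment; the model name is an assumption.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

completion = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # assumed model name on Groq
    messages=[{"role": "user", "content": "Explain tool calling in one paragraph."}],
    temperature=0.6,
)
print(completion.choices[0].message.content)
```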
+
+### [Lamini](./lamini)
+Integration examples with Lamini's platform, including text2sql with memory tuning.
+
+### [LangChain](./langchain)
+Cookbooks for building agents with Llama 3 and LangChain, including tool-calling agents and RAG agents using LangGraph.
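
To give a feel for the tool-calling pattern those cookbooks build on, here is a minimal sketch using `langchain-ollama` with a locally served Llama model; the model name and local Ollama setup are assumptions, and the notebooks themselves may use different providers.

```python
# Minimal sketch: binding a tool to a Llama chat model with LangChain.
# Assumes Ollama is running locally with a Llama model pulled (an assumption).
from langchain_core.tools import tool
from langchain_ollama import ChatOllama


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = ChatOllama(model="llama3.1")           # assumed local model name
llm_with_tools = llm.bind_tools([multiply])

ai_msg = llm_with_tools.invoke("What is 6 times 7?")
print(ai_msg.tool_calls)  # the model should request the `multiply` tool here
```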
+
+### [LlamaIndex](./llamaindex)
+Examples of using Llama with LlamaIndex for advanced RAG applications and agentic RAG.
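
For context, a bare-bones LlamaIndex RAG pipeline looks roughly like the sketch below; it assumes the `llama-index-llms-ollama` and `llama-index-embeddings-huggingface` integrations are installed and a local Llama model is available, which may differ from the setup used in the examples.

```python
# Minimal sketch: a small RAG pipeline with LlamaIndex over a local folder of documents.
# The local model name, embedding model, and ./data folder are assumptions.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(model="llama3.1")                                   # assumed local model
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")  # assumed embedder

documents = SimpleDirectoryReader("./data").load_data()  # any folder of text/PDF files
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What are the key points in these documents?"))
```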
+
+### [Modal](./modal)
+Integration with Modal's cloud platform for running Llama models, including human evaluation examples.
+
+### [TGI](./tgi)
+Guide for serving fine-tuned Llama models with HuggingFace's text-generation-inference server, including weight merging for LoRA models.
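
As a rough illustration of querying an already running TGI server (the guide in that folder covers launching it and merging LoRA weights), the REST call looks approximately like this; the host, port, and generation parameters are assumptions.

```python
# Minimal sketch: query a text-generation-inference (TGI) server over its REST API.
# Assumes a TGI instance is already serving a Llama model at localhost:8080
# (host, port, and generation parameters are assumptions).
import requests

payload = {
    "inputs": "Write a haiku about open-source language models.",
    "parameters": {"max_new_tokens": 64, "temperature": 0.7},
}
resp = requests.post("http://localhost:8080/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["generated_text"])
```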
+
+### [TogetherAI](./togetherai)
+Comprehensive demos for building LLM applications using Llama on Together AI, including multimodal RAG, contextual RAG, PDF-to-podcast conversion, knowledge graphs, and structured text extraction.
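
For a sense of the basic building block behind those demos, a chat completion against Together AI's API looks roughly like this; the model name is an assumption.

```python
# Minimal sketch: chat completion with a Llama model hosted on Together AI.
# Assumes TOGETHER_API_KEY is set; the model name is an assumption.
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",  # assumed model name
    messages=[{"role": "user", "content": "Give one use case for multimodal RAG."}],
)
print(response.choices[0].message.content)
```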
+
+### [vLLM](./vllm)
+Examples for high-throughput and memory-efficient inference using vLLM with Llama models.
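
For orientation, offline batched generation with vLLM looks roughly like the sketch below; the model name is an assumption, and the folder's examples also cover serving through vLLM's OpenAI-compatible server.

```python
# Minimal sketch: offline batched generation with vLLM.
# Assumes a GPU with enough memory and access to the (assumed) model weights below.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # assumed model name
sampling = SamplingParams(temperature=0.7, max_tokens=128)

prompts = ["What makes paged attention memory-efficient?"]
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```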
+
+## Additional Resources
+
+### [Using Externally Hosted LLMs](./using_externally_hosted_llms.ipynb)
+Guide for working with Llama models hosted on external platforms.
+
+### [Llama On-Prem](./llama_on_prem.md)
+Information about on-premises deployment of Llama models.

end-to-end-use-cases/README.md (+26, -11)

@@ -1,44 +1,59 @@
-# End to End Use Applications using various Llama Models
+<h1 align="center"> End to End Use Applications using various Llama Models </h1>
+<p align="center">
+	<a href="https://llama.developer.meta.com/join_waitlist?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=end_to_end"><img src="https://img.shields.io/badge/Llama_API-Join_Waitlist-brightgreen?logo=meta" /></a>
+	<a href="https://llama.developer.meta.com/docs?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=end_to_end"><img src="https://img.shields.io/badge/Llama_API-Documentation-4BA9FE?logo=meta" /></a>
 
-## [Agentic Tutorial](./agents/): 
+</p>
+<p align="center">
+	<a href="https://github.com/meta-llama/llama-models/blob/main/models/?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=end_to_end"><img alt="Llama Model cards" src="https://img.shields.io/badge/Llama_OSS-Model_cards-green?logo=meta" /></a>
+	<a href="https://www.llama.com/docs/overview/?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=end_to_end"><img alt="Llama Documentation" src="https://img.shields.io/badge/Llama_OSS-Documentation-4BA9FE?logo=meta" /></a>
+	<a href="https://huggingface.co/meta-llama"><img alt="Hugging Face meta-llama" src="https://img.shields.io/badge/Hugging_Face-meta--llama-yellow?logo=huggingface" /></a>
+
+</p>
+<p align="center">
+	<a href="https://github.com/meta-llama/synthetic-data-kit"><img alt="Llama Tools Synthetic Data Kit" src="https://img.shields.io/badge/Llama_Tools-synthetic--data--kit-orange?logo=meta" /></a>
+	<a href="https://github.com/meta-llama/llama-prompt-ops"><img alt="Llama Tools llama-prompt-ops" src="https://img.shields.io/badge/Llama_Tools-llama--prompt--ops-orange?logo=meta" /></a>
+</p>
+
+## [Agentic Tutorial](./agents/):
 
 ### 101 and 201 tutorials on performing Tool Calling and building an Agentic Workflow using Llama Models
 The 101 notebooks show how to apply Llama models and enable tool-calling functionality, while the 201 notebook walks you through an end-to-end workflow of building an agent that can search two papers, fetch their details, and find their differences.
 
-## [Benchmarks](./benchmarks/): 
+## [Benchmarks](./benchmarks/):
 
-### A folder contains benchmark scripts 
+### A folder containing benchmark scripts
 The scripts provide a throughput analysis and an introduction to `lm-evaluation-harness`, a tool for evaluating Llama models, including quantized models, with a focus on quality.
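
As a quick, illustrative sketch of the evaluation side, the harness can be driven from Python roughly as follows; the model ID and task are assumptions, and the benchmark scripts in this folder handle throughput analysis separately.

```python
# Minimal sketch: evaluate a (possibly quantized) Llama checkpoint with lm-evaluation-harness.
# The model ID and task choice are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                                 # Hugging Face backend
    model_args="pretrained=meta-llama/Llama-3.1-8B-Instruct",   # assumed model ID
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"]["hellaswag"])
```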
 
-## [Browser Usage](./browser_use/): 
+## [Browser Usage](./browser_use/):
 
 ### Demo of how to apply Llama models and use them for browsing the internet and completing tasks
 
-## [Automatic Triaging of Github Repositories](./github_triage/walkthrough.ipynb): 
+## [Automatic Triaging of Github Repositories](./github_triage/walkthrough.ipynb):
 
 ### Use Llama to automatically triage issues in an OSS repository and generate insights to improve community experience
 This tool utilizes an off-the-shelf Llama model to analyze, generate insights, and create a report for better understanding of the state of a repository. It serves as a reference implementation for using Llama to develop custom reporting and data analytics applications.
 
 
-## [NBA2023-24](./coding/text2sql/quickstart.ipynb): 
+## [NBA2023-24](./coding/text2sql/quickstart.ipynb):
 
 ### Ask Llama 3 about Structured Data
 This demo app shows how to use LangChain and Llama 3 to let users ask questions about **structured** data stored in a SQL DB. As the 2023-24 NBA season is entering the playoffs, we use the NBA roster info saved in a SQLite DB to show you how to ask Llama 3 questions about your favorite teams or players.
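
To sketch the core pattern the notebook walks through, a LangChain text-to-SQL chain over a SQLite file can look roughly like this; the database filename and the locally served Llama model are assumptions.

```python
# Minimal sketch: generate a SQL query over a SQLite DB with LangChain and a Llama model.
# The database filename and the locally served model are assumptions; the notebook
# covers the full question-answering flow.
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_ollama import ChatOllama

db = SQLDatabase.from_uri("sqlite:///nba_roster.db")   # assumed filename
llm = ChatOllama(model="llama3.1", temperature=0)      # assumed local model

chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "Which team does Stephen Curry play for?"})
print(sql)  # the generated SQL can then be executed with db.run(sql)
```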
 
-## [NotebookLlama](./NotebookLlama/): 
+## [NotebookLlama](./NotebookLlama/):
 
 ### PDF to Podcast using Llama Models
 A workflow showcasing how to use multiple Llama models to go from any PDF to a podcast, with open models generating the multi-speaker audio.
 
 
-## [WhatsApp Chatbot](./customerservice_chatbots/whatsapp_chatbot/whatsapp_llama3.md): 
+## [WhatsApp Chatbot](./customerservice_chatbots/whatsapp_chatbot/whatsapp_llama3.md):
 ### Building a Llama 3 Enabled WhatsApp Chatbot
 This step-by-step tutorial shows how to use the [WhatsApp Business API](https://developers.facebook.com/docs/whatsapp/cloud-api/overview) to build a Llama 3 enabled WhatsApp chatbot.
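
For a rough idea of the sending side of such a bot, a text reply via the WhatsApp Business Cloud API can be sent as sketched below; the access token, phone number ID, recipient, and API version are placeholders, and the tutorial covers the webhook setup and wiring Llama 3 into the reply logic.

```python
# Minimal sketch: send a text reply through the WhatsApp Business Cloud API.
# The access token, phone number ID, recipient, and API version are placeholders.
import requests

PHONE_NUMBER_ID = "<your-phone-number-id>"   # placeholder
ACCESS_TOKEN = "<your-access-token>"         # placeholder

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PHONE_NUMBER_ID}/messages",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "messaging_product": "whatsapp",
        "to": "<recipient-phone-number>",    # placeholder
        "type": "text",
        "text": {"body": "Hello from a Llama 3 powered bot!"},
    },
    timeout=30,
)
print(resp.status_code, resp.json())
```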
 
-## [Messenger Chatbot](./customerservice_chatbots/messenger_chatbot/messenger_llama3.md): 
+## [Messenger Chatbot](./customerservice_chatbots/messenger_chatbot/messenger_llama3.md):
 
 ### Building a Llama 3 Enabled Messenger Chatbot
 This step-by-step tutorial shows how to use the [Messenger Platform](https://developers.facebook.com/docs/messenger-platform/overview) to build a Llama 3 enabled Messenger chatbot.
 
 ### RAG Chatbot Example (running [locally](./customerservice_chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb))
-A complete example of how to build a Llama 3 chatbot hosted on your browser that can answer questions based on your own data using retrieval augmented generation (RAG). 
+A complete example of how to build a Llama 3 chatbot, hosted in your browser, that can answer questions based on your own data using retrieval augmented generation (RAG).

getting-started/README.md (+17, -2)

@@ -1,9 +1,24 @@
-## Llama-cookbook Getting Started
+<h1 align="center"> Getting Started </h1>
+<p align="center">
+	<a href="https://llama.developer.meta.com/join_waitlist?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=getting_started"><img src="https://img.shields.io/badge/Llama_API-Join_Waitlist-brightgreen?logo=meta" /></a>
+	<a href="https://llama.developer.meta.com/docs?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=getting_started"><img src="https://img.shields.io/badge/Llama_API-Documentation-4BA9FE?logo=meta" /></a>
+
+</p>
+<p align="center">
+	<a href="https://github.com/meta-llama/llama-models/blob/main/models/?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=getting_started"><img alt="Llama Model cards" src="https://img.shields.io/badge/Llama_OSS-Model_cards-green?logo=meta" /></a>
+	<a href="https://www.llama.com/docs/overview/?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=getting_started"><img alt="Llama Documentation" src="https://img.shields.io/badge/Llama_OSS-Documentation-4BA9FE?logo=meta" /></a>
+	<a href="https://huggingface.co/meta-llama"><img alt="Hugging Face meta-llama" src="https://img.shields.io/badge/Hugging_Face-meta--llama-yellow?logo=huggingface" /></a>
+
+</p>
+<p align="center">
+	<a href="https://github.com/meta-llama/synthetic-data-kit"><img alt="Llama Tools Synthetic Data Kit" src="https://img.shields.io/badge/Llama_Tools-synthetic--data--kit-orange?logo=meta" /></a>
+	<a href="https://github.com/meta-llama/llama-prompt-ops"><img alt="Llama Tools llama-prompt-ops" src="https://img.shields.io/badge/Llama_Tools-llama--prompt--ops-orange?logo=meta" /></a>
+</p>
 
 If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.
 
 * The [Build_with_Llama 4](./build_with_llama_4.ipynb) notebook provides a comprehensive walkthrough of the new capabilities of Llama 4 Scout models, including long context, multi-image input, and function calling.
-* The [Build_with_Llama API](./build_with_llama_api.ipynb) notebook highlights some of the features of [Llama API](https://llama.developer.meta.com).
+* The [Build_with_Llama API](./build_with_llama_api.ipynb) notebook highlights some of the features of [Llama API](https://llama.developer.meta.com?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=getting_started).
 * The [inference](./inference/) folder contains scripts to deploy Llama for inference on server and mobile. See also [3p_integrations/vllm](../3p-integrations/vllm/) and [3p_integrations/tgi](../3p-integrations/tgi/) for hosting Llama on open-source model servers.
 * The [RAG](./RAG/) folder contains a simple Retrieval-Augmented Generation application using Llama.
 * The [finetuning](./finetuning/) folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-cookbook finetuning code found in [finetuning.py](../src/llama_cookbook/finetuning.py) which supports these features: