Minor file renames

Pia Papanna 1 year ago
parent
commit
77156c7324

recipes/experimental/long-context/H2O/README.md → recipes/experimental/long_context/H2O/README.md


recipes/experimental/long-context/H2O/data/summarization/cnn_dailymail.jsonl → recipes/experimental/long_context/H2O/data/summarization/cnn_dailymail.jsonl


recipes/experimental/long-context/H2O/data/summarization/xsum.jsonl → recipes/experimental/long_context/H2O/data/summarization/xsum.jsonl


recipes/experimental/long-context/H2O/requirements.txt → recipes/experimental/long_context/H2O/requirements.txt


recipes/experimental/long-context/H2O/run_streaming.py → recipes/experimental/long_context/H2O/run_streaming.py


recipes/experimental/long-context/H2O/run_summarization.py → recipes/experimental/long_context/H2O/run_summarization.py


recipes/experimental/long-context/H2O/src/streaming.sh → recipes/experimental/long_context/H2O/src/streaming.sh


recipes/experimental/long-context/H2O/utils/cache.py → recipes/experimental/long_context/H2O/utils/cache.py


recipes/experimental/long-context/H2O/utils/llama.py → recipes/experimental/long_context/H2O/utils/llama.py


recipes/experimental/long-context/H2O/utils/streaming.py → recipes/experimental/long_context/H2O/utils/streaming.py
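All of the moves above are the same rename: the hyphenated `long-context` directory becomes `long_context`. As a hedged illustration only, not a record of how this commit was produced, a rename like this can be scripted with `git mv` so git stages it as a rename rather than a delete plus add:

```python
# Illustrative sketch (not how this commit was made): rename a tracked
# directory with `git mv` so git records renames instead of delete + add.
import subprocess
from pathlib import Path

# Old and new directory names taken from the rename list above.
OLD_DIR = Path("recipes/experimental/long-context")
NEW_DIR = Path("recipes/experimental/long_context")

def git_mv(old: Path, new: Path) -> None:
    """Stage `old` -> `new` as a rename in the git index."""
    subprocess.run(["git", "mv", str(old), str(new)], check=True)

if OLD_DIR.exists() and not NEW_DIR.exists():
    git_mv(OLD_DIR, NEW_DIR)
```

After running it, `git status` should report the affected files as `renamed:` entries, which keeps their history easy to follow.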


+ 2 - 2
recipes/responsible_ai/README.md

@@ -2,10 +2,10 @@

 Meta Llama Guard and Meta Llama Guard 2 are new models that provide input and output guardrails for LLM inference. For more details, please visit the main [repository](https://github.com/facebookresearch/PurpleLlama/tree/main/Llama-Guard2).

-**Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B). 
+**Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B).

 ### Running locally
 The [llama_guard](llama_guard) folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the [inference script](llama_guard/inference.py) before running it.

 ### Running on the cloud
-The notebooks [Purple_Llama_Anyscale](Purple_Llama_Anyscale.ipynb) & [Purple_Llama_OctoAI](Purple_Llama_OctoAI.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints.
+The notebooks [Purple_Llama_Anyscale](purple_llama_anyscale.ipynb) & [Purple_Llama_OctoAI](purple_llama_octoai.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints.

recipes/responsible_ai/CodeShieldUsageDemo.ipynb → recipes/responsible_ai/code_shield_usage_demo.ipynb


+ 2 - 2
recipes/use_cases/README.md

@@ -13,11 +13,11 @@ This step-by-step tutorial shows how to use the [WhatsApp Business API](https://
 ## [Messenger Chatbot](./customerservice_chatbots/messenger_llama/messenger_llama3.md): Building a Llama 3 Enabled Messenger Chatbot
 This step-by-step tutorial shows how to use the [Messenger Platform](https://developers.facebook.com/docs/messenger-platform/overview) to build a Llama 3 enabled Messenger chatbot.

-### RAG Chatbot Example (running [locally](./customerservice_chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb) or on [OctoAI](../3p_integration/octoai/RAG_Chatbot_example/RAG_Chatbot_Example.ipynb))
+### RAG Chatbot Example (running [locally](./customerservice_chatbots/RAG_chatbot/RAG_chatbot_example.ipynb) or on [OctoAI](../3p_integration/octoai/RAG_chatbot_example/RAG_chatbot_example.ipynb))
 A complete example of how to build a Llama 3 chatbot hosted on your browser that can answer questions based on your own data using retrieval augmented generation (RAG). You can run Llama2 locally if you have a good enough GPU or on OctoAI if you follow the note [here](../README.md#octoai_note).

 ## [Sales Bot](./customerservice_chatbots/sales_bot/SalesBot.ipynb): Sales Bot with Llama3 - A Summarization and RAG Use Case
 An summarization + RAG use case built around the Amazon product review Kaggle dataset to build a helpful Music Store Sales Bot. The summarization and RAG are built on top of Llama models hosted on OctoAI, and the vector database is hosted on Weaviate Cloud Services.

-## [Media Generation](./MediaGen.ipynb): Building a Video Generation Pipeline with Llama3
+## [Media Generation](./mediagen.ipynb): Building a Video Generation Pipeline with Llama3
 This step-by-step tutorial shows how to use leverage Llama 3 to drive the generation of animated videos using SDXL and SVD. More specifically it relies on JSON formatting to produce a scene-by-scene story board of a recipe video. The user provides the name of a dish, then Llama 3 describes a step by step guide to reproduce the said dish. This step by step guide is brought to life with models like SDXL and SVD.
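The two README hunks in this commit only retarget links at the renamed, lower-cased paths. As an illustrative sketch, with the regex, helper name, and checked README path all assumptions rather than anything from this commit, one way to catch stale links of this kind is to verify that every relative Markdown link resolves to a file whose on-disk casing matches the link exactly:

```python
# Illustrative sketch (not part of this commit): flag relative Markdown links
# whose targets do not exist on disk with the exact casing used in the link.
import re
from pathlib import Path

# Matches the target of inline links like [text](path); anchors and titles are cut off.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#\s]+)")

def broken_links(readme: Path) -> list[str]:
    """Return link targets in `readme` that are missing or mis-cased on disk."""
    broken = []
    for target in LINK_RE.findall(readme.read_text(encoding="utf-8")):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # only check repository-relative links
        candidate = readme.parent / target
        # exists() can succeed on case-insensitive filesystems even when the
        # casing is wrong, so also compare against the actual directory listing.
        if not candidate.exists() or candidate.name not in (p.name for p in candidate.parent.iterdir()):
            broken.append(target)
    return broken

if __name__ == "__main__":
    print(broken_links(Path("recipes/use_cases/README.md")))
```

A plain existence check is not enough on case-insensitive filesystems (the defaults on macOS and Windows), where a link such as `Purple_Llama_Anyscale.ipynb` would still appear to resolve against `purple_llama_anyscale.ipynb`; comparing the linked name against the real directory listing catches that mismatch.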