
File name updates to address the Lint changes

Pia Papanna, 11 months ago
Commit c0f08c1074

+ 4 - 1
.github/scripts/spellcheck_conf/wordlist.txt

@@ -1390,4 +1390,7 @@ chatbot's
 Lamini
 lamini
 nba
-sqlite
+sqlite
+customerservice
+fn
+ExecuTorch

+ 1 - 1
recipes/responsible_ai/README.md

@@ -8,4 +8,4 @@ Meta Llama Guard and Meta Llama Guard 2 are new models that provide input and ou
 The [llama_guard](llama_guard) folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the [inference script](llama_guard/inference.py) before running it.
 
 ### Running on the cloud
-The notebooks [Purple_Llama_Anyscale](purple_llama_anyscale.ipynb) & [Purple_Llama_OctoAI](purple_llama_octoai.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints.
+The notebooks [Purple_Llama_Anyscale](Purple_Llama_Anyscale.ipynb) & [Purple_Llama_OctoAI](Purple_Llama_Octoai.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints.

+ 2 - 2
recipes/use_cases/README.md

@@ -13,11 +13,11 @@ This step-by-step tutorial shows how to use the [WhatsApp Business API](https://
 ## [Messenger Chatbot](./customerservice_chatbots/messenger_llama/messenger_llama3.md): Building a Llama 3 Enabled Messenger Chatbot
 This step-by-step tutorial shows how to use the [Messenger Platform](https://developers.facebook.com/docs/messenger-platform/overview) to build a Llama 3 enabled Messenger chatbot.
 
-### RAG Chatbot Example (running [locally](./customerservice_chatbots/RAG_chatbot/RAG_chatbot_example.ipynb) or on [OctoAI](../3p_integration/octoai/RAG_chatbot_example/RAG_chatbot_example.ipynb))
+### RAG Chatbot Example (running [locally](./customerservice_chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb) or on [OctoAI](../3p_integration/octoai/RAG_chatbot_example/RAG_chatbot_example.ipynb))
 A complete example of how to build a Llama 3 chatbot hosted on your browser that can answer questions based on your own data using retrieval augmented generation (RAG). You can run Llama2 locally if you have a good enough GPU or on OctoAI if you follow the note [here](../README.md#octoai_note).
 
 ## [Sales Bot](./customerservice_chatbots/sales_bot/SalesBot.ipynb): Sales Bot with Llama3 - A Summarization and RAG Use Case
 A summarization + RAG use case built around the Amazon product review Kaggle dataset to build a helpful Music Store Sales Bot. The summarization and RAG are built on top of Llama models hosted on OctoAI, and the vector database is hosted on Weaviate Cloud Services.
 
-## [Media Generation](./mediagen.ipynb): Building a Video Generation Pipeline with Llama3
+## [Media Generation](./MediaGen.ipynb): Building a Video Generation Pipeline with Llama3
 This step-by-step tutorial shows how to leverage Llama 3 to drive the generation of animated videos using SDXL and SVD. More specifically, it relies on JSON formatting to produce a scene-by-scene storyboard of a recipe video. The user provides the name of a dish, then Llama 3 describes a step-by-step guide to reproduce that dish. This step-by-step guide is brought to life with models like SDXL and SVD.

+ 1 - 1
recipes/use_cases/multilingual/README.md

@@ -118,7 +118,7 @@ phase2_ds.save_to_disk("data/phase2")
 ```
 
 ### Train
-Finally, we can start finetuning Llama2 on these datasets by following the [finetuning recipes](https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/finetuning). Remember to pass the new tokenizer path as an argument to the script: `--tokenizer_name=./extended_tokenizer`.
+Finally, we can start finetuning Llama2 on these datasets by following the [finetuning recipes](../../quickstart/finetuning/). Remember to pass the new tokenizer path as an argument to the script: `--tokenizer_name=./extended_tokenizer`.
 
 OpenHathi was trained on 64 A100 80GB GPUs. Here are the hyperparameters used and other training details:
 - maximum learning rate: 2e-4
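For reference, the `--tokenizer_name` argument mentioned in the hunk above is passed to the finetuning entry point alongside the usual model and output arguments. Below is a minimal invocation sketch; the model name, PEFT flags, and output directory are illustrative assumptions drawn from the standard llama-recipes finetuning options, and only `--tokenizer_name=./extended_tokenizer` comes from this README.

```bash
# Illustrative sketch only: single-process PEFT finetuning with the extended tokenizer.
# Model path, PEFT flags, and output_dir are assumptions; --tokenizer_name is from the README above.
python -m llama_recipes.finetuning \
    --use_peft --peft_method lora \
    --model_name meta-llama/Llama-2-7b-hf \
    --tokenizer_name ./extended_tokenizer \
    --output_dir ./phase1_peft_checkpoint
```

For multi-GPU runs such as the 64-GPU setup described above, the same flag would be added to the distributed launch of the finetuning script.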