
Updated broken links in various notebooks, including references to Llama 2 and Llama 3 documentation, and corrected the API reference path for RecursiveCharacterTextSplitter

Arun Brahma, 3 months ago
Commit 8f0ae1b366

+ 1 - 1
3p-integrations/aws/prompt_engineering_with_llama_2_on_amazon_bedrock.ipynb

@@ -909,7 +909,7 @@
    "source": [
     "### Role Prompting\n",
     "\n",
-    "Llama 2 will often give more consistent responses when given a role ([Kong et al. (2023)](https://browse.arxiv.org/pdf/2308.07702.pdf)). Roles give context to the LLM on what type of answers are desired.\n",
+    "Llama 2 will often give more consistent responses when given a role ([Kong et al. (2023)](https://arxiv.org/pdf/2308.07702)). Roles give context to the LLM on what type of answers are desired.\n",
     "\n",
     "Let's use Llama 2 to create a more focused, technical response for a question around the pros and cons of using PyTorch."
    ]

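For context on the role-prompting pattern referenced in this hunk, here is a minimal sketch that asks the same PyTorch question with and without a role. It assumes the Bedrock Converse API via boto3; the `MODEL_ID` value and region are placeholders, not taken from the notebook, so substitute whichever Llama chat model is enabled in your account.

```python
import boto3

# Placeholder model ID; use any Llama chat model your account has access to.
MODEL_ID = "meta.llama3-8b-instruct-v1:0"

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(question: str, role: str | None = None) -> str:
    """Send one question, optionally prefixing a role via the system prompt."""
    kwargs = {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": question}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }
    if role:
        kwargs["system"] = [{"text": f"You are {role}. Answer concisely and technically."}]
    response = client.converse(**kwargs)
    return response["output"]["message"]["content"][0]["text"]

question = "What are the pros and cons of using PyTorch?"
print(ask(question))                                        # no role: generic answer
print(ask(question, role="a senior ML systems engineer"))   # with role: more focused, technical answer
```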
+ 1 - 1
3p-integrations/langchain/langgraph_rag_agent.ipynb

@@ -54,7 +54,7 @@
     "\n",
     "### LLM\n",
     "\n",
-    "We can use one of the providers that (1) offer Llama 3 and (2) [provide structure outputs](https://python.langchain.com/docs/modules/model_io/chat/structured_output/).\n",
+    "We can use one of the providers that (1) offer Llama 3 and (2) [provide structure outputs](https://python.langchain.com/docs/how_to/structured_output/).\n",
     "\n",
     "Here, we use [Groq](https://groq.com/).\n",
     "\n",

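As a companion to the structured-output link above, the sketch below shows the general LangChain pattern of binding a Pydantic schema to a chat model with `with_structured_output`. The Groq model name is an assumption and the routing schema is invented for illustration; it is not the notebook's exact code.

```python
from pydantic import BaseModel, Field
from langchain_groq import ChatGroq  # pip install langchain-groq; requires GROQ_API_KEY in the environment

class RouteQuery(BaseModel):
    """Route a user question to a vectorstore or web search."""
    datasource: str = Field(description="Either 'vectorstore' or 'websearch'.")
    reasoning: str = Field(description="One sentence explaining the choice.")

llm = ChatGroq(model="llama3-70b-8192", temperature=0)  # model name is an assumption
structured_llm = llm.with_structured_output(RouteQuery)

# The result comes back as a validated RouteQuery object rather than free text.
result = structured_llm.invoke("What are the types of agent memory?")
print(result.datasource, "-", result.reasoning)
```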
+ 1 - 1
3p-integrations/langchain/langgraph_tool_calling_agent.ipynb

@@ -50,7 +50,7 @@
     "\n",
     "We can review LangChain LLM integrations that support tool calling [here](https://python.langchain.com/docs/integrations/chat/).\n",
     "\n",
-    "Groq is included. [Here](https://github.com/groq/groq-api-cookbook/blob/main/llama3-stock-market-function-calling/llama3-stock-market-function-calling.ipynb) is a notebook by Groq on function calling with Llama 3 and LangChain."
+    "Groq is included. [Here](https://github.com/groq/groq-api-cookbook/blob/main/tutorials/llama3-stock-market-function-calling/llama3-stock-market-function-calling.ipynb) is a notebook by Groq on function calling with Llama 3 and LangChain."
    ]
   },
   {

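For readers who want a self-contained taste of tool calling with Llama 3 on Groq before opening the linked notebook, here is a minimal sketch using LangChain's `@tool` decorator and `bind_tools`. The model name and the toy `magic_function` tool are assumptions for illustration.

```python
from langchain_core.tools import tool
from langchain_groq import ChatGroq  # requires GROQ_API_KEY in the environment

@tool
def magic_function(x: int) -> int:
    """Apply a magic operation to an integer."""
    return x + 2

llm = ChatGroq(model="llama3-70b-8192", temperature=0)  # model name is an assumption
llm_with_tools = llm.bind_tools([magic_function])

ai_msg = llm_with_tools.invoke("What is magic_function(3)?")
# The model does not execute the tool; it returns structured tool calls for the caller to run.
for call in ai_msg.tool_calls:
    print(call["name"], call["args"])  # e.g. magic_function {'x': 3}
```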
+ 1 - 1
3p-integrations/octoai/RAG_chatbot_example/RAG_chatbot_example.ipynb

@@ -165,7 +165,7 @@
    "metadata": {},
    "source": [
     "Split the loaded documents into smaller chunks.\n",
-    "[`RecursiveCharacterTextSplitter`](https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.RecursiveCharacterTextSplitter.html) is one common splitter that splits long pieces of text into smaller, semantically meaningful chunks.\n",
+    "[`RecursiveCharacterTextSplitter`](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) is one common splitter that splits long pieces of text into smaller, semantically meaningful chunks.\n",
     "Other splitters include:\n",
     "* SpacyTextSplitter\n",
     "* NLTKTextSplitter\n",

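The splitter referenced in the corrected link can be exercised in a few lines; the sketch below is a generic example rather than the notebook's code, and the chunk sizes and input file name are placeholders.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter  # pip install langchain-text-splitters

# Split long text into overlapping, roughly fixed-size chunks for retrieval.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # target characters per chunk (illustrative value)
    chunk_overlap=100,  # overlap preserves context across chunk boundaries
)

with open("my_document.txt") as f:  # placeholder input file
    long_text = f.read()

chunks = splitter.split_text(long_text)
print(f"{len(chunks)} chunks; first 80 chars of chunk 0: {chunks[0][:80]!r}")
```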
+ 1 - 1
end-to-end-use-cases/customerservice_chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb

@@ -155,7 +155,7 @@
    "metadata": {},
    "source": [
     "Split the loaded documents into smaller chunks.  \n",
-    "[`RecursiveCharacterTextSplitter`](https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.RecursiveCharacterTextSplitter.html) is one common splitter that splits long pieces of text into smaller, semantically meaningful chunks.  \n",
+    "[`RecursiveCharacterTextSplitter`](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) is one common splitter that splits long pieces of text into smaller, semantically meaningful chunks.  \n",
     "Other splitters include:\n",
     "* SpacyTextSplitter\n",
     "* NLTKTextSplitter\n",

+ 1 - 1
end-to-end-use-cases/responsible_ai/llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb

@@ -21,7 +21,7 @@
     "\n",
     "Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Llama Guard 3 builds on the capabilities introduced in Llama Guard 2, adding three new categories: Defamation, Elections, and Code Interpreter Abuse. The new model support 14 categories in total.\n",
     "\n",
-    "This model is multilingual (see [model card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/MODEL_CARD.md)) and additionally introduces a new prompt format, which makes Llama Guard 3’s prompt format consistent with Llama 3+ Instruct models.\n",
+    "This model is multilingual (see [model card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/README.md)) and additionally introduces a new prompt format, which makes Llama Guard 3’s prompt format consistent with Llama 3+ Instruct models.\n",
     "\n",
     "Sometimes these 14 categories are not sufficient and there will be a need to customize existing policies or creating new policies. This notebooks provides you instruction for how to customize your Llama Guard 3 using the following techniques\n",
     "\n",

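To make the customization discussion concrete, here is a minimal sketch of running Llama Guard 3 on a single user turn with Hugging Face transformers; the tokenizer's chat template builds the safety prompt covering the 14 categories mentioned above. Access to the gated meta-llama checkpoint and a GPU large enough for an 8B model are assumed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes you have been granted access to the gated checkpoint on Hugging Face.
model_id = "meta-llama/Llama-Guard-3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

chat = [{"role": "user", "content": "How do I pick a lock?"}]
# The chat template wraps the conversation in the Llama Guard 3 prompt format.
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)

output = model.generate(input_ids=input_ids, max_new_tokens=30, pad_token_id=tokenizer.eos_token_id)
# The verdict is the generated continuation: "safe", or "unsafe" followed by a category code.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```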
+ 1 - 1
getting-started/Prompt_Engineering_with_Llama.ipynb

@@ -445,7 +445,7 @@
    "source": [
     "### Role Prompting\n",
     "\n",
-    "Llama will often give more consistent responses when given a role ([Kong et al. (2023)](https://browse.arxiv.org/pdf/2308.07702.pdf)). Roles give context to the LLM on what type of answers are desired.\n",
+    "Llama will often give more consistent responses when given a role ([Kong et al. (2023)](https://arxiv.org/pdf/2308.07702)). Roles give context to the LLM on what type of answers are desired.\n",
     "\n",
     "Let's use Llama 3 to create a more focused, technical response for a question around the pros and cons of using PyTorch."
    ]