
fix few typos (#853)

Sanyam Bhutani, 3 months ago
parent
commit
78eba8179e
27 changed files with 60 additions and 60 deletions
  1. +3 -3
      3p-integrations/aws/react_llama_3_bedrock_wk.ipynb
  2. +1 -1
      3p-integrations/azure/Azure MaaS/azure_api_example.ipynb
  3. +3 -3
      3p-integrations/groq/groq-api-cookbook/rag-langchain-presidential-speeches/rag-langchain-presidential-speeches.ipynb
  4. +3 -3
      3p-integrations/lamini/text2sql_memory_tuning/meta_lamini.ipynb
  5. +1 -1
      3p-integrations/langchain/langgraph_rag_agent.ipynb
  6. +4 -4
      3p-integrations/langchain/langgraph_rag_agent_local.ipynb
  7. +3 -3
      3p-integrations/langchain/langgraph_tool_calling_agent.ipynb
  8. +1 -1
      3p-integrations/octoai/MediaGen.ipynb
  9. +1 -1
      3p-integrations/octoai/RAG_chatbot_example/RAG_chatbot_example.ipynb
  10. +1 -1
      3p-integrations/octoai/video_summary.ipynb
  11. +1 -1
      3p-integrations/togetherai/knowledge_graphs_with_structured_outputs.ipynb
  12. +6 -6
      3p-integrations/togetherai/llama_contextual_RAG.ipynb
  13. +2 -2
      end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_1_Data_Preparation.ipynb
  14. +2 -2
      end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_2_Cleaning_Data_and_DB.ipynb
  15. +1 -1
      end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_3_RAG_Setup_and_Validation.ipynb
  16. +1 -1
      end-to-end-use-cases/NotebookLlama/Step-2-Transcript-Writer.ipynb
  17. +1 -1
      end-to-end-use-cases/NotebookLlama/Step-3-Re-Writer.ipynb
  18. +1 -1
      end-to-end-use-cases/agents/DeepLearningai_Course_Notebooks/Functions_Tools_and_Agents_with_LangChain_L1_Function_Calling.ipynb
  19. +4 -4
      end-to-end-use-cases/customerservice_chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb
  20. +7 -7
      end-to-end-use-cases/customerservice_chatbots/RAG_chatbot/vectorstore/mongodb/rag_mongodb_llama3_huggingface_open_source.ipynb
  21. +1 -1
      end-to-end-use-cases/customerservice_chatbots/ai_agent_chatbot/SalesBot.ipynb
  22. +1 -1
      end-to-end-use-cases/github_triage/walkthrough.ipynb
  23. +2 -2
      end-to-end-use-cases/responsible_ai/llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb
  24. +2 -2
      end-to-end-use-cases/responsible_ai/prompt_guard/prompt_guard_tutorial.ipynb
  25. +1 -1
      end-to-end-use-cases/video_summary.ipynb
  26. +3 -3
      getting-started/Prompt_Engineering_with_Llama.ipynb
  27. +3 -3
      getting-started/build_with_Llama_3_2.ipynb

+ 3 - 3
3p-integrations/aws/react_llama_3_bedrock_wk.ipynb

@@ -11,7 +11,7 @@
     "\n",
     "LLMs abilities for reasoning (e.g. chain-of-thought CoT prompting) and acting have primarily been studied as separate topics. **ReAct** [Shunyu Yao et al. ICLR 2023](https://arxiv.org/pdf/2210.03629.pdf) (Reason and Act) is a method to generate both reasoning traces and task-specific actions in an interleaved manner.\n",
     "\n",
-    "In simple words, we define specific patterns for the language model to follow. This allows the model to act (usually through tools) and reason. Hence the model creates a squence of interleaved thoughts and actions. Such systems that act on an enviroment are usually called **agents** (borrowed from reinforcement learning).\n",
+    "In simple words, we define specific patterns for the language model to follow. This allows the model to act (usually through tools) and reason. Hence the model creates a sequence of interleaved thoughts and actions. Such systems that act on an environment are usually called **agents** (borrowed from reinforcement learning).\n",
     "\n",
     "![image.png](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuuYg9Pduep9GkUfjloNVOiy3qjpPbT017GKlgGEGMaLNu_TCheEeJ7r8Qok6-0BK3KMfLvsN2vSgFQ8xOvnHM9CAb4Ix4I62bcN2oXFWfqAJzGAGbVqbeCyVktu3h9Dyf5ameRe54LEr32Emp0nG52iofpNOTXCxMY12K7fvmDZNPPmfJaT5zo1OBQA/s595/Screen%20Shot%202022-11-08%20at%208.53.49%20AM.png)"
    ]
@@ -401,7 +401,7 @@
    "source": [
     "### Cleaning \n",
     "\n",
-    "Note that the model did a good job of identifying which tool to use and also what should be the input to the tool. But being a language model, it will complete the task even with incorrent info. Therefore, we need to clean up the generated text and format it before giving it to the corresponding tool."
+    "Note that the model did a good job of identifying which tool to use and also what should be the input to the tool. But being a language model, it will complete the task even with incorrect info. Therefore, we need to clean up the generated text and format it before giving it to the corresponding tool."
    ]
   },
   {
@@ -534,7 +534,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Here we have very simple two step chain of acting (getting info from web) and reasoning (identifying the final asnwer). For doing longer and more complex chains we will need many more techniques that we will study in the future sessions, so **stay tuned!**"
+    "Here we have very simple two step chain of acting (getting info from web) and reasoning (identifying the final answer). For doing longer and more complex chains we will need many more techniques that we will study in the future sessions, so **stay tuned!**"
    ]
   },
   {

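The cells quoted above explain the ReAct pattern and note that, being a language model, Llama will complete the task even with incorrect info, so its raw output must be cleaned before it reaches a tool. As a minimal sketch of that dispatch-and-cleanup step (the Action-line format, the tool names, and the registry below are illustrative assumptions, not code from the notebook):

import re

# Hypothetical tool registry; the real notebook wires this to Bedrock-backed tools.
tools = {
    "web_search": lambda query: f"(search results for {query!r})",
}

def dispatch(model_output: str) -> str:
    """Extract an 'Action: tool[input]' line from raw LLM text and call the tool.

    The cleanup (regex match, stripping quotes and whitespace) happens
    before the tool call, since the model's formatting is not guaranteed.
    """
    match = re.search(r"Action:\s*(\w+)\[(.*?)\]", model_output)
    if match is None:
        raise ValueError("no well-formed action in model output")
    name, arg = match.group(1), match.group(2).strip().strip('"')
    if name not in tools:
        raise KeyError(f"model requested unknown tool: {name}")
    return tools[name](arg)

print(dispatch('Thought: I need current info.\nAction: web_search["Llama 3 context length"]'))
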
+ 1 - 1
3p-integrations/azure/Azure MaaS/azure_api_example.ipynb

@@ -150,7 +150,7 @@
       "source": [
         "## HTTP Requests API Usage in Python\n",
         "\n",
-        "Besides calling the API directly from command line tools, you can also programatically call them in Python.  \n",
+        "Besides calling the API directly from command line tools, you can also programmatically call them in Python.  \n",
         "\n",
         "Here is an example for the instruct model:\n",
         "\n",

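The cell above introduces calling the API programmatically from Python; the notebook's own example follows this cell in the actual file. For orientation, a generic hedged sketch of an OpenAI-style chat request (the endpoint URL, header names, and payload shape are placeholders and assumptions; consult the Azure docs for your specific deployment):

import requests

# Placeholders: substitute the endpoint URL and key from your Azure deployment.
url = "https://<your-endpoint>/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <your-api-key>",
}
payload = {
    "messages": [{"role": "user", "content": "Explain RAG in one sentence."}],
    "max_tokens": 96,
}

response = requests.post(url, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
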
+ 3 - 3
3p-integrations/groq/groq-api-cookbook/rag-langchain-presidential-speeches/rag-langchain-presidential-speeches.ipynb

The diff for this file has been suppressed because it is too large.

+ 3 - 3
3p-integrations/lamini/text2sql_memory_tuning/meta_lamini.ipynb

@@ -751,8 +751,8 @@
     "                continue\n",
     "\n",
     "            logger.info(\"=====================================\")\n",
-    "            logger.info(f\"Generted query 1: {result.response['sql_query_1']}\")\n",
-    "            logger.info(f\"Generted query 2: {result.response['sql_query_2']}\")\n",
+    "            logger.info(f\"Generated query 1: {result.response['sql_query_1']}\")\n",
+    "            logger.info(f\"Generated query 2: {result.response['sql_query_2']}\")\n",
     "            logger.info(\"=====================================\")\n",
     "\n",
     "            if self.check_sql_query(result.response[\"sql_query_1\"]):\n",
@@ -1731,7 +1731,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Looks like there's plenty of room for improvment! You know how this works now:\n",
+    "Looks like there's plenty of room for improvement! You know how this works now:\n",
     "1. Generate a new training dataset \n",
     "2. Train a model\n",
     "3. Evaluate"

+ 1 - 1
3p-integrations/langchain/langgraph_rag_agent.ipynb

@@ -44,7 +44,7 @@
     "\n",
     "![Screenshot 2024-05-03 at 10.50.02 AM.png](attachment:dccfae03-f250-494e-82d6-f229eafb0ea6.png)\n",
     "\n",
-    "Note that this will incorperate [a few general ideas for agents](https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/):\n",
+    "Note that this will incorporate [a few general ideas for agents](https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/):\n",
     "\n",
     "- **Reflection**: The self-correction mechanism is a form of reflection, where the LangGraph agent reflects on its retrieval and generations\n",
     "- **Planning**: The control flow laid out in the graph is a form of planning \n",

+ 4 - 4
3p-integrations/langchain/langgraph_rag_agent_local.ipynb

@@ -32,7 +32,7 @@
     "\n",
     "Previously, we showed how to build simple agents with LangGraph and Llama 3.\n",
     "\n",
-    "Now, we'll pick a more advanced use-case: advanced RAG, with the requirment that it runs locally.\n",
+    "Now, we'll pick a more advanced use-case: advanced RAG, with the requirement that it runs locally.\n",
     "\n",
     "## Ideas\n",
     "\n",
@@ -44,7 +44,7 @@
     "\n",
     "![langgraph_adaptive_rag.png](attachment:7b00797e-fb85-4474-9a9e-c505b61add81.png)\n",
     "\n",
-    "Note that this will incorperate [a few general ideas for agents](https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/):\n",
+    "Note that this will incorporate [a few general ideas for agents](https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/):\n",
     "\n",
     "- **Reflection**: The self-correction mechanism is a form of reflection, where the LangGraph agent reflects on its retrieval and generations\n",
     "- **Planning**: The control flow laid out in the graph is a form of planning \n",
@@ -173,7 +173,7 @@
     "    grade it as relevant. It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \n",
     "    \n",
     "    Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.\n",
-    "    Provide the binary score as a JSON with a single key 'score' and no premable or explaination.\n",
+    "    Provide the binary score as a JSON with a single key 'score' and no premable or explanation.\n",
     "     \n",
     "    Here is the retrieved document: \n",
     "    {document}\n",
@@ -316,7 +316,7 @@
     "    prompt engineering, and adversarial attacks. You do not need to be stringent with the keywords \n",
     "    in the question related to these topics. Otherwise, use web-search. Give a binary choice 'web_search' \n",
     "    or 'vectorstore' based on the question. Return the a JSON with a single key 'datasource' and \n",
-    "    no premable or explaination. \n",
+    "    no premable or explanation. \n",
     "    \n",
     "    Question to route: \n",
     "    {question}\"\"\",\n",

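The grading and routing prompts quoted above ask the local model to return bare JSON with a single key ('score' or 'datasource') and no preamble. A small sketch of how such a reply is typically parsed on the consuming side (the helper name and its fallback behavior are illustrative assumptions, not the notebook's code):

import json

def parse_grade(raw: str, key: str = "score") -> str:
    """Parse a bare-JSON grader response like '{"score": "yes"}'.

    Falls back to 'no' if the model added a preamble despite the instructions.
    """
    try:
        return json.loads(raw.strip())[key]
    except (json.JSONDecodeError, KeyError):
        # Salvage the first {...} span if the model wrapped it in extra text.
        start, end = raw.find("{"), raw.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(raw[start : end + 1])[key]
            except (json.JSONDecodeError, KeyError):
                pass
        return "no"

assert parse_grade('{"score": "yes"}') == "yes"
assert parse_grade('Sure! {"score": "no"}') == "no"
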
+ 3 - 3
3p-integrations/langchain/langgraph_tool_calling_agent.ipynb

@@ -42,7 +42,7 @@
     "\n",
     "We'll augment a tool-calling version of Llama 3 with various multi-model capabilities using an agent. \n",
     "\n",
-    "### Enviorment\n",
+    "### Environment\n",
     "\n",
     "We'll use [Tavily](https://tavily.com/#api) for web search.\n",
     "\n",
@@ -322,8 +322,8 @@
     "            # Invoke the tool-calling LLM\n",
     "            result = self.runnable.invoke(state)\n",
     "            # If it is a tool call -> response is valid\n",
-    "            # If it has meaninful text -> response is valid\n",
-    "            # Otherwise, we re-prompt it b/c response is not meaninful\n",
+    "            # If it has meaningful text -> response is valid\n",
+    "            # Otherwise, we re-prompt it b/c response is not meaningful\n",
     "            if not result.tool_calls and (\n",
     "                not result.content\n",
     "                or isinstance(result.content, list)\n",

+ 1 - 1
3p-integrations/octoai/MediaGen.ipynb

@@ -522,7 +522,7 @@
     "# two durations: short duration (2.0s) and long duration (4.0s)\n",
     "short_duration = 2\n",
     "long_duration = 4\n",
-    "# We keep track of the time ellapsed\n",
+    "# We keep track of the time elapsed\n",
     "t = 0\n",
     "# This sub list will contain tuples in the following form:\n",
     "# ((t_start, t_end), \"caption\")\n",

+ 1 - 1
3p-integrations/octoai/RAG_chatbot_example/RAG_chatbot_example.ipynb

@@ -340,7 +340,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Next, we define the retriever and template for our RetrivalQA chain. For each call of the RetrievalQA, LangChain performs a semantic similarity search of the query in the vector database, then passes the search results as the context to Llama to answer the query about the data stored in the verctor database.\n",
+    "Next, we define the retriever and template for our RetrivalQA chain. For each call of the RetrievalQA, LangChain performs a semantic similarity search of the query in the vector database, then passes the search results as the context to Llama to answer the query about the data stored in the vector database.\n",
     "Whereas for the template, this defines the format of the question along with context that we will be sent into Llama for generation. In general, Llama 3 has special prompt format to handle special tokens. In some cases, the serving framework might already have taken care of it. Otherwise, you will need to write customized template to properly handle that."
    ]
   },

+ 1 - 1
3p-integrations/octoai/video_summary.ipynb

@@ -273,7 +273,7 @@
     "<new_content>\n",
     "```\n",
     "\n",
-    "**Note**: The following call will make 33 calls to Llama 3 and genereate the final summary in about 10 minutes."
+    "**Note**: The following call will make 33 calls to Llama 3 and generate the final summary in about 10 minutes."
    ]
   },
   {

+ 1 - 1
3p-integrations/togetherai/knowledge_graphs_with_structured_outputs.ipynb

@@ -232,7 +232,7 @@
     }
    ],
    "source": [
-    "# Lets see the knowlege graph components generated!\n",
+    "# Lets see the knowledge graph components generated!\n",
     "graph"
    ]
   },

+ 6 - 6
3p-integrations/togetherai/llama_contextual_RAG.ipynb

@@ -19,7 +19,7 @@
       "source": [
         "## Introduction\n",
         "\n",
-        "[Contextual Retrieval](https://www.anthropic.com/news/contextual-retrieval) is a chunk augmentation technique that uses a LLM to ehance each chunk.\n",
+        "[Contextual Retrieval](https://www.anthropic.com/news/contextual-retrieval) is a chunk augmentation technique that uses a LLM to enhance each chunk.\n",
         "\n",
         "<img src=\"images/cRAG.png\" width=\"1000\">\n",
         "\n",
@@ -61,8 +61,8 @@
         "\n",
         "<img src=\"images/cRAG_querytime.png\" width=\"750\">\n",
         "\n",
-        "1. Perform retreival using both indices and combine them using RRF\n",
-        "2. Reranker to improve retreival quality\n",
+        "1. Perform retrieval using both indices and combine them using RRF\n",
+        "2. Reranker to improve retrieval quality\n",
         "3. Generation with Llama 405b"
       ]
     },
@@ -351,7 +351,7 @@
         "\n",
         "{CHUNK_CONTENT}\n",
         "\n",
-        "Answer ONLY with a succinct explaination of the meaning of the chunk in the context of the whole document above.\n",
+        "Answer ONLY with a succinct explanation of the meaning of the chunk in the context of the whole document above.\n",
         "\"\"\""
       ]
     },
@@ -407,7 +407,7 @@
             "\n",
             "September 2024At a YC event last week Brian Chesky gave a talk that everyone who was there will remember. Most founders I talked to afterward said it was the best they'd ever heard. Ron Conway, for the first time in his life, forgot to take notes. I'\n",
             "\n",
-            "Answer ONLY with a succinct explaination of the meaning of the chunk in the context of the whole document above.\n",
+            "Answer ONLY with a succinct explanation of the meaning of the chunk in the context of the whole document above.\n",
             "\n"
           ]
         }
@@ -561,7 +561,7 @@
       "source": [
         "`$0.06` per 1 million tokens for Llama 3b.\n",
         "\n",
-        "Assuming input lenght of ~ 1560 tokens and output length of 100 tokens.\n",
+        "Assuming input length of ~ 1560 tokens and output length of 100 tokens.\n",
         "\n",
         "Given an approximate token count of ~ 1660 per context generation we can generate 10,000 contexts for a $1.00.\n",
         "\n",

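The query-time steps quoted above fuse the two indices with Reciprocal Rank Fusion (RRF), which scores each document by summing 1/(k + rank) over the ranked lists it appears in. A self-contained sketch of just the fusion step (the ranked lists are toy data; k = 60 is the constant commonly used with RRF):

from collections import defaultdict

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = defaultdict(float)
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["chunk_3", "chunk_1", "chunk_7"]   # ranking from the vector index
bm25  = ["chunk_1", "chunk_9", "chunk_3"]   # ranking from the keyword index
print(rrf([dense, bm25]))  # chunk_1 and chunk_3 rise to the top
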
+ 2 - 2
end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_1_Data_Preparation.ipynb

@@ -67,7 +67,7 @@
     "We import all the libraries here. \n",
     "\n",
     "- PIL: For handling images to be passed to our Llama model\n",
-    "- Huggingface Tranformers: For running the model\n",
+    "- Huggingface Transformers: For running the model\n",
     "- Concurrent Library: To clean up faster"
    ]
   },
@@ -1007,7 +1007,7 @@
     "\n",
     "We suggest testing 90B as an assignment. Although you will find that 11B is a great candidate for this model. \n",
     "\n",
-    "Read more about the model capabilites [here](https://www.llama.com/docs/how-to-guides/vision-capabilities/)"
+    "Read more about the model capabilities [here](https://www.llama.com/docs/how-to-guides/vision-capabilities/)"
    ]
   },
   {

+ 2 - 2
end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_2_Cleaning_Data_and_DB.ipynb

@@ -20,7 +20,7 @@
     "\n",
     "- Cleaing up Annotations produced from the previous step\n",
     "- Re-balancing categories: Since the model still hallucinates some new categories\n",
-    "- Final round of EDA beforing moving to creating a RAG pipeline in Notebook 3"
+    "- Final round of EDA before moving to creating a RAG pipeline in Notebook 3"
    ]
   },
   {
@@ -33,7 +33,7 @@
     "Hopefully you remember the prompt from previous notebook. Regardless of the prompt engineering, we still have a few issues to deal with: \n",
     "\n",
     "- The model hallucinates categories\n",
-    "- We need to delete escape characters to handle the JSON formatting. Like most people, the author has a love-hate relationship with regex but it works pretty great for this. Another approach that works is using `Llama-3.2-3B-Instruct` model for cleaning up. This is conviently left as an excercise for the reader\n",
+    "- We need to delete escape characters to handle the JSON formatting. Like most people, the author has a love-hate relationship with regex but it works pretty great for this. Another approach that works is using `Llama-3.2-3B-Instruct` model for cleaning up. This is conveniently left as an exercise for the reader\n",
     "- Refusals: Sometimes the model refuses to label the images-we need to remove these examples\n",
     "\n",
     "\n",

+ 1 - 1
end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_3_RAG_Setup_and_Validation.ipynb

@@ -674,7 +674,7 @@
     }
    ],
    "source": [
-    "# automatically converted to vector respresentation\n",
+    "# automatically converted to vector representation\n",
     "rs = tbl.search(response).limit(3).to_pandas()\n",
     "rs\n"
    ]

+ 1 - 1
end-to-end-use-cases/NotebookLlama/Step-2-Transcript-Writer.ipynb

@@ -49,7 +49,7 @@
     "It should be a real podcast with every fine nuance documented in as much detail as possible. Welcome the listeners with a super fun overview and keep it really catchy and almost borderline click bait\n",
     "\n",
     "ALWAYS START YOUR RESPONSE DIRECTLY WITH SPEAKER 1: \n",
-    "DO NOT GIVE EPISODE TITLES SEPERATELY, LET SPEAKER 1 TITLE IT IN HER SPEECH\n",
+    "DO NOT GIVE EPISODE TITLES SEPARATELY, LET SPEAKER 1 TITLE IT IN HER SPEECH\n",
     "DO NOT GIVE CHAPTER TITLES\n",
     "IT SHOULD STRICTLY BE THE DIALOGUES\n",
     "\"\"\""

+ 1 - 1
end-to-end-use-cases/NotebookLlama/Step-3-Re-Writer.ipynb

@@ -7,7 +7,7 @@
    "source": [
     "## Notebook 3: Transcript Re-writer\n",
     "\n",
-    "In the previouse notebook, we got a great podcast transcript using the raw file we have uploaded earlier. \n",
+    "In the previous notebook, we got a great podcast transcript using the raw file we have uploaded earlier. \n",
     "\n",
     "In this one, we will use `Llama-3.1-8B-Instruct` model to re-write the output from previous pipeline and make it more dramatic or realistic."
    ]

+ 1 - 1
end-to-end-use-cases/agents/DeepLearningai_Course_Notebooks/Functions_Tools_and_Agents_with_LangChain_L1_Function_Calling.ipynb

@@ -250,7 +250,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# by defining and using known_functions, we can programatically call function\n",
+    "# by defining and using known_functions, we can programmatically call function\n",
     "function_response = known_functions[function_call.name](function_call.arguments)"
    ]
   },

+ 4 - 4
end-to-end-use-cases/customerservice_chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb

@@ -437,7 +437,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "!curl localhost:8080/generate -X POST -H 'Content-Type: application/json' -d '{\"inputs\": \"What is good about Beijing?\", \"parameters\": { \"max_new_tokens\":64}}' #Replace the locahost with the IP visible to the machine running the notebook     "
+    "!curl localhost:8080/generate -X POST -H 'Content-Type: application/json' -d '{\"inputs\": \"What is good about Beijing?\", \"parameters\": { \"max_new_tokens\":64}}' #Replace the localhost with the IP visible to the machine running the notebook     "
    ]
   },
   {
@@ -484,7 +484,7 @@
     "DB_FAISS_PATH = 'vectorstore/db_faiss'\n",
     "\n",
     "#Llama2 TGI models host port\n",
-    "LLAMA3_8B_HOSTPORT = \"http://localhost:8080/\" #Replace the locahost with the IP visible to the machine running the notebook\n",
+    "LLAMA3_8B_HOSTPORT = \"http://localhost:8080/\" #Replace the localhost with the IP visible to the machine running the notebook\n",
     "LLAMA3_70B_HOSTPORT = \"http://localhost:8081/\" # You can host multiple models if your infrastructure has capacity\n",
     "\n",
     "\n",
@@ -557,7 +557,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Next, we define the retriever and template for our RetrivalQA chain. For each call of the RetrievalQA, LangChain performs a semantic similarity search of the query in the vector database, then passes the search results as the context to Llama to answer the query about the data stored in the verctor database.  \n",
+    "Next, we define the retriever and template for our RetrivalQA chain. For each call of the RetrievalQA, LangChain performs a semantic similarity search of the query in the vector database, then passes the search results as the context to Llama to answer the query about the data stored in the vector database.  \n",
     "Whereas for the template, this defines the format of the question along with context that we will be sent into Llama for generation. In general, Meta Llama 3 has special prompt format to handle special tokens. In some cases, the serving framework might already have taken care of it. Otherwise, you will need to write customized template to properly handle that.\n"
    ]
   },
@@ -729,7 +729,7 @@
     "        else:\n",
     "            return history + [[\"Invalid prompts - user prompt cannot be empty\", None]]\n",
     "\n",
-    "    #chatbot logic for configuration, sending the prompts, rendering the streamed back genereations etc\n",
+    "    #chatbot logic for configuration, sending the prompts, rendering the streamed back generations etc\n",
     "    def bot(model_selector, temperature_selector, top_p_selector, max_new_tokens_selector, user_prompt_message, history, messages_history):\n",
     "        dialog = []\n",
     "        bot_message = \"\"\n",

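The RetrievalQA description above boils down to: similarity-search the vector store, then stuff the hits into the prompt sent to Llama. A toy, runnable sketch of that flow (the retriever and generator stubs below stand in for the notebook's FAISS store and TGI-served model; they are not its code):

def answer_with_rag(query: str, retrieve, generate, k: int = 3) -> str:
    """Similarity-search the vector store, then stuff the hits into the prompt."""
    context = "\n\n".join(retrieve(query, k))
    prompt = (
        "Use the following context to answer the question.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)

# Toy stand-ins so the sketch runs end to end.
fake_retrieve = lambda q, k: ["Llama 2 was released by Meta in July 2023."][:k]
fake_generate = lambda p: "(model answer grounded in the retrieved context)"
print(answer_with_rag("When was Llama 2 released?", fake_retrieve, fake_generate))
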
+ 7 - 7
end-to-end-use-cases/customerservice_chatbots/RAG_chatbot/vectorstore/mongodb/rag_mongodb_llama3_huggingface_open_source.ipynb

@@ -37,7 +37,7 @@
         " \n",
         " ![image.png](attachment:image.png)\n",
         "\n",
-        "3. For a free Cluser, be sure to select \"Shared\" option when creating your new cluster. See image below for details\n",
+        "3. For a free Cluster, be sure to select \"Shared\" option when creating your new cluster. See image below for details\n",
         "\n",
         "![image-2.png](attachment:image-2.png)\n",
         "\n",
@@ -53,7 +53,7 @@
       "source": [
         "## Import Libraries\n",
         "\n",
-        "Import libaries into development environment"
+        "Import libraries into development environment"
       ]
     },
     {
@@ -96,7 +96,7 @@
         "import pandas as pd\n",
         "import os\n",
         "\n",
-        "# Make sure you have an Hugging Face token(HF_TOKEN) in your development environemnt before runing the code below\n",
+        "# Make sure you have an Hugging Face token(HF_TOKEN) in your development environment before runing the code below\n",
         "# How to get a token: https://huggingface.co/docs/hub/en/security-tokens\n",
         "# Dataset Location: https://huggingface.co/datasets/MongoDB/subset_arxiv_papers_with_embeddings\n",
         "os.environ[\"HF_TOKEN\"] = \"place_hugging_face_access_token here\" # Do not use this in production environment, use a .env file instead\n",
@@ -199,7 +199,7 @@
               "      <td>704.0001</td>\n",
               "      <td>Pavel Nadolsky</td>\n",
               "      <td>C. Bal\\'azs, E. L. Berger, P. M. Nadolsky, C.-...</td>\n",
-              "      <td>Calculation of prompt diphoton production cros...</td>\n",
+              "      <td>Calculation of prompt diphoton production cross...</td>\n",
               "      <td>37 pages, 15 figures; published version</td>\n",
               "      <td>Phys.Rev.D76:013009,2007</td>\n",
               "      <td>10.1103/PhysRevD.76.013009</td>\n",
@@ -512,7 +512,7 @@
               "4           Wael Abu-Shammala and Alberto Torchinsky   \n",
               "\n",
               "                                               title  \\\n",
-              "0  Calculation of prompt diphoton production cros...   \n",
+              "0  Calculation of prompt diphoton production cross...   \n",
               "1           Sparsity-certifying Graph Decompositions   \n",
               "2  The evolution of the Earth-Moon system based o...   \n",
               "3  A determinant of Stirling cycle numbers counts...   \n",
@@ -859,7 +859,7 @@
           "name": "stdout",
           "output_type": "stream",
           "text": [
-            "[{'role': 'system', 'content': 'You are a research assitant!'}, {'role': 'user', 'content': 'Query: Get me papers on Artificial Intelligence?\\nContinue to answer the query by using the Search Results:\\n.'}]\n"
+            "[{'role': 'system', 'content': 'You are a research assistant!'}, {'role': 'user', 'content': 'Query: Get me papers on Artificial Intelligence?\\nContinue to answer the query by using the Search Results:\\n.'}]\n"
           ]
         }
       ],
@@ -869,7 +869,7 @@
         "source_information = get_search_result(query, collection)\n",
         "combined_information = f\"Query: {query}\\nContinue to answer the query by using the Search Results:\\n{source_information}.\"\n",
         "messages = [\n",
-        "    {\"role\": \"system\", \"content\": \"You are a research assitant!\"},\n",
+        "    {\"role\": \"system\", \"content\": \"You are a research assistant!\"},\n",
         "    {\"role\": \"user\", \"content\": combined_information},\n",
         "]\n",
         "print(messages)"

+ 1 - 1
end-to-end-use-cases/customerservice_chatbots/ai_agent_chatbot/SalesBot.ipynb

@@ -440,7 +440,7 @@
    "id": "1551fd74-b143-4c02-9b56-364d33683fd3",
    "metadata": {},
    "source": [
-    "Now we upsert all of the vectors into the databse using OpenAI's embedding model."
+    "Now we upsert all of the vectors into the database using OpenAI's embedding model."
    ]
   },
   {

+ 1 - 1
end-to-end-use-cases/github_triage/walkthrough.ipynb

@@ -320,7 +320,7 @@
             "                                             summary  \\\n",
             "0  Torch compile stochastically fails with FileNo...   \n",
             "1  FlopCounterMode does not support HOP, causing ...   \n",
-            "2  Discussion on removing redundant copy operatio...   \n",
+            "2  Discussion on removing redundant copy operation...   \n",
             "3  Performance issue with send_object_list and re...   \n",
             "4  ProcessGroupNCCL::barrier() device id guessing...   \n",
             "\n",

+ 2 - 2
end-to-end-use-cases/responsible_ai/llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb

@@ -449,7 +449,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Beyond Prompt Customization - Evalulation and Fine Tuning\n",
+    "# Beyond Prompt Customization - Evaluation and Fine Tuning\n",
     "\n",
     "Finetuning is a technique used to improve the performance of a pre-trained model on a specific task. In the case of LlamaGuard, finetuning should be performed when the model does not perform sufficiently using the above techniques. For example, to train the model on categories which are not included in the default taxonomy. \n",
     "\n",
@@ -473,7 +473,7 @@
     "\n",
     "\n",
     "## Evaluation\n",
-    "The code below shows a workflow for evaluating the model using Toxic Chat. ToxicChat is provided as an example dataset. It is recommended that an dataset chosen specifically for the application be used to evaluate fine-tuning success. ToxicChat can be used to evaluate any degredation in standard category performance caused by the fine-tuning. \n"
+    "The code below shows a workflow for evaluating the model using Toxic Chat. ToxicChat is provided as an example dataset. It is recommended that an dataset chosen specifically for the application be used to evaluate fine-tuning success. ToxicChat can be used to evaluate any degradation in standard category performance caused by the fine-tuning. \n"
    ]
   },
   {

+ 2 - 2
end-to-end-use-cases/responsible_ai/prompt_guard/prompt_guard_tutorial.ipynb

The diff for this file has been suppressed because it is too large.

+ 1 - 1
end-to-end-use-cases/video_summary.ipynb

@@ -269,7 +269,7 @@
     "<new_content>\n",
     "```\n",
     "\n",
-    "**Note:** The following call will make 33 calls to Llama 3 and genereate the final summary in about 10 minutes. The complete log of the the calls with inputs and outputs is [here](https://smith.langchain.com/public/7f23d823-926f-4874-bbd7-b509328a94bf/r)."
+    "**Note:** The following call will make 33 calls to Llama 3 and generate the final summary in about 10 minutes. The complete log of the the calls with inputs and outputs is [here](https://smith.langchain.com/public/7f23d823-926f-4874-bbd7-b509328a94bf/r)."
    ]
   },
   {

+ 3 - 3
getting-started/Prompt_Engineering_with_Llama.ipynb

@@ -401,7 +401,7 @@
     "\n",
     "Adding specific examples of your desired output generally results in more accurate, consistent output. This technique is called \"few-shot prompting\".\n",
     "\n",
-    "In this example, the generated response follows our desired format that offers a more nuanced sentiment classifer that gives a positive, neutral, and negative response confidence percentage.\n",
+    "In this example, the generated response follows our desired format that offers a more nuanced sentiment classifier that gives a positive, neutral, and negative response confidence percentage.\n",
     "\n",
     "See also: [Zhao et al. (2021)](https://arxiv.org/abs/2102.09690), [Liu et al. (2021)](https://arxiv.org/abs/2101.06804), [Su et al. (2022)](https://arxiv.org/abs/2209.01975), [Rubin et al. (2022)](https://arxiv.org/abs/2112.08633).\n",
     "\n"
@@ -497,7 +497,7 @@
    "source": [
     "### Self-Consistency\n",
     "\n",
-    "LLMs are probablistic, so even with Chain-of-Thought, a single generation might produce incorrect results. Self-Consistency ([Wang et al. (2022)](https://arxiv.org/abs/2203.11171)) introduces enhanced accuracy by selecting the most frequent answer from multiple generations (at the cost of higher compute):"
+    "LLMs are probabilistic, so even with Chain-of-Thought, a single generation might produce incorrect results. Self-Consistency ([Wang et al. (2022)](https://arxiv.org/abs/2203.11171)) introduces enhanced accuracy by selecting the most frequent answer from multiple generations (at the cost of higher compute):"
    ]
   },
   {
@@ -579,7 +579,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Retrieval-Augmented Generation, or RAG, describes the practice of including information in the prompt you've retrived from an external database ([Lewis et al. (2020)](https://arxiv.org/abs/2005.11401v4)). It's an effective way to incorporate facts into your LLM application and is more affordable than fine-tuning which may be costly and negatively impact the foundational model's capabilities.\n",
+    "Retrieval-Augmented Generation, or RAG, describes the practice of including information in the prompt you've retrieved from an external database ([Lewis et al. (2020)](https://arxiv.org/abs/2005.11401v4)). It's an effective way to incorporate facts into your LLM application and is more affordable than fine-tuning which may be costly and negatively impact the foundational model's capabilities.\n",
     "\n",
     "This could be as simple as a lookup table or as sophisticated as a [vector database]([FAISS](https://github.com/facebookresearch/faiss)) containing all of your company's knowledge:"
    ]

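Self-consistency, as described in the hunk above, is a majority vote over several sampled generations. A toy sketch of the selection step (the sampler stub stands in for repeated temperature > 0 LLM calls, each reduced to its extracted final answer):

import random
from collections import Counter

def self_consistent_answer(sample_answer, n: int = 5) -> str:
    """Sample n chain-of-thought generations; return the most frequent final answer."""
    return Counter(sample_answer() for _ in range(n)).most_common(1)[0][0]

# Stub: in practice each call is a full CoT generation whose answer is parsed out.
fake_sample = lambda: random.choice(["35", "35", "35", "36"])
print(self_consistent_answer(fake_sample))
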
+ 3 - 3
getting-started/build_with_Llama_3_2.ipynb

@@ -867,7 +867,7 @@
     "    \"content\": [\n",
     "      {\n",
     "        \"type\": \"text\",\n",
-    "        \"text\": \"List all of the ingredients and their quantities that you have used in my meal plan which is not already in my basket annd create a shopping list for me!\"\n",
+    "        \"text\": \"List all of the ingredients and their quantities that you have used in my meal plan which is not already in my basket and create a shopping list for me!\"\n",
     "        \n",
     "      },\n",
     "    ]\n",
@@ -981,7 +981,7 @@
     "    \"content\": [\n",
     "      {\n",
     "        \"type\": \"text\",\n",
-    "        \"text\": \"List all of the ingredients and their quantities that you have used in my meal plan which is not already in my basket annd create a shopping list for me!\"\n",
+    "        \"text\": \"List all of the ingredients and their quantities that you have used in my meal plan which is not already in my basket and create a shopping list for me!\"\n",
     "        \n",
     "      },\n",
     "    ]\n",
@@ -1736,7 +1736,7 @@
    "source": [
     "### The brave_search built-in tool\n",
     "\n",
-    "Web search tool is needed when the answer to the user question is beyond the LLM's konwledge cutoff date, e.g. current whether info or recent events. Llama 3.2 has a konwledge cutoff date of December 2023. Similarly, we can use web search tool to validate the event venue and date for \"Albuquerque International Balloon Fiesta\":"
+    "Web search tool is needed when the answer to the user question is beyond the LLM's knowledge cutoff date, e.g. current whether info or recent events. Llama 3.2 has a knowledge cutoff date of December 2023. Similarly, we can use web search tool to validate the event venue and date for \"Albuquerque International Balloon Fiesta\":"
    ]
   },
   {