
Fix 201 nb

Sanyam Bhutani, 6 months ago
parent
commit
df4b0243d4
1 changed file with 4 additions and 2 deletions

+ 4 - 2
recipes/quickstart/agents/Agents_101/Tool_Calling_201.ipynb

@@ -12,7 +12,9 @@
     "\n",
     "\n",
     "- Step 1: Take the user input query \n",
     "- Step 1: Take the user input query \n",
     "\n",
     "\n",
-    "- Step 2: Perform an internet search to fetch the arxiv ID(s) based on the user query\n",
+    "- Step 2: Perform an internet search using `tavily` API to fetch the arxiv ID(s) based on the user query\n",
+    "\n",
+    "Note: `3.1` models support `brave_search` but this notebook is also aimed at showcasing custom tools. \n",
     "\n",
     "\n",
     "The above is important because many-times the user-query is different from the paper name and arxiv ID-this will help us with the next step\n",
     "The above is important because many-times the user-query is different from the paper name and arxiv ID-this will help us with the next step\n",
     "\n",
     "\n",
@@ -446,7 +448,7 @@
    "source": [
    "source": [
     "#### Downloading the papers and extracting details: \n",
     "#### Downloading the papers and extracting details: \n",
     "\n",
     "\n",
-    "3.1 family LLM(s) are great enough to use raw outputs extracted from a PDF and summarise them. However, we are still bound by their (great) 128k context length-to live with this, we will extract just the first 80k words. \n",
+    "Llama 3.1 family LLM(s) are great enough to use raw outputs extracted from a PDF and summarise them. However, we are still bound by their (great) 128k context length-to live with this, we will extract just the first 80k words. \n",
     "\n",
     "\n",
     "The functions below handle the logic of downloading the PDF(s) and extracting their outputs"
     "The functions below handle the logic of downloading the PDF(s) and extracting their outputs"
    ]
    ]
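For context, the second hunk mentions truncating extracted PDF text to the first 80k words to stay within the 128k context length. A minimal sketch of that step is below; the helper name, signature, and whitespace-based word splitting are assumptions for illustration, not code taken from the notebook.

```python
def truncate_words(text: str, max_words: int = 80_000) -> str:
    """Keep only the first max_words whitespace-separated words so the
    extracted PDF text fits comfortably inside the model's context window."""
    words = text.split()
    return " ".join(words[:max_words])

# Example with a small limit for illustration
sample = "one two three four five"
print(truncate_words(sample, max_words=3))  # → one two three
```

Splitting on whitespace is a rough proxy for tokens; a real pipeline might truncate by token count using the model's tokenizer instead.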