@@ -12,7 +12,9 @@
 "\n",
 "- Step 1: Take the user input query \n",
 "\n",
-"- Step 2: Perform an internet search to fetch the arxiv ID(s) based on the user query\n",
+"- Step 2: Perform an internet search using the `tavily` API to fetch the arxiv ID(s) based on the user query\n",
+"\n",
+"Note: Llama `3.1` models support `brave_search`, but this notebook is also aimed at showcasing custom tools.\n",
 "\n",
 "The above is important because many times the user query differs from the paper name and arxiv ID; this will help us with the next step.\n",
 "\n",
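For readers following the change to Step 2, here is a minimal sketch of what the search could look like, assuming the `tavily-python` client and a `TAVILY_API_KEY` environment variable; `fetch_arxiv_ids` and the ID-extraction regex are illustrative, not taken from the notebook:

```python
# Minimal sketch, assuming the `tavily-python` package; the helper name and
# regex are illustrative, not from the notebook itself.
import os
import re

from tavily import TavilyClient

# Matches modern arxiv IDs in URLs like arxiv.org/abs/2307.09288
ARXIV_ID_PATTERN = re.compile(r"arxiv\.org/(?:abs|pdf)/(\d{4}\.\d{4,5})")


def fetch_arxiv_ids(user_query: str, max_results: int = 5) -> list[str]:
    """Search the web for the query and pull arxiv IDs out of result URLs."""
    client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
    response = client.search(user_query, max_results=max_results)
    ids = []
    for result in response.get("results", []):
        match = ARXIV_ID_PATTERN.search(result.get("url", ""))
        if match:
            ids.append(match.group(1))
    # Deduplicate while preserving search-ranking order
    return list(dict.fromkeys(ids))
```

Searching for the ID (rather than trusting the user's wording) is what makes the later download step robust when the query doesn't match the paper title.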
@@ -446,7 +448,7 @@
 "source": [
 "#### Downloading the papers and extracting details: \n",
 "\n",
-"3.1 family LLM(s) are great enough to use raw outputs extracted from a PDF and summarise them. However, we are still bound by their (great) 128k context length-to live with this, we will extract just the first 80k words. \n",
+"Llama 3.1 family LLMs are capable of summarising raw text extracted from a PDF. However, we are still bound by their (generous) 128k context length; to stay within it, we will extract just the first 80k words.\n",
 "\n",
 "The functions below handle the logic of downloading the PDF(s) and extracting their text"
]
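As a companion to the download-and-truncate step this hunk describes, here is a sketch under stated assumptions: it uses `requests` and `pypdf` (the notebook's actual helpers may differ), and the 80k-word cap mirrors the context-length workaround in the prose:

```python
# Minimal sketch, assuming `requests` and `pypdf`; function names are
# illustrative, not from the notebook itself.
import io

import requests
from pypdf import PdfReader


def download_pdf(arxiv_id: str) -> bytes:
    """Fetch the PDF for an arxiv ID, e.g. '2307.09288'."""
    url = f"https://arxiv.org/pdf/{arxiv_id}"
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    return resp.content


def extract_text(pdf_bytes: bytes, max_words: int = 80_000) -> str:
    """Concatenate page text, truncated to the first `max_words` words."""
    reader = PdfReader(io.BytesIO(pdf_bytes))
    words: list[str] = []
    for page in reader.pages:
        # extract_text() can return None on image-only pages
        words.extend((page.extract_text() or "").split())
        if len(words) >= max_words:
            break
    return " ".join(words[:max_words])
```

Truncating by word count rather than tokens is a rough proxy; 80k words comfortably undershoots a 128k-token window for typical English text.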