@@ -10,7 +10,7 @@
"\n",
"For continuity, we show built-in tool calling that we introduced in Llama-3.1 namely allowing you to use `brave_search` and `wolfram_alpha`. \n",
"\n",
- "However, please remember `3.3` models will work great with zero-shot tool calling which we showcase in second notebook. Infact that is the recommended path.\n",
+ "However, please remember `3.3` models will work great with zero-shot tool calling, which we showcase in the second notebook. In fact, that is the recommended path.\n",
"\n",
"Note: If you are looking for `3.2` Featherlight Model (1B and 3B) instructions, please see the respective sections in our website, this one covers 3.1 models.\n",
"\n",
@@ -62,15 +62,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "env: GROQ_API_KEY='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'\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"#!pip3 install groq\n",
"%set_env GROQ_API_KEY=''"
@@ -172,7 +164,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": null,
"metadata": {},
"outputs": [
{
@@ -189,7 +181,7 @@
],
"source": [
"user_input = \"\"\"\n",
- "When is the next elden ring game coming out?\n",
+ "When is the next Elden Ring game coming out?\n",
"\"\"\"\n",
"\n",
"print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))"
@@ -777,7 +769,7 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": null,
"metadata": {},
"outputs": [
{
@@ -797,7 +789,7 @@
"\"\"\"\n",
"\n",
"user_input = \"\"\"\n",
- "When is the next Elden ring game coming out?\n",
+ "When is the next Elden Ring game coming out?\n",
"\"\"\"\n",
"\n",
"print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))\n"
@@ -828,11 +820,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Using this knowledge in practise\n",
+ "### Using this knowledge in practice\n",
"\n",
"A common misconception about tool calling is: the model can handle the tool call and get your output. \n",
"\n",
- "This is NOT TRUE, the actual tool call is something that you have to implement. With this knowledge, let's see how we can utilise brave search to answer our original question"
+ "This is NOT TRUE: the actual tool call is something that you have to implement yourself. With this knowledge, let's see how we can utilize Brave Search to answer our original question."
]
},
{