
Tool Calling Tutorial and Example (#697)

Sanyam Bhutani 6 months ago
parent
commit
dca7ecb3f0

+ 989 - 0
recipes/quickstart/agents/Agents_Tutorial/Tool_Calling_101.ipynb

@@ -0,0 +1,989 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Tool Calling 101:\n",
+    "\n",
+    "Note: If you are looking for `3.2` Featherlight Model (1B and 3B) instructions, please see the respective notebook, this one covers 3.1 models.\n",
+    "\n",
+    "We are briefly introduction the `3.2` models at the end. \n",
+    "\n",
+    "Note: The new vision models behave same as `3.1` models when you are talking to the models without an image\n",
+    "\n",
+    "This is part (1/2) in the tool calling series, this notebook will cover the basics of what tool calling is and how to perform it with `Llama 3.1 models`\n",
+    "\n",
+    "Here's what you will learn in this notebook:\n",
+    "\n",
+    "- Setup Groq to access Llama 3.1 70B model\n",
+    "- Avoid common mistakes when performing tool-calling with Llama\n",
+    "- Understand Prompt templates for Tool Calling\n",
+    "- Understand how the tool calls are handled under the hood\n",
+    "- 3.2 Model Tool Calling Format and Behaviour\n",
+    "\n",
+    "In Part 2, we will learn how to build system that can get us comparision between 2 papers"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## What is Tool Calling?\n",
+    "\n",
+    "This approach was popularised by the [Gorilla](https://gorilla.cs.berkeley.edu) paper-which showed that Large Language Model(s) can be fine-tuned on API examples to teach them calling an external API. \n",
+    "\n",
+    "This is really cool because we can now use a LLM as a \"brain\" of a system and connect it to external systems to perform actions. \n",
+    "\n",
+    "In simpler words, \"Llama can order your pizza for you\" :) \n",
+    "\n",
+    "With the Llama 3.1 release, the models excel at tool calling and support out of box `brave_search`, `wolfram_api` and `code_interpreter`. \n",
+    "\n",
+    "However, first let's take a look at a common mistake"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Install and setup groq dependencies\n",
+    "\n",
+    "- Install `groq` api to access Llama model(s)\n",
+    "- Configure our client and authenticate with API Key(s), Note: PLEASE UPDATE YOUR KEY BELOW"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#!pip3 install groq\n",
+    "%set_env GROQ_API_KEY=''"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "from groq import Groq\n",
+    "# Create the Groq client\n",
+    "client = Groq(api_key='gsk_PDfGP611i_HAHAHAHA_THIS_IS_NOT_MY_REAL_KEY_PLEASE_REPLACE')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Common Mistake of Tool-Calling: Incorrect Prompt Template\n",
+    "\n",
+    "While Llama 3.1 works with tool-calling out of box, a wrong prompt template can cause issues with unexpected behaviour. \n",
+    "\n",
+    "Sometimes, even superheroes need to be reminded of their powers. \n",
+    "\n",
+    "Let's first try \"forcing a prompt response from the model\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Note: Remember this is the WRONG template, please scroll to next section to see the right approach if you are in a rushed copy-pasta sprint\n",
+    "\n",
+    "This section will show you that the model will not use `brave_search` and `wolfram_api` out of the box unless the prompt template is set correctly. \n",
+    "Even if the model is asked to do so!"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "SYSTEM_PROMPT = \"\"\"\n",
+    "Cutting Knowledge Date: December 2023\n",
+    "Today Date: 20 August 2024\n",
+    "\n",
+    "You are a helpful assistant\n",
+    "\"\"\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "system_prompt = {}\n",
+    "chat_history = []\n",
+    "\n",
+    "def model_chat(user_input: str, sys_prompt = SYSTEM_PROMPT, temperature: int = 0.7, max_tokens=2048):\n",
+    "    \n",
+    "    chat_history = [\n",
+    "        {\n",
+    "            \"role\": \"system\",\n",
+    "            \"content\": sys_prompt\n",
+    "        }\n",
+    "    ]\n",
+    "    \n",
+    "    chat_history.append({\"role\": \"user\", \"content\": user_input})\n",
+    "    \n",
+    "    response = client.chat.completions.create(model=\"llama-3.1-70b-versatile\",\n",
+    "                                          messages=chat_history,\n",
+    "                                          max_tokens=max_tokens,\n",
+    "                                          temperature=temperature)\n",
+    "    \n",
+    "    chat_history.append({\n",
+    "    \"role\": \"assistant\",\n",
+    "    \"content\": response.choices[0].message.content\n",
+    "    })\n",
+    "    \n",
+    "    \n",
+    "    #print(\"Assistant:\", response.choices[0].message.content)\n",
+    "    \n",
+    "    return response.choices[0].message.content"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Asking the model about a recent news\n",
+    "\n",
+    "Since the prompt template is incorrect, it will answer using cutoff memory"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 85,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: Unfortunately, I don't have information on a specific release date for the next Elden Ring game. However, I can tell you that there have been rumors and speculations about a potential sequel or DLC (Downloadable Content) for Elden Ring.\n",
+      "\n",
+      "In June 2022, the game's director, Hidetaka Miyazaki, mentioned that FromSoftware, the developer of Elden Ring, was working on \"multiple\" new projects, but no official announcements have been made since then.\n",
+      "\n",
+      "It's also worth noting that FromSoftware has a history of taking their time to develop new games, and the studio is known for its attention to detail and commitment to quality. So, even if there is a new Elden Ring game in development, it's likely that we won't see it anytime soon.\n",
+      "\n",
+      "Keep an eye on official announcements from FromSoftware and Bandai Namco, the publisher of Elden Ring, for any updates on a potential sequel or new game in the series.\n"
+     ]
+    }
+   ],
+   "source": [
+    "user_input = \"\"\"\n",
+    "When is the next elden ring game coming out?\n",
+    "\"\"\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Asking the model about a Math problem\n",
+    "\n",
+    "Again, the model answer(s) based on memory and not tool-calling"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 86,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: To find the square root of 23131231, I'll calculate it for you.\n",
+      "\n",
+      "√23131231 ≈ 4813.61\n"
+     ]
+    }
+   ],
+   "source": [
+    "user_input = \"\"\"\n",
+    "When is the square root of 23131231?\n",
+    "\"\"\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Can we solve this using a reminder prompt?"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 87,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: I can use a mathematical tool to solve the question.\n",
+      "\n",
+      "The square root of 23131231 is:\n",
+      "\n",
+      "√23131231 ≈ 4810.51\n"
+     ]
+    }
+   ],
+   "source": [
+    "user_input = \"\"\"\n",
+    "When is the square root of 23131231?\n",
+    "\n",
+    "Can you use a tool to solve the question?\n",
+    "\"\"\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Looks like we didn't get the wolfram_api call, let's try one more time with a stronger prompt:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 88,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: I can use Wolfram Alpha to calculate the square root of 23131231.\n",
+      "\n",
+      "According to Wolfram Alpha, the square root of 23131231 is:\n",
+      "\n",
+      "√23131231 ≈ 4809.07\n"
+     ]
+    }
+   ],
+   "source": [
+    "user_input = \"\"\"\n",
+    "When is the square root of 23131231?\n",
+    "\n",
+    "Can you use a tool to solve the question?\n",
+    "\n",
+    "Remember you have been trained on wolfram_alpha\n",
+    "\"\"\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Official Prompt Template \n",
+    "\n",
+    "As you can see, the model doesn't perform tool-calling in an expected fashion above. This is because we are not following the recommended prompting format.\n",
+    "\n",
+    "The Llama Stack is the go to approach to use the Llama model family and build applications. \n",
+    "\n",
+    "Let's first install the `llama_toolchain` Python package to have the Llama CLI available."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#!pip3 install llama-toolchain"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Now we can learn about the various prompt formats available \n",
+    "\n",
+    "When you run the cell below-you will see models available and then we can check details for model specific prompts"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 20,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Traceback (most recent call last):\n",
+      "  File \"/opt/miniconda3/bin/llama\", line 8, in <module>\n",
+      "    sys.exit(main())\n",
+      "             ^^^^^^\n",
+      "  File \"/opt/miniconda3/lib/python3.12/site-packages/llama_toolchain/cli/llama.py\", line 44, in main\n",
+      "    parser.run(args)\n",
+      "  File \"/opt/miniconda3/lib/python3.12/site-packages/llama_toolchain/cli/llama.py\", line 38, in run\n",
+      "    args.func(args)\n",
+      "  File \"/opt/miniconda3/lib/python3.12/site-packages/llama_toolchain/cli/model/prompt_format.py\", line 59, in _run_model_template_cmd\n",
+      "    raise argparse.ArgumentTypeError(\n",
+      "argparse.ArgumentTypeError: llama3_1 is not a valid Model. Choose one from --\n",
+      "Llama3.1-8B\n",
+      "Llama3.1-70B\n",
+      "Llama3.1-405B\n",
+      "Llama3.1-8B-Instruct\n",
+      "Llama3.1-70B-Instruct\n",
+      "Llama3.1-405B-Instruct\n",
+      "Llama3.2-1B\n",
+      "Llama3.2-3B\n",
+      "Llama3.2-1B-Instruct\n",
+      "Llama3.2-3B-Instruct\n",
+      "Llama3.2-11B-Vision\n",
+      "Llama3.2-90B-Vision\n",
+      "Llama3.2-11B-Vision-Instruct\n",
+      "Llama3.2-90B-Vision-Instruct\n"
+     ]
+    }
+   ],
+   "source": [
+    "!llama model prompt-format "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 21,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[m━━━━━━━━━━━━━━━━━━━┓\u001b[m\n",
+      "┃                                    \u001b[1mLlama 3.1 - Prompt Formats\u001b[0m                 \u001b[m\u001b[1m\u001b[0m                   ┃\u001b[m\n",
+      "┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[m━━━━━━━━━━━━━━━━━━━┛\u001b[m\n",
+      "\u001b[m\n",
+      "\u001b[m\n",
+      "                                               \u001b[1;4mTokens\u001b[0m                           \u001b[m\u001b[1;4m\u001b[0m                    \u001b[m\n",
+      "\u001b[m\n",
+      "Here is a list of special tokens that are supported by Llama 3.1:               \u001b[m                    \u001b[m\n",
+      "\u001b[m\n",
+      "\u001b[1;33m • \u001b[0m\u001b[1;36;40m<|begin_of_text|>\u001b[0m: Specifies the start of the prompt                         \u001b[m\u001b[1;33m\u001b[0m\u001b[1;36;40m\u001b[0m                    \u001b[m\n",
+      "\u001b[1;33m • \u001b[0m\u001b[1;36;40m<|end_of_text|>\u001b[0m: Model will cease to generate more tokens. This token is gene\u001b[m\u001b[1;33m\u001b[0m\u001b[1;36;40m\u001b[0mrated only by the   \u001b[m\n",
+      "\u001b[1;33m   \u001b[0mbase models.                                                                 \u001b[m\u001b[1;33m\u001b[0m                    \u001b[m\n",
+      "\u001b[1;33m • \u001b[0m\u001b[1;36;40m<|finetune_right_pad_id|>\u001b[0m: This token is used for padding text sequences to t\u001b[m\u001b[1;33m\u001b[0m\u001b[1;36;40m\u001b[0mhe same length in a \u001b[m\n",
+      "\u001b[1;33m   \u001b[0mbatch.                                                                       \u001b[m:\u001b[K"
+     ]
+    }
+   ],
+   "source": [
+    "!llama model prompt-format -m Llama3.1-8B"
+   ]
+  },
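+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For reference, below is roughly what the serving stack assembles from your chat messages when tool calling is enabled. This is only a sketch based on the prompt-format docs printed above (the date and question are placeholders); when you call Groq's chat API you supply just the message contents and the special tokens are added for you."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Sketch of the raw Llama 3.1 tool-calling prompt, for illustration only.\n",
+    "# The serving stack builds this from the chat messages; you normally never write it by hand.\n",
+    "raw_prompt_sketch = (\n",
+    "    \"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\\n\\n\"\n",
+    "    \"Environment: ipython\\n\"\n",
+    "    \"Tools: brave_search, wolfram_alpha\\n\\n\"\n",
+    "    \"Cutting Knowledge Date: December 2023\\n\"\n",
+    "    \"Today Date: 15 September 2024\\n\\n\"\n",
+    "    \"You are a helpful assistant<|eot_id|>\"\n",
+    "    \"<|start_header_id|>user<|end_header_id|>\\n\\n\"\n",
+    "    \"When is the next Elden Ring game coming out?<|eot_id|>\"\n",
+    "    \"<|start_header_id|>assistant<|end_header_id|>\\n\\n\"\n",
+    ")\n",
+    "print(raw_prompt_sketch)"
+   ]
+  },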
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Tool Calling: Using the correct Prompt Template\n",
+    "\n",
+    "With `llama-cli` we have already learned the right behaviour of the model"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If everything is setup correctly-the model should now wrap function calls  with the `|<python_tag>|` following the actualy function call. \n",
+    "\n",
+    "This can allow you to manage your function calling logic accordingly. \n",
+    "\n",
+    "Time to test the theory"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 95,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: <|python_tag|>brave_search.call(query=\"Elden Ring sequel release date\")\n"
+     ]
+    }
+   ],
+   "source": [
+    "SYSTEM_PROMPT = \"\"\"\n",
+    "Environment: iPython\n",
+    "Tools: brave_search, wolfram_alpha\n",
+    "Cutting Knowledge Date: December 2023\n",
+    "Today Date: 15 September 2024\n",
+    "\"\"\"\n",
+    "\n",
+    "user_input = \"\"\"\n",
+    "When is the next Elden ring game coming out?\n",
+    "\"\"\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 96,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: <|python_tag|>wolfram_alpha.call(query=\"square root of 23131231\")\n"
+     ]
+    }
+   ],
+   "source": [
+    "user_input = \"\"\"\n",
+    "What is the square root of 23131231?\n",
+    "\"\"\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Using this knowledge in practise\n",
+    "\n",
+    "A common misconception about tool calling is: the model can handle the tool call and get your output. \n",
+    "\n",
+    "This is NOT TRUE, the actual tool call is something that you have to implement. With this knowledge, let's see how we can utilise brave search to answer our original question"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 97,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#!pip3 install brave-search"
+   ]
+  },
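+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Below is a sketch of executing the `brave_search` call the model produced earlier. Rather than relying on a specific wrapper package, it calls the Brave Search REST endpoint directly with `requests`; the endpoint, header and response fields are taken from Brave's API docs, and `BRAVE_API_KEY` is an environment variable you would need to set yourself, so treat this as a sketch rather than a drop-in implementation."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Sketch: run the search the model asked for and collect a few results.\n",
+    "# Assumptions: Brave Search REST endpoint/headers per Brave's docs, BRAVE_API_KEY set by you.\n",
+    "import os\n",
+    "import requests\n",
+    "\n",
+    "def brave_search(query: str) -> list:\n",
+    "    resp = requests.get(\n",
+    "        \"https://api.search.brave.com/res/v1/web/search\",\n",
+    "        headers={\n",
+    "            \"Accept\": \"application/json\",\n",
+    "            \"X-Subscription-Token\": os.environ[\"BRAVE_API_KEY\"],\n",
+    "        },\n",
+    "        params={\"q\": query},\n",
+    "    )\n",
+    "    resp.raise_for_status()\n",
+    "    # Field names assumed from the Brave Search API docs\n",
+    "    results = resp.json().get(\"web\", {}).get(\"results\", [])\n",
+    "    return [{\"title\": r.get(\"title\"), \"url\": r.get(\"url\"), \"snippet\": r.get(\"description\")} for r in results]\n",
+    "\n",
+    "# The query is the one the model emitted after <|python_tag|> above\n",
+    "# brave_search(query=\"Elden Ring sequel release date\")"
+   ]
+  },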
+  {
+   "cell_type": "code",
+   "execution_count": 98,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: <|python_tag|>wolfram_alpha.call(query=\"square root of 23131231\")\n"
+     ]
+    }
+   ],
+   "source": [
+    "SYSTEM_PROMPT = \"\"\"\n",
+    "Environment: iPython\n",
+    "Tools: brave_search, wolfram_alpha\n",
+    "Cutting Knowledge Date: December 2023\n",
+    "Today Date: 15 September 2024\n",
+    "\"\"\"\n",
+    "\n",
+    "user_input = \"\"\"\n",
+    "What is the square root of 23131231?\n",
+    "\"\"\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 99,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<|python_tag|>wolfram_alpha.call(query=\"square root of 23131231\")\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(model_chat(user_input, sys_prompt=SYSTEM_PROMPT))\n",
+    "\n",
+    "output = model_chat(user_input, sys_prompt=SYSTEM_PROMPT)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 102,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Function name: wolfram_alpha\n",
+      "Method: call\n",
+      "Args: \"square root of 23131231\"\n"
+     ]
+    }
+   ],
+   "source": [
+    "import re\n",
+    "\n",
+    "# Extract the function name\n",
+    "fn_name = re.search(r'<\\|python_tag\\|>(\\w+)\\.', output).group(1)\n",
+    "\n",
+    "# Extract the method\n",
+    "fn_call_method = re.search(r'\\.(\\w+)\\(', output).group(1)\n",
+    "\n",
+    "# Extract the arguments\n",
+    "fn_call_args = re.search(r'=\\s*([^)]+)', output).group(1)\n",
+    "\n",
+    "print(f\"Function name: {fn_name}\")\n",
+    "print(f\"Method: {fn_call_method}\")\n",
+    "print(f\"Args: {fn_call_args}\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You can implement this in different ways but the idea is the same, the LLM gives an output with the `<|python_tag|>`, which should call a tool-calling mechanism. \n",
+    "\n",
+    "This logic gets handled in the program and then the output is passed back to the model to answer the user"
+   ]
+  },
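+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To make that loop concrete, here is a minimal sketch of the hand-off: we dispatch the call the model asked for (mocked here instead of hitting the real Wolfram Alpha API) and pass the result back in a follow-up message. The raw Llama 3.1 format has a dedicated `ipython` role for tool results; sending the result as plain user text is a simplification that works with the chat API used in this notebook."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Minimal sketch of the full round trip, reusing fn_name / fn_call_args from the cell above.\n",
+    "# mock_wolfram_alpha stands in for a real Wolfram Alpha API call.\n",
+    "\n",
+    "def mock_wolfram_alpha(query: str) -> str:\n",
+    "    # A real implementation would call the Wolfram Alpha API with your own app id\n",
+    "    return \"sqrt(23131231) = 4809.4938 (approx.)\"\n",
+    "\n",
+    "# 1. Execute the tool call the model asked for\n",
+    "if fn_name == \"wolfram_alpha\":\n",
+    "    tool_result = mock_wolfram_alpha(fn_call_args.strip('\"'))\n",
+    "else:\n",
+    "    tool_result = f\"Unknown tool: {fn_name}\"\n",
+    "\n",
+    "# 2. Pass the tool output back so the model can answer the user\n",
+    "followup = (\n",
+    "    \"The user asked: What is the square root of 23131231?\\n\"\n",
+    "    f\"Tool output from {fn_name}: {tool_result}\\n\"\n",
+    "    \"Answer the user using this result.\"\n",
+    ")\n",
+    "print(\"Assistant:\", model_chat(followup, sys_prompt=\"You are a helpful assistant\"))"
+   ]
+  },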
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Code interpreter\n",
+    "\n",
+    "With the correct prompt template, Llama model can output Python (as well as code in any-language that the model has been trained on)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 54,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: <|python_tag|>import math\n",
+      "\n",
+      "# Define the variables\n",
+      "monthly_investment = 400\n",
+      "interest_rate = 0.05\n",
+      "target_amount = 100000\n",
+      "\n",
+      "# Calculate the number of months it would take to reach the target amount\n",
+      "months = 0\n",
+      "current_amount = 0\n",
+      "while current_amount < target_amount:\n",
+      "    current_amount += monthly_investment\n",
+      "    current_amount *= 1 + interest_rate / 12  # Compound interest\n",
+      "    months += 1\n",
+      "\n",
+      "# Print the result\n",
+      "print(f\"It would take {months} months, approximately {months / 12:.2f} years, to reach the target amount of ${target_amount:.2f}.\")\n"
+     ]
+    }
+   ],
+   "source": [
+    "user_input = \"\"\"\n",
+    "\n",
+    "If I can invest 400$ every month at 5% interest rate, how long would it take me to make a 100k$ in investments?\n",
+    "\"\"\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=SYSTEM_PROMPT))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's validate the output by running the output from the model:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 55,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "It would take 172 months, approximately 14.33 years, to reach the target amount of $100000.00.\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Define the variables\n",
+    "monthly_investment = 400\n",
+    "interest_rate = 0.05\n",
+    "target_amount = 100000\n",
+    "\n",
+    "# Calculate the number of months it would take to reach the target amount\n",
+    "months = 0\n",
+    "current_amount = 0\n",
+    "while current_amount < target_amount:\n",
+    "    current_amount += monthly_investment\n",
+    "    current_amount *= 1 + interest_rate / 12  # Compound interest\n",
+    "    months += 1\n",
+    "\n",
+    "# Print the result\n",
+    "print(f\"It would take {months} months, approximately {months / 12:.2f} years, to reach the target amount of ${target_amount:.2f}.\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 3.2 Models Custom Tool Prompt Format"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Life is great because Llama Team writes great docs for us, so we can conviently copy-pasta examples from there :)\n",
+    "\n",
+    "[Here](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2#-tool-calling-(1b/3b)-) are the docs for your reference that we will be using. \n",
+    "\n",
+    "Excercise for viewer: Use `llama-toolchain` again to verify like we did earlier and then start the prompt engineering for the small Llamas."
+   ]
+  },
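+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a head start on that exercise, the same CLI command works for the small models; the model name below comes from the list the CLI printed earlier."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!llama model prompt-format -m Llama3.2-3B-Instruct"
+   ]
+  },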
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "function_definitions = \"\"\"[\n",
+    "    {\n",
+    "        \"name\": \"get_user_info\",\n",
+    "        \"description\": \"Retrieve details for a specific user by their unique identifier. Note that the provided function is in Python 3 syntax.\",\n",
+    "        \"parameters\": {\n",
+    "            \"type\": \"dict\",\n",
+    "            \"required\": [\n",
+    "                \"user_id\"\n",
+    "            ],\n",
+    "            \"properties\": {\n",
+    "                \"user_id\": {\n",
+    "                \"type\": \"integer\",\n",
+    "                \"description\": \"The unique identifier of the user. It is used to fetch the specific user details from the database.\"\n",
+    "            },\n",
+    "            \"special\": {\n",
+    "                \"type\": \"string\",\n",
+    "                \"description\": \"Any special information or parameters that need to be considered while fetching user details.\",\n",
+    "                \"default\": \"none\"\n",
+    "                }\n",
+    "            }\n",
+    "        }\n",
+    "    }\n",
+    "]\n",
+    "\"\"\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "system_prompt = \"\"\"You are an expert in composing functions. You are given a question and a set of possible functions. \n",
+    "Based on the question, you will need to make one or more function/tool calls to achieve the purpose. \n",
+    "If none of the function can be used, point it out. If the given question lacks the parameters required by the function,\n",
+    "also point it out. You should only return the function call in tools call sections.\n",
+    "\n",
+    "If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]\\n\n",
+    "You SHOULD NOT include any other text in the response.\n",
+    "\n",
+    "Here is a list of functions in JSON format that you can invoke.\\n\\n{functions}\\n\"\"\".format(functions=function_definitions)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "chat_history = []\n",
+    "\n",
+    "def model_chat(user_input: str, sys_prompt = system_prompt, temperature: int = 0.7, max_tokens=2048):\n",
+    "    \n",
+    "    chat_history = [\n",
+    "        {\n",
+    "            \"role\": \"system\",\n",
+    "            \"content\": system_prompt\n",
+    "        }\n",
+    "    ]\n",
+    "    \n",
+    "    chat_history.append({\"role\": \"user\", \"content\": user_input})\n",
+    "    \n",
+    "    response = client.chat.completions.create(model=\"llama-3.2-3b-preview\",\n",
+    "                                          messages=chat_history,\n",
+    "                                          max_tokens=max_tokens,\n",
+    "                                          temperature=temperature)\n",
+    "    \n",
+    "    chat_history.append({\n",
+    "    \"role\": \"assistant\",\n",
+    "    \"content\": response.choices[0].message.content\n",
+    "    })\n",
+    "    \n",
+    "    \n",
+    "    #print(\"Assistant:\", response.choices[0].message.content)\n",
+    "    \n",
+    "    return response.choices[0].message.content"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note: We are assuming a structure for dataset here:\n",
+    "\n",
+    "- Name\n",
+    "- Email\n",
+    "- Age \n",
+    "- Color request"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: [get_user_info(user_id=7890, special='black')]\n"
+     ]
+    }
+   ],
+   "source": [
+    "user_input = \"Can you retrieve the details for the user with the ID 7890, who has black as their special request?\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=system_prompt))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Dummy dataset to make sure our model stays happy :) "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def get_user_info(user_id: int, special: str = \"none\") -> dict:\n",
+    "    # This is a mock database of users\n",
+    "    user_database = {\n",
+    "        7890: {\"name\": \"Emma Davis\", \"email\": \"emma@example.com\", \"age\": 31},\n",
+    "        1234: {\"name\": \"Liam Wilson\", \"email\": \"liam@example.com\", \"age\": 28},\n",
+    "        2345: {\"name\": \"Olivia Chen\", \"email\": \"olivia@example.com\", \"age\": 35},\n",
+    "        3456: {\"name\": \"Noah Taylor\", \"email\": \"noah@example.com\", \"age\": 42},\n",
+    "        4567: {\"name\": \"Ava Martinez\", \"email\": \"ava@example.com\", \"age\": 39},\n",
+    "        5678: {\"name\": \"Ethan Brown\", \"email\": \"ethan@example.com\", \"age\": 45},\n",
+    "        6789: {\"name\": \"Sophia Kim\", \"email\": \"sophia@example.com\", \"age\": 33},\n",
+    "        8901: {\"name\": \"Mason Lee\", \"email\": \"mason@example.com\", \"age\": 29},\n",
+    "        9012: {\"name\": \"Isabella Garcia\", \"email\": \"isabella@example.com\", \"age\": 37},\n",
+    "        1357: {\"name\": \"James Johnson\", \"email\": \"james@example.com\", \"age\": 41}\n",
+    "    }\n",
+    "    \n",
+    "    # Check if the user exists in our mock database\n",
+    "    if user_id in user_database:\n",
+    "        user_data = user_database[user_id]\n",
+    "        \n",
+    "        # Handle the 'special' parameter\n",
+    "        if special != \"none\":\n",
+    "            user_data[\"special_info\"] = f\"Special request: {special}\"\n",
+    "        \n",
+    "        return user_data\n",
+    "    else:\n",
+    "        return {\"error\": \"User not found\"}"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "[{'name': 'Emma Davis',\n",
+       "  'email': 'emma@example.com',\n",
+       "  'age': 31,\n",
+       "  'special_info': 'Special request: black'}]"
+      ]
+     },
+     "execution_count": 8,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "[get_user_info(user_id=7890, special='black')]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Handling Tool-Calling logic for the model"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Hello Regex, my good old friend :) \n",
+    "\n",
+    "With Regex, we can write a simple way to handle tool_calling and return either the model or tool call response"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import re\n",
+    "import json\n",
+    "\n",
+    "# Assuming you have defined get_user_info function and SYSTEM_PROMPT\n",
+    "\n",
+    "chat_history = []\n",
+    "\n",
+    "def process_response(response):\n",
+    "    function_call_pattern = r'\\[(.*?)\\((.*?)\\)\\]'\n",
+    "    function_calls = re.findall(function_call_pattern, response)\n",
+    "    \n",
+    "    if function_calls:\n",
+    "        processed_response = []\n",
+    "        for func_name, args_str in function_calls:\n",
+    "            args_dict = {}\n",
+    "            for arg in args_str.split(','):\n",
+    "                key, value = arg.split('=')\n",
+    "                key = key.strip()\n",
+    "                value = value.strip().strip(\"'\")\n",
+    "                if value.isdigit():\n",
+    "                    value = int(value)\n",
+    "                args_dict[key] = value\n",
+    "            \n",
+    "            if func_name == 'get_user_info':\n",
+    "                result = get_user_info(**args_dict)\n",
+    "                processed_response.append(f\"Function call result: {json.dumps(result, indent=2)}\")\n",
+    "            else:\n",
+    "                processed_response.append(f\"Unknown function: {func_name}\")\n",
+    "        return \"\\n\".join(processed_response)\n",
+    "    else:\n",
+    "        return response\n",
+    "\n",
+    "def model_chat(user_input: str, sys_prompt=system_prompt, temperature: float = 0.7, max_tokens: int = 2048):\n",
+    "    global chat_history\n",
+    "    \n",
+    "    if not chat_history:\n",
+    "        chat_history = [\n",
+    "            {\n",
+    "                \"role\": \"system\",\n",
+    "                \"content\": sys_prompt\n",
+    "            }\n",
+    "        ]\n",
+    "    \n",
+    "    chat_history.append({\"role\": \"user\", \"content\": user_input})\n",
+    "    \n",
+    "    response = client.chat.completions.create(\n",
+    "        model=\"llama-3.2-3b-preview\",\n",
+    "        messages=chat_history,\n",
+    "        max_tokens=max_tokens,\n",
+    "        temperature=temperature\n",
+    "    )\n",
+    "    \n",
+    "    assistant_response = response.choices[0].message.content\n",
+    "    processed_response = process_response(assistant_response)\n",
+    "    \n",
+    "    chat_history.append({\n",
+    "        \"role\": \"assistant\",\n",
+    "        \"content\": assistant_response\n",
+    "    })\n",
+    "    \n",
+    "    return processed_response"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Assistant: Function call result: {\n",
+      "  \"name\": \"Emma Davis\",\n",
+      "  \"email\": \"emma@example.com\",\n",
+      "  \"age\": 31,\n",
+      "  \"special_info\": \"Special request: black\"\n",
+      "}\n"
+     ]
+    }
+   ],
+   "source": [
+    "user_input = \"Can you retrieve the details for the user with the ID 7890, who has black as their special request?\"\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(user_input, sys_prompt=system_prompt))"
+   ]
+  },
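+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In a full agent loop you would now hand that function result back to the model so it can phrase the answer for the user. The sketch below resets the history and switches to a plain assistant prompt (so the model answers in prose instead of emitting another function call); the follow-up message format is our own convention, not part of the 3.2 template."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Sketch: pass the function result back so the model can answer in natural language.\n",
+    "chat_history = []  # start a fresh conversation without the function-composing system prompt\n",
+    "\n",
+    "tool_result = get_user_info(user_id=7890, special='black')\n",
+    "\n",
+    "followup = (\n",
+    "    \"The user asked: Can you retrieve the details for the user with the ID 7890, \"\n",
+    "    \"who has black as their special request?\\n\"\n",
+    "    f\"Result of get_user_info: {tool_result}\\n\"\n",
+    "    \"Please answer the user in one or two sentences.\"\n",
+    ")\n",
+    "\n",
+    "print(\"Assistant:\", model_chat(followup, sys_prompt=\"You are a helpful assistant\"))"
+   ]
+  },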
+  {
+   "cell_type": "code",
+   "execution_count": 56,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#fin"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.12.5"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

File diff suppressed because it is too large
+ 776 - 0
recipes/quickstart/agents/Agents_Tutorial/Tool_Calling_201.ipynb


recipes/quickstart/agents/dlai/AI_Agentic_Design_Patterns_with_AutoGen_L4_Tool_Use_and_Conversational_Chess.ipynb → recipes/quickstart/agents/DeepLearningai_Course_Notebooks/AI_Agentic_Design_Patterns_with_AutoGen_L4_Tool_Use_and_Conversational_Chess.ipynb


recipes/quickstart/agents/dlai/AI_Agents_in_LangGraph_L1_Build_an_Agent_from_Scratch.ipynb → recipes/quickstart/agents/DeepLearningai_Course_Notebooks/AI_Agents_in_LangGraph_L1_Build_an_Agent_from_Scratch.ipynb


recipes/quickstart/agents/dlai/Building_Agentic_RAG_with_Llamaindex_L1_Router_Engine.ipynb → recipes/quickstart/agents/DeepLearningai_Course_Notebooks/Building_Agentic_RAG_with_Llamaindex_L1_Router_Engine.ipynb


recipes/quickstart/agents/dlai/Functions_Tools_and_Agents_with_LangChain_L1_Function_Calling.ipynb → recipes/quickstart/agents/DeepLearningai_Course_Notebooks/Functions_Tools_and_Agents_with_LangChain_L1_Function_Calling.ipynb


recipes/quickstart/agents/dlai/README.md → recipes/quickstart/agents/DeepLearningai_Course_Notebooks/README.md


+ 6 - 0
recipes/quickstart/agents/README.md

@@ -0,0 +1,6 @@
+## Agents and Tool Calling
+
+Structure:
+
+- Agents_Tutorial: 101 and 201 notebooks showing how to use tool calling with Llama models
+- DeepLearningai_Course_Notebooks: Notebooks from the DeepLearning.AI courses on agents