
Update azure_api_example.ipynb

Change context to Llama 3
Chester Hu, 11 months ago
Commit 3d2c739aee
1 changed file with 21 additions and 99 deletions
      recipes/llama_api_providers/Azure_API_example/azure_api_example.ipynb

+ 21 - 99
recipes/llama_api_providers/Azure_API_example/azure_api_example.ipynb

@@ -4,13 +4,14 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Use Azure API with Llama 2\n",
+    "# Use Azure API with Llama 3\n",
     "\n",
-    "This notebook shows examples of how to use Llama 2 APIs offered by Microsoft Azure. We will cover:  \n",
-    "* HTTP requests API usage for Llama 2 pretrained and chat models in CLI\n",
-    "* HTTP requests API usage for Llama 2 pretrained and chat models in Python\n",
+    "This notebook shows examples of how to use Llama 3 APIs offered by Microsoft Azure. We will cover:  \n",
+    "* HTTP requests API usage for Llama 3 instruct models in CLI\n",
+    "* HTTP requests API usage for Llama 3 instruct models in Python\n",
     "* Plug the APIs into LangChain\n",
     "* Wire the model with Gradio to build a simple chatbot with memory\n",
+    "\n",
     "\n"
    ]
   },
@@ -20,15 +21,13 @@
    "source": [
     "## Prerequisite\n",
     "\n",
-    "Before we start building with Azure Llama 2 APIs, there are certain steps we need to take to deploy the models:\n",
+    "Before we start building with Azure Llama 3 APIs, there are certain steps we need to take to deploy the models:\n",
     "\n",
     "* Register for a valid Azure account with subscription [here](https://azure.microsoft.com/en-us/free/search/?ef_id=_k_CjwKCAiA-P-rBhBEEiwAQEXhH5OHAJLhzzcNsuxwpa5c9EJFcuAjeh6EvZw4afirjbWXXWkiZXmU2hoC5GoQAvD_BwE_k_&OCID=AIDcmm5edswduu_SEM__k_CjwKCAiA-P-rBhBEEiwAQEXhH5OHAJLhzzcNsuxwpa5c9EJFcuAjeh6EvZw4afirjbWXXWkiZXmU2hoC5GoQAvD_BwE_k_&gad_source=1&gclid=CjwKCAiA-P-rBhBEEiwAQEXhH5OHAJLhzzcNsuxwpa5c9EJFcuAjeh6EvZw4afirjbWXXWkiZXmU2hoC5GoQAvD_BwE)\n",
    "* Take a quick look at what [Azure AI Studio](https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio?tabs=home) is and navigate to the website from the link in the article\n",
     "* Follow the demos in the article to create a project and [resource](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal) group, or you can also follow the guide [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-llama?tabs=azure-studio)\n",
-    "* Select Llama models from Model catalog\n",
-    "* Deploy with \"Pay-as-you-go\"\n",
-    "\n",
-    "Once deployed successfully, you should be assigned for an API endpoint and a security key for inference.  \n",
+    "* For Llama 3 instruct models in the Model catalog, click Deploy on the model page and select \"Pay-as-you-go\". Once deployed successfully, you will be assigned an API endpoint and a security key for inference.\n",
+    "* For Llama 3 pretrained models, Azure currently only supports manual deployment under a regular subscription. We are working with them to bring \"Pay-as-you-go\" to pretrained models.\n",
     "\n",
     "For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-llama?tabs=azure-studio) for model deployment and inference."
    ]
@@ -41,10 +40,12 @@
     "\n",
     "### Basics\n",
     "\n",
+    "The usage and schema of the API are identical to Llama 3 API hosted on Azure.\n",
+    "\n",
    "For using the REST API, you will need to have an endpoint URL and authentication key associated with that endpoint.  \n",
     "This can be acquired from previous steps.  \n",
     "\n",
-    "In this text completion example for pre-trained model, we use a simple curl call for illustration. There are three major components:  \n",
+    "In this chat completion example for the instruct model, we use a simple curl call for illustration. There are three major components:  \n",
     "\n",
     "* The `host-url` is your endpoint url with completion schema. \n",
     "* The `headers` defines the content type as well as your api key. \n",
@@ -52,20 +53,9 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "!curl -X POST -L https://your-endpoint.inference.ai.azure.com/v1/completions -H 'Content-Type: application/json' -H 'Authorization: your-auth-key' -d '{\"prompt\": \"Math is a\", \"max_tokens\": 30, \"temperature\": 0.7}' "
-   ]
-  },
-  {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "For chat completion, the API schema and request payload are slightly different.\n",
-    "\n",
     "The `host-url` needs to be `/v1/chat/completions` and the request payload to include roles in conversations. Here is a sample payload:  \n",
     "\n",
     "```\n",
@@ -100,18 +90,6 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "If you compare the generation result for both text and chat completion API calls, you will notice that:  \n",
-    "\n",
-    "* Text completion returns a list of `choices` for the input prompt, each contains generated text and completion information such as `logprobs`.\n",
-    "* Chat completion returns a list of `choices` each with a `message` object with completion result, matching the `messages` object in the request.  \n",
-    "\n",
-    "\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
     "### Streaming\n",
     "\n",
     "One fantastic feature the API offers is the streaming capability.  \n",
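The rest of the streaming discussion sits outside this hunk. As a hedged, stdlib-only sketch of how such a stream could be consumed on the client side — the `data:`-prefixed SSE framing and the `choices[0].delta.content` chunk shape below are assumptions modeled on OpenAI-style streams, not confirmed details of the Azure endpoint:

```python
import json

def parse_sse_chunks(raw: str):
    """Extract content tokens from an OpenAI-style SSE stream body.

    Assumes each event line looks like 'data: {json}' and the stream
    ends with 'data: [DONE]' -- the exact wire format of the Azure
    endpoint may differ.
    """
    tokens = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        # Each chunk carries an incremental 'delta' for the first choice
        delta = event["choices"][0].get("delta", {})
        if "content" in delta:
            tokens.append(delta["content"])
    return tokens

# Hardcoded sample stream, for illustration only
sample = (
    'data: {"choices": [{"delta": {"content": "Hello"}}]}\n'
    'data: {"choices": [{"delta": {"content": " world"}}]}\n'
    'data: [DONE]\n'
)
print("".join(parse_sse_chunks(sample)))  # -> Hello world
```

In a real client the same parsing would be applied to the chunked HTTP response body rather than a literal string.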
@@ -147,7 +125,7 @@
    "source": [
     "### Content Safety Filtering\n",
     "\n",
-    "All Azure Llama 2 API endpoints have content safety feature turned on. Both input prompt and output tokens are filtered by this service automatically.  \n",
+    "All Azure Llama 3 API endpoints have the content safety feature turned on. Both input prompt and output tokens are filtered by this service automatically.  \n",
     "To know more about the impact to the request/response payload, please refer to official guide [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter?tabs=python).   \n",
     "\n",
     "For model input and output, if the filter detects there is harmful content, the generation will error out with a response payload containing the reasoning, along with information on the type of content violation and its severity. \n",
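Since a filtered generation surfaces as an error payload, a client will want to extract the reasoning from it. A minimal sketch, assuming a common Azure-style `{"error": {"code": ..., "message": ...}}` envelope — the exact schema is an assumption and should be checked against the official content-filter documentation:

```python
import json

def summarize_content_filter_error(body: bytes) -> str:
    """Summarize a content-safety error payload.

    The field names ('error', 'code', 'message') follow common Azure
    error envelopes; the exact schema is an assumption.
    """
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return "non-JSON error body"
    err = payload.get("error", {})
    return f"{err.get('code', 'unknown')}: {err.get('message', '')}"

# Illustrative payload only -- not captured from a real endpoint
sample = json.dumps({
    "error": {
        "code": "content_filter",
        "message": "The response was filtered (category: violence, severity: medium)",
    }
}).encode()
print(summarize_content_filter_error(sample))
```

This slots naturally into the `except urllib.error.HTTPError` branch of the Python examples below, applied to `error.read()`.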
@@ -172,7 +150,7 @@
     "\n",
    "Besides calling the API directly from command line tools, you can also programmatically call them in Python.  \n",
     "\n",
-    "Here is an example for the text completion model:\n",
+    "Here is an example for the instruct model:\n",
     "\n",
     "\n"
    ]
@@ -187,53 +165,6 @@
     "import json\n",
     "\n",
     "#Configure payload data sending to API endpoint\n",
-    "data = {\"prompt\": \"Math is a\", \n",
-    "         \"max_tokens\": 30, \n",
-    "         \"temperature\": 0.7,\n",
-    "         \"top_p\": 0.9,      \n",
-    "}\n",
-    "\n",
-    "body = str.encode(json.dumps(data))\n",
-    "\n",
-    "#Replace the url with your API endpoint\n",
-    "url = 'https://your-endpoint.inference.ai.azure.com/v1/completions'\n",
-    "\n",
-    "#Replace this with the key for the endpoint\n",
-    "api_key = 'your-auth-key'\n",
-    "if not api_key:\n",
-    "    raise Exception(\"API Key is missing\")\n",
-    "\n",
-    "headers = {'Content-Type':'application/json', 'Authorization':(api_key)}\n",
-    "req = urllib.request.Request(url, body, headers)\n",
-    "\n",
-    "try:\n",
-    "    response = urllib.request.urlopen(req)\n",
-    "    result = response.read()\n",
-    "    print(result)\n",
-    "except urllib.error.HTTPError as error:\n",
-    "    print(\"The request failed with status code: \" + str(error.code))\n",
-    "    # Print the headers - they include the requert ID and the timestamp, which are useful for debugging the failure\n",
-    "    print(error.info())\n",
-    "    print(error.read().decode(\"utf8\", 'ignore'))\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Chat completion in Python is very similar, here is a quick example:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "import urllib.request\n",
-    "import json\n",
-    "\n",
-    "#Configure payload data sending to API endpoint\n",
     "data = {\"messages\":[\n",
     "            {\"role\":\"system\", \"content\":\"You are a helpful assistant.\"},\n",
     "            {\"role\":\"user\", \"content\":\"Who wrote the book Innovators dilemma?\"}], \n",
@@ -323,14 +254,12 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Use Llama 2 API with LangChain\n",
+    "## Use Llama 3 API with LangChain\n",
     "\n",
-    "In this section, we will demonstrate how to use Llama 2 APIs with LangChain, one of the most popular framework to accelerate building your AI product.  \n",
+    "In this section, we will demonstrate how to use Llama 3 APIs with LangChain, one of the most popular frameworks for building your AI product.  \n",
     "One common solution here is to create your customized LLM instance, so you can add it to various chains to complete different tasks.  \n",
     "In this example, we will use the `AzureMLOnlineEndpoint` class LangChain provides to build a customized LLM instance. This particular class is designed to take in Azure endpoint and API keys as inputs and wire it with HTTP calls. So the underlying of it is very similar to how we used `urllib.request` library to send RESTful calls in previous examples to the Azure Endpoint.   \n",
     "\n",
-    "Note Azure is working on a standard solution for LangChain integration in this [PR](https://github.com/langchain-ai/langchain/pull/14560), you should consider migrating to that in the future. \n",
-    "\n",
     "First, let's install dependencies: \n",
     "\n"
    ]
@@ -363,7 +292,7 @@
     "\n",
     "\n",
     "class AzureLlamaAPIContentFormatter(ContentFormatterBase):\n",
-    "#Content formatter for Llama 2 API for Azure MaaS\n",
+    "#Content formatter for Llama 3 API for Azure MaaS\n",
     "\n",
     "    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n",
     "        #Formats the request according to the chosen api\n",
@@ -450,18 +379,11 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "At the time of writing this sample notebook, LangChain doesn't support streaming with `AzureMLOnlineEndpoint` for Llama 2. We are working with LangChain and Azure team to implement that."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Build a chatbot with Llama 2 API\n",
+    "## Build a chatbot with Llama 3 API\n",
     "\n",
-    "In this section, we will build a simple chatbot using Azure Llama 2 API, LangChain and [Gradio](https://www.gradio.app/)'s `ChatInterface` with memory capability.\n",
+    "In this section, we will build a simple chatbot using Azure Llama 3 API, LangChain and [Gradio](https://www.gradio.app/)'s `ChatInterface` with memory capability.\n",
     "\n",
-    "Gradio is a framework to help demo your machine learning model with a web interface. We also have a dedicated Gradio chatbot [example](https://github.com/meta-llama/llama-recipes/blob/main/recipes/use_cases/chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb) built with Llama 2 on-premises with RAG.   \n",
+    "Gradio is a framework to help demo your machine learning model with a web interface. We also have a dedicated Gradio chatbot [example](https://github.com/meta-llama/llama-recipes/blob/main/recipes/use_cases/chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb) built with Llama 3 on-premises with RAG.   \n",
     "\n",
     "First, let's install Gradio dependencies.\n"
    ]
@@ -508,7 +430,7 @@
     "langchain.debug=True\n",
     "\n",
     "class AzureLlamaAPIContentFormatter(ContentFormatterBase):\n",
-    "#Content formatter for Llama 2 API for Azure MaaS\n",
+    "#Content formatter for Llama 3 API for Azure MaaS\n",
     "\n",
     "    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n",
     "        #Formats the request according to the chosen api\n",
@@ -602,7 +524,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.10"
+   "version": "3.9.6"
   }
  },
  "nbformat": 4,