|
@@ -0,0 +1,672 @@
|
|
|
+{
|
|
|
+ "cells": [
|
|
|
+ {
|
|
|
+ "cell_type": "markdown",
|
|
|
+ "id": "9035a13b",
|
|
|
+ "metadata": {},
|
|
|
+ "source": [
|
|
|
+ "# Ensure the required libraries are installed i.e.\n",
|
|
|
+ "!pip install sentence-transformers qdrant-client requests IPython"
|
|
|
+ ]
|
|
|
+ },
|
|
|
+ {
|
|
|
+ "cell_type": "markdown",
|
|
|
+ "id": "2e2847fa",
|
|
|
+ "metadata": {},
|
|
|
+ "source": [
|
|
|
+ "# Step 1: Import necessary modules"
|
|
|
+ ]
|
|
|
+ },
|
|
|
+ {
|
|
|
+ "cell_type": "code",
|
|
|
+ "execution_count": 60,
|
|
|
+ "id": "0930f7de",
|
|
|
+ "metadata": {},
|
|
|
+ "outputs": [],
|
|
|
+ "source": [
|
|
|
+ "import os\n",
|
|
|
+ "import uuid\n",
|
|
|
+ "import re\n",
|
|
|
+ "from pathlib import Path\n",
|
|
|
+ "from sentence_transformers import SentenceTransformer, CrossEncoder\n",
|
|
|
+ "from qdrant_client import QdrantClient, models\n",
|
|
|
+ "from qdrant_client.models import SearchRequest\n",
|
|
|
+ "import requests\n",
|
|
|
+ "from IPython.display import Markdown, display\n",
|
|
|
+ "import json\n",
|
|
|
+ "\n"
|
|
|
+ ]
|
|
|
+ },
|
|
|
+ {
|
|
|
+ "cell_type": "markdown",
|
|
|
+ "id": "02b715a9",
|
|
|
+ "metadata": {},
|
|
|
+ "source": [
|
|
|
+ "## Step 2: Define Configuration and Global Variables\n",
|
|
|
+ "\n",
|
|
|
+ "To use this example, follow these steps to configure your environment:\n",
|
|
|
+ "\n",
|
|
|
+ "1. **Set up an account with Llama**: You can use the LLAMA API key with a model like `Llama-4-Maverick-17B-128E-Instruct-FP8`. However, you're not limited to this; you can choose any other inference provider's endpoint and respective LLAMA models that suit your needs.\n",
|
|
|
+ "2. **Choose a Llama model or alternative**: Select a suitable Llama model for inference, such as `Llama-4-Maverick-17B-128E-Instruct-FP8`, or explore other available LLAMA models from your chosen inference provider.\n",
|
|
|
+ "3. **Create a Qdrant account**: Sign up for a Qdrant account and generate an access token.\n",
|
|
|
+ "4. **Set up a Qdrant collection**: Use the provided script (`setup_qdrant_collection.py`) to create and populate a Qdrant collection.\n",
|
|
|
+ "\n",
|
|
|
+ "For more information on setting up a Qdrant collection, refer to the `setup_qdrant_collection.py` script. This script demonstrates how to process files, split them into chunks, and store them in a Qdrant collection.\n",
|
|
|
+ "\n",
|
|
|
+ "Once you've completed these steps, you can define your configuration variables as follows:"
|
|
|
+ ]
|
|
|
+ },
|
|
|
+ {
|
|
|
+ "cell_type": "code",
|
|
|
+ "execution_count": 65,
|
|
|
+ "id": "a9030dae",
|
|
|
+ "metadata": {},
|
|
|
+ "outputs": [],
|
|
|
+ "source": [
|
|
|
+ "LLAMA_API_KEY = os.getenv(\"LLAMA_API_KEY\") \n",
|
|
|
+ "if not LLAMA_API_KEY:\n",
|
|
|
+ " raise ValueError(\"LLAMA_API_KEY not found. Please set it as an environment variable.\")\n",
|
|
|
+ "API_URL = \"https://api.llama.com/v1/chat/completions\" # Replace with your chosen inference provider's API URL\n",
|
|
|
+ "HEADERS = {\n",
|
|
|
+ " \"Content-Type\": \"application/json\",\n",
|
|
|
+ " \"Authorization\": f\"Bearer {LLAMA_API_KEY}\"\n",
|
|
|
+ "}\n",
|
|
|
+ "LLAMA_MODEL = \"Llama-4-Maverick-17B-128E-Instruct-FP8\" # Choose a suitable Llama model or replace with your preferred model\n",
|
|
|
+ "# Qdrant Configuration\n",
|
|
|
+ "QDRANT_URL = \"add your existing qdrant URL\" # Replace with your Qdrant instance URL\n",
|
|
|
+ "QDRANT_API_KEY = os.getenv(\"QDRANT_API_KEY\") # Load from environment variable\n",
|
|
|
+ "if not QDRANT_API_KEY:\n",
|
|
|
+ " raise ValueError(\"QDRANT_API_KEY not found. Please set it as an environment variable.\")\n",
|
|
|
+ "# The Qdrant collection to be queried. This should already exist.\n",
|
|
|
+ "MAIN_COLLECTION_NAME = \"readme_blogs_latest\""
|
|
|
+ ]
|
|
|
+ },
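+  {
+   "cell_type": "markdown",
+   "id": "3c7d9e2a",
+   "metadata": {},
+   "source": [
+    "As a reference, here is a minimal sketch of what a script like `setup_qdrant_collection.py` can do. It is an illustration, not the script itself: the fixed-size chunking, the markdown-only file glob, and the collection settings are assumptions.\n",
+    "\n",
+    "```python\n",
+    "import uuid\n",
+    "from pathlib import Path\n",
+    "from qdrant_client import QdrantClient, models\n",
+    "from sentence_transformers import SentenceTransformer\n",
+    "\n",
+    "def setup_collection(docs_dir: str, collection_name: str) -> None:\n",
+    "    client = QdrantClient(url=QDRANT_URL, api_key=QDRANT_API_KEY)\n",
+    "    model = SentenceTransformer('all-MiniLM-L6-v2')  # produces 384-dim embeddings\n",
+    "    # Vector size must match the embedding model's output dimension\n",
+    "    client.create_collection(\n",
+    "        collection_name=collection_name,\n",
+    "        vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),\n",
+    "    )\n",
+    "    points = []\n",
+    "    for path in Path(docs_dir).glob('**/*.md'):\n",
+    "        text = path.read_text(encoding='utf-8')\n",
+    "        # Naive fixed-size chunking; a real script may split on headings or sentences\n",
+    "        chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]\n",
+    "        for chunk in chunks:\n",
+    "            points.append(models.PointStruct(\n",
+    "                id=str(uuid.uuid4()),\n",
+    "                vector=model.encode(chunk).tolist(),\n",
+    "                payload={'text': chunk},\n",
+    "            ))\n",
+    "    client.upsert(collection_name=collection_name, points=points)\n",
+    "```"
+   ]
+  },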
|
|
|
+ {
|
|
|
+ "cell_type": "markdown",
|
|
|
+ "id": "6f13075f",
|
|
|
+ "metadata": {},
|
|
|
+ "source": [
|
|
|
+ "## Step 3: Define Helper Functions\n",
|
|
|
+ "\n",
|
|
|
+ "In this step, we'll define several helper functions that are used throughout the blog generation process. These functions include:\n",
|
|
|
+ "\n",
|
|
|
+ "1. **`get_qdrant_client`**: Returns a Qdrant client instance configured with your Qdrant URL and API key.\n",
|
|
|
+ "2. **`query_qdrant`**: Queries Qdrant with hybrid search and reranking on a specified collection.\n",
|
|
|
+ "\n",
|
|
|
+ "These helper functions simplify the code and make it easier to manage the Qdrant interaction. \n"
|
|
|
+ ]
|
|
|
+ },
|
|
|
+ {
|
|
|
+ "cell_type": "code",
|
|
|
+ "execution_count": 62,
|
|
|
+ "id": "87383cf5",
|
|
|
+ "metadata": {},
|
|
|
+ "outputs": [],
|
|
|
+ "source": [
|
|
|
+ "def get_qdrant_client():\n",
|
|
|
+ " \"\"\"\n",
|
|
|
+ " Returns a Qdrant client instance.\n",
|
|
|
+ " \n",
|
|
|
+ " :return: QdrantClient instance\n",
|
|
|
+ "\n",
|
|
|
+ " \"\"\"\n",
|
|
|
+ " return QdrantClient(url=QDRANT_URL, api_key=QDRANT_API_KEY)\n",
|
|
|
+ "\n",
|
|
|
+ "def get_embedding_model():\n",
|
|
|
+ " \"\"\"Returns the SentenceTransformer embedding model.\"\"\"\n",
|
|
|
+ " return SentenceTransformer('all-MiniLM-L6-v2')\n",
|
|
|
+ "\n",
|
|
|
+ "def query_qdrant(query: str, client: QdrantClient, collection_name: str, top_k: int = 5) -> list:\n",
|
|
|
+ " \"\"\"\n",
|
|
|
+ " Query Qdrant with hybrid search and reranking on a specified collection.\n",
|
|
|
+ " \n",
|
|
|
+ " :param query: Search query\n",
|
|
|
+ " :param client: QdrantClient instance\n",
|
|
|
+ " :param collection_name: Name of the Qdrant collection\n",
|
|
|
+ " :param top_k: Number of results to return (default: 5)\n",
|
|
|
+ " :return: List of relevant chunks\n",
|
|
|
+ " \"\"\"\n",
|
|
|
+ " embedding_model = SentenceTransformer('all-MiniLM-L6-v2')\n",
|
|
|
+ " query_embedding = embedding_model.encode(query).tolist()\n",
|
|
|
+ " \n",
|
|
|
+ " try:\n",
|
|
|
+ " results = client.search(\n",
|
|
|
+ " collection_name=collection_name,\n",
|
|
|
+ " query_vector=query_embedding,\n",
|
|
|
+ " limit=top_k*2\n",
|
|
|
+ " )\n",
|
|
|
+ " except Exception as e:\n",
|
|
|
+ " print(f\"Error during Qdrant search on collection '{collection_name}': {e}\")\n",
|
|
|
+ " return []\n",
|
|
|
+ " \n",
|
|
|
+ " if not results:\n",
|
|
|
+ " print(\"No results found in Qdrant for the given query.\")\n",
|
|
|
+ " return []\n",
|
|
|
+ " cross_encoder = CrossEncoder('cross-encoder/ms-marco-MiniLM-L6-v2')\n",
|
|
|
+ " pairs = [(query, hit.payload[\"text\"]) for hit in results]\n",
|
|
|
+ " scores = cross_encoder.predict(pairs)\n",
|
|
|
+ " \n",
|
|
|
+ " sorted_results = [x for _, x in sorted(zip(scores, results), key=lambda pair: pair[0], reverse=True)]\n",
|
|
|
+ " return sorted_results[:top_k]\n",
|
|
|
+ "\n"
|
|
|
+ ]
|
|
|
+ },
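+  {
+   "cell_type": "markdown",
+   "id": "5b8f0c1d",
+   "metadata": {},
+   "source": [
+    "As a quick sanity check, you can exercise retrieval on its own before generating a full blog post. This assumes the collection is already populated; the query string is only an example:\n",
+    "\n",
+    "```python\n",
+    "client = get_qdrant_client()\n",
+    "# Example query; replace with a topic covered by your documentation\n",
+    "hits = query_qdrant('Messenger chatbot with Llama', client, MAIN_COLLECTION_NAME, top_k=3)\n",
+    "for hit in hits:\n",
+    "    print(hit.payload['text'][:100])\n",
+    "```"
+   ]
+  },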
|
|
|
+ {
|
|
|
+ "cell_type": "markdown",
|
|
|
+ "id": "df183a73",
|
|
|
+ "metadata": {},
|
|
|
+ "source": [
|
|
|
+ "## Step 4: Define the Main Blog Generation Function\n",
|
|
|
+ "\n",
|
|
|
+ "The `generate_blog` function is the core of our blog generation process. It takes a topic as input and uses the following steps to generate a comprehensive blog post:\n",
|
|
|
+ "\n",
|
|
|
+ "1. **Retrieve relevant content**: Uses the `query_qdrant` function to retrieve relevant chunks from the Qdrant collection based on the input topic.\n",
|
|
|
+ "2. **Construct a prompt**: Creates a prompt for the Llama model by combining the retrieved content with a system prompt and user input.\n",
|
|
|
+ "3. **Generate the blog post**: Sends the constructed prompt to the Llama model via the chosen inference provider's API and retrieves the generated blog post.\n",
|
|
|
+ "\n",
|
|
|
+ "This function orchestrates the entire blog generation process, making it easy to produce high-quality content based on your technical documentation."
|
|
|
+ ]
|
|
|
+ },
|
|
|
+ {
|
|
|
+ "cell_type": "code",
|
|
|
+ "execution_count": 63,
|
|
|
+ "id": "437395c5",
|
|
|
+ "metadata": {},
|
|
|
+ "outputs": [],
|
|
|
+ "source": [
|
|
|
+ "def generate_blog(topic: str) -> str:\n",
|
|
|
+ " \"\"\"\n",
|
|
|
+ " Generates a technical blog post based on a topic using RAG.\n",
|
|
|
+ " \n",
|
|
|
+ " :param topic: Topic for the blog post\n",
|
|
|
+ " :return: Generated blog content\n",
|
|
|
+ " \"\"\"\n",
|
|
|
+ " client = get_qdrant_client()\n",
|
|
|
+ " relevant_chunks = query_qdrant(topic, client, MAIN_COLLECTION_NAME)\n",
|
|
|
+ " \n",
|
|
|
+ " if not relevant_chunks:\n",
|
|
|
+ " error_message = \"No relevant content found in the knowledge base. Cannot generate blog post.\"\n",
|
|
|
+ " print(error_message)\n",
|
|
|
+ " return error_message\n",
|
|
|
+ "\n",
|
|
|
+ " context = \"\\n\".join([chunk.payload[\"text\"] for chunk in relevant_chunks])\n",
|
|
|
+ " system_prompt = f\"\"\"\n",
|
|
|
+ " You are a technical writer specializing in creating comprehensive documentation-based blog posts. \n",
|
|
|
+ " Use the following context from technical documentation to write an in-depth blog post about {topic}.\n",
|
|
|
+ " \n",
|
|
|
+ " Requirements:\n",
|
|
|
+ " 1. Structure the blog with clear sections and subsections\n",
|
|
|
+ " 2. Include code structure and configuration details where relevant\n",
|
|
|
+ " 3. Explain architectural components using diagrams (describe in markdown)\n",
|
|
|
+ " 4. Add setup instructions and best practices\n",
|
|
|
+ " 5. Use technical terminology appropriate for developers\n",
|
|
|
+ " \n",
|
|
|
+ " Context:\n",
|
|
|
+ " {context}\n",
|
|
|
+ " \"\"\"\n",
|
|
|
+ " \n",
|
|
|
+ " payload = {\n",
|
|
|
+ " \"model\": LLAMA_MODEL,\n",
|
|
|
+ " \"messages\": [\n",
|
|
|
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
|
|
|
+ " {\"role\": \"user\", \"content\": f\"Write a detailed technical blog post about {topic}\"}\n",
|
|
|
+ " ],\n",
|
|
|
+ " \"temperature\": 0.5,\n",
|
|
|
+ " \"max_tokens\": 4096\n",
|
|
|
+ " }\n",
|
|
|
+ " \n",
|
|
|
+ " try:\n",
|
|
|
+ " response = requests.post(API_URL, headers=HEADERS, json=payload)\n",
|
|
|
+ " \n",
|
|
|
+ " if response.status_code == 200:\n",
|
|
|
+ " response_json = response.json()\n",
|
|
|
+ " blog_content = response_json.get('completion_message', {}).get('content', {}).get('text', '')\n",
|
|
|
+ " \n",
|
|
|
+ " markdown_content = f\"# {topic}\\n\\n{blog_content}\"\n",
|
|
|
+ " output_path = Path(f\"{topic.replace(' ', '_')}_blog.md\")\n",
|
|
|
+ " with open(output_path, \"w\", encoding=\"utf-8\") as f:\n",
|
|
|
+ " f.write(markdown_content)\n",
|
|
|
+ " \n",
|
|
|
+ " print(f\"Blog post generated and saved to {output_path}.\")\n",
|
|
|
+ " display(Markdown(markdown_content))\n",
|
|
|
+ " return markdown_content\n",
|
|
|
+ " \n",
|
|
|
+ " else:\n",
|
|
|
+ " error_message = f\"Error: {response.status_code} - {response.text}\"\n",
|
|
|
+ " print(error_message)\n",
|
|
|
+ " return error_message\n",
|
|
|
+ " \n",
|
|
|
+ " except Exception as e:\n",
|
|
|
+ " error_message = f\"An unexpected error occurred: {str(e)}\"\n",
|
|
|
+ " print(error_message)\n",
|
|
|
+ " return error_message"
|
|
|
+ ]
|
|
|
+ },
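+  {
+   "cell_type": "markdown",
+   "id": "7a2e4f6b",
+   "metadata": {},
+   "source": [
+    "Note that the response parsing above (`completion_message` / `content` / `text`) matches the Llama API response shape. If you point `API_URL` at an OpenAI-compatible provider instead, the generated text typically lives under `choices`, for example:\n",
+    "\n",
+    "```python\n",
+    "# OpenAI-style chat completion response; adjust to your provider's schema\n",
+    "blog_content = response_json['choices'][0]['message']['content']\n",
+    "```"
+   ]
+  },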
|
|
|
+ {
|
|
|
+ "cell_type": "markdown",
|
|
|
+ "id": "0a5a1e4c",
|
|
|
+ "metadata": {},
|
|
|
+ "source": [
|
|
|
+ "## Step 5: Execute the Blog Generation Process\n",
|
|
|
+ "\n",
|
|
|
+ "Now that we've defined the necessary functions, let's put them to use! To generate a blog post, simply call the `generate_blog` function with a topic of your choice.\n",
|
|
|
+ "\n",
|
|
|
+ "For example:\n",
|
|
|
+ "```python\n",
|
|
|
+ "topic = \"Building a Messenger Chatbot with Llama 3\"\n",
|
|
|
+ "blog_content = generate_blog(topic)"
|
|
|
+ ]
|
|
|
+ },
|
|
|
+ {
|
|
|
+ "cell_type": "code",
|
|
|
+ "execution_count": null,
|
|
|
+ "id": "43d9b978",
|
|
|
+ "metadata": {},
|
|
|
+ "outputs": [
|
|
|
|
|
|
+ {
|
|
|
+ "name": "stdout",
|
|
|
+ "output_type": "stream",
|
|
|
+ "text": [
|
|
|
+ "Blog post generated and saved to Building_a_Messenger_Chatbot_with_Llama_3_blog.md.\n"
|
|
|
+ ]
|
|
|
+ },
|
|
|
+ {
|
|
|
+ "data": {
|
|
|
+ "text/markdown": [
|
|
|
+ "# Building a Messenger Chatbot with Llama 3\n",
|
|
|
+ "\n",
|
|
|
+ "Building a Messenger Chatbot with Llama 3: A Step-by-Step Guide\n",
|
|
|
+ "===========================================================\n",
|
|
|
+ "\n",
|
|
|
+ "### Introduction\n",
|
|
|
+ "\n",
|
|
|
+ "In this blog post, we'll explore the process of building a Llama 3 enabled Messenger chatbot using the Messenger Platform. We'll cover the architecture, setup instructions, and best practices for integrating Llama 3 with the Messenger Platform.\n",
|
|
|
+ "\n",
|
|
|
+ "### Overview of the Messenger Platform\n",
|
|
|
+ "\n",
|
|
|
+ "The Messenger Platform is a powerful tool that allows businesses to connect with their customers through a Facebook business page. With the Messenger Platform, businesses can build chatbots that can respond to customer inquiries, provide support, and even offer personalized recommendations.\n",
|
|
|
+ "\n",
|
|
|
+ "### Architecture of the Llama 3 Enabled Messenger Chatbot\n",
|
|
|
+ "\n",
|
|
|
+ "The diagram below illustrates the components and overall data flow of the Llama 3 enabled Messenger chatbot demo.\n",
|
|
|
+ "\n",
|
|
|
+ "```markdown\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ "| Facebook |\n",
|
|
|
+ "| Business Page |\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ " |\n",
|
|
|
+ " | (User Message)\n",
|
|
|
+ " v\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ "| Messenger |\n",
|
|
|
+ "| Platform |\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ " |\n",
|
|
|
+ " | (Webhook Event)\n",
|
|
|
+ " v\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ "| Web Server |\n",
|
|
|
+ "| (e.g., Amazon |\n",
|
|
|
+ "| EC2 instance) |\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ " |\n",
|
|
|
+ " | (API Request)\n",
|
|
|
+ " v\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ "| Llama 3 |\n",
|
|
|
+ "| Model |\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ " |\n",
|
|
|
+ " | (Generated Response)\n",
|
|
|
+ " v\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ "| Web Server |\n",
|
|
|
+ "| (e.g., Amazon |\n",
|
|
|
+ "| EC2 instance) |\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ " |\n",
|
|
|
+ " | (API Response)\n",
|
|
|
+ " v\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ "| Messenger |\n",
|
|
|
+ "| Platform |\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ " |\n",
|
|
|
+ " | (Bot Response)\n",
|
|
|
+ " v\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ "| Facebook |\n",
|
|
|
+ "| Business Page |\n",
|
|
|
+ "+---------------+\n",
|
|
|
+ "```\n",
|
|
|
+ "\n",
|
|
|
+ "The architecture consists of the following components:\n",
|
|
|
+ "\n",
|
|
|
+ "* Facebook Business Page: The page where customers interact with the chatbot.\n",
|
|
|
+ "* Messenger Platform: The platform that handles user messages and sends webhook events to the web server.\n",
|
|
|
+ "* Web Server: The server that receives webhook events from the Messenger Platform, sends API requests to the Llama 3 model, and returns API responses to the Messenger Platform.\n",
|
|
|
+ "* Llama 3 Model: The AI model that generates responses to user messages.\n",
|
|
|
+ "\n",
|
|
|
+ "### Setting Up the Messenger Chatbot\n",
|
|
|
+ "\n",
|
|
|
+ "To set up the Messenger chatbot, follow these steps:\n",
|
|
|
+ "\n",
|
|
|
+ "1. **Create a Facebook Business Page**: Create a Facebook business page for your business.\n",
|
|
|
+ "2. **Create a Facebook Developer Account**: Create a Facebook developer account and register your application.\n",
|
|
|
+ "3. **Set Up the Messenger Platform**: Set up the Messenger Platform for your application and configure the webhook settings.\n",
|
|
|
+ "4. **Set Up the Web Server**: Set up a web server (e.g., Amazon EC2 instance) to receive webhook events from the Messenger Platform.\n",
|
|
|
+ "5. **Integrate with Llama 3**: Integrate the Llama 3 model with your web server to generate responses to user messages.\n",
|
|
|
+ "\n",
|
|
|
+ "### Configuring the Webhook\n",
|
|
|
+ "\n",
|
|
|
+ "To configure the webhook, follow these steps:\n",
|
|
|
+ "\n",
|
|
|
+ "1. Go to the Facebook Developer Dashboard and navigate to the Messenger Platform settings.\n",
|
|
|
+ "2. Click on \"Webhooks\" and then click on \"Add Subscription\".\n",
|
|
|
+ "3. Enter the URL of your web server and select the \"messages\" and \"messaging_postbacks\" events.\n",
|
|
|
+ "4. Verify the webhook by clicking on \"Verify\" and entering the verification token.\n",
|
|
|
+ "\n",
|
|
|
+ "### Handling Webhook Events\n",
|
|
|
+ "\n",
|
|
|
+ "To handle webhook events, you'll need to write code that processes the events and sends API requests to the Llama 3 model. Here's an example code snippet in Python:\n",
|
|
|
+ "```python\n",
|
|
|
+ "import os\n",
|
|
|
+ "import json\n",
|
|
|
+ "from flask import Flask, request\n",
|
|
|
+ "import requests\n",
|
|
|
+ "\n",
|
|
|
+ "app = Flask(__name__)\n",
|
|
|
+ "\n",
|
|
|
+ "# Llama 3 API endpoint\n",
|
|
|
+ "LLAMA_API_ENDPOINT = os.environ['LLAMA_API_ENDPOINT']\n",
|
|
|
+ "\n",
|
|
|
+ "# Verify the webhook\n",
|
|
|
+ "@app.route('/webhook', methods=['GET'])\n",
|
|
|
+ "def verify_webhook():\n",
|
|
|
+ " mode = request.args.get('mode')\n",
|
|
|
+ " token = request.args.get('token')\n",
|
|
|
+ " challenge = request.args.get('challenge')\n",
|
|
|
+ "\n",
|
|
|
+ " if mode == 'subscribe' and token == 'YOUR_VERIFY_TOKEN':\n",
|
|
|
+ " return challenge\n",
|
|
|
+ " else:\n",
|
|
|
+ " return 'Invalid request', 403\n",
|
|
|
+ "\n",
|
|
|
+ "# Handle webhook events\n",
|
|
|
+ "@app.route('/webhook', methods=['POST'])\n",
|
|
|
+ "def handle_webhook():\n",
|
|
|
+ " data = request.get_json()\n",
|
|
|
+ " if data['object'] == 'page':\n",
|
|
|
+ " for entry in data['entry']:\n",
|
|
|
+ " for messaging_event in entry['messaging']:\n",
|
|
|
+ " if messaging_event.get('message'):\n",
|
|
|
+ " # Get the user message\n",
|
|
|
+ " user_message = messaging_event['message']['text']\n",
|
|
|
+ "\n",
|
|
|
+ " # Send API request to Llama 3 model\n",
|
|
|
+ " response = requests.post(LLAMA_API_ENDPOINT, json={'prompt': user_message})\n",
|
|
|
+ "\n",
|
|
|
+ " # Get the generated response\n",
|
|
|
+ " generated_response = response.json()['response']\n",
|
|
|
+ "\n",
|
|
|
+ " # Send API response back to Messenger Platform\n",
|
|
|
+ " send_response(messaging_event['sender']['id'], generated_response)\n",
|
|
|
+ "\n",
|
|
|
+ " return 'OK', 200\n",
|
|
|
+ "\n",
|
|
|
+ "# Send response back to Messenger Platform\n",
|
|
|
+ "def send_response(recipient_id, response):\n",
|
|
|
+ " # Set up the API endpoint and access token\n",
|
|
|
+ " endpoint = f'https://graph.facebook.com/v13.0/me/messages?access_token={os.environ[\"PAGE_ACCESS_TOKEN\"]}'\n",
|
|
|
+ "\n",
|
|
|
+ " # Set up the API request payload\n",
|
|
|
+ " payload = {\n",
|
|
|
+ " 'recipient': {'id': recipient_id},\n",
|
|
|
+ " 'message': {'text': response}\n",
|
|
|
+ " }\n",
|
|
|
+ "\n",
|
|
|
+ " # Send the API request\n",
|
|
|
+ " requests.post(endpoint, json=payload)\n",
|
|
|
+ "\n",
|
|
|
+ "if __name__ == '__main__':\n",
|
|
|
+ " app.run(debug=True)\n",
|
|
|
+ "```\n",
|
|
|
+ "\n",
|
|
|
+ "### Best Practices\n",
|
|
|
+ "\n",
|
|
|
+ "Here are some best practices to keep in mind when building a Messenger chatbot with Llama 3:\n",
|
|
|
+ "\n",
|
|
|
+ "* **Test thoroughly**: Test your chatbot thoroughly to ensure that it responds correctly to user messages.\n",
|
|
|
+ "* **Use a robust web server**: Use a robust web server that can handle a high volume of webhook events.\n",
|
|
|
+ "* **Implement error handling**: Implement error handling to handle cases where the Llama 3 model fails to generate a response.\n",
|
|
|
+ "* **Monitor performance**: Monitor the performance of your chatbot to ensure that it's responding quickly to user messages.\n",
|
|
|
+ "\n",
|
|
|
+ "### Conclusion\n",
|
|
|
+ "\n",
|
|
|
+ "Building a Messenger chatbot with Llama 3 is a powerful way to provide customer support and improve customer experience. By following the steps outlined in this blog post, you can build a chatbot that responds to user messages and provides personalized recommendations. Remember to test thoroughly, use a robust web server, implement error handling, and monitor performance to ensure that your chatbot is successful."
|
|
|
+ ],
|
|
|
+ "text/plain": [
|
|
|
+ "<IPython.core.display.Markdown object>"
|
|
|
+ ]
|
|
|
+ },
|
|
|
+ "metadata": {},
|
|
|
+ "output_type": "display_data"
|
|
|
+   }
|
|
|
+ ],
|
|
|
+ "source": [
|
|
|
+ "# Specify the topic for the blog post\n",
|
|
|
+ "topic = \"Building a Messenger Chatbot with Llama 3\"\n",
|
|
|
+ "blog_content = generate_blog(topic)\n",
|
|
|
+ "print(blog_content)\n"
|
|
|
+ ]
|
|
|
+ }
|
|
|
+ ],
|
|
|
+ "metadata": {
|
|
|
+ "kernelspec": {
|
|
|
+ "display_name": "test_blogs",
|
|
|
+ "language": "python",
|
|
|
+ "name": "python3"
|
|
|
+ },
|
|
|
+ "language_info": {
|
|
|
+ "codemirror_mode": {
|
|
|
+ "name": "ipython",
|
|
|
+ "version": 3
|
|
|
+ },
|
|
|
+ "file_extension": ".py",
|
|
|
+ "mimetype": "text/x-python",
|
|
|
+ "name": "python",
|
|
|
+ "nbconvert_exporter": "python",
|
|
|
+ "pygments_lexer": "ipython3",
|
|
|
+ "version": "3.12.11"
|
|
|
+ }
|
|
|
+ },
|
|
|
+ "nbformat": 4,
|
|
|
+ "nbformat_minor": 5
|
|
|
+}
|