View source

Fix: as per the Notebook review

llamatest 2 months ago
Commit
27cf3d19d2
1 file changed: 193 additions, 404 deletions
      end-to-end-use-cases/technical_blogger/Technical_Blog_Generator.ipynb


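The reworked notebook in the diff below keeps a combine-scores-and-sort reranking step inside `query_qdrant`: each Qdrant hit is paired with a cross-encoder score, the pairs are sorted by score descending, and the top `top_k` hits are returned. A minimal standalone sketch of just that step, with dummy scores and chunk names standing in for real CrossEncoder outputs and Qdrant hits:

```python
# Sketch of the rerank step used in query_qdrant: pair each retrieved hit with
# its cross-encoder score, sort by score (highest first), and keep top_k.
# The scores and results below are dummies, not real model output.

def rerank(scores, results, top_k=5):
    """Sort results by their scores (descending) and return the top_k."""
    sorted_results = [x for _, x in sorted(zip(scores, results), key=lambda pair: pair[0], reverse=True)]
    return sorted_results[:top_k]

scores = [0.2, 0.9, 0.5]                      # hypothetical cross-encoder scores
results = ["chunk_a", "chunk_b", "chunk_c"]   # hypothetical retrieved chunks
print(rerank(scores, results, top_k=2))       # ['chunk_b', 'chunk_c']
```

In the notebook itself the same one-liner operates on `models.ScoredPoint` objects returned by the Qdrant client rather than plain strings.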
@@ -2,16 +2,16 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "a0eaff04",
+   "id": "9035a13b",
    "metadata": {},
    "source": [
     "# Ensure the required libraries are installed i.e.\n",
-    "!pip install sentence-transformers qdrant-client requests IPython\n"
+    "!pip install sentence-transformers qdrant-client requests IPython"
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "4a8cdfac",
+   "id": "2e2847fa",
    "metadata": {},
    "source": [
     "# Step 1: Import necessary modules"
@@ -19,18 +19,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
-   "id": "b3f67c91",
+   "execution_count": 18,
+   "id": "0930f7de",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Libraries installed and modules imported successfully.\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "import os\n",
     "import uuid\n",
@@ -42,147 +34,101 @@
     "import requests\n",
     "from IPython.display import Markdown, display\n",
     "import json\n",
-    "\n",
-    "print(\"Libraries installed and modules imported successfully.\")"
+    "\n"
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "58a4962f",
+   "id": "02b715a9",
    "metadata": {},
    "source": [
-    "# Step 2: Define Configuration and Global Variables\n",
-    "This contains all your static configuration, including API keys, URLs, and file paths."
+    "## Step 2: Define Configuration and Global Variables\n",
+    "\n",
+    "To use this example, follow these steps to configure your environment:\n",
+    "\n",
+    "1.  **Set up an account with Llama**: You can use the Llama API key with a model such as `Llama-4-Maverick-17B-128E-Instruct-FP8`. You are not limited to this provider; any other inference provider's endpoint with its supported Llama models will also work.\n",
+    "2.  **Choose a Llama model**: Select a suitable Llama model for inference, such as `Llama-4-Maverick-17B-128E-Instruct-FP8`, or explore other Llama models available from your chosen inference provider.\n",
+    "3.  **Create a Qdrant account**: Sign up for a Qdrant account and generate an access token.\n",
+    "4.  **Set up a Qdrant collection**: Use the provided script (`qdrant_setup_partial.py`) to create and populate a Qdrant collection.\n",
+    "\n",
+    "For more information on setting up a Qdrant collection, refer to the `qdrant_setup_partial.py` script. This script demonstrates how to process files, split them into chunks, and store them in a Qdrant collection.\n",
+    "\n",
+    "Once you've completed these steps, you can define your configuration variables as follows:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "01e548da",
+   "id": "a9030dae",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Configuration variables and collection name set.\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
-    "# --- Configuration ---\n",
-    "# API Keys should be loaded from environment variables for security.\n",
-    "# DO NOT commit your .env file or hardcode API keys directly in the code for production.\n",
-    "\n",
-    "\n",
-    "LLAMA_API_KEY = os.getenv(\"LLAMA_API_KEY\")\n",
+    "LLAMA_API_KEY = os.getenv(\"LLAMA_API_KEY\")\n",
     "if not LLAMA_API_KEY:\n",
-    "    raise ValueError(\"LLAMA_API_KEY not found. Please set it as an environment variable or in a .env file.\")\n",
-    "\n",
-    "API_URL = \"https://api.llama.com/v1/chat/completions\"\n",
+    "    raise ValueError(\"LLAMA_API_KEY not found. Please set it as an environment variable.\")\n",
+    "API_URL = \"https://api.llama.com/v1/chat/completions\"  # Replace with your chosen inference provider's API URL\n",
     "HEADERS = {\n",
     "    \"Content-Type\": \"application/json\",\n",
     "    \"Authorization\": f\"Bearer {LLAMA_API_KEY}\"\n",
     "}\n",
-    "LLAMA_MODEL = \"Llama-4-Maverick-17B-128E-Instruct-FP8\"\n",
-    "\n",
-    "# Qdrant Configuration (Now using In-Memory Qdrant for offline use)\n",
-    "# No QDRANT_URL or QDRANT_API_KEY needed for in-memory client.\n",
-    "\n",
-    "# The Qdrant collection to be queried. This will be created in-memory.\n",
-    "MAIN_COLLECTION_NAME = \"readme_blogs_latest\"\n",
-    "\n",
-    "print(\"Configuration variables and collection name set.\")"
+    "LLAMA_MODEL = \"Llama-4-Maverick-17B-128E-Instruct-FP8\"  # Choose a suitable Llama model or replace with your preferred model\n",
+    "# Qdrant Configuration\n",
+    "QDRANT_URL = \"YOUR_QDRANT_URL\"  # Replace with your Qdrant instance URL\n",
+    "QDRANT_API_KEY = os.getenv(\"QDRANT_API_KEY\") # Load from environment variable\n",
+    "if not QDRANT_API_KEY:\n",
+    "    raise ValueError(\"QDRANT_API_KEY not found. Please set it as an environment variable.\")\n",
+    "# The Qdrant collection to be queried. This should already exist.\n",
+    "MAIN_COLLECTION_NAME = \"readme_blogs_latest\""
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "d76eccb5",
+   "id": "6f13075f",
    "metadata": {},
    "source": [
-    "# Step 3: Define Helper Functions\n",
-    "It contains all the functions that handle the core logic of the application: markdown_splitter, setup_qdrant, and query_qdrant."
+    "## Step 3: Define Helper Functions\n",
+    "\n",
+    "In this step, we'll define the helper functions used throughout the blog generation process:\n",
+    "\n",
+    "1.  **`get_qdrant_client`**: Returns a Qdrant client instance configured with your Qdrant URL and API key.\n",
+    "2.  **`get_embedding_model`**: Returns the SentenceTransformer embedding model used to embed queries.\n",
+    "3.  **`query_qdrant`**: Queries Qdrant with hybrid search and cross-encoder reranking on a specified collection.\n",
+    "\n",
+    "These helper functions simplify the code and make interaction with Qdrant easier to manage.\n"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
-   "id": "2b972b21",
+   "execution_count": 14,
+   "id": "87383cf5",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Helper functions for querying Qdrant defined.\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "def get_qdrant_client():\n",
-    "    \"\"\"Returns an in-memory Qdrant client instance.\"\"\"\n",
-    "    # For an in-memory client, you don't pass URL or API Key.\n",
-    "    return QdrantClient(\":memory:\")\n",
+    "    \"\"\"\n",
+    "    Returns a Qdrant client instance.\n",
+    "    \n",
+    "    :return: QdrantClient instance\n",
+    "    \"\"\"\n",
+    "    return QdrantClient(url=QDRANT_URL, api_key=QDRANT_API_KEY)\n",
     "\n",
     "def get_embedding_model():\n",
     "    \"\"\"Returns the SentenceTransformer embedding model.\"\"\"\n",
     "    return SentenceTransformer('all-MiniLM-L6-v2')\n",
     "\n",
-    "def create_qdrant_collection(client, collection_name, vector_size):\n",
-    "    \"\"\"Creates a Qdrant collection with the specified vector size if it doesn't exist.\"\"\"\n",
-    "    try:\n",
-    "        # Check if collection exists\n",
-    "        client.get_collection(collection_name=collection_name)\n",
-    "        print(f\"Collection '{collection_name}' already exists.\")\n",
-    "    except Exception: # QdrantClient throws if collection doesn't exist\n",
-    "        print(f\"Creating collection '{collection_name}'...\")\n",
-    "        client.recreate_collection(\n",
-    "            collection_name=collection_name,\n",
-    "            vectors_config=models.VectorParams(size=vector_size, distance=models.Distance.COSINE),\n",
-    "        )\n",
-    "        print(f\"Collection '{collection_name}' created.\")\n",
-    "\n",
-    "def ingest_data_into_qdrant(client, collection_name, embedding_model, data_chunks):\n",
-    "    \"\"\"\n",
-    "    Ingests data (text chunks) into the Qdrant collection.\n",
-    "    You will need to replace this with your actual data loading and chunking logic.\n",
+    "def query_qdrant(query: str, client: QdrantClient, collection_name: str, top_k: int = 5) -> list:\n",
     "    \"\"\"\n",
-    "    print(f\"Ingesting data into collection '{collection_name}'...\")\n",
-    "    if not data_chunks:\n",
-    "        print(\"No data chunks provided for ingestion.\")\n",
-    "        return\n",
-    "\n",
-    "    points = []\n",
-    "    for i, chunk_text in enumerate(data_chunks):\n",
-    "        embedding = embedding_model.encode(chunk_text).tolist()\n",
-    "        points.append(\n",
-    "            models.PointStruct(\n",
-    "                id=i, # Unique ID for each point\n",
-    "                vector=embedding,\n",
-    "                payload={\"text\": chunk_text}\n",
-    "            )\n",
-    "        )\n",
+    "    Query Qdrant with hybrid search and reranking on a specified collection.\n",
     "    \n",
-    "    # Ensure the collection has been created with the correct vector size\n",
-    "    # before attempting to upsert.\n",
-    "    # The vector size must match the embedding model output.\n",
-    "    embedding_size = len(embedding_model.encode(\"test\").tolist())\n",
-    "    create_qdrant_collection(client, collection_name, embedding_size)\n",
-    "\n",
-    "    operation_info = client.upsert(\n",
-    "        collection_name=collection_name,\n",
-    "        wait=True,\n",
-    "        points=points,\n",
-    "    )\n",
-    "    print(f\"Data ingestion complete. Status: {operation_info.status}\")\n",
-    "\n",
-    "\n",
-    "def query_qdrant(query, client, collection_name, top_k=5):\n",
-    "    \"\"\"Query Qdrant with hybrid search and reranking on a specified collection.\"\"\"\n",
-    "    embedding_model = get_embedding_model()\n",
+    "    :param query: Search query\n",
+    "    :param client: QdrantClient instance\n",
+    "    :param collection_name: Name of the Qdrant collection\n",
+    "    :param top_k: Number of results to return (default: 5)\n",
+    "    :return: List of relevant chunks\n",
+    "    \"\"\"\n",
+    "    embedding_model = get_embedding_model()\n",
     "    query_embedding = embedding_model.encode(query).tolist()\n",
     "    \n",
-    "    # Initial vector search\n",
     "    try:\n",
     "        results = client.search(\n",
     "            collection_name=collection_name,\n",
@@ -196,67 +142,46 @@
     "    if not results:\n",
     "        print(\"No results found in Qdrant for the given query.\")\n",
     "        return []\n",
-    "\n",
-    "    # Rerank using cross-encoder\n",
     "    cross_encoder = CrossEncoder('cross-encoder/ms-marco-MiniLM-L6-v2')\n",
     "    pairs = [(query, hit.payload[\"text\"]) for hit in results]\n",
     "    scores = cross_encoder.predict(pairs)\n",
     "    \n",
-    "    # Combine scores with results\n",
     "    sorted_results = [x for _, x in sorted(zip(scores, results), key=lambda pair: pair[0], reverse=True)]\n",
     "    return sorted_results[:top_k]\n",
-    "\n",
-    "print(\"Helper functions for querying Qdrant defined.\")"
+    "\n"
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "092d8cd8",
+   "id": "df183a73",
    "metadata": {},
    "source": [
-    "# Step 4: Define the Main Blog Generation Function\n",
-    "This function orchestrates the RAG process by calling the helper functions, building the prompt, and making the API call."
+    "## Step 4: Define the Main Blog Generation Function\n",
+    "\n",
+    "The `generate_blog` function is the core of our blog generation process. It takes a topic as input and uses the following steps to generate a comprehensive blog post:\n",
+    "\n",
+    "1.  **Retrieve relevant content**: Uses the `query_qdrant` function to retrieve relevant chunks from the Qdrant collection based on the input topic.\n",
+    "2.  **Construct a prompt**: Creates a prompt for the Llama model by combining the retrieved content with a system prompt and user input.\n",
+    "3.  **Generate the blog post**: Sends the constructed prompt to the Llama model via the chosen inference provider's API and retrieves the generated blog post.\n",
+    "\n",
+    "This function orchestrates the entire blog generation process, making it easy to produce high-quality content based on your technical documentation."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
-   "id": "0d682099",
+   "execution_count": 15,
+   "id": "437395c5",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Blog generation function defined.\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
-    "def generate_blog(topic):\n",
-    "    \"\"\"Generates a technical blog post based on a topic using RAG.\"\"\"\n",
-    "    print(\"Getting Qdrant client and querying pre-existing collection...\")\n",
+    "def generate_blog(topic: str) -> str:\n",
+    "    \"\"\"\n",
+    "    Generates a technical blog post based on a topic using RAG.\n",
+    "    \n",
+    "    :param topic: Topic for the blog post\n",
+    "    :return: Generated blog content\n",
+    "    \"\"\"\n",
     "    client = get_qdrant_client()\n",
-    "    embedding_model = get_embedding_model()\n",
-    "\n",
-    "    # IMPORTANT: For in-memory Qdrant, you MUST ingest your data every time\n",
-    "    # the script runs or the client is initialized, as it's not persistent.\n",
-    "    # Replace this with your actual data loading and chunking.\n",
-    "    # Example placeholder data:\n",
-    "    example_data_chunks = [\n",
-    "        \"Llama 3 is a powerful large language model developed by Meta. It excels at various NLP tasks.\",\n",
-    "        \"To build a chatbot with Llama 3, you'll typically use an API to send prompts and receive responses.\",\n",
-    "        \"Messenger Platform allows developers to create interactive experiences for Facebook Messenger users.\",\n",
-    "        \"Integrating Llama 3 with Messenger involves setting up webhooks and handling message events.\",\n",
-    "        \"Key steps include setting up a Facebook App, configuring webhooks, and deploying your bot's backend.\",\n",
-    "        \"Best practices for chatbots include clear error handling, concise responses, and user guidance.\",\n",
-    "        \"Security is crucial; always protect your API keys and ensure your webhook endpoints are secure.\"\n",
-    "    ]\n",
-    "    ingest_data_into_qdrant(client, MAIN_COLLECTION_NAME, embedding_model, example_data_chunks)\n",
-    "    # End of IMPORTANT section for data ingestion\n",
-    "\n",
-    "\n",
-    "    # Query relevant sections from the main collection\n",
     "    relevant_chunks = query_qdrant(topic, client, MAIN_COLLECTION_NAME)\n",
     "    \n",
     "    if not relevant_chunks:\n",
@@ -265,14 +190,13 @@
     "        return error_message\n",
     "\n",
     "    context = \"\\n\".join([chunk.payload[\"text\"] for chunk in relevant_chunks])\n",
-    "    \n",
     "    system_prompt = f\"\"\"\n",
     "    You are a technical writer specializing in creating comprehensive documentation-based blog posts. \n",
     "    Use the following context from technical documentation to write an in-depth blog post about {topic}.\n",
     "    \n",
     "    Requirements:\n",
     "    1. Structure the blog with clear sections and subsections\n",
-    "    2. Include code examples and configuration details where relevant\n",
+    "    2. Include code structure and configuration details where relevant\n",
     "    3. Explain architectural components using diagrams (describe in markdown)\n",
     "    4. Add setup instructions and best practices\n",
     "    5. Use technical terminology appropriate for developers\n",
@@ -291,33 +215,19 @@
     "        \"max_tokens\": 4096\n",
     "    }\n",
     "    \n",
-    "    print(\"Sending request to Llama API for blog generation...\")\n",
     "    try:\n",
     "        response = requests.post(API_URL, headers=HEADERS, json=payload)\n",
     "        \n",
     "        if response.status_code == 200:\n",
     "            response_json = response.json()\n",
-    "            # Adjusting to handle the potentially nested structure as seen in your original code\n",
-    "            # where 'completion_message' might be missing or 'content' might be missing.\n",
-    "            # Adding .get with default values for safer access.\n",
     "            blog_content = response_json.get('completion_message', {}).get('content', {}).get('text', '')\n",
     "            \n",
-    "            if not blog_content:\n",
-    "                print(\"Warning: 'completion_message.content.text' was empty or not found in API response.\")\n",
-    "                print(f\"Full API response: {response_json}\")\n",
-    "                return \"Error: Could not extract blog content from API response.\"\n",
-    "\n",
-    "            # Format as markdown\n",
     "            markdown_content = f\"# {topic}\\n\\n{blog_content}\"\n",
-    "            \n",
-    "            # Save to file\n",
     "            output_path = Path(f\"{topic.replace(' ', '_')}_blog.md\")\n",
     "            with open(output_path, \"w\", encoding=\"utf-8\") as f:\n",
     "                f.write(markdown_content)\n",
     "            \n",
     "            print(f\"Blog post generated and saved to {output_path}.\")\n",
-    "            \n",
-    "            # Display markdown content directly in the notebook\n",
     "            display(Markdown(markdown_content))\n",
     "            return markdown_content\n",
     "            \n",
@@ -329,143 +239,42 @@
     "    except Exception as e:\n",
     "        error_message = f\"An unexpected error occurred: {str(e)}\"\n",
     "        print(error_message)\n",
-    "        return error_message\n",
-    "\n",
-    "print(\"Blog generation function defined.\")"
+    "        return error_message"
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "67497a92",
+   "id": "0a5a1e4c",
    "metadata": {},
    "source": [
-    "# Step 5: Specify the topic for the blog post and execute the Blog Generation Process\n"
+    "## Step 5: Execute the Blog Generation Process\n",
+    "\n",
+    "Now that we've defined the necessary functions, let's put them to use! To generate a blog post, simply call the `generate_blog` function with a topic of your choice.\n",
+    "\n",
+    "For example:\n",
+    "```python\n",
+    "topic = \"Building a Messenger Chatbot with Llama 3\"\n",
+    "blog_content = generate_blog(topic)\n",
+    "```"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
-   "id": "6b3113ef",
+   "execution_count": 17,
+   "id": "340f6320",
    "metadata": {},
    "outputs": [
     {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Getting Qdrant client and querying pre-existing collection...\n"
-     ]
-    },
-    {
      "name": "stderr",
      "output_type": "stream",
      "text": [
-      "/var/folders/f5/lntr7_gx6fd1y_1rtgwf2g9h0000gn/T/ipykernel_89390/3804544503.py:16: DeprecationWarning: `search` method is deprecated and will be removed in the future. Use `query_points` instead.\n",
+      "/var/folders/f5/lntr7_gx6fd1y_1rtgwf2g9h0000gn/T/ipykernel_89390/1696310953.py:28: DeprecationWarning: `search` method is deprecated and will be removed in the future. Use `query_points` instead.\n",
       "  results = client.search(\n"
      ]
     },
     {
-     "data": {
-      "application/vnd.jupyter.widget-view+json": {
-       "model_id": "f0623176afc44a00ac97aa280a104140",
-       "version_major": 2,
-       "version_minor": 0
-      },
-      "text/plain": [
-       "config.json:   0%|          | 0.00/794 [00:00<?, ?B/s]"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
-     "data": {
-      "application/vnd.jupyter.widget-view+json": {
-       "model_id": "bf7cfa830d6945c099716abd819fd652",
-       "version_major": 2,
-       "version_minor": 0
-      },
-      "text/plain": [
-       "model.safetensors:   0%|          | 0.00/90.9M [00:00<?, ?B/s]"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
-     "data": {
-      "application/vnd.jupyter.widget-view+json": {
-       "model_id": "6b3a805cb1d04d659b0cda97fee10189",
-       "version_major": 2,
-       "version_minor": 0
-      },
-      "text/plain": [
-       "tokenizer_config.json: 0.00B [00:00, ?B/s]"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
-     "data": {
-      "application/vnd.jupyter.widget-view+json": {
-       "model_id": "4a44a1a1123c4ea3960c454c260d7d01",
-       "version_major": 2,
-       "version_minor": 0
-      },
-      "text/plain": [
-       "vocab.txt: 0.00B [00:00, ?B/s]"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
-     "data": {
-      "application/vnd.jupyter.widget-view+json": {
-       "model_id": "87a76d03243d4179849421448a3bd59e",
-       "version_major": 2,
-       "version_minor": 0
-      },
-      "text/plain": [
-       "tokenizer.json: 0.00B [00:00, ?B/s]"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
-     "data": {
-      "application/vnd.jupyter.widget-view+json": {
-       "model_id": "7178785c5aaa4c02bd3f92277b67cb9f",
-       "version_major": 2,
-       "version_minor": 0
-      },
-      "text/plain": [
-       "special_tokens_map.json:   0%|          | 0.00/132 [00:00<?, ?B/s]"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
-     "data": {
-      "application/vnd.jupyter.widget-view+json": {
-       "model_id": "04882361094c4a8e892c5c8d5b9e36a3",
-       "version_major": 2,
-       "version_minor": 0
-      },
-      "text/plain": [
-       "README.md: 0.00B [00:00, ?B/s]"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    },
-    {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "Sending request to Llama API for blog generation...\n",
       "Blog post generated and saved to Building_a_Messenger_Chatbot_with_Llama_3_blog.md.\n"
      ]
     },
@@ -475,11 +284,11 @@
        "# Building a Messenger Chatbot with Llama 3\n",
        "\n",
        "Building a Messenger Chatbot with Llama 3: A Step-by-Step Guide\n",
-       "=============================================================\n",
+       "===========================================================\n",
        "\n",
        "### Introduction\n",
        "\n",
-       "In this blog post, we'll explore the process of building a Llama 3 enabled Messenger chatbot using the Messenger Platform. We'll cover the architectural components, setup instructions, and best practices to help you get started.\n",
+       "In this blog post, we'll explore the process of building a Llama 3 enabled Messenger chatbot using the Messenger Platform. We'll cover the architectural components, setup instructions, and best practices for integrating Llama 3 with the Messenger Platform.\n",
        "\n",
        "### Overview of the Messenger Platform\n",
        "\n",
@@ -490,152 +299,144 @@
        "The diagram below illustrates the components and overall data flow of the Llama 3 enabled Messenger chatbot demo:\n",
        "```markdown\n",
        "+---------------+\n",
-       "|  User         |\n",
-       "|  (Messenger    |\n",
-       "|   App)         |\n",
-       "+---------------+\n",
-       "       |\n",
-       "       |  (1) Send Message\n",
-       "       v\n",
-       "+---------------+\n",
-       "|  Facebook      |\n",
+       "|  Facebook    |\n",
        "|  Business Page  |\n",
        "+---------------+\n",
        "       |\n",
-       "       |  (2) Webhook Event\n",
-       "       v\n",
-       "+---------------+\n",
-       "|  Web Server    |\n",
-       "|  (e.g., Amazon  |\n",
-       "|   EC2 instance)  |\n",
-       "+---------------+\n",
        "       |\n",
-       "       |  (3) Process Event\n",
-       "       |  and Generate Response\n",
-       "       |  using Llama 3\n",
        "       v\n",
        "+---------------+\n",
-       "|  Llama 3       |\n",
-       "|  Model         |\n",
+       "|  Messenger    |\n",
+       "|  Platform      |\n",
        "+---------------+\n",
        "       |\n",
-       "       |  (4) Send Response\n",
-       "       |  back to User\n",
+       "       |\n",
        "       v\n",
        "+---------------+\n",
-       "|  Facebook      |\n",
-       "|  Business Page  |\n",
+       "|  Webhook       |\n",
+       "|  (Amazon EC2)  |\n",
        "+---------------+\n",
        "       |\n",
-       "       |  (5) Receive Response\n",
+       "       |\n",
        "       v\n",
        "+---------------+\n",
-       "|  User         |\n",
-       "|  (Messenger    |\n",
-       "|   App)         |\n",
+       "|  Llama 3      |\n",
+       "|  Chatbot       |\n",
        "+---------------+\n",
        "```\n",
-       "The components involved are:\n",
+       "The components include:\n",
        "\n",
-       "*   **User**: The customer interacting with the Facebook business page using the Messenger app.\n",
-       "*   **Facebook Business Page**: The business page that receives user messages and sends responses.\n",
-       "*   **Web Server**: The server that processes incoming webhook events, generates responses using Llama 3, and sends responses back to the user.\n",
-       "*   **Llama 3 Model**: The AI model that generates human-like responses to user queries.\n",
+       "*   Facebook Business Page: The business page that customers interact with.\n",
+       "*   Messenger Platform: The platform that enables businesses to connect with customers.\n",
+       "*   Webhook (Amazon EC2): The web server that handles incoming requests from the Messenger Platform and sends responses back.\n",
+       "*   Llama 3 Chatbot: The intelligent chatbot powered by Llama 3 that generates responses to customer queries.\n",
        "\n",
-       "### Setup Instructions\n",
+       "### Setting Up the Messenger Chatbot\n",
        "\n",
        "To build a Llama 3 enabled Messenger chatbot, follow these steps:\n",
        "\n",
        "#### Step 1: Create a Facebook Business Page\n",
        "\n",
-       "1.  Go to the Facebook Business Page creation page and follow the instructions to create a new page.\n",
-       "2.  Ensure that you have the necessary permissions to manage the page.\n",
+       "1.  Go to the Facebook Business Page creation page and follow the instructions to create a new business page.\n",
+       "2.  Set up the page with the required information, including the page name, category, and description.\n",
+       "\n",
+       "#### Step 2: Set Up the Messenger Platform\n",
        "\n",
-       "#### Step 2: Set up a Web Server\n",
+       "1.  Go to the [Messenger Platform](https://developers.facebook.com/docs/messenger-platform/overview) documentation and follow the instructions to set up a new Messenger app.\n",
+       "2.  Create a new app and configure the Messenger settings, including the webhook and messaging permissions.\n",
        "\n",
-       "1.  Choose a cloud provider (e.g., Amazon Web Services) and launch an EC2 instance to host your web server.\n",
-       "2.  Configure the instance with the necessary dependencies, such as Node.js and a webhook event handler.\n",
+       "#### Step 3: Configure the Webhook\n",
        "\n",
-       "Here's an example of a basic Node.js server using Express.js:\n",
+       "1.  Set up an Amazon EC2 instance to host the webhook.\n",
+       "2.  Configure the webhook to receive incoming requests from the Messenger Platform.\n",
+       "3.  Use a secure connection (HTTPS) to ensure the integrity of the data exchanged between the Messenger Platform and the webhook.\n",
+       "\n",
+       "Here's an example of how to configure the webhook using Node.js and Express:\n",
        "```javascript\n",
        "const express = require('express');\n",
        "const app = express();\n",
        "\n",
-       "app.use(express.json());\n",
+       "// Verify the webhook\n",
+       "app.get('/webhook', (req, res) => {\n",
+       "  const mode = req.query['hub.mode'];\n",
+       "  const token = req.query['hub.verify_token'];\n",
+       "  const challenge = req.query['hub.challenge'];\n",
        "\n",
-       "app.post('/webhook', (req, res) => {\n",
-       "    // Process webhook event\n",
-       "    const event = req.body;\n",
-       "    // Generate response using Llama 3\n",
-       "    const response = generateResponse(event);\n",
-       "    // Send response back to user\n",
-       "    sendResponse(response);\n",
-       "    res.status(200).send('EVENT_RECEIVED');\n",
+       "  if (mode && token) {\n",
+       "    if (mode === 'subscribe' && token === 'YOUR_VERIFY_TOKEN') {\n",
+       "      console.log('WEBHOOK_VERIFIED');\n",
+       "      res.status(200).send(challenge);\n",
+       "    } else {\n",
+       "      res.sendStatus(403);\n",
+       "    }\n",
+       "  }\n",
        "});\n",
        "\n",
-       "app.listen(3000, () => {\n",
-       "    console.log('Server listening on port 3000');\n",
+       "// Handle incoming requests\n",
+       "app.post('/webhook', (req, res) => {\n",
+       "  const data = req.body;\n",
+       "\n",
+       "  // Process the incoming request\n",
+       "  if (data.object === 'page') {\n",
+       "    data.entry.forEach((entry) => {\n",
+       "      entry.messaging.forEach((event) => {\n",
+       "        if (event.message) {\n",
+       "          // Handle the message\n",
+       "          handleMessage(event);\n",
+       "        }\n",
+       "      });\n",
+       "    });\n",
+       "  }\n",
+       "\n",
+       "  res.status(200).send('EVENT_RECEIVED');\n",
        "});\n",
        "```\n",
-       "#### Step 3: Integrate Llama 3 with the Web Server\n",
+       "#### Step 4: Integrate Llama 3 with the Webhook\n",
        "\n",
-       "1.  Install the necessary dependencies for Llama 3, such as the Llama 3 Python library.\n",
-       "2.  Implement a function to generate responses using Llama 3.\n",
+       "1.  Use the Llama 3 API to generate responses to customer queries.\n",
+       "2.  Integrate the Llama 3 API with the webhook to send responses back to the Messenger Platform.\n",
        "\n",
-       "Here's an example of a Python function that generates a response using Llama 3:\n",
+       "Here's an example of how to integrate Llama 3 with the webhook using Python:\n",
        "```python\n",
-       "import llama\n",
-       "\n",
-       "def generate_response(event):\n",
-       "    # Initialize Llama 3 model\n",
-       "    model = llama.Llama3()\n",
-       "    # Process event and generate response\n",
-       "    response = model.generate(event['message'])\n",
-       "    return response\n",
-       "```\n",
-       "#### Step 4: Configure Webhook Events\n",
-       "\n",
-       "1.  Go to the Facebook Developer Dashboard and navigate to your app's settings.\n",
-       "2.  Configure the webhook events to send incoming messages to your web server.\n",
-       "\n",
-       "Here's an example of a webhook event configuration:\n",
-       "```json\n",
-       "{\n",
-       "    \"object\": \"page\",\n",
-       "    \"entry\": [\n",
-       "        {\n",
-       "            \"id\": \"PAGE_ID\",\n",
-       "            \"time\": 1643723400,\n",
-       "            \"messaging\": [\n",
-       "                {\n",
-       "                    \"sender\": {\n",
-       "                        \"id\": \"USER_ID\"\n",
-       "                    },\n",
-       "                    \"recipient\": {\n",
-       "                        \"id\": \"PAGE_ID\"\n",
-       "                    },\n",
-       "                    \"timestamp\": 1643723400,\n",
-       "                    \"message\": {\n",
-       "                        \"text\": \"Hello, how are you?\"\n",
-       "                    }\n",
-       "                }\n",
-       "            ]\n",
-       "        }\n",
-       "    ]\n",
-       "}\n",
-       "```\n",
-       "#### Step 5: Test the Chatbot\n",
+       "import requests\n",
+       "\n",
+       "def handle_message(event):\n",
+       "    # Get the message text\n",
+       "    message_text = event['message']['text']\n",
+       "\n",
+       "    # Generate a response using Llama 3\n",
+       "    response = generate_response(message_text)\n",
+       "\n",
+       "    # Send the response back to the Messenger Platform\n",
+       "    send_response(event['sender']['id'], response)\n",
+       "\n",
+       "def generate_response(message_text):\n",
+       "    # Use the Llama 3 API to generate a response\n",
+       "    llama3_api_url = 'https://api.llama3.com/generate'\n",
+       "    headers = {'Authorization': 'Bearer YOUR_LLAMA3_API_KEY'}\n",
+       "    data = {'text': message_text}\n",
+       "\n",
+       "    response = requests.post(llama3_api_url, headers=headers, json=data)\n",
+       "    return response.json()['response']\n",
        "\n",
-       "1.  Use the Messenger app to send a message to your Facebook business page.\n",
-       "2.  Verify that the chatbot responds with a relevant answer generated by Llama 3.\n",
+       "def send_response(recipient_id, response):\n",
+       "    # Send the response back to the Messenger Platform\n",
+       "    messenger_api_url = 'https://graph.facebook.com/v13.0/me/messages'\n",
+       "    headers = {'Authorization': 'Bearer YOUR_MESSENGER_API_KEY'}\n",
+       "    data = {'recipient': {'id': recipient_id}, 'message': {'text': response}}\n",
        "\n",
+       "    requests.post(messenger_api_url, headers=headers, json=data)\n",
+       "```\n",
        "### Best Practices\n",
        "\n",
-       "*   Ensure that your web server is secure and scalable to handle a large volume of incoming requests.\n",
-       "*   Implement logging and monitoring to track chatbot performance and identify areas for improvement.\n",
-       "*   Continuously update and fine-tune your Llama 3 model to improve response accuracy and relevance.\n",
+       "1.  **Use a secure connection**: Ensure that the webhook uses a secure connection (HTTPS) to protect the data exchanged between the Messenger Platform and the webhook.\n",
+       "2.  **Validate incoming requests**: Validate incoming requests from the Messenger Platform to prevent unauthorized access.\n",
+       "3.  **Handle errors**: Handle errors and exceptions properly to prevent crashes and ensure a smooth user experience.\n",
+       "4.  **Test thoroughly**: Test the chatbot thoroughly to ensure that it works as expected and provides accurate responses.\n",
+       "\n",
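+       "As an illustration of practices 1 and 2 above, here is a minimal sketch of webhook request validation in Python. It assumes `APP_SECRET` holds the app secret from your Facebook app settings; the Messenger Platform signs each request body with HMAC-SHA256 and sends the digest in the `X-Hub-Signature-256` header:\n",
+       "```python\n",
+       "import hashlib\n",
+       "import hmac\n",
+       "\n",
+       "APP_SECRET = 'YOUR_APP_SECRET'  # from your Facebook app settings\n",
+       "\n",
+       "def is_valid_signature(payload: bytes, signature_header: str) -> bool:\n",
+       "    # The platform sends 'sha256=<hex digest>' computed over the raw request body.\n",
+       "    expected = 'sha256=' + hmac.new(APP_SECRET.encode(), payload, hashlib.sha256).hexdigest()\n",
+       "    # Use a constant-time comparison to avoid timing attacks.\n",
+       "    return hmac.compare_digest(expected, signature_header or '')\n",
+       "```\n",
+       "Reject any request whose signature does not verify before passing it to `handle_message`.\n",
+       "\n",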
+       "### Conclusion\n",
        "\n",
-       "By following these steps and best practices, you can build a Llama 3 enabled Messenger chatbot that provides an engaging and informative customer experience."
+       "Building a Llama 3 enabled Messenger chatbot requires careful planning, setup, and integration with the Messenger Platform. By following the steps outlined in this blog post, businesses can create intelligent and knowledgeable chatbots that provide 24x7 customer support, improving customer experience and reducing costs."
       ],
       "text/plain": [
        "<IPython.core.display.Markdown object>"
@@ -646,23 +447,11 @@
     }
    ],
    "source": [
-    "# Specify the topic for the blog post\n",
     "topic = \"Building a Messenger Chatbot with Llama 3\"\n",
-    "\n",
-    "# Generate and display the blog content\n",
     "blog_content = generate_blog(topic)\n",
-    "\n",
     "if isinstance(blog_content, str) and \"Error\" in blog_content:\n",
     "    print(blog_content)"
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "0930f7de",
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {