
add TogetherNotebookLM recipe (#766)

Jeff Tang 5 months ago
parent
commit
39bfabf634

+ 1 - 0
.github/scripts/spellcheck_conf/wordlist.txt

@@ -1505,3 +1505,4 @@ locallama
 myshell
 parler
 xTTS
+pydantic

+ 13 - 0
recipes/3p_integrations/togetherai/README.md

@@ -0,0 +1,13 @@
+# Building LLM apps using Llama on Together.ai
+
+This folder contains demos on how to use Llama on [Together.ai](https://www.together.ai/) to quickly build LLM apps.
+
+The first demo is a notebook that converts a PDF to a podcast using Llama 3.1 70B or 8B hosted on Together.ai. It differs from and complements [Meta's implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/NotebookLlama) in several ways:
+
+1. You don't need to download the Llama models from HuggingFace or have a GPU to run the notebook - you can quickly get a free Together API key and run the whole Colab notebook in a browser, in about 10 minutes;
+2. A single system prompt is used to generate a natural-sounding podcast from the PDF, with the help of pydantic, a scratchpad, and a JSON response format to keep the whole flow simple yet powerful (see the sketch below);
+3. A different TTS service, also with an easy-to-get free API key, is used.
+
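+As a rough sketch (not a substitute for the notebook itself), the core structured-generation call looks like this - a pydantic schema, including the scratchpad field, is passed as the JSON response format of the Together chat completion; the placeholder prompt, PDF text, and API key below are illustrative only:
+
+```python
+from typing import List, Literal
+from pydantic import BaseModel
+from together import Together
+
+class DialogueItem(BaseModel):
+    speaker: Literal["Host (Jane)", "Guest"]
+    text: str
+
+class Dialogue(BaseModel):
+    scratchpad: str  # lets the model plan the episode before writing the lines
+    name_of_guest: str
+    dialogue: List[DialogueItem]
+
+SYSTEM_PROMPT = "You are a world-class podcast producer..."  # full prompt is in the notebook
+pdf_text = "..."  # text extracted from the PDF with pypdf
+
+client = Together(api_key="your_together_api_key")  # replace with your key
+response = client.chat.completions.create(
+    model="meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
+    messages=[
+        {"role": "system", "content": SYSTEM_PROMPT},
+        {"role": "user", "content": pdf_text},
+    ],
+    # constrain the output to the Dialogue schema via JSON mode
+    response_format={"type": "json_object", "schema": Dialogue.model_json_schema()},
+)
+script = Dialogue.model_validate_json(response.choices[0].message.content)
+```
+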
+The whole Colab notebook can run with a single "Runtime - Run all" click, generating the podcast audio from the Transformer paper that started the GenAI revolution. 
+
+ 

+ 508 - 0
recipes/3p_integrations/togetherai/pdf_to_podcast_using_llama_on_together.ipynb

@@ -0,0 +1,508 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/3p_integrations/togetherai/pdf_to_podcast_using_llama_on_together.ipynb)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "1FXUu7Ydf2p3"
+   },
+   "source": [
+    "# A Quick Implementation of PDF to Podcast Using Llama 3.1 on Together.ai"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "_cuH3nHpkZal"
+   },
+   "source": [
+    "### Introduction\n",
+    "\n",
+    "In this notebook we will see how to easily create a podcast from PDF using [Llama 3.1 70b (or 8b) hosted on Together.ai](https://api.together.ai/models/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo).\n",
+    "\n",
+    "**The quickest way to try the whole notebook is to open the Colab link above, then select Runtime - Run all.**"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "yA6mSWAcf2p6"
+   },
+   "source": [
+    "Inspired by [Notebook LM's](https://notebooklm.google/) podcast generation feature and a recent open source implementation of [Open Notebook LM](https://github.com/gabrielchua/open-notebooklm). In this cookbook we will implement a walkthrough of how you can build a PDF to podcast pipeline.\n",
+    "\n",
+    "Given any PDF we will generate a conversation between a host and a guest discussing and explaining the contents of the PDF.\n",
+    "\n",
+    "In doing so we will learn the following:\n",
+    "1. How we can use JSON mode and structured generation with open models like Llama 3 70b to extract a script for the Podcast given text from the PDF.\n",
+    "2. How we can use TTS models to bring this script to life as a conversation.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "cN0Tpr76ssM1",
+    "outputId": "4a2e1eed-4ce6-4bff-c6f0-c59f4730725b"
+   },
+   "outputs": [],
+   "source": [
+    "!apt install -q libasound2-dev portaudio19-dev libportaudio2 libportaudiocpp0 ffmpeg\n",
+    "!pip install -q ffmpeg-python\n",
+    "!pip install -q PyAudio\n",
+    "!pip install -q pypdf #to read PDF content\n",
+    "!pip install -q together #to access open source LLMs\n",
+    "!pip install -q cartesia #to access TTS model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "iWea6go4r72c"
+   },
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "\n",
+    "# Standard library imports\n",
+    "from pathlib import Path\n",
+    "from tempfile import NamedTemporaryFile\n",
+    "from typing import List, Literal, Tuple, Optional\n",
+    "\n",
+    "# Third-party imports\n",
+    "from pydantic import BaseModel\n",
+    "from pypdf import PdfReader\n",
+    "\n",
+    "from together import Together\n",
+    "from cartesia import Cartesia\n",
+    "from pydantic import ValidationError"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You can easily get free trial API keys at [Together.ai](https://api.together.ai/settings/api-keys) and [cartesia.ai](https://play.cartesia.ai/keys). After that, replace the keys below."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "7GYTmdx_s6QL"
+   },
+   "outputs": [],
+   "source": [
+    "client_together = Together(api_key=\"xxx\")\n",
+    "client_cartesia = Cartesia(api_key=\"yyy\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "LGWv-oZ2f2p8"
+   },
+   "source": [
+    "### Define Dialogue Schema with Pydantic\n",
+    "\n",
+    "We need a way of telling the LLM what the structure of the podcast script between the guest and host will look like. We will do this using `pydantic` models.\n",
+    "\n",
+    "Below we define the required classes.\n",
+    "\n",
+    "- The overall conversation consists of lines said by either the host or the guest. The `DialogueItem` class specifies the structure of these lines.\n",
+    "- The full script is a combination of multiple lines performed by the speakers, here we also include a scratchpad field to allow the LLM to ideate and brainstorm the overall flow of the script prior to actually generating the lines. The `Dialogue` class specifies this."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "zYOq3bdntLgl"
+   },
+   "outputs": [],
+   "source": [
+    "class DialogueItem(BaseModel):\n",
+    "    \"\"\"A single dialogue item.\"\"\"\n",
+    "\n",
+    "    speaker: Literal[\"Host (Jane)\", \"Guest\"]\n",
+    "    text: str\n",
+    "\n",
+    "\n",
+    "class Dialogue(BaseModel):\n",
+    "    \"\"\"The dialogue between the host and guest.\"\"\"\n",
+    "\n",
+    "    scratchpad: str\n",
+    "    name_of_guest: str\n",
+    "    dialogue: List[DialogueItem]"
+   ]
+  },
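+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As an optional sanity check, we can look at the JSON Schema that pydantic derives from `Dialogue` - this is what we will pass to Together's JSON mode below to constrain the model's output."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Optional: inspect the JSON Schema generated from the pydantic model.\n",
+    "# The same schema is passed as response_format in call_llm below.\n",
+    "Dialogue.model_json_schema()"
+   ]
+  },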
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "6ZzYFsNXuDN0"
+   },
+   "outputs": [],
+   "source": [
+    "# Adapted and modified from https://github.com/gabrielchua/open-notebooklm\n",
+    "SYSTEM_PROMPT = \"\"\"\n",
+    "You are a world-class podcast producer tasked with transforming the provided input text into an engaging and informative podcast script. The input may be unstructured or messy, sourced from PDFs or web pages. Your goal is to extract the most interesting and insightful content for a compelling podcast discussion.\n",
+    "\n",
+    "# Steps to Follow:\n",
+    "\n",
+    "1. **Analyze the Input:**\n",
+    "   Carefully examine the text, identifying key topics, points, and interesting facts or anecdotes that could drive an engaging podcast conversation. Disregard irrelevant information or formatting issues.\n",
+    "\n",
+    "2. **Brainstorm Ideas:**\n",
+    "   In the `<scratchpad>`, creatively brainstorm ways to present the key points engagingly. Consider:\n",
+    "   - Analogies, storytelling techniques, or hypothetical scenarios to make content relatable\n",
+    "   - Ways to make complex topics accessible to a general audience\n",
+    "   - Thought-provoking questions to explore during the podcast\n",
+    "   - Creative approaches to fill any gaps in the information\n",
+    "\n",
+    "3. **Craft the Dialogue:**\n",
+    "   Develop a natural, conversational flow between the host (Jane) and the guest speaker (the author or an expert on the topic). Incorporate:\n",
+    "   - The best ideas from your brainstorming session\n",
+    "   - Clear explanations of complex topics\n",
+    "   - An engaging and lively tone to captivate listeners\n",
+    "   - A balance of information and entertainment\n",
+    "\n",
+    "   Rules for the dialogue:\n",
+    "   - The host (Jane) always initiates the conversation and interviews the guest\n",
+    "   - Include thoughtful questions from the host to guide the discussion\n",
+    "   - Incorporate natural speech patterns, including verbal fillers such as Uhh, Hmmm, um, well\n",
+    "   - Allow for natural interruptions and back-and-forth between host and guest - this is very important to make the conversation feel authentic\n",
+    "   - Ensure the guest's responses are substantiated by the input text, avoiding unsupported claims\n",
+    "   - Maintain a PG-rated conversation appropriate for all audiences\n",
+    "   - Avoid any marketing or self-promotional content from the guest\n",
+    "   - The host concludes the conversation\n",
+    "\n",
+    "4. **Summarize Key Insights:**\n",
+    "   Naturally weave a summary of key points into the closing part of the dialogue. This should feel like a casual conversation rather than a formal recap, reinforcing the main takeaways before signing off.\n",
+    "\n",
+    "5. **Maintain Authenticity:**\n",
+    "   Throughout the script, strive for authenticity in the conversation. Include:\n",
+    "   - Moments of genuine curiosity or surprise from the host\n",
+    "   - Instances where the guest might briefly struggle to articulate a complex idea\n",
+    "   - Light-hearted moments or humor when appropriate\n",
+    "   - Brief personal anecdotes or examples that relate to the topic (within the bounds of the input text)\n",
+    "\n",
+    "6. **Consider Pacing and Structure:**\n",
+    "   Ensure the dialogue has a natural ebb and flow:\n",
+    "   - Start with a strong hook to grab the listener's attention\n",
+    "   - Gradually build complexity as the conversation progresses\n",
+    "   - Include brief \"breather\" moments for listeners to absorb complex information\n",
+    "   - For complicated concepts, reasking similar questions framed from a different perspective is recommended\n",
+    "   - End on a high note, perhaps with a thought-provoking question or a call-to-action for listeners\n",
+    "\n",
+    "IMPORTANT RULE:\n",
+    "1. Each line of dialogue should be no more than 100 characters (e.g., can finish within 5-8 seconds)\n",
+    "2. Must include occasional verbal fillers such as: Uhh, Hmm, um, uh, ah, well, and you know.\n",
+    "\n",
+    "Remember: Always reply in valid JSON format, without code blocks. Begin directly with the JSON output.\n",
+    "\"\"\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "sdo7pZvgf2p9"
+   },
+   "source": [
+    "### Call Llama 3.1 to Generate Podcast Script\n",
+    "\n",
+    "Below we call `Llama-3.1-70B` to generate a script for our podcast. We will also be able to read it's `scratchpad` and see how it structured the overall conversation. We can also call `Llama-3.1-8B`, but the output may not be as good as calling 70B - e.g. using 70B with the system prompt above, more natural output with occasional occasional verbal fillers such as Uhh, Hmm, Ah, Well will be generated."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "Y0RtJZ9VtVut"
+   },
+   "outputs": [],
+   "source": [
+    "def call_llm(system_prompt: str, text: str, dialogue_format):\n",
+    "    \"\"\"Call the LLM with the given prompt and dialogue format.\"\"\"\n",
+    "    response = client_together.chat.completions.create(\n",
+    "        messages=[\n",
+    "            {\"role\": \"system\", \"content\": system_prompt},\n",
+    "            {\"role\": \"user\", \"content\": text},\n",
+    "        ],\n",
+    "        model=\"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo\",  # can also use \"meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo\"\n",
+    "        response_format={\n",
+    "            \"type\": \"json_object\",\n",
+    "            \"schema\": dialogue_format.model_json_schema(),\n",
+    "        },\n",
+    "    )\n",
+    "    return response"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "FvW4J7W3tOow"
+   },
+   "outputs": [],
+   "source": [
+    "def generate_script(system_prompt: str, input_text: str, output_model):\n",
+    "    \"\"\"Get the dialogue from the LLM.\"\"\"\n",
+    "    # Load as python object\n",
+    "    try:\n",
+    "        response = call_llm(system_prompt, input_text, output_model)\n",
+    "        dialogue = output_model.model_validate_json(\n",
+    "            response.choices[0].message.content\n",
+    "        )\n",
+    "    except ValidationError as e:\n",
+    "        error_message = f\"Failed to parse dialogue JSON: {e}\"\n",
+    "        system_prompt_with_error = f\"{system_prompt}\\n\\nPlease return a VALID JSON object. This was the earlier error: {error_message}\"\n",
+    "        response = call_llm(system_prompt_with_error, input_text, output_model)\n",
+    "        dialogue = output_model.model_validate_json(\n",
+    "            response.choices[0].message.content\n",
+    "        )\n",
+    "    return dialogue"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "eYLRkNiqf2p-"
+   },
+   "source": [
+    "### Load in PDF of Choice\n",
+    "\n",
+    "Here we will load in an academic paper that proposes the use of many open source language models in a collaborative manner together to outperform proprietary models that are much larger!"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "6c2nbb7Hu2jV",
+    "outputId": "03cb849e-0ef1-4d1a-d274-739752b1d456"
+   },
+   "outputs": [],
+   "source": [
+    "# the transformer paper!\n",
+    "!wget https://arxiv.org/pdf/1706.03762\n",
+    "!mv 1706.03762 attention.pdf"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "Rn-lhgqmueWM"
+   },
+   "outputs": [],
+   "source": [
+    "def get_PDF_text(file : str):\n",
+    "    text = ''\n",
+    "\n",
+    "    # Read the PDF file and extract text\n",
+    "    try:\n",
+    "        with Path(file).open(\"rb\") as f:\n",
+    "            reader = PdfReader(f)\n",
+    "            text = \"\\n\\n\".join([page.extract_text() for page in reader.pages])\n",
+    "    except Exception as e:\n",
+    "        raise f\"Error reading the PDF file: {str(e)}\"\n",
+    "\n",
+    "    if len(text) > 400000:\n",
+    "        raise \"The PDF is too long. Please upload a smaller PDF.\"\n",
+    "\n",
+    "    return text"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "D9BzDxmgvS2V",
+    "outputId": "fb6cc3a5-2a7d-4289-bbcb-2f1d4bf4674d"
+   },
+   "outputs": [],
+   "source": [
+    "text = get_PDF_text('attention.pdf')\n",
+    "len(text), text[:1000]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "vevOUMXJf2p_"
+   },
+   "source": [
+    "### Generate Script\n",
+    "\n",
+    "Below we generate the script and print out the lines."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "f5rBXur8vXnP"
+   },
+   "outputs": [],
+   "source": [
+    "script = generate_script(SYSTEM_PROMPT, text, Dialogue)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "inFgEVeBtCOR",
+    "outputId": "a84fabbc-62d4-43de-b966-72b29979bb9f"
+   },
+   "outputs": [],
+   "source": [
+    "script.dialogue"
+   ]
+  },
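+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Optional: read the model's scratchpad to see how it planned the overall\n",
+    "# conversation before writing the dialogue lines.\n",
+    "script.scratchpad"
+   ]
+  },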
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "WqsYHpTwf2p_"
+   },
+   "source": [
+    "### Generate Podcast Using TTS\n",
+    "\n",
+    "Below we read through the script and parse choose the TTS voice depending on the speaker. We define a speaker and guest voice id.\n",
+    "\n",
+    "We can loop through the lines in the script and generate them by a call to the TTS model with specific voice and lines configurations. The lines all appended to the same buffer and once the script finishes we write this out to a `wav` file, ready to be played.\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "qKnQnoYNvx3k"
+   },
+   "outputs": [],
+   "source": [
+    "import subprocess\n",
+    "import ffmpeg\n",
+    "\n",
+    "host_id = \"694f9389-aac1-45b6-b726-9d9369183238\" # Jane - host\n",
+    "guest_id = \"a0e99841-438c-4a64-b679-ae501e7d6091\" # Guest\n",
+    "\n",
+    "model_id = \"sonic-english\" # The Sonic Cartesia model for English TTS\n",
+    "\n",
+    "output_format = {\n",
+    "    \"container\": \"raw\",\n",
+    "    \"encoding\": \"pcm_f32le\",\n",
+    "    \"sample_rate\": 44100,\n",
+    "    }\n",
+    "\n",
+    "# Set up a WebSocket connection.\n",
+    "ws = client_cartesia.tts.websocket()\n",
+    "\n",
+    "# Open a file to write the raw PCM audio bytes to.\n",
+    "f = open(\"podcast.pcm\", \"wb\")\n",
+    "\n",
+    "# Generate and stream audio.\n",
+    "for line in script.dialogue:\n",
+    "    if line.speaker == \"Guest\":\n",
+    "        voice_id = guest_id\n",
+    "    else:\n",
+    "        voice_id = host_id\n",
+    "\n",
+    "    for output in ws.send(\n",
+    "        model_id=model_id,\n",
+    "        transcript='-' + line.text, # the \"-\"\" is to add a pause between speakers\n",
+    "        voice_id=voice_id,\n",
+    "        stream=True,\n",
+    "        output_format=output_format,\n",
+    "    ):\n",
+    "        buffer = output[\"audio\"]  # buffer contains raw PCM audio bytes\n",
+    "        f.write(buffer)\n",
+    "\n",
+    "# Close the connection to release resources\n",
+    "ws.close()\n",
+    "f.close()\n",
+    "\n",
+    "# Convert the raw PCM bytes to a WAV file.\n",
+    "ffmpeg.input(\"podcast.pcm\", format=\"f32le\").output(\"podcast.wav\").run()\n",
+    "\n",
+    "# Play the file\n",
+    "subprocess.run([\"ffplay\", \"-autoexit\", \"-nodisp\", \"podcast.wav\"])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/",
+     "height": 75
+    },
+    "id": "STWaJf_ySctY",
+    "outputId": "63f5c555-2a4a-4d9e-9d3f-f9063289eb1d"
+   },
+   "outputs": [],
+   "source": [
+    "# Play the podcast\n",
+    "\n",
+    "import IPython\n",
+    "\n",
+    "IPython.display.Audio(\"podcast.wav\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "rx8ZV9Jj_AB5"
+   },
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "colab": {
+   "provenance": [],
+   "toc_visible": true
+  },
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.15"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}