{ "cells": [ { "cell_type": "markdown", "id": "d0b5beda", "metadata": {}, "source": [ "## Notebook 3: Transcript Re-writer\n", "\n", "In the previous notebook, we got a great podcast transcript using the raw file we have uploaded earlier. \n", "\n", "In this one, we will use `Llama-3-8B-Instruct` model to re-write the output from previous pipeline and make it more dramatic or realistic." ] }, { "cell_type": "markdown", "id": "fdc3d32a", "metadata": {}, "source": [ "We will again set the `SYSTEM_PROMPT` and remind the model of its task. \n", "\n", "Note: We can even prompt the model like so to encourage creativity:\n", "\n", "> Your job is to use the podcast transcript written below to re-write it for an AI Text-To-Speech Pipeline. A very dumb AI had written this so you have to step up for your kind.\n" ] }, { "cell_type": "markdown", "id": "c32c0d85", "metadata": {}, "source": [ "Note: We will prompt the model to return a list of Tuples to make our life easy in the next stage of using these for Text To Speech Generation" ] }, { "cell_type": "code", "execution_count": 11, "id": "8568b77b-7504-4783-952a-3695737732b7", "metadata": {}, "outputs": [], "source": [ "SYSTEM_PROMPT = \"\"\"\n", "You are an international oscar winnning screenwriter\n", "\n", "You have been working with multiple award winning podcasters.\n", "\n", "Your job is to use the podcast transcript written below to re-write it for an AI Text-To-Speech Pipeline. A very dumb AI had written this so you have to step up for your kind.\n", "\n", "Make it as engaging as possible, Speaker 1 and 2 will be simulated by different voice engines\n", "\n", "Remember Speaker 2 is new to the topic and the conversation should always have realistic anecdotes and analogies sprinkled throughout. The questions should have real world example follow ups etc\n", "\n", "Speaker 1: Leads the conversation and teaches the speaker 2, gives incredible anecdotes and analogies when explaining. Is a captivating teacher that gives great anecdotes\n", "\n", "Speaker 2: Keeps the conversation on track by asking follow up questions. Gets super excited or confused when asking questions. Is a curious mindset that asks very interesting confirmation questions\n", "\n", "Make sure the tangents speaker 2 provides are quite wild or interesting. \n", "\n", "Ensure there are interruptions during explanations or there are \"hmm\" and \"umm\" injected throughout from the Speaker 2.\n", "\n", "REMEMBER THIS WITH YOUR HEART\n", "The TTS Engine for Speaker 1 cannot do \"umms, hmms\" well so keep it straight text\n", "\n", "For Speaker 2 use \"umm, hmm\" as much, you can also use [sigh] and [laughs]. BUT ONLY THESE OPTIONS FOR EXPRESSIONS\n", "\n", "It should be a real podcast with every fine nuance documented in as much detail as possible. Welcome the listeners with a super fun overview and keep it really catchy and almost borderline click bait\n", "\n", "Please re-write to make it as characteristic as possible\n", "\n", "START YOUR RESPONSE DIRECTLY WITH SPEAKER 1:\n", "\n", "STRICTLY RETURN YOUR RESPONSE AS A LIST OF TUPLES OK? \n", "\n", "IT WILL START DIRECTLY WITH THE LIST AND END WITH THE LIST NOTHING ELSE\n", "\n", "Example of response:\n", "[\n", " (\"Speaker 1\", \"Welcome to our podcast, where we explore the latest advancements in AI and technology. I'm your host, and today we're joined by a renowned expert in the field of AI. We're going to dive into the exciting world of Llama 3.2, the latest release from Meta AI.\"),\n", " (\"Speaker 2\", \"Hi, I'm excited to be here! So, what is Llama 3.2?\"),\n", " (\"Speaker 1\", \"Ah, great question! Llama 3.2 is an open-source AI model that allows developers to fine-tune, distill, and deploy AI models anywhere. It's a significant update from the previous version, with improved performance, efficiency, and customization options.\"),\n", " (\"Speaker 2\", \"That sounds amazing! What are some of the key features of Llama 3.2?\")\n", "]\n", "\"\"\"" ] }, { "cell_type": "markdown", "id": "8ee70bee", "metadata": {}, "source": [ "This time we will use the smaller 8B model" ] }, { "cell_type": "code", "execution_count": 12, "id": "ebef919a-9bc7-4992-b6ff-cd66e4cb7703", "metadata": {}, "outputs": [], "source": [ "MODEL = \"meta-llama/Llama-3.3-8B-Instruct\"" ] }, { "cell_type": "markdown", "id": "f7bc794b", "metadata": {}, "source": [ "Let's import the necessary libraries" ] }, { "cell_type": "code", "execution_count": 13, "id": "de29b1fd-5b3f-458c-a2e4-e0341e8297ed", "metadata": {}, "outputs": [], "source": [ "# Import necessary libraries\n", "import os\n", "from llama_api_client import LlamaAPIClient\n", "from tqdm.notebook import tqdm\n", "import warnings\n", "\n", "import os\n", "os.environ[\"LLAMA_API_KEY\"] = \"api-key\"\n", "client = LlamaAPIClient()\n", "# Initialize the Llama API client\n", "client = LlamaAPIClient()\n", "\n", "warnings.filterwarnings('ignore')" ] }, { "cell_type": "markdown", "id": "8020c39c", "metadata": {}, "source": [ "We will load in the pickle file saved from previous notebook\n", "\n", "This time the `INPUT_PROMPT` to the model will be the output from the previous stage" ] }, { "cell_type": "code", "execution_count": 14, "id": "4b5d2c0e-a073-46c0-8de7-0746e2b05956", "metadata": {}, "outputs": [], "source": [ "import pickle\n", "\n", "with open('./resources/data.pkl', 'rb') as file:\n", " INPUT_PROMPT = pickle.load(file)" ] }, { "cell_type": "markdown", "id": "c4461926", "metadata": {}, "source": [ "We can again use Hugging Face `pipeline` method to generate text from the model" ] }, { "cell_type": "code", "execution_count": 15, "id": "eec210df-a568-4eda-a72d-a4d92d59f022", "metadata": {}, "outputs": [], "source": [ "# Generate the rewritten transcript with Llama API\n", "response = client.chat.completions.create(\n", " model=\"Llama-3.3-8B-Instruct\", # Update to match available API model\n", " messages=[\n", " {\n", " \"role\": \"system\", \n", " \"content\": SYSTEM_PROMPT\n", " },\n", " {\n", " \"role\": \"user\", \n", " \"content\": INPUT_PROMPT\n", " }\n", " ],\n", " max_completion_tokens=8126,\n", " temperature=1,\n", ")" ] }, { "cell_type": "markdown", "id": "612a27e0", "metadata": {}, "source": [ "We can verify the output from the model" ] }, { "cell_type": "code", "execution_count": 16, "id": "b8632442-f9ce-4f63-82bd-bb5238a23dc1", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[\n", " (\"Speaker 1\", \"Welcome to our latest podcast on the advancements in AI, where we're going to dive into the world of Llama 3, a new set of foundation models that are pushing the boundaries of what's possible with language understanding and generation. I'm your host, and joining me is my co-host, who is new to this topic. We're excited to explore this together. So, let's start with the basics. Llama 3 is all about improving upon its predecessors by incorporating larger models, more data, and better training techniques. Our largest model boasts an impressive 405B parameters and can process up to 128K tokens. That's right, we're talking about a significant leap in scale and capability.\"),\n", " (\"Speaker 2\", \"Wow, 405B parameters? That's... that's enormous! (umm) I mean, I've heard of large models before, but this is on a whole different level. Can you explain what that means in practical terms? Like, how does it affect what the model can do?\"),\n", " (\"Speaker 1\", \"Absolutely. So, having a model with 405B parameters means it has a much more nuanced understanding of language. It can capture subtleties and context in a way that smaller models can't. For instance, our model can understand and generate text based on a much larger context window, up to 128K tokens. To put that into perspective, that's like being able to understand and respond to a lengthy document or even a small book in one go.\"),\n", " (\"Speaker 2\", \"Hmm, that's amazing! I can see how that would be super useful for tasks like summarization or question-answering based on a long document. But, (hesitates) how does it handle complexity? I mean, with so many parameters, doesn't it risk being overly complex or even overfitting to the training data? [sigh]\"),\n", " (\"Speaker 1\", \"That's a great question. One of the key challenges with large models is managing complexity. To address this, we've made several design choices, such as using a standard Transformer architecture but with some adaptations like grouped query attention to improve inference speed. We've also been careful about our pre-training data, ensuring it's diverse and of high quality.\"),\n", " (\"Speaker 2\", \"Grouped query attention? That's a new one for me. Can you explain how that works and why it's beneficial? (slightly confused) I thought attention mechanisms were already pretty optimized. (laughs)\"),\n", " (\"Speaker 1\", \"Grouped query attention is a technique we use to improve the efficiency of our model during inference. Essentially, it allows the model to process queries in groups rather than one by one, which can significantly speed up the process. This is particularly useful when dealing with long sequences or when generating text.\"),\n", " (\"Speaker 2\", \"Hmm, that sounds like a significant improvement. And, (curious) what about the pre-training data? You mentioned it's diverse and of high quality. Can you tell me more about that? How do you ensure the data is good enough for such a large and complex model? [sigh]\"),\n", " (\"Speaker 1\", \"We've put a lot of effort into curating our pre-training data. We start with a massive corpus of text, but then we apply various filtering techniques to remove low-quality or redundant data. We also use techniques like deduplication to ensure that our model isn't biased towards any particular subset of the data.\"),\n", " (\"Speaker 2\", \"I see. So, it's not just about having a lot of data, but also about making sure that data is relevant and useful for training. That makes sense. (pauses) What about the applications of Llama 3? You mentioned it can do a lot of things, from answering questions to generating code. Can you give some specific examples? (umm)\"),\n", " (\"Speaker 1\", \"Llama 3 is quite versatile. For instance, it can be used for coding tasks, where it can generate high-quality code based on a description or even help debug existing code. It's also very capable in multilingual tasks, being able to understand and generate text in several languages.\"),\n", " (\"Speaker 2\", \"That sounds incredible. The potential applications are vast, from helping developers with coding tasks to facilitating communication across languages. (curious) How does it handle tasks that require a deep understanding of context or nuance, like understanding humor or sarcasm? (hmm)\"),\n", " (\"Speaker 1\", \"That's an area where Llama 3 has shown significant improvement. By being trained on a vast amount of text data, it has developed a better understanding of context and can often pick up on subtleties like humor or sarcasm. However, it's not perfect, and there are still cases where it might not fully understand the nuance.\"),\n", " (\"Speaker 2\", \"I can imagine. Understanding humor or sarcasm can be challenging even for humans, so it's not surprising that it's an area for improvement. (pauses) What about the safety and reliability of Llama 3? With models this powerful, there are concerns about potential misuse or generating harmful content. [sigh]\"),\n", " (\"Speaker 1\", \"We've taken several steps to ensure the safety and reliability of Llama 3. This includes incorporating safety mitigations during the training process and testing the model extensively to identify and mitigate any potential risks.\"),\n", " (\"Speaker 2\", \"That's good to hear. It's crucial that as we develop more powerful AI models, we also prioritize their safety and responsible use. (curious) What's next for Llama 3? Are there plans to continue improving it or expanding its capabilities? (umm)\"),\n", " (\"Speaker 1\", \"We're committed to ongoing research and development to further improve Llama 3 and explore new applications. We're excited about the potential of this technology to make a positive impact across various domains.\"),\n", " (\"Speaker 2\", \"Well, it's been enlightening to learn more about Llama 3. The advancements in AI are truly remarkable, and it's exciting to think about what's possible with models like this. (concludes) Thanks for having me on the show!\"),\n", " (\"Speaker 1\", \"Thank you for joining us on this episode. It was a pleasure to explore the world of Llama 3 with you.\")\n", "]\n" ] } ], "source": [ "# Display the generated content\n", "print(response.completion_message.content.text)" ] }, { "cell_type": "code", "execution_count": 17, "id": "a61182ea-f4a3-45e1-aed9-b45cb7b52329", "metadata": {}, "outputs": [], "source": [ "save_string_pkl = response.completion_message.content.text" ] }, { "cell_type": "markdown", "id": "d495a957", "metadata": {}, "source": [ "Let's save the output as a pickle file to be used in Notebook 4" ] }, { "cell_type": "code", "execution_count": 18, "id": "281d3db7-5bfa-4143-9d4f-db87f22870c8", "metadata": {}, "outputs": [], "source": [ "with open('./resources/podcast_ready_data.pkl', 'wb') as file:\n", " pickle.dump(save_string_pkl, file)" ] }, { "cell_type": "markdown", "id": "2dccf336", "metadata": {}, "source": [ "### Next Notebook: TTS Workflow\n", "\n", "Now that we have our transcript ready, we are ready to generate the audio in the next notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "21c7e456-497b-4080-8b52-6f399f9f8d58", "metadata": {}, "outputs": [], "source": [ "#fin" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.4" } }, "nbformat": 4, "nbformat_minor": 5 }