Add files via upload

Sanyam Bhutani 8 months ago
Commit
669e6c96d5
1 file changed, 38 additions and 10 deletions

+ 38 - 10
recipes/quickstart/Getting_to_know_Llama.ipynb

@@ -15,8 +15,8 @@
     "id": "LERqQn5v8-ak"
    },
    "source": [
-    "# **Getting to know Llama 3: Everything you need to start building**\n",
-    "Our goal in this session is to provide a guided tour of Llama 3 with comparison with Llama 2, including understanding different Llama 3 models, how and where to access them, Generative AI and Chatbot architectures, prompt engineering, RAG (Retrieval Augmented Generation), Fine-tuning and more. All this is implemented with a starter code for you to take it and use it in your Llama 3 projects."
+    "# **Getting to know Llama 3.1: Everything you need to start building**\n",
+    "Our goal in this session is to provide a guided tour of Llama 3.1 with comparison with Llama 2, including understanding different Llama 3.1 models, how and where to access them, Generative AI and Chatbot architectures, prompt engineering, RAG (Retrieval Augmented Generation), Fine-tuning and more. All this is implemented with a starter code for you to take it and use it in your Llama 3.1 projects."
    ]
   },
   {
@@ -113,6 +113,20 @@
     "      llama-3-70b --> llama-3-70b-instruct\n",
     "      classDef default fill:#CCE6FF,stroke:#84BCF5,textColor:#1C2B33,fontFamily:trebuchet ms;\n",
     "  \"\"\")\n",
+    "  \n",
+    "def llama3_1_family():\n",
+    "  mm(\"\"\"\n",
+    "  graph LR;\n",
+    "      llama-3-1 --> llama-3-8b\n",
+    "      llama-3-1 --> llama-3-70b\n",
+    "      llama-3-1 --> llama-3-4050b\n",
+    "      llama-3-1-8b --> llama-3-1-8b\n",
+    "      llama-3-1-8b --> llama-3-1-8b-instruct\n",
+    "      llama-3-1-70b --> llama-3-1-70b\n",
+    "      llama-3-1-70b --> llama-3-1-70b-instruct\n",
+    "      lama-3-1-405b --> llama-3-1-405b-instruct\n",
+    "      classDef default fill:#CCE6FF,stroke:#84BCF5,textColor:#1C2B33,fontFamily:trebuchet ms;\n",
+    "  \"\"\")\n",
     "\n",
     "import ipywidgets as widgets\n",
     "from IPython.display import display, Markdown\n",
@@ -184,7 +198,7 @@
     "id": "i4Np_l_KtIno"
    },
    "source": [
-    "### **1 - Understanding Llama 3**"
+    "### **1 - Understanding Llama 3.1**"
    ]
   },
   {
@@ -193,13 +207,13 @@
     "id": "PGPSI3M5PGTi"
    },
    "source": [
-    "### **1.1 - What is Llama 3?**\n",
+    "### **1.1 - What is Llama 3.1?**\n",
     "\n",
     "* State of the art (SOTA), Open Source LLM\n",
-    "* 8B, 70B - base and instruct models\n",
+    "* 8B, 70B, 405B - base and instruct models\n",
     "* Choosing model: Size, Quality, Cost, Speed\n",
     "* Pretrained + Chat\n",
-    "* [Meta Llama 3 Blog](https://ai.meta.com/blog/meta-llama-3/)\n",
+    "* [Meta Llama 3.1 Blog](https://ai.meta.com/blog/meta-llama-3-1/)\n",
     "* [Getting Started with Meta Llama](https://llama.meta.com/docs/get-started)"
    ]
   },
@@ -239,12 +253,21 @@
    ]
   },
   {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "llama3_1_family()"
+   ]
+  },
+  {
    "cell_type": "markdown",
    "metadata": {
     "id": "aYeHVVh45bdT"
    },
    "source": [
-    "### **1.2 - Accessing Llama 3**\n",
+    "### **1.2 - Accessing Llama 3.1**\n",
     "* Download + Self Host (i.e. [download Llama](https://ai.meta.com/resources/models-and-libraries/llama-downloads))\n",
     "* Hosted API Platform (e.g. [Groq](https://console.groq.com/), [Replicate](https://replicate.com/meta/meta-llama-3-8b-instruct), [Together](https://api.together.xyz/playground/language/meta-llama/Llama-3-8b-hf), [Anyscale](https://app.endpoints.anyscale.com/playground))\n",
     "\n",
@@ -258,7 +281,7 @@
     "id": "kBuSay8vtzL4"
    },
    "source": [
-    "### **1.3 - Use Cases of Llama 3**\n",
+    "### **1.3 - Use Cases of Llama 3.1**\n",
     "* Content Generation\n",
     "* Summarization\n",
     "* General Chatbots\n",
@@ -943,7 +966,7 @@
     "import bs4\n",
     "\n",
     "# Step 1: Load the document from a web url\n",
-    "loader = WebBaseLoader([\"https://huggingface.co/blog/llama3\"])\n",
+    "loader = WebBaseLoader([\"https://huggingface.co/blog/llama31\"])\n",
     "documents = loader.load()\n",
     "\n",
     "# Step 2: Split the document into chunks with a specified chunk size\n",
@@ -1079,7 +1102,7 @@
    },
    "source": [
     "#### **Resources**\n",
-    "- [Meta Llama 3 Blog](https://ai.meta.com/blog/meta-llama-3/)\n",
+    "- [Meta Llama 3.1 Blog](https://ai.meta.com/blog/meta-llama-3-1/)\n",
     "- [Getting Started with Meta Llama](https://llama.meta.com/docs/get-started)\n",
     "- [Llama 3 repo](https://github.com/meta-llama/llama3)\n",
     "- [Llama 3 model card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md)\n",
@@ -1088,6 +1111,11 @@
     "- [Acceptable Use Policy](https://ai.meta.com/llama/use-policy/)\n",
     "\n"
    ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": []
   }
  ],
  "metadata": {