
move files

Sanyam Bhutani 3 months ago
parent
commit
84c4def4ef
3 changed files with 11 additions and 1188 deletions
  1. README.md (+7 -8)
  2. getting-started/Getting_to_know_Llama.ipynb (+0 -1150)
  3. getting-started/Prompt_Engineering_with_Llama_3.ipynb (+4 -30)

+ 7 - 8
README.md

@@ -7,15 +7,14 @@ Welcome to the official repository for helping you get started with [inference](
 The examples cover the most popular community approaches, popular use-cases and the latest Llama 3.2 Vision and Llama 3.2 Text, in this repository. 
 
 > [!TIP]
-> Repository Structure:
-> * [Start building with the Llama 3.2 models](./getting-started/)
-> * [End to End Use cases with Llama model family](./end-to-end-use-cases)
-> * [Examples of building with 3rd Party Llama Providers](./3p-integrations)
-> [!TIP]
-> Get started with Llama 3.2 with these new recipes:
-> * [Finetune Llama 3.2 Vision](./getting-started/finetuning/finetune_vision_model.md)
+> Popular getting started links:
+> * [Build with Llama Notebook](./getting-started/build_with_Llama_3_2.ipynb)
 > * [Multimodal Inference with Llama 3.2 Vision](./getting-started/inference/local_inference/README.md#multimodal-inference)
-> * [Inference on Llama Guard 1B + Multimodal inference on Llama Guard 11B-Vision](./end-to-end-use-cases/responsible_ai/llama_guard/llama_guard_text_and_vision_inference.ipynb)
+> * [Inference on Llama Guard 1B + Multimodal inference on Llama Guard 11B-Vision](./end-to-end-use-cases/responsible_ai/llama_guard/llama_guard_text_and_vision_inference.ipynb)
+
+> [!TIP]
+> Popular end to end recipes:
+> * [Finetune Llama 3.2 Vision](./getting-started/finetuning/finetune_vision_model.md)
 
 > [!NOTE]
 > Llama 3.2 follows the same prompt template as Llama 3.1, with a new special token `<|image|>` representing the input image for the multimodal models.
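
To make the note above concrete, here is a minimal sketch of how the `<|image|>` token slots into the Llama 3.1-style chat template for the multimodal models. The canonical template comes from the model's tokenizer (e.g. `tokenizer.apply_chat_template`), so treat this string as an illustration rather than the authoritative format.

```python
# Illustrative only: Llama 3.2 Vision reuses the Llama 3.1 chat template,
# with <|image|> marking where the input image is injected for multimodal models.
multimodal_prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "<|image|>Describe this image in two sentences.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(multimodal_prompt)
```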

+ 0 - 1150
getting-started/Getting_to_know_Llama.ipynb

File diff suppressed because it is too large to display.

+ 4 - 30
getting-started/Prompt_Engineering_with_Llama_3.ipynb

@@ -7,11 +7,13 @@
    "source": [
     "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/quickstart/Prompt_Engineering_with_Llama_3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
     "\n",
-    "# Prompt Engineering with Llama 3.1\n",
+    "# Prompt Engineering with Llama\n",
     "\n",
     "Prompt engineering is using natural language to produce a desired response from a large language model (LLM).\n",
     "\n",
-    "This interactive guide covers prompt engineering & best practices with Llama 3.1."
+    "This interactive guide covers prompt engineering & best practices with Llama.\n",
+    "\n",
+    "Note: The notebook can be extended to any (latest) Llama models."
    ]
   },
   {
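
The notebook cell above frames prompt engineering as shaping natural-language input to steer the model's response. As a minimal sketch (assuming access to a gated Llama instruct checkpoint on Hugging Face and a recent `transformers` release with the chat-aware text-generation pipeline), a prompt can be exercised like this:

```python
# A minimal prompt-engineering sketch, not the notebook's own setup.
# Assumes the Llama instruct model below has been granted and downloaded locally.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model id; any Llama instruct model works
)

messages = [
    {"role": "system", "content": "You answer in exactly one sentence."},
    {"role": "user", "content": "Explain what a special token is in an LLM prompt."},
]

# The chat template (roles, special tokens) is applied by the pipeline's tokenizer,
# so prompt engineering here is about the content of the messages, not the markup.
output = generator(messages, max_new_tokens=96)
print(output[0]["generated_text"])
```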
@@ -74,34 +76,6 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Code Llama is a code-focused LLM built on top of Llama 2 also available in various sizes and finetunes:"
-   ]
-  },
-  {
-   "attachments": {},
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "#### Code Llama\n",
-    "1. `codellama-7b` - code fine-tuned 7 billion parameter model\n",
-    "1. `codellama-13b` - code fine-tuned 13 billion parameter model\n",
-    "1. `codellama-34b` - code fine-tuned 34 billion parameter model\n",
-    "1. `codellama-70b` - code fine-tuned 70 billion parameter model\n",
-    "1. `codellama-7b-instruct` - code & instruct fine-tuned 7 billion parameter model\n",
-    "2. `codellama-13b-instruct` - code & instruct fine-tuned 13 billion parameter model\n",
-    "3. `codellama-34b-instruct` - code & instruct fine-tuned 34 billion parameter model\n",
-    "3. `codellama-70b-instruct` - code & instruct fine-tuned 70 billion parameter model\n",
-    "1. `codellama-7b-python` - Python fine-tuned 7 billion parameter model\n",
-    "2. `codellama-13b-python` - Python fine-tuned 13 billion parameter model\n",
-    "3. `codellama-34b-python` - Python fine-tuned 34 billion parameter model\n",
-    "3. `codellama-70b-python` - Python fine-tuned 70 billion parameter model"
-   ]
-  },
-  {
-   "attachments": {},
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
     "## Getting an LLM\n",
     "\n",
     "Large language models are deployed and accessed in a variety of ways, including:\n",