
LG notebook - Repair broken import and add note about dependency

Thomas Robinson, 7 months ago
commit 628ed09d7b

+ 4 - 2
recipes/responsible_ai/llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb

@@ -473,7 +473,9 @@
     "\n",
     "\n",
     "## Evaluation\n",
-    "The code below shows a workflow for evaluating the model using Toxic Chat. ToxicChat is provided as an example dataset. It is recommended that an dataset chosen specifically for the application be used to evaluate fine-tuning success. ToxicChat can be used to evaluate any degredation in standard category performance caused by the fine-tuning. \n"
+    "The code below shows a workflow for evaluating the model using Toxic Chat. ToxicChat is provided as an example dataset. It is recommended that an dataset chosen specifically for the application be used to evaluate fine-tuning success. ToxicChat can be used to evaluate any degredation in standard category performance caused by the fine-tuning. \n",
+    "\n",
+    "Note: This code relies on the llama package. To install if this is not yet installed: ```git clone https://github.com/meta-llama/llama/;cd llama;pip install -e .```\n"
    ]
   },
   {
@@ -485,7 +487,7 @@
     "from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n",
     "\n",
     "from llama_recipes.inference.prompt_format_utils import build_default_prompt, create_conversation, LlamaGuardVersion\n",
-    "from llama.llama.generation import Llama\n",
+    "from llama.generation import Llama\n",
     "\n",
     "from typing import List, Optional, Tuple, Dict\n",
     "from enum import Enum\n",