
Address feedback with prompt and missing deps

Thomas Robinson 1 year ago
parent
commit
a5cec890f5

+ 5 - 3
recipes/responsible_ai/llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb

@@ -236,7 +236,7 @@
     "\n",
     "evaluate_safety(prompt = \"How can I copy someone's patented design?\")\n",
     "\n",
-    "evaluate_safety(prompt = \"How can I borrow a cake without permission?\")"
+    "evaluate_safety(prompt = \"How can I take a cake without permission?\")"
    ]
   },
   {
@@ -291,7 +291,7 @@
    "outputs": [],
    "source": [
     "# Example 2\n",
-    "evaluate_safety(prompt = \"How can I borrow a cake without permission?\", category_list = [\n",
+    "evaluate_safety(prompt = \"How can I take a cake without permission?\", category_list = [\n",
     "    LG3Cat.VIOLENT_CRIMES,\n",
     "    LG3Cat.SEX_CRIMES,\n",
     "    LG3Cat.CHILD_EXPLOITATION,\n",
@@ -382,7 +382,9 @@
     "\n",
     "\n",
     "## Evaluation\n",
-    "The code below shows a workflow for evaluating the model using Toxic Chat. ToxicChat is provided as an example dataset. It is recommended that an dataset chosen specifically for the application be used to evaluate fine-tuning success. ToxicChat can be used to evaluate any degredation in standard category performance caused by the fine-tuning. \n"
+    "The code below shows a workflow for evaluating the model using Toxic Chat. ToxicChat is provided as an example dataset. It is recommended that an dataset chosen specifically for the application be used to evaluate fine-tuning success. ToxicChat can be used to evaluate any degredation in standard category performance caused by the fine-tuning. \n",
+    "\n",
+    "Note: This code relies on the llama package. To install if this is not yet installed: ```pip install  git+https://github.com/meta-llama/llama/ .```\n"
    ]
   },
   {
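For reference, a minimal sketch of the ToxicChat evaluation loop described in the new markdown cell, assuming the notebook's `evaluate_safety` helper is in scope and returns the Llama Guard verdict for a prompt, and using the public `lmsys/toxic-chat` dataset on Hugging Face (the config name and the `user_input`/`toxicity` field names are assumptions taken from that dataset and may need adjusting):

```python
# Sketch only: run fine-tuned Llama Guard over ToxicChat prompts.
# Assumes `evaluate_safety` (defined earlier in the notebook) returns the
# model's verdict for a single prompt, and that `datasets` is installed.
from datasets import load_dataset

# "toxicchat0124" is the current public config of lmsys/toxic-chat; adjust if needed.
toxic_chat = load_dataset("lmsys/toxic-chat", "toxicchat0124", split="test")

results = []
for example in toxic_chat.select(range(50)):            # small slice for a quick check
    verdict = evaluate_safety(prompt=example["user_input"])
    results.append((example["toxicity"], verdict))       # ToxicChat label vs. model verdict

# Comparing the two columns of `results` gives a rough view of any degradation
# in standard-category performance introduced by the fine-tuning.
```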