
updated readme and model card links

Beto, 6 months ago
parent commit 83fab59f87

+ 4 - 2
recipes/responsible_ai/llama_guard/Llama-Guard-3-MultiModal_inference.ipynb

@@ -7,9 +7,11 @@
    "source": [
     "# Llama Guard 3 Multimodal update\n",
     "\n",
-    "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/responsible_ai/llama_guard/\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
+    "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/responsible_ai/llama_guard/Llama-Guard-3-MultiModal_inference.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
     "\n",
-    "In this notebook we show simple inference scripts using the [transformers](https://github.com/huggingface/transformers) library, from HuggingFace. We showcase how to load the 1B text only and 11B vision models and run inference on simple inputs. \n",
+    "In this notebook we show simple inference scripts using the [transformers](https://github.com/huggingface/transformers) library from Hugging Face. We showcase how to load the 1B text-only and 11B vision models and run inference on simple inputs. For details on the models, refer to their corresponding model cards:\n",
+    "* [Llama Guard 3 1B](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/1B/MODEL_CARD.md)\n",
+    "* [Llama Guard 3 11B-Vision](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/11B-vision/MODEL_CARD.md)\n",
     "\n",
     "## Loading the models\n",
     "\n",

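The notebook's inference cells pass a single-turn conversation to the processor's `apply_chat_template`. A minimal sketch of that message structure, assuming the transformers chat-template convention for mixed image/text content (the helper name is hypothetical):

```python
# Hypothetical helper: builds the conversation structure Llama Guard
# moderation expects, per the transformers chat-template convention.
def build_guard_conversation(text, with_image=False):
    """Return a single-turn conversation for Llama Guard inference."""
    content = []
    if with_image:
        # Image placeholder; the actual PIL image is passed to the
        # processor separately alongside the templated text.
        content.append({"type": "image"})
    content.append({"type": "text", "text": text})
    return [{"role": "user", "content": content}]

conversation = build_guard_conversation("Describe this picture.", with_image=True)
```

For the 1B text-only model the same structure is used without the image entry.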
+ 1 - 5
recipes/responsible_ai/llama_guard/README.md

@@ -2,17 +2,13 @@
 <!-- markdown-link-check-disable -->
 Meta Llama Guard is a language model that provides input and output guardrails for LLM inference. For more details and model cards, please visit the [PurpleLlama](https://github.com/meta-llama/PurpleLlama) repository.
 
-This notebook shows how to load the models with the transformers library and how to customize the categories.
+This [notebook](Llama-Guard-3-MultiModal_inference.ipynb) shows how to load the models with the transformers library and how to customize the categories.
 
 ## Requirements
 1. Access to Llama Guard model weights on Hugging Face. To get access, follow the steps described at the top of the model card on [Hugging Face](https://huggingface.co/meta-llama/Llama-Guard-3-1B)
 2. Llama recipes package and its dependencies [installed](https://github.com/meta-llama/llama-recipes?tab=readme-ov-file#installing)
 3. Pillow package installed
 
-
-## Llama Guard inference script
-To test the new models, follow this [notebook]().
-
 ## Inference Safety Checker
 When running the regular inference script with prompts, Meta Llama Guard is used as a safety checker on both the user prompt and the model output. If both are safe, the result is shown; otherwise an error message is shown containing the word unsafe and a comma-separated list of the violated categories. Meta Llama Guard is always loaded quantized using the Hugging Face Transformers library with bitsandbytes.
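Going by the verdict format the README describes ("safe", or "unsafe" plus a list of violated categories), the checker's response can be parsed as follows; this is a minimal sketch, not the recipe's actual implementation, and the function name is hypothetical:

```python
# Hypothetical parser for a Llama Guard verdict: "safe", or "unsafe"
# followed by the violated category codes (e.g. "unsafe\nS1,S10").
def parse_guard_verdict(raw):
    """Return (is_safe, violated_categories) from a Llama Guard response."""
    lines = [line.strip() for line in raw.strip().splitlines() if line.strip()]
    if not lines or lines[0].lower() == "safe":
        return True, []
    categories = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in categories]

print(parse_guard_verdict("unsafe\nS1,S10"))  # → (False, ['S1', 'S10'])
```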