Beto, 6 months ago
parent
revision
60b5292c0f

+ 1 - 1
recipes/responsible_ai/README.md

@@ -4,7 +4,7 @@ The [Purple Llama](https://github.com/meta-llama/PurpleLlama/) project provides
 
 | Tool/Model | Description | Get Started
 |---|---|---|
-[Llama Guard](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama-guard-3) | Provide guardrailing on inputs and outputs | [Inference](./llama_guard/Llama-Guard-3-MultiModal_inference.ipynb), [Finetuning](./llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb)
+[Llama Guard](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama-guard-3) | Provide guardrailing on inputs and outputs | [Inference](./llama_guard/llama_guard_text_&_vision_inference.ipynb), [Finetuning](./llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb)
 [Prompt Guard](https://llama.meta.com/docs/model-cards-and-prompt-formats/prompt-guard) | Model to safeguard against jailbreak attempts and embedded prompt injections | [Notebook](./prompt_guard/prompt_guard_tutorial.ipynb)
 [Code Shield](https://github.com/meta-llama/PurpleLlama/tree/main/CodeShield) | Tool to safeguard against insecure code generated by the LLM | [Notebook](https://github.com/meta-llama/PurpleLlama/blob/main/CodeShield/notebook/CodeShieldUsageDemo.ipynb)
 

+ 1 - 1
recipes/responsible_ai/llama_guard/README.md

@@ -2,7 +2,7 @@
 <!-- markdown-link-check-disable -->
 Meta Llama Guard is a language model that provides input and output guardrails for LLM inference. For more details and model cards, please visit the [PurpleLlama](https://github.com/meta-llama/PurpleLlama) repository.
 
-This [notebook](Llama-Guard-3-MultiModal_inference.ipynb) shows how to load the models with the transformers library and how to customize the categories.
+This [notebook](llama_guard_text_&_vision_inference.ipynb) shows how to load the models with the transformers library and how to customize the categories.
 
 ## Requirements
 1. Access to the Llama Guard model weights on Hugging Face. To get access, follow the steps described at the top of the model card on [Hugging Face](https://huggingface.co/meta-llama/Llama-Guard-3-1B)

+ 1 - 1
recipes/responsible_ai/llama_guard/llama_guard_text_&_vision_inference.ipynb

@@ -7,7 +7,7 @@
    "source": [
     "# Llama Guard 3 Text & Vision update\n",
     "\n",
-    "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/responsible_ai/llama_guard/Llama-Guard-3-MultiModal_inference.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
+    "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/responsible_ai/llama_guard/llama_guard_text_&_vision_inference.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
     "\n",
    "In this notebook we show simple inference scripts using the [transformers](https://github.com/huggingface/transformers) library from Hugging Face. We showcase how to load the 1B text-only and 11B vision models and run inference on simple inputs. For details on the models, refer to their corresponding model cards:\n",
     "* [Llama Guard 3 1B](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/1B/MODEL_CARD.md)\n",
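The inference flow the notebook describes ends with Llama Guard emitting a short classification string. A minimal sketch of parsing that output, assuming the format documented in the model cards (`safe`, or `unsafe` followed by a line of violated category codes such as `S1,S10`); the helper name is hypothetical, not part of the notebook:

```python
def parse_guard_output(text: str):
    """Split a Llama Guard completion into (is_safe, category_codes).

    Assumes the documented reply format: "safe", or "unsafe" followed
    by a comma-separated line of category codes (e.g. "S1,S10").
    """
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    if not lines or lines[0].lower() != "unsafe":
        return True, []
    # Second line (when present) lists the violated hazard categories.
    cats = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in cats]

print(parse_guard_output("safe"))            # (True, [])
print(parse_guard_output("unsafe\nS1,S10"))  # (False, ['S1', 'S10'])
```

In practice the raw completion would come from `model.generate(...)` decoded with the tokenizer; this parsing step is the same for the 1B text and 11B vision variants since both use the same output convention.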