@@ -7,7 +7,7 @@
"source": [
"# Llama Guard 3 Text & Vision update\n",
"\n",
- "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/responsible_ai/llama_guard/llama_guard_text_&_vision_inference.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
+ "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/responsible_ai/llama_guard/llama_guard_text_and_vision_inference.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
"\n",
"In this notebook we show simple inference scripts using the [transformers](https://github.com/huggingface/transformers) library, from HuggingFace. We showcase how to load the 1B text only and 11B vision models and run inference on simple inputs. For details on the models, refer to their corresponging model cards:\n",
"* [Llama Guard 3 1B](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/1B/MODEL_CARD.md)\n",