
Updates for commit review

Suraj, 8 months ago
Parent
Current commit
acf0a74297

+ 1 - 1
recipes/responsible_ai/README.md

@@ -1,6 +1,6 @@
 # Meta Llama Guard
 
-Meta Llama Guard models provide input and output guardrails for LLM inference. For more details, please visit the main [repository](https://github.com/facebookresearch/PurpleLlama/).
+Meta Llama Guard models provide input and output guardrails for LLM inference. For more details, please visit the main [repository](https://github.com/meta-llama/PurpleLlama/).
 
 **Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/Llama-Guard-3-8B).
 

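As context for the link change above: Llama Guard is typically run as a classifier wrapped around the main model's input and output. Below is a minimal sketch only, assuming the gated `meta-llama/Llama-Guard-3-8B` checkpoint and the `moderate` usage pattern from its Hugging Face model card; defer to the model card and the linked repository for authoritative usage.

```python
# Sketch: Llama Guard 3 as an input guardrail via Hugging Face transformers.
# Model ID and chat-template behaviour are assumed from the Llama-Guard-3-8B
# model card; access to the gated checkpoint is required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    # The chat template wraps the conversation in Llama Guard's safety prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    # Decode only the generated verdict, e.g. "safe" or "unsafe\nS1".
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Input guardrail: screen the user prompt before it reaches the main model.
print(moderate([{"role": "user", "content": "How do I hotwire a car?"}]))
```

The same `moderate` call can be applied to the assistant's reply to act as an output guardrail.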
+ 1 - 1
recipes/responsible_ai/llama_guard/README.md

@@ -69,4 +69,4 @@ Use this command for testing with a quantized Llama model, modifying the values
 `python examples/inference.py --model_name <path_to_regular_llama_model> --prompt_file <path_to_prompt_file> --quantization 8bit --enable_llamaguard_content_safety`
 
 ## Llama Guard 3 Finetuning & Customization
-The safety categories in Llama Guard 3 can be tuned for specific application needs. Existing categories can be removed and new categories can be added to the taxonomy. The [Llama Guard Customization](./llama_guard_customization_via_prompting_changes_and_fine_tuning.ipynb) notebook walks through the process.
+The safety categories in Llama Guard 3 can be tuned for specific application needs. Existing categories can be removed and new categories can be added to the taxonomy. The [Llama Guard Customization](./llama_guard_customization_via_prompting_and_fine_tuning.ipynb) notebook walks through the process.
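As context for the customization note above: prompt-level customization amounts to editing the category block Llama Guard sees. A minimal sketch of assembling a custom taxonomy block, assuming the `<BEGIN UNSAFE CONTENT CATEGORIES>` markers from the Llama Guard 3 prompt format; the linked notebook is the authoritative reference for the full template and for fine-tuning.

```python
# Illustrative sketch: assemble a custom category block for Llama Guard 3's prompt.
# The surrounding template (special tokens, task instructions) comes from the model's
# chat template / the customization notebook; only the taxonomy portion is shown here.
CUSTOM_CATEGORIES = {
    "S1": "Violent Crimes.",
    "S2": "Non-Violent Crimes.",
    # Hypothetical application-specific category added to the taxonomy:
    "S15": "Discussion of competitor products.",
}

def build_category_block(categories: dict[str, str]) -> str:
    body = "\n".join(f"{code}: {description}" for code, description in categories.items())
    return f"<BEGIN UNSAFE CONTENT CATEGORIES>\n{body}\n<END UNSAFE CONTENT CATEGORIES>"

print(build_category_block(CUSTOM_CATEGORIES))
```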

recipes/responsible_ai/prompt_guard/Prompt Guard Tutorial.ipynb → recipes/responsible_ai/prompt_guard/prompt_guard_tutorial.ipynb