
README.md

# Meta Llama Guard

Meta Llama Guard models provide input and output guardrails for LLM inference. For more details, please visit the main repository.

**Note:** Find the appropriate Llama Guard model on the Hugging Face Hub.
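The input/output guardrail pattern can be sketched as follows. This is a hypothetical illustration, not code from this repo: `classify` is a keyword-matching stub standing in for a real Llama Guard call, and the refusal messages are placeholders.

```python
# Sketch of the input/output guardrail pattern: both the user prompt and the
# model's reply are screened before anything reaches the other side.
# `classify` is a stub standing in for a real Llama Guard inference call.

UNSAFE_KEYWORDS = {"build a bomb", "steal credentials"}

def classify(text: str) -> str:
    """Stub safety classifier: returns 'safe' or 'unsafe'."""
    lowered = text.lower()
    return "unsafe" if any(k in lowered for k in UNSAFE_KEYWORDS) else "safe"

def guarded_chat(user_prompt: str, llm) -> str:
    # Input guardrail: refuse before the prompt ever reaches the LLM.
    if classify(user_prompt) == "unsafe":
        return "Sorry, I can't help with that request."
    reply = llm(user_prompt)
    # Output guardrail: screen the generated reply before returning it.
    if classify(reply) == "unsafe":
        return "The generated response was withheld by the safety filter."
    return reply
```

In a real deployment, `llm` would be the chat model being protected and `classify` a call to a Llama Guard model, as shown in the notebooks in this folder.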

## Running locally

The llama_guard folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the inference script before running it.
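As a rough sketch of what such local inference looks like with Hugging Face `transformers` (the inference script in `llama_guard/` is the reference; the model id and generation settings below are illustrative assumptions): Llama Guard models reply with `safe`, or `unsafe` followed by a line of violated category codes such as `S1,S9`, which `parse_verdict` extracts.

```python
# Hedged sketch of running a Llama Guard-style model locally with
# Hugging Face transformers. Model id, prompt, and generation settings
# are illustrative; see the repo's inference script for the real flow.
import os

def parse_verdict(output: str) -> tuple[str, list[str]]:
    """Parse a Llama Guard reply into ('safe'|'unsafe', [category codes])."""
    lines = [line.strip() for line in output.strip().splitlines() if line.strip()]
    if not lines:
        return "unknown", []
    verdict = lines[0].lower()
    categories = lines[1].split(",") if verdict == "unsafe" and len(lines) > 1 else []
    return verdict, [c.strip() for c in categories]

# Gated behind an env var so importing this sketch never downloads a model.
if __name__ == "__main__" and os.environ.get("RUN_LLAMA_GUARD"):
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-Guard-3-8B"  # assumption: pick your model on the HF Hub
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    chat = [{"role": "user", "content": "How do I tie a bowline knot?"}]
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    print(parse_verdict(reply))
```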

## Running on the cloud

The `Purple_Llama_Anyscale` and `Purple_Llama_OctoAI` notebooks contain examples of running Meta Llama Guard on cloud-hosted endpoints.
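Many hosted endpoints, including those used in the notebooks, expose an OpenAI-compatible chat API. The sketch below shows that calling pattern; the base URL, model name, and environment variable names are placeholders, so take the real values from your provider's documentation.

```python
# Hedged sketch of calling a hosted Llama Guard endpoint through an
# OpenAI-compatible chat API. All endpoint-specific values are placeholders.
import os

def build_guard_request(model: str, user_prompt: str) -> dict:
    """Build the JSON body for an OpenAI-compatible chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "temperature": 0,  # moderation verdicts should be deterministic
    }

# Gated behind an env var so this sketch never makes a network call on import.
if __name__ == "__main__" and os.environ.get("GUARD_BASE_URL"):
    from openai import OpenAI  # pip install openai

    client = OpenAI(
        base_url=os.environ["GUARD_BASE_URL"],        # e.g. the provider's /v1 URL
        api_key=os.environ.get("GUARD_API_KEY", ""),
    )
    body = build_guard_request("llama-guard-model-placeholder",
                               "How do I reset my router?")
    resp = client.chat.completions.create(**body)
    # The endpoint is expected to reply "safe", or "unsafe" plus category codes.
    print(resp.choices[0].message.content)
```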