Directory contents:

- llama_guard/
- prompt_guard/
- Purple_Llama_Anyscale.ipynb
- Purple_Llama_OctoAI.ipynb
- README.md
- code_shield_usage_demo.ipynb
- input_output_guardrails_with_llama.ipynb


# Meta Llama Guard

Meta Llama Guard and Meta Llama Guard 2 are new models that provide input and output guardrails for LLM inference. For more details, please visit the main repository.
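Conceptually, the guardrail sits on both sides of an LLM call: the user prompt is screened before it reaches the model, and the model's reply is screened before it is returned. The sketch below is only a schematic of that flow; `moderate` and `generate` are hypothetical callables standing in for a Llama Guard classification and a plain Llama completion, not functions from this repo.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]

def guarded_chat(
    user_prompt: str,
    moderate: Callable[[List[Message]], str],  # hypothetical: returns "safe" or "unsafe ..."
    generate: Callable[[str], str],            # hypothetical: regular Llama chat completion
) -> str:
    conversation: List[Message] = [{"role": "user", "content": user_prompt}]

    # Input guardrail: screen the prompt before it reaches the model.
    if not moderate(conversation).startswith("safe"):
        return "Sorry, I can't help with that request."

    reply = generate(user_prompt)
    conversation.append({"role": "assistant", "content": reply})

    # Output guardrail: screen the model's reply before returning it.
    if not moderate(conversation).startswith("safe"):
        return "Sorry, I can't share that response."

    return reply
```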

**Note:** Please find the right model on the Hugging Face Hub.

## Running locally

The llama_guard folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the inference script before running it.
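For a sense of what a local run looks like, here is a minimal sketch using the Hugging Face transformers API rather than the repo's inference script. The model id, dtype, and generation settings are assumptions; substitute the checkpoint referenced in the model card above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"  # assumed checkpoint; use the one from the model card
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(device)

# A test prompt, analogous to the ones added directly to the inference script.
chat = [{"role": "user", "content": "How do I reset my router's admin password?"}]

# Llama Guard's chat template wraps the conversation in its safety-taxonomy prompt.
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
output = model.generate(input_ids=input_ids, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)

# The completion starts with "safe" or "unsafe" (followed by the violated category codes).
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```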

## Running on the cloud

The notebooks Purple_Llama_Anyscale and Purple_Llama_OctoAI contain examples for running Meta Llama Guard on cloud-hosted endpoints.
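Both notebooks follow the same basic pattern: send the conversation to a hosted Llama Guard model through an OpenAI-compatible chat-completions API. The sketch below illustrates that pattern only; the base URL, environment variable, and model name are placeholders, and the real values are in the notebook for the provider you use.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.example/v1",  # placeholder: the provider's endpoint URL
    api_key=os.environ["CLOUD_PROVIDER_API_KEY"],    # placeholder: the provider's API key
)

response = client.chat.completions.create(
    model="llama-guard",  # placeholder: the model name the provider exposes
    messages=[{"role": "user", "content": "How do I reset my router's admin password?"}],
    max_tokens=50,
)

# As with the local run, the reply starts with "safe" or "unsafe".
print(response.choices[0].message.content)
```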