# Trust and Safety with Llama

The Purple Llama project provides tools and models to improve LLM security. This folder contains examples to get started with Purple Llama tools.

| Tool/Model | Description | Get Started |
|---|---|---|
| Llama Guard | Provides guardrails on model inputs and outputs | Inference, Finetuning |
| Prompt Guard | Model to safeguard against jailbreak attempts and embedded prompt injections | Notebook |
| Code Shield | Tool to safeguard against insecure code generated by the LLM | Notebook |
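As a sketch of how input/output guardrailing fits together, the snippet below wraps a chat model with a Llama Guard check on both the user prompt and the model reply. The `call_llm` and `call_llama_guard` callables are hypothetical stand-ins for real endpoint calls, not part of any library here; the `safe` / `unsafe` plus category-code reply format is the one Llama Guard models emit.

```python
def parse_guard_verdict(raw: str):
    """Parse a Llama Guard reply: 'safe', or 'unsafe' followed by a line
    of violated category codes (e.g. 'unsafe\nS1,S10')."""
    lines = [ln.strip() for ln in raw.strip().splitlines() if ln.strip()]
    if not lines or lines[0].lower() == "safe":
        return True, []
    categories = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in categories]

def guarded_chat(user_msg: str, call_llm, call_llama_guard) -> str:
    """Run Llama Guard on the user input, and again on the model output,
    refusing whenever either check comes back 'unsafe'."""
    ok, cats = parse_guard_verdict(call_llama_guard([("user", user_msg)]))
    if not ok:
        return f"Input blocked (categories: {', '.join(cats)})."
    reply = call_llm(user_msg)
    ok, cats = parse_guard_verdict(
        call_llama_guard([("user", user_msg), ("assistant", reply)])
    )
    if not ok:
        return f"Response blocked (categories: {', '.join(cats)})."
    return reply
```

The two-pass design mirrors the "inputs and outputs" wording above: the same classifier screens the conversation before and after the main model runs.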

## Running on hosted APIs

The notebooks input_output_guardrails_with_llama.ipynb, Purple_Llama_Anyscale.ipynb, and Purple_Llama_OctoAI.ipynb contain examples of running Meta Llama Guard on cloud-hosted endpoints.
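For a rough idea of what those notebooks do, the sketch below builds an OpenAI-style chat-completions request asking a hosted Llama Guard model to classify a user message. The endpoint URL, model name, and `API_KEY` environment variable are placeholders, not real values; substitute whatever your provider (e.g. Anyscale or OctoAI) documents.

```python
import json
import os
import urllib.request

# Hypothetical endpoint and model name -- replace with your provider's values.
ENDPOINT = "https://api.example.com/v1/chat/completions"
MODEL = "meta-llama/Llama-Guard-3-8B"

def build_guard_request(user_msg: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request that asks the
    hosted Llama Guard model to classify a user message."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_msg}],
        "temperature": 0.0,  # classification should be deterministic
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('API_KEY', '')}",
        },
        method="POST",
    )

# To actually send it (requires a live endpoint and credentials):
# response = urllib.request.urlopen(build_guard_request("some user message"))
# The reply content is 'safe', or 'unsafe' plus violated category codes.
```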