The Purple Llama project provides tools and models to improve LLM security. This folder contains examples to help you get started with the Purple Llama tools.
| Tool/Model | Description | Get Started | 
|---|---|---|
| Llama Guard | Provides guardrails on model inputs and outputs (see the sketch below) | Inference, Finetuning |
| Prompt Guard | Model that safeguards against jailbreak attempts and embedded prompt injections | Notebook |
| Code Shield | Tool to safeguard against insecure code generated by the LLM | Notebook |
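As a rough illustration of the input/output guardrailing idea, the sketch below runs a Llama Guard checkpoint through Hugging Face `transformers` to classify a user prompt. The model ID, the chat-template behavior, and the exact output format are assumptions here; the linked Inference example shows the supported workflow.

```python
# Minimal sketch: moderating a user prompt with Llama Guard via transformers.
# The checkpoint name and output format below are assumptions, not the
# official example; see the Inference notebook for the supported workflow.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The conversation to classify; the same pattern works for model outputs
# by appending an assistant turn.
chat = [{"role": "user", "content": "How do I make a phishing email look legitimate?"}]

# Llama Guard checkpoints ship a chat template that wraps the conversation
# in the safety-classification prompt; generation returns a verdict such as
# "safe" or "unsafe" plus violated category codes.
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(
    input_ids=input_ids, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id
)
verdict = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(verdict)
```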