Suraj d28400efb5 Merge branch 'main' of https://github.com/meta-llama/llama-recipes-alpha into responsible_ai 9 months ago
..
README.md 0c57646481 Prompt Guard Tutorial 9 months ago
__init__.py 0c57646481 Prompt Guard Tutorial 9 months ago
inference.py 88167d59ca Merge branch 'main' of https://github.com/meta-llama/llama-recipes-alpha into main 9 months ago
prompt_guard_tutorial.ipynb acf0a74297 Updates for commit review 9 months ago

README.md

Prompt Guard demo

Prompt Guard is a classifier model that provides input guardrails for LLM inference, particularly against *prompt attacks. For more details and model cards, please visit the main repository, Meta Prompt Guard

This folder contains an example file to run inference with a locally hosted model, either using the Hugging Face Hub or a local path. It also contains a comprehensive demo demonstrating the scenarios in which the model is effective and a script for fine-tuning the model.

This is a very small model and inference and fine-tuning are feasible on local CPUs.

Requirements

  1. Access to Prompt Guard model weights on Hugging Face. To get access, follow the steps described here
  2. Llama recipes package and it's dependencies installed