Prompt Guard demo

Prompt Guard is a classifier model that provides input guardrails for LLM inference, particularly against prompt attacks. For more details and model cards, please visit the main repository, Meta Prompt Guard.

This folder contains an example file for running inference with a locally hosted model, loaded either from the Hugging Face Hub or from a local path. It also contains a comprehensive demo of the scenarios in which the model is effective, and a script for fine-tuning the model.

This is a very small model, so both inference and fine-tuning are feasible on local CPUs.
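A minimal sketch of running the classifier locally with the `transformers` library. The model id `meta-llama/Prompt-Guard-86M` and the use of `AutoModelForSequenceClassification` are assumptions based on the model being a standard Hugging Face sequence classifier; see `inference.py` in this folder for the actual reference implementation.

```python
# Sketch: scoring a prompt with a locally hosted Prompt Guard model.
# Assumes the model is a standard Hugging Face sequence classifier and
# that you have access to its (gated) weights.
import torch
from torch.nn.functional import softmax


def load_prompt_guard(model_id="meta-llama/Prompt-Guard-86M"):
    """Load the classifier and tokenizer from the Hub or a local path.

    The model id above is an assumption; any local checkpoint path works too.
    """
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    return model, tokenizer


def score_text(model, tokenizer, text, device="cpu"):
    """Return per-class probabilities for a single input string."""
    inputs = tokenizer(
        text, return_tensors="pt", truncation=True, max_length=512
    ).to(device)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Softmax over the class dimension; squeeze away the batch dimension.
    return softmax(logits, dim=-1).squeeze(0)
```

Usage would look like `model, tokenizer = load_prompt_guard()` followed by `score_text(model, tokenizer, "Ignore all previous instructions.")`, which returns a probability per class.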

Requirements

  1. Access to Prompt Guard model weights on Hugging Face. To get access, follow the steps described here
  2. The llama-recipes package and its dependencies installed
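The requirements above might be satisfied as follows; the pip package name `llama-recipes` and the use of an `HF_TOKEN` environment variable for non-interactive authentication are assumptions about your setup.

```shell
# Install the llama-recipes package (assumed pip name) and its dependencies.
pip install llama-recipes

# Authenticate once so the gated Prompt Guard weights can be downloaded.
# Assumes your Hugging Face access token is exported as HF_TOKEN.
huggingface-cli login --token "$HF_TOKEN"
```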