Cyrus Nikolaidis 3a99a54582 Add preprocessor to patch PromptGuard scores for inserted characters (#636) 8 mēneši atpakaļ
..
README.md 0c57646481 Prompt Guard Tutorial 9 mēneši atpakaļ
__init__.py 0c57646481 Prompt Guard Tutorial 9 mēneši atpakaļ
inference.py 3a99a54582 Add preprocessor to patch PromptGuard scores for inserted characters (#636) 8 mēneši atpakaļ
prompt_guard_tutorial.ipynb be19e39442 Fill in one sentence in the prompt guard tutorial. 8 mēneši atpakaļ

README.md

Prompt Guard demo

Prompt Guard is a classifier model that provides input guardrails for LLM inference, particularly against *prompt attacks. For more details and model cards, please visit the main repository, Meta Prompt Guard

This folder contains an example file to run inference with a locally hosted model, either using the Hugging Face Hub or a local path. It also contains a comprehensive demo demonstrating the scenarios in which the model is effective and a script for fine-tuning the model.

This is a very small model and inference and fine-tuning are feasible on local CPUs.

Requirements

  1. Access to Prompt Guard model weights on Hugging Face. To get access, follow the steps described here
  2. Llama recipes package and it's dependencies installed