|
hai 8 meses | |
---|---|---|
.. | ||
README.md | hai 9 meses | |
__init__.py | hai 9 meses | |
inference.py | hai 8 meses | |
prompt_guard_tutorial.ipynb | hai 8 meses |
Prompt Guard is a classifier model that provides input guardrails for LLM inference, particularly against *prompt attacks. For more details and model cards, please visit the main repository, Meta Prompt Guard
This folder contains an example file to run inference with a locally hosted model, either using the Hugging Face Hub or a local path. It also contains a comprehensive demo demonstrating the scenarios in which the model is effective and a script for fine-tuning the model.
This is a very small model and inference and fine-tuning are feasible on local CPUs.