|
8 mēneši atpakaļ | |
---|---|---|
.. | ||
README.md | 9 mēneši atpakaļ | |
__init__.py | 9 mēneši atpakaļ | |
inference.py | 8 mēneši atpakaļ | |
prompt_guard_tutorial.ipynb | 8 mēneši atpakaļ |
Prompt Guard is a classifier model that provides input guardrails for LLM inference, particularly against *prompt attacks. For more details and model cards, please visit the main repository, Meta Prompt Guard
This folder contains an example file to run inference with a locally hosted model, either using the Hugging Face Hub or a local path. It also contains a comprehensive demo demonstrating the scenarios in which the model is effective and a script for fine-tuning the model.
This is a very small model and inference and fine-tuning are feasible on local CPUs.