Prompt Guard is a classifier model that provides input guardrails for LLM inference, particularly against *prompt attacks*. For more details and model cards, please visit the main repository, Meta Prompt Guard.
This folder contains an example script for running inference with a locally hosted model, loaded either from the Hugging Face Hub or from a local path. It also contains a comprehensive demo of the scenarios in which the model is effective, and a script for fine-tuning the model.

This is a very small model, so both inference and fine-tuning are feasible on local CPUs.
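As a rough sketch of what CPU inference looks like, the snippet below scores a prompt with the classifier via Hugging Face `transformers`. The model ID `meta-llama/Prompt-Guard-86M`, the 512-token truncation, and the assumption that the last logit corresponds to the attack/jailbreak label are all taken from common usage of the model card, not from this folder's scripts; substitute your own local path or checkpoint as needed.

```python
# Hedged sketch: score a prompt with Prompt Guard using transformers.
# The model ID and label ordering below are assumptions -- check the model
# card for the checkpoint you actually use.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def get_jailbreak_score(logits: torch.Tensor, temperature: float = 1.0) -> float:
    """Softmax the classifier logits (optionally temperature-scaled) and
    return the probability of the last class, assumed to be the attack label."""
    probs = torch.softmax(logits / temperature, dim=-1)
    return probs[..., -1].item()


def score_prompt(text: str, model_id: str = "meta-llama/Prompt-Guard-86M") -> float:
    """Load the model (small enough for CPU) and score a single prompt."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    return get_jailbreak_score(logits)
```

Because the scoring step is just a softmax over logits, you can tune a probability threshold (or temperature) on your own traffic to trade off false positives against missed attacks.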