The Prompt Migration toolkit helps you assess and adapt prompts across different language models, ensuring consistent performance and reliability. It includes benchmarking capabilities and evaluation tools to measure the effectiveness of prompt migrations.
## Project Structure

- `notebooks/`: Jupyter notebooks for interactive prompt migration examples
  - `harness.ipynb`: Main notebook demonstrating the prompt migration workflow
- `benchmarks/`: Tools and scripts for performance evaluation
- `environment.yml`: Conda environment specification with all required dependencies

## Conda Environment
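The provided `environment.yml` can be used to create the environment. A likely setup sequence follows; the environment name `prompt-migration` is taken from the activation step later in this README:

```shell
# Create the Conda environment from the provided spec, then activate it.
# The environment name "prompt-migration" is assumed from the activation
# step shown later in this README.
conda env create -f environment.yml
conda activate prompt-migration
```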
## Setting Up vLLM for Inference

If you plan to use vLLM for model inference, install it with:

```bash
pip install vllm
```

To serve a large model (for example, Meta's Llama 3.3 70B Instruct), you might run:

```bash
vllm serve meta-llama/Llama-3.3-70B-Instruct --tensor-parallel-size=4
```

Adjust the model name and `--tensor-parallel-size` according to your hardware and parallelization needs.
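Once serving, vLLM exposes an OpenAI-compatible HTTP API. As a minimal sketch, the following builds the JSON body you would POST to the server's `/v1/chat/completions` endpoint; the default port 8000 and the example message are assumptions:

```python
import json

# Request body for vLLM's OpenAI-compatible /v1/chat/completions endpoint
# (served at http://localhost:8000 by default). The message content and
# sampling parameters here are illustrative only.
payload = {
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [
        {"role": "user", "content": "Summarize prompt migration in one sentence."}
    ],
    "max_tokens": 128,
    "temperature": 0.0,
}
body = json.dumps(payload)
print(body)
```

You can POST this body with any HTTP client, or point an OpenAI-style SDK at the server's base URL instead of hand-building requests.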
## Accessing Hugging Face Datasets

If you need to work with private or gated Hugging Face datasets, log in first:

```bash
huggingface-cli login
```

## Running the Notebook

1. **Activate the Conda environment:**

   ```bash
   conda activate prompt-migration
   ```

2. **Launch Jupyter:**

   ```bash
   jupyter notebook
   ```

3. **Open the main notebook:** Navigate to `notebooks/harness.ipynb` in your browser to get started.
4. **Configure the MMLU benchmark:** In the notebook, modify the benchmark configuration to use MMLU:

   ```python
   from benchmarks import llama_mmlu  # Other benchmarks are also available under `benchmarks/`
   benchmark = llama_mmlu
   ```
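The `benchmark.metric` passed to the optimizer in the next step is a callable that scores a prediction against a gold answer. As a minimal sketch (not the actual `llama_mmlu` implementation), an exact-match metric could look like this; the function name and the dict-based example format are assumptions for illustration:

```python
# Hypothetical exact-match metric of the kind a benchmark module might
# expose; the real `llama_mmlu.metric` may differ in signature and scoring.
def exact_match_metric(example, prediction, trace=None):
    gold = str(example["answer"]).strip().lower()
    pred = str(prediction["answer"]).strip().lower()
    return gold == pred

# Illustrative MMLU-style example: the gold answer is choice "B".
example = {"question": "2 + 2 = ?", "answer": "B"}
print(exact_match_metric(example, {"answer": " b "}))  # True
print(exact_match_metric(example, {"answer": "C"}))   # False
```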
5. **Run optimization:** Choose an optimization level in the notebook and run the optimizer:

   ```python
   optimizer = dspy.MIPROv2(metric=benchmark.metric, auto="medium")
   optimized_program = optimizer.compile(student, trainset=trainset)

   # View the optimized prompt and/or demos
   print("BEST PROMPT:\n", optimized_program.signature.instructions)
   print("BEST EXAMPLES:\n", optimized_program.predict.demos)
   ```
6. **Run base and optimized prompt on meta-evals:**
Take the optimized prompt and examples and update your working directory:
- Navigate to `llama-recipes/end-to-end-use-cases/benchmarks/llm_eval_harness/meta_eval/work_dir/mmlu/utils.py`
- Open a new terminal and set up the meta-evals environment following the README in `/meta_eval`
- Update the `prompts` list with your base and optimized prompts as the first two items:

  ```python
  prompts = ["base_prompt", "optimized_prompt"]  # Your base prompt and optimized prompt
  ```

- Update the `prompt` index in the template as such:

  ```python
  template = f"<|start_header_id|>user<|end_header_id|>{prompt[0]}. Question: {question}\n {choice}\n<|eot_id|>\n\n<|start_header_id|>assistant<|end_header_id|>"
  ```
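To sanity-check the template's shape, you can render it locally with placeholder values; the question, choices, and variable names below are illustrative stand-ins:

```python
# Render the chat template with placeholder values to verify its shape.
# `prompt`, `question`, and `choice` are illustrative stand-ins for the
# values used in the meta-evals working directory.
prompt = ["base_prompt", "optimized_prompt"]
question = "What is the capital of France?"
choice = "A. Paris  B. London  C. Rome  D. Berlin"

template = (
    f"<|start_header_id|>user<|end_header_id|>{prompt[0]}. "
    f"Question: {question}\n {choice}\n<|eot_id|>\n\n"
    f"<|start_header_id|>assistant<|end_header_id|>"
)
print(template)
```

Switching `prompt[0]` to `prompt[1]` renders the optimized variant, which is how the base and optimized prompts are compared in the evals.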
Use the `benchmarks/` directory to evaluate your prompt migrations.

## License

This project is part of the Llama Recipes collection. Please refer to the main repository's license for usage terms.