README.md

Inference

This folder contains inference examples for Llama 2. So far, we have provided support for three methods of inference:

  1. The inference.py script provides support for Hugging Face accelerate, PEFT, and FSDP fine-tuned models (a short usage sketch follows this list).

  2. The vLLM_inference.py script takes advantage of vLLM's paged attention for low-latency generation (see the vLLM sketch below).

  3. The hf-text-generation-inference folder contains information on Hugging Face Text Generation Inference (TGI).
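To make method 1 concrete, here is a minimal, illustrative sketch (not the code in inference.py itself) of loading a base Llama 2 checkpoint together with a PEFT (e.g. LoRA) adapter via Hugging Face transformers and generating a completion. The model id and adapter path are placeholders:

```python
# Minimal sketch (not inference.py itself): load a base Llama 2 checkpoint,
# attach a PEFT (e.g. LoRA) adapter produced by fine tuning, and generate.
# The model id and adapter path below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Llama-2-7b-hf"      # placeholder base checkpoint
adapter_dir = "path/to/peft/adapter"         # placeholder fine-tuned adapter

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the fine-tuned adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, adapter_dir)
model.eval()

prompt = "Summarize the following conversation:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```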

For more in-depth information on inference, including inference safety checks and examples, see the inference documentation here.
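As a companion to method 2 above, the following is a small illustrative sketch of offline generation with vLLM, whose paged attention manages the KV cache in fixed-size blocks for efficient, low-latency serving. The model id is a placeholder, and the snippet is not taken from vLLM_inference.py:

```python
# Illustrative sketch (not vLLM_inference.py itself): offline generation with
# vLLM. The model id below is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")   # placeholder model id
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

prompts = ["What is the capital of France?"]
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```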

System Prompt Update

Observed Issue

We received feedback from the community on our prompt template, and we are providing an update to reduce the false refusal rates that were observed. A false refusal occurs when the model incorrectly refuses to answer a question that it should answer, for example because of overly broad instructions to be cautious in how it provides responses.

Updated approach

Based on evaluation and analysis, we recommend removing the system prompt as the default setting. Pull request #105 removes the system prompt as the default option, but still provides an example to enable experimentation for those who want to use it.
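For readers experimenting with the prompt template, here is an illustrative sketch (not the code in chat_utils.py) of how a single-turn Llama 2 chat prompt can be assembled with or without a system prompt. The tag strings follow the published Llama 2 chat format; the helper name format_prompt is our own placeholder:

```python
# Illustrative sketch (not chat_utils.py itself): assemble a single-turn
# Llama 2 chat prompt with or without a system prompt. The tag strings follow
# the published Llama 2 chat format; the helper name is a placeholder.
from typing import Optional

B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_prompt(user_message: str, system_prompt: Optional[str] = None) -> str:
    if system_prompt:
        # Previous default: system instructions wrapped inside the first user turn.
        content = f"{B_SYS}{system_prompt}{E_SYS}{user_message}"
    else:
        # Updated default (per PR #105): no system prompt, just the user message.
        content = user_message
    return f"{B_INST} {content.strip()} {E_INST}"

print(format_prompt("Write a haiku about the ocean."))
```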