# Inference

This folder contains inference examples for Llama 2. So far, we have provided support for three methods of inference:

  1. The inference.py script provides support for models fine-tuned with Hugging Face Accelerate, PEFT, and FSDP (a rough sketch follows this list).

  2. The vLLM_inference.py script takes advantage of vLLM's paged attention for low-latency generation (see the sketch at the end of this README).

  3. The hf-text-generation-inference folder contains information on Hugging Face Text Generation Inference (TGI).
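
As a rough illustration of the first option, the sketch below loads a base Llama 2 checkpoint with Hugging Face transformers, attaches a PEFT (LoRA) adapter, and generates a completion. The model name, adapter directory, and generation settings are placeholders for illustration; this is not the inference.py script itself, which handles additional concerns such as safety checks.

```python
# Minimal sketch (not the inference.py script itself): load a base Llama 2
# checkpoint with transformers, attach a PEFT/LoRA adapter, and generate.
# The base model name and adapter directory below are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_name = "meta-llama/Llama-2-7b-hf"   # placeholder base checkpoint
adapter_dir = "path/to/peft_adapter"           # placeholder PEFT output dir

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_dir)
model.eval()

prompt = "Summarize the following conversation:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    # Generation settings here are illustrative only.
    output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```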

For more in-depth information on inference, including inference safety checks and examples, see the inference documentation here.
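
As a rough sketch of the vLLM path (separate from vLLM_inference.py), the snippet below builds an offline vLLM engine and generates completions for a small batch of prompts; the model name and sampling parameters are illustrative assumptions, not values taken from the script.

```python
# Minimal offline vLLM sketch (not vLLM_inference.py itself); the model name
# and sampling parameters are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-hf")  # placeholder checkpoint

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

prompts = [
    "Explain paged attention in one sentence.",
    "Write a haiku about low-latency inference.",
]

# vLLM batches the prompts and manages the KV cache via paged attention.
outputs = llm.generate(prompts, sampling_params)
for out in outputs:
    print(out.outputs[0].text)
```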