This folder contains inference examples for Llama 2. So far, we have provided support for three methods of inference:
1. The inference.py script provides support for Hugging Face accelerate and PEFT fine-tuned models.
2. The vLLM_inference.py script takes advantage of vLLM's paged attention concept for low latency.
3. The hf-text-generation-inference folder contains information on Hugging Face Text Generation Inference (TGI).
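The chat scripts in this folder (chat_completion.py, chat_utils.py, chats.json) format multi-turn dialogs into Llama 2's chat prompt convention. As a minimal sketch, the published Llama 2 tag strings can be assembled like this; the helper name `format_prompt` and the single-turn structure are illustrative, not the exact chat_utils.py implementation:

```python
# Llama 2 chat tag strings, per the published Llama 2 convention.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_prompt(system: str, user: str) -> str:
    """Illustrative helper: wrap a system message and one user turn
    in Llama 2 chat tags (not the exact chat_utils.py code)."""
    return f"{B_INST} {B_SYS}{system}{E_SYS}{user} {E_INST}"

prompt = format_prompt("You are a helpful assistant.", "Summarize this dialog.")
print(prompt)
```

The resulting string is what a chat-tuned Llama 2 checkpoint expects at its input; multi-turn chats repeat the `[INST] ... [/INST]` pair per user/assistant exchange.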
For more in-depth information on inference, including inference safety checks and examples, see the inference documentation here.