Keita Watanabe 0e54f5634a use AutoTokenizer instead of LlamaTokenizer 1 year ago
__init__.py 207d2f80e9 Make code-llama and hf-tgi inference runnable as module 2 years ago
chat_utils.py 6d9d48d619 Use apply_chat_template instead of custom functions 1 year ago
checkpoint_converter_fsdp_hf.py 0e54f5634a use AutoTokenizer instead of LlamaTokenizer 1 year ago
llm.py a404c9249c Notebook to demonstrate using llama and llama-guard together using OctoAI 1 year ago
model_utils.py d51d2cce9c adding sdpa for flash attn 1 year ago
prompt_format_utils.py bcdb5b31fe Fixing quantization config. Removing prints 1 year ago
safety_utils.py f63ba19827 Fixing tokenizer used for llama 3. Changing quantization configs on safety_utils. 1 year ago