Commit History

Author SHA1 Message Date
Matthias Reso 2717048197 Add vllm and pytest as dependencies 2 years ago
Matthias Reso c46be5f7a3 Bump version as 0.1.0 has been burned on name registration 2 years ago
Matthias Reso bd9f933c77 Exclude dist folder when creating source package 2 years ago
Matthias Reso 27e56bdfd3 Add llama_finetuning.py script to provide support for torchrun 2 years ago
Matthias Reso 6e327c95e1 Added install section to readme 2 years ago
Matthias Reso 38ac7963a8 Added pyproject.toml 2 years ago
Matthias Reso cf678b9bf0 Adjust imports to package structure + cleaned up imports 2 years ago
Matthias Reso 02428c992a Adding vllm as dependency; fix dep install with hatchling 2 years ago
Matthias Reso c8522eb0ff Remove peft install from src 2 years ago
Matthias Reso 4c9cc7d223 Move modules into separate src folder 2 years ago
Geeta Chauhan fbc513ec47 adding notes how to get the HF models (#151) 2 years ago
Hamid Shojanazeri bcfafd9a0b adding notes how to get the HF models 2 years ago
Geeta Chauhan cfba150311 adding llama code inference (#144) 2 years ago
Hamid Shojanazeri 6105a3f886 clarifying the infilling use-case 2 years ago
Hamid Shojanazeri 8b0008433c fix typos 2 years ago
Hamid Shojanazeri 564ef2f628 remove padding logic 2 years ago
Hamid Shojanazeri 277a292fbc adding autotokenizer 2 years ago
Hamid Shojanazeri 3f2fb9167e adding notes to model not supporting infilling 2 years ago
Hamid Shojanazeri c62428b99c setting defaults of temp and top_p 2 years ago
Hamid Shojanazeri c014ae7cb8 setting BT option to true 2 years ago
Hamid Shojanazeri 4fa44e16d9 add note for python llama not suited for llama infilling 2 years ago
Hamid Shojanazeri b18a186385 removing the option to take prompt from cli 2 years ago
Hamid Shojanazeri 75991d8795 fix the extra line added and remove take prompt from cli 2 years ago
Hamid Shojanazeri d28fc9898a addressing doc comments 2 years ago
Hamid Shojanazeri a234d1fe0c fix typos 2 years ago
Hamid Shojanazeri 2d9f4796e8 fixing the output format 2 years ago
Hamid Shojanazeri 1e8ea70b26 adding llama code inference 2 years ago
Geeta Chauhan 82e05c46e0 fix a bug in the config for use_fast_kernels (#121) 2 years ago
Hamid Shojanazeri 971c079aa6 bugfix: remove duplicate load_peft_model (#124) 2 years ago
hongbo.mo fcc817e923 bugfix: remove duplicate load_peft_model 2 years ago