Commit history

Author SHA1 Message Date
Matthias Reso 8b0a233c1a Use new chat format in custom dataset 1 year ago
Hamid Shojanazeri df03fd4b12 Recipe to add a new language to Llama2 (#429) 1 year ago
Rahul A R cc1029bcf1 update wordlist.txt 1 year ago
Rahul A R 664429b726 Merge branch 'main' of github.com:rahul-sarvam/llama-recipes 1 year ago
Rahul A R e98f6de80d typo 1 year ago
Kai Wu 03f1ca7817 fixed some typo to pass spellcheck 1 year ago
Matthias Reso 83fae41195 Add test for chat completion formatting 1 year ago
Kai Wu d0b7a20c89 finetuning readme updated 1 year ago
Hamid Shojanazeri e3d750f49f Update wordlist.txt 1 year ago
Rahul A R 09028bf893 addressing Hamid's comments 1 year ago
Matthias Reso 6d9d48d619 Use apply_chat_template instead of custom functions 1 year ago
Matthias Reso 5efea160a2 Adapt test_finetuning to new model 1 year ago
rahul-sarvam eb7ef4225f Update recipes/multilingual/README.md 1 year ago
rahul-sarvam f1f335a591 Update recipes/multilingual/README.md 1 year ago
rahul-sarvam 47556ce0a6 Update recipes/multilingual/README.md 1 year ago
Matthias Reso 739483f262 Adjust test_grammar_datasets to stable sort 1 year ago
Matthias Reso b96e435cda Adjust test_samsum_dataset to second model 1 year ago
Matthias Reso fac41298b0 Adapt test_custom_dataset to new model 1 year ago
Matthias Reso 960014a3bb Fix test_custom_dataset by introducing a stable sort algorithm 1 year ago
Matthias Reso b5583b31d5 Adapt test_grammar_dataset to new model 1 year ago
Matthias Reso 17a6d16289 Test batching for both llama versions 1 year ago
Kai Wu 7b1a9413d2 fixed a typo 1 year ago
Kai Wu 41434dc825 formatted and removed duplicated or unused function get_total_flops() and byte2mb() 1 year ago
Kai Wu f2e80bae22 created a FlopMeasure class on top of FlopCounterMode instead of keep of copy of our own tflop_counter.py 1 year ago
Matthias Reso a414ca6a57 Update chat format for llama3 1 year ago
Kai Wu 69e46887b4 handling incorrect profiling early stop caused by max_train_steps and add profiler.step() for each train step 1 year ago
Matthias Reso 113ea18bf1 Replace LlamaTokenizer with AutoTokenizer 1 year ago
Beto 5979dbe996 Merging local with remote 1 year ago
Kai Wu 34e0bf4c6e second draft of this feature, seems to be working now 1 year ago
Beto d4cbfa1cc1 Merging upstream llama-recipes to current repo 1 year ago