Benchmarks

  • inference - a folder containing benchmark scripts that perform throughput analysis for Llama model inference on various backends, including on-prem, cloud, and on-device.
  • llm_eval_harness - a folder that introduces lm-evaluation-harness, a tool for evaluating the quality of Llama models, including quantized models. It also includes a recipe that calculates Llama 3.1 evaluation metrics using lm-evaluation-harness, and instructions for calculating HuggingFace Open LLM Leaderboard v2 metrics.
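
As a quick illustration, a typical lm-evaluation-harness run against a Hugging Face checkpoint looks roughly like the following sketch. The checkpoint name, task, and batch size here are illustrative assumptions, and exact flags can vary between harness versions, so check `lm_eval --help` for your installed release:

```shell
# Install the harness (pinning a version is advisable; the CLI may change).
pip install lm-eval

# Evaluate a Llama model on MMLU. The checkpoint and task below are
# examples only -- substitute the model and tasks you actually need.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct \
    --tasks mmlu \
    --batch_size 8
```

Note that gated models such as Llama require authenticating with Hugging Face (e.g. via `huggingface-cli login`) before the weights can be downloaded.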