
Benchmarks

  • inference - a folder containing benchmark scripts that perform throughput analysis for Llama model inference across various backends, including on-prem, cloud, and on-device.
  • llm_eval_harness - a folder introducing lm-evaluation-harness, a tool for evaluating Llama models (including quantized models) with a focus on quality. It also contains a recipe that reproduces Meta's Llama 3.1 evaluation metrics using lm-evaluation-harness, along with instructions for reproducing Hugging Face Open LLM Leaderboard v2 metrics.