inference - a folder that contains benchmark scripts for throughput analysis of Llama model inference on various backends, including on-prem, cloud, and on-device (a minimal throughput sketch is shown after this list).
llm_eval_harness - a folder that contains a tool for evaluating the quality of fine-tuned Llama models, including quantized models (see the evaluation sketch at the end of this list).
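
The sketch below illustrates the kind of throughput measurement the inference benchmark scripts perform on a single backend: generate a fixed number of tokens and report tokens per second. The checkpoint name, prompt, and token budget are illustrative assumptions, not values taken from the scripts.

```python
# Minimal throughput sketch: time a single generate() call and report tokens/s.
# Checkpoint, prompt, and max_new_tokens are placeholder assumptions.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain the benefits of on-device inference in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

max_new_tokens = 128
start = time.perf_counter()
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
elapsed = time.perf_counter() - start

# Count only the newly generated tokens, not the prompt tokens.
generated = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tokens/s")
```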
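
As its name suggests, the llm_eval_harness folder builds on EleutherAI's lm-evaluation-harness. The sketch below shows how a fine-tuned (or quantized) checkpoint is typically evaluated with that harness's Python API; the checkpoint path and task list are illustrative assumptions, not the folder's actual defaults.

```python
# Hedged sketch using the upstream lm-evaluation-harness API (lm_eval >= 0.4).
# The checkpoint path and tasks below are hypothetical examples.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=path/to/fine-tuned-llama,dtype=bfloat16",
    tasks=["hellaswag", "arc_easy"],
    batch_size=8,
)
print(results["results"])  # per-task metrics such as accuracy
```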