
Update tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md

Co-authored-by: Hamid Shojanazeri <hamid.nazeri2010@gmail.com>
Kai Wu 8 months ago
parent
commit
c32517b882
1 changed file with 0 additions and 1 deletions
  1. +0 −1
      tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md

+ 0 - 1
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md

@@ -5,7 +5,6 @@ As Meta Llama models gain popularity, evaluating these models has become increas
 
 ## Disclaimer
 
-### Important Notes
 
 1. **This tutorial is not the official implementation** of Meta Llama evaluation. It is based on public third-party libraries, and the implementation may differ slightly from our internal evaluation, leading to minor differences in the reproduced numbers.
 2. **Model Compatibility**: This tutorial is specifically for Llama 3 based models, as our prompts include Meta Llama 3 special tokens, e.g. `<|start_header_id|>user<|end_header_id|>`. It will not work with models that are not based on Llama 3.
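The compatibility note in the diff context above refers to Llama 3's chat special tokens. As a minimal sketch of why the prompts are Llama 3 specific, the snippet below assembles a chat prompt by hand using the publicly documented Llama 3 token layout (`<|begin_of_text|>`, `<|start_header_id|>…<|end_header_id|>`, `<|eot_id|>`); the exact prompts used by the evaluation tutorial may differ, and the helper name here is hypothetical.

```python
def build_llama3_prompt(user_message: str, system_message: str = "") -> str:
    """Wrap messages in Llama 3 header/end-of-turn special tokens.

    Hypothetical helper for illustration; models not trained on these
    special tokens will not interpret the resulting prompt correctly.
    """
    prompt = "<|begin_of_text|>"
    if system_message:
        prompt += (
            "<|start_header_id|>system<|end_header_id|>\n\n"
            f"{system_message}<|eot_id|>"
        )
    prompt += (
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
    return prompt

print(build_llama3_prompt("What is the capital of France?"))
```

A model without these tokens in its vocabulary would treat them as ordinary text, which is why the tutorial restricts itself to Llama 3 based models.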