
add wordlist and fix deadlink

Kai Wu, 9 months ago
parent
commit 9f4fb4b116
2 files changed, 2 insertions(+), 1 deletion(-)
  1. .github/scripts/spellcheck_conf/wordlist.txt (+1, -0)
  2. tools/benchmarks/llm_eval_harness/README.md (+1, -1)

.github/scripts/spellcheck_conf/wordlist.txt (+1, -0)

@@ -1457,3 +1457,4 @@ MuSR
 Multistep
 multistep
 algorithmically
+asymptote

tools/benchmarks/llm_eval_harness/README.md (+1, -1)

@@ -131,7 +131,7 @@ lm_eval --model vllm \
 ```
 To use vllm, do `pip install lm_eval[vllm]`. For a full list of supported vLLM configurations, please reference our [vLLM integration](https://github.com/EleutherAI/lm-evaluation-harness/blob/e74ec966556253fbe3d8ecba9de675c77c075bce/lm_eval/models/vllm_causallms.py) and the vLLM documentation.
 
-vLLM occasionally differs in output from Huggingface. We treat Huggingface as the reference implementation, and provide a [script](./scripts/model_comparator.py) for checking the validity of vllm results against HF.
+vLLM occasionally differs in output from Huggingface. We treat Huggingface as the reference implementation, and provide a script for checking the validity of vllm results against HF.
 
 > [!Tip]
 > For fastest performance, we recommend using `--batch_size auto` for vLLM whenever possible, to leverage its continuous batching functionality!
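The README context above covers installing the vLLM extra and using `--batch_size auto`. As an illustrative sketch only (the model name, task, and `tensor_parallel_size` below are placeholder assumptions, not part of this commit), a typical invocation combining those two points could look like:

```
# Install the vLLM extra for the evaluation harness
pip install lm_eval[vllm]

# Illustrative run; the model and task are placeholders.
# --batch_size auto enables vLLM's continuous batching, per the tip above.
lm_eval --model vllm \
    --model_args pretrained=meta-llama/Llama-3.1-8B-Instruct,tensor_parallel_size=1 \
    --tasks hellaswag \
    --batch_size auto
```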