radu / LLamaRecipes
Mirror of https://github.com/facebookresearch/llama-recipes.git
Commit 576e574e31 (parent a7b449234a)
update readme
Kai Wu, 7 months ago
17 files changed, 26 additions and 26 deletions
Diff stats (additions / deletions per file):
+2  / -2   tools/benchmarks/llm_eval_harness/README.md
+24 / -24  tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/eval_config.yaml
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/bbh_3shot_cot.yaml
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/utils.py
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/gpqa_0shot_cot.yaml
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/utils.py
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/ifeval.yaml
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/utils.py
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/math_hard_0shot_cot.yaml
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/utils.py
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_instruct.yaml
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_pretrain.yaml
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_instruct.yaml
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_pretrain.yaml
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/utils.py
+0  / -0   tools/benchmarks/llm_eval_harness/meta_eval/prepare_meta_eval.py
tools/benchmarks/llm_eval_harness/README.md (+2 / -2)
File diff suppressed because it is too large

tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md (+24 / -24)
File diff suppressed because it is too large

Renamed files:
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/eval_config.yaml → tools/benchmarks/llm_eval_harness/meta_eval/eval_config.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/bbh/bbh_3shot_cot.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/bbh_3shot_cot.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/bbh/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/gpqa_cot/gpqa_0shot_cot.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/gpqa_0shot_cot.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/gpqa_cot/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/ifeval/ifeval.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/ifeval.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/ifeval/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/math_hard/math_hard_0shot_cot.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/math_hard_0shot_cot.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/math_hard/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/meta_instruct.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_instruct.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/meta_pretrain.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_pretrain.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/mmlu_pro/mmlu_pro_5shot_cot_instruct.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_instruct.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/mmlu_pro/mmlu_pro_5shot_cot_pretrain.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_pretrain.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/mmlu_pro/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/prepare_meta_eval.py → tools/benchmarks/llm_eval_harness/meta_eval/prepare_meta_eval.py