radu / LLamaRecipes
mirror of https://github.com/facebookresearch/llama-recipes.git
update readme
Kai Wu
1 year ago
parent a7b449234a
commit 576e574e31
17 files changed, 26 insertions, 26 deletions
+2  -2   tools/benchmarks/llm_eval_harness/README.md
+24 -24  tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/eval_config.yaml
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/bbh_3shot_cot.yaml
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/utils.py
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/gpqa_0shot_cot.yaml
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/utils.py
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/ifeval.yaml
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/utils.py
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/math_hard_0shot_cot.yaml
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/utils.py
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_instruct.yaml
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_pretrain.yaml
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_instruct.yaml
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_pretrain.yaml
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/utils.py
+0  -0   tools/benchmarks/llm_eval_harness/meta_eval/prepare_meta_eval.py
+2 -2  tools/benchmarks/llm_eval_harness/README.md
File diff suppressed because it is too large
+24 -24  tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md
File diff suppressed because it is too large
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/eval_config.yaml → tools/benchmarks/llm_eval_harness/meta_eval/eval_config.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/bbh/bbh_3shot_cot.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/bbh_3shot_cot.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/bbh/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/gpqa_cot/gpqa_0shot_cot.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/gpqa_0shot_cot.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/gpqa_cot/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/ifeval/ifeval.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/ifeval.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/ifeval/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/math_hard/math_hard_0shot_cot.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/math_hard_0shot_cot.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/math_hard/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/meta_instruct.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_instruct.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/meta_pretrain.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_pretrain.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/mmlu_pro/mmlu_pro_5shot_cot_instruct.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_instruct.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/mmlu_pro/mmlu_pro_5shot_cot_pretrain.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_pretrain.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/mmlu_pro/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/prepare_meta_eval.py → tools/benchmarks/llm_eval_harness/meta_eval/prepare_meta_eval.py