radu / LLamaRecipes
mirror of https://github.com/facebookresearch/llama-recipes.git
Commit: update readme
Author: Kai Wu, 10 months ago
parent a7b449234a, commit 576e574e31
17 files changed, 26 insertions and 26 deletions
+2 −2   tools/benchmarks/llm_eval_harness/README.md
+24 −24 tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/eval_config.yaml
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/bbh_3shot_cot.yaml
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/utils.py
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/gpqa_0shot_cot.yaml
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/utils.py
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/ifeval.yaml
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/utils.py
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/math_hard_0shot_cot.yaml
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/utils.py
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_instruct.yaml
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_pretrain.yaml
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_instruct.yaml
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_pretrain.yaml
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/utils.py
+0 −0   tools/benchmarks/llm_eval_harness/meta_eval/prepare_meta_eval.py
tools/benchmarks/llm_eval_harness/README.md (+2 −2): file diff suppressed because it is too large
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/README.md (+24 −24): file diff suppressed because it is too large
Renamed files:
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/eval_config.yaml → tools/benchmarks/llm_eval_harness/meta_eval/eval_config.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/bbh/bbh_3shot_cot.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/bbh_3shot_cot.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/bbh/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/bbh/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/gpqa_cot/gpqa_0shot_cot.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/gpqa_0shot_cot.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/gpqa_cot/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/gpqa_cot/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/ifeval/ifeval.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/ifeval.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/ifeval/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/ifeval/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/math_hard/math_hard_0shot_cot.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/math_hard_0shot_cot.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/math_hard/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/math_hard/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/meta_instruct.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_instruct.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/meta_pretrain.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/meta_pretrain.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/mmlu_pro/mmlu_pro_5shot_cot_instruct.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_instruct.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/mmlu_pro/mmlu_pro_5shot_cot_pretrain.yaml → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/mmlu_pro_5shot_cot_pretrain.yaml
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/meta_template/mmlu_pro/utils.py → tools/benchmarks/llm_eval_harness/meta_eval/meta_template/mmlu_pro/utils.py
tools/benchmarks/llm_eval_harness/meta_eval_reproduce/prepare_meta_eval.py → tools/benchmarks/llm_eval_harness/meta_eval/prepare_meta_eval.py
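The files above show zero added and zero deleted lines because Git detects them as pure renames: when content is unchanged, moving a directory contributes nothing to the insertion/deletion counts. A minimal sketch of that pattern in a scratch repository (the file content here is a placeholder, not from the actual commit):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
# Recreate one file under the old directory name and commit it.
mkdir -p tools/benchmarks/llm_eval_harness/meta_eval_reproduce
echo "placeholder: true" > tools/benchmarks/llm_eval_harness/meta_eval_reproduce/eval_config.yaml
git add -A
git -c user.email=you@example.com -c user.name=you commit -qm "init"
# Move the whole directory, as this commit did for meta_eval_reproduce.
git mv tools/benchmarks/llm_eval_harness/meta_eval_reproduce \
       tools/benchmarks/llm_eval_harness/meta_eval
git -c user.email=you@example.com -c user.name=you commit -qm "rename"
# The stat shows the rename arrow with 0 changed lines.
git show --stat --format= HEAD
```

Running `git show --stat` on the rename commit prints entries like `{meta_eval_reproduce => meta_eval}/eval_config.yaml | 0`, matching the +0 −0 rows in the listing above.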