|
@@ -61,7 +61,7 @@
|
|
|
"Llama 4 has two variants:\n",
|
|
|
"\n",
|
|
|
"* Scout which has 17B x 16 Experts MoE\n",
|
|
|
- "* Maveric which has 17B x 128 Experts MoE\n",
|
|
|
+ "* Maverick which has 17B x 128 Experts MoE\n",
|
|
|
"\n",
|
|
|
"Please remember to use the instruct models. For our open source friends who like to fine-tune our models, the base models are also made available. We also make Maverick available in FP8 quantization on our Hugging Face org as well as on our website."
|
|
|
]
|