
Update recipes/3p_integrations/modal/many-llamas-human-eval/README.md

Co-authored-by: Hamid Shojanazeri <hamid.nazeri2010@gmail.com>
Erik Dunteman, 6 months ago
commit c7ee7353af
1 changed file with 1 addition and 1 deletion
    recipes/3p_integrations/modal/many-llamas-human-eval/README.md (+1, -1)

recipes/3p_integrations/modal/many-llamas-human-eval/README.md (+1 -1)

@@ -10,7 +10,7 @@ It seeks to increase model performance not through scaling parameters, but by sc
 
 This experiment was built by the team at [Modal](https://modal.com), and is described in the following blog post:
 
-[Beat GPT-4o at Python by searching with 100 dumb LLaMAs](https://modal.com/blog/llama-human-eval)
+[Beat GPT-4o at Python by searching with 100 small Llamas](https://modal.com/blog/llama-human-eval)
 
The experiment has since been upgraded to use the [Llama 3.2 3B Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) model, and is runnable end-to-end on the Modal serverless platform.