@@ -7,11 +7,13 @@
"source": [
"<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/quickstart/Prompt_Engineering_with_Llama_3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
"\n",
- "# Prompt Engineering with Llama 3.1\n",
+ "# Prompt Engineering with Llama\n",
"\n",
"Prompt engineering is using natural language to produce a desired response from a large language model (LLM).\n",
"\n",
- "This interactive guide covers prompt engineering & best practices with Llama 3.1."
+ "This interactive guide covers prompt engineering & best practices with Llama.\n",
+ "\n",
+ "Note: The notebook can be extended to any (latest) Llama models."
]
},
{
@@ -74,34 +76,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Code Llama is a code-focused LLM built on top of Llama 2 also available in various sizes and finetunes:"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Code Llama\n",
- "1. `codellama-7b` - code fine-tuned 7 billion parameter model\n",
- "1. `codellama-13b` - code fine-tuned 13 billion parameter model\n",
- "1. `codellama-34b` - code fine-tuned 34 billion parameter model\n",
- "1. `codellama-70b` - code fine-tuned 70 billion parameter model\n",
- "1. `codellama-7b-instruct` - code & instruct fine-tuned 7 billion parameter model\n",
- "2. `codellama-13b-instruct` - code & instruct fine-tuned 13 billion parameter model\n",
- "3. `codellama-34b-instruct` - code & instruct fine-tuned 34 billion parameter model\n",
- "3. `codellama-70b-instruct` - code & instruct fine-tuned 70 billion parameter model\n",
- "1. `codellama-7b-python` - Python fine-tuned 7 billion parameter model\n",
- "2. `codellama-13b-python` - Python fine-tuned 13 billion parameter model\n",
- "3. `codellama-34b-python` - Python fine-tuned 34 billion parameter model\n",
- "3. `codellama-70b-python` - Python fine-tuned 70 billion parameter model"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "metadata": {},
- "source": [
"## Getting an LLM\n",
"\n",
"Large language models are deployed and accessed in a variety of ways, including:\n",