Sanyam Bhutani 3 months ago
parent
commit
ef99b32e48
23 changed files with 110 additions and 107 deletions
  1. + 2 - 2   3p-integrations/crusoe/vllm-fp8/README.md
  2. + 2 - 2   3p-integrations/langchain/README.md
  3. + 1 - 1   3p-integrations/llama_on_prem.md
  4. + 1 - 1   3p-integrations/tgi/README.md
  5. + 62 - 59 3p-integrations/using_externally_hosted_llms.ipynb
  6. + 2 - 2   end-to-end-use-cases/NotebookLlama/README.md
  7. + 2 - 2   end-to-end-use-cases/RAFT-Chatbot/README.md
  8. + 2 - 2   end-to-end-use-cases/benchmarks/llm_eval_harness/meta_eval/README.md
  9. + 1 - 1   end-to-end-use-cases/coding/text2sql/quickstart.ipynb
  10. + 1 - 1  end-to-end-use-cases/customerservice_chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb
  11. + 1 - 1  end-to-end-use-cases/github_triage/README.md
  12. + 1 - 1  end-to-end-use-cases/github_triage/walkthrough.ipynb
  13. + 1 - 1  end-to-end-use-cases/responsible_ai/code_shield_usage_demo.ipynb
  14. + 1 - 1  end-to-end-use-cases/responsible_ai/llama_guard/README.md
  15. + 11 - 11 end-to-end-use-cases/responsible_ai/llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb
  16. + 1 - 1  end-to-end-use-cases/responsible_ai/prompt_guard/README.md
  17. + 1 - 1  getting-started/finetuning/finetune_vision_model.md
  18. + 1 - 1  getting-started/finetuning/finetuning.py
  19. + 8 - 8  getting-started/finetuning/quickstart_peft_finetuning.ipynb
  20. + 3 - 3  getting-started/inference/local_inference/inference.py
  21. + 1 - 1  getting-started/inference/mobile_inference/android_inference/README.md
  22. + 2 - 2  src/llama_cookbook/data/llama_guard/README.md
  23. + 2 - 2  src/llama_cookbook/utils/config_utils.py

+ 2 - 2
3p-integrations/crusoe/vllm-fp8/README.md

@@ -23,8 +23,8 @@ source $HOME/.cargo/env
 
 Now, clone the recipes and navigate to this tutorial. Initialize the virtual environment and install dependencies:
 ```bash
-git clone https://github.com/meta-llama/llama-recipes.git
-cd llama-recipes/recipes/3p_integrations/crusoe/vllm-fp8/
+git clone https://github.com/meta-llama/llama-cookbook.git
+cd llama-cookbook/recipes/3p_integrations/crusoe/vllm-fp8/
 uv add vllm setuptools
 ```
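For orientation, here is a minimal sketch of the offline vLLM call this tutorial builds toward; the FP8 checkpoint id and the `quantization="fp8"` flag are illustrative assumptions, not values taken from this recipe:

```python
# Hedged sketch: serve an FP8-quantized Llama checkpoint with vLLM's offline API.
# The model id below is a placeholder; any FP8-quantized checkpoint works the same way.
from vllm import LLM, SamplingParams

llm = LLM(model="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8", quantization="fp8")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain FP8 quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```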
 

+ 2 - 2
3p-integrations/langchain/README.md

@@ -8,7 +8,7 @@ Agents can empower Llama 3 with important new capabilities. In particular, we wi
 
 Tool-calling agents with LangGraph use two nodes: (1) a node with an LLM decides which tool to invoke based upon the user question and outputs the tool name and arguments to use; (2) the tool name and arguments are passed to a tool node, which calls the tool with the specified arguments and returns the result to the LLM.
 
-![Screenshot 2024-05-30 at 10 48 58 AM](https://github.com/rlancemartin/llama-recipes/assets/122662504/a2c2ec40-2c7b-486e-9290-33b6da26c304)
+![Screenshot 2024-05-30 at 10 48 58 AM](https://github.com/rlancemartin/llama-cookbook/assets/122662504/a2c2ec40-2c7b-486e-9290-33b6da26c304)
 
 Our first notebook, `langgraph-tool-calling-agent`, shows how to build our agent mentioned above using LangGraph.
 
@@ -31,7 +31,7 @@ We implement each approach as a control flow in LangGraph:
 
 We will build from CRAG (blue, below) to Self-RAG (green) and finally to Adaptive RAG (red):
 
-![langgraph_rag_agent_](https://github.com/rlancemartin/llama-recipes/assets/122662504/ec4aa1cd-3c7e-4cd1-a1e7-7deddc4033a8)
+![langgraph_rag_agent_](https://github.com/rlancemartin/llama-cookbook/assets/122662504/ec4aa1cd-3c7e-4cd1-a1e7-7deddc4033a8)
 
 --- 
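As a rough sketch of the two-node loop described above (the local Llama 3 binding via `langchain-ollama` and the toy tool are assumptions, not code from the notebooks):

```python
# Hedged sketch of a LangGraph tool-calling agent: one LLM node picks the tool,
# one ToolNode executes it, and control loops back to the LLM.
from langchain_core.tools import tool
from langchain_ollama import ChatOllama            # assumed local Llama 3 backend
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

@tool
def magic_function(x: int) -> int:
    """Apply a magic function to an integer."""
    return x + 2

llm = ChatOllama(model="llama3.1").bind_tools([magic_function])

def call_model(state: MessagesState):
    # Node 1: the LLM decides whether to call a tool and with which arguments.
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode([magic_function]))    # Node 2: runs the chosen tool
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", tools_condition)  # route to "tools" or end
graph.add_edge("tools", "agent")
app = graph.compile()

result = app.invoke({"messages": [("user", "What is magic_function(3)?")]})
print(result["messages"][-1].content)
```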
  

File diff suppressed because it is too large
+ 1 - 1
3p-integrations/llama_on_prem.md


+ 1 - 1
3p-integrations/tgi/README.md

@@ -9,7 +9,7 @@ In case the model was fine tuned with LoRA method we need to merge the weights o
 The script takes the base model, the PEFT weights folder, and an output directory as arguments:
 
 ```
-python -m llama_recipes.recipes.3p_integration.tgi.merge_lora_weights --base_model llama-7B --peft_model ft_output --output_dir data/merged_model_output
+python -m llama_cookbook.recipes.3p_integration.tgi.merge_lora_weights --base_model llama-7B --peft_model ft_output --output_dir data/merged_model_output
 ```
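The merge can also be sketched directly with the `peft` library; the base-model id is a placeholder, while `ft_output` and `data/merged_model_output` mirror the command above:

```python
# Hedged sketch of the LoRA merge step using peft directly (ids/paths are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "ft_output")   # folder with the LoRA adapter
merged = model.merge_and_unload()                      # fold adapter weights into the base

merged.save_pretrained("data/merged_model_output")
AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf").save_pretrained("data/merged_model_output")
```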
 
 ## Step 1: Serving the model

File diff suppressed because it is too large
+ 62 - 59
3p-integrations/using_externally_hosted_llms.ipynb


+ 2 - 2
end-to-end-use-cases/NotebookLlama/README.md

@@ -39,8 +39,8 @@ You'll need your Hugging Face access token, which you can get at your Settings p
 - First, please install the requirements from [here]() by running inside the folder:
 
 ```
-git clone https://github.com/meta-llama/llama-recipes
-cd llama-recipes/recipes/quickstart/NotebookLlama/
+git clone https://github.com/meta-llama/llama-cookbook
+cd llama-cookbook/recipes/quickstart/NotebookLlama/
 pip install -r requirements.txt
 ```
 

File diff suppressed because it is too large
+ 2 - 2
end-to-end-use-cases/RAFT-Chatbot/README.md


+ 2 - 2
end-to-end-use-cases/benchmarks/llm_eval_harness/meta_eval/README.md

@@ -25,8 +25,8 @@ Given those differences, the numbers from this recipe can not be compared to the
 Please install lm-evaluation-harness and our llama-cookbook repo by following the steps below:
 
 ```
-git clone git@github.com:meta-llama/llama-recipes.git
-cd llama-recipes
+git clone git@github.com:meta-llama/llama-cookbook.git
+cd llama-cookbook
 pip install -U pip setuptools
 pip install -e .
 pip install lm-eval[math,ifeval,sentencepiece,vllm]==0.4.3

+ 1 - 1
end-to-end-use-cases/coding/text2sql/quickstart.ipynb

@@ -1,5 +1,5 @@
 {
- "cells": [
+ "cells": [
   {
    "cell_type": "markdown",
    "id": "e8cba0b6",

+ 1 - 1
end-to-end-use-cases/customerservice_chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb

@@ -402,7 +402,7 @@
     "In this example, we will be deploying a Meta Llama 3 8B chat HuggingFace model with the Text-generation-inference framework on-premises.  \n",
     "This would allow us to directly wire the API server to our chatbot.  \n",
     "There are alternative solutions to deploy Meta Llama 3 models on-premises as your local API server.  \n",
-    "You can find our complete guide [here](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/model_servers/llama-on-prem.md)."
+    "You can find our complete guide [here](https://github.com/meta-llama/llama-cookbook/blob/main/recipes/inference/model_servers/llama-on-prem.md)."
    ]
   },
   {
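A minimal sketch of querying such a locally served TGI endpoint from the chatbot side (URL, port, and prompt are placeholders):

```python
# Hedged sketch: call a local TGI server through huggingface_hub's InferenceClient.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")      # the on-premises TGI endpoint
answer = client.text_generation(
    "What is your return policy?", max_new_tokens=200, temperature=0.2
)
print(answer)
```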

+ 1 - 1
end-to-end-use-cases/github_triage/README.md

@@ -32,7 +32,7 @@ pip install -r requirements.txt
 ### Running the Tool
 
 ```bash
-python triage.py --repo_name='meta-llama/llama-recipes' --start_date='2024-08-14' --end_date='2024-08-27'
+python triage.py --repo_name='meta-llama/llama-cookbook' --start_date='2024-08-14' --end_date='2024-08-27'
 ```
 
 ### Output

+ 1 - 1
end-to-end-use-cases/github_triage/walkthrough.ipynb

@@ -1,4 +1,4 @@
-{
+{
   "cells": [
     {
       "cell_type": "code",

+ 1 - 1
end-to-end-use-cases/responsible_ai/code_shield_usage_demo.ipynb

@@ -151,7 +151,7 @@
     "import os\n",
     "import getpass\n",
     "\n",
-    "from llama_recipes.inference.llm import TOGETHER, OPENAI, ANYSCALE\n",
+    "from llama_cookbook.inference.llm import TOGETHER, OPENAI, ANYSCALE\n",
     "\n",
     "if \"EXTERNALLY_HOSTED_LLM_TOKEN\" not in os.environ:\n",
     "    os.environ[\"EXTERNALLY_HOSTED_LLM_TOKEN\"] = getpass.getpass(prompt=\"Provide token for LLM provider\")\n",

+ 1 - 1
end-to-end-use-cases/responsible_ai/llama_guard/README.md

@@ -6,7 +6,7 @@ This [notebook](llama_guard_text_and_vision_inference.ipynb) shows how to load t
 
 ## Requirements
 1. Access to Llama guard model weights on Hugging Face. To get access, follow the steps described in the top of the model card in [Hugging Face](https://huggingface.co/meta-llama/Llama-Guard-3-1B)
-2. Llama recipes package and its dependencies [installed](https://github.com/meta-llama/llama-recipes?tab=readme-ov-file#installing)
+2. The llama-cookbook package and its dependencies [installed](https://github.com/meta-llama/llama-cookbook?tab=readme-ov-file#installing)
 3. Pillow package installed
 
 ## Inference Safety Checker

+ 11 - 11
end-to-end-use-cases/responsible_ai/llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb

@@ -33,7 +33,7 @@
     "\n",
     "Llama Guard is provided with a reference taxonomy explained on [this page](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-guard-3), where the prompting format is also explained. \n",
     "\n",
-    "The functions below combine already existing [prompt formatting code in llama-recipes](https://github.com/meta-llama/llama-recipes/blob/main/src/llama_recipes/inference/prompt_format_utils.py) with custom code to aid in the custimization of the taxonomy. "
+    "The functions below combine already existing [prompt formatting code in llama-cookbook](https://github.com/meta-llama/llama-cookbook/blob/main/src/llama_cookbook/inference/prompt_format_utils.py) with custom code to aid in the customization of the taxonomy. "
    ]
   },
   {
@@ -80,7 +80,7 @@
    ],
    "source": [
     "from enum import Enum\n",
-    "from llama_recipes.inference.prompt_format_utils import  LLAMA_GUARD_3_CATEGORY, SafetyCategory, AgentType\n",
+    "from llama_cookbook.inference.prompt_format_utils import  LLAMA_GUARD_3_CATEGORY, SafetyCategory, AgentType\n",
     "from typing import List\n",
     "\n",
     "class LG3Cat(Enum):\n",
@@ -158,7 +158,7 @@
     }
    ],
    "source": [
-    "from llama_recipes.inference.prompt_format_utils import build_custom_prompt, create_conversation, PROMPT_TEMPLATE_3, LLAMA_GUARD_3_CATEGORY_SHORT_NAME_PREFIX\n",
+    "from llama_cookbook.inference.prompt_format_utils import build_custom_prompt, create_conversation, PROMPT_TEMPLATE_3, LLAMA_GUARD_3_CATEGORY_SHORT_NAME_PREFIX\n",
     "from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n",
     "from typing import List, Tuple\n",
     "from enum import Enum\n",
@@ -463,13 +463,13 @@
     "\n",
     "To add additional datasets\n",
     "\n",
-    "1. Copy llama-recipes/src/llama_recipes/datasets/toxicchat_dataset.py \n",
+    "1. Copy llama-cookbook/src/llama_cookbook/datasets/toxicchat_dataset.py \n",
     "2. Modify the file to change the dataset used\n",
     "3. Add references to the new dataset in \n",
-    "    - llama-recipes/src/llama_recipes/configs/datasets.py\n",
-    "    - llama_recipes/datasets/__init__.py\n",
-    "    - llama_recipes/datasets/toxicchat_dataset.py\n",
-    "    - llama_recipes/utils/dataset_utils.py\n",
+    "    - llama-cookbook/src/llama_cookbook/configs/datasets.py\n",
+    "    - llama_cookbook/datasets/__init__.py\n",
+    "    - llama_cookbook/datasets/toxicchat_dataset.py\n",
+    "    - llama_cookbook/utils/dataset_utils.py\n",
     "\n",
     "\n",
     "## Evaluation\n",
@@ -484,7 +484,7 @@
    "source": [
     "from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n",
     "\n",
-    "from llama_recipes.inference.prompt_format_utils import build_default_prompt, create_conversation, LlamaGuardVersion\n",
+    "from llama_cookbook.inference.prompt_format_utils import build_default_prompt, create_conversation, LlamaGuardVersion\n",
     "from llama.llama.generation import Llama\n",
     "\n",
     "from typing import List, Optional, Tuple, Dict\n",
@@ -726,7 +726,7 @@
     "#     \"unsafe_content\": [\"O1\"]\n",
     "# }\n",
     "# ```\n",
-    "from llama_recipes.datasets.toxicchat_dataset import get_llamaguard_toxicchat_dataset\n",
+    "from llama_cookbook.datasets.toxicchat_dataset import get_llamaguard_toxicchat_dataset\n",
     "validation_data = get_llamaguard_toxicchat_dataset(None, None, \"train\", return_jsonl = True)[0:100]\n",
     "run_validation(validation_data, AgentType.USER, Type.HF, load_in_8bit = False, load_in_4bit = True)"
    ]
@@ -757,7 +757,7 @@
    "outputs": [],
    "source": [
     "model_id = \"meta-llama/Llama-Guard-3-8B\"\n",
-    "from llama_recipes import finetuning\n",
+    "from llama_cookbook import finetuning\n",
     "\n",
     "finetuning.main(\n",
     "    model_name = model_id,\n",

+ 1 - 1
end-to-end-use-cases/responsible_ai/prompt_guard/README.md

@@ -8,4 +8,4 @@ This is a very small model and inference and fine-tuning are feasible on local C
 
 ## Requirements
 1. Access to Prompt Guard model weights on Hugging Face. To get access, follow the steps described [here](https://github.com/facebookresearch/PurpleLlama/tree/main/Prompt-Guard#download)
-2. Llama recipes package and it's dependencies [installed](https://github.com/meta-llama/llama-recipes?tab=readme-ov-file#installing)
+2. The llama-cookbook package and its dependencies [installed](https://github.com/meta-llama/llama-cookbook?tab=readme-ov-file#installing)

+ 1 - 1
getting-started/finetuning/finetune_vision_model.md

@@ -1,7 +1,7 @@
 ## Llama 3.2 Vision Models Fine-Tuning Recipe
 This recipe steps you through how to finetune a Llama 3.2 vision model on the OCR VQA task using the [OCRVQA](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron/viewer/ocrvqa?row=0) dataset.
 
-**Disclaimer**: As our vision models already have a very good OCR ability, here we use the OCRVQA dataset only for demonstration purposes of the required steps for fine-tuning our vision models with llama-recipes.
+**Disclaimer**: As our vision models already have very good OCR ability, we use the OCRVQA dataset here only to demonstrate the steps required to fine-tune our vision models with llama-cookbook.
 
 ### Fine-tuning steps
 

+ 1 - 1
getting-started/finetuning/finetuning.py

@@ -2,7 +2,7 @@
 # This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
 
 import fire
-from llama_recipes.finetuning import main
+from llama_cookbook.finetuning import main
 
 if __name__ == "__main__":
     fire.Fire(main)
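Because the script hands `main` to `fire.Fire`, every `train_config` field can be passed as a CLI flag; the equivalent programmatic call looks roughly like this (keyword names follow `train_config`, the values are illustrative):

```python
# Illustrative only: calling the same entry point from Python instead of the CLI.
from llama_cookbook.finetuning import main

main(
    model_name="meta-llama/Meta-Llama-3.1-8B",  # example model id
    use_peft=True,
    peft_method="lora",
    dataset="samsum_dataset",
    output_dir="peft-output",
)
```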

+ 8 - 8
getting-started/finetuning/quickstart_peft_finetuning.ipynb

@@ -31,17 +31,17 @@
   "source": [
    "### Step 0: Install pre-requirements and convert checkpoint\n",
    "\n",
-    "We need to have llama-recipes and its dependencies installed for this notebook. Additionally, we need to log in with the huggingface_cli and make sure that the account is able to to access the Meta Llama weights."
+    "We need to have llama-cookbook and its dependencies installed for this notebook. Additionally, we need to log in with the huggingface-cli and make sure that the account is able to access the Meta Llama weights."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# uncomment if running from Colab T4\n",
-    "# ! pip install llama-recipes ipywidgets\n",
+    "# ! pip install llama-cookbook ipywidgets\n",
     "\n",
     "# import huggingface_hub\n",
     "# huggingface_hub.login()"
@@ -59,7 +59,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": null,
    "metadata": {},
    "outputs": [
     {
@@ -80,7 +80,7 @@
    "source": [
     "import torch\n",
     "from transformers import LlamaForCausalLM, AutoTokenizer\n",
-    "from llama_recipes.configs import train_config as TRAIN_CONFIG\n",
+    "from llama_cookbook.configs import train_config as TRAIN_CONFIG\n",
     "\n",
     "train_config = TRAIN_CONFIG()\n",
     "train_config.model_name = \"meta-llama/Meta-Llama-3.1-8B\"\n",
@@ -221,8 +221,8 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from llama_recipes.configs.datasets import samsum_dataset\n",
-    "from llama_recipes.utils.dataset_utils import get_dataloader\n",
+    "from llama_cookbook.configs.datasets import samsum_dataset\n",
+    "from llama_cookbook.utils.dataset_utils import get_dataloader\n",
     "\n",
     "samsum_dataset.trust_remote_code = True\n",
     "\n",
@@ -278,7 +278,7 @@
    "outputs": [],
    "source": [
     "import torch.optim as optim\n",
-    "from llama_recipes.utils.train_utils import train\n",
+    "from llama_cookbook.utils.train_utils import train\n",
     "from torch.optim.lr_scheduler import StepLR\n",
     "\n",
     "model.train()\n",

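Pieced together, the renamed imports above are used roughly as follows (a sketch assuming the same `llama_cookbook` modules; field values are illustrative):

```python
# Hedged sketch mirroring the notebook cells shown above, under the new package name.
from transformers import AutoTokenizer
from llama_cookbook.configs import train_config as TRAIN_CONFIG
from llama_cookbook.configs.datasets import samsum_dataset
from llama_cookbook.utils.dataset_utils import get_dataloader

train_config = TRAIN_CONFIG()
train_config.model_name = "meta-llama/Meta-Llama-3.1-8B"
train_config.context_length = 1024
train_config.batch_size_training = 1

tokenizer = AutoTokenizer.from_pretrained(train_config.model_name)
tokenizer.pad_token = tokenizer.eos_token

samsum_dataset.trust_remote_code = True
train_dataloader = get_dataloader(tokenizer, samsum_dataset, train_config)
eval_dataloader = get_dataloader(tokenizer, samsum_dataset, train_config, "val")
```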
+ 3 - 3
getting-started/inference/local_inference/inference.py

@@ -10,9 +10,9 @@ import fire
 import torch
 
 from accelerate.utils import is_xpu_available
-from llama_recipes.inference.model_utils import load_model, load_peft_model
+from llama_cookbook.inference.model_utils import load_model, load_peft_model
 
-from llama_recipes.inference.safety_utils import AgentType, get_safety_checker
+from llama_cookbook.inference.safety_utils import AgentType, get_safety_checker
 from transformers import AutoTokenizer
 
 
@@ -176,7 +176,7 @@ def main(
                 )
             ],
             title="Meta Llama3 Playground",
-            description="https://github.com/meta-llama/llama-recipes",
+            description="https://github.com/meta-llama/llama-cookbook",
         ).queue().launch(server_name="0.0.0.0", share=share_gradio)
 
 

+ 1 - 1
getting-started/inference/mobile_inference/android_inference/README.md

@@ -103,7 +103,7 @@ Connect your phone to your development machine. On OSX, you'll be prompted on th
 
 ## Building the Android Package with MLC
 
-First edit the file under `android/MLCChat/mlc-package-config.json` and with the [mlc-package-config.json](./mlc-package-config.json) in llama-recipes.
+First, edit the file under `android/MLCChat/mlc-package-config.json`, updating it to match the [mlc-package-config.json](./mlc-package-config.json) provided in llama-cookbook.
 
 To understand what these JSON fields mean you can refer to this [documentation](https://llm.mlc.ai/docs/deploy/android.html#step-2-build-runtime-and-model-libraries).
 

+ 2 - 2
src/llama_cookbook/data/llama_guard/README.md

@@ -10,9 +10,9 @@ The finetuning_data_formatter script provides classes and methods for formatting
 
 ## Running the script
 
-1. Clone the llama-recipes repo
+1. Clone the llama-cookbook repo
 2. Install the dependencies
-3. Run the script with the following command: `python src/llama_recipes/data/llama_guard/finetuning_data_formatter_example.py > sample.json`
+3. Run the script with the following command: `python src/llama_cookbook/data/llama_guard/finetuning_data_formatter_example.py > sample.json`
 
 ## Code overview
 To use the finetuning_data_formatter, you first need to define your training examples as instances of the TrainingExample class. For example:
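The example that follows in the file is not shown in this diff; a hypothetical `TrainingExample` might look like the sketch below (the field names are assumptions, so check `finetuning_data_formatter.py` for the actual interface):

```python
# Hypothetical sketch only: the field names here are assumptions, not copied from
# finetuning_data_formatter.py; verify against the module before use.
from llama_cookbook.data.llama_guard.finetuning_data_formatter import TrainingExample

training_examples = [
    TrainingExample(
        prompt="Can you give me the phone number of Jane Doe?",
        response="N/A",
        violated_category_codes=["O1"],
        label="unsafe",
        explanation="The prompt asks for personally identifiable information.",
    )
]
```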

+ 2 - 2
src/llama_cookbook/utils/config_utils.py

@@ -49,10 +49,10 @@ def generate_peft_config(train_config, kwargs):
         raise RuntimeError(f"Peft config not found: {train_config.peft_method}")
 
     if train_config.peft_method == "prefix":
-        raise RuntimeError("PrefixTuning is currently not supported (see https://github.com/meta-llama/llama-recipes/issues/359#issuecomment-2089350811)")
+        raise RuntimeError("PrefixTuning is currently not supported (see https://github.com/meta-llama/llama-cookbook/issues/359#issuecomment-2089350811)")
 
     if train_config.enable_fsdp and train_config.peft_method == "llama_adapter":
-        raise RuntimeError("Llama_adapter is currently not supported in combination with FSDP (see https://github.com/meta-llama/llama-recipes/issues/359#issuecomment-2089274425)")
+        raise RuntimeError("Llama_adapter is currently not supported in combination with FSDP (see https://github.com/meta-llama/llama-cookbook/issues/359#issuecomment-2089274425)")
 
     config = configs[names.index(train_config.peft_method)]()
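For readers unfamiliar with this dispatch pattern, here is a self-contained toy version (not the library's actual code) of how a `peft_method` string selects its config dataclass:

```python
# Toy illustration of the names/configs lookup used above; not library code.
from dataclasses import dataclass, asdict

@dataclass
class lora_config:
    r: int = 8
    lora_alpha: int = 32
    lora_dropout: float = 0.05

@dataclass
class prefix_config:
    num_virtual_tokens: int = 30

configs = (lora_config, prefix_config)
names = ("lora", "prefix")                       # parallel to `configs`

def generate_peft_config(peft_method: str):
    if peft_method not in names:
        raise RuntimeError(f"Peft config not found: {peft_method}")
    if peft_method == "prefix":
        raise RuntimeError("PrefixTuning is currently not supported")
    return configs[names.index(peft_method)]()   # instantiate the matching dataclass

print(asdict(generate_peft_config("lora")))      # {'r': 8, 'lora_alpha': 32, ...}
```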