Sanyam Bhutani committed 3 months ago
parent
commit
ae2e0ba241
3 changed files with 22 additions and 22 deletions
  1. +1 −1
      pyproject.toml
  2. +13 −13
      src/README.md
  3. +8 −8
      src/docs/single_gpu.md

+ 1 - 1
pyproject.toml

@@ -36,7 +36,7 @@ exclude = [
 ]
 
 [tool.hatch.build.targets.wheel]
-packages = ["src/llama_recipes"]
+packages = ["src/llama_cookbook"]
 
 [tool.hatch.metadata.hooks.requirements_txt]
 files = ["requirements.txt"]

+ 13 - 13
src/README.md

@@ -8,7 +8,7 @@ These instructions will get you a copy of the project up and running on your loc
 If you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform.
 
 ### Installing
-Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source.
+Llama-Cookbook provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source.
 
 > [!NOTE]
 > Ensure you use the correct CUDA version (from `nvidia-smi`) when installing the PyTorch wheels. Here we are using 11.8 as `cu118`.
@@ -16,41 +16,41 @@ Llama-recipes provides a pip distribution for easy install and usage in other pr
 
 #### Install with pip
 ```
-pip install llama-recipes
+pip install llama-cookbook
 ```
 
 #### Install with optional dependencies
-Llama-recipes offers the installation of optional packages. There are three optional dependency groups.
+Llama-cookbook offers the installation of optional packages. There are several optional dependency groups.
 To run the unit tests we can install the required dependencies with:
 ```
-pip install llama-recipes[tests]
+pip install llama-cookbook[tests]
 ```
 For the vLLM example we need additional requirements that can be installed with:
 ```
-pip install llama-recipes[vllm]
+pip install llama-cookbook[vllm]
 ```
 To use the sensitive topics safety checker install with:
 ```
-pip install llama-recipes[auditnlg]
+pip install llama-cookbook[auditnlg]
 ```
-Some recipes require the presence of langchain. To install the packages follow the recipe description or install with:
+Some recipes in the cookbook require langchain. To install the packages, follow the recipe description or install with:
 ```
-pip install llama-recipes[langchain]
+pip install llama-cookbook[langchain]
 ```
 Optional dependencies can also be combined with [option1,option2].
 
 #### Install from source
 To install from source, e.g. for development, use these commands. We're using hatchling as our build backend, which requires up-to-date pip and setuptools packages.
 ```
-git clone git@github.com:meta-llama/llama-recipes.git
-cd llama-recipes
+git clone git@github.com:meta-llama/llama-cookbook.git
+cd llama-cookbook
 pip install -U pip setuptools
 pip install -e .
 ```
-For development and contributing to llama-recipes please install all optional dependencies:
+For development and contributing to llama-cookbook please install all optional dependencies:
 ```
-git clone git@github.com:meta-llama/llama-recipes.git
-cd llama-recipes
+git clone git@github.com:meta-llama/llama-cookbook.git
+cd llama-cookbook
 pip install -U pip setuptools
 pip install -e .[tests,auditnlg,vllm]
 ```
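
The optional extras renamed in this hunk can also be combined in a single install, per the `[option1,option2]` note above; a minimal sketch (the `tests` and `vllm` group names are taken from the lines above, the authoritative list lives in pyproject.toml):

```shell
# Combine two optional dependency groups in one install command.
# Quoting keeps the brackets from being expanded by the shell.
pip install "llama-cookbook[tests,vllm]"
```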

+ 8 - 8
src/docs/single_gpu.md

@@ -9,9 +9,9 @@ To run fine-tuning on a single GPU, we will  make use of two packages
 Given the combination of PEFT and Int8 quantization, we are able to fine-tune a Meta Llama 3 8B model on a single consumer-grade GPU such as an A10.
 
 ## Requirements
-To run the examples, make sure to install the llama-recipes package (See [README.md](../README.md) for details).
+To run the examples, make sure to install the llama-cookbook package (See [README.md](../README.md) for details).
 
-**Please note that the llama-recipes package will install PyTorch 2.0.1 version, in case you want to run FSDP + PEFT, please make sure to install PyTorch nightlies.**
+**Please note that the llama-cookbook package will install PyTorch 2.0.1. If you want to run FSDP + PEFT, please make sure to install the PyTorch nightlies.**
 
 ## How to run it?
 
@@ -21,7 +21,7 @@ Get access to a machine with one GPU or if using a multi-GPU machine please make
 
 ```bash
 
-python -m llama_recipes.finetuning  --use_peft --peft_method lora --quantization 8bit --use_fp16 --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
+python -m llama_cookbook.finetuning  --use_peft --peft_method lora --quantization 8bit --use_fp16 --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
 
 ```
 The args used in the command above are:
@@ -35,14 +35,14 @@ The args used in the command above are:
 
 ## How to run with different datasets?
 
-Currently 4 datasets are supported that can be found in [Datasets config file](../llama_recipes/configs/datasets.py).
+Currently 4 datasets are supported that can be found in [Datasets config file](../llama_cookbook/configs/datasets.py).
 
 * `grammar_dataset` : use this [notebook](../llama_recipes/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
 
 * `alpaca_dataset` : to get this open source data, please download `alpaca_data.json` to the `ft_dataset` folder.
 
 ```bash
-wget -P src/llama_recipes/datasets https://raw.githubusercontent.com/tatsu-lab/stanford_alpaca/main/alpaca_data.json
+wget -P src/llama_cookbook/datasets https://raw.githubusercontent.com/tatsu-lab/stanford_alpaca/main/alpaca_data.json
 ```
 
 * `samsum_dataset`
@@ -52,16 +52,16 @@ to run with each of the datasets set the `dataset` flag in the command as shown
 ```bash
 # grammar_dataset
 
-python -m llama_recipes.finetuning  --use_peft --peft_method lora --quantization 8bit --dataset grammar_dataset --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
+python -m llama_cookbook.finetuning  --use_peft --peft_method lora --quantization 8bit --dataset grammar_dataset --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
 
 # alpaca_dataset
 
-python -m llama_recipes.finetuning  --use_peft --peft_method lora --quantization 8bit --dataset alpaca_dataset --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
+python -m llama_cookbook.finetuning  --use_peft --peft_method lora --quantization 8bit --dataset alpaca_dataset --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
 
 
 # samsum_dataset
 
-python -m llama_recipes.finetuning  --use_peft --peft_method lora --quantization 8bit --dataset samsum_dataset --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
+python -m llama_cookbook.finetuning  --use_peft --peft_method lora --quantization 8bit --dataset samsum_dataset --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
 
 ```