
Fix package naming (#852)

Sanyam Bhutani 3 months ago
parent commit
c298aa3f4f
2 changed files with 18 additions and 15 deletions
  1. src/README.md (+16 −13)
  2. src/docs/single_gpu.md (+2 −2)

+ 16 - 13
src/README.md

@@ -8,49 +8,52 @@ These instructions will get you a copy of the project up and running on your loc
 If you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform.
 
 ### Installing
-Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source.
+Llama-Cookbook provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source.
 
 > [!NOTE]
 > Ensure you use the correct CUDA version (from `nvidia-smi`) when installing the PyTorch wheels. Here we are using 11.8 as `cu118`.
 > H100 GPUs work better with CUDA >12.0
 
+> [!NOTE]
+> When importing, please use the package name `llama_recipes`. We recently renamed the package to `llama-cookbook` and will update the library naming in a following release.
+
 #### Install with pip
 ```
-pip install llama-recipes
+pip install llama-cookbook
 ```
 
 #### Install with optional dependencies
-Llama-recipes offers the installation of optional packages. There are three optional dependency groups.
+Llama-cookbook offers the installation of optional packages. There are three optional dependency groups.
 To run the unit tests we can install the required dependencies with:
 ```
-pip install llama-recipes[tests]
+pip install llama-cookbook[tests]
 ```
 For the vLLM example we need additional requirements that can be installed with:
 ```
-pip install llama-recipes[vllm]
+pip install llama-cookbook[vllm]
 ```
 To use the sensitive topics safety checker install with:
 ```
-pip install llama-recipes[auditnlg]
+pip install llama-cookbook[auditnlg]
 ```
-Some recipes require the presence of langchain. To install the packages follow the recipe description or install with:
+Some cookbook recipes require the presence of langchain. To install the packages follow the recipe description or install with:
 ```
-pip install llama-recipes[langchain]
+pip install llama-cookbook[langchain]
 ```
 Optional dependencies can also be combined with [option1,option2].
 
 #### Install from source
 To install from source e.g. for development use these commands. We're using hatchling as our build backend which requires an up-to-date pip as well as setuptools package.
 ```
-git clone git@github.com:meta-llama/llama-recipes.git
-cd llama-recipes
+git clone git@github.com:meta-llama/llama-cookbook.git
+cd llama-cookbook
 pip install -U pip setuptools
 pip install -e .
 ```
-For development and contributing to llama-recipes please install all optional dependencies:
+For development and contributing to llama-cookbook please install all optional dependencies:
 ```
-git clone git@github.com:meta-llama/llama-recipes.git
-cd llama-recipes
+git clone git@github.com:meta-llama/llama-cookbook.git
+cd llama-cookbook
 pip install -U pip setuptools
 pip install -e .[tests,auditnlg,vllm]
 ```
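Since the note above says the import name stays `llama_recipes` even though the pip distribution is now `llama-cookbook`, a quick way to confirm an install is to look the module up by its import name. This is an illustrative sketch, not part of the commit; the helper name `module_available` is made up for the example, and it is demonstrated with a stdlib module so it runs anywhere:

```python
# Hedged sketch: after `pip install llama-cookbook`, the import name is still
# `llama_recipes`, so finding the module spec by import name confirms install.
import importlib.util


def module_available(import_name: str) -> bool:
    """Return True if Python can locate a module under this import name."""
    return importlib.util.find_spec(import_name) is not None


# After the pip install, `module_available("llama_recipes")` should be True.
# Demonstrated with a stdlib module so the sketch runs without the package:
print(module_available("json"))            # True
print(module_available("not_a_real_pkg"))  # False
```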

+ 2 - 2
src/docs/single_gpu.md

@@ -9,9 +9,9 @@ To run fine-tuning on a single GPU, we will  make use of two packages
 Given combination of PEFT and Int8 quantization, we would be able to fine_tune a Meta Llama 3 8B model on one consumer grade GPU such as A10.
 
 ## Requirements
-To run the examples, make sure to install the llama-recipes package (See [README.md](../README.md) for details).
+To run the examples, make sure to install the llama-cookbook package (See [README.md](../README.md) for details).
 
-**Please note that the llama-recipes package will install PyTorch 2.0.1 version, in case you want to run FSDP + PEFT, please make sure to install PyTorch nightlies.**
+**Please note that the llama-cookbook package will install PyTorch 2.0.1. In case you want to run FSDP + PEFT, please make sure to install PyTorch nightlies.**
 
 ## How to run it?
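The note above distinguishes the pinned stable PyTorch release from the nightlies needed for FSDP + PEFT. As a rough illustration (the `.dev` version segment is an assumption about how nightly wheels are versioned, not something stated in the diff):

```python
# Hedged sketch: PyTorch nightly wheels typically carry a `.dev` segment in
# their version string (e.g. "2.1.0.dev20230601"), while stable releases
# (e.g. "2.0.1") do not, so a substring check can tell them apart.
def looks_like_nightly(torch_version: str) -> bool:
    """True if the version string has the `.dev` marker used by nightly builds."""
    return ".dev" in torch_version


print(looks_like_nightly("2.0.1"))              # False: stable release
print(looks_like_nightly("2.1.0.dev20230601"))  # True: nightly build
```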