
fix paths

Sanyam Bhutani committed 3 months ago · commit f1137833e9
3 changed files with 13 additions and 13 deletions
  1. src/docs/FAQ.md (+2 -2)
  2. src/docs/multi_gpu.md (+5 -5)
  3. src/docs/single_gpu.md (+6 -6)

+ 2 - 2
src/docs/FAQ.md

@@ -36,13 +36,13 @@ Here we discuss frequently asked questions that may occur and we found useful al
     os.environ['PYTORCH_CUDA_ALLOC_CONF']='expandable_segments:True'
 
     ```
-    We also added this environment variable in `setup_environ_flags` of the [train_utils.py](../llama_recipes/utils/train_utils.py), feel free to uncomment it if required.
+    We also added this environment variable in `setup_environ_flags` of [train_utils.py](../llama_cookbook/utils/train_utils.py); feel free to uncomment it there if required.
 
 8. Additional debugging flags?
 
     The environment variable `TORCH_DISTRIBUTED_DEBUG` can be used to trigger additional useful logging and collective synchronization checks to ensure all ranks are synchronized appropriately. `TORCH_DISTRIBUTED_DEBUG` can be set to OFF (the default), INFO, or DETAIL depending on the debugging level required. Please note that the most verbose option, DETAIL, may impact application performance and should therefore only be used when debugging issues.
 
-    We also added this environment variable in `setup_environ_flags` of the [train_utils.py](../llama_recipes/utils/train_utils.py), feel free to uncomment it if required.
+    We also added this environment variable in `setup_environ_flags` of [train_utils.py](../llama_cookbook/utils/train_utils.py); feel free to uncomment it there if required.
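
    For a one-off run, the same flag can also be exported from the shell instead of editing the code. A minimal sketch, assuming the torchrun invocation from the fine-tuning examples in this repository (adjust the script path and arguments to your own setup):

    ```bash
    # Turn on the most verbose distributed debugging for this run only
    export TORCH_DISTRIBUTED_DEBUG=DETAIL

    # Optionally combine with the allocator flag from the previous question
    export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

    # Illustrative launch command; see the fine-tuning docs for the full set of flags
    torchrun --nnodes 1 --nproc_per_node 4 getting-started/finetuning/finetuning.py --enable_fsdp
    ```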
 
 9. I am getting import errors when running inference.
 

+ 5 - 5
src/docs/multi_gpu.md

@@ -10,7 +10,7 @@ Given the combination of PEFT and FSDP, we would be able to fine tune a Meta Lla
 For big models like 405B we will need to fine-tune in a multi-node setup even if 4bit quantization is enabled.
 
 ## Requirements
-To run the examples, make sure to install the llama-recipes package and clone the github repository in order to use the provided [`finetuning.py`](../../getting-started/finetuning/finetuning.py) script with torchrun (See [README.md](../README.md) for details).
+To run the examples, make sure to install the llama-cookbook package and clone the GitHub repository in order to use the provided [`finetuning.py`](../../getting-started/finetuning/finetuning.py) script with torchrun (see [README.md](../README.md) for details).
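
A typical setup might look like the following. This is a sketch only; the PyPI package name and repository URL are assumptions here, so defer to [README.md](../README.md) if they differ:

```bash
# Install the package (assumed to be published on PyPI as llama-cookbook)
pip install llama-cookbook

# Clone the repository to get the example scripts referenced below
git clone https://github.com/meta-llama/llama-cookbook.git
cd llama-cookbook
```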
 
 ## How to run it
 
@@ -86,14 +86,14 @@ sbatch getting-started/finetuning/multi_node.slurm
 
 ## How to run with different datasets?
 
-Currently 4 datasets are supported that can be found in [Datasets config file](../llama_recipes/configs/datasets.py).
+Currently 4 datasets are supported; they can be found in the [Datasets config file](../llama_cookbook/configs/datasets.py).
 
-* `grammar_dataset` : use this [notebook](../llama_recipes/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process theJfleg and C4 200M datasets for grammar checking.
+* `grammar_dataset`: use this [notebook](../llama_cookbook/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
 
 * `alpaca_dataset`: to get this open-source dataset, please download `alpaca_data.json` into the `datasets` folder:
 
 ```bash
-wget -P src/llama_recipes/datasets https://raw.githubusercontent.com/tatsu-lab/stanford_alpaca/main/alpaca_data.json
+wget -P src/llama_cookbook/datasets https://raw.githubusercontent.com/tatsu-lab/stanford_alpaca/main/alpaca_data.json
 ```
 
 * `samsum_dataset`
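
To point a run at one of these datasets, pass the dataset name on the command line. A hedged illustration, assuming the `--dataset` flag mirrors the names in the datasets config file and reusing the torchrun invocation from the examples above (adapt the remaining arguments to your setup):

```bash
# Fine-tune on the grammar dataset instead of the default
torchrun --nnodes 1 --nproc_per_node 4 getting-started/finetuning/finetuning.py \
    --enable_fsdp --use_peft --peft_method lora \
    --dataset grammar_dataset
```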
@@ -117,7 +117,7 @@ torchrun --nnodes 1 --nproc_per_node 4  getting-started/finetuning/finetuning.py
 
 ## Where to configure settings?
 
-* [Training config file](../llama_recipes/configs/training.py) is the main config file that helps to specify the settings for our run and can be found in [configs folder](../llama_recipes/configs/)
+* [Training config file](../llama_cookbook/configs/training.py) is the main config file that specifies the settings for our run; it can be found in the [configs folder](../llama_cookbook/configs/)
 
 It lets us specify the training settings for everything from `model_name` to `dataset_name`, `batch_size` and so on. Below is the list of supported settings:
 

+ 6 - 6
src/docs/single_gpu.md

@@ -35,9 +35,9 @@ The args used in the command above are:
 
 ## How to run with different datasets?
 
-Currently 4 datasets are supported that can be found in [Datasets config file](../llama_recipes/configs/datasets.py).
+Currently 4 datasets are supported; they can be found in the [Datasets config file](../llama_cookbook/configs/datasets.py).
 
-* `grammar_dataset` : use this [notebook](../llama_recipes/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process theJfleg and C4 200M datasets for grammar checking.
+* `grammar_dataset`: use this [notebook](../llama_cookbook/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
 
 * `alpaca_dataset`: to get this open-source dataset, please download `alpaca_data.json` into the `ft_dataset` folder.
 
@@ -67,7 +67,7 @@ python -m llama_cookbook.finetuning  --use_peft --peft_method lora --quantizatio
 
 ## Where to configure settings?
 
-* [Training config file](../llama_recipes/configs/training.py) is the main config file that help to specify the settings for our run can be found in
+* [Training config file](../llama_cookbook/configs/training.py) is the main config file that helps to specify the settings for our run; it can be found in the [configs folder](../llama_cookbook/configs/)
 
 It lets us specify the training settings; everything from `model_name` to `dataset_name`, `batch_size`, and so on can be set here. Below is the list of supported settings:
 
@@ -117,10 +117,10 @@ It let us specify the training settings, everything from `model_name` to `datase
 
 ```
 
-* [Datasets config file](../llama_recipes/configs/datasets.py)
-    ../src/llama_recipes/configs/datasets.py) provides the available options for datasets.
+* [Datasets config file](../llama_cookbook/configs/datasets.py) provides the available options for datasets.
 
-* [peft config file](../llama_recipes/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
+* [peft config file](../llama_cookbook/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
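
As a rough illustration of how these config files map to command-line overrides for a single-GPU run (a sketch only; the flag names are assumed to follow the fields in the config files above, so verify them against your installed version of the package):

```bash
# Single-GPU PEFT run: the PEFT method, dataset, and a couple of training
# settings are overridden from their defaults in peft.py, datasets.py, and training.py
python -m llama_cookbook.finetuning \
    --use_peft --peft_method lora \
    --dataset samsum_dataset \
    --batch_size_training 2 --num_epochs 1
```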
 
 ## FLOPS Counting and Pytorch Profiling