@@ -23,7 +23,7 @@ This runs with the `samsum_dataset` for summarization application by default.
```bash
-torchrun --nnodes 1 --nproc_per_node 4 recipes/quickstart/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --output_dir Path/to/save/PEFT/model
+torchrun --nnodes 1 --nproc_per_node 4 getting-started/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --output_dir Path/to/save/PEFT/model
```
@@ -42,7 +42,7 @@ We use `torchrun` here to spawn multiple processes for FSDP.
Setting `use_fast_kernels` enables Flash Attention or Xformers memory-efficient kernels, depending on the hardware being used, which speeds up the fine-tuning job. This is exposed as a one-liner API in the Hugging Face `optimum` library; please read more [here](https://pytorch.org/blog/out-of-the-box-acceleration/).
```bash
-torchrun --nnodes 1 --nproc_per_node 4 recipes/quickstart/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --output_dir Path/to/save/PEFT/model --use_fast_kernels
+torchrun --nnodes 1 --nproc_per_node 4 getting-started/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --output_dir Path/to/save/PEFT/model --use_fast_kernels
```
### Fine-tuning using FSDP Only
@@ -51,7 +51,7 @@ If interested in running full parameter finetuning without making use of PEFT me
```bash
-torchrun --nnodes 1 --nproc_per_node 8 recipes/quickstart/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --fsdp_config.pure_bf16 --use_fast_kernels
+torchrun --nnodes 1 --nproc_per_node 8 getting-started/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --fsdp_config.pure_bf16 --use_fast_kernels
```
@@ -69,7 +69,7 @@ If you are interested in running full parameter fine-tuning on the 70B model, yo
```bash
-torchrun --nnodes 1 --nproc_per_node 8 recipes/quickstart/finetuning/finetuning.py --enable_fsdp --low_cpu_fsdp --fsdp_config.pure_bf16 --model_name /path_of_model_folder/70B --batch_size_training 1 --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned
+torchrun --nnodes 1 --nproc_per_node 8 getting-started/finetuning/finetuning.py --enable_fsdp --low_cpu_fsdp --fsdp_config.pure_bf16 --model_name /path_of_model_folder/70B --batch_size_training 1 --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned
```
@@ -79,7 +79,7 @@ Here we use a slurm script to schedule a job with slurm over multiple nodes.
```bash
-sbatch recipes/quickstart/finetuning/multi_node.slurm
+sbatch getting-started/finetuning/multi_node.slurm
# Change the number of nodes and GPUs per node in the script before running.
```
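For orientation, a script like this typically wraps the same `torchrun` command in an `sbatch` job and uses the first allocated node as the rendezvous host. The sketch below is only illustrative; the node/GPU counts, port, and paths are placeholders, not the contents of the repo's `multi_node.slurm`:

```bash
#!/bin/bash
# Illustrative multi-node launcher (hypothetical values); the repo's
# multi_node.slurm is the authoritative script.
#SBATCH --job-name=llama-finetune
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=8

# Use the first allocated node as the rendezvous host for torchrun.
head_node=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

srun torchrun --nnodes 2 --nproc_per_node 8 \
  --rdzv_id "$SLURM_JOB_ID" --rdzv_backend c10d --rdzv_endpoint "${head_node}:29500" \
  getting-started/finetuning/finetuning.py \
  --enable_fsdp --low_cpu_fsdp --fsdp_config.pure_bf16 \
  --model_name /path_of_model_folder/70B --batch_size_training 1 \
  --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned
```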
@@ -102,16 +102,16 @@ To run with each of the datasets set the `dataset` flag in the command as shown
```bash
# grammar_dataset
-torchrun --nnodes 1 --nproc_per_node 4 recipes/quickstart/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --dataset grammar_dataset --save_model --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --fsdp_config.pure_bf16 --output_dir Path/to/save/PEFT/model
+torchrun --nnodes 1 --nproc_per_node 4 getting-started/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --dataset grammar_dataset --save_model --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --fsdp_config.pure_bf16 --output_dir Path/to/save/PEFT/model
# alpaca_dataset
-torchrun --nnodes 1 --nproc_per_node 4 recipes/quickstart/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --dataset alpaca_dataset --save_model --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --fsdp_config.pure_bf16 --output_dir Path/to/save/PEFT/model
+torchrun --nnodes 1 --nproc_per_node 4 getting-started/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --dataset alpaca_dataset --save_model --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --fsdp_config.pure_bf16 --output_dir Path/to/save/PEFT/model
# samsum_dataset
-torchrun --nnodes 1 --nproc_per_node 4 recipes/quickstart/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --dataset samsum_dataset --save_model --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --fsdp_config.pure_bf16 --output_dir Path/to/save/PEFT/model
+torchrun --nnodes 1 --nproc_per_node 4 getting-started/finetuning/finetuning.py --enable_fsdp --model_name /path_of_model_folder/8B --use_peft --peft_method lora --dataset samsum_dataset --save_model --dist_checkpoint_root_folder model_checkpoints --dist_checkpoint_folder fine-tuned --fsdp_config.pure_bf16 --output_dir Path/to/save/PEFT/model
```
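Note: the `alpaca_dataset` run assumes a local copy of the Alpaca JSON is available. The download below is a hedged example; the destination folder is an assumption, so check the datasets config for the path the loader actually expects:

```bash
# Assumed destination folder -- verify against the alpaca_dataset entry in the datasets config.
wget -P src/llama_cookbook/datasets https://raw.githubusercontent.com/tatsu-lab/stanford_alpaca/main/alpaca_data.json
```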
@@ -166,11 +166,11 @@ It lets us specify the training settings for everything from `model_name` to `da
profiler_dir: str = "PATH/to/save/profiler/results" # will be used if using profiler
```
-* [Datasets config file](../llama_recipes/configs/datasets.py) provides the available options for datasets.
+* [Datasets config file](../llama_cookbook/configs/datasets.py) provides the available options for datasets.
-* [peft config file](../llama_recipes/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
+* [peft config file](../llama_cookbook/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
-* [FSDP config file](../llama_recipes/configs/fsdp.py) provides FSDP settings such as:
+* [FSDP config file](../llama_cookbook/configs/fsdp.py) provides FSDP settings such as:
* `mixed_precision` boolean flag to specify using mixed precision, defaults to true.
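These settings can also be overridden from the command line without editing the config files, as the commands above do with `--fsdp_config.pure_bf16` for the nested FSDP config. The example below is a hedged sketch: the `lr` and `num_epochs` field names are assumed to exist in the training config and should be checked against your installed version.

```bash
# Override training settings and a nested FSDP config field from the CLI
# (the dotted --fsdp_config.* syntax follows the usage shown earlier in this doc;
# lr and num_epochs are assumed training-config fields).
torchrun --nnodes 1 --nproc_per_node 4 getting-started/finetuning/finetuning.py \
  --enable_fsdp --fsdp_config.pure_bf16 \
  --model_name /path_of_model_folder/8B \
  --use_peft --peft_method lora \
  --lr 1e-4 --num_epochs 3 --batch_size_training 2 \
  --output_dir Path/to/save/PEFT/model
```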