This recipe steps you through fine-tuning a Llama 2 model on the text summarization task using the samsum dataset on a single GPU.
These are the instructions for using the canonical finetuning script in the llama-recipes package.
Ensure that you have installed the llama-recipes package (details).
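If you have not installed it yet, the package is published on PyPI; a basic install looks like this (optional extras vary by release):

```bash
pip install llama-recipes
```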
To run fine-tuning on a single GPU, we will make use of two packages:

1. PEFT methods, here via the HuggingFace PEFT library.
2. bitsandbytes for int8 quantization.
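Under the hood, the script combines these two packages roughly as in the sketch below. This is a minimal illustration, not the script's actual code; the model path, LoRA hyperparameters, and target modules are placeholder assumptions:

```python
# Minimal sketch of int8 loading + LoRA, assuming the transformers,
# peft, and bitsandbytes packages are installed. The path and
# hyperparameters below are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "path/to/7B",        # placeholder model path
    load_in_8bit=True,   # int8 quantization via bitsandbytes
    device_map="auto",
)
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the LoRA weights train
model.print_trainable_parameters()
```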
```bash
python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --use_fp16 --model_name /path_of_model_folder/7B --output_dir Path/to/save/PEFT/model
```
The args used in the command above are:
* `--use_peft` boolean flag to enable PEFT methods in the script
* `--peft_method` to specify the PEFT method; here we use `lora`, other options are `llama_adapter` and `prefix`
* `--quantization` boolean flag to enable int8 quantization

> [!NOTE]
> In case you are using a multi-GPU machine, please make sure only one of the GPUs is visible, using `export CUDA_VISIBLE_DEVICES=GPU:id`.
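For example, to make only the first GPU (device index 0) visible before launching the same run as above:

```bash
export CUDA_VISIBLE_DEVICES=0
python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --use_fp16 --model_name /path_of_model_folder/7B --output_dir Path/to/save/PEFT/model
```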
Currently, three open-source datasets are supported; they can be found in the Datasets config file. You can also use your own custom dataset (more info here).
* `grammar_dataset`: use this notebook to pull and process the Jfleg and C4 200M datasets for grammar checking.
* `alpaca_dataset`: to get this open-source data, please download `alpaca_data.json` to the dataset folder:

  ```bash
  wget -P ../../src/llama_recipes/datasets https://raw.githubusercontent.com/tatsu-lab/stanford_alpaca/main/alpaca_data.json
  ```

* `samsum_dataset`

To run with each of the datasets, set the `dataset` flag in the command as shown below:
```bash
# grammar_dataset
python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --dataset grammar_dataset --model_name /path_of_model_folder/7B --output_dir Path/to/save/PEFT/model

# alpaca_dataset
python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --dataset alpaca_dataset --model_name /path_of_model_folder/7B --output_dir Path/to/save/PEFT/model

# samsum_dataset
python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --dataset samsum_dataset --model_name /path_of_model_folder/7B --output_dir Path/to/save/PEFT/model
```
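After training, `--output_dir` contains only the PEFT adapter weights, not a full model. Below is a minimal sketch of loading the adapter back for inference with the peft library; the paths are placeholders and must match the base model and output directory used for training:

```python
# Minimal inference sketch, assuming transformers, peft, and
# bitsandbytes are installed. Both paths are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("path/to/7B")
base_model = AutoModelForCausalLM.from_pretrained(
    "path/to/7B", load_in_8bit=True, device_map="auto"
)
# Attach the LoRA adapter saved by the fine-tuning run
model = PeftModel.from_pretrained(base_model, "Path/to/save/PEFT/model")
model.eval()

inputs = tokenizer("Summarize this dialog:\n...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```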