
fix wandb config update

Kai Wu 11 months ago
parent
commit
f1d90d0ff0
3 changed files with 5 additions and 4 deletions
  1. docs/multi_gpu.md (+1 -1)
  2. docs/single_gpu.md (+1 -1)
  3. src/llama_recipes/finetuning.py (+3 -2)

+ 1 - 1
docs/multi_gpu.md

@@ -34,7 +34,7 @@ The args used in the command above are:
 
 * `--use_peft` boolean flag to enable PEFT methods in the script
 
-* `--peft_method` to specify the PEFT method, here we use `lora` other options are `llama_adapter`, `prefix`.
+* `--peft_method` to specify the PEFT method, here we use `lora` other options are `llama_adapter`.
 
 We use `torchrun` here to spawn multiple processes for FSDP.
 

+ 1 - 1
docs/single_gpu.md

@@ -27,7 +27,7 @@ The args used in the command above are:
 
 * `--use_peft` boolean flag to enable PEFT methods in the script
 
-* `--peft_method` to specify the PEFT method, here we use `lora` other options are `llama_adapter`, `prefix`.
+* `--peft_method` to specify the PEFT method, here we use `lora` other options are `llama_adapter`.
 
 * `--quantization` boolean flag to enable int8 quantization
 

+ 3 - 2
src/llama_recipes/finetuning.py

@@ -154,12 +154,13 @@ def main(**kwargs):
         # Load the pre-trained peft model checkpoint and setup its configuration
         if train_config.from_peft_checkpoint:
             model = PeftModel.from_pretrained(model, train_config.from_peft_checkpoint, is_trainable=True)
+            peft_config = model.peft_config()
         # Generate the peft config and start fine-tuning from original model
         else:
             peft_config = generate_peft_config(train_config, kwargs)
             model = get_peft_model(model, peft_config)
-            if wandb_run:
-                wandb_run.config.update(peft_config)
+        if wandb_run:
+            wandb_run.config.update(peft_config)
         model.print_trainable_parameters()
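
For context, a minimal sketch of what this block in `finetuning.py` looks like after the change: the `wandb_run.config.update(peft_config)` call is hoisted out of the `else` branch, so the PEFT config is logged to Weights & Biases both when resuming from a PEFT checkpoint and when starting fresh from the original model. The identifiers (`train_config`, `generate_peft_config`, `kwargs`, `wandb_run`) are taken from the diff context; the function wrapper and comments are assumptions, and the checkpoint branch reads the config via the `peft_config` attribute (a dict in the `peft` library) rather than the call shown in the diff.

```python
from peft import PeftModel, get_peft_model

def setup_peft(model, train_config, kwargs, wandb_run, generate_peft_config):
    # Sketch of the post-commit flow; names are assumed from the diff context above.
    if train_config.from_peft_checkpoint:
        # Resuming: load the pre-trained PEFT adapter and reuse its stored config.
        model = PeftModel.from_pretrained(
            model, train_config.from_peft_checkpoint, is_trainable=True
        )
        # In peft, PeftModel.peft_config is a dict mapping adapter names to configs.
        peft_config = model.peft_config
    else:
        # Fresh run: build a new PEFT config from the training arguments.
        peft_config = generate_peft_config(train_config, kwargs)
        model = get_peft_model(model, peft_config)

    # The fix: update the W&B run config in both branches, not only for fresh runs.
    if wandb_run:
        wandb_run.config.update(peft_config)

    model.print_trainable_parameters()
    return model
```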