JimChienTW 5 months ago
parent
commit
f228cb4d53

+ 1 - 0
recipes/quickstart/finetuning/README.md

@@ -54,6 +54,7 @@ It lets us specify the training settings for everything from `model_name` to `da
     output_dir: str = "PATH/to/save/PEFT/model"
     freeze_layers: bool = False
     num_freeze_layers: int = 1
+    freeze_LLM_only: bool = False # Freeze the self-attention layers in the language_model; the vision model, multi_modal_projector, and cross-attention layers will be fine-tuned
     quantization: str = None
     one_gpu: bool = False
     save_model: bool = True
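
For context on how this flag could take effect, below is a minimal sketch (not the repository's actual implementation) of freezing only the language model's self-attention layers in a Llama 3.2 Vision (Mllama-style) model. It assumes the Hugging Face `MllamaForConditionalGeneration` layout, where the cross-attention layer indices are listed in `model.config.text_config.cross_attention_layers`; the helper name `apply_freeze_LLM_only` is hypothetical.

```python
def apply_freeze_LLM_only(model):
    """Hypothetical helper: freeze the language model's self-attention
    blocks while leaving the vision model, multi_modal_projector, and
    cross-attention layers trainable (sketch, not the repo's code)."""
    # Freeze every parameter under the language model first.
    for param in model.language_model.parameters():
        param.requires_grad = False
    # Re-enable the cross-attention layers, whose indices Mllama-style
    # configs expose in text_config.cross_attention_layers (assumption).
    cross_attn_ids = set(model.config.text_config.cross_attention_layers)
    for idx, layer in enumerate(model.language_model.model.layers):
        if idx in cross_attn_ids:
            for param in layer.parameters():
                param.requires_grad = True
    # vision_model and multi_modal_projector are never frozen here,
    # so they remain trainable by default.

# In the training script this would presumably be gated on the new flag:
#   if train_config.freeze_LLM_only:
#       apply_freeze_LLM_only(model)
```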

File diff suppressed because it is too large
+ 6 - 0
recipes/quickstart/finetuning/finetune_vision_model.md