Fix lint issue

Matthias Reso, 9 months ago
Parent commit: b319a9fb8c

+ 6 - 0
.github/scripts/spellcheck_conf/wordlist.txt

@@ -1406,3 +1406,9 @@ DLAI
 agentic
 containts
 dlai
+Prerequirements
+tp
+QLoRA
+ntasks
+srun
+xH

+ 1 - 1
docs/multi_gpu.md

@@ -98,7 +98,7 @@ Then we need to replace the bottom srun command with the following:
 srun  torchrun --nproc_per_node 8 --rdzv_id $RANDOM --rdzv_backend c10d --rdzv_endpoint $head_node_ip:29500 ./finetuning.py  --enable_fsdp --use_peft --peft_method lora --quantization 4bit  --quantization_config.quant_type nf4 --mixed_precision False --low_cpu_fsdp
 ```
 
-Do not forget to adujust the number of nodes, ntasks and gpus-per-task in the top.
+Do not forget to adjust the number of nodes, ntasks and gpus-per-task in the top.
 
 ## How to run with different datasets?
 

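The srun command shown in the hunk above is intended to sit inside a SLURM batch script, which is why the doc reminds you to adjust nodes, ntasks and gpus-per-task at the top. A minimal sketch of such a script is below; the job name, resource counts and head-node lookup are illustrative placeholders to adapt to your cluster, and only the torchrun/finetuning.py invocation is taken from the diff itself.

```bash
#!/bin/bash
# Hypothetical sbatch header -- adjust nodes, ntasks and gpus-per-task to your cluster.
#SBATCH --job-name=llama-finetune     # placeholder job name
#SBATCH --nodes=2                     # number of machines in the job
#SBATCH --ntasks=2                    # one torchrun launcher per node
#SBATCH --gpus-per-task=8             # GPUs visible to each launcher

# Use the first allocated node as the rendezvous host for torchrun.
nodes=( $(scontrol show hostnames "$SLURM_JOB_NODELIST") )
head_node_ip=$(srun --nodes=1 --ntasks=1 -w "${nodes[0]}" hostname --ip-address)

# Command from the diff above: QLoRA fine-tuning with FSDP across all allocated nodes.
srun torchrun --nproc_per_node 8 --rdzv_id $RANDOM --rdzv_backend c10d \
  --rdzv_endpoint $head_node_ip:29500 ./finetuning.py --enable_fsdp --use_peft \
  --peft_method lora --quantization 4bit --quantization_config.quant_type nf4 \
  --mixed_precision False --low_cpu_fsdp
```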
+ 1 - 1
recipes/quickstart/finetuning/multigpu_finetuning.md

@@ -93,7 +93,7 @@ Then we need to replace the bottom srun command with the following:
 srun  torchrun --nproc_per_node 8 --rdzv_id $RANDOM --rdzv_backend c10d --rdzv_endpoint $head_node_ip:29500 ./finetuning.py  --enable_fsdp --use_peft --peft_method lora --quantization 4bit  --quantization_config.quant_type nf4 --mixed_precision False --low_cpu_fsdp
 ```
 
-Do not forget to adujust the number of nodes, ntasks and gpus-per-task in the top.
+Do not forget to adjust the number of nodes, ntasks and gpus-per-task in the top.
 
 ## Running with different datasets
 Currently 3 open source datasets are supported that can be found in [Datasets config file](../../../src/llama_recipes/configs/datasets.py). You can also use your custom dataset (more info [here](./datasets/README.md)).

+ 2 - 2
recipes/quickstart/inference/local_inference/README.md

@@ -85,5 +85,5 @@ python inference.py --model_name <training_config.output_dir> --prompt_file <tes
 
 ```
 
-## Inference on large modles like Meta Llama 405B
-To run the Meta Llama 405B variant without quantization we need to ue a multi-node setup for inference. The llama-recipes inference script currently does not allow multi-node inference. To run this model you can use vLLM with pipeline and tensor parallelism as showed in [this example](../../../3p_integrations/vllm/README.md).
+## Inference on large models like Meta Llama 405B
+To run the Meta Llama 405B variant without quantization we need to use a multi-node setup for inference. The llama-recipes inference script currently does not allow multi-node inference. To run this model you can use vLLM with pipeline and tensor parallelism as showed in [this example](../../../3p_integrations/vllm/README.md).
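For context on the added sentence: multi-node inference with vLLM typically combines tensor parallelism within a node and pipeline parallelism across nodes. A hedged sketch is below; the model id and parallel sizes are illustrative assumptions, not taken from the linked example, and a Ray cluster spanning the nodes is assumed for the cross-node case.

```bash
# Illustrative only -- see the linked 3p_integrations/vllm example for the supported recipe.
# Tensor parallelism splits each layer across the 8 GPUs of a node; pipeline parallelism
# spreads the layer stack across 2 such nodes.
vllm serve meta-llama/Llama-3.1-405B-Instruct \
  --tensor-parallel-size 8 \
  --pipeline-parallel-size 2
```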