
Update FT EADME.md

Jeff Tang 3 months ago
parent commit fc80546035
1 file changed with 2 additions and 4 deletions

+ 2 - 4
end-to-end-use-cases/coding/text2sql/fine-tuning/README.md

@@ -81,14 +81,12 @@ model='fine_tuning/llama31-8b-text2sql-fft-nonquantized-cot'
 ```
 vllm serve fine_tuning/llama31-8b-text2sql-fft-nonquantized-cot --tensor-parallel-size 1 --max-num-batched-tokens 8192 --max-num-seqs 64
 ```
-If you have multiple GPUs you can run something like
+or, to speed up inference and evaluation when multiple GPUs are available, set `--tensor-parallel-size` to the number of available GPUs, e.g.:
 
 ```
-CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 vllm serve fine_tuning/llama31-8b-text2sql-fft-nonquantized-cot --tensor-parallel-size 8 --max-num-batched-tokens 8192 --max-num-seqs 64
+vllm serve fine_tuning/llama31-8b-text2sql-fft-nonquantized-cot --tensor-parallel-size 8 --max-num-batched-tokens 8192 --max-num-seqs 64
 ```
 
-to speed up the eval.
-
 3. If you haven't downloaded the DEV dataset, download it and unzip it first:
 
 ```
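# Note: not part of the original commit -- a sketch showing how the
# --tensor-parallel-size value from the diff above could be derived from
# the visible GPU count instead of hard-coded. The nvidia-smi probe and
# the fallback value are assumptions, not from the README.
if command -v nvidia-smi >/dev/null 2>&1; then
  NUM_GPUS=$(nvidia-smi --list-gpus | wc -l)
else
  NUM_GPUS=1   # fallback when no NVIDIA driver is present
fi
echo "tensor parallel size: $NUM_GPUS"
# Then serve with the same flags shown in the diff:
# vllm serve fine_tuning/llama31-8b-text2sql-fft-nonquantized-cot \
#   --tensor-parallel-size "$NUM_GPUS" \
#   --max-num-batched-tokens 8192 --max-num-seqs 64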