finetuning README

Jeff Tang, 3 months ago
Commit 57ffb7427b
1 file changed, 1 insertion and 2 deletions

end-to-end-use-cases/coding/text2sql/fine-tuning/README.md (+1, -2)

@@ -10,7 +10,7 @@ This folder contains scripts to:
 
 ## Eval Results of the Fine-tuned Models
 
-The eval results of SFT Llama 3.1 8B with different options are summarized in the table below:
+The eval results of SFT Llama 3.1 8B with different options (3 epochs) are summarized in the table below:
 
 | Fine-tuning Combination     | Accuracy |
 |-----------------------------|----------|
@@ -23,7 +23,6 @@ The eval results of SFT Llama 3.1 8B with different options are summarized in th
 | Quantized, CoT, FFT         | N/A      |
 | Quantized, No CoT, FFT      | N/A      |
 
-
 ## SFT with the BIRD TRAIN dataset (No Reasoning)
 
 We'll first use the BIRD TRAIN dataset to prepare for supervised fine-tuning with no reasoning info in the dataset.
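Below is a minimal sketch of what that preparation step might look like: it turns BIRD TRAIN records into plain prompt/completion pairs with no reasoning text. The field names ("question", "SQL", "db_id"), file paths, and output format here are illustrative assumptions, not the repo's actual preprocessing script.

```python
import json

def build_sft_examples(bird_train_path: str, out_path: str) -> None:
    """Convert BIRD TRAIN records into SFT prompt/completion pairs (no reasoning/CoT)."""
    with open(bird_train_path) as f:
        records = json.load(f)  # assumed: a JSON list of BIRD examples

    with open(out_path, "w") as out:
        for rec in records:
            example = {
                # Natural-language question plus target database id as the prompt.
                "prompt": f"-- Database: {rec['db_id']}\n-- Question: {rec['question']}\nSQL:",
                # Gold SQL as the completion; no chain-of-thought text is included.
                "completion": rec["SQL"],
            }
            out.write(json.dumps(example) + "\n")

if __name__ == "__main__":
    # Hypothetical paths; point these at wherever the BIRD TRAIN set is downloaded.
    build_sft_examples("train/train.json", "bird_sft_no_reasoning.jsonl")
```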