Text2SQL: Evaluating and Fine-tuning Llama Models with CoT

This folder contains scripts to:

  1. Evaluate Llama (original and fine-tuned) models on the Text2SQL task using the popular BIRD dataset in 3 simple steps;

  2. Generate two fine-tuning datasets (with and without CoT) and fine-tune Llama 3.1 8B on them, achieving a 165% relative improvement without CoT (accuracy 37.16%) and 209% with CoT (accuracy 43.37%) over the original model (accuracy 14.02%); the arithmetic behind these percentages is sketched right after this list.
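
The improvement figures above are relative gains over the baseline accuracy. A quick sketch of that arithmetic, using only the accuracies quoted above:

```python
# Relative improvement of each fine-tuned model over the original model.
# Accuracies are the ones quoted above; "improvement" means relative gain.
base_acc = 14.02  # original Llama 3.1 8B accuracy (%)
ft_accs = {"fine-tuned without CoT": 37.16, "fine-tuned with CoT": 43.37}

for name, acc in ft_accs.items():
    gain = (acc - base_acc) / base_acc * 100
    print(f"{name}: {gain:.0f}% improvement over {base_acc}%")
# fine-tuned without CoT: 165% improvement over 14.02%
# fine-tuned with CoT: 209% improvement over 14.02%
```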

Our end goal is to maximize the accuracy of Llama models on the Text2SQL task. To do so, we first evaluate the current state-of-the-art Llama models on the task, then apply fine-tuning, agentic, and other approaches to evaluate and improve Llama's performance.

Structure:

  • data: contains the scripts to download the BIRD TRAIN and DEV datasets;
  • eval: contains the scripts to evaluate Llama models (original and fine-tuned) on the BIRD dataset;
  • fine-tuning: contains the scripts to generate non-CoT and CoT datasets based on the BIRD TRAIN set and to fine-tune Llama models using those datasets;
  • quickstart: contains a notebook that asks Llama 3.3 to convert natural language queries into SQL queries (a minimal prompt sketch follows this list).
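
For reference, here is a minimal sketch of the kind of Text2SQL prompt the quickstart notebook builds. The serving endpoint, model id, and schema are illustrative placeholders, not the notebook's exact values; any OpenAI-compatible endpoint serving a Llama 3.3 model would work the same way.

```python
# Minimal Text2SQL prompt sketch: give Llama 3.3 a database schema plus a
# natural language question and ask for a single SQL query in return.
# base_url, model id, and schema are placeholders (assumptions), not the
# repo's exact values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

schema = "CREATE TABLE singer (singer_id INT, name TEXT, country TEXT, age INT);"
question = "How many singers are from France?"

prompt = (
    "You are a Text2SQL assistant. Given the database schema below, "
    "write a single SQLite query that answers the question. "
    "Return only the SQL.\n\n"
    f"Schema:\n{schema}\n\nQuestion: {question}\nSQL:"
)

response = client.chat.completions.create(
    model="Llama-3.3-70B-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response.choices[0].message.content)
```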

Next Steps

  1. Try GRPO RFT to further improve the accuracy.
  2. Fine-tune Llama 3.3 70B and Llama 4 models.
  3. Use torchtune.
  4. Try agentic workflow.
  5. Expand the eval to support other enterprise databases.