If you are working on fine-tuning a Large Language Model, the biggest effort is usually preparing the dataset.
This tool provides a set of utility functions that make it easy to load a dataset into torchtune, with helpers for:
TODO: Add TT links
(WIP) We support the following file formats for parsing: PDF, HTML, YouTube transcripts, Word (`.docx`), PowerPoint (`.pptx`), and plain text (`.txt`).
TODO: Supply requirements.txt file here instead
```bash
# Install all dependencies at once
pip install PyPDF2 python-docx beautifulsoup4 requests python-pptx yt-dlp youtube-transcript-api
```
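Several of these dependencies are only needed for specific file formats, so a quick pre-flight check can save a confusing mid-run import error. Below is a hypothetical helper (not part of the tool) that maps each pip package above to the module name it is imported as and reports which ones are missing:

```python
# Hypothetical pre-flight check: verify the optional parser dependencies
# listed above are importable before running the tool.
import importlib.util

# pip package name -> module name it is imported as
DEPS = {
    "PyPDF2": "PyPDF2",
    "python-docx": "docx",
    "beautifulsoup4": "bs4",
    "requests": "requests",
    "python-pptx": "pptx",
    "yt-dlp": "yt_dlp",
    "youtube-transcript-api": "youtube_transcript_api",
}

def missing_deps(deps=DEPS):
    """Return the pip names of dependencies that are not importable."""
    return [pip for pip, mod in deps.items()
            if importlib.util.find_spec(mod) is None]

# Usage: print("Missing:", " ".join(missing_deps())) before parsing.
```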
TODO: Add links here
```bash
# Parse a PDF (outputs to data/output/document.txt)
python src/main.py docs/report.pdf

# Parse a website
python src/main.py https://en.wikipedia.org/wiki/Artificial_intelligence

# Get YouTube video transcripts
python src/main.py "https://www.youtube.com/watch?v=dQw4w9WgXcQ"

# Custom output location
python src/main.py docs/presentation.pptx -o my_training_data/

# Specify the output filename
python src/main.py docs/contract.docx -n legal_text_001.txt

# Use verbose mode for debugging
python src/main.py weird_file.pdf -v
```
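For batch jobs it can be handy to drive the CLI above from Python. A hedged sketch, assuming only the flags documented above (`-o`, `-n`, `-v`); the command-assembly helper is hypothetical, not part of the tool:

```python
# Hypothetical batch driver: assemble src/main.py invocations and run them
# with subprocess, one per input document.
import subprocess
from pathlib import Path

def build_command(source, output_dir=None, name=None, verbose=False):
    """Assemble a src/main.py invocation using the documented flags."""
    cmd = ["python", "src/main.py", str(source)]
    if output_dir:
        cmd += ["-o", str(output_dir)]
    if name:
        cmd += ["-n", name]
    if verbose:
        cmd.append("-v")
    return cmd

# Parse every PDF under docs/ (uncomment to actually run):
# for pdf in sorted(Path("docs").glob("*.pdf")):
#     subprocess.run(build_command(pdf), check=True)
```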
All outputs are saved as UTF-8 `.txt` files in `data/output/` unless otherwise set.
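The default naming convention above (input file in, same-stem `.txt` out under `data/output/`) can be sketched as follows; this helper is illustrative only, and the real `src/main.py` may compute paths differently:

```python
# Illustrative helper for the default output convention:
# input document -> UTF-8 .txt under data/output/ with the same stem.
from pathlib import Path

def default_output_path(input_path, output_dir="data/output"):
    """Map an input document to its default parsed-text location."""
    return Path(output_dir) / (Path(input_path).stem + ".txt")

print(default_output_path("docs/report.pdf"))  # prints data/output/report.txt on POSIX
```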
```
├── data/                     # Where docs live
│   ├── pdf/                  # PDF documents
│   ├── html/                 # HTML files
│   ├── youtube/              # YouTube transcript stuff
│   ├── docx/                 # Word documents
│   ├── ppt/                  # PowerPoint slides
│   ├── txt/                  # Plain text
│   └── output/               # Where the magic happens (output)
│
├── src/                      # The code that makes it tick
│   ├── parsers/              # All our parser implementations
│   │   ├── pdf_parser.py     # PDF -> text
│   │   ├── html_parser.py    # HTML/web -> text
│   │   ├── youtube_parser.py # YouTube -> text
│   │   ├── docx_parser.py    # Word -> text
│   │   ├── ppt_parser.py     # PowerPoint -> text
│   │   ├── txt_parser.py     # Text -> text (not much to do here)
│   │   └── __init__.py
│   ├── __init__.py
│   ├── main.py               # CLI entry point
│   └── generate_qa.py        # Creates Q&A pairs from text
│
└── README.md                 # You are here
```
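Conceptually, `main.py` routes each input to one of the parsers listed above. A minimal sketch of that dispatch, assuming a simple extension/URL lookup (the actual implementation may differ):

```python
# Sketch of extension/URL -> parser dispatch over the parsers listed above.
from pathlib import Path

EXTENSION_TO_PARSER = {
    ".pdf": "pdf_parser",
    ".html": "html_parser",
    ".htm": "html_parser",
    ".docx": "docx_parser",
    ".pptx": "ppt_parser",
    ".txt": "txt_parser",
}

def pick_parser(source: str) -> str:
    """Return the parser module name for a local file or a URL."""
    if source.startswith(("http://", "https://")):
        # URLs go to the HTML parser, except YouTube links.
        if "youtube.com" in source or "youtu.be" in source:
            return "youtube_parser"
        return "html_parser"
    ext = Path(source).suffix.lower()
    if ext not in EXTENSION_TO_PARSER:
        raise ValueError(f"Unsupported file type: {ext}")
    return EXTENSION_TO_PARSER[ext]
```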
After parsing your documents, the next step is to turn them into QA pairs.
Use the `generate_qa.py` script to create QA pairs with the Cerebras LLM API:
```bash
# Set your API key first
export CEREBRAS_API_KEY="your_key_here"

# This happens in 3 steps:
# 1. Summarize the doc
# 2. Generate QA
# 3. Evaluate & filter based on relevance
python src/generate_qa.py docs/report.pdf

# Customize the generation
python src/generate_qa.py docs/report.pdf --num-pairs 30 --threshold 7.0

# Skip parsing if you already have text
python src/generate_qa.py docs/report.pdf --text-file data/output/report.txt

# Save output to a specific directory
python src/generate_qa.py docs/report.pdf --output-dir training_data/
```
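Step 3 above (evaluate & filter) keeps only QA pairs whose relevance score clears the `--threshold` value. A hedged sketch of that filtering step; the field names (`question`, `answer`, `score`) are assumptions, not the script's actual schema:

```python
# Sketch of the evaluate-and-filter step: drop QA pairs scored below the
# threshold. Field names here are assumptions for illustration.
def filter_qa_pairs(pairs, threshold=7.0):
    """Keep pairs whose relevance score is at or above `threshold`."""
    return [p for p in pairs if p.get("score", 0.0) >= threshold]

pairs = [
    {"question": "What does the report cover?", "answer": "...", "score": 8.5},
    {"question": "Off-topic question?", "answer": "...", "score": 3.0},
]
# With the default 7.0 threshold, only the first pair survives.
```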
Time to set up: ~20-30 minutes:
```bash
conda create -n test-ft python=3.10
conda activate test-ft
pip install --pre torch torchvision torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install --pre torchtune --extra-index-url https://download.pytorch.org/whl/nightly/cpu --no-cache-dir
pip install transformers datasets wandb
pip install "huggingface_hub[cli]"

huggingface-cli login
wandb login

git clone https://github.com/meta-llama/llama-cookbook/
cd llama-cookbook/
git checkout data-tool
cd end-to-end-use-cases/data-tool/scripts/finetuning

tune download meta-llama/Meta-Llama-3.1-70B-Instruct --output-dir /tmp/Meta-Llama-3.1-70B-Instruct --ignore-patterns "original/consolidated*"
tune run --nproc_per_node 8 full_finetune_distributed --config ft-config.yaml
```
The end goal of this effort is to serve as a fine-tuning data preparation kit.
Currently (WIP), I'm evaluating extending the tool to improve tool-calling datasets.