If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks covering different techniques related to Meta Llama:

* `RAG/`
* `Running_Llama3_Anywhere/`
* `agents/`
* `finetuning/`
* `inference/`
* `Getting_to_know_Llama.ipynb`
* `Prompt_Engineering_with_Llama_3.ipynb`
| Feature | Supported |
| ---------------------------------------------- | - |
| HF support for finetuning | ✅ |
| Deferred initialization (`meta` init) | ✅ |
| HF support for inference | ✅ |
| Low CPU mode for multi GPU | ✅ |
| Mixed precision | ✅ |
| Single node quantization | ✅ |
| Flash attention | ✅ |
| PEFT | ✅ |
| Activation checkpointing FSDP | ✅ |
| Hybrid Sharded Data Parallel (HSDP) | ✅ |
| Dataset packing & padding | ✅ |
| BF16 optimizer (pure BF16) | ✅ |
| Profiling & MFU tracking | ✅ |
| Gradient accumulation | ✅ |
| CPU offloading | ✅ |
| FSDP checkpoint conversion to HF for inference | ✅ |
| W&B experiment tracker | ✅ |
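The `finetuning/` folder demonstrates these features in context. As a rough orientation, here is a minimal sketch of what the "HF support for finetuning" and "PEFT" rows amount to, assuming the `transformers` and `peft` packages and access to a Llama checkpoint on Hugging Face; the model ID and LoRA hyperparameters below are illustrative choices, not the recipes' defaults:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative checkpoint; any Llama model you have access to works here.
model_id = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # pure BF16, as in the feature list above
)

# Wrap the base model with LoRA adapters so only a small
# fraction of the weights is trained (the PEFT row above).
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the trainable-parameter fraction
```

From here the wrapped model can be trained with any standard Hugging Face training loop; the notebooks and scripts in `finetuning/` show the full recipes, including FSDP, quantization, and checkpoint conversion.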