@@ -0,0 +1,34 @@
+# Distillation with Llama 4 and Synthetic Data Kit
+
+*Copyright (c) Meta Platforms, Inc. and affiliates.*
+*This software may be used and distributed according to the terms of the Llama Community License Agreement.*
+
+<a href="https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/getting-started/distillation/distillation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
+
+This notebook walks you through distilling knowledge from a Llama 4 teacher model into a smaller Llama 3.2 student model, using synthetic training data generated with Synthetic Data Kit.
+
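+As a quick orientation before opening the notebook: the final stage of this workflow is supervised fine-tuning of the Llama 3.2 student on the teacher-generated pairs. The sketch below illustrates that stage with Hugging Face TRL; the student checkpoint, dataset path, and hyperparameters are illustrative assumptions rather than the notebook's exact settings.
+
+```python
+# Minimal sketch: fine-tune a Llama 3.2 student on synthetic QA pairs
+# exported by Synthetic Data Kit in a fine-tuning-ready format.
+# Model ID, data path, and training settings are assumptions for illustration.
+from datasets import load_dataset
+from transformers import AutoModelForCausalLM
+from trl import SFTConfig, SFTTrainer
+
+student_id = "meta-llama/Llama-3.2-1B-Instruct"  # assumed student checkpoint
+model = AutoModelForCausalLM.from_pretrained(student_id)
+
+# Conversational records ({"messages": [...]}) distilled from Llama 4 outputs
+# (the file path is a placeholder, not the notebook's exact output location).
+dataset = load_dataset("json", data_files="data/final/qa_pairs_ft.json", split="train")
+
+trainer = SFTTrainer(
+    model=model,
+    train_dataset=dataset,
+    args=SFTConfig(output_dir="llama32-distilled", num_train_epochs=1),
+)
+trainer.train()
+```
+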
+[View notebook](/getting-started/distillation/distillation.ipynb)