{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Meta Platforms, Inc. and affiliates.\n",
    "This software may be used and distributed according to the terms of the Llama 2 Community License Agreement."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## PEFT Finetuning Quick Start Notebook\n",
    "\n",
    "This notebook shows how to train a Meta Llama 3 model on a single GPU (e.g. an A10 with 24GB of memory) using int8 quantization and LoRA."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 0: Install prerequisites and convert checkpoint\n",
    "\n",
    "We need llama-recipes and its dependencies installed for this notebook. Additionally, we need to log in with the huggingface-cli and make sure the account has access to the Meta Llama weights."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment the lines below on the first run to install the dependencies\n",
    "# and authenticate with Hugging Face:\n",
    "\n",
    "# ! pip install llama-recipes ipywidgets\n",
    "\n",
    "# import huggingface_hub\n",
    "# huggingface_hub.login()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 1: Load the model\n",
    "\n",
    "Set up the training configuration and load the model and tokenizer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c7963d43806d432aaa3d00e2055e355c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards: 0%| | 0/4 [00:00