{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to Distributed Deep Learning - Part 3\n", "\n", "**Contents of this notebook:**\n", "\n", "- [Hands-on with Distributed training](#Hands-on-with-Distributed-training)\n", " - [Tensorflow - Keras](#Tensorflow---Keras)\n", " - [Horovod](#Horovod)\n", "\n", "**By the End of this Notebook you will:**\n", "\n", "- Implement distributed deep learning training using Tensorflow / Keras.\n", "- Learn concepts of Horovod and implement them." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "# Hands-on with Distributed training\n", "\n", "In this notebook, we will be focusing on **training** Deep Neural networks using **multiple GPUs** using **Data-parallelism**. We will be using the following frameworks for demonstration. \n", "\n", "- [Tensorflow - Keras](#Tensorflow---Keras)\n", "- [Horovod](#Horovod)\n", "\n", "In both cases, we will use **Single Host Multiple Device** implementations, although notes on **Multi-host multiple devices** will be present at relevant places.\n", "\n", "You can use either / both of the methods that we mention below. We recommend trying out both of them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tensorflow - Keras\n", "\n", "\n", "Tensorflow uses `tf.distribute.Strategy` API to distribute training across multiple GPUs, multiple machines or TPUs. Using this API, you can distribute your existing models and training code with minimal code changes. `tf.distribute.Strategy` can be used with a high-level API like **Keras**, and can also be used to distribute custom training loops.\n", "\n", "Tensorflow Strategies can be briefly summarised into two axes : \n", "\n", "- **Synchronous vs asynchronous** training: \n", " - In sync training, all workers train over different slices of input data in sync and aggregate gradients at each step. \n", " - In async training, all workers are independently training over the input data and updating variables asynchronously. \n", "\n", "\n", "\n", "- **Hardware platform**: You may want to scale your training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.\n", "\n", "In order to support these use cases, there are 6 strategies available. Here is a short description of the strategies :\n", "\n", "- **MirroredStrategy**\n", " - `tf.distribute.MirroredStrategy` supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called MirroredVariable. These variables are kept in sync with each other by applying identical updates.\n", "- **MultiWorkerMirroredStrategy**\n", " - `tf.distribute.MultiWorkerMirroredStrategy` is very similar to MirroredStrategy. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to `tf.distribute.MirroredStrategy`, it creates copies of all variables in the model on each device across all workers.\n", " - **Note** : For multi-worker training, you need to set up the `TF_CONFIG` environment variable for each binary running in your cluster. 
The `TF_CONFIG` environment variable is a JSON string which specifies what tasks constitute a cluster, their addresses and each task's role in the cluster.\n", "- **TPUStrategy**\n", " - `tf.distribute.TPUStrategy` lets you run your TensorFlow training on Tensor Processing Units (TPUs).\n", "- **CentralStorageStrategy**\n", " - `tf.distribute.experimental.CentralStorageStrategy` does synchronous training as well. Variables are not mirrored, instead they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.\n", "- **OneDeviceStrategy**\n", " - `tf.distribute.OneDeviceStrategy` is a strategy to place all variables and computation on a single specified device.\n", "- **ParameterServerStrategy**\n", " - Parameter server training is a common data-parallel method to scale up model training on multiple machines. A parameter server training cluster consists of workers and parameter servers. Variables are created on parameter servers and they are read and updated by workers in each step.\n", "\n", "\n", "Tensorflow by default uses [NCCL](https://developer.nvidia.com/nccl) for communication between GPUs. Kindly refer to the [System Topology Notebook](#as) to learn more about NCCL.\n", "\n", "Now that we've given an overview of Tensorflow and the available functionalities let us try out an example using **MirroredStrategy**. \n", "\n", "We will be building a small CNN and train it on the **FMNIST dataset**." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Import Necessary libraries\n", "import tensorflow_datasets as tfds\n", "import tensorflow as tf\n", "from tensorflow import keras\n", "import time\n", "import os\n", "import sys\n", "\n", "\n", "# Set number of GPUs to use for training\n", "os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0,1,2,3,4,5,6,7\"\n", "\n", "#Print Tensorflow version\n", "print(tf.__version__)\n", "\n", "# Download the MNIST dataset and load it from [TensorFlow Datasets](https://www.tensorflow.org/datasets). This returns a dataset in `tf.data` format.\n", "\n", "# Setting `with_info` to `True` includes the metadata for the entire dataset, which is being saved here to `info`.\n", "# Among other things, this metadata object includes the number of train and test examples. \n", "datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)\n", "\n", "mnist_train, mnist_test = datasets['train'], datasets['test']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now define a distribution strategy." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "### Define distribution strategy\n", "# Create a `MirroredStrategy` object. This will handle distribution, \n", "# and provides a context manager (`tf.distribute.MirroredStrategy.scope`) to build your model inside.\n", "strategy = tf.distribute.MirroredStrategy()\n", "print('Number of devices: {}'.format(strategy.num_replicas_in_sync))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Setup input pipeline\n", "# When training a model with multiple GPUs, you can use the extra computing power effectively by increasing the batch size. 
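\n", "# Note: the dataset below is batched with the GLOBAL batch size (BATCH_SIZE); tf.distribute\n", "# then splits each global batch across the replicas, so every GPU still processes\n", "# BATCH_SIZE_PER_REPLICA samples per step.\n", "# 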
In general, use the largest batch size that fits the GPU memory, and tune the learning rate accordingly.\n", "num_train_examples = info.splits['train'].num_examples\n", "num_test_examples = info.splits['test'].num_examples\n", "\n", "BUFFER_SIZE = 10000\n", "\n", "# Setting the batch size per GPU / replica\n", "BATCH_SIZE_PER_REPLICA = 512\n", "BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync\n", "\n", "\n", "# Feature scaling function\n", "def scale(image, label):\n", " image = tf.cast(image, tf.float32)\n", " image /= 255\n", "\n", " return image, label\n", "\n", "\n", "# Apply this function to the training and test data, shuffle the training data, and batching it.\n", "train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\n", "eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now create and compile the model , note that we need to create and compile the model under the context of `strategy.scope`.\n", "\n", "Let us also define callbacks to calculate Throughput obtained.\n", "\n", "**Note : We use the same model used in Notebook-1 scaling efficiency experiment, that should help us calculate scaling efficiency from the available data**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with strategy.scope():\n", " model = tf.keras.Sequential([\n", " tf.keras.layers.Conv2D(32, [3, 3], activation='relu',input_shape=(28, 28, 1)),\n", " tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),\n", " tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),\n", " tf.keras.layers.Dropout(0.25),\n", " tf.keras.layers.Flatten(),\n", " tf.keras.layers.Dense(128, activation='relu'),\n", " tf.keras.layers.Dropout(0.5),\n", " tf.keras.layers.Dense(10, activation='softmax')])\n", "\n", " model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n", " optimizer=tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam()),\n", " metrics=['accuracy'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ## Define the callbacks\n", "# Callback for printing the LR at the end of each epoch.\n", "class Throughput(tf.keras.callbacks.Callback):\n", " def __init__(self, total_images=0):\n", " self.total_images = total_images\n", " def on_epoch_begin(self, epoch, logs=None):\n", " self.epoch_start_time = time.time()\n", " def on_epoch_end(self, epoch, logs=None):\n", " epoch_time = time.time() - self.epoch_start_time\n", " print('Epoch time : {}'.format(epoch_time))\n", " images_per_sec = round(self.total_images / epoch_time, 2)\n", " print('Images/sec: {}'.format(images_per_sec))\n", " \n", "# Now, train the model in the usual way, calling `fit` on the model and passing in the dataset created at the beginning of the tutorial. 
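\n", "# Under the hood, each replica computes gradients on its own slice of the global batch and the\n", "# gradients are all-reduced (via NCCL) before every synchronous variable update.\n", "# The eval_dataset prepared earlier is not used below; optionally, you could call\n", "# model.evaluate(eval_dataset) after training to get a test-set score.\n", "# 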
This step is the same whether you are distributing the training or not.\n", "model.fit(train_dataset, epochs=8, callbacks=Throughput(total_images=len(mnist_train)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Result\n", "\n", "Output of running the command on DGX-1 : \n", "\n", "```bash\n", "Epoch 1/8\n", "15/15 [==============================] - 23s 72ms/step - loss: 1.2977 - accuracy: 0.6140\n", "Epoch time : 23.148940801620483\n", "Images/sec: 2591.91\n", "Epoch 2/8\n", "15/15 [==============================] - 0s 19ms/step - loss: 0.4460 - accuracy: 0.8667\n", "Epoch time : 0.31006669998168945\n", "Images/sec: 193506.75\n", "Epoch 3/8\n", "15/15 [==============================] - 0s 16ms/step - loss: 0.2840 - accuracy: 0.9169\n", "Epoch time : 0.2699730396270752\n", "Images/sec: 222244.41\n", "Epoch 4/8\n", "15/15 [==============================] - 0s 18ms/step - loss: 0.1961 - accuracy: 0.9426\n", "Epoch time : 0.2857208251953125\n", "Images/sec: 209995.19\n", "Epoch 5/8\n", "15/15 [==============================] - 0s 17ms/step - loss: 0.1482 - accuracy: 0.9579\n", "Epoch time : 0.27691197395324707\n", "Images/sec: 216675.35\n", "Epoch 6/8\n", "15/15 [==============================] - 0s 19ms/step - loss: 0.1178 - accuracy: 0.9658\n", "Epoch time : 0.3074929714202881\n", "Images/sec: 195126.41\n", "Epoch 7/8\n", "15/15 [==============================] - 0s 26ms/step - loss: 0.0999 - accuracy: 0.9699\n", "Epoch time : 0.43077993392944336\n", "Images/sec: 139282.25\n", "Epoch 8/8\n", "15/15 [==============================] - 0s 17ms/step - loss: 0.0851 - accuracy: 0.9748\n", "Epoch time : 0.2854182720184326\n", "Images/sec: 210217.8\n", "```" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Kindly restart the kernel or run the following cell to restart the kernel to free up GPU memory before procedding to the further sections." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import IPython\n", "app = IPython.Application.instance()\n", "app.kernel.do_shutdown(True) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Horovod\n", "\n", "Horovod is a distributed deep learning training framework. It is available for TensorFlow, Keras, PyTorch, and Apache MXNet.\n", "\n", "Horovod is an open-source tool initially developed by Uber to support their need for faster deep learning model training across many engineering teams. It is part of a growing ecosystem of approaches to distributed training, including for example, Distributed TensorFlow. Uber decided to develop a solution that utilised MPI for distributed process communication, and the NVIDIA Collective Communications Library (NCCL) for its highly optimised implementation of reductions across distributed processes and nodes. The resulting Horovod package delivers on its promise to scale deep learning model training across multiple GPUs and multiple nodes, with only minor code modification and intuitive debugging.\n", "\n", "\n", "#### Horovod's MPI Roots\n", "\n", "Horovod's connection to MPI runs deep, and for programmers familiar with MPI programming, much of what you program to distribute model training with Horovod will feel very familiar. For those unfamiliar with MPI programming, a brief discussion of some of the conventions and considerations required when distributing processes with Horovod, or MPI, is worthwhile. 
Horovod, as with MPI, strictly follows the Single-Program Multiple-Data (SPMD) paradigm where we implement the instruction flow of multiple processes in the same file/program. Because multiple processes are executing code in parallel, we have to take care about race conditions and also the synchronisation of participating processes.Horovod assigns a unique numerical ID or rank (an MPI concept) to each process executing the program. This rank can be accessed programmatically. As you will see below, when writing Horovod code, by identifying a process's rank programmatically in the code, we can take steps such as:\n", "\n", "- Pin that process to its own exclusive GPU.\n", "- Utilise a single rank for broadcasting values that need to be used uniformly by all ranks.\n", "- Utilise a single rank for collecting and/or reducing values produced by all ranks.\n", "- Utilise a single rank for logging or writing to disk.\n", "\n", "\n", "To use Horovod with Tensorflow, we would need to make some modifications. The modifications can be listed as follows :\n", "\n", "1. Initialise Horovod \n", "\n", "```python\n", "hvd.init()\n", "```\n", "\n", "2. Pin each GPU to a single process.\n", "\n", "```python\n", "gpus = tf.config.experimental.list_physical_devices('GPU')\n", "for gpu in gpus:\n", " tf.config.experimental.set_memory_growth(gpu, True)\n", "if gpus:\n", " tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')\n", "```\n", "\n", "3. Wrap the optimiser in Horovod Distributed optimiser. The distributed optimiser delegates gradient computation to the original optimiser, averages gradients using `all-reduce` or `all-gather`, and then applies those averaged gradients. \n", "\n", "```python \n", "hvd.DistributedOptimizer\n", "```\n", "\n", "\n", "4. Broadcast the initial variable states from rank 0 to all other processes.\n", "\n", "This is necessary to ensure consistent initialisation of all workers when training is started with random weights or restored from a checkpoint.\n", "\n", "This can be done by using the `hvd.broadcast_variables` method after models and optimisers have been initialised.\n", "\n", "5. Modify your code to save checkpoints only on worker 0 to prevent other workers from corrupting them.\n", "\n", "\n", "Let us now go over the modifications using a test code. \n", "\n", "**Note that the below code will run as a single process and will only use one GPU , this is used for explanations and a description on how to write it for multi-GPUs is given at the end**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Import Necessary libraries\n", "import tensorflow_datasets as tfds\n", "import tensorflow as tf\n", "from tensorflow import keras\n", "import time\n", "import os\n", "import sys\n", "# Import Horovod\n", "import horovod.tensorflow.keras as hvd\n", "# 1. Horovod: initialize Horovod.\n", "hvd.init()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the typical setup of one GPU per process, setting this to `local_rank`. The first process on the server will be allocated the first GPU, the second process will be allocated the second GPU, and so forth." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 2. 
Horovod: pin GPU to be used to process local rank (one GPU per process)\n", "gpus = tf.config.experimental.list_physical_devices('GPU')\n", "for gpu in gpus:\n", " tf.config.experimental.set_memory_growth(gpu, True)\n", "if gpus:\n", " tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')\n", "\n", " \n", "# Load dataset and batching.\n", "(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data(path='mnist-%d.npz' % hvd.rank())\n", "\n", "dataset = tf.data.Dataset.from_tensor_slices(\n", " (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),\n", " tf.cast(mnist_labels, tf.int64))\n", ")\n", "BATCH_SIZE_PER_REPLCIA = 1024\n", "dataset = dataset.repeat().shuffle(10000).batch(BATCH_SIZE_PER_REPLCIA)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now , each process will have a copy of the model , we then wrap the Optimizer with `hvd.DistributedOptimizer`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Building the model\n", "mnist_model = tf.keras.Sequential([\n", " tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),\n", " tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),\n", " tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),\n", " tf.keras.layers.Dropout(0.25),\n", " tf.keras.layers.Flatten(),\n", " tf.keras.layers.Dense(128, activation='relu'),\n", " tf.keras.layers.Dropout(0.5),\n", " tf.keras.layers.Dense(10, activation='softmax')\n", "])\n", "\n", "# Use Adam optimizer for training \n", "opt = tf.optimizers.Adam()\n", "\n", "# 3. Horovod: Wrap optimizer with DistributedOptimizer.\n", "opt = hvd.DistributedOptimizer(opt, backward_passes_per_step=1, average_aggregated_gradients=True)\n", "\n", "# Horovod: Specify `experimental_run_tf_function=False` to ensure TensorFlow\n", "# uses hvd.DistributedOptimizer() to compute gradients.\n", "mnist_model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),\n", " optimizer=opt,\n", " metrics=['accuracy'],\n", " experimental_run_tf_function=False)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Callbacks \n", "\n", "We define some necessary callbacks , they are : \n", "\n", "1. `hvd.callbacks.BroadcastGlobalVariablesCallback(0)` : This is necessary to ensure consistent initialization of all workers when training is started with random weights or restored from a checkpoint.\n", "\n", "2. `hvd.callbacks.MetricAverageCallback()` : Since we are not validating the full dataset on each worker anymore, each worker will have different validation results. To improve validation metric quality and reduce variance, we will average validation results among all workers.\n", "\n", "3. `Throughput()` : We define the same callback used in the Tensorflow version. This callback gives us the value of the throughput in `Images/sec` to better understand the throughput of the system.\n", "\n", "We also make sure to set the `verbose` parameter in `model.fit()` to ensure only one worker prints the results as all workers have identical results. We then fit the dataset, note that we have to define the `steps_per_epoch` parameter. 
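\n", "\n", "`steps_per_epoch` is needed here because the dataset is `repeat()`-ed, so Keras cannot infer an epoch length on its own; dividing the number of training samples by the global batch size (per-replica batch size multiplied by `hvd.size()`) makes one epoch correspond to roughly one collective pass over the data.\n", "\n", "As noted in modification 5 above, checkpoints should be written by a single rank only so that workers do not overwrite each other's files. If you add checkpointing, a minimal sketch of the usual pattern (not part of the training run below) is:\n", "\n", "```python\n", "# Sketch: after the `callbacks` list in the next cell is created, only rank 0 adds a checkpoint callback.\n", "if hvd.rank() == 0:\n", "    callbacks.append(tf.keras.callbacks.ModelCheckpoint('./checkpoint-{epoch}.h5'))\n", "```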
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class Throughput(tf.keras.callbacks.Callback):\n", " def __init__(self, total_images=0):\n", " self.total_images = total_images\n", " def on_epoch_begin(self, epoch, logs=None):\n", " self.epoch_start_time = time.time()\n", " def on_epoch_end(self, epoch, logs=None):\n", " if hvd.rank() == 0 :\n", " epoch_time = time.time() - self.epoch_start_time\n", " print('Epoch time : {}'.format(epoch_time))\n", " images_per_sec = round(self.total_images / epoch_time, 2)\n", " print('Images/sec: {}'.format(images_per_sec))\n", "\n", "callbacks = [\n", " # Horovod: broadcast initial variable states from rank 0 to all other processes.\n", " hvd.callbacks.BroadcastGlobalVariablesCallback(0),\n", " # Horovod: average metrics among workers at the end of every epoch.\n", " hvd.callbacks.MetricAverageCallback(),\n", " # Callback to calculate Throughput\n", " Throughput(total_images=len(mnist_labels))\n", "]\n", "\n", "\n", "# Horovod: write logs on worker 0.\n", "verbose = 1 if hvd.rank() == 0 else 0\n", "\n", "# Train the model.\n", "# Horovod: adjust number of steps based on number of GPUs.\n", "mnist_model.fit(dataset, steps_per_epoch=len(mnist_labels) // (BATCH_SIZE_PER_REPLCIA*hvd.size()), callbacks=callbacks, epochs=8, verbose=verbose)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Kindly restart the kernel or run the following cell to restart the kernel to free up GPU memory before procedding to the further sections." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import IPython\n", "app = IPython.Application.instance()\n", "app.kernel.do_shutdown(True) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Like we mentioned earlier , the above cells would run as a single process and thus not utilise multiple GPUs , now to launch multiple process, we need to use `horovodrun` command which inturn invokes the [mpirun](#https://www.open-mpi.org/doc/v4.0/man1/mpirun.1.php) command with certain optimisations.\n", "\n", "To run on a machine with 4 GPUs:\n", "\n", "```bash\n", "$ horovodrun -np 4 python train.py\n", "```\n", "\n", "To run on 4 machines with 4 GPUs each:\n", "\n", "```bash\n", "$ horovodrun -np 16 -H server1:4,server2:4,server3:4,server4:4 python train.py\n", "```\n", "\n", "Let us now run it with multiple-gpus by setting the `-np` flag , the `-np` flag sets the number of copies of the program that we want to run." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!TF_CPP_MIN_LOG_LEVEL=3 horovodrun -np 8 --mpi-args=\"--oversubscribe\" python3 ../source_code/N3/cnn_fmnist.py --batch-size=2048 2> /dev/null" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Result and scaling efficiency\n", "\n", "\n", "Let us now look at the different results that obtained and calculate the scaling efficinecy.\n", "\n", "Output of running the command on DGX-1 : \n", "\n", "- Horovod ( 1 GPU ) : \n", "\n", "```bash\n", "Epoch 1/8\n", "58/58 [==============================] - 4s 24ms/step - loss: 0.5677 - accuracy: 0.8261\n", "Epoch time : 4.478050947189331\n", "Images/sec: 13398.69\n", "Epoch 2/8\n", "58/58 [==============================] - 1s 16ms/step - loss: 0.1593 - accuracy: 0.9531\n", "Epoch time : 0.9169270992279053\n", "Images/sec: 65435.95\n", "Epoch 3/8\n", "58/58 [==============================] - 1s 16ms/step - loss: 0.1059 - accuracy: 0.9694\n", "Epoch time : 0.9156594276428223\n", "Images/sec: 65526.55\n", "Epoch 4/8\n", "58/58 [==============================] - 1s 16ms/step - loss: 0.0840 - accuracy: 0.9750\n", "Epoch time : 0.9237456321716309\n", "Images/sec: 64952.95\n", "Epoch 5/8\n", "58/58 [==============================] - 1s 16ms/step - loss: 0.0709 - accuracy: 0.9791\n", "Epoch time : 0.906282901763916\n", "Images/sec: 66204.49\n", "Epoch 6/8\n", "58/58 [==============================] - 1s 16ms/step - loss: 0.0601 - accuracy: 0.9819\n", "Epoch time : 0.9177072048187256\n", "Images/sec: 65380.33\n", "Epoch 7/8\n", "58/58 [==============================] - 1s 16ms/step - loss: 0.0546 - accuracy: 0.9834\n", "Epoch time : 0.906665563583374\n", "Images/sec: 66176.55\n", "Epoch 8/8\n", "58/58 [==============================] - 1s 16ms/step - loss: 0.0487 - accuracy: 0.9851\n", "Epoch time : 0.9043354988098145\n", "Images/sec: 66347.06\n", "```\n", "\n", "- Horovod ( 8 GPUs ) \n", "\n", "```bash\n", "[1,0]:Epoch 1/8\n", "[1,0]:3/3 - 7s - loss: 2.1803 - accuracy: 0.2632\n", "[1,0]:Epoch time : 7.058096170425415\n", "[1,0]:Images/sec: 8500.88\n", "[1,0]:Epoch 2/8\n", "[1,0]:3/3 - 0s - loss: 1.5363 - accuracy: 0.5832\n", "[1,0]:Epoch time : 0.1487276554107666\n", "[1,0]:Images/sec: 403421.94\n", "[1,0]:Epoch 3/8\n", "[1,0]:3/3 - 0s - loss: 0.9254 - accuracy: 0.7147\n", "[1,0]:Epoch time : 0.1277930736541748\n", "[1,0]:Images/sec: 469509.01\n", "[1,0]:Epoch 4/8\n", "[1,0]:3/3 - 0s - loss: 0.7013 - accuracy: 0.7752\n", "[1,0]:Epoch time : 0.11966228485107422\n", "[1,0]:Images/sec: 501411.12\n", "[1,0]:Epoch 5/8\n", "[1,0]:3/3 - 0s - loss: 0.5648 - accuracy: 0.8245\n", "[1,0]:Epoch time : 0.11635589599609375\n", "[1,0]:Images/sec: 515659.3\n", "[1,0]:Epoch 6/8\n", "[1,0]:3/3 - 0s - loss: 0.4633 - accuracy: 0.8568\n", "[1,0]:Epoch time : 0.11620521545410156\n", "[1,0]:Images/sec: 516327.94\n", "[1,0]:Epoch 7/8\n", "[1,0]:3/3 - 0s - loss: 0.4102 - accuracy: 0.8748\n", "[1,0]:Epoch time : 0.11935830116271973\n", "[1,0]:Images/sec: 502688.12\n", "[1,0]:Epoch 8/8\n", "[1,0]:3/3 - 0s - loss: 0.3582 - accuracy: 0.8957\n", "[1,0]:Epoch time : 0.12322497367858887\n", "[1,0]:Images/sec: 486914.29\n", "```\n", "\n", "#### Scaling efficiency\n", "\n", "|#GPUs |Samples/sec|Scaling efficiency|\n", "|-|-|-|\n", "|1| 66176| NA|\n", "|8| 486914| ~91% |\n", "\n", "We have achieved a **7.3x** improvement in throughput and an impressive **91%** scaling efficiency. 
\n", "\n", "But, If we take a closer look at the results, we can find even after eight epochs in both cases, the run with a single GPU at the end of 8 epochs has a **loss of 0.0487 and accuracy of 0.9851** , comparing that with 8 GPU case, we find at the end of 8 epochs we have a **loss of 0.3582 and accuracy of 0.8957**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**This increase in convergence time is noticed when training with larger batch sizes ( when we scale across GPUs, we create a large batch that has a size multiplied by the number of GPUs). Let us discuss the reasons for this and some techniques that we can use for faster convergence in the upcoming notebook.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "\n", "## Licensing\n", "\n", "This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " Previous Notebook\n", " \n", " 1\n", " 2\n", " 3\n", " 4\n", " \n", " Next Notebook\n", "
\n", "\n", "

Home Page

\n", "\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "toc-autonumbering": false, "toc-showcode": false, "toc-showmarkdowntxt": false }, "nbformat": 4, "nbformat_minor": 4 }