{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "In this lab, we will optimize the weather simulation application written in Fortran (if you prefer to use C++, click [this link](../../C/jupyter_notebook/profiling-c.ipynb)). \n", "\n", "Let's execute the cell below to display information about the GPUs running on the server by running the pgaccelinfo command, which ships with the PGI compiler that we will be using. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pgaccelinfo" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 1 \n", "\n", "### Learning objectives\n", "Learn how to assess your serial application, compile, and profile with Nsight systems and find the hotspots. In this exercise you will:\n", "\n", "- Learn how to compile your serial application with PGI compiler\n", "- Learn how to benchmark and profile the serial code using NVIDIA Nsight systems \n", "- Learn how to identify routines responsible for the bulk of the execution time via NVTX markers shown on the Nsight System’s timeline\n", "- Learn about scaling and Amdahl’s law\n", "\n", "To identify opportunities and parallelise the code, understanding the structure of the code is very important.\n", "\n", "**Understand and analyze** the code present at:\n", " \n", "[Serial Code](../source_code/lab1/miniWeather_serial.f90) \n", "\n", "[Makefile](../source_code/lab1/Makefile)\n", "\n", "Open the downloaded file for inspection.\n", "\n", "**Compile** the code with PGI compiler by running `make`. You can get compiler feedback by adding the `-Minfo` flag. Some of the available options are:\n", "\n", "- `accel` – Print compiler operations related to the accelerator\n", "- `all` – Print all compiler output\n", "- `intensity` – Print loop intensity information\n", "\n", "Example usage: `-Minfo=accel`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!cd ../source_code/lab1 && make clean && make" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, if we **profile** the serial code via Nsight Systems command line (see below example command) and download the report, we can investigate the serial code further.\n", "\n", "`nsys profile -t nvtx --stats=true --force-overwrite true -o miniWeather_1 ./miniWeather`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the example command above, we download the profiler output (`miniWeather_1.qdrep`) and open it via the Nsight Systems UI. From the timeline view, checkout the NVTX markers displays as part of threads. **Why are we using NVTX?** Please see the section on [Using NVIDIA Tools Extension (NVTX)](profiling-fortran.ipynb#Using-NVIDIA-Tools-Extension-(NVTX))\n", "\n", "\n", "\n", "You can also checkout NVTX statistic from the terminal console once the profiling session ended. From the NVTX statistics, you can see most of the execution time is spend in `perform_timestep`. This is a function worth checking out.\n", "\n", "\n", "\n", "#### Scaling and Amdahl's law\n", "To plan an incremental parallelization strategy after identifying routines responsible for the bulk of the execution time, it is important to know how the application can scale. 
"#### Scaling and Amdahl's law\n", "To plan an incremental parallelization strategy after identifying the routines responsible for the bulk of the execution time, it is important to know how the application can scale. The amount of performance an application achieves by running on a GPU depends on the extent to which it can be parallelized. Code that cannot be sufficiently parallelized should run on the host, unless doing so would result in excessive transfers between the host and the device. It is very important to understand the relationship between the problem size and the computational performance, as this can determine the amount of speedup and benefit you would get by parallelizing on the GPU. \n", "\n", "Change the values of `nx_glob`, `nz_glob`, and `sim_time` in the code to `nx_glob` = 40, `nz_glob` = 20, and `sim_time` = 10, where:\n", "\n", "* `nx_glob` and `nz_glob` are the total number of cells in the x and z directions\n", "* `sim_time` is the simulation time in seconds\n", "\n", "The total number of cells in the x direction must be twice the total number of cells in the z direction. The default values are 40, 20, and 10 seconds.\n", " \n", "Now, we profile the code again and open the expected example output via the Nsight Systems UI.\n", "\n", "From the timeline view, take a closer look at the NVTX markers in the function table on the left side of the top pane and compare them with the timeline from the previous report. You can see that the most time-consuming part of the application is now the initialization. \n", "\n", "\n", "\n", "Due to the small problem size (`nx_glob`, `nz_glob`, and `sim_time` in this example), most of the computation is dominated by the initialization and there is not enough work to make the application suitable for the GPU. \n", "\n", "According to *Amdahl's law*, the speedup achieved by accelerating portions of an application is limited by the code sections that are not accelerated. Before parallelizing an application, it is important to know that the overall performance improvement gained by optimizing a portion of the code is limited by the fraction of time that the improved section is actually used. In other words, you may speed up a portion of the code by a factor of N, but if only a small fraction of the execution time is spent in that portion, the overall performance will not improve substantially.\n", "\n",
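"In equation form (this is the standard statement of Amdahl's law, not something specific to this lab): if a fraction $f$ of the execution time is spent in the portion being accelerated, and that portion is sped up by a factor $s$, the overall speedup is\n",
"\n",
"$$ S = \\frac{1}{(1 - f) + f / s} $$\n",
"\n",
"For example, with $f = 0.9$ and $s = 10$, the overall speedup is $1 / (0.1 + 0.09) \\approx 5.3$, so even a 10x speedup of the accelerated portion yields only about a 5.3x overall speedup. With a very small $f$, as in the small-problem run above where initialization dominates, almost no overall speedup is possible.\n",
"\n",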
\n", "\n", "-----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Links and Resources\n", "\n", "[OpenACC API Guide](https://www.openacc.org/sites/default/files/inline-files/OpenACC%20API%202.6%20Reference%20Guide.pdf)\n", "\n", "[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)\n", "\n", "[CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)\n", "\n", "**NOTE**: To be able to see the Nsight System profiler output, please download Nsight System latest version from [here](https://developer.nvidia.com/nsight-systems).\n", "\n", "Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.\n", "\n", "--- \n", "\n", "## Licensing \n", "\n", "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0). " ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 1 }