{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "In this lab, we will optimize the weather simulation application written in C++ (if you prefer to use Fortran, click [this link](../../Fortran/jupyter_notebook/profiling-fortran.ipynb)). \n", "\n", "Let's execute the cell below to display information about the GPUs running on the server by running the pgaccelinfo command, which ships with the PGI compiler that we will be using. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pgaccelinfo" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 3\n", "\n", "### Learning objectives\n", "Learn how to improve GPU occupancy and extract more parallelism by adding more descriptive clauses to the OpenACC loop constructs in the application. In this exercise you will:\n", "\n", "- Learn about GPU occupancy, and OpenACC vs CUDA execution model \n", "- Learn how to find out GPU occupancy from the Nsight Systems profiler\n", "- Learn how to improve the occupancy and saturate compute resources \n", "- Learn about collapse clause for further optimization of the parallel nested loops and when to use them\n", "- Apply collapse clause to eligible nested loops in the application and investigate the profiler report\n", "\n", "Look at the profiler report from the previous exercise again. From the timeline, have a close look at the the kernel functions. We can see that the for example `compute_tendencies_z_383_gpu` kernel has the theoretical occupancy of 37.5% . It clearly shows that the occupancy is a limiting factor. *Occupancy* is a measure of how well the GPU compute resources are being utilized. 
It is the ratio of how much parallelism is actually running to how much parallelism the hardware could run.\n", "\n", "\n", "\n", "NVIDIA GPUs are composed of multiple streaming multiprocessors (SMs), each of which can manage up to 2048 concurrent threads (not all actively running at the same time). Low occupancy indicates that there are not enough active threads to fully utilize the compute resources. Higher occupancy implies that the scheduler has more active threads to choose from and hence can achieve higher performance. So, what does this mean in the OpenACC execution model?\n", "\n", "**Gang, Worker, and Vector**\n", "The CUDA and OpenACC programming models use different terminology for similar ideas. For example, in CUDA, parallel execution is organized into grids, blocks, and threads. The OpenACC execution model, on the other hand, has three levels: gang, worker, and vector. OpenACC assumes the device has multiple processing elements (streaming multiprocessors on NVIDIA GPUs) running in parallel, and the OpenACC execution model maps onto CUDA as below:\n", "\n", "- An OpenACC gang is a CUDA thread block\n", "- A worker is a warp\n", "- An OpenACC vector is a CUDA thread\n", "\n", "\n", "\n", "So, in order to improve occupancy, we have to increase the parallelism within the gang; in other words, we have to increase the number of threads that can be scheduled on the GPU.\n", "\n", "**Optimizing loops and improving occupancy**\n", "Let's have a look at the compiler feedback (*Line 315*) and the corresponding code snippet showing three tightly nested loops. \n", "\n", "\n", "\n", "The iteration count for the outer loop is `NUM_VARS`, which is 4. 
As you can see from the above screenshot, the block dimension is `<4,1,1>`, which reflects the small amount of parallelism within the gang.\n", "\n", "```cpp\n", "#pragma acc parallel loop private(indt, indf1, indf2) \n", " for (ll = 0; ll < NUM_VARS; ll++)\n", " {\n", " for (k = 0; k < nz; k++)\n", " {\n", " for (i = 0; i < nx; i++)\n", " {\n", " indt = ll * nz * nx + k * nx + i;\n", " indf1 = ll * (nz + 1) * (nx + 1) + k * (nx + 1) + i;\n", " indf2 = ll * (nz + 1) * (nx + 1) + k * (nx + 1) + i + 1;\n", " tend[indt] = -(flux[indf2] - flux[indf1]) / dx;\n", " }\n", " }\n", " }\n", "```\n", "\n", "In order to expose more parallelism and improve occupancy, we can use an additional clause called `collapse` on the `#pragma acc loop` directive. The loop directive gives the compiler additional information about the next loop in the source code through several clauses. Apply the `collapse(N)` clause to a loop directive to collapse the next `N` tightly nested loops into a single, flattened loop. This is useful when you have many nested loops or when the individual loops are short. \n", "\n", "When the trip count of any of the tightly nested loops is small relative to the number of threads available on the device, creating a single iteration space across all the nested loops increases the iteration count, thus allowing the compiler to extract more parallelism.\n", "\n", "**Tips on where to use:**\n", "- Collapse outer loops to enable creating more gangs.\n", "- Collapse inner loops to enable longer vector lengths.\n", "- Collapse all loops, when possible, to do both.\n", "\n", "Now, add the `collapse` clause to the code and make the necessary changes to the loop directives. Once done, save the file, re-compile via `make`, and profile it again. \n", "\n", "Click on the [miniWeather_openacc.cpp](../source_code/lab3/miniWeather_openacc.cpp) and [Makefile](../source_code/lab3/Makefile) links and modify `miniWeather_openacc.cpp` and `Makefile`. 
Remember to **SAVE** your code after changes, before running the cells below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!cd ../source_code/lab3 && make clean && make" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start by inspecting the compiler feedback to see whether the optimization was applied. Here is a screenshot of the expected compiler feedback after adding the `collapse` clause to the code. You can see that the nested loops on lines 277 and 315 have been successfully collapsed.\n", "\n", "\n", "\n", "Now, **profile** your code with the Nsight Systems command line tool, `nsys`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!cd ../source_code/lab3 && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o miniWeather_4 ./miniWeather" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Download the profiler output](../source_code/lab3/miniWeather_4.qdrep) and open it via the GUI. Now have a close look at the kernel functions on the timeline and the occupancy.\n", "\n", "\n", "\n", "As you can see from the above screenshot, the theoretical occupancy is now 75% and the block dimension is now `<128,1,1>`, where *128* is the vector length per gang. 
**The screenshots show the profiler report for the input values of 400, 200, 1500.**\n", "\n", "```cpp\n", "#pragma acc parallel loop collapse(3) private(indt, indf1, indf2) \n", " for (ll = 0; ll < NUM_VARS; ll++)\n", " {\n", " for (k = 0; k < nz; k++)\n", " {\n", " for (i = 0; i < nx; i++)\n", " {\n", " indt = ll * nz * nx + k * nx + i;\n", " indf1 = ll * (nz + 1) * (nx + 1) + k * (nx + 1) + i;\n", " indf2 = ll * (nz + 1) * (nx + 1) + k * (nx + 1) + i + 1;\n", " tend[indt] = -(flux[indf2] - flux[indf1]) / dx;\n", " }\n", " }\n", " }\n", "```\n", "\n", "The iteration count for the collapsed loop is `NUM_VARS * nz * nx` where (in the example screenshot),\n", "\n", "- nz = 200,\n", "- nx = 400, and \n", "- NUM_VARS = 4\n", "\n", "So, the iteration count for this particular loop inside the `compute_tendencies_z_383_gpu` function is 320K. Dividing this number by the vector length of *128* gives us the grid dimension of `<2500,1,1>`.\n", "\n", "By creating a single iteration space across the nested loops and increasing the iteration count, we improved the occupancy and extracted more parallelism.\n", "\n", "**Notes:**\n", "- 100% occupancy is not required for, nor does it guarantee, best performance.\n", "- Less than 50% occupancy is often a red flag.\n", "\n", "How much this optimization speeds up the code will vary with the application and the target accelerator, but it is not uncommon to see large speed-ups from using `collapse` on loop nests." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Post-Lab Summary\n", "\n", "If you would like to download this lab for later viewing, it is recommended that you go to your browser's File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well. You can also execute the following cell block to create a zip file of the files you've been working on and download it with the link below."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "cd ..\n", "rm -f openacc_profiler_files.zip\n", "zip -r openacc_profiler_files.zip *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**After** executing the above zip command, you should be able to download the zip file [here](../openacc_profiler_files.zip)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "-----\n", "\n", "#


\n", "\n", "-----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Links and Resources\n", "\n", "[OpenACC API Guide](https://www.openacc.org/sites/default/files/inline-files/OpenACC%20API%202.6%20Reference%20Guide.pdf)\n", "\n", "[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)\n", "\n", "[CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)\n", "\n", "**NOTE**: To be able to see the Nsight System profiler output, please download Nsight System latest version from [here](https://developer.nvidia.com/nsight-systems).\n", "\n", "Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.\n", "\n", "--- \n", "\n", "## Licensing \n", "\n", "This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0)." ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 1 }