GPUBootcamp Official Training Materials

GPU Bootcamps are designed to help developers build confidence in accelerated computing and eventually prepare them to enroll in hackathons.

This repository contains GPU bootcamp material for HPC, AI, and the convergence of the two:

  • HPC :: The bootcamp content focuses on following the Analyze, Parallelize, Optimize cycle to write parallel code with different parallel programming models and accelerate HPC simulations.
    Labs:
    • N-Ways: This bootcamp covers multiple GPU programming models and helps you choose the one that best fits your needs. The material supports several programming languages: C (CUDA C, OpenACC C, OpenMP C, C++ stdpar), Fortran (CUDA Fortran, OpenACC Fortran, OpenMP Fortran, ISO DO CONCURRENT), and Python (Numba, CuPy).
    • OpenACC: This bootcamp covers how to write portable parallel programs that run on multicore CPUs and accelerators such as GPUs, and how to apply incremental parallelization strategies using OpenACC.
    • Multi GPU Programming Model: This bootcamp covers scaling applications to multiple GPUs across multiple nodes. Understanding the underlying technologies and communication topology helps you use high-performance NVIDIA libraries to extract more performance out of the system.
  • Convergence of HPC and AI :: The bootcamp content focuses on how AI can accelerate HPC simulations, introducing the concepts of deep neural networks, including data pre-processing and techniques for building, comparing, and improving the accuracy of deep learning models.
    Labs:
    • Weather Pattern Recognition: This bootcamp introduces developers to the fundamentals of AI and shows how a data-driven approach can be applied to the climate/weather domain.
    • CFD Flow Prediction: This bootcamp introduces developers to the fundamentals of AI and shows how it can be applied to CFD (Computational Fluid Dynamics).
    • PINN: This bootcamp introduces developers to the fundamentals of Physics Informed Neural Networks and shows how they can be applied to different scientific domains using NVIDIA SimNet.
  • AI :: The bootcamp content focuses on using popular accelerated AI frameworks and optimization techniques to get maximum performance from accelerators such as GPUs.
    Labs:
    • Accelerated Intelligent Video Analytics: Learn how the NVIDIA DeepStream SDK can be used to create an optimized Intelligent Video Analytics (IVA) pipeline. Participants are exposed to the building blocks of an IVA pipeline, followed by a profiling exercise to identify hotspots in the pipeline and methods to optimize it for higher throughput.
    • Accelerated Data Science: Learn how the RAPIDS suite of open-source software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. Participants are exposed to libraries that can be easily integrated into a daily data science pipeline and accelerate computations for faster execution.
    • Distributed Deep Learning: This bootcamp introduces participants to the fundamentals of distributed deep learning and provides hands-on experience with methods that can be applied to deep learning models for faster model training.

System Requirements

Each lab contains Docker and Singularity definition files. Follow the README inside each lab for instructions on how to build the container and run the lab inside it; a typical workflow is sketched below.
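
The exact image names, file names, and ports differ per lab, so the following is only a minimal sketch of the usual container workflow; gpubootcamp-lab and lab.simg are illustrative placeholders, not names used by the repository.

```bash
# Build and run a lab with Docker (Jupyter is served on port 8888 by default).
# The image name "gpubootcamp-lab" is an illustrative placeholder.
docker build -t gpubootcamp-lab .
docker run --rm -it --gpus all -p 8888:8888 gpubootcamp-lab

# Build and run the same lab with Singularity; --nv enables NVIDIA GPU support.
# "lab.simg" and the definition file name "Singularity" are placeholders.
singularity build lab.simg Singularity
singularity run --nv lab.simg
```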

Contribution

  • The repository uses the Apache 2.0 license. For more details on the folder structure, developers may refer to the CONTRIBUTING.md file.
  • A project template for reference is located at Template.

Authors and Acknowledgment

See Contributors for a list of contributors to this bootcamp.

Feature Requests and Filing Issues

  • Bootcamp users may request new training material or report a bug by filing a GitHub issue.
  • Please go through the existing list of Issues for details on upcoming features and bugs currently being fixed.

General Troubleshooting

  • All materials are tested with the latest GPU architectures (V100, A100). Unless specified explicitly, most labs are expected to work even on older GPU architectures with less compute and memory capacity, such as those found in laptops, though observed performance will vary with the GPU used. If you see any issue using the material on another GPU, please file an issue on GitHub mentioning the GPU model and the installed CUDA driver version.
  • The materials are tested inside container environments such as Docker and Singularity. If a container environment is not available on the cluster, users can follow the steps in the Dockerfile and Singularity scripts and install the dependencies manually.
  • All bootcamps are Jupyter based, and by default the Dockerfile and Singularity scripts run the Jupyter notebook on port 8888. In a multi-tenancy environment, admins should explicitly map ports to individual users, otherwise port conflicts will result (see the sketch below). We recommend an interactive interface to remote computing resources, such as Open OnDemand or JupyterHub coupled with a scheduler (SLURM, Kubernetes, etc.), to handle this resource mapping automatically.
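
As an illustration only, assuming a Docker-based setup and the placeholder names from the sketch above, per-user port mapping can look like the following; the host ports and image names are hypothetical examples, not values used by the labs.

```bash
# Map a unique host port per user onto the container's default Jupyter port 8888
# ("gpubootcamp-lab" is a placeholder image name; host ports are arbitrary examples).
docker run --rm -it --gpus all -p 9001:8888 gpubootcamp-lab   # user A -> http://<host>:9001
docker run --rm -it --gpus all -p 9002:8888 gpubootcamp-lab   # user B -> http://<host>:9002

# Singularity has no port-mapping layer, so start Jupyter on a unique port per user
# (assumes the jupyter executable is available inside the image).
singularity exec --nv lab.simg jupyter notebook --no-browser --ip=0.0.0.0 --port=9001
```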

Join the OpenACC Community

Please join the OpenACC Slack Channel.