
Fixed Main README to point to new labs.

Bharat Kumar 3 years ago
commit 5a2fd586a8

+ 5 - 2
README.md

@@ -11,8 +11,10 @@ The bootcamp content focuses on how to follow the Analyze, Parallelize and Optim
 
 | Lab      | Description |
 | ----------- | ----------- |
-| [N-Ways](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/hpc/nways)      | This lab will cover multiple GPU programming models and choose the one that best fits your needs. The material supports different programming langauges including C ( CUDA C, OpenACC C, OpenMP C, C++ stdpar ),  Fortran ( CUDA Fortran, OpenACC Fortran, OpenMP Fortran, ISO DO CONCURRENT ) Python ( Numba, CuPy )       |
-| [OpenACC](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/hpc/openacc)   | The lab will cover how to write portable parallel program that can run on multicore CPUs and accelerators like GPUs and how to apply incremental parallelization strategies using OpenACC       |
+| [N-Ways](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/hpc/nways)      | This Bootcamp covers multiple GPU programming models and helps you choose the one that best fits your needs. The material supports different programming languages, including C (CUDA C, OpenACC C, OpenMP C, C++ stdpar), Fortran (CUDA Fortran, OpenACC Fortran, OpenMP Fortran, ISO DO CONCURRENT), and Python (Numba, CuPy) |
+| [OpenACC](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/hpc/openacc)   | The Bootcamp covers how to write portable parallel programs that can run on multicore CPUs and accelerators like GPUs, and how to apply incremental parallelization strategies using OpenACC |
+| [Multi GPU Programming Model](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/hpc/multi_gpu_nways)   | This bootcamp covers scaling applications to multiple GPUs across multiple nodes. Moreover, an understanding of the underlying technologies and communication topology will help participants utilize high-performance NVIDIA libraries to extract more performance out of the system |
+
 
 - [Convergence of HPC and AI](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/hpc_ai) :: 
 The bootcamp content focuses on how AI can accelerate HPC simulations by introducing concepts of Deep Neural Networks, including data pre-processing and techniques for building, comparing, and improving the accuracy of deep learning models. 
@@ -31,6 +33,7 @@ The bootcamp content focuses on using popular accelerated AI frameworks and usin
 | ----------- | ----------- |
 | [Accelerated Intelligent Video Analytics](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/ai/DeepStream) | Learn how the NVIDIA DeepStream SDK can be used to create an optimized Intelligent Video Analytics (IVA) pipeline. Participants will be exposed to the building blocks for creating an IVA pipeline, followed by a profiling exercise to identify hotspots in the pipeline and methods to optimize it for higher throughput |
 | [Accelerated Data Science](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/ai/RAPIDS)   | Learn how the RAPIDS suite of open-source software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. Participants will be exposed to libraries that can be easily integrated into the daily data science pipeline to accelerate computations for faster execution |
+| [Distributed Deep Learning](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/ai/Distributed_Deep_Learning)   | This bootcamp introduces participants to the fundamentals of distributed deep learning and gives hands-on experience with methods that can be applied to deep learning models for faster model training |
 
 # System Requirements
 Each lab contains Docker and Singularity definition files. Follow the README file inside each lab for instructions on how to build the container and run the labs inside it.
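
The System Requirements note above says each lab ships Docker and Singularity definition files. As a minimal sketch of the typical flow, assuming a lab directory containing a Dockerfile (the image tag and port mapping below are illustrative; the authoritative commands are in each lab's own README):

```bash
# Log in to NGC first if the lab's base image is hosted there
# (the username is literally "$oauthtoken"; your NGC API key is the password)
docker login nvcr.io

# Build the lab image from the lab's Dockerfile (the tag name is illustrative)
docker build -t gpubootcamp-lab:latest .

# Run with GPU access and expose Jupyter's default port
docker run --rm -it --gpus all -p 8888:8888 gpubootcamp-lab:latest
```

Singularity users would instead build from the lab's definition file with `singularity build` and run with the `--nv` flag to enable GPU access.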

+ 5 - 0
ai/Distributed_Deep_Learning/English/Presentations/README.md

@@ -0,0 +1,5 @@
+Partners who are interested in delivering the critical hands-on skills needed to advance science in the form of a Bootcamp can reach out to us via the [GPU Hackathon Partner](https://gpuhackathons.org/partners) website. In addition to the current bootcamp material, Partners will be provided with the following:
+
+- Presentation: All the Bootcamps are accompanied by training material presentations which can be used during the Bootcamp session.
+- Mini challenge: To test the knowledge gained during this Bootcamp, a mini application challenge is provided along with a sample solution.
+- Additional Support: On a case-by-case basis, Partners can also be trained on how to effectively deliver the Bootcamp with maximal impact.

+ 10 - 5
ai/Distributed_Deep_Learning/README.md

@@ -2,6 +2,11 @@
 
 This folder contains the contents for the Distributed Deep Learning bootcamp.
 
+- Introduction to distributed deep learning
+- Understanding system topology
+- Hands-on with distributed training (Horovod, TensorFlow); see the launch sketch at the end of this page
+- Techniques for faster convergence
+
 ## Prerequisites
 To run this tutorial you will need a machine with an NVIDIA GPU.
 
@@ -9,6 +14,9 @@ To run this tutorial you will need a machine with NVIDIA GPU.
 
 - The base containers required for the lab may require users to create an NGC account and generate an API key (https://docs.nvidia.com/ngc/ngc-catalog-user-guide/index.html#registering-activating-ngc-account)
 
+## Tutorial Duration
+The total bootcamp material would take approximately 5 hours (including solving the mini-challenge).
+
 ## Creating containers
 To start with, you will have to build a Docker or Singularity container.
 
@@ -47,12 +55,9 @@ Then, run the container:
 Then, open the Jupyter notebook in a browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.
 
-## Troubleshooting
+## Known Issues
 
 Q. "ResourceExhaustedError" error is observed while running the labs
 A. Currently the batch size and network model is set to consume 16GB GPU memory. In order to use the labs without any modifications it is recommended to have GPU with minimum 16GB GPU memory. Else the users can play with batch size to reduce the memory footprint
 
-
-## Questions?
-Please join [OpenACC Slack Channel](https://openacclang.slack.com/messages/openaccusergroup) for questions.
-
+- Please go through the list of existing bugs/issues or file a new issue at [GitHub](https://github.com/gpuhackathons-org/gpubootcamp/issues).
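
The hands-on bullet in this README covers distributed training with Horovod and TensorFlow. As a minimal launch sketch, assuming a training script named `train.py` that has already been instrumented with Horovod (the script name and GPU count are illustrative):

```bash
# Launch 4 training processes on the local machine, one per GPU
horovodrun -np 4 -H localhost:4 python train.py
```

If a "ResourceExhaustedError" appears on GPUs with less than 16 GB of memory, reducing the batch size inside the training script, as the Known Issues section above suggests, is the usual workaround.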