@@ -12,7 +12,7 @@
"\n",
"\n",
"* Standard: Python\n",
- "* Frameworks: Pytorch + Megatron \n",
+ "* Frameworks: Pytorch + Megatron-LM \n",
"\n",
"It is required to have more than one GPU for the bootcamp and we recommend using a [DGX](https://www.nvidia.com/en-in/data-center/dgx-systems/) like cluster with [NVLink / NVSwitch](https://www.nvidia.com/en-in/data-center/nvlink/) support.\n",
"\n",
@@ -267,7 +267,7 @@
"metadata": {},
"source": [
"### Tutorial Duration\n",
- "The lab material will be presented in a 6hr session. Link to material is available for download at the end of the lab with the **exception of the CC-100 Swedish preprocessed data used in the labs**, however, one can download CC-100 data on your own in [CC-100 webpage](http://data.statmt.org/cc-100/) for various langauges!\n",
+ "The lab material will be presented in an 8-hour session. A link to the material is available for download at the end of the bootcamp. \n",
"\n",
"### Content Level\n",
"Intermediate , Advanced\n",
@@ -275,7 +275,7 @@
"### Target Audience and Prerequisites\n",
"The target audience for this lab is researchers/graduate students and developers who are interested in learning about scaling their Deep learning systems to multiple GPUs to accelerate their scientific applications.\n",
"\n",
- "Basic understanding on Deep learning is required, If you are new to Deep learning , it is recommended to go through the [AI for Climate Bootcamp](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/hpc_ai/ai_science_climate) prior.\n",
+ "A basic understanding of Deep Learning is required. If you are new to Deep Learning, it is recommended to go through the [Distributed_Deep_Learning bootcamp](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/ai/Distributed_Deep_Learning/English/python) first.\n",
" \n",
"**Disclaimer** : All the results mentioned in the notebooks were tested on a *DGX-1 machine equipped with 2 or 4 or 8 x Tesla V100 connected via NVLink*. The results would vary when using different hardware and would also depend on the Interconnect bandwidth and the thermal conditions of the machine."
]