@@ -198,7 +198,8 @@
     "    5. [jsonfy and convert to mmap format](./jupyter_notebook/Lab1-5_jsonfy_and_process2mmap.ipynb)\n",
     "    6. [Megatron runs vs config](./jupyter_notebook/Lab1-6_Observe_GPT_runs_vs_performance.ipynb)\n",
     "\n",
-    "- **Outlines of Day 3**\n",
+    "\n",
+    "- **Outlines of Lab 2**\n",
     "    Getting started on training own language Megatron GPT models -- Please go through the below notebooks sequentially.\n",
     "    1. [Fetch and extract Swedish data](./jupyter_notebook/Megatron-LM/tools/openwebtext/Lab2-1_acquiring_data.ipynb)\n",
     "    2. [Find sentence boundary and deduplicate your data](./jupyter_notebook/Megatron-LM/tools/openwebtext/Lab2-2_SentenceBoundary_and_Deduplicate.ipynb)\n",
@@ -214,17 +215,17 @@
    "metadata": {},
    "source": [
     "### Tutorial Duration\n",
-    "The lab material will be presented in a 8 hr session. Link to material is available for download at the end of the gpubootcamp. \n",
+    "The lab material will be presented in a 12-hour session. A link to the material is available for download at the end of the gpubootcamp.\n",
     "\n",
     "### Content Level\n",
     "Intermediate , Advanced\n",
     "\n",
     "### Target Audience and Prerequisites\n",
-    "The target audience for this lab is researchers/graduate students and developers who are interested in learning about scaling their Deep learning systems to multiple GPUs to accelerate their scientific applications.\n",
+    "The target audience for this lab is researchers, graduate students, and developers who are interested in learning about training very large language models on a supercomputing cluster.\n",
     "\n",
-    "Basic understanding on Deep learning is required, If you are new to Deep learning , it is recommended to go through the [Distributed_Deep_Learning bootcamp](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/ai/Distributed_Deep_Learning/English/python) prior.\n",
+    "A basic understanding of Deep Learning and PyTorch is required. If you are new to Deep Learning or PyTorch, it is recommended to go through the [Distributed_Deep_Learning bootcamp](https://github.com/gpuhackathons-org/gpubootcamp/tree/master/ai/Distributed_Deep_Learning/English/python) and the [PyTorch tutorials](https://pytorch.org/tutorials/) beforehand.\n",
     " \n",
-    "**Disclaimer** : All the results mentioned in the notebooks were tested on a *DGX-1 machine equipped with 2 or 4 or 8 x Tesla V100 connected via NVLink*. The results would vary when using different hardware and would also depend on the Interconnect bandwidth and the thermal conditions of the machine."
+    "**Disclaimer**: All the results mentioned in the notebooks were tested on a *DGX-2 machine equipped with 2, 4, or 8 x A100 GPUs connected via NVLink*. The results may vary when using different hardware and will also depend on the interconnect bandwidth and the thermal conditions of the machine."
    ]
   },
   {