Tosin Akinwale Adesuyi 3 years ago
parent
commit
5552edc830
60 changed files with 206 additions and 137 deletions
  1. 1 1
      ai/DeepStream/Dockerfile
  2. 5 5
      ai/DeepStream/README.md
  3. 1 1
      ai/DeepStream_Perf_Lab/Dockerfile
  4. 7 7
      ai/DeepStream_Perf_Lab/README.md
  5. 2 2
      ai/RAPIDS/README.MD
  6. 6 3
      hpc/miniprofiler/Dockerfile
  7. 2 2
      hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab2.ipynb
  8. 2 2
      hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab3.ipynb
  9. 2 2
      hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab4.ipynb
  10. 3 3
      hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab5.ipynb
  11. 1 1
      hpc/miniprofiler/English/C/jupyter_notebook/profiling-c.ipynb
  12. 3 3
      hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran-lab2.ipynb
  13. 2 2
      hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran-lab3.ipynb
  14. 2 2
      hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran-lab4.ipynb
  15. 2 2
      hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran-lab5.ipynb
  16. 1 1
      hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran.ipynb
  17. 1 1
      hpc/miniprofiler/English/profiling_start.ipynb
  18. 6 6
      hpc/miniprofiler/README.md
  19. 4 1
      hpc/miniprofiler/Singularity
  20. 7 2
      hpc/nways/Dockerfile
  21. 6 2
      hpc/nways/Dockerfile_python
  22. 6 6
      hpc/nways/README.md
  23. 4 1
      hpc/nways/Singularity
  24. 4 1
      hpc/nways/Singularity_python
  25. 2 2
      hpc/nways/nways_labs/nways_MD/English/C/jupyter_notebook/cudac/nways_cuda.ipynb
  26. 6 5
      hpc/nways/nways_labs/nways_MD/English/C/jupyter_notebook/openacc/nways_openacc.ipynb
  27. 2 2
      hpc/nways/nways_labs/nways_MD/English/C/jupyter_notebook/openmp/nways_openmp.ipynb
  28. 2 2
      hpc/nways/nways_labs/nways_MD/English/C/jupyter_notebook/stdpar/nways_stdpar.ipynb
  29. 2 2
      hpc/nways/nways_labs/nways_MD/English/Fortran/jupyter_notebook/cudafortran/nways_cuda.ipynb
  30. 2 2
      hpc/nways/nways_labs/nways_MD/English/Fortran/jupyter_notebook/doconcurrent/nways_doconcurrent.ipynb
  31. 5 5
      hpc/nways/nways_labs/nways_MD/English/Fortran/jupyter_notebook/openacc/nways_openacc.ipynb
  32. 2 2
      hpc/nways/nways_labs/nways_MD/English/Fortran/jupyter_notebook/openmp/nways_openmp.ipynb
  33. 1 1
      hpc/nways/nways_labs/nways_MD/English/Python/jupyter_notebook/Final_Remarks.ipynb
  34. 2 1
      hpc/nways/nways_labs/nways_MD/English/Python/jupyter_notebook/cupy/cupy_RDF.ipynb
  35. 3 2
      hpc/nways/nways_labs/nways_MD/English/Python/jupyter_notebook/cupy/serial_RDF.ipynb
  36. 2 2
      hpc/nways/nways_labs/nways_MD/English/Python/jupyter_notebook/numba/serial_RDF.ipynb
  37. 17 5
      hpc/nways/nways_labs/nways_MD/English/nways_MD_start_python.ipynb
  38. 1 3
      hpc/nways/nways_labs/nways_start.ipynb
  39. 7 2
      hpc/openacc/Dockerfile
  40. Binary
      hpc/openacc/English/C/jupyter_notebook/images/pgprof1.png
  41. 2 2
      hpc/openacc/English/C/jupyter_notebook/openacc_c_lab1.ipynb
  42. 3 3
      hpc/openacc/English/C/jupyter_notebook/openacc_c_lab2.ipynb
  43. 3 3
      hpc/openacc/English/C/jupyter_notebook/openacc_c_lab3.ipynb
  44. Binary
      hpc/openacc/English/Fortran/jupyter_notebook/images/pgprof1.png
  45. 2 2
      hpc/openacc/English/Fortran/jupyter_notebook/openacc_fortran_lab1.ipynb
  46. 3 3
      hpc/openacc/English/Fortran/jupyter_notebook/openacc_fortran_lab2.ipynb
  47. 3 3
      hpc/openacc/English/Fortran/jupyter_notebook/openacc_fortran_lab3.ipynb
  48. 1 1
      hpc/openacc/English/Lab1.ipynb
  49. 1 1
      hpc/openacc/English/Lab2.ipynb
  50. 1 1
      hpc/openacc/English/Lab3.ipynb
  51. 1 1
      hpc/openacc/English/openacc_start.ipynb
  52. 6 6
      hpc/openacc/README.md
  53. 7 1
      hpc/openacc/Singularity
  54. 7 1
      hpc_ai/ai_science_cfd/Dockerfile
  55. 1 1
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/Resnets.ipynb
  56. 5 5
      hpc_ai/ai_science_cfd/README.MD
  57. 6 0
      hpc_ai/ai_science_cfd/Singularity
  58. 7 1
      hpc_ai/ai_science_climate/Dockerfile
  59. 5 5
      hpc_ai/ai_science_climate/README.MD
  60. 6 0
      hpc_ai/ai_science_climate/Singularity

+ 1 - 1
ai/DeepStream/Dockerfile

@@ -11,4 +11,4 @@ WORKDIR /opt/nvidia/deepstream/deepstream-5.0
 RUN pip3 install jupyterlab
 COPY English /opt/nvidia/deepstream/deepstream-5.0

-CMD jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8889 --NotebookApp.token="" --notebook-dir=/opt/nvidia/deepstream/deepstream-5.0/python
+CMD jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8889 --NotebookApp.token="" --notebook-dir=/opt/nvidia/deepstream/deepstream-5.0/python

+ 5 - 5
ai/DeepStream/README.md

@@ -36,10 +36,10 @@ For instance:
 and to run the container, run:
 `sudo docker run --rm -it --gpus=all --network=host -p 8888:8888 myimage:1.0`

-Once inside the container launch the jupyter notebook by typing the following command
-`jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/opt/nvidia/deepstream/deepstream-5.0/python`
+Once inside the container launch the jupyter lab by typing the following command
+`jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/opt/nvidia/deepstream/deepstream-5.0/python`

-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.

 ### Singularity Container
@@ -52,9 +52,9 @@ and copy the files to your local machine to make sure changes are stored locally


 Then, run the container:
-`singularity run --nv --writable <image_name>.simg jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=~/workspace/python`
+`singularity run --nv --writable <image_name>.simg jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=~/workspace/python`

-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.

 ## Known issues

+ 1 - 1
ai/DeepStream_Perf_Lab/Dockerfile

@@ -34,5 +34,5 @@ RUN ls -l
 RUN unzip deepstream_dataset.zip
 WORKDIR /opt/nvidia/deepstream/deepstream-5.0
 ## Uncomment this line to run Jupyter notebook by default
-CMD jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8889 --NotebookApp.token="" --notebook-dir=/opt/nvidia/deepstream/deepstream-5.0/python
+CMD jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8889 --NotebookApp.token="" --notebook-dir=/opt/nvidia/deepstream/deepstream-5.0/python


+ 7 - 7
ai/DeepStream_Perf_Lab/README.md

@@ -37,13 +37,13 @@ For instance:
 and to run the container, run:
 `sudo docker run --rm -it --gpus=all --network=host -p 8888:8888 myimage:1.0`

-The container launches jupyter notebook and runs on port 8888
-`jupyter notebook --ip 0.0.0.0 --port 8888 --no-browser --allow-root`
+The container launches jupyter lab and runs on port 8888
+`jupyter-lab --ip 0.0.0.0 --port 8888 --no-browser --allow-root`

-Once inside the container launch the jupyter notebook by typing the following command
-`jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/opt/nvidia/deepstream/deepstream-5.0/python`
+Once inside the container launch the jupyter lab by typing the following command
+`jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/opt/nvidia/deepstream/deepstream-5.0/python`

-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.

 ### Singularity Container
@@ -56,9 +56,9 @@ and copy the files to your local machine to make sure changes are stored locally


 Then, run the container:
-`singularity run --nv --writable <image_name>.simg jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=~/workspace/python`
+`singularity run --nv --writable <image_name>.simg jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=~/workspace/python`

-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.

 ## Known issues

+ 2 - 2
ai/RAPIDS/README.MD

@@ -35,7 +35,7 @@ For instance:
 and to run the container, run:
 `sudo docker run --rm -it --gpus=all --network=host -p 8888:8888 myimage:1.0`

-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.

 ### Singularity Container
@@ -49,7 +49,7 @@ and copy the files to your local machine to make sure changes are stored locally
 Then, run the container:
 `singularity run --nv --writable <image_name>.simg /opt/conda/envs/rapids/bin/jupyter lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/workspace/jupyter_notebook`

-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.

 ## Troubleshooting

+ 6 - 3
hpc/miniprofiler/Dockerfile

@@ -18,8 +18,11 @@ RUN apt-get update -y && \
 RUN apt-get update 
 RUN apt-get install --no-install-recommends -y python3
 RUN pip3 install --upgrade pip
-
-RUN pip3 install --no-cache-dir jupyter
+RUN apt-get update -y        
+RUN apt-get install -y git nvidia-modprobe
+RUN pip3 install jupyterlab
+# Install required python packages
+RUN pip3 install ipywidgets
 RUN pip3 install netcdf4

 # NVIDIA nsight-systems-2020.2.1 
@@ -49,4 +52,4 @@ ENV PATH="/opt/nvidia/nsight-systems/2020.2.1/bin:$PATH"

 ADD English/ /labs
 WORKDIR /labs
-CMD service nginx start && jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/labs
+CMD jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/labs

+ 2 - 2
hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab2.ipynb

@@ -37,7 +37,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "From the top menu, click on *File*, and *Open* `miniWeather_openacc.cpp` and `Makefile` from the current directory at `C/source_code/lab2` directory and inspect the code before running below cells. We have already added OpenACC compute directives (`#pragma acc parallel`) around the expensive routines (loops) in the code.\n",
+    "Click on the <b>[miniWeather_openacc.cpp](../source_code/lab2/miniWeather_openacc.cpp)</b> and <b>[Makefile](../source_code/lab2/Makefile)</b> and inspect the code before running below cells. We have already added OpenACC compute directives (`#pragma acc parallel`) around the expensive routines (loops) in the code.\n",
    "\n",
    "Once done, compile the code with `make`. View the PGI compiler feedback (enabled by adding `-Minfo=accel` flag) and investigate the compiler feedback for the OpenACC code. The compiler feedback provides useful information about applied optimizations."
   ]
@@ -176,7 +176,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 2
hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab3.ipynb

@@ -85,7 +85,7 @@
    "\n",
    "Now, add `collapse` clause to the code and make necessary changes to the loop directives. Once done, save the file, re-compile via `make`, and profile it again. \n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `miniWeather_openacc.cpp` and `Makefile` from the current directory at `C/source_code/lab3` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Click on the <b>[miniWeather_openacc.cpp](../source_code/lab3/miniWeather_openacc.cpp)</b> and <b>[Makefile](../source_code/lab3/Makefile)</b> links and modify `miniWeather_openacc.cpp` and `Makefile`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -241,7 +241,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 2
hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab4.ipynb

@@ -54,7 +54,7 @@
    "\n",
    "Now, add `data` directives to the code, save the file, re-compile via `make`, and profile it again.\n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `miniWeather_openacc.cpp` and `Makefile` from the current directory at `C/source_code/lab4` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Click on the <b>[miniWeather_openacc.cpp](../source_code/lab4/miniWeather_openacc.cpp)</b> and <b>[Makefile](../source_code/lab4/Makefile)</b> links and modify `miniWeather_openacc.cpp` and `Makefile`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -179,7 +179,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

File diff too large to display
+ 3 - 3
hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab5.ipynb


+ 1 - 1
hpc/miniprofiler/English/C/jupyter_notebook/profiling-c.ipynb

@@ -219,7 +219,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 3 - 3
hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran-lab2.ipynb

@@ -37,7 +37,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "From the top menu, click on *File*, and *Open* `miniWeather_openacc.f90` and `Makefile` from the current directory at `Fortran/source_code/lab2` directory and inspect the code before running below cells.We have already added OpenACC compute directives (`!$acc parallel loop`) around the expensive routines (loops) in the code.\n",
+    "Click on the <b>[miniWeather_openacc.f90](../source_code/lab2/miniWeather_openacc.f90)</b> and <b>[Makefile](../source_code/lab2/Makefile)</b> and inspect the code before running below cells. We have already added OpenACC compute directives (`!$acc parallel loop`) around the expensive routines (loops) in the code.\n",
    "\n",
    "Once done, compile the code with `make`. View the PGI compiler feedback (enabled by adding `-Minfo=accel` flag) and investigate the compiler feedback for the OpenACC code. The compiler feedback provides useful information about applied optimizations."
   ]
@@ -172,7 +172,7 @@
    "\n",
    "## Licensing \n",
    "\n",
-    "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0). "
+    "This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0). "
   ]
  }
 ],
@@ -193,7 +193,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 2
hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran-lab3.ipynb

@@ -79,7 +79,7 @@
    "\n",
    "Now, add `collapse` clause to the code and make necessary changes to the loop directives. Once done, save the file, re-compile via `make`, and profile it again. \n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `miniWeather_openacc.f90` and `Makefile` from the current directory at `Fortran/source_code/lab3` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Click on the <b>[miniWeather_openacc.f90](../source_code/lab3/miniWeather_openacc.f90)</b> and <b>[Makefile](../source_code/lab3/Makefile)</b> links and modify `miniWeather_openacc.f90` and `Makefile`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -229,7 +229,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 2
hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran-lab4.ipynb

@@ -55,7 +55,7 @@
    "Now, add `data` directives to the code, save the file, re-compile via `make`, and profile it again.\n",
    "\n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `miniWeather_openacc.f90` and `Makefile` from the current directory at `Fortran/source_code/lab4` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Click on the <b>[miniWeather_openacc.f90](../source_code/lab4/miniWeather_openacc.f90)</b> and <b>[Makefile](../source_code/lab4/Makefile)</b> links and modify `miniWeather_openacc.f90` and `Makefile`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -180,7 +180,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

File diff too large to display
+ 2 - 2
hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran-lab5.ipynb


+ 1 - 1
hpc/miniprofiler/English/Fortran/jupyter_notebook/profiling-fortran.ipynb

@@ -230,7 +230,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 1 - 1
hpc/miniprofiler/English/profiling_start.ipynb

@@ -93,7 +93,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

File diff too large to display
+ 6 - 6
hpc/miniprofiler/README.md


+ 4 - 1
hpc/miniprofiler/Singularity

@@ -22,7 +22,10 @@ FROM: nvcr.io/nvidia/nvhpc:20.9-devel-ubuntu20.04
    rm -rf /var/lib/apt/cache/* 

    pip3 install --upgrade pip
-    pip3 install --no-cache-dir jupyter
+    apt-get update -y
+    apt-get -y install git nvidia-modprobe
+    pip3 install jupyterlab
+    pip3 install ipywidgets
    pip3 install jupyter netcdf4

    apt-get install --no-install-recommends -y build-essential 

+ 7 - 2
hpc/nways/Dockerfile

@@ -11,8 +11,13 @@ RUN apt-get -y update && \
        DEBIAN_FRONTEND=noninteractive apt-get -yq install --no-install-recommends python3-pip python3-setuptools nginx zip make build-essential libtbb-dev && \
        rm -rf /var/lib/apt/lists/* && \
        pip3 install --upgrade pip &&\
-        pip3 install --no-cache-dir jupyter &&\
        pip3 install gdown
+        
+RUN apt-get update -y        
+RUN apt-get install -y git nvidia-modprobe
+RUN pip3 install jupyterlab
+# Install required python packages
+RUN pip3 install ipywidgets

 ############################################
 # NVIDIA nsight-systems-2020.5.1 ,nsight-compute-2
@@ -39,4 +44,4 @@ ENV PATH="/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/include:/usr/local/bin

 ADD nways_labs/ /labs
 WORKDIR /labs
-CMD service nginx start && jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/labs
+CMD jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/labs

+ 6 - 2
hpc/nways/Dockerfile_python

@@ -17,7 +17,11 @@ RUN apt-get -y update && \

 RUN pip3 install --no-cache-dir -U install setuptools pip
 RUN pip3 install gdown
-RUN pip3 install --no-cache-dir jupyter
+RUN apt-get update -y
+RUN apt-get install -y git nvidia-modprobe
+# Install required python packages
+RUN pip3 install jupyterlab
+RUN pip3 install ipywidgets
 RUN pip3 install --no-cache-dir "cupy-cuda112==8.6.0" \
     numba numpy scipy 
       
@@ -51,4 +55,4 @@ RUN pip3 install --no-cache-dir MDAnalysis

 ADD nways_labs/ /labs
 WORKDIR /labs
-CMD service nginx start && jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/labs
+CMD jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/labs

File diff too large to display
+ 6 - 6
hpc/nways/README.md


+ 4 - 1
hpc/nways/Singularity

@@ -23,8 +23,11 @@ FROM: nvcr.io/nvidia/nvhpc:21.3-devel-cuda_multi-ubuntu20.04
    rm -rf /var/lib/apt/cache/* 

    pip3 install --upgrade pip
-    pip3 install --no-cache-dir jupyter
    pip3 install gdown
+    apt-get update -y
+    apt-get -y install git nvidia-modprobe
+    pip3 install jupyterlab
+    pip3 install ipywidgets

    apt-get install --no-install-recommends -y build-essential 


+ 4 - 1
hpc/nways/Singularity_python

@@ -23,7 +23,10 @@ FROM:  nvidia/cuda:11.2.2-devel-ubuntu20.04

    pip3 install --no-cache-dir -U install setuptools pip
    pip3 install gdown
-    pip3 install --no-cache-dir jupyter
+    apt-get update -y
+    apt-get -y install git nvidia-modprobe
+    pip3 install jupyterlab
+    pip3 install ipywidgets
    pip3 install --no-cache-dir "cupy-cuda112==8.6.0" \
    numba numpy scipy
    pip3 install --upgrade MDAnalysis

+ 2 - 2
hpc/nways/nways_labs/nways_MD/English/C/jupyter_notebook/cudac/nways_cuda.ipynb

@@ -263,7 +263,7 @@
    "## Compile and Run for NVIDIA GPU\n",
    "Now, lets start modifying the original code and add CUDA C constructs. You can either explicitly transfer the allocated data between CPU and GPU or use unified memory which creates a pool of managed memory that is shared between the CPU and GPU.\n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `rdf.cpp` and `dcdread.h` from the current directory at `C/source_code/cudac` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Click on the <b>[rdf.cu](../../source_code/cudac/rdf.cu)</b> and <b>[dcdread.h](../../source_code/cudac/dcdread.h)</b> links and modify `rdf.cu` and `dcdread.h`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -418,7 +418,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 6 - 5
hpc/nways/nways_labs/nways_MD/English/C/jupyter_notebook/openacc/nways_openacc.ipynb

@@ -119,7 +119,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Now, lets start modifying the original code and add the OpenACC directives. From the top menu, click on *File*, and *Open* `rdf.cpp` and `dcdread.h` from the current directory at `C/source_code/openacc` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add the OpenACC directives. Click on the <b>[rdf.cpp](../../source_code/openacc/rdf.cpp)</b> and <b>[dcdread.h](../../source_code/openacc/dcdread.h)</b> links, and modify `rdf.cpp` and `dcdread.h`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -352,7 +352,7 @@
    "    }\n",
    "} \n",
    "```\n",
-    "Now, lets start modifying the original code and add the OpenACC kernel directive. From the top menu, click on *File*, and *Open* `rdf.cpp` and `dcdread.h` from the current directory at `C/source_code/openacc` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add the OpenACC directives. Click on the <b>[rdf.cpp](../../source_code/openacc/rdf.cpp)</b> and <b>[dcdread.h](../../source_code/openacc/dcdread.h)</b> links, and modify `rdf.cpp` and `dcdread.h`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -414,7 +414,8 @@
    "    }\n",
    "} \n",
    "```\n",
-    "Now, lets start modifying the original code and add the OpenACC kernel directive. From the top menu, click on *File*, and *Open* `rdf.cpp` and `dcdread.h` from the current directory at `C/source_code/openacc` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "\n",
+    "Now, lets start modifying the original code and add the OpenACC directives. Click on the <b>[rdf.cpp](../../source_code/openacc/rdf.cpp)</b> and <b>[dcdread.h](../../source_code/openacc/dcdread.h)</b> links, and modify `rdf.cpp` and `dcdread.h`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -539,7 +540,7 @@
    "Let us try adding a data clause to our code and observe any performance differences between the two. \n",
    "**Note: We have removed the managed clause in order to handle data management explicitly.**\n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `rdf.cpp`, `dcdread.h` and `Makefile` from the current directory at `C/source_code/openacc` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add the OpenACC directives. Click on the <b>[rdf.cpp](../../source_code/openacc/rdf.cpp)</b>, <b>[dcdread.h](../../source_code/openacc/dcdread.h)</b> and <b>[Makefile](../../source_code/openacc/Makefile)</b> links, and modify `rdf.cpp`, `dcdread.h` and `Makefile`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -690,7 +691,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 2
hpc/nways/nways_labs/nways_MD/English/C/jupyter_notebook/openmp/nways_openmp.ipynb

@@ -245,7 +245,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Now, lets start modifying the original code and add the OpenMP directives. From the top menu, click on *File*, and *Open* `rdf.cpp` and `dcdread.h` from the current directory at `C/source_code/openmp` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add the OpenMP directives. Click on the <b>[rdf.cpp](../../source_code/openmp/rdf.cpp)</b> and <b>[dcdread.h](../../source_code/openmp/dcdread.h)</b> links, and modify `rdf.cpp` and `dcdread.h`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -521,7 +521,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 2
hpc/nways/nways_labs/nways_MD/English/C/jupyter_notebook/stdpar/nways_stdpar.ipynb

@@ -137,7 +137,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Now, lets start modifying the original code and add stdpar. From the top menu, click on *File*, and *Open* `rdf.cpp` and `dcdread.h` from the current directory at `C/source_code/stdpar` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add stdpar. Click on the <b>[rdf.cpp](../../source_code/stdpar/rdf.cpp)</b> and <b>[dcdread.h](../../source_code/stdpar/dcdread.h)</b> links, and modify `rdf.cpp` and `dcdread.h`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -381,7 +381,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 2
hpc/nways/nways_labs/nways_MD/English/Fortran/jupyter_notebook/cudafortran/nways_cuda.ipynb

@@ -363,7 +363,7 @@
    "## Compile and Run for NVIDIA GPU\n",
    "Now, lets start modifying the original code and add CUDA C constructs. You can either explicitly transfer the allocated data between CPU and GPU or use unified memory which creates a pool of managed memory that is shared between the CPU and GPU.\n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `rdf.f90` from the current directory at `Fortran/source_code/cudafortran` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Click on the <b>[rdf.f90](../../source_code/cudafortran/rdf.f90)</b> link and modify `rdf.f90`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -514,7 +514,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 2
hpc/nways/nways_labs/nways_MD/English/Fortran/jupyter_notebook/doconcurrent/nways_doconcurrent.ipynb

@@ -89,7 +89,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Now, lets start modifying the original code and add DO-CONCURRENT. From the top menu, click on *File*, and *Open* `rdf.f90`  from the current directory at `Fortran/source_code/doconcurrent` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add DO-CONCURRENT.  Click on the <b>[rdf.f90](../../source_code/doconcurrent/rdf.f90)</b> link and modify `rdf.f90`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -343,7 +343,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 5 - 5
hpc/nways/nways_labs/nways_MD/English/Fortran/jupyter_notebook/openacc/nways_openacc.ipynb

@@ -112,7 +112,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Now, lets start modifying the original code and add the OpenACC directives. From the top menu, click on *File*, and *Open* `rdf.f90` from the current directory at `Fortran/source_code/openacc` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add the OpenACC directives. Click on the <b>[rdf.f90](../../source_code/openacc/rdf.f90)</b> link and modify `rdf.f90`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -354,7 +354,7 @@
    "        < loop code >\n",
    "    enddo\n",
    "```\n",
-    "Now, lets start modifying the original code and add the OpenACC kernel directive. From the top menu, click on *File*, and *Open* `rdf.f90` from the current directory at `Fortran/source_code/openacc` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add the OpenACC directives. Click on the <b>[rdf.f90](../../source_code/openacc/rdf.f90)</b> link and modify `rdf.f90`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -425,7 +425,7 @@
    "       end do\n",
    "    enddo\n",
    "```\n",
-    "Now, lets start modifying the original code and add the OpenACC kernel directive. From the top menu, click on *File*, and *Open* `rdf.f90` from the current directory at `Fortran/source_code/openacc` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add the OpenACC kernel directive. Click on the <b>[rdf.f90](../../source_code/openacc/rdf.f90)</b> link and modify `rdf.f90`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -566,7 +566,7 @@
    "Let us try adding a data clause to our code and observe any performance differences between the two. \n",
    "**Note: We have removed the managed clause in order to handle data management explicitly.**\n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `rdf.f90` from the current directory at `Fortran/source_code/openacc` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Click on the <b>[rdf.f90](../../source_code/openacc/rdf.f90)</b> link and modify `rdf.f90`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -726,7 +726,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 2
hpc/nways/nways_labs/nways_MD/English/Fortran/jupyter_notebook/openmp/nways_openmp.ipynb

@@ -235,7 +235,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Now, lets start modifying the original code and add the OpenMP directives. From the top menu, click on *File*, and *Open* `rdf.f90` from the current directory at `Fortran/source_code/openmp` directory. Remember to **SAVE** your code after changes, before running below cells."
+    "Now, lets start modifying the original code and add the OpenMP directives. Click on the <b>[rdf.f90](../../source_code/openmp/rdf.f90)</b> link and modify `rdf.f90`. Remember to **SAVE** your code after changes, before running below cells."
   ]
  },
  {
@@ -516,7 +516,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 1 - 1
hpc/nways/nways_labs/nways_MD/English/Python/jupyter_notebook/Final_Remarks.ipynb

@@ -109,7 +109,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 2 - 1
hpc/nways/nways_labs/nways_MD/English/Python/jupyter_notebook/cupy/cupy_RDF.ipynb

@@ -174,7 +174,8 @@
    "    d_x = cp.asarray(d_x)\n",
    "    d_y = cp.asarray(d_y)\n",
    "    d_z = cp.asarray(d_z)\n",
-    "    d_g2 = cp.zeros(sizebin, dtype=cp.int64)\n",
+    "    d_g2 = np.zeros(sizebin, dtype=np.int64)\n",
+    "    d_g2 = cp.asarray(d_g2)\n",
    "\n",
    "    ############################## RAW KERNEL #################################################\n",
    "    nthreads = 128;\n",

+ 3 - 2
hpc/nways/nways_labs/nways_MD/English/Python/jupyter_notebook/cupy/serial_RDF.ipynb

@@ -282,8 +282,9 @@
    "                main()\n",
    "    ```\n",
    "2. **Now, let's start modifying the original code to CuPy code constructs.**\n",
-    "> From the top menu, click on File, and Open **nways_serial.py** from the current directory at **Python/source_code/cupy** directory. Remember to SAVE your code after changes, and then run the cell below. \n",
-    "> Hints: focus on the **pair_gpu** function and you may need to modify few lines in the **main** function as well."
+    "> Click on the <b>[Modify](../../source_code/serial/nways_serial.py)</b>  link, and modify **nways_serial.py** to `CuPy code construct`. Remember to SAVE your code after changes, and then run the cell below. \n",
+    "> Hints: focus on the **pair_gpu** function and you may need to modify few lines in the **main** function as well.\n",
+    "---"
   ]
  },
  {
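The two CuPy hunks above touch the same idea: the first swaps `cp.zeros` for a host-side `np.zeros` followed by `cp.asarray`, and the second asks learners to rewrite `nways_serial.py` with CuPy constructs. Purely as a hedged, standalone sketch of those constructs (not the lab's solution; the array sizes and names below are invented for illustration):

```python
# Hedged sketch, not part of the commit: host<->device movement with CuPy.
import numpy as np
import cupy as cp

sizebin = 2000                      # hypothetical bin count
x = np.random.rand(10_000)          # hypothetical host data

d_x = cp.asarray(x)                                        # host -> device copy
d_g2_direct = cp.zeros(sizebin, dtype=cp.int64)            # allocate directly on the GPU
d_g2_staged = cp.asarray(np.zeros(sizebin, dtype=np.int64))  # allocate on host, then copy (the committed style)

d_r = cp.sqrt(d_x * d_x)            # elementwise work stays on the device
result = cp.asnumpy(d_r)            # device -> host when finished

assert d_g2_direct.dtype == d_g2_staged.dtype and d_g2_direct.shape == d_g2_staged.shape
```

Both allocation styles yield an equivalent `int64` device array for the histogram; the diff simply changes which side does the initial allocation.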

+ 2 - 2
hpc/nways/nways_labs/nways_MD/English/Python/jupyter_notebook/numba/serial_RDF.ipynb

@@ -283,8 +283,8 @@
    "                main()\n",
    "    ```\n",
    "2. **Now, lets start modifying the original code to Numba code constructs.**\n",
-    "> From the top menu, click on File, and Open **nways_serial.py** from the current directory at **Python/source_code/numba** directory. Remember to SAVE your code after changes, and then run the cell below. \n",
-    "> Hints: focus on the **pair_gpu** function and you may need to modify few lines in the **main** function as well."
+    "> Click on the <b>[Modify](../../source_code/serial/nways_serial.py)</b>  link, and modify **nways_serial.py** to `Numba code construct`. Remember to SAVE your code after changes, and then run the cell below. \n",
+    "> Hints: focus on the **pair_gpu** function and you may need to modify few lines in the **main** function as well.\n"
   ]
  },
  {
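The Numba notebook gives the same kind of hint for `pair_gpu`. As a rough illustration of what a Numba CUDA kernel and its launch generally look like (a hedged sketch with invented names and sizes, not the lab's `pair_gpu` solution):

```python
# Hedged sketch only: a generic Numba CUDA kernel skeleton, not the lab's pair_gpu.
from numba import cuda
import numpy as np

@cuda.jit
def scale_kernel(arr, factor):
    i = cuda.grid(1)                 # global 1D thread index
    if i < arr.size:                 # guard against out-of-range threads
        arr[i] *= factor

data = np.arange(1024, dtype=np.float64)
d_data = cuda.to_device(data)                    # explicit host -> device copy
threads = 128
blocks = (data.size + threads - 1) // threads
scale_kernel[blocks, threads](d_data, 2.0)       # launch configuration: [blocks, threads]
result = d_data.copy_to_host()                   # device -> host copy of the result
```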

+ 17 - 5
hpc/nways/nways_labs/nways_MD/English/nways_MD_start_python.ipynb

@@ -34,14 +34,26 @@
    "\n",
    " Throughout the tutorial we will be following the Analysis - Parallelization - Optimization cycle. Let us begin by understanding the NVIDIA Nsight System tool ecosystem:   \n",
    "\n",
-    "- [Introduction to Profiling](../../profiler/English/jupyter_notebook/profiling.ipynb)\n",
+    "- [Nsight Systems](../../profiler/English/jupyter_notebook/nsight_systems.ipynb)\n",
    "    - Overview of Nsight profiler tools\n",
-    "    - Introduction to NVIDIA Nsight Systems\n",
-    "    - How to use NVIDIA Tools Extension SDK (NVTX) \n",
-    "    - Introduction to NVIDIA Nsight Compute\n",
+    "    - Introduction to Nsight Systems\n",
+    "    - How to view the report\n",
+    "    - How to use NVTX APIs\n",
    "    - Optimization Steps to parallel programming \n",
    "    \n",
-    "We will be working on porting a radial distribution function (RDF) to GPUs. Please choose one approach within the Python programming language to proceed working on RDF. \n",
+    "- [Nsight Compute](../../profiler/English/jupyter_notebook/nsight_compute.ipynb)\n",
+    "    - Introduction to Nsight Compute\n",
+    "    - Overview of sections\n",
+    "    - Roofline Charts\n",
+    "    - Memory Charts\n",
+    "    - Profiling a kernel using CLI\n",
+    "    - How to view the report\n",
+    "\n",
+    "Note: Learn about all terminologies used throught the notebooks in the [GPU Architecture Terminologies](C/jupyter_notebook/GPU_Architecture_Terminologies.ipynb) notebook.\n",
+    "\n",
+    "\n",
+    "We will be working on porting a radial distribution function (RDF) to GPUs. Please choose one approach within the Python programming language to proceed working on RDF.  \n",
+    "    \n",
    "\n",
    "\n",
    "#### Python Programming Language\n",

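The reworked outline above adds a bullet on "How to use NVTX APIs". Purely as a hedged illustration (it assumes the `nvtx` Python package, which this commit does not itself install), annotating a region so that it appears as a named range in the Nsight Systems timeline might look like this:

```python
# Hedged sketch: NVTX ranges via the `nvtx` pip package (an assumption, not part of this diff).
import nvtx

@nvtx.annotate("pair_calculation", color="blue")    # decorator form: one range per call
def pair_calculation(n):
    return sum(i * i for i in range(n))

with nvtx.annotate("main_loop", color="green"):      # context-manager form: one enclosing range
    for _ in range(3):
        pair_calculation(100_000)
```

Running such a script under `nsys profile python script.py` would then typically show `main_loop` and `pair_calculation` as named ranges on the NVTX row of the timeline.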
+ 1 - 3
hpc/nways/nways_labs/nways_start.ipynb

@@ -41,10 +41,8 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-
    "### Bootcamp Duration\n",
    "The lab material will be presented in an 8-hour session. A Link to the material is available for download at the end of the lab.\n",
-
    "\n",
    "### Content Level\n",
    "Beginner, Intermediate\n",
@@ -78,7 +76,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

+ 7 - 2
hpc/openacc/Dockerfile

@@ -9,9 +9,14 @@ FROM nvcr.io/nvidia/nvhpc:20.9-devel-ubuntu20.04
 RUN apt-get -y update && \
     DEBIAN_FRONTEND=noninteractive apt-get -yq install --no-install-recommends python3-pip python3-setuptools nginx zip build-essential && \
     rm -rf /var/lib/apt/lists/* && \
-    pip3 install --no-cache-dir jupyter
+    pip3 install --upgrade pip

+RUN apt-get update -y        
+RUN apt-get install -y git nvidia-modprobe
+RUN pip3 install jupyterlab
+# Install required python packages
+RUN pip3 install ipywidgets

 ADD English/ /labs
 WORKDIR /labs
-CMD service nginx start && jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/labs
+CMD jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/labs

Binary
hpc/openacc/English/C/jupyter_notebook/images/pgprof1.png


+ 2 - 2
hpc/openacc/English/C/jupyter_notebook/openacc_c_lab1.ipynb

@@ -351,7 +351,7 @@
   "source": [
    "### Parallelize the Example Code\n",
    "\n",
-    "At this point you have all of the tools you need to begin accelerating your application. The loops you will be parallelizing are in `laplace2d.c`. From the top menu, click on *File*, and *Open* `laplace2d.c` from the current directory at `C/source_code/lab1` directory. Remember to **SAVE** your code after changes, before running below cells.\n",
+    "At this point you have all of the tools you need to begin accelerating your application. The loops you will be parallelizing are in `laplace2d.c`. Click on the <b>[laplace2d.c](../source_code/lab1/laplace2d.c)</b> link and modify `laplace2d.c`. Remember to **SAVE** your code after changes, before running below cells.\n",
    "\n",
    "It is advisable to start with the `calcNext` routine and test your changes by compiling and running the code before moving on to the `swap` routine. OpenACC can be incrementally added to your application so that you can ensure each change is correct before getting too far along, which greatly simplifies debugging.\n",
    "\n",
@@ -540,7 +540,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

File diff too large to display
+ 3 - 3
hpc/openacc/English/C/jupyter_notebook/openacc_c_lab2.ipynb


+ 3 - 3
hpc/openacc/English/C/jupyter_notebook/openacc_c_lab3.ipynb

@@ -158,7 +158,7 @@
   "source": [
    "#### Implementing the Collapse Clause\n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `laplace2d.c` from the current directory at `C/source_code/lab3` directory. Use the **collapse clause** to collapse our multi-dimensional loops into a single dimensional loop.\n",
+    "Click on the <b>[laplace2d.c](../source_code/lab3/laplace2d.c)</b> link and modify `laplace2d.c`. Use the **collapse clause** to collapse our multi-dimensional loops into a single dimensional loop. Remember to **SAVE** your code after changes, before running below cells.\n",
    "\n",
    "Remember to **SAVE** your code after changes, before running below cells.\n",
    "\n",
@@ -281,7 +281,7 @@
   "source": [
    "#### Implementing the Tile Clause\n",
    "\n",
-    "From the top menu, click on *File*, and *Open* `laplace2d.c` from the current directory at `C/source_code/lab3` directory. Replace the `collapse` clause with the `tile` clause to break our multi-dimensional loops into smaller tiles. Try using a variety of different tile sizes, but for now keep one of the dimensions as a **multiple of 32**. We will talk later about why this is important.\n",
+    "Click on the <b>[laplace2d.c](../source_code/lab3/laplace2d.c)</b> link and modify `laplace2d.c`. Replace the `collapse` clause with the `tile` clause to break our multi-dimensional loops into smaller tiles. Try using a variety of different tile sizes, but for now keep one of the dimensions as a **multiple of 32**. We will talk later about why this is important.\n",
    "\n",
    "Remember to **SAVE** your code after changes, before running below cells.\n",
    "\n",
@@ -408,7 +408,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

Binary
hpc/openacc/English/Fortran/jupyter_notebook/images/pgprof1.png


+ 2 - 2
hpc/openacc/English/Fortran/jupyter_notebook/openacc_fortran_lab1.ipynb

@@ -358,7 +358,7 @@
   "source": [
    "### Parallelize the Example Code\n",
    "\n",
-    "At this point you have all of the tools you need to begin accelerating your application. The loops you will be parallelizing are in `laplace2d.f90`. From the top menu, click on *File*, and *Open* `laplace2d.f90` from the current directory at `Fortran/source_code/lab1` directory. Remember to **SAVE** your code after changes, before running below cells.\n",
+    "At this point you have all of the tools you need to begin accelerating your application. The loops you will be parallelizing are in `laplace2d.f90`. Click on the <b>[laplace2d.f90](../source_code/lab1/laplace2d.f90)</b> link and modify `laplace2d.f90`. Remember to **SAVE** your code after changes, before running below cells.\n",
    "\n",
    "It is advisable to start with the `calcNext` routine and test your changes by compiling and running the code before moving on to the `swap` routine. OpenACC can be incrementally added to your application so that you can ensure each change is correct before getting too far along, which greatly simplifies debugging.\n",
    "\n",
@@ -543,7 +543,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
  }
 },
 "nbformat": 4,

File diff too large to display
+ 3 - 3
hpc/openacc/English/Fortran/jupyter_notebook/openacc_fortran_lab2.ipynb


+ 3 - 3
hpc/openacc/English/Fortran/jupyter_notebook/openacc_fortran_lab3.ipynb

@@ -155,7 +155,7 @@
    "source": [
    "source": [
     "#### Implementing the Collapse Clause\n",
     "#### Implementing the Collapse Clause\n",
     "\n",
     "\n",
-    "From the top menu, click on *File*, and *Open* `laplace2d.f90` from the current directory at `Fortran/source_code/lab3` directory. Use the **collapse clause** to collapse our multi-dimensional loops into a single dimensional loop.\n",
+    "Click on the <b>[laplace2d.f90](../source_code/lab3/laplace2d.f90)</b> link and modify `laplace2d.f90`. Use the **collapse clause** to collapse our multi-dimensional loops into a single dimensional loop.\n",
     "\n",
     "\n",
     "Remember to **SAVE** your code after changes, before running below cells.\n",
     "Remember to **SAVE** your code after changes, before running below cells.\n",
     "\n",
     "\n",
@@ -265,7 +265,7 @@
    "source": [
    "source": [
     "#### Implementing the Tile Clause\n",
     "#### Implementing the Tile Clause\n",
     "\n",
     "\n",
-    "From the top menu, click on *File*, and *Open* `laplace2d.f90` from the current directory at `Fortran/source_code/lab3` directory.  Replace the **collapse clause** with the **tile clause** to break our multi-dimensional loops into smaller tiles. Try using a variety of different tile sizes, but always keep one of the dimensions as a **multiple of 32**. We will talk later about why this is important.\n",
+    "Click on the <b>[laplace2d.f90](../source_code/lab3/laplace2d.f90)</b> link and modify `laplace2d.f90`. Replace the **collapse clause** with the **tile clause** to break our multi-dimensional loops into smaller tiles. Try using a variety of different tile sizes, but always keep one of the dimensions as a **multiple of 32**. We will talk later about why this is important.\n",
     "\n",
     "\n",
     "Remember to **SAVE** your code after changes, before running below cells.\n",
     "Remember to **SAVE** your code after changes, before running below cells.\n",
     "\n",
     "\n",
@@ -392,7 +392,7 @@
    "name": "python",
    "name": "python",
    "nbconvert_exporter": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,

+ 1 - 1
hpc/openacc/English/Lab1.ipynb

@@ -29,7 +29,7 @@
    "name": "python",
    "name": "python",
    "nbconvert_exporter": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,

+ 1 - 1
hpc/openacc/English/Lab2.ipynb

@@ -29,7 +29,7 @@
    "name": "python",
    "name": "python",
    "nbconvert_exporter": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,

+ 1 - 1
hpc/openacc/English/Lab3.ipynb

@@ -29,7 +29,7 @@
    "name": "python",
    "name": "python",
    "nbconvert_exporter": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,

+ 1 - 1
hpc/openacc/English/openacc_start.ipynb

@@ -62,7 +62,7 @@
    "name": "python",
    "name": "python",
    "nbconvert_exporter": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,

File diff too large to display
+ 6 - 6
hpc/openacc/README.md


+ 7 - 1
hpc/openacc/Singularity

@@ -12,7 +12,13 @@ From: nvcr.io/nvidia/nvhpc:20.9-devel-ubuntu20.04
    apt-get -y update
    DEBIAN_FRONTEND=noninteractive apt-get -yq install --no-install-recommends python3-pip python3-setuptools zip build-essential
    rm -rf /var/lib/apt/lists/*
-    pip3 install --no-cache-dir jupyter
+    
+    pip3 install --upgrade pip
+    pip3 install gdown
+    apt-get update -y
+    apt-get -y install git nvidia-modprobe
+    pip3 install jupyterlab
+    pip3 install ipywidgets
 
 
 %files
 
 

+ 7 - 1
hpc_ai/ai_science_cfd/Dockerfile

@@ -12,6 +12,12 @@ RUN apt-get update
 RUN apt-get install -y libsm6 libxext6 libxrender-dev git
 # Install required python packages
 RUN pip3 install opencv-python==4.1.2.30 pandas seaborn sklearn matplotlib scikit-fmm tqdm h5py gdown
+RUN pip3 install --upgrade pip
+RUN apt-get update -y        
+RUN apt-get install -y git nvidia-modprobe
+RUN pip3 install jupyterlab
+# Install required python packages
+RUN pip3 install ipywidgets
 
 
 # TO COPY the data
 COPY English/ /workspace/
@@ -30,5 +36,5 @@ RUN python3 /workspace/python/source_code/dataset.py
 
 
 ## Uncomment this line to run Jupyter notebook by default
 #CMD jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root
-CMD jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/workspace/python/jupyter_notebook/
+CMD jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/workspace/python/jupyter_notebook/
 
 

+ 1 - 1
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/Resnets.ipynb

@@ -623,7 +623,7 @@
    "name": "python",
    "name": "python",
    "nbconvert_exporter": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "pygments_lexer": "ipython3",
-   "version": "3.6.2"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,

+ 5 - 5
hpc_ai/ai_science_cfd/README.MD

@@ -35,10 +35,10 @@ For instance:
 and to run the container, run:
 `sudo docker run --rm -it --gpus=all --network=host -p 8888:8888 myimage:1.0`
 
 
-The container launches jupyter notebook and runs on port 8888
-`jupyter notebook --ip 0.0.0.0 --port 8888 --no-browser --allow-root`
+The container launches jupyter lab and runs on port 8888
+`jupyter-lab --ip 0.0.0.0 --port 8888 --no-browser --allow-root`
 
 
-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.
 
 
 ### Singularity Container
@@ -50,9 +50,9 @@ and copy the files to your local machine to make sure changes are stored locally
 `singularity run <image_name>.simg cp -rT /workspace ~/workspace`
 
 
 Then, run the container:
-`singularity run --nv <image_name>.simg jupyter notebook --notebook-dir=~/workspace/python/jupyter_notebook`
+`singularity run --nv <image_name>.simg jupyter-lab --notebook-dir=~/workspace/python/jupyter_notebook`
 
 
-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.
 
 
 ## Troubleshooting

+ 6 - 0
hpc_ai/ai_science_cfd/Singularity

@@ -10,6 +10,12 @@ FROM: nvcr.io/nvidia/tensorflow:21.05-tf2-py3
    pip3 install opencv-python==4.1.2.30 pandas seaborn sklearn matplotlib scikit-fmm tqdm h5py gdown
    mkdir /workspace/python/jupyter_notebook/CFD/data
    python3 /workspace/python/source_code/dataset.py
+    
+    pip3 install --upgrade pip
+    apt-get update -y
+    apt-get -y install git nvidia-modprobe
+    pip3 install jupyterlab
+    pip3 install ipywidgets
 
 
 %files
      English/* /workspace/

+ 7 - 1
hpc_ai/ai_science_climate/Dockerfile

@@ -12,6 +12,12 @@ RUN apt-get update -y
 RUN apt-get install -y libsm6 libxext6 libxrender-dev git nvidia-modprobe
 # Install required python packages
 RUN pip3 install  opencv-python==4.1.2.30 pandas seaborn sklearn matplotlib scikit-fmm tqdm h5py gdown
+RUN pip3 install --upgrade pip
+RUN apt-get update -y        
+RUN apt-get install -y git nvidia-modprobe
+RUN pip3 install jupyterlab
+# Install required python packages
+RUN pip3 install ipywidgets
 
 
 ##### TODO - From the Final Repo Changing this
 
 
@@ -22,4 +28,4 @@ COPY English/ /workspace/
 RUN python3 /workspace/python/source_code/dataset.py
 
 
 ## Uncomment this line to run Jupyter notebook by default
-CMD jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/workspace/python/jupyter_notebook/
+CMD jupyter-lab --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/workspace/python/jupyter_notebook/

+ 5 - 5
hpc_ai/ai_science_climate/README.MD

@@ -32,10 +32,10 @@ For instance:
 and to run the container, run:
 `sudo docker run --rm -it --gpus=all --network=host -p 8888:8888 myimage:1.0`
 
 
-The container launches jupyter notebook and runs on port 8888
-`jupyter notebook --ip 0.0.0.0 --port 8888 --no-browser --allow-root`
+The container launches jupyter lab and runs on port 8888
+`jupyter-lab --ip 0.0.0.0 --port 8888 --no-browser --allow-root`
 
 
-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.
 
 
 ### Singularity Container
@@ -48,9 +48,9 @@ and copy the files to your local machine to make sure changes are stored locally
 
 
 
 
 Then, run the container:
-`singularity run --nv <image_name>.simg jupyter notebook --notebook-dir=~/workspace/python/jupyter_notebook`
+`singularity run --nv <image_name>.simg jupyter-lab --notebook-dir=~/workspace/python/jupyter_notebook`
 
 
-Then, open the jupyter notebook in browser: http://localhost:8888
+Then, open the jupyter lab in browser: http://localhost:8888
 Start working on the lab by clicking on the `Start_Here.ipynb` notebook.
 
 
 
 

+ 6 - 0
hpc_ai/ai_science_climate/Singularity

@@ -10,6 +10,12 @@ FROM: nvcr.io/nvidia/tensorflow:21.05-tf2-py3
    pip3 install  opencv-python==4.1.2.30 pandas seaborn sklearn matplotlib scikit-fmm tqdm h5py gdown
    python3 /workspace/python/source_code/dataset.py
     
     
+    pip3 install --upgrade pip
+    apt-get update -y
+    apt-get -y install git nvidia-modprobe
+    pip3 install jupyterlab
+    pip3 install ipywidgets
+    
 %files
     English/* /workspace/