
Merge pull request #2 from bharatk-parallel/ai_cfd_climate

Added CFD and Climate AI for Science labs
Bharatkumar Sharma 4 years ago
parent
commit
095fd154f6
100 changed files with 9559 additions and 0 deletions
  1. 1 0
      hpc_ai/ai_science_cfd/.gitignore
  2. 34 0
      hpc_ai/ai_science_cfd/Dockerfile
  3. 498 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Competition.ipynb
  4. 618 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Part2.ipynb
  5. 847 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Part3.ipynb
  6. 647 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Part4.ipynb
  7. 1233 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Solution.ipynb
  8. 114 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Start_Here.ipynb
  9. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/U-net_flow_func.png
  10. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/U-net_flow_nofunc.png
  11. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/convnet.png
  12. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/elu.png
  13. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/flow_example.png
  14. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/ml_pipeline.jpeg
  15. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/ssf_unet.png
  16. 570 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/CNN's.ipynb
  17. 427 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/Part_2.ipynb
  18. 631 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/Resnets.ipynb
  19. 526 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/Start_Here.ipynb
  20. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/Activation_Function.png
  21. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/NVIDIA_Bootcamp.png
  22. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/activation_fns.png
  23. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/alexnet.png
  24. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/ann.png
  25. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/bp_left.gif
  26. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/bp_right.gif
  27. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/cnn.jpeg
  28. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/conv.gif
  29. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/conv.png
  30. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/conv_depth.png
  31. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/convtranspose.gif
  32. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/convtranspose_conv.gif
  33. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/evaluation.jpg
  34. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/evaluation.png
  35. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/fashion-mnist.png
  36. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/feature_hierarchy.png
  37. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/identity.png
  38. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/location.PNG
  39. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/max_pool.png
  40. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/ml_pipeline.jpeg
  41. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/model.png
  42. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/neuron.jpg
  43. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/or_left.gif
  44. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/or_right.png
  45. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/our_cnn.png
  46. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/resblock.PNG
  47. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/resnet.PNG
  48. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/stats.png
  49. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/sup-unsup.png
  50. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/training.gif
  51. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/xnor_left.gif
  52. BIN
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/xnor_right.png
  53. 84 0
      hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Start_Here.ipynb
  54. 53 0
      hpc_ai/ai_science_cfd/English/python/source_code/dataset.py
  55. 133 0
      hpc_ai/ai_science_cfd/English/python/source_code/model/flow_architecture.py
  56. 320 0
      hpc_ai/ai_science_cfd/English/python/source_code/utils/data_utils.py
  57. 53 0
      hpc_ai/ai_science_cfd/README.MD
  58. 21 0
      hpc_ai/ai_science_cfd/Singularity
  59. 1 0
      hpc_ai/ai_science_climate/.gitignore
  60. 25 0
      hpc_ai/ai_science_climate/Dockerfile
  61. 564 0
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/CNN's.ipynb
  62. 430 0
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/Part_2.ipynb
  63. 628 0
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/Resnets.ipynb
  64. 526 0
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/Start_Here.ipynb
  65. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/Activation_Function.png
  66. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/NVIDIA_Bootcamp.png
  67. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/activation_fns.png
  68. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/alexnet.png
  69. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/ann.png
  70. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/bp_left.gif
  71. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/bp_right.gif
  72. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/cnn.jpeg
  73. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/conv.gif
  74. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/conv.png
  75. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/conv_depth.png
  76. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/convtranspose.gif
  77. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/convtranspose_conv.gif
  78. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/evaluation.jpg
  79. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/evaluation.png
  80. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/fashion-mnist.png
  81. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/feature_hierarchy.png
  82. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/identity.png
  83. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/location.PNG
  84. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/max_pool.png
  85. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/ml_pipeline.jpeg
  86. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/model.png
  87. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/neuron.jpg
  88. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/or_left.gif
  89. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/or_right.png
  90. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/our_cnn.png
  91. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/resblock.PNG
  92. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/resnet.PNG
  93. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/stats.png
  94. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/sup-unsup.png
  95. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/training.gif
  96. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/xnor_left.gif
  97. BIN
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/xnor_right.png
  98. 81 0
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Start_Here.ipynb
  99. 494 0
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Tropical_Cyclone_Intensity_Estimation/Approach_to_the_Problem_&_Inspecting_and_Cleaning_the_Required_Data.ipynb
  100. 0 0
      hpc_ai/ai_science_climate/English/python/jupyter_notebook/Tropical_Cyclone_Intensity_Estimation/Competition.ipynb

+ 1 - 0
hpc_ai/ai_science_cfd/.gitignore

@@ -0,0 +1 @@
+.ipynb_checkpoints

+ 34 - 0
hpc_ai/ai_science_cfd/Dockerfile

@@ -0,0 +1,34 @@
+# Copyright (c) 2020 NVIDIA Corporation.  All rights reserved. 
+
+# To build the docker container, run: $ sudo docker build -t ai-science-cfd:latest --network=host .
+# To run: $ sudo docker run --rm -it --gpus=all -p 8888:8888 ai-science-cfd:latest
+# Finally, open http://127.0.0.1:8888/
+
+# Select Base Image 
+FROM nvcr.io/nvidia/tensorflow:20.01-tf2-py3
+# Update the repo
+RUN apt-get update
+# Install required dependencies
+RUN apt-get install -y libsm6 libxext6 libxrender-dev git
+# Install required python packages
+RUN pip3 install opencv-python==4.1.2.30 pandas seaborn scikit-learn matplotlib scikit-fmm tqdm h5py gdown
+
+# Copy the data
+COPY English/ /workspace/
+#COPY English/python/jupyter_notebook/CFD /workspace/CFD/
+#COPY English/python/jupyter_notebook/Intro_to_DL /workspace/Intro_to_DL/
+#COPY English/Start_Here.ipynb /workspace/
+
+# Make a directory for Data
+RUN mkdir /workspace/python/jupyter_notebook/CFD/data
+
+# Copy the Python file for downloading dataset 
+#COPY English/python/source_code/dataset.py /workspace/
+# This downloads all the datasets
+#RUN python3 /workspace/dataset.py
+RUN python3 /workspace/python/source_code/dataset.py
+
+## Uncomment this line to run Jupyter notebook by default
+#CMD jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root
+CMD jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/workspace/python/jupyter_notebook/
+

+ 498 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Competition.ipynb

@@ -0,0 +1,498 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "    \n",
+    "[Home Page](../Start_Here.ipynb)\n",
+    "\n",
+    "\n",
+    "[Previous Notebook](Part4.ipynb)\n",
+    "      \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "[1](Start_Here.ipynb)\n",
+    "[2](Part2.ipynb)\n",
+    "[3](Part3.ipynb)\n",
+    "[4](Part4.ipynb)\n",
+    "[5]\n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Exercise  :\n",
+    "\n",
+    "Now with that, you are introduced to the Problem Statement and understood the ways you need to solve it. Now is your time to Tweak, Tune and get your hands dirty with the model. \n",
+    "\n",
+    "Let us help you get started by pointing out some ways in which you can make the model more efficient. \n",
+    "\n",
+    "- Epochs: Yes, you can increase the number of epochs to train more on the dataset, but make sure you don't overfit it.\n",
+    "- Learning Rate: This is a very critical thing to understand and tweak as lower values make learning slow, and higher values stop the model from learning.\n",
+    "- Depth of the Model: We had a depth of 5 upsampling and 5 downsampling blocks, we can reduce and increase to learn complex functions.\n",
+    "- Dropout Rate: This would regularize the model from overfitting and learn the features across all neurons properly.\n",
+    "- Gatedness of the Model: You can set the parameter `gated` to `True` or `False` to see how it impacts the model.\n",
+    "\n",
+    "\n",
+    "\n",
+    "\n",
+    "\n",
+    "Don't remember what an Epoch means? Worry not, head to our [Introduction to Deep Learning](../Start_Here/Intro_to_DL.ipynb) notebook to understand what an Epoch means and what the other terms mean.\n",
+    "\n",
+    "\n",
+    "Note, before you start tweaking and training your model, it would be worthwhile to refer to these to see how they affect your model: \n",
+    "\n",
+    "[Epochs impact on overfitting](https://datascience.stackexchange.com/questions/27561/can-the-number-of-epochs-influence-overfitting ) \n",
+    "\n",
+    "[Depth of the Model](https://www.quora.com/Does-adding-more-layers-always-result-in-more-accuracy-in-convolutional-neural-networks)\n",
+    "\n",
+    "[Understand the Impact of Learning Rate on Neural Network Performance](https://machinelearningmastery.com/understand-the-dynamics-of-learning-rate-on-deep-learning-neural-networks/)\n",
+    "\n",
+    "[Introduction to Dropout for Regularizing Deep Neural Networks](https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks/)\n",
+    "\n",
+    "[Introduction to Optimizers](https://algorithmia.com/blog/introduction-to-optimizers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Import Necessary Libraries\n",
+    "from __future__ import print_function\n",
+    "\n",
+    "import sys\n",
+    "sys.path.append('/workspace/python/source_code')\n",
+    "\n",
+    "import numpy as np \n",
+    "import time\n",
+    "import importlib\n",
+    "import os\n",
+    "import math\n",
+    "import matplotlib.pyplot as plt\n",
+    "from matplotlib.pyplot import imshow\n",
+    "\n",
+    "# TensorFlow and tf.keras\n",
+    "import tensorflow as tf\n",
+    "from tensorflow.keras.models import *\n",
+    "from tensorflow.keras.layers import *\n",
+    "from tensorflow.keras.optimizers import *\n",
+    "from tensorflow.keras.activations import *\n",
+    "from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler\n",
+    "from tensorflow.keras import backend as keras\n",
+    "from tensorflow.keras.preprocessing import image\n",
+    "from tensorflow.keras.applications.imagenet_utils import preprocess_input\n",
+    "from tensorflow.keras.initializers import glorot_uniform\n",
+    "import tensorflow.keras.backend as K\n",
+    "\n",
+    "# Custom Utlities\n",
+    "import model.flow_architecture as flow_architecture\n",
+    "import utils.data_utils as data_utils\n",
+    "\n",
+    "import os\n",
+    "os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\n",
+    "# reload(data_utils) # you need to execute this in case you modify the plotting scripts in data_utils"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "batch_size = 8\n",
+    "dataset_size = 3000   # Number of elements in the train.tfrecords\n",
+    "validation_size = 256 # Number of elements to use for validation\n",
+    "\n",
+    "# derive some quantities\n",
+    "train_size = dataset_size - validation_size\n",
+    "train_batches = int(train_size / batch_size)\n",
+    "validation_batches= int(validation_size / batch_size)\n",
+    "\n",
+    "test_size = 28\n",
+    "test_batches = int(test_size/batch_size)\n",
+    "print('Number of batches in train/validation/test dataset:', train_batches, '/', validation_batches, '/', test_batches)\n",
+    "\n",
+    "def init_datasets():\n",
+    "    dataset = tf.data.TFRecordDataset('data/train.tfrecords')\n",
+    "    # Transform binary data into image arrays\n",
+    "    dataset = dataset.map(data_utils.parse_flow_data)\n",
+    "    \n",
+    "    training_dataset = dataset.skip(validation_size).shuffle(buffer_size=512)\n",
+    "    training_dataset = training_dataset.batch(batch_size, drop_remainder=True)\n",
+    "#     training_dataset = training_dataset.repeat()\n",
+    "\n",
+    "    validation_dataset = dataset.take(validation_size).batch(batch_size, drop_remainder=True)\n",
+    "#     validation_dataset = validation_dataset.repeat()\n",
+    "\n",
+    "    # Read test dataset\n",
+    "    test_dataset = tf.data.TFRecordDataset('data/test.tfrecords')\n",
+    "    test_dataset = test_dataset.map(data_utils.parse_flow_data) # Transform binary data into image arrays\n",
+    "    test_dataset = test_dataset.batch(batch_size, drop_remainder = True)\n",
+    " \n",
+    "    return training_dataset, validation_dataset, test_dataset\n",
+    "\n",
+    "training_dataset, validation_dataset, test_dataset = init_datasets()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def loss_image(vflow_hat, vflow):\n",
+    "    ''' Defines the loss for the predicted flow.\n",
+    "    \n",
+    "    Arguments:\n",
+    "    vflow_hat -- predicted flow, shape (?, nh, nw, 2)\n",
+    "    vflow   -- target flow from the simulation, shape (?, nh, nw, 2)\n",
+    "    \n",
+    "    Returns: the L2 loss\n",
+    "    '''\n",
+    "    ### define the square error loss (~ 1 line of code)\n",
+    "    loss = tf.nn.l2_loss(vflow_hat - vflow)\n",
+    "    ###\n",
+    "                         \n",
+    "    # Add a scalar to tensorboard\n",
+    "    tf.summary.scalar('loss', loss)\n",
+    "    \n",
+    "    return loss"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def concat_elu(x):\n",
+    "    \"\"\" like concatenated ReLU (http://arxiv.org/abs/1603.05201), but then with ELU \"\"\"\n",
+    "    axis = len(x.get_shape())-1\n",
+    "    return elu(concatenate([x, -x], axis=axis))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Helper Function to Set Non-linearity\n",
+    "def set_nonlinearity(name):\n",
+    "    if name == 'concat_elu':\n",
+    "        return concat_elu\n",
+    "    elif name == 'elu':\n",
+    "        return tf.nn.elu\n",
+    "    elif name == 'concat_relu':\n",
+    "        return tf.nn.crelu\n",
+    "    elif name == 'relu':\n",
+    "        return tf.nn.relu\n",
+    "    else:\n",
+    "        raise('nonlinearity ' + name + ' is not supported')"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def res_block(x, a=None, filter_size=16, nonlinearity=concat_elu, rate=0, stride=1, gated=False, name=\"resnet\"):\n",
+    "    \"\"\" Residual block of 3x3 convolutions \"\"\"\n",
+    "    # Copy of our Input\n",
+    "    orig_x = x\n",
+    "    # Get Shape of Input data\n",
+    "    orig_x_int_shape = flow_architecture.int_shape(x)\n",
+    "    \n",
+    "    #### First Convolution Layer\n",
+    "    # If Input has one channel Data ( i.e., Input is our Input Image )\n",
+    "    if orig_x_int_shape[3] == 1:\n",
+    "        x_1 = flow_architecture.conv_layer(x, 3, stride, filter_size, name + '_conv_1')\n",
+    "    else:\n",
+    "        x_1 = flow_architecture.conv_layer(nonlinearity(x), 3, stride, filter_size, name + '_conv_1')\n",
+    "    \n",
+    "    # a is fed during the Up-sampling part of the Network ( Refer Upsampling block )\n",
+    "    if a is not None:\n",
+    "        shape_a = flow_architecture.int_shape(a)\n",
+    "        shape_x_1 = flow_architecture.int_shape(x_1)\n",
+    "        paddings = [[0,0],[0, shape_x_1[1]-shape_a[1]], [0, shape_x_1[2]-shape_a[2]], [0, 0]]\n",
+    "        a = tf.pad(a, paddings)\n",
+    "        x_1 = x_1 + flow_architecture.nin(nonlinearity(a), filter_size, name + '_nin')\n",
+    "    # Add Activation Function and Dropout\n",
+    "    x_1 = nonlinearity(x_1)\n",
+    "    x_1 = Dropout(rate=rate)(x_1)\n",
+    "    \n",
+    "    #### Second Convolution Layer \n",
+    "    # Implemented Gated Residual Blocks \n",
+    "    if not gated:\n",
+    "        x_2 = flow_architecture.conv_layer(x_1, 3, 1, filter_size, name + '_conv_2')\n",
+    "    else:\n",
+    "        x_2 = flow_architecture.conv_layer(x_1, 3, 1, filter_size*2, name + '_conv_2')\n",
+    "        x_2_1, x_2_2 = tf.split(axis=3,num_or_size_splits=2,value=x_2)\n",
+    "        x_2 = x_2_1 * tf.nn.sigmoid(x_2_2)\n",
+    "    \n",
+    "    # During Down-sampling Apply Pooling layer for the Input to Match the Outout\n",
+    "    if int(orig_x.get_shape()[2]) > int(x_2.get_shape()[2]):\n",
+    "        orig_x = tf.nn.avg_pool(orig_x, [1,2,2,1], [1,2,2,1], padding='SAME')\n",
+    "\n",
+    "    # Pad Input Data\n",
+    "    out_filter = filter_size\n",
+    "    in_filter = int(orig_x.get_shape()[3])\n",
+    "    if out_filter != in_filter:\n",
+    "        orig_x = tf.pad( orig_x, [[0, 0], [0, 0], [0, 0], [(out_filter-in_filter), 0]])\n",
+    "    # Output Input Data + Output of Convolution Layer ( Why ? Because this is a Residual Block )\n",
+    "    return orig_x + x_2"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def downsampling_res_blocks(x, nr_res_blocks, filter_size, nonlinearity, rate, gated,name_prefix, downsample):\n",
+    "    ''' An optional downsampling step followed by one (or more) residual blocks '''\n",
+    "    \n",
+    "    # Set Parameters and Call our Residual Block Function \n",
+    "    if downsample:\n",
+    "        x = res_block(x, filter_size=filter_size, nonlinearity=nonlinearity, rate=rate, stride=2,gated=gated, name=name_prefix + \"downsample\")\n",
+    "    for i in range(nr_res_blocks):\n",
+    "        x = res_block(x, filter_size=filter_size, nonlinearity=nonlinearity, stride=1,rate=rate, gated=gated, name=name_prefix + str(i))      \n",
+    "    return x    \n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def upsumpling_res_blocks(x, nr_res_blocks, filter_size, nonlinearity, rate, gated, \n",
+    "                        name_prefix, a):\n",
+    "    ''' Upsampling followed by a residual block '''\n",
+    "    # Set Parameters and Call our Residual Block Functions\n",
+    "    x = flow_architecture.transpose_conv_layer(x, 3, 2, filter_size, name_prefix)\n",
+    "    for i in range(nr_res_blocks):\n",
+    "        if i == 0:\n",
+    "            x = res_block(x, a, filter_size=filter_size, nonlinearity=nonlinearity, rate=rate, gated=gated, name=name_prefix + str(i))\n",
+    "        else:\n",
+    "            x = res_block(x, filter_size=filter_size, nonlinearity=nonlinearity, rate=rate, gated=gated, name=name_prefix + str(i))\n",
+    "\n",
+    "    return x"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def conv_res(inputs, nr_res_blocks=1, rate=1.0, nonlinearity_name='concat_elu', gated=True,depth=5):\n",
+    "    \"\"\"Builds conv part of net.\n",
+    "    Args:\n",
+    "      inputs: input images\n",
+    "      rate: dropout layer\n",
+    "    \"\"\"\n",
+    "    # Set Non-linearity\n",
+    "    nonlinearity = set_nonlinearity(nonlinearity_name)\n",
+    "    filter_size = 8\n",
+    "    # Store for Concatenation of the Downsampling output with the Upsampling blocks\n",
+    "    a = []\n",
+    "    \n",
+    "    # First Downsampling Residual Block to Convert Input Image from ( 128  ,256 ,1 ) to ( 128 , 256 , 8)\n",
+    "    \n",
+    "    x = downsampling_res_blocks(inputs, nr_res_blocks, filter_size, nonlinearity, rate, gated, \"resnet_1_\", False)\n",
+    "    \n",
+    "    # Loop Through Downsampling Blocks \n",
+    "    for i in range(2,1+depth):\n",
+    "        a.append(x)\n",
+    "        filter_size = 2 * filter_size\n",
+    "        name_prefix = \"resnet\" + str(i) + \"_\"\n",
+    "        x = downsampling_res_blocks(x, nr_res_blocks, filter_size, nonlinearity, rate, gated, name_prefix, True)\n",
+    "\n",
+    "    # Loop Through Up-sampling Blocks \n",
+    "    for i in range(1,depth):\n",
+    "        filter_size = int(filter_size /2)\n",
+    "        name_prefix = \"up_conv_\" + str(i)\n",
+    "        x = upsumpling_res_blocks(x, nr_res_blocks, filter_size, nonlinearity, rate, gated, name_prefix, a[-i])\n",
+    "    \n",
+    "    # Last Convolution Layer with Activation \n",
+    "    x = flow_architecture.conv_layer(x, 3, 1, 2, \"last_conv\")\n",
+    "    x = tf.nn.tanh(x)\n",
+    "\n",
+    "    return x"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def model(boundary, rate,gated=True,depth=5):\n",
+    "    return conv_res(boundary, nr_res_blocks=1, rate=rate, nonlinearity_name='concat_elu', gated=gated,depth=depth)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Define Dropout Rate and Set Gated = False\n",
+    "\n",
+    "## TODO \n",
+    "## ~~ Change parameters Here ~~\n",
+    "rate = 0.3\n",
+    "gated = False\n",
+    "depth = 5\n",
+    "lr = 0.0001\n",
+    "## ~~ Change parameters Here ~~\n",
+    "\n",
+    "# Compile the Model\n",
+    "input = tf.keras.Input(shape=(128,256,1), name=\"boundary\")\n",
+    "output = model(input,rate=rate,gated=gated,depth=depth)\n",
+    "unet = tf.keras.Model(inputs = input, outputs=output)\n",
+    "unet.compile(tf.keras.optimizers.Adam(lr), loss=loss_image)\n",
+    "unet.summary()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let's train our Model for 20 Epochs\n",
+    "results = unet.fit(training_dataset, epochs=15)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let us Plot the train History\n",
+    "data_utils.plot_keras_loss(results)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Let's Test our Model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "test_loss = unet.evaluate(test_dataset, steps=3)\n",
+    "print('The loss over the test dataset', test_loss)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "x, vxy = data_utils.load_test_data(1) # you can try different numbers between 1 and 28\n",
+    "vxy_hat = unet.predict(x)\n",
+    "data_utils.plot_test_result(x, vxy, vxy_hat)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Licensing\n",
+    "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)\n",
+    "\n",
+    "[Previous Notebook](Part4.ipynb)\n",
+    "      \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "[1](Start_Here.ipynb)\n",
+    "[2](Part2.ipynb)\n",
+    "[3](Part3.ipynb)\n",
+    "[4](Part4.ipynb)\n",
+    "[5]\n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "\n",
+    "\n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "    \n",
+    "[Home Page](../Start_Here.ipynb)\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
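
The exercise above suggests tuning the learning rate, and the notebook imports `LearningRateScheduler` without using it. Below is one hypothetical way to wire a schedule into training; `unet` and `training_dataset` are the objects defined in `Competition.ipynb`, and the halving schedule is purely illustrative, not prescribed by the lab:

```python
# Hedged sketch: attach a learning-rate schedule to the training call above.
# `unet` and `training_dataset` are assumed to be defined as in Competition.ipynb.
import tensorflow as tf

initial_lr = 1e-4  # the notebook's starting learning rate

def step_decay(epoch):
    # Illustrative schedule: halve the learning rate every 5 epochs.
    return initial_lr * (0.5 ** (epoch // 5))

lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay, verbose=1)
results = unet.fit(training_dataset, epochs=15, callbacks=[lr_callback])
```

A schedule like this lets you start with a comparatively large step and anneal it, which is one common answer to the trade-off the exercise describes between slow learning and divergence.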

+ 618 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Part2.ipynb

@@ -0,0 +1,618 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "    \n",
+    "[Home Page](../Start_Here.ipynb)\n",
+    "\n",
+    "[Previous Notebook](Start_Here.ipynb)\n",
+    "      \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "[1](Start_Here.ipynb)\n",
+    "[2]\n",
+    "[3](Part3.ipynb)\n",
+    "[4](Part4.ipynb)\n",
+    "[5](Competition.ipynb)\n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "     \n",
+    "[Next Notebook](Part3.ipynb)\n",
+    "\n",
+    "\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Steady State Flow Using Neural Networks - Part 2 \n",
+    "\n",
+    "**Contents of this notebook:**\n",
+    "\n",
+    "- [Approaching the Problem](#Approaching-the-Problem)\n",
+    "- [Data and Task](#Data-and-Task)\n",
+    "- [Model and Loss](#Model-and-Loss) \n",
+    "- [Training and Evaluation](#Training-and-Evaluation) \n",
+    "- [Building our First Model](#Building-our-First-Model)\n",
+    "\n",
+    "\n",
+    "**By the End of this Notebook you will:**\n",
+    "\n",
+    "- Understand the working pipeline of DL Network.\n",
+    "- Building our first model using Fully Connected networks."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Approaching the Problem\n",
+    "\n",
+    "####  As mentioned in the Introduction to Deep Learning notebook we will follow the same steps in this notebook as well:\n",
+    " \n",
+    "- Data\n",
+    "- Tasks\n",
+    "- Model\n",
+    "- Loss\n",
+    "- Learning\n",
+    "- Evaluation"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Data and Task \n",
+    "\n",
+    "Like we saw in the previous notebook, our input and output data are as follows :\n",
+    "\n",
+    "<img src=\"images/flow_example.png\">\n",
+    "\n",
+    "This Simulated Flow lines were calculated using Lattice Boltzmann method ([Mechsys](http://mechsys.nongnu.org/)). \n",
+    "\n",
+    "Let us import our dataset and see some of the input output pairs : "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Importing Necessary Libaries \n",
+    "from __future__ import print_function\n",
+    "\n",
+    "import sys\n",
+    "sys.path.append('/workspace/python/source_code')\n",
+    "\n",
+    "import numpy as np\n",
+    "import utils.data_utils as data_utils\n",
+    "import tensorflow as tf\n",
+    "from tensorflow.keras import layers\n",
+    "import tensorflow.keras.backend as K\n",
+    "import math\n",
+    "import matplotlib.pyplot as plt\n",
+    "\n",
+    "import time\n",
+    "\n",
+    "import importlib\n",
+    "# reload(data_utils) # you need to execute this in case you modify the plotting scripts in data_utils"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The input dataset is stored as a `TFRecords` file which is suitable for storing large datasets which cannot be loaded into memory and Tensorflow takes batches of data to optimize our data pipelining process."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Set up our dataset\n",
+    "dataset = tf.data.TFRecordDataset('data/train.tfrecords')\n",
+    "# Transform binary data into image arrays\n",
+    "dataset = dataset.map(data_utils.parse_flow_data) \n",
+    "\n",
+    "batched_dataset = dataset.batch(32, drop_remainder=True)\n",
+    "\n",
+    "# Create an iterator for reading a batch of input and output data\n",
+    "iterator = iter(batched_dataset)\n",
+    "boundary, vflow = next(iterator)\n",
+    "\n",
+    "print('Input shape:', boundary.shape.as_list())\n",
+    "print('Output shape:', vflow.shape.as_list())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We have set up a batch of 32 images with 128x256 resolution. The input data has only 1 channel (last dimension), which describes the boundary. The output data has 2 channels the x and y velocity component of the flow.\n",
+    "\n",
+    "Let us now display some of the training examples. Feel free to change `plot_idx`, and try plotting into a single figure."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "plot_idx = 10 # set it between 0-31\n",
+    "\n",
+    "data_utils.plot_flow_data(boundary[plot_idx,:,:,:], vflow[plot_idx,:,:,:])\n",
+    "\n",
+    "# You can plot the input and output data into a single figure\n",
+    "data_utils.plot_flow_data(boundary[plot_idx,:,:,:], vflow[plot_idx,:,:,:], single_plot=True)\n",
+    "\n",
+    "    "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# How a Boundary is defined ?  \n",
+    "\n",
+    "# Running counter over the Input \n",
+    "from collections import Counter\n",
+    "c = np.array(boundary[plot_idx,:,:,:]).flatten()\n",
+    "c = Counter(c)\n",
+    "c"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "From the Input data, we can notice the boundary is defined by 1's and 0's, which is also reported to us by the counter above.\n",
+    "\n",
+    "Let us now understand how the flow lines are described."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import matplotlib.pyplot as plt\n",
+    "import numpy as np\n",
+    "\n",
+    "fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(16, 4))\n",
+    "\n",
+    "ax1.hist(np.array(vflow[plot_idx,:,:,0]).flatten(), density=True, bins=30)\n",
+    "ax1.set_ylabel('Count')\n",
+    "ax1.set_xlabel('Value')\n",
+    "ax1.set_title(\"X Values\")\n",
+    "\n",
+    "\n",
+    "ax2.hist(np.array(vflow[plot_idx,:,:,1]).flatten(), density=True, bins=30)\n",
+    "ax2.set_ylabel('Count')\n",
+    "ax2.set_xlabel('Value')\n",
+    "ax2.set_title(\"Y Values\")\n",
+    "plt.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "From the X and Y Values, we can observe that each pixel denotes the velocity vector corresponding to that pixel :\n",
+    "\n",
+    "Positive meaning $+x$ or $+y $ direction and negative meaning the $-x$ or $-y $ direction respectively."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#Plotting Heatmap of the 2-Channels\n",
+    "import matplotlib.pyplot as plt\n",
+    "fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(14, 14))\n",
+    "\n",
+    "# Plotting X vector and Y vector separately.\n",
+    "\n",
+    "# X Vector \n",
+    "\n",
+    "ax1.imshow(vflow[plot_idx,:,:,0], cmap='hot', interpolation='nearest')\n",
+    "ax1.set_title(\"X Vector\")\n",
+    "\n",
+    "# Y Vector\n",
+    "\n",
+    "ax2.imshow(-vflow[plot_idx,:,:,1], cmap='hot', interpolation='nearest')\n",
+    "ax2.set_title(\"Y Vector\")\n",
+    "plt.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we understand our input and output data, our task is now to predict the velocity vectors of both the $x$ and $y$ channels from our model.\n",
+    "\n",
+    "Let us now split the dataset into Training, Test and Validation Data."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Model and Loss\n",
+    "\n",
+    "We will be building the following Models and benchmarking them as we proceed :\n",
+    "\n",
+    "* Simple Fully Connected Networks \n",
+    "    - *3 Layer Network*\n",
+    "    - *5 Layer Network*\n",
+    "* Convolution Neural Networks \n",
+    "    - *Binary Boundary*\n",
+    "    - *Signed Distance Function*\n",
+    "* Advanced Networks\n",
+    "    - *Gated Residual Network*\n",
+    "    - *Non-Gated Residual Network*\n",
+    "    \n",
+    "    \n",
+    "There are a variety of functions also discussed in the *Introduction to Deep Learning Notebook*. In this *Task*, we will be using Squared Error Loss.\n",
+    "\n",
+    "$$ \\mathrm{Loss}(\\hat{v}_x, \\hat{v}_y, v_x, v_y) = \\sum_{i=1}^{n_\\mathrm{batch}} \n",
+    "  \\sum_{x=1,y=1}^{nh,nw} \\left(\\left(v_x^i(x,y) - \\hat{v}_x^i(x,y)\\right)^2 + \n",
+    "   \\left(v_y^i(x,y) - \\hat{v}_y^i(x,y)\\right)^2\\right) . $$\n",
+    "  \n",
+    "This can be implemented easily using [tf.nn.l2_loss](https://www.tensorflow.org/api_docs/python/tf/nn/l2_loss)."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Training and Evaluation\n",
+    "\n",
+    "* Epochs: We will be using 25 - 30 epochs. Readers are free to change and play with the values. \n",
+    "* Activation Function: We will start with ReLu and improving upon same\n",
+    "* Optimizer: We will be using Adam Optimizer with Learning Rate of 0.0001\n",
+    "* Test Set: We will be using a set of 28 boundary conditions as part of the out Test set.\n",
+    "\n",
+    "Now we have an idea on how to proceed. Let us start building our First Fully Connected Model."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Building our First Model"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Let's import our data and divide them into training, test and validation sets"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "batch_size = 8\n",
+    "dataset_size = 3000   # Number of elements in the train.tfrecords\n",
+    "validation_size = 256 # Number of elements to use for validation\n",
+    "\n",
+    "# derive some quantities\n",
+    "train_size = dataset_size - validation_size\n",
+    "train_batches = int(train_size / batch_size)\n",
+    "validation_batches= int(validation_size / batch_size)\n",
+    "\n",
+    "test_size = 28\n",
+    "test_batches = int(test_size/batch_size)\n",
+    "print('Number of batches in train/validation/test dataset:', train_batches, '/', validation_batches, '/', test_batches)\n",
+    "\n",
+    "def init_datasets():\n",
+    "    dataset = tf.data.TFRecordDataset('data/train.tfrecords')\n",
+    "    # Transform binary data into image arrays\n",
+    "    dataset = dataset.map(data_utils.parse_flow_data)\n",
+    "    \n",
+    "    training_dataset = dataset.skip(validation_size).shuffle(buffer_size=512)\n",
+    "    training_dataset = training_dataset.batch(batch_size, drop_remainder=True)\n",
+    "    training_dataset = training_dataset.repeat()\n",
+    "\n",
+    "    validation_dataset = dataset.take(validation_size).batch(batch_size, drop_remainder=True)\n",
+    "    validation_dataset = validation_dataset.repeat()\n",
+    "\n",
+    "    # Read test dataset\n",
+    "    test_dataset = tf.data.TFRecordDataset('data/test.tfrecords')\n",
+    "    test_dataset = test_dataset.map(data_utils.parse_flow_data) # Transform binary data into image arrays\n",
+    "    test_dataset = test_dataset.batch(batch_size, drop_remainder = True).repeat()\n",
+    " \n",
+    "    return training_dataset, validation_dataset, test_dataset\n",
+    "\n",
+    "training_dataset, validation_dataset, test_dataset = init_datasets()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Loss function \n",
+    "\n",
+    "Now let us define  Loss Function using [tf.nn.l2_loss](https://www.tensorflow.org/api_docs/python/tf/nn/l2_loss)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def loss_image(vflow_hat, vflow):\n",
+    "    '''Defines the loss for the predicted flow.\n",
+    "    \n",
+    "    Arguments:\n",
+    "    vflow_hat -- predicted flow, shape (?, nh, nw, 2)\n",
+    "    vflow   -- target flow from the simulation, shape (?, nh, nw, 2)\n",
+    "    \n",
+    "    Returns: the L2 loss\n",
+    "    ''' \n",
+    "    ### Define the square error loss (~ 1 line of code)\n",
+    "    loss = tf.nn.l2_loss(vflow_hat - vflow)\n",
+    "    ###\n",
+    "                         \n",
+    "    # Add a scalar to tensorboard\n",
+    "    tf.summary.scalar('loss', loss)\n",
+    "    \n",
+    "    return loss"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Model \n",
+    "\n",
+    "3 - Layer Fully Connected Network : \n",
+    "\n",
+    "* Input Layer of Size ( 128 * 256 * 1 )\n",
+    "* Hidden Layer of Size ( 256 ) \n",
+    "* Output Layer of Size ( 128 * 256 * 2 ) \n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def fully_connected(input):\n",
+    "    # Arguments:\n",
+    "    # input -- input layer for the network, expected shape (?,nh,nw,1)\n",
+    "    # Returns -- predicted flow (?, nh, nw, 2)\n",
+    "    \n",
+    "    nh = K.int_shape(input)[1]\n",
+    "    nw = K.int_shape(input)[2]\n",
+    "    \n",
+    "    # define the hidden layer\n",
+    "    x = layers.Flatten()(input)\n",
+    "    x = layers.Dense(256, activation='relu')(x)\n",
+    "    \n",
+    "    \n",
+    "    ### Define output layer and reshape it to nh x nw x 2. \n",
+    "    ### (Note that the extra batch dimension is handled automatically by Keras)\n",
+    "    x = layers.Dense(nh*nw*2, activation='relu')(x)\n",
+    "    output = layers.Reshape((nh,nw,2))(x)\n",
+    "    ###\n",
+    "    \n",
+    "    return output"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let us define the input and parameters of our model : "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Define Inputs and Outputs\n",
+    "input = tf.keras.Input(shape=(128,256,1))\n",
+    "output = fully_connected(input)\n",
+    "# Use Keras Functional API to Create our Model and Define our Optimizer and Loss Function\n",
+    "fc_model = tf.keras.Model(inputs = input, outputs=output)\n",
+    "fc_model.compile(tf.keras.optimizers.Adam(0.0001), loss=loss_image)\n",
+    "fc_model.summary()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Let us now train our Model : "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "history = fc_model.fit(training_dataset, epochs=30, steps_per_epoch=train_batches,\n",
+    "          validation_data=validation_dataset, validation_steps=validation_batches)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let us Plot the train History\n",
+    "data_utils.plot_keras_loss(history)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Test\n",
+    "\n",
+    "We will evaluate the model on the test dataset, and plot some of the results."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "test_loss = fc_model.evaluate(test_dataset, steps=3)\n",
+    "print('The loss over the test dataset', test_loss)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "x, vxy = data_utils.load_test_data(1) # you can try different numbers between 1 and 28\n",
+    "vxy_hat = fc_model.predict(x)\n",
+    "data_utils.plot_test_result(x, vxy, vxy_hat)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can plot a vertical slice of the velocity field for better comparison"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "x_idx=120 # x coordinate for the slice\n",
+    "\n",
+    "vx = np.squeeze(vxy[0,:,x_idx,0])                 # test velocity fields\n",
+    "vy = np.squeeze(vxy[0,:,x_idx,1])\n",
+    "\n",
+    "vx_hat = np.squeeze(vxy_hat[0,:,x_idx,0])         # predicted velocity field\n",
+    "vy_hat = np.squeeze(vxy_hat[0,:,x_idx,1])\n",
+    "\n",
+    "fig = plt.figure(figsize=(16,6))\n",
+    "\n",
+    "# plot the x component of the velocity\n",
+    "ax = fig.add_subplot(121)\n",
+    "ax.plot(vx, label='ground truth')\n",
+    "ax.plot(vx_hat, label='predicted')\n",
+    "ax.legend()\n",
+    "ax.set_xlabel('y')\n",
+    "ax.set_ylabel('vx')\n",
+    "\n",
+    "# plot the y component of the velocity\n",
+    "ax = fig.add_subplot(122)\n",
+    "ax.plot(vy, label='ground truth')\n",
+    "ax.plot(vy_hat, label='predicted')\n",
+    "ax.legend()\n",
+    "ax.set_xlabel('y')\n",
+    "ax.set_ylabel('vy')\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In the upcoming notebook let us define a 5 Layer fully connected network and train it.\n",
+    "\n",
+    "## Important:\n",
+    "<mark>Shutdown the kernel before clicking on “Next Notebook” to free up the GPU memory.</mark>\n",
+    "\n",
+    "\n",
+    "## Licensing\n",
+    "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[Previous Notebook](Start_Here.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[1](Start_Here.ipynb)\n",
+    "[2]\n",
+    "[3](Part3.ipynb)\n",
+    "[4](Part4.ipynb)\n",
+    "[5](Competition.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](Part3.ipynb)\n",
+    "\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&ensp;\n",
+    "[Home Page](../Start_Here.ipynb)"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
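
As noted after the loss formula above, `tf.nn.l2_loss` returns half the sum of squares rather than the plain sum written in the formula; the constant factor does not change the optimum. A minimal, self-contained sketch verifying this behaviour:

```python
# Minimal sketch: tf.nn.l2_loss(t) computes sum(t ** 2) / 2,
# i.e. half the summed squared error in the loss formula of Part2.
import numpy as np
import tensorflow as tf

diff = tf.constant(np.arange(-3.0, 3.0), dtype=tf.float32)  # stand-in for vflow_hat - vflow
print(float(tf.nn.l2_loss(diff)))                   # 9.5
print(float(tf.reduce_sum(tf.square(diff)) / 2.0))  # 9.5, identical
```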

+ 847 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Part3.ipynb

@@ -0,0 +1,847 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&ensp;\n",
+    "[Home Page](../Start_Here.ipynb)\n",
+    "\n",
+    "\n",
+    "[Previous Notebook](Part2.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[1](Start_Here.ipynb)\n",
+    "[2](Part2.ipynb)\n",
+    "[3]\n",
+    "[4](Part4.ipynb)\n",
+    "[5](Competition.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](Part4.ipynb)\n",
+    "\n",
+    "\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Steady State Flow Using Neural Networks - Part 3\n",
+    "\n",
+    "**Contents of the this notebook:**\n",
+    "\n",
+    "- [Improving Our Fully Connected Model](#Building-our-First-Model)\n",
+    "- [Building a Convolutional Model](#Convolutional-model)\n",
+    "- [Input Data Manipulation](#Input-Data-Manipulation)\n",
+    "\n",
+    "**By the End of this notebook participants will:**\n",
+    "\n",
+    "- Benchmark three different models and their performance\n",
+    "- Understand how input data manipulation can help in building a better model."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Improving our Fully Connected Network\n",
+    "\n",
+    "Let us import libraries, dataset and define the Loss function "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Importing Necessary Libraries \n",
+    "from __future__ import print_function\n",
+    "\n",
+    "import sys\n",
+    "sys.path.append('/workspace/python/source_code')\n",
+    "\n",
+    "import numpy as np\n",
+    "import utils.data_utils as data_utils\n",
+    "import tensorflow as tf\n",
+    "from tensorflow.keras import layers\n",
+    "import tensorflow.keras.backend as K\n",
+    "import math\n",
+    "import matplotlib.pyplot as plt\n",
+    "\n",
+    "import time\n",
+    "\n",
+    "import importlib\n",
+    "\n",
+    "import os\n",
+    "os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\n",
+    "# reload(data_utils) # you need to execute this in case you modify the plotting scripts in data_utils"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "batch_size = 8\n",
+    "dataset_size = 1500   # Number of elements in the train.tfrecords\n",
+    "validation_size = 256 # Number of elements to use for validation\n",
+    "\n",
+    "# derive some quantities\n",
+    "train_size = dataset_size - validation_size\n",
+    "train_batches = int(train_size / batch_size)\n",
+    "validation_batches= int(validation_size / batch_size)\n",
+    "\n",
+    "test_size = 28\n",
+    "test_batches = int(test_size/batch_size)\n",
+    "print('Number of batches in train/validation/test dataset:', train_batches, '/', validation_batches, '/', test_batches)\n",
+    "\n",
+    "def init_datasets():\n",
+    "    dataset = tf.data.TFRecordDataset('data/train.tfrecords')\n",
+    "    dataset = dataset.take(dataset_size)\n",
+    "    # Transform binary data into image arrays\n",
+    "    dataset = dataset.map(data_utils.parse_flow_data)\n",
+    "    \n",
+    "    training_dataset = dataset.skip(validation_size).shuffle(buffer_size=512)\n",
+    "    training_dataset = training_dataset.batch(batch_size, drop_remainder=True)\n",
+    "    training_dataset = training_dataset.repeat()\n",
+    "\n",
+    "    validation_dataset = dataset.take(validation_size).batch(batch_size, drop_remainder=True)\n",
+    "    validation_dataset = validation_dataset.repeat()\n",
+    "\n",
+    "    # Read test dataset\n",
+    "    test_dataset = tf.data.TFRecordDataset('data/test.tfrecords')\n",
+    "    test_dataset = test_dataset.map(data_utils.parse_flow_data) # Transform binary data into image arrays\n",
+    "    test_dataset = test_dataset.batch(batch_size, drop_remainder = True).repeat()\n",
+    " \n",
+    "    return training_dataset, validation_dataset, test_dataset\n",
+    "\n",
+    "training_dataset, validation_dataset, test_dataset = init_datasets()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Create an iterator for reading a batch of input and output data\n",
+    "iterator = iter(training_dataset)\n",
+    "boundary, vflow = next(iterator)\n",
+    "\n",
+    "print('input shape:', boundary.shape.as_list())\n",
+    "print('output shape:', vflow.shape.as_list())\n",
+    "\n",
+    "plot_idx = 3 # set it between 0 and batch_size\n",
+    "\n",
+    "data_utils.plot_flow_data(boundary[plot_idx,:,:,:], vflow[plot_idx,:,:,:])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def loss_image(vflow_hat, vflow):\n",
+    "    ''' Defines the loss for the predicted flow.\n",
+    "    \n",
+    "    Arguments:\n",
+    "    vflow_hat -- predicted flow, shape (?, nh, nw, 2)\n",
+    "    vflow   -- target flow from the simulation, shape (?, nh, nw, 2)\n",
+    "    \n",
+    "    Returns: the L2 loss\n",
+    "    '''\n",
+    "    ### define the squure error loss (~ 1 line of code)\n",
+    "    loss = tf.nn.l2_loss(vflow_hat - vflow)\n",
+    "    ###\n",
+    "                         \n",
+    "    # Add a scalar to tensorboard\n",
+    "    tf.summary.scalar('loss', loss)\n",
+    "    \n",
+    "    return loss"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Model \n",
+    "\n",
+    "5 - Layer Fully Connected Network : \n",
+    "\n",
+    "* Input Layer of Size ( 128 * 256 * 1 )\n",
+    "* Hidden Layer of Size ( 1024 ) \n",
+    "* Hidden Layer of Size ( 1024 ) \n",
+    "* Hidden Layer of Size ( 1024 ) \n",
+    "* Output Layer of Size ( 128 * 256 * 2 ) \n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def fully_connected(input):\n",
+    "    # Arguments:\n",
+    "    # input -- input layer for the network, expected shape (?,nh,nw,1)\n",
+    "    # Returns -- predicted flow (?, nh, nw, 2)\n",
+    "    \n",
+    "    nh = K.int_shape(input)[1]\n",
+    "    nw = K.int_shape(input)[2]\n",
+    "    \n",
+    "    # define the hidden layers\n",
+    "    x = layers.Flatten()(input)\n",
+    "    \n",
+    "     \n",
+    "    ### Add three dense hidden layers with 1024 hidden units each\n",
+    "    x = layers.Dense(1024, activation='relu')(x)\n",
+    "    x = layers.Dense(1024, activation='relu')(x)\n",
+    "    x = layers.Dense(1024, activation='relu')(x)\n",
+    "    ##\n",
+    "   \n",
+    "    x = layers.Dense(nh*nw*2, activation='relu')(x)\n",
+    "    output = layers.Reshape((nh,nw,2))(x)\n",
+    "    ###\n",
+    "    \n",
+    "    return output"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "input = tf.keras.Input(shape=(128,256,1))\n",
+    "output = fully_connected(input)\n",
+    "\n",
+    "\n",
+    "### Define a new keras model with the above input and output, and compile it with Adam optimizer\n",
+    "fc_model = tf.keras.Model(inputs = input, outputs=output)\n",
+    "fc_model.compile(tf.keras.optimizers.Adam(0.0001), loss=loss_image)\n",
+    "###\n",
+    "fc_model.summary()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "history = fc_model.fit(training_dataset, epochs=30, steps_per_epoch=train_batches,\n",
+    "          validation_data=validation_dataset, validation_steps=validation_batches, \n",
+    "            callbacks=[tf.keras.callbacks.TensorBoard(log_dir='/tmp/fc3')])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let us Plot the train History\n",
+    "data_utils.plot_keras_loss(history)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Test\n",
+    "\n",
+    "We will evaluate the model on the test dataset, and plot some of the results."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "test_loss = fc_model.evaluate(test_dataset, steps=3)\n",
+    "print('The loss over the test dataset', test_loss)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "x, vxy = data_utils.load_test_data(1) # you can try different numbers between 1 and 28\n",
+    "vxy_hat = fc_model.predict(x)\n",
+    "data_utils.plot_test_result(x, vxy, vxy_hat)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can plot a vertical slice of the velocity field for better comparison"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "x_idx=120 # x coordinate for the slice\n",
+    "\n",
+    "vx = np.squeeze(vxy[0,:,x_idx,0])                 # test velocity fields\n",
+    "vy = np.squeeze(vxy[0,:,x_idx,1])\n",
+    "\n",
+    "vx_hat = np.squeeze(vxy_hat[0,:,x_idx,0])         # predicted velocity field\n",
+    "vy_hat = np.squeeze(vxy_hat[0,:,x_idx,1])\n",
+    "\n",
+    "fig = plt.figure(figsize=(16,6))\n",
+    "\n",
+    "# plot the x component of the velocity\n",
+    "ax = fig.add_subplot(121)\n",
+    "ax.plot(vx, label='ground truth')\n",
+    "ax.plot(vx_hat, label='predicted')\n",
+    "ax.legend()\n",
+    "ax.set_xlabel('y')\n",
+    "ax.set_ylabel('vx')\n",
+    "\n",
+    "# plot the y component of the velocity\n",
+    "ax = fig.add_subplot(122)\n",
+    "ax.plot(vy, label='ground truth')\n",
+    "ax.plot(vy_hat, label='predicted')\n",
+    "ax.legend()\n",
+    "ax.set_xlabel('y')\n",
+    "ax.set_ylabel('vy')\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We have bbserved the decrease in the loss function but it is still not sufficient for practical purposes. Let us now define a Convolution Model and see how it performs."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Convolutional model\n",
+    "We will re-create the network from [Convolutional Neural Networks for Steady Flow Approximation](https://www.autodeskresearch.com/publications/convolutional-neural-networks-steady-flow-approximation). Here is an illustration from the paper: <img src='images/convnet.png' width='800px'>\n",
+    "\n",
+    "The number of filters and the kernel size are shown below for the conv/deconv operations. The dimension of the feature maps is indicated below in the boxes. The strides are the same as the kernel sizes.\n",
+    "\n",
+    "The direct connection between the input and the output layers is just a multiplication that zeros the flow inside the objects.\n",
+    "\n",
+    "To Learn about Convolutional Neural Networks and how they work, visit [Convolution Neural Network Notebook](../Intro_to_DL/CNN's.ipynb)\n",
+    "\n",
+    "### Model\n",
+    "\n",
+    "We define the encoding/decoding part separately, and then we combine them.\n",
+    "\n",
+    "We will set the parameters for [conv2d](https://keras.io/layers/convolutional/#conv2d), and add the fully connected layer"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def conv(input):\n",
+    "    # Define layers to calculate the convolution and FC part of the network\n",
+    "    # Arguments:\n",
+    "    # input -- (?, nh, nw, nc)\n",
+    "    # Returns: (? 1,1,1024)\n",
+    "    \n",
+    "    \n",
+    "    ### Set the number of filters for the first convolutional layer\n",
+    "    x = layers.Conv2D(128, (16,16), strides=(16,16), padding='same', name='conv1', activation='relu')(input)\n",
+    "    \n",
+    "    \n",
+    "    ### Set the number of filters and kernel size for the second convolutional layer \n",
+    "    x = layers.Conv2D(512, (4,4), strides=(4,4), padding='same', name='conv2', activation='relu')(x)\n",
+    "    ###\n",
+    "    \n",
+    "    x = layers.Flatten()(x)\n",
+    "    \n",
+    "    \n",
+    "    ### Add a denslayer with ReLU activation\n",
+    "    x = layers.Dense(1024, activation='relu')(x)\n",
+    "    ###\n",
+    "    \n",
+    "    # Reshape the output as 1x1 image with 1024 channels:\n",
+    "    x = layers.Reshape((1,1,1024))(x)\n",
+    "    \n",
+    "    return(x)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We define one of the decoding branch using [Conv2DTranspose](https://keras.io/layers/convolutional/#conv2dtranspose)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def deconv(input, suffix):\n",
+    "    # Define layers that perform the deconvolution steps\n",
+    "    # Arguments:\n",
+    "    # input -- (?, 1, 1, 1024)\n",
+    "    # suffix -- name_suffix\n",
+    "    # Returns -- (?,128,256,1)\n",
+    "    x = layers.Conv2DTranspose(512, (8,8), strides=(8,8), activation='relu', name=\"deconv1_\"+suffix)(input)\n",
+    "    \n",
+    "    \n",
+    "    ### Add the 2nd and 3rd Conv2DTranspose layers\n",
+    "    x = layers.Conv2DTranspose(256, (8,4), strides=(8,4), activation='relu', name=\"deconv2_\"+suffix)(x)\n",
+    "    x = layers.Conv2DTranspose(32, (2,2), strides=(2,2), activation='relu', name=\"deconv3_\"+suffix)(x)\n",
+    "    ###\n",
+    "    \n",
+    "    x = layers.Conv2DTranspose(1, (2,2), strides=(2,2), activation='relu', name=\"deconv4_\"+suffix)(x)\n",
+    "    x = layers.Permute((2,1,3))(x)\n",
+    "    return x"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def conv_deconv(input):\n",
+    "    # Combine the convolution / deconvolution steps\n",
+    "    x = conv(input)\n",
+    "    \n",
+    "    vx = deconv(x, \"vx\")\n",
+    "    \n",
+    "    # create a mask to zero out flow at (and inside) the boundary \n",
+    "    vx = layers.Lambda(lambda x: x[0]*(1-x[1]), name='mask_vx')([vx, input])\n",
+    "    \n",
+    "     \n",
+    "    ### Add decoder for vy\n",
+    "    vy = deconv(x, \"vy\")\n",
+    "    ### \n",
+    "    \n",
+    "    vy = layers.Lambda(lambda x: x[0]*(1-x[1]), name='mask_vy')([vy, input])\n",
+    "    \n",
+    "    output = layers.concatenate([vx, vy], axis=3)\n",
+    "    \n",
+    "    return output"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Compile the model:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "K.clear_session()\n",
+    "\n",
+    "# we need to re-init the dacaset because of Clearing our session\n",
+    "training_dataset, validation_dataset, test_dataset = init_datasets()\n",
+    "\n",
+    "input = tf.keras.Input(shape=(128,256,1), name=\"boundary\")\n",
+    "output = conv_deconv(input)\n",
+    "conv_model = tf.keras.Model(inputs = input, outputs=output)\n",
+    "conv_model.compile(tf.keras.optimizers.Adam(0.0001), loss=loss_image)\n",
+    "conv_model.summary()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Train the Model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#But Training our model from scratch will take a long time\n",
+    "#So we will load a partially trained model to speedup the process \n",
+    "K.clear_session()\n",
+    "conv_model = tf.keras.models.load_model(\"conv_model.h5\",custom_objects={'loss_image': loss_image})\n",
+    "\n",
+    "history = conv_model.fit(training_dataset, epochs=5, steps_per_epoch=train_batches,\n",
+    "          validation_data=validation_dataset, validation_steps=validation_batches, \n",
+    "            callbacks=[tf.keras.callbacks.TensorBoard(log_dir='/tmp/conv')])\n",
+    "\n",
+    "data_utils.plot_keras_loss(history)\n",
+    "# not much improvement after 20 epochs, takes 25sec/epoch on v100"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Test\n",
+    "\n",
+    "We will evaluate the model on the test dataset, and plot some of the results."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "test_loss = conv_model.evaluate(test_dataset, steps=3)\n",
+    "print('The loss over the test dataset', test_loss)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "x, y = data_utils.load_test_data(1) # You can try different values between 1 and 28\n",
+    "y_hat = conv_model.predict(x)\n",
+    "data_utils.plot_test_result(x, y, y_hat)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Input Data Manipulation\n",
+    "\n",
+    "### Use signed distance function as the input feature\n",
+    "\n",
+    "To improve the performance of the model, we will use a different encoding of the input data. Instead of giving 0s and 1s, we calculate the [signed distance function (SDF)](https://en.wikipedia.org/wiki/Signed_distance_function) of the input data.\n",
+    "\n",
+    "Let $B$ denote the set of points inside solid objects, and $\\partial B$ its boundary. We define $d(\\vec{r},\\partial B)$ as the distance between point $\\vec{r}$ and the boundary $B$.\n",
+    "\n",
+    "$$ d(\\vec{r}, \\partial B) = \\min_{\\vec{x} \\in \\partial B} | \\vec{r} - \\vec{x}|.$$\n",
+    "\n",
+    "The signed distance function is defined as\n",
+    "\n",
+    "$$\\mathrm{SDF}(\\vec{r}) = \\begin{cases}\n",
+    "  -d(\\vec{r}, \\partial B) & \\mbox{ if } \\vec{r} \\in B \\\\\n",
+    "   d(\\vec{r}, \\partial B)&  \\mbox{ if } \\vec{r} \\notin B\n",
+    "\\end{cases}$$\n",
+    "\n",
+    "\n",
+    "For every point in the grid, the SDF tells the distance to the closest boundary point. The plot below illustrates the SDF."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "data_utils.plot_sdf(x[:,:,:], plot_boundary=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The following functions will create two new input files where the SDF is added as the second channel of the input data. Let it run for a minute."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "data_utils.create_sdf_file('train')\n",
+    "data_utils.create_sdf_file('test')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's load our new dataset."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "batch_size = 8\n",
+    "dataset_size = 1500   # Number of elements in the train.tfrecords\n",
+    "validation_size = 256 # Number of elements to use for validation\n",
+    "\n",
+    "# derive some quantities\n",
+    "train_size = dataset_size - validation_size\n",
+    "train_batches = int(train_size / batch_size)\n",
+    "validation_batches= int(validation_size / batch_size)\n",
+    "\n",
+    "test_size = 28\n",
+    "test_batches = int(test_size/batch_size)\n",
+    "\n",
+    "def init_sdf_datasets():\n",
+    "    # Set up a dataset\n",
+    "    sdf_dataset = tf.data.TFRecordDataset('data/train_sdf.tfrecords')\n",
+    "    sdf_dataset = sdf_dataset.take(dataset_size)\n",
+    "    # Transform binary data into image arrays\n",
+    "    sdf_dataset = sdf_dataset.map(data_utils.parse_sdf_flow_data) \n",
+    "\n",
+    "    sdf_training_dataset = sdf_dataset.skip(validation_size).shuffle(buffer_size=512)\n",
+    "    sdf_training_dataset = sdf_training_dataset.batch(batch_size, drop_remainder=True)\n",
+    "    sdf_training_dataset = sdf_training_dataset.repeat()\n",
+    "\n",
+    "    sdf_validation_dataset = sdf_dataset.take(validation_size).batch(batch_size, drop_remainder=True)\n",
+    "    sdf_validation_dataset = sdf_validation_dataset.repeat()\n",
+    "\n",
+    "    # Read test dataset\n",
+    "    sdf_test_dataset = tf.data.TFRecordDataset('data/test_sdf.tfrecords')\n",
+    "    sdf_test_dataset = sdf_test_dataset.map(data_utils.parse_sdf_flow_data) # Transform binary data into image arrays\n",
+    "    sdf_test_dataset = sdf_test_dataset.batch(batch_size, drop_remainder = True).repeat()\n",
+    "\n",
+    "    print('Number of batches in train/validation/test dataset:', train_batches, '/', validation_batches, '/', test_batches)\n",
+    "    return sdf_training_dataset,sdf_validation_dataset,sdf_test_dataset"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can visualize the the SDF for the training data : "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "sdf_training_dataset, sdf_validation_dataset, sdf_test_dataset = init_sdf_datasets()\n",
+    "# Create an iterator for reading a batch of input and output data\n",
+    "iterator = iter(sdf_training_dataset)\n",
+    "boundary, vflow = next(iterator)\n",
+    "boundary, vflow = next(iterator)\n",
+    "\n",
+    "print('input shape:', boundary.shape.as_list())\n",
+    "print('output shape:', vflow.shape.as_list())\n",
+    "\n",
+    "plot_idx = 2 # set it between 0 and batch_size\n",
+    "\n",
+    "data_utils.plot_sdf(boundary[plot_idx,:,:,0],boundary[plot_idx,:,:,1])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can notice above that now our input data has 2 Channel holding boundary data and signed distance function respectively. \n",
+    "\n",
+    "We will use the boundary condition as a mask to remove flow lines in those areas.\n",
+    "So, let's modify the the convolutional network model to use the new SDF input feature"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def conv_deconv_sdf(input):\n",
+    "    # Combine the convolution / deconvolution steps\n",
+    "    boundary = layers.Lambda(lambda x : x[:,:,:,0:1], name=\"boundary_slice\")(input)\n",
+    "    sdf = layers.Lambda(lambda x : x[:,:,:,1:2], name=\"sdf_slice\")(input)\n",
+    "    \n",
+    "    \n",
+    "    ### Calculate the encoding using the SDF\n",
+    "    x = conv(sdf)\n",
+    "    ###\n",
+    "    \n",
+    "    vx = deconv(x, \"vx\")\n",
+    "    \n",
+    "    # create a mask to zero out flow at (and inside) the boundary \n",
+    "    vx = layers.Lambda(lambda x: x[0]*(1-x[1]), name='mask_vx')([vx, boundary])\n",
+    "    \n",
+    "    vy = deconv(x, \"vy\")\n",
+    "    vy = layers.Lambda(lambda x: x[0]*(1-x[1]), name='mask_vy')([vy, boundary])\n",
+    "    \n",
+    "    output = layers.concatenate([vx, vy], axis=3)\n",
+    "    \n",
+    "    return output"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "K.clear_session()\n",
+    "\n",
+    "# we need to re-init the dacaset because of Clearing our session\n",
+    "sdf_training_dataset, sdf_validation_dataset, sdf_test_dataset = init_sdf_datasets()\n",
+    "\n",
+    "# Define Input Outputs and Train the Model\n",
+    "input = tf.keras.Input(shape=(128,256,2), name=\"boundary\")\n",
+    "output = conv_deconv_sdf(input)\n",
+    "conv_sdf_model = tf.keras.Model(inputs = input, outputs=output)\n",
+    "conv_sdf_model.compile(tf.keras.optimizers.Adam(0.0001), loss=loss_image)\n",
+    "conv_sdf_model.summary()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#But Training our model from scratch will take a long time\n",
+    "#So we will load a partially trained model to speedup the process \n",
+    "K.clear_session()\n",
+    "conv_sdf_model = tf.keras.models.load_model(\"conv_sdf_model.h5\",custom_objects={'loss_image': loss_image})\n",
+    "\n",
+    "history = conv_sdf_model.fit(sdf_training_dataset, epochs=5, steps_per_epoch=train_batches,\n",
+    "          validation_data=sdf_validation_dataset, validation_steps=validation_batches)\n",
+    "\n",
+    "# Plot Training data\n",
+    "data_utils.plot_keras_loss(history)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Test"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "test_loss = conv_sdf_model.evaluate(sdf_test_dataset, steps=3)\n",
+    "print('The loss over the test dataset', test_loss)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "x, y = data_utils.load_test_data(1) # You can try different values between 1 and 28\n",
+    "sdf = np.reshape(data_utils.calc_sdf(x[0,:,:,0]),(1,x.shape[1], x.shape[2],1))\n",
+    "x = np.concatenate((x, sdf), axis=3)\n",
+    "y_hat = conv_sdf_model.predict(x)\n",
+    "data_utils.plot_test_result(x[:,:,:,0:1], y, y_hat)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We find the Signed distance function performed better than the Boundary defined input. Let us understand why this is the case:\n",
+    "\n",
+    "From the Research paper : \n",
+    "\n",
+    "```\n",
+    "Geometry can be represented in multiple ways, such as boundaries and geometric parameters. However, those               representations are not effective for neural networks since the vectors' semantic meaning varies.\n",
+    "```\n",
+    "\n",
+    "```\n",
+    "The values of SDF on the sampled Cartesian grid not only provide local geometry details but also contain additional     information of the global geometry structure.\n",
+    "```\n",
+    "\n",
+    "To put the above in simple words, when our Convolution neural networks learn, we have seen that it also convolutes over an area of the set kernel size where it takes the considerations of the neighbouring pixels and not just a single pixel, this makes the signed distance function a rightful choice as it assigns values to all the pixels in the input image.\n",
+    "\n",
+    "In the upcoming notebook, let us introduce some advance networks and train them.\n",
+    "\n",
+    "## Important:\n",
+    "<mark>Shutdown the kernel before clicking on “Next Notebook” to free up the GPU memory.</mark>\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "[Previous Notebook](Part2.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[1](Start_Here.ipynb)\n",
+    "[2](Part2.ipynb)\n",
+    "[3]\n",
+    "[4](Part4.ipynb)\n",
+    "[5](Competition.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](Part4.ipynb)\n",
+    "\n",
+    "\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&ensp;\n",
+    "[Home Page](../Start_Here.ipynb)\n"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

+ 647 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Part4.ipynb

@@ -0,0 +1,647 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&ensp;\n",
+    "[Home Page](../Start_Here.ipynb)\n",
+    "\n",
+    "\n",
+    "[Previous Notebook](Part3.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[1](Start_Here.ipynb)\n",
+    "[2](Part2.ipynb)\n",
+    "[3](Part3.ipynb)\n",
+    "[4]\n",
+    "[5](Competition.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](Competition.ipynb)\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Steady State Flow using Neural Networks - Part 3\n",
+    "\n",
+    "**Contents of the this notebook:**\n",
+    "\n",
+    "- [Advanced Networks](#Advanced-Networks)\n",
+    "    - [Model (Non-Gated Residual Block)](#Model-(Non-Gated-Residual-Block))\n",
+    "    - [Model (Gated Residual Block)](#Model-(Gated-Residual-Block))\n",
+    "    \n",
+    "**By the end of this notebook you will:**\n",
+    "\n",
+    "- Understand slightly advanced networks\n",
+    "- Understanding Gatedness of a Residual Block."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let us import libraries, dataset and define the _Loss Function_"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Import Necessary Libraries\n",
+    "from __future__ import print_function\n",
+    "\n",
+    "import sys\n",
+    "sys.path.append('/workspace/python/source_code')\n",
+    "\n",
+    "import numpy as np \n",
+    "import time\n",
+    "import importlib\n",
+    "import os\n",
+    "import math\n",
+    "import matplotlib.pyplot as plt\n",
+    "from matplotlib.pyplot import imshow\n",
+    "\n",
+    "# TensorFlow and tf.keras\n",
+    "import tensorflow as tf\n",
+    "from tensorflow.keras.models import *\n",
+    "from tensorflow.keras.layers import *\n",
+    "from tensorflow.keras.optimizers import *\n",
+    "from tensorflow.keras.activations import *\n",
+    "from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler\n",
+    "from tensorflow.keras import backend as keras\n",
+    "from tensorflow.keras.preprocessing import image\n",
+    "from tensorflow.keras.applications.imagenet_utils import preprocess_input\n",
+    "from tensorflow.keras.initializers import glorot_uniform\n",
+    "import tensorflow.keras.backend as K\n",
+    "\n",
+    "# Custom Utlities\n",
+    "import model.flow_architecture as flow_architecture\n",
+    "import utils.data_utils as data_utils\n",
+    "\n",
+    "import os\n",
+    "os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\n",
+    "# reload(data_utils) # you need to execute this in case you modify the plotting scripts in data_utils"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "batch_size = 8\n",
+    "dataset_size = 2000   # Number of elements in the train.tfrecords\n",
+    "validation_size = 256 # Number of elements to use for validation\n",
+    "\n",
+    "# derive some quantities\n",
+    "train_size = dataset_size - validation_size\n",
+    "train_batches = int(train_size / batch_size)\n",
+    "validation_batches= int(validation_size / batch_size)\n",
+    "\n",
+    "test_size = 28\n",
+    "test_batches = int(test_size/batch_size)\n",
+    "print('Number of batches in train/validation/test dataset:', train_batches, '/', validation_batches, '/', test_batches)\n",
+    "\n",
+    "def init_datasets():\n",
+    "    dataset = tf.data.TFRecordDataset('data/train.tfrecords')\n",
+    "    dataset = dataset.take(dataset_size)\n",
+    "    # Transform binary data into image arrays\n",
+    "    dataset = dataset.map(data_utils.parse_flow_data)\n",
+    "    \n",
+    "    training_dataset = dataset.skip(validation_size).shuffle(buffer_size=512)\n",
+    "    training_dataset = training_dataset.batch(batch_size, drop_remainder=True)\n",
+    "\n",
+    "    validation_dataset = dataset.take(validation_size).batch(batch_size, drop_remainder=True)\n",
+    "    \n",
+    "    # Read test dataset\n",
+    "    test_dataset = tf.data.TFRecordDataset('data/test.tfrecords')\n",
+    "    test_dataset = test_dataset.map(data_utils.parse_flow_data) # Transform binary data into image arrays\n",
+    "    test_dataset = test_dataset.batch(batch_size, drop_remainder = True)\n",
+    " \n",
+    "    return training_dataset, validation_dataset, test_dataset\n",
+    "\n",
+    "training_dataset, validation_dataset, test_dataset = init_datasets()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def loss_image(vflow_hat, vflow):\n",
+    "    ''' Defines the loss for the predicted flow.\n",
+    "    \n",
+    "    Arguments:\n",
+    "    vflow_hat -- predicted flow, shape (?, nh, nw, 2)\n",
+    "    vflow   -- target flow from the simulation, shape (?, nh, nw, 2)\n",
+    "    \n",
+    "    Returns: the L2 loss\n",
+    "    '''\n",
+    "    ### Define the square error loss (~ 1 line of code)\n",
+    "    loss = tf.nn.l2_loss(vflow_hat - vflow)\n",
+    "    ###\n",
+    "                         \n",
+    "    # Add a scalar to tensorboard\n",
+    "    tf.summary.scalar('loss', loss)\n",
+    "    \n",
+    "    return loss"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "##  Advanced network\n",
+    "In this section we will use the model by [O. Hennigh](https://arxiv.org/abs/1710.10352)\n",
+    "###  Model ( Non- Gated Residual Block ) \n",
+    "The network architecture in inspired by the [U-net](https://arxiv.org/abs/1505.04597), additionally it uses gated residual blocks.\n",
+    "<img src=\"images/ssf_unet.png\">\n",
+    "\n",
+    "Kindly refer here to know more about [Residual Networks and Residual Blocks](../Intro_to_DL/Resnet.ipynb)\n",
+    "\n",
+    "\n",
+    "Let's start building the architecture step-by-step : \n",
+    "\n",
+    "We will need to build the flow of our network as follows : \n",
+    "\n",
+    "<img src=\"images/U-net_flow_nofunc.png\">\n",
+    "\n",
+    "\n",
+    "Now, if we try to model this U-Network into one cell of the notebook, it will become extremely long and hard to add/remove the depth of the layers. So, let us see if we can break this into modular functions. \n",
+    "\n",
+    "\n",
+    "<img src=\"images/U-net_flow_func.png\">\n",
+    "\n",
+    "\n",
+    "We will now start building each block one by one : \n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Let us now Define our Activation Function : \n",
+    "\n",
+    "We will be using a custom _Activation Function_ for this, we will be using concatenated ELU : \n",
+    "\n",
+    "<img src=\"images/elu.png\">\n",
+    "\n",
+    "*Concatenated ELU* : We take both the +ve and -ve values and apply ELU Activation Functions on it.\n",
+    "\n",
+    "**Why we need to create a custom activation function for our model?** The answer is because we want to preserves both positive and negative phase information while enforcing non-saturated non-linearity.\n",
+    "\n",
+    "Let's define the helper function to switch between different activation functions after which we will build the Residual Block."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def concat_elu(x):\n",
+    "    \"\"\" like concatenated ReLU (http://arxiv.org/abs/1603.05201), but then with ELU \"\"\"\n",
+    "    axis = len(x.get_shape())-1    # Dimensions of the data subtracted by 1 \n",
+    "    return elu(concatenate([x, -x], axis=axis)) # Concatenated x and -x of the Data and Apply ELU on it "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Helper Function to Set Non-linearity\n",
+    "def set_nonlinearity(name):\n",
+    "    if name == 'concat_elu':\n",
+    "        return concat_elu\n",
+    "    elif name == 'elu':\n",
+    "        return tf.nn.elu\n",
+    "    elif name == 'concat_relu':\n",
+    "        return tf.nn.crelu\n",
+    "    elif name == 'relu':\n",
+    "        return tf.nn.relu\n",
+    "    else:\n",
+    "        raise('nonlinearity ' + name + ' is not supported')"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def res_block(x, a=None, filter_size=16, nonlinearity=concat_elu, rate=0, stride=1, gated=False, name=\"resnet\"):\n",
+    "    \"\"\" Residual block of 3x3 convolutions \"\"\"\n",
+    "    # Copy of our Input\n",
+    "    orig_x = x\n",
+    "    # Get Shape of Input data\n",
+    "    orig_x_int_shape = flow_architecture.int_shape(x)\n",
+    "    \n",
+    "    #### First Convolution Layer\n",
+    "    # If Input has one channel Data ( i.e., Input is our Input Image )\n",
+    "    if orig_x_int_shape[3] == 1:\n",
+    "        x_1 = flow_architecture.conv_layer(x, 3, stride, filter_size, name + '_conv_1')\n",
+    "    else:\n",
+    "        x_1 = flow_architecture.conv_layer(nonlinearity(x), 3, stride, filter_size, name + '_conv_1')\n",
+    "    \n",
+    "    # a is fed during the Up-sampling part of the Network ( Refer Upsampling block )\n",
+    "    if a is not None:\n",
+    "        shape_a = flow_architecture.int_shape(a)\n",
+    "        shape_x_1 = flow_architecture.int_shape(x_1)\n",
+    "        paddings = [[0,0],[0, shape_x_1[1]-shape_a[1]], [0, shape_x_1[2]-shape_a[2]], [0, 0]]\n",
+    "        a = tf.pad(a, paddings)\n",
+    "        x_1 = x_1 + flow_architecture.nin(nonlinearity(a), filter_size, name + '_nin')\n",
+    "    # Add Activation Function and Dropout\n",
+    "    x_1 = nonlinearity(x_1)\n",
+    "    x_1 = Dropout(rate=rate)(x_1)\n",
+    "    \n",
+    "    #### Second Convolution Layer \n",
+    "    # Implemented Gated Residual Blocks \n",
+    "    if not gated:\n",
+    "        x_2 = flow_architecture.conv_layer(x_1, 3, 1, filter_size, name + '_conv_2')\n",
+    "    else:\n",
+    "        x_2 = flow_architecture.conv_layer(x_1, 3, 1, filter_size*2, name + '_conv_2')\n",
+    "        x_2_1, x_2_2 = tf.split(axis=3,num_or_size_splits=2,value=x_2)\n",
+    "        x_2 = x_2_1 * tf.nn.sigmoid(x_2_2)\n",
+    "    \n",
+    "    # During Down-sampling Apply Pooling layer for the Input to Match the Outout\n",
+    "    if int(orig_x.get_shape()[2]) > int(x_2.get_shape()[2]):\n",
+    "        orig_x = tf.nn.avg_pool(orig_x, [1,2,2,1], [1,2,2,1], padding='SAME')\n",
+    "\n",
+    "    # Pad Input Data\n",
+    "    out_filter = filter_size\n",
+    "    in_filter = int(orig_x.get_shape()[3])\n",
+    "    if out_filter != in_filter:\n",
+    "        orig_x = tf.pad( orig_x, [[0, 0], [0, 0], [0, 0], [(out_filter-in_filter), 0]])\n",
+    "    # Output Input Data + Output of Convolution Layer ( Why ? Because this is a Residual Block )\n",
+    "    return orig_x + x_2"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Let's now build the Downsampling Function and Upsampling Functions "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def downsampling_res_blocks(x, nr_res_blocks, filter_size, nonlinearity, rate, gated,name_prefix, downsample):\n",
+    "    ''' An optional downsampling step followed by one (or more) residual blocks '''\n",
+    "    \n",
+    "    # Set Parameters and Call our Residual Block Function \n",
+    "    if downsample:\n",
+    "        x = res_block(x, filter_size=filter_size, nonlinearity=nonlinearity, rate=rate, stride=2,gated=gated, name=name_prefix + \"downsample\")\n",
+    "    for i in range(nr_res_blocks):\n",
+    "        x = res_block(x, filter_size=filter_size, nonlinearity=nonlinearity, stride=1,rate=rate, gated=gated, name=name_prefix + str(i))      \n",
+    "    return x    \n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def upsumpling_res_blocks(x, nr_res_blocks, filter_size, nonlinearity, rate, gated, \n",
+    "                        name_prefix, a):\n",
+    "    ''' Upsampling followed by a residual block '''\n",
+    "    # Set Parameters and Call our Residual Block Functions\n",
+    "    x = flow_architecture.transpose_conv_layer(x, 3, 2, filter_size, name_prefix)\n",
+    "    for i in range(nr_res_blocks):\n",
+    "        if i == 0:\n",
+    "            x = res_block(x, a, filter_size=filter_size, nonlinearity=nonlinearity, rate=rate, gated=gated, name=name_prefix + str(i))\n",
+    "        else:\n",
+    "            x = res_block(x, filter_size=filter_size, nonlinearity=nonlinearity, rate=rate, gated=gated, name=name_prefix + str(i))\n",
+    "\n",
+    "    return x"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Let's now build the functions where we set our parameters "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def conv_res(inputs, nr_res_blocks=1, rate=1.0, nonlinearity_name='concat_elu', gated=True):\n",
+    "    \"\"\"Builds conv part of net.\n",
+    "    Args:\n",
+    "      inputs: input images\n",
+    "      rate: dropout layer\n",
+    "    \"\"\"\n",
+    "    # Set Non-linearity\n",
+    "    nonlinearity = set_nonlinearity(nonlinearity_name)\n",
+    "    filter_size = 8\n",
+    "    # Store for Concatenation of the Downsampling output with the Upsampling blocks\n",
+    "    a = []\n",
+    "    \n",
+    "    # First Downsampling Residual Block to Convert Input Image from ( 128  ,256 ,1 ) to ( 128 , 256 , 8)\n",
+    "    \n",
+    "    x = downsampling_res_blocks(inputs, nr_res_blocks, filter_size, nonlinearity, rate, gated, \"resnet_1_\", False)\n",
+    "    \n",
+    "    # Loop Through Downsampling Blocks \n",
+    "    for i in range(2,6):\n",
+    "        a.append(x)\n",
+    "        filter_size = 2 * filter_size\n",
+    "        name_prefix = \"resnet\" + str(i) + \"_\"\n",
+    "        x = downsampling_res_blocks(x, nr_res_blocks, filter_size, nonlinearity, rate, gated, name_prefix, True)\n",
+    "\n",
+    "    # Loop Through Up-sampling Blocks \n",
+    "    for i in range(1,5):\n",
+    "        filter_size = int(filter_size /2)\n",
+    "        name_prefix = \"up_conv_\" + str(i)\n",
+    "        x = upsumpling_res_blocks(x, nr_res_blocks, filter_size, nonlinearity, rate, gated, name_prefix, a[-i])\n",
+    "    \n",
+    "    # Last Convolution Layer with Activation \n",
+    "    x = flow_architecture.conv_layer(x, 3, 1, 2, \"last_conv\")\n",
+    "    x = tf.nn.tanh(x)\n",
+    "\n",
+    "    return x"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def model(boundary, rate,gated=True):\n",
+    "    return conv_res(boundary, nr_res_blocks=1, rate=rate, nonlinearity_name='concat_elu', gated=gated)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "scrolled": true
+   },
+   "source": [
+    "### Let's now define some parameters and compile our model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Define Dropout Rate and Set Gated = False\n",
+    "rate = 0.3\n",
+    "gated = False\n",
+    "\n",
+    "# Compile the Model\n",
+    "input = tf.keras.Input(shape=(128,256,1), name=\"boundary\")\n",
+    "output = model(input,rate=rate,gated=gated)\n",
+    "unet = tf.keras.Model(inputs = input, outputs=output)\n",
+    "unet.compile(tf.keras.optimizers.Adam(0.0001), loss=loss_image)\n",
+    "unet.summary()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let's train our Model for 15 Epochs\n",
+    "results = unet.fit(training_dataset, epochs=15)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let us Plot the train History\n",
+    "data_utils.plot_keras_loss(results)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Let's Test our Model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "test_loss = unet.evaluate(test_dataset, steps=3)\n",
+    "print('The loss over the test dataset', test_loss)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "x, vxy = data_utils.load_test_data(1) # you can try different numbers between 1 and 28\n",
+    "vxy_hat = unet.predict(x)\n",
+    "data_utils.plot_test_result(x, vxy, vxy_hat)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Before we go ahead to train further models, let us understand why this network performed better : \n",
+    "\n",
+    "U-networks were first used in Biomedical Image Segmentations. The research papers suggest the following : \n",
+    "\n",
+    "```\n",
+    "However, in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks\n",
+    "```\n",
+    "\n",
+    "This is very similar to our Problem because : \n",
+    "\n",
+    "- We have a limited dataset as creating an extensive database is computationally expensive in the case of Fluid Dynamics.\n",
+    "- Just like a class label needs to be assigned for the Biomedical Applications, we need numerical values to be assigned with every pixel to predict the flow around the objects."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Gated Residual Blocks \n",
+    "\n",
+    "For using Gated residual blocks, we will have to change the variable `gated = False` to `gated = True.` \n",
+    "\n",
+    "**But What is Residual Gates ??**\n",
+    "\n",
+    "Residual Gates leverages the idea of shortcut connections but with a simple weighted linear combination between the original layer’s output and input. This improves the learning capacity of the Residual Blocks.\n",
+    "\n",
+    "How do we implement it? \n",
+    "\n",
+    "```\n",
+    "x_2 = flow_architecture.conv_layer(x_1, 3, 1, filter_size*2, name + '_conv_2')  # Convolution Operation \n",
+    "x_2_1, x_2_2 = tf.split(axis=3,num_or_size_splits=2,value=x_2)                  # Splitting Layers\n",
+    "x_2 = x_2_1 * tf.nn.sigmoid(x_2_2)                                              # Applying Sigmoid Activation as Weights\n",
+    "```"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "K.clear_session()\n",
+    "\n",
+    "# Define Dropout Rate and Set Gated = True \n",
+    "rate = 0.3\n",
+    "gated = True\n",
+    "\n",
+    "# we need to re-init the dacaset because of Clearing our session\n",
+    "training_dataset, validation_dataset, test_dataset = init_datasets()\n",
+    "\n",
+    "input = tf.keras.Input(shape=(128,256,1), name=\"boundary\")\n",
+    "output = model(input,rate=rate,gated=gated)\n",
+    "unet = tf.keras.Model(inputs = input, outputs=output)\n",
+    "unet.compile(tf.keras.optimizers.Adam(0.0001), loss=loss_image)\n",
+    "unet.summary()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let's train our Model for 20 Epochs\n",
+    "results = unet.fit(training_dataset, epochs=15)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let us Plot the train History\n",
+    "data_utils.plot_keras_loss(results)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Let's Test our Model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "test_loss = unet.evaluate(test_dataset, steps=3)\n",
+    "print('The loss over the test dataset', test_loss)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": false
+   },
+   "outputs": [],
+   "source": [
+    "x, vxy = data_utils.load_test_data(1) # you can try different numbers between 1 and 28\n",
+    "vxy_hat = unet.predict(x)\n",
+    "data_utils.plot_test_result(x, vxy, vxy_hat)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Licensing\n",
+    "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[Previous Notebook](Part3.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[1](Start_Here.ipynb)\n",
+    "[2](Part2.ipynb)\n",
+    "[3](Part3.ipynb)\n",
+    "[4]\n",
+    "[5](Competition.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](Competition.ipynb)\n",
+    "\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&ensp;\n",
+    "[Home Page](../Start_Here.ipynb)\n"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

File diff suppressed because it is too large
+ 1233 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Solution.ipynb


+ 114 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Start_Here.ipynb

@@ -0,0 +1,114 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;\n",
+    "[Home Page](../Start_Here.ipynb)\n",
+    "\n",
+    "\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;\n",
+    "[1]\n",
+    "[2](Part2.ipynb)\n",
+    "[3](Part3.ipynb)\n",
+    "[4](Part4.ipynb)\n",
+    "[5](Competition.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](Part2.ipynb)\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Steady State Flow with Neural Networks\n",
+    "\n",
+    "*In aerodynamics related design, analysis and optimization problems, flow fields are simulated using computational fluid\n",
+    "dynamics (CFD) solvers. However, CFD simulation is usually a computationally expensive, memory demanding and\n",
+    "time consuming iterative process. These drawbacks of CFD limit opportunities for design space exploration and forbid\n",
+    "interactive design. We will use a general and flexible approximation model for real-time prediction of non-uniform\n",
+    "steady laminar flow in a 2D domain based on neural networks.*\n",
+    "\n",
+    "\n",
+    "*This notebook contains a implementation of the paper [Convolutional Neural Networks for Steady Flow Approximation](https://www.autodeskresearch.com/publications/convolutional-neural-networks-steady-flow-approximation). The premise is to learn a mapping from boundary conditions to steady state fluid flow.  It is based on the implementation of [O. Hennigh](https://github.com/loliverhennigh/Steady-State-Flow-With-Neural-Nets)*\n",
+    "\n",
+    "## Problem description\n",
+    "Our aim is to predict 2D flow around objects. The input is the boundary around which we want to calculate the flow. \n",
+    "Here is an example of input data and the corresponding flow that was calculated using the Lattice Boltzmann method ([Mechsys](http://mechsys.nongnu.org/)).\n",
+    "\n",
+    "<img src=\"images/flow_example.png\">\n",
+    "\n",
+    "We will implement neural networks to predict the steady state flow. We will start with a simple fully connected model, then implement the convolutional model. Finally, we will implement an U-network model based on [this](https://arxiv.org/abs/1710.10352) paper.\n",
+    "\n",
+    "## Licensing\n",
+    "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;\n",
+    "[1]\n",
+    "[2](Part2.ipynb)\n",
+    "[3](Part3.ipynb)\n",
+    "[4](Part4.ipynb)\n",
+    "[5](Competition.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](Part2.ipynb)\n",
+    "\n",
+    "\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;\n",
+    "[Home Page](../Start_Here.ipynb)"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/U-net_flow_func.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/U-net_flow_nofunc.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/convnet.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/elu.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/flow_example.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/ml_pipeline.jpeg


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/images/ssf_unet.png


File diff suppressed because it is too large
+ 570 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/CNN's.ipynb


+ 427 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/Part_2.ipynb

@@ -0,0 +1,427 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[Home Page](../Start_Here.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](CNN's.ipynb)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# CNN Primer and Keras 101"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "In this notebook, participants will be introduced to CNN, implement it using Keras. For an absolute beginner this notebook would serve as a good starting point.\n",
+    "\n",
+    "**Contents of the this notebook:**\n",
+    "\n",
+    "- [How a Deep Learning project is planned ?](#Machine-Learning-Pipeline)\n",
+    "- [Wrapping things up with an example ( Classification )](#Wrapping-Things-up-with-an-Example)\n",
+    "     - [Fully Connected Networks](#Image-Classification-on-types-of-Clothes)\n",
+    "\n",
+    "\n",
+    "**By the end of this notebook participant will:**\n",
+    "\n",
+    "- Understand the Machine Learning Pipeline\n",
+    "- Write a Deep Learning Classifier and train it.\n",
+    "\n",
+    "**We will be building a _Multi-class Classifier_ to classify images of clothing to their respective classes**"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Machine Learning Pipeline\n",
+    "\n",
+    "During the bootcamp we will be making use of the following buckets to help us understand how a Machine Learning project should be planned and executed: \n",
+    "\n",
+    "1. **Data**: To start any ML project we need data which is pre-processed and can be fed into the network.\n",
+    "2. **Task**: There are many tasks present in ML, we need to make sure we understand and define the problem statement accurately.\n",
+    "3. **Model**: We need to build our model, which is neither too deep and taking a lot of computational power or too small that it could not learn the important features.\n",
+    "4. **Loss**: Out of the many _loss functions_ present, we need to carefully choose a _loss function_ which is suitable for the task we are about to carry out.\n",
+    "5. **Learning**: As we mentioned in our last notebook, there are a variety of _optimisers_ each with their advantages and disadvantages. So here we choose an _optimiser_ which is suitable for our task and train our model using the set hyperparameters.\n",
+    "6. **Evaluation**: This is a crucial step in the process to determine if our model has learnt the features properly by analysing how it performs when unseen data is given to it. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "**Here we will be building a _Multi-class Classifier_ to classify images of clothing to their respective classes.**\n",
+    "\n",
+    "We will follow the above discussed pipeline to complete the example.\n",
+    "\n",
+    "## Image Classification on types of clothes  \n",
+    "\n",
+    "####  Step -1 : Data \n",
+    "\n",
+    "We will be using the **F-MNIST ( Fashion MNIST )** dataset, which is a very popular dataset. This dataset contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels).\n",
+    "\n",
+    "<img src=\"images/fashion-mnist.png\" alt=\"Fashion MNIST sprite\"  width=\"600\">\n",
+    "\n",
+    "*Source: https://www.tensorflow.org/tutorials/keras/classification*"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Import Necessary Libraries\n",
+    "\n",
+    "from __future__ import absolute_import, division, print_function, unicode_literals\n",
+    "\n",
+    "# TensorFlow and tf.keras\n",
+    "import tensorflow as tf\n",
+    "from tensorflow import keras\n",
+    "\n",
+    "# Helper libraries\n",
+    "import numpy as np\n",
+    "import matplotlib.pyplot as plt\n",
+    "\n",
+    "print(tf.__version__)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "# Let's Import the Dataset\n",
+    "fashion_mnist = keras.datasets.fashion_mnist\n",
+    "\n",
+    "(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Loading the dataset returns four NumPy arrays:\n",
+    "\n",
+    "* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.\n",
+    "* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.\n",
+    "\n",
+    "The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:\n",
+    "\n",
+    "<table>\n",
+    "  <tr>\n",
+    "    <th>Label</th>\n",
+    "    <th>Class</th>\n",
+    "  </tr>\n",
+    "  <tr>\n",
+    "    <td>0</td>\n",
+    "    <td>T-shirt/top</td>\n",
+    "  </tr>\n",
+    "  <tr>\n",
+    "    <td>1</td>\n",
+    "    <td>Trouser</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>2</td>\n",
+    "    <td>Pullover</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>3</td>\n",
+    "    <td>Dress</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>4</td>\n",
+    "    <td>Coat</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>5</td>\n",
+    "    <td>Sandal</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>6</td>\n",
+    "    <td>Shirt</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>7</td>\n",
+    "    <td>Sneaker</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>8</td>\n",
+    "    <td>Bag</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>9</td>\n",
+    "    <td>Ankle boot</td>\n",
+    "  </tr>\n",
+    "</table>\n",
+    "\n",
+    "Each image is mapped to a single label. Since the *class names* are not included with the dataset, let us store them in an array so that we can use them later when plotting the images:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n",
+    "               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Understanding the Data"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#Print Array Size of Training Set \n",
+    "print(\"Size of Training Images :\"+str(train_images.shape))\n",
+    "#Print Array Size of Label\n",
+    "print(\"Size of Training Labels :\"+str(train_labels.shape))\n",
+    "\n",
+    "#Print Array Size of Test Set \n",
+    "print(\"Size of Test Images :\"+str(test_images.shape))\n",
+    "#Print Array Size of Label\n",
+    "print(\"Size of Test Labels :\"+str(test_labels.shape))\n",
+    "\n",
+    "#Let's See how our Outputs Look like \n",
+    "print(\"Training Set Labels :\"+str(train_labels))\n",
+    "#Data in the Test Set\n",
+    "print(\"Test Set Labels :\"+str(test_labels))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Data Pre-processing\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "plt.figure()\n",
+    "plt.imshow(train_images[0])\n",
+    "plt.colorbar()\n",
+    "plt.grid(False)\n",
+    "plt.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The image pixel values range from 0 to 255. Let us now normalise the data range from 0 - 255 to 0 - 1 in both the *Train* and *Test* set. This Normalisation of pixels helps us by optimizing the process where the gradients are computed."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "train_images = train_images / 255.0\n",
+    "test_images = test_images / 255.0"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let's Print to Veryify if the Data is of the correct format.\n",
+    "plt.figure(figsize=(10,10))\n",
+    "for i in range(25):\n",
+    "    plt.subplot(5,5,i+1)\n",
+    "    plt.xticks([])\n",
+    "    plt.yticks([])\n",
+    "    plt.grid(False)\n",
+    "    plt.imshow(train_images[i], cmap=plt.cm.binary)\n",
+    "    plt.xlabel(class_names[train_labels[i]])\n",
+    "plt.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Defining our Model\n",
+    "\n",
+    "Our Model has three layers :\n",
+    "\n",
+    "- 784 Input features ( 28 * 28 ) \n",
+    "- 128 nodes in hidden layer (Feel free to experiment with the value)\n",
+    "- 10 output nodes to denote the Class\n",
+    "\n",
+    "Implementing the same in Keras ( Machine Learning framework built on top of Tensorflow, Theano, etc..) \n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from tensorflow.keras import backend as K\n",
+    "K.clear_session()\n",
+    "model = keras.Sequential([\n",
+    "    keras.layers.Flatten(input_shape=(28, 28)),\n",
+    "    keras.layers.Dense(128, activation='relu'),\n",
+    "    keras.layers.Dense(10, activation='softmax')\n",
+    "])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.\n",
+    "\n",
+    "After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely connected, or fully connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer is a 10-node *softmax* layer that returns an array of 10 probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.\n",
+    "\n",
+    "### Compile the model\n",
+    "\n",
+    "Before the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:\n",
+    "\n",
+    "* *Loss function* —This measures how accurate the model is during training. You want to minimize this function to \"steer\" the model in the right direction.\n",
+    "* *Optimizer* —This is how the model is updated based on the data it sees and its loss function.\n",
+    "* *Metrics* —Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "model.compile(optimizer='adam',\n",
+    "              loss='sparse_categorical_crossentropy',\n",
+    "              metrics=['accuracy'])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Train the model\n",
+    "\n",
+    "Training the neural network model requires the following steps:\n",
+    "\n",
+    "1. Feed the training data to the model. In this example, the training data is in the `train_images` and `train_labels` arrays.\n",
+    "2. The model learns to associate images and labels.\n",
+    "3. You ask the model to make predictions about a test set—in this example, the `test_images` array. Verify that the predictions match the labels from the `test_labels` array.\n",
+    "\n",
+    "To start training,  call the `model.fit` method—so called because it \"fits\" the model to the training data:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "model.fit(train_images, train_labels ,epochs=5)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Evaluate accuracy\n",
+    "\n",
+    "Next, compare how the model performs on the test dataset:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#Evaluating the Model using the Test Set\n",
+    "\n",
+    "test_loss, test_acc = model.evaluate(test_images,  test_labels, verbose=2)\n",
+    "\n",
+    "print('\\nTest accuracy:', test_acc)"
+   ]
+  },
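+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To illustrate the *softmax* output described earlier, the cell below (a small illustrative sketch, not part of the original tutorial flow) predicts the class probabilities for the first test image and picks the most likely class with `np.argmax`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustrative sketch: inspect the 10 softmax probability scores for one test image\n",
+    "predictions = model.predict(test_images)\n",
+    "print(\"Probability scores :\", predictions[0])\n",
+    "print(\"Predicted class    :\", class_names[np.argmax(predictions[0])])\n",
+    "print(\"True class         :\", class_names[test_labels[0]])"
+   ]
+  },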
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We get an Accuracy of 87% in the Test dataset which is less than the 89% we got during the Training phase, This problem in ML is called as Overfitting, and we have discussed the same in the previous notebook. \n",
+    "\n",
+    "## Exercise\n",
+    "\n",
+    "Try adding more dense layers to the network above and observe change in accuracy.\n",
+    "\n",
+    "## Important:\n",
+    "<mark>Shutdown the kernel before clicking on “Next Notebook” to free up the GPU memory.</mark>\n",
+    "\n",
+    "\n",
+    "## Licensing\n",
+    "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)"
+   ]
+  },
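+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Below is a minimal sketch of one possible solution to the exercise above: the same model with one extra hidden `Dense` layer. The 64-node layer size is only an assumption to experiment with, not a prescribed answer:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Minimal sketch for the exercise: one extra hidden Dense layer\n",
+    "# (the 64-node size is an arbitrary choice -- experiment with it)\n",
+    "from tensorflow.keras import backend as K\n",
+    "K.clear_session()\n",
+    "deeper_model = keras.Sequential([\n",
+    "    keras.layers.Flatten(input_shape=(28, 28)),\n",
+    "    keras.layers.Dense(128, activation='relu'),\n",
+    "    keras.layers.Dense(64, activation='relu'),\n",
+    "    keras.layers.Dense(10, activation='softmax')\n",
+    "])\n",
+    "deeper_model.compile(optimizer='adam',\n",
+    "                     loss='sparse_categorical_crossentropy',\n",
+    "                     metrics=['accuracy'])\n",
+    "deeper_model.fit(train_images, train_labels, epochs=5)\n",
+    "deeper_model.evaluate(test_images, test_labels, verbose=2)"
+   ]
+  },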
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[Home Page](../Start_Here.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](CNN's.ipynb)"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

File diff suppressed because it is too large
+ 631 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/Resnets.ipynb


File diff suppressed because it is too large
+ 526 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/Start_Here.ipynb


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/Activation_Function.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/NVIDIA_Bootcamp.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/activation_fns.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/alexnet.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/ann.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/bp_left.gif


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/bp_right.gif


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/cnn.jpeg


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/conv.gif


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/conv.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/conv_depth.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/convtranspose.gif


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/convtranspose_conv.gif


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/evaluation.jpg


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/evaluation.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/fashion-mnist.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/feature_hierarchy.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/identity.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/location.PNG


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/max_pool.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/ml_pipeline.jpeg


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/model.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/neuron.jpg


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/or_left.gif


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/or_right.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/our_cnn.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/resblock.PNG


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/resnet.PNG


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/stats.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/sup-unsup.png


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/training.gif


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/xnor_left.gif


BIN
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Intro_to_DL/images/xnor_right.png


+ 84 - 0
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/Start_Here.ipynb

@@ -0,0 +1,84 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Welcome to AI for Science Bootcamp\n",
+    "\n",
+    "The objective of this bootcamp is to give an introduction to application of Artificial Intelligence (AI) algorithms in Science ( High Performance Computing(HPC) Simulations ). This bootcamp will introduce participants to fundamentals of AI and how those can be applied to different HPC simulation domains. \n",
+    "\n",
+    "The following contents will be covered during the Bootcamp :\n",
+    "- [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb)\n",
+    "- [Steady State Flow using Neural Networks](CFD/Start_Here.ipynb)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb)\n",
+    "\n",
+    "In this notebook, participants will be introduced to Convolution Neural Network and how to implement one using Keras API. For an absolute beginner to CNN and Keras this notebook would serve as a good starting point.\n",
+    "\n",
+    "**By the end of this notebook you will:**\n",
+    "\n",
+    "- Understand the Machine Learning pipeline\n",
+    "- Understand how a Convolution Neural Network works\n",
+    "- Write your own Deep Learning classifier and train it.\n",
+    "\n",
+    "For in depth understanding of Deep Learning Concepts, visit [NVIDIA Deep Learning Institute](https://www.nvidia.com/en-us/deep-learning-ai/education/)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## [Steady State Flow using Neural Networks](CFD/Start_Here.ipynb)\n",
+    "\n",
+    "In this notebook, participant will be introduced to how Deep Learning can be applied in the field of Fluid Dynamics.\n",
+    "\n",
+    "**Contents of this notebook:**\n",
+    "\n",
+    "- Understanding the problem statement\n",
+    "- Building a Deep Learning Pipeline\n",
+    "    - Understand data and task\n",
+    "    - Discuss various models\n",
+    "    - Define Neural network parameters\n",
+    "- Fully Connected Networks\n",
+    "- Convolutional models\n",
+    "- Advanced networks\n",
+    "\n",
+    "**By the end of the notebook participant will:** \n",
+    "\n",
+    "- Understand the process of applying Deep Learning to Computational Fluid Dynamics\n",
+    "- Understanding how Residual Blocks work.\n",
+    "- Benchmark between different models and how they compare against one another.\n",
+    "\n",
+    "## Licensing\n",
+    "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

+ 53 - 0
hpc_ai/ai_science_cfd/English/python/source_code/dataset.py

@@ -0,0 +1,53 @@
+# Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
+ #
+ # Redistribution and use in source and binary forms, with or without
+ # modification, are permitted provided that the following conditions
+ # are met:
+ #  * Redistributions of source code must retain the above copyright
+ #    notice, this list of conditions and the following disclaimer.
+ #  * Redistributions in binary form must reproduce the above copyright
+ #    notice, this list of conditions and the following disclaimer in the
+ #    documentation and/or other materials provided with the distribution.
+ #  * Neither the name of NVIDIA CORPORATION nor the names of its
+ #    contributors may be used to endorse or promote products derived
+ #    from this software without specific prior written permission.
+ #
+ # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AS IS'' AND ANY
+ # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ # PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ # CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ # EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ # PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ # OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import gdown
+import os
+## CFD TRAIN DATASET
+url = 'https://drive.google.com/uc?id=0BzsbU65NgrSuZDBMOW93OWpsMHM&export=download'
+output = '/workspace/python/jupyter_notebook/CFD/data/train.tfrecords'
+gdown.download(url, output, quiet=False,proxy=None)
+
+## CFD TEST DATASET
+url = 'https://drive.google.com/uc?id=1WSJLK0cOQehixJ6Tf5k0eYDcb4RJ5mXv&export=download'
+output = '/workspace/python/jupyter_notebook/CFD/data/test.tfrecords'
+gdown.download(url, output, quiet=False,proxy=None)
+
+## CFD CONV_SDF MODEL
+url = 'https://drive.google.com/uc?id=1pfR0io1CZKvXArGk-nt2wciUoAN_6Z08&export=download'
+output = '/workspace/python/jupyter_notebook/CFD/conv_sdf_model.h5'
+gdown.download(url, output, quiet=False,proxy=None)
+
+## CFD CONV MODEL
+url = 'https://drive.google.com/uc?id=1rFhqlQnTkzIyZocjAxMffucmS3FDI0_j&export=download'
+output = '/workspace/python/jupyter_notebook/CFD/conv_model.h5'
+gdown.download(url, output, quiet=False,proxy=None)
+
+
+## CFD SAMPLE FLOW DATA (zip archive, extracted after download)
+url = 'https://drive.google.com/uc?id=0BzsbU65NgrSuR2NRRjBRMDVHaDQ&export=download'
+output = '/workspace/python/jupyter_notebook/CFD/data/computed_car_flow.zip'
+gdown.cached_download(url, output, quiet=False,proxy=None,postprocess=gdown.extractall)

+ 133 - 0
hpc_ai/ai_science_cfd/English/python/source_code/model/flow_architecture.py

@@ -0,0 +1,133 @@
+# Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
+# 
+ # Redistribution and use in source and binary forms, with or without
+ # modification, are permitted provided that the following conditions
+ # are met:
+ #  * Redistributions of source code must retain the above copyright
+ #    notice, this list of conditions and the following disclaimer.
+ #  * Redistributions in binary form must reproduce the above copyright
+ #    notice, this list of conditions and the following disclaimer in the
+ #    documentation and/or other materials provided with the distribution.
+ #  * Neither the name of NVIDIA CORPORATION nor the names of its
+ #    contributors may be used to endorse or promote products derived
+ #    from this software without specific prior written permission.
+ #
+ # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AS IS'' AND ANY
+ # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ # PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ # CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ # EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ # PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ # OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+"""functions used to construct different architectures
+"""
+
+# Import Necessary Libraries
+
+from __future__ import absolute_import, division, print_function, unicode_literals
+
+# TensorFlow and tf.keras
+import tensorflow as tf
+from tensorflow import keras
+from tensorflow.keras import layers
+from tensorflow.keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D , Conv2DTranspose
+from tensorflow.keras.models import Model, load_model
+from tensorflow.keras.preprocessing import image
+from tensorflow.keras.applications.imagenet_utils import preprocess_input
+from IPython.display import SVG
+from tensorflow.keras.utils import plot_model
+from tensorflow.keras.initializers import glorot_uniform
+import scipy.misc
+from matplotlib.pyplot import imshow
+
+# Helper libraries
+import numpy as np
+import matplotlib.pyplot as plt
+
+print(tf.__version__)
+
+def int_shape(x):
+  return x.get_shape().as_list()
+
+def concat_elu(x):
+    """ like concatenated ReLU (http://arxiv.org/abs/1603.05201), but then with ELU """
+    axis = len(x.get_shape())-1
+    return tf.nn.elu(tf.concat(values=[x, -x], axis=axis))
+
+def set_nonlinearity(name):
+  if name == 'concat_elu':
+    return concat_elu
+  elif name == 'elu':
+    return tf.nn.elu
+  elif name == 'concat_relu':
+    return tf.nn.crelu
+  elif name == 'relu':
+    return tf.nn.relu
+  else:
+    raise ValueError('nonlinearity ' + name + ' is not supported')
+
+def _activation_summary(x):
+  """Helper to create summaries for activations.
+  Creates a summary that provides a histogram of activations.
+  Creates a summary that measure the sparsity of activations.
+  Args:
+    x: Tensor
+  Returns:
+    nothing
+  """
+  tensor_name = x.op.name
+  tf.summary.histogram(tensor_name + '/activations', x)
+  tf.summary.scalar(tensor_name + '/sparsity', tf.nn.zero_fraction(x))
+
+def _variable(name, shape, initializer):
+  """Helper to create a Variable.
+  Args:
+    name: name of the variable
+    shape: list of ints
+    initializer: initializer for Variable
+  Returns:
+    Variable Tensor
+  """
+  # getting rid of stddev for xavier ## testing this for faster convergence
+  var = tf.Variable(name=name, initial_value=initializer(shape=shape))
+  return var
+
+def conv_layer(inputs, kernel_size, stride, num_features, idx, nonlinearity=None):
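+    # 2D convolution with 'same' padding; an optional nonlinearity is applied to the output (idx is unused here).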
+
+    conv = Conv2D(num_features, kernel_size, strides=(stride, stride), padding='same')(inputs)
+    
+    if nonlinearity is not None:
+      conv = nonlinearity(conv)
+    return conv
+
+def transpose_conv_layer(inputs, kernel_size, stride, num_features, idx, nonlinearity=None):
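+    # Transposed convolution ("deconvolution") used for upsampling; an optional nonlinearity is applied to the output.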
+        
+    conv = Conv2DTranspose(num_features, kernel_size, strides=(stride, stride), padding='same')(inputs)
+    
+    if nonlinearity is not None:
+      conv = nonlinearity(conv)
+
+    return conv
+
+def fc_layer(inputs, hiddens, idx, nonlinearity=None, flat = False):
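+    # Fully connected layer; with flat=True a 4D conv output (N, H, W, C) is first flattened to (N, H*W*C).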
+    input_shape = inputs.get_shape().as_list()
+    if flat:
+        dim = input_shape[1]*input_shape[2]*input_shape[3]
+        inputs_processed = tf.reshape(inputs, [-1,dim])
+    else:
+        dim = input_shape[1]
+        inputs_processed = inputs
+    output_biased = Dense(hiddens, input_dim=dim)(inputs_processed)
+    if nonlinearity is not None:
+      output_biased = nonlinearity(output_biased)
+    return output_biased
+
+def nin(x, num_units, idx):
+    """ a network in network layer (1x1 CONV) """
+    s = int_shape(x)
+    x = tf.keras.layers.Reshape((np.prod(s[1:-1]),s[-1]))(x)
+    x = fc_layer(x, num_units, idx) # fully connected layer
+    return tf.keras.layers.Reshape(tuple(s[1:-1]+[num_units]))(x)

+ 320 - 0
hpc_ai/ai_science_cfd/English/python/source_code/utils/data_utils.py

@@ -0,0 +1,320 @@
+# Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
+ #
+ # Redistribution and use in source and binary forms, with or without
+ # modification, are permitted provided that the following conditions
+ # are met:
+ #  * Redistributions of source code must retain the above copyright
+ #    notice, this list of conditions and the following disclaimer.
+ #  * Redistributions in binary form must reproduce the above copyright
+ #    notice, this list of conditions and the following disclaimer in the
+ #    documentation and/or other materials provided with the distribution.
+ #  * Neither the name of NVIDIA CORPORATION nor the names of its
+ #    contributors may be used to endorse or promote products derived
+ #    from this software without specific prior written permission.
+ #
+ # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AS IS'' AND ANY
+ # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ # PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ # CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ # EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ # PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ # OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ 
+
+import numpy as np
+import h5py
+import matplotlib
+import matplotlib.pyplot as plt
+from mpl_toolkits.mplot3d import Axes3D
+import tensorflow as tf
+import skfmm
+from tqdm import tqdm
+
+
+def eval_input_fn(dataset):
+    dataset = dataset.batch(32)
+    return iter(dataset)
+
+def load_test_data(number):
+    filename = "data/computed_car_flow/sample_{0:d}/fluid_flow_0002.h5".format(number)
+
+    stream_flow = h5py.File(filename, 'r')
+    v = stream_flow['Velocity_0']
+    v = np.array(v).reshape([1,128,256+128,3])[:,:,0:256,0:2]
+    b = np.array(stream_flow["Gamma"]).reshape(1, 128, 256 + 128,1)[:,:,0:256,:]
+    return (b,v)
+
+def plot_keras_loss(history):
+    plt.plot(history.history['loss'])
+    plt.xlabel('Epoch')
+    plt.ylabel('Loss')
+    plt.show()
+
+def plot_boundary(ax, boundary):
+    ax.imshow(np.squeeze(boundary), cmap='Greys')
+
+
+def plot_flow(ax, velocity):
+    velocity = np.squeeze(velocity)
+    Y, X = np.mgrid[0:velocity.shape[0],0:velocity.shape[1]]
+    U = velocity[:,:,0]
+    V = velocity[:,:,1]
+    strm = ax.streamplot(X, Y, U, V, color=U, linewidth=2, cmap='autumn', integration_direction="both")
+    return strm
+
+
+def plot_flow_data(boundary, velocity, single_plot = False, fig = None, ax = None):
+    if single_plot:
+        if not (ax or fig):
+            fig = plt.figure(figsize = (10,4))
+            ax = fig.add_subplot(111)
+        plot_boundary(ax, boundary)
+        strm = plot_flow(ax, velocity)
+        ax.set_title('Input data + simulated flow field')
+    else:
+        fig = plt.figure(figsize = (16,4))
+        ax = fig.add_subplot(121)
+        plot_boundary(ax, boundary)
+        ax.set_title('Input data X')
+
+        ax = fig.add_subplot(122)
+        strm = plot_flow(ax, velocity)
+        ax.set_ylim((128,0)) # reverse the y axes to match the boundary plot
+        ax.set_title('Simulated flow lines Y')
+        ax.set_aspect('equal')
+
+    fig.colorbar(strm.lines)
+    return strm
+
+def plot_test_result(x, y, y_hat):
+    # Display the simulated and the predicted flow field
+    
+    # Display field lines
+    fig = plt.figure(figsize=(16,8))
+    ax = fig.add_subplot(221)
+    plot_boundary(ax, x)
+    strm = plot_flow(ax, y)
+    fig.colorbar(strm.lines)
+    ax.set_title('Simulated flow')
+                
+    ax = fig.add_subplot(222)
+    plot_boundary(ax, x)
+    strm = plot_flow(ax, y_hat)
+    fig.colorbar(strm.lines)
+    ax.set_title('Flow predicted by NN')
+    
+    # Show magnitude of the flow
+    sflow_plot = np.concatenate([y, y_hat, y-y_hat], axis=2) 
+    boundary_concat = np.concatenate(3*[x], axis=2) 
+
+    sflow_plot = np.sqrt(np.square(sflow_plot[:,:,:,0]) + np.square(sflow_plot[:,:,:,1])) # - .05 *boundary_concat[:,:,:,0]
+    ax = fig.add_subplot(2,1,2)
+    im = ax.imshow(np.squeeze(sflow_plot), cmap='hsv', zorder=1)
+    
+    # Adding car shape in black
+    # We create an RGBA image, setting alpha from boundary_concat
+    # This way we can plot the boundary image over the contour plot without the white pixels hiding the contour
+    im2 = np.zeros(boundary_concat.shape[1:3] + (4,))
+    im2[:, :, 3] = np.squeeze(boundary_concat)
+    ax.imshow(im2, cmap='Greys', zorder=2)
+    
+    ax.set_title('Magnitude of the flow: (left) simulated, (middle) Neural Net prediction,'
+                 ' (right) difference')
+    fig.colorbar(im)
+
+
+def calc_sdf(x):
+    """ calculates thes signed distance function for a batch of input data
+    Arguments:
+    x -- nbatch x Heiht x Width [ x 1]
+    """
+    if x.ndim == 2 or x.ndim == 3: # H x W [ x 1 ] optional single channel
+        sdf = skfmm.distance(np.squeeze(0.5-x))
+    elif x.ndim == 4: # batched example Nbatch x H x W x 1
+        sdf = np.zeros(x.shape)
+        for i in range(x.shape[0]):
+            sdf[i,:,:,:] = skfmm.distance(np.squeeze(0.5-x[i,:,:,:]))
+    else:
+        print("Error, invalid array dimension for calc_sdf", x.shape)
+        
+    return sdf 
+
+def plot_sdf(x, sdf = None, plot_boundary=True):
+    x = np.squeeze(x)
+    if sdf is None:
+        sdf = calc_sdf(x)
+    else:
+        sdf = np.squeeze(sdf)
+    fig = plt.figure(figsize=(16,5))
+    ax = plt.subplot(111)
+    Y, X = np.mgrid[0:sdf.shape[0],0:sdf.shape[1]]
+    cs = ax.contourf(X, Y, sdf, 50, cmap=matplotlib.cm.coolwarm, zorder=1)
+    
+    ax.set_ylim((128,0))
+    fig.colorbar(cs, ax=ax)
+    ax.set_aspect('equal')
+
+    if plot_boundary:
+        ax.contour(X,Y, sdf, [0], colors='black', zorder=2, linewidths=2)
+        # We create an RGBA image, setting alpha from x
+        # This way we can plot the boundary image over the contour plot without the white pixels hiding the contour
+
+        #im2 = np.zeros(x.shape + (4,))
+        #im2[:, :, 3] = x
+        #ax.imshow(im2, cmap='Greys', zorder=2)
+
+
+def display_flow(sample_number):
+    b,v = load_test_data(sample_number)
+    plot_flow_data(b,v)
+
+def parse_flow_data(serialized_example):
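+    # Each serialized TFRecord example stores the boundary mask and the simulated
+    # flow field as raw byte strings; decode and reshape them into image tensors.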
+    shape = (128,256)
+    features = {
+      'boundary':tf.io.FixedLenFeature([],tf.string),
+      'sflow':tf.io.FixedLenFeature([],tf.string)
+    }
+    parsed_features = tf.io.parse_single_example(serialized_example, features)
+    boundary = tf.io.decode_raw(parsed_features['boundary'], tf.uint8)
+    sflow = tf.io.decode_raw(parsed_features['sflow'], tf.float32)
+    boundary = tf.reshape(boundary, [shape[0], shape[1], 1])
+    sflow = tf.reshape(sflow, [shape[0], shape[1], 2])
+    boundary = tf.cast(boundary,dtype=tf.float32)
+    sflow = tf.cast(sflow,dtype=tf.float32)
+    return boundary, sflow
+
+def parse_sdf_flow_data(serialized_example):
+    shape = (128,256)
+    features = {
+      'sdf_boundary':tf.io.FixedLenFeature([],tf.string),
+      'sflow':tf.io.FixedLenFeature([],tf.string)
+    }
+    parsed_features = tf.io.parse_single_example(serialized_example, features)
+    boundary = tf.io.decode_raw(parsed_features['sdf_boundary'], tf.float32)
+    sflow = tf.io.decode_raw(parsed_features['sflow'], tf.float32)
+    boundary = tf.reshape(boundary, [shape[0], shape[1], 2])
+    sflow = tf.reshape(sflow, [shape[0], shape[1], 2])
+    boundary = tf.cast(boundary,dtype=tf.float32)
+    sflow = tf.cast(sflow,dtype=tf.float32)
+    return boundary, sflow
+
+
+# helper function
+def _bytes_feature(value):
+    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
+
+
+
+def create_sdf_file(name):
+    # Set up a dataset for the input data
+    dataset = tf.data.TFRecordDataset('data/'+ name + '.tfrecords')
+
+    # Transform binary data into image arrays
+    dataset = dataset.map(parse_flow_data) 
+
+    # Create an iterator for reading a batch of input and output data
+    iterator = iter(dataset)
+
+
+    # create tf writer
+    record_filename = 'data/' + name + '_sdf.tfrecords'
+
+    writer = tf.io.TFRecordWriter(record_filename)
+
+    shape = [128, 256]
+    
+    if name == 'train':
+        num_images = 3000
+    elif name == 'test' :
+        num_images = 28
+    else:
+        print('error, number of images is not known for ', name)
+        num_images = 1000
+   
+    for i in tqdm(range(num_images)):
+        try:
+            # read in images
+            boundary_t, sflow_t = next(iterator)
+            b, s = boundary_t, sflow_t
+ 
+            # calculate signed distance function
+            sdf = np.reshape(calc_sdf(b),(shape[0], shape[1], 1))
+        
+            # keep both the boundary (channel 1) and the SDF (channel 2) as input
+            b_sdf = np.concatenate((b,sdf),axis=2)
+    
+            # process frame for saving
+            boundary = np.float32(b_sdf)
+            boundary = boundary.reshape([1,shape[0]*shape[1]*2])
+            boundary = boundary.tostring()
+            sflow = np.float32(s)
+            sflow = sflow.reshape([1,shape[0]*shape[1]*2])
+            sflow = sflow.tostring()
+
+            # create example and write it
+            example = tf.train.Example(features=tf.train.Features(feature={
+            'sdf_boundary': _bytes_feature(boundary),
+            'sflow': _bytes_feature(sflow)}))
+            writer.write(example.SerializeToString())
+        except (StopIteration, tf.errors.OutOfRangeError):
+            print('Finished writing into', record_filename)
+            print('Read in', i, 'images')
+            break
+
+    writer.close()

+ 53 - 0
hpc_ai/ai_science_cfd/README.MD

@@ -0,0 +1,53 @@
+# openacc-training-materials
+Training materials provided by OpenACC.org. The objective of this lab is to give an introduction to the application of Artificial Intelligence (AI) algorithms in science (High Performance Computing (HPC) simulations). This bootcamp will introduce you to the fundamentals of AI and how they can be applied to CFD (Computational Fluid Dynamics).
+
+## Prerequisites:
+To run this tutorial you will need a machine with an NVIDIA GPU.
+
+- Install the latest [Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker) or [Singularity](https://sylabs.io/docs/).
+
+Make sure Docker or Singularity has been installed with NVIDIA GPU support.
+
+## Creating containers
+To start with, you will have to build a Docker or Singularity container.
+
+### Docker Container
+To build a docker container, run: 
+`sudo docker build --network=host -t <imagename>:<tagnumber> .`
+
+For instance:
+`sudo docker build --network=host -t myimage:1.0 .`
+
+and to run the container, run:
+`sudo docker run --rm -it --gpus=all --network=host -p 8888:8888 myimage:1.0`
+
+The container launches Jupyter Notebook on port 8888 with:
+`jupyter notebook --ip 0.0.0.0 --port 8888 --no-browser --allow-root`
+
+Then, open the jupyter notebook in browser: http://localhost:8888
+Start working on the lab by clicking on the `Start_Here.ipynb` notebook.
+
+### Singularity Container
+
+To build the singularity container, run: 
+`sudo singularity build <image_name>.simg Singularity`
+
+and copy the files to your local machine to make sure changes are stored locally:
+`singularity run <image_name>.simg cp -rT /workspace ~/workspace`
+
+Then, run the container:
+`singularity run --nv <image_name>.simg jupyter notebook --notebook-dir=~/workspace/python/jupyter_notebook`
+
+Then, open the jupyter notebook in browser: http://localhost:8888
+Start working on the lab by clicking on the `Start_Here.ipynb` notebook.
+
+## Troubleshooting
+
+Q. cuDNN failed to initialize or GPU out of memory error
+
+A. This error occurs when the Jupyter kernels of previously run notebooks were not shut down. Please make sure the kernels of all previous notebooks are shut down. (Go to the Home tab --> click the Running tab --> kill the notebooks that are no longer being used.)
+
+Q. Cannot write to /tmp directory
+
+A. Some notebooks depend on writing logs to the /tmp directory. While creating the container, make sure the /tmp directory is accessible with write permission from the container. Alternatively, you can change the tmp directory location.
+

+ 21 - 0
hpc_ai/ai_science_cfd/Singularity

@@ -0,0 +1,21 @@
+# Copyright (c) 2020 NVIDIA Corporation.  All rights reserved. 
+
+Bootstrap: docker
+FROM: nvcr.io/nvidia/tensorflow:20.01-tf2-py3
+
+%environment
+%post
+    apt-get update -y
+    apt-get install -y libsm6 libxext6 libxrender-dev git
+    pip3 install opencv-python==4.1.2.30 pandas seaborn sklearn matplotlib scikit-fmm tqdm h5py gdown
+    mkdir /workspace/python/jupyter_notebook/CFD/data
+    python3 /workspace/python/source_code/dataset.py
+
+%files
+     English/* /workspace/
+
+%runscript
+    "$@"
+
+%labels
+    AUTHOR bharatk

+ 1 - 0
hpc_ai/ai_science_climate/.gitignore

@@ -0,0 +1 @@
+.ipynb_checkpoints

+ 25 - 0
hpc_ai/ai_science_climate/Dockerfile

@@ -0,0 +1,25 @@
+# Copyright (c) 2020 NVIDIA Corporation.  All rights reserved.
+
+# To build the docker container, run: $ sudo docker build -t ai-science-climate:latest --network=host .
+# To run: $ sudo docker run --rm -it --gpus=all --network=host -p 8888:8888 ai-science-climate:latest
+# Finally, open http://127.0.0.1:8888/
+
+# Select Base Image 
+FROM nvcr.io/nvidia/tensorflow:20.01-tf2-py3
+# Update the repo
+RUN apt-get update -y
+# Install required dependencies
+RUN apt-get install -y libsm6 libxext6 libxrender-dev git nvidia-modprobe
+# Install required python packages
+RUN pip3 install  opencv-python==4.1.2.30 pandas seaborn sklearn matplotlib scikit-fmm tqdm h5py gdown
+
+##### TODO - change this for the final repo
+
+# Copy the notebook and source files
+COPY English/ /workspace/
+
+# This downloads all the datasets
+RUN python3 /workspace/python/source_code/dataset.py
+
+## Uncomment this line to run Jupyter notebook by default
+CMD jupyter notebook --no-browser --allow-root --ip=0.0.0.0 --port=8888 --NotebookApp.token="" --notebook-dir=/workspace/python/jupyter_notebook/

File diff suppressed because it is too large
+ 564 - 0
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/CNN's.ipynb


+ 430 - 0
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/Part_2.ipynb

@@ -0,0 +1,430 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[Home Page](../Start_Here.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](CNN's.ipynb)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# CNN Primer and Keras 101"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "In this notebook, participants will be introduced to CNN, implement it using Keras. For an absolute beginner this notebook would serve as a good starting point.\n",
+    "\n",
+    "**Contents of the this notebook:**\n",
+    "\n",
+    "- [How a Deep Learning project is planned ?](#Machine-Learning-Pipeline)\n",
+    "- [Wrapping things up with an example ( Classification )](#Image-Classification-on-types-of-clothes)\n",
+    "\n",
+    "\n",
+    "**By the end of this notebook participant will:**\n",
+    "\n",
+    "- Understand the Machine Learning Pipeline\n",
+    "- Write a Deep Learning Classifier and train it.\n",
+    "\n",
+    "**We will be building a _Multi-class Classifier_ to classify images of clothing to their respective classes**"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Machine Learning Pipeline\n",
+    "\n",
+    "During the bootcamp we will be making use of the following buckets to help us understand how a Machine Learning project should be planned and executed: \n",
+    "\n",
+    "1. **Data**: To start with any ML project we need data which is pre-processed and can be fed into the network.\n",
+    "2. **Task**: There are many tasks present in ML, we need to make sure we understand and define the problem statement accurately.\n",
+    "3. **Model**: We need to build our model, which is neither too deep and complex, thereby taking a lot of computational power or too small that it could not learn the important features.\n",
+    "4. **Loss**: Out of the many _loss functions_ present, we need to carefully choose a _loss function_ which is suitable for the task we are about to carry out.\n",
+    "5. **Learning**: As we mentioned in our last notebook, there are a variety of _optimisers_ each with their advantages and disadvantages. So here we choose an _optimiser_ which is suitable for our task and train our model using the set hyperparameters.\n",
+    "6. **Evaluation**: This is a crucial step in the process to determine if our model has learnt the features properly by analysing how it performs when unseen data is given to it. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "**Here we will be building a _Multi-class Classifier_ to classify images of clothing to their respective classes.**\n",
+    "\n",
+    "We will follow the above discussed pipeline to complete the example.\n",
+    "\n",
+    "## Image Classification on types of clothes  \n",
+    "\n",
+    "####  Step -1 : Data \n",
+    "\n",
+    "We will be using the **F-MNIST ( Fashion MNIST )** dataset, which is a very popular dataset. This dataset contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels).\n",
+    "\n",
+    "<img src=\"images/fashion-mnist.png\" alt=\"Fashion MNIST sprite\"  width=\"600\">\n",
+    "\n",
+    "*Source: https://www.tensorflow.org/tutorials/keras/classification*"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Import Necessary Libraries\n",
+    "\n",
+    "from __future__ import absolute_import, division, print_function, unicode_literals\n",
+    "\n",
+    "# TensorFlow and tf.keras\n",
+    "import tensorflow as tf\n",
+    "from tensorflow import keras\n",
+    "\n",
+    "# Helper libraries\n",
+    "import numpy as np\n",
+    "import matplotlib.pyplot as plt\n",
+    "\n",
+    "print(tf.__version__)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [],
+   "source": [
+    "# Let's Import the Dataset\n",
+    "fashion_mnist = keras.datasets.fashion_mnist\n",
+    "\n",
+    "(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Loading the dataset returns four NumPy arrays:\n",
+    "\n",
+    "* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.\n",
+    "* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.\n",
+    "\n",
+    "The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:\n",
+    "\n",
+    "<table>\n",
+    "  <tr>\n",
+    "    <th>Label</th>\n",
+    "    <th>Class</th>\n",
+    "  </tr>\n",
+    "  <tr>\n",
+    "    <td>0</td>\n",
+    "    <td>T-shirt/top</td>\n",
+    "  </tr>\n",
+    "  <tr>\n",
+    "    <td>1</td>\n",
+    "    <td>Trouser</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>2</td>\n",
+    "    <td>Pullover</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>3</td>\n",
+    "    <td>Dress</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>4</td>\n",
+    "    <td>Coat</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>5</td>\n",
+    "    <td>Sandal</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>6</td>\n",
+    "    <td>Shirt</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>7</td>\n",
+    "    <td>Sneaker</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>8</td>\n",
+    "    <td>Bag</td>\n",
+    "  </tr>\n",
+    "    <tr>\n",
+    "    <td>9</td>\n",
+    "    <td>Ankle boot</td>\n",
+    "  </tr>\n",
+    "</table>\n",
+    "\n",
+    "Each image is mapped to a single label. Since the *class names* are not included with the dataset, let us store them in an array so that we can use them later when plotting the images:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n",
+    "               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Understanding the Data"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#Print Array Size of Training Set \n",
+    "print(\"Size of Training Images :\"+str(train_images.shape))\n",
+    "#Print Array Size of Label\n",
+    "print(\"Size of Training Labels :\"+str(train_labels.shape))\n",
+    "\n",
+    "#Print Array Size of Test Set \n",
+    "print(\"Size of Test Images :\"+str(test_images.shape))\n",
+    "#Print Array Size of Label\n",
+    "print(\"Size of Test Labels :\"+str(test_labels.shape))\n",
+    "\n",
+    "#Let's See how our Outputs Look like \n",
+    "print(\"Training Set Labels :\"+str(train_labels))\n",
+    "#Data in the Test Set\n",
+    "print(\"Test Set Labels :\"+str(test_labels))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Data Pre-processing\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "plt.figure()\n",
+    "plt.imshow(train_images[0])\n",
+    "plt.colorbar()\n",
+    "plt.grid(False)\n",
+    "plt.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The image pixel values range from 0 to 255. Let us now normalise the data range from 0 - 255 to 0 - 1 in both the *Train* and *Test* set. This Normalisation of pixels helps us by optimizing the process where the gradients are computed."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "train_images = train_images / 255.0\n",
+    "test_images = test_images / 255.0"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let's Print to Veryify if the Data is of the correct format.\n",
+    "plt.figure(figsize=(10,10))\n",
+    "for i in range(25):\n",
+    "    plt.subplot(5,5,i+1)\n",
+    "    plt.xticks([])\n",
+    "    plt.yticks([])\n",
+    "    plt.grid(False)\n",
+    "    plt.imshow(train_images[i], cmap=plt.cm.binary)\n",
+    "    plt.xlabel(class_names[train_labels[i]])\n",
+    "plt.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Defining our Model\n",
+    "\n",
+    "Our Model has three layers :\n",
+    "\n",
+    "- 784 Input features ( 28 * 28 ) \n",
+    "- 128 nodes in hidden layer (Feel free to experiment with the value)\n",
+    "- 10 output nodes to denote the Class\n",
+    "\n",
+    "Implementing the same in Keras ( Machine Learning framework built on top of Tensorflow, Theano, etc..) \n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from tensorflow.keras import backend as K\n",
+    "K.clear_session()\n",
+    "model = keras.Sequential([\n",
+    "    keras.layers.Flatten(input_shape=(28, 28)),\n",
+    "    keras.layers.Dense(128, activation='relu'),\n",
+    "    keras.layers.Dense(10, activation='softmax')\n",
+    "])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.\n",
+    "\n",
+    "After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely connected, or fully connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer is a 10-node *softmax* layer that returns an array of 10 probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.\n",
+    "\n",
+    "### Compile the model\n",
+    "\n",
+    "Before the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:\n",
+    "\n",
+    "* *Loss function* —This measures how accurate the model is during training. You want to minimize this function to \"steer\" the model in the right direction.\n",
+    "* *Optimizer* —This is how the model is updated based on the data it sees and its loss function.\n",
+    "* *Metrics* —Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "model.compile(optimizer='adam',\n",
+    "              loss='sparse_categorical_crossentropy',\n",
+    "              metrics=['accuracy'])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Train the model\n",
+    "\n",
+    "Training the neural network model requires the following steps:\n",
+    "\n",
+    "1. Feed the training data to the model. In this example, the training data is in the `train_images` and `train_labels` arrays.\n",
+    "2. The model learns to associate images and labels.\n",
+    "3. You ask the model to make predictions about a test set—in this example, the `test_images` array. Verify that the predictions match the labels from the `test_labels` array.\n",
+    "\n",
+    "To start training,  call the `model.fit` method—so called because it \"fits\" the model to the training data:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "model.fit(train_images, train_labels ,epochs=5)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Evaluate accuracy\n",
+    "\n",
+    "Next, compare how the model performs on the test dataset:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#Evaluating the Model using the Test Set\n",
+    "\n",
+    "test_loss, test_acc = model.evaluate(test_images,  test_labels, verbose=2)\n",
+    "\n",
+    "print('\\nTest accuracy:', test_acc)"
+   ]
+  },
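+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To illustrate the *softmax* output described earlier, the cell below (a small illustrative sketch, not part of the original tutorial flow) predicts the class probabilities for the first test image and picks the most likely class with `np.argmax`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustrative sketch: inspect the 10 softmax probability scores for one test image\n",
+    "predictions = model.predict(test_images)\n",
+    "print(\"Probability scores :\", predictions[0])\n",
+    "print(\"Predicted class    :\", class_names[np.argmax(predictions[0])])\n",
+    "print(\"True class         :\", class_names[test_labels[0]])"
+   ]
+  },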
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Exercise\n",
+    "\n",
+    "Try adding more dense layers to the network above and observe change in accuracy."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We get an Accuracy of 87% in the Test dataset which is less than the 89% we got during the Training phase, This problem in ML is called as Overfitting\n",
+    "\n",
+    "## Important:\n",
+    "<mark>Shutdown the kernel before clicking on “Next Notebook” to free up the GPU memory.</mark>\n",
+    "\n",
+    "## Licensing\n",
+    "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[Home Page](../Start_Here.ipynb)\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;&emsp;&emsp;\n",
+    "&emsp;&emsp;&emsp;\n",
+    "[Next Notebook](CNN's.ipynb)"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

File diff suppressed because it is too large
+ 628 - 0
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/Resnets.ipynb


File diff suppressed because it is too large
+ 526 - 0
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/Start_Here.ipynb


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/Activation_Function.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/NVIDIA_Bootcamp.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/activation_fns.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/alexnet.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/ann.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/bp_left.gif


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/bp_right.gif


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/cnn.jpeg


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/conv.gif


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/conv.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/conv_depth.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/convtranspose.gif


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/convtranspose_conv.gif


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/evaluation.jpg


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/evaluation.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/fashion-mnist.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/feature_hierarchy.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/identity.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/location.PNG


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/max_pool.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/ml_pipeline.jpeg


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/model.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/neuron.jpg


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/or_left.gif


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/or_right.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/our_cnn.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/resblock.PNG


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/resnet.PNG


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/stats.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/sup-unsup.png


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/training.gif


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/xnor_left.gif


BIN
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Intro_to_DL/images/xnor_right.png


+ 144 - 0
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Start_Here.ipynb

@@ -0,0 +1,144 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Welcome to AI for Science Bootcamp\n",
+    "\n",
+    "The objective of this bootcamp is to give an introduction to application of Artificial Intelligence (AI) algorithms in Science ( High Performance Computing(HPC) Simulations ). This bootcamp will introduce participants to fundamentals of AI and how those can be applied to different HPC simulation domains. \n",
+    "\n",
+    "The following contents will be covered during the Bootcamp :\n",
+    "\n",
+    "- [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb)\n",
+    "- [Tropical Cyclone Intensity Estimation using Deep Convolution Neural Networks.](Tropical_Cyclone_Intensity_Estimation/The_Problem_Statement.ipynb)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb)\n",
+    "\n",
+    "In this notebook, participants will be introduced to Convolution Neural Network and how to implement one using Keras API. For an absolute beginner to CNN and Keras this notebook would serve as a good starting point.\n",
+    "\n",
+    "**By the end of this notebook you will:**\n",
+    "\n",
+    "- Understand the Machine Learning pipeline\n",
+    "- Understand how a Convolution Neural Network works\n",
+    "- Write your own Deep Learning classifier and train it.\n",
+    "\n",
+    "For in depth understanding of Deep Learning Concepts, visit [NVIDIA Deep Learning Institute](https://www.nvidia.com/en-us/deep-learning-ai/education/)"
+   ]
+  },
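+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a quick preview, a minimal Keras CNN classifier of the kind built in that notebook might look like the sketch below. This is illustrative only, not the lab's actual code; it assumes TensorFlow 2.x and 28x28 grayscale inputs such as Fashion-MNIST."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustrative sketch, not part of the original lab: a small CNN classifier\n",
+    "# built with the Keras Sequential API (assumes TensorFlow 2.x).\n",
+    "import tensorflow as tf\n",
+    "from tensorflow.keras import layers\n",
+    "\n",
+    "model = tf.keras.Sequential([\n",
+    "    # Two convolution + pooling stages extract spatial features\n",
+    "    layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),\n",
+    "    layers.MaxPooling2D(),\n",
+    "    layers.Conv2D(64, 3, activation='relu'),\n",
+    "    layers.MaxPooling2D(),\n",
+    "    # Flatten the feature maps and classify into 10 categories\n",
+    "    layers.Flatten(),\n",
+    "    layers.Dense(10, activation='softmax')\n",
+    "])\n",
+    "model.compile(optimizer='adam',\n",
+    "              loss='sparse_categorical_crossentropy',\n",
+    "              metrics=['accuracy'])\n",
+    "model.summary()"
+   ]
+  },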
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## [Tropical Cyclone Intensity Estimation using Deep Convolutional Neural Networks](Tropical_Cyclone_Intensity_Estimation/The_Problem_Statement.ipynb)\n",
+    "\n",
+    "In this notebook, participants will be introduced to how Convolutional Neural Networks (CNN) can be applied in the field of Climate analysis.\n",
+    "\n",
+    "**Contents of this Notebook:**\n",
+    "\n",
+    "- Understanding the problem statement\n",
+    "- Building a Deep Learning pipeline\n",
+    "- Understand input data\n",
+    "- Annotating the data\n",
+    "- Build the model\n",
+    "- Understanding different accuracy and what they mean\n",
+    "- Improving the model\n",
+    "\n",
+    "**By the End of the Notebook you will:** \n",
+    "\n",
+    "- Understand the process of applying Deep Learning to solve a problem in the field of Climate Analysis \n",
+    "- Understand various challenges with data pre-processing\n",
+    "- How hyper-parameters play an essential role in improving the accuracy of the model."
+   ]
+  },
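+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The notebook above notes that hyper-parameters play an essential role in model accuracy. The sketch below shows where common knobs such as the learning rate, batch size and number of epochs live in a Keras training run; it is illustrative only, reusing the toy CNN defined earlier on Fashion-MNIST rather than the cyclone data."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustrative sketch, not part of the original lab: training the toy CNN from\n",
+    "# the previous cell on Fashion-MNIST to show the common hyper-parameters.\n",
+    "import tensorflow as tf\n",
+    "\n",
+    "(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()\n",
+    "x_train = x_train[..., None] / 255.0  # add a channel dim, scale to [0, 1]\n",
+    "\n",
+    "model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),\n",
+    "              loss='sparse_categorical_crossentropy',\n",
+    "              metrics=['accuracy'])\n",
+    "history = model.fit(x_train, y_train,\n",
+    "                    batch_size=64,         # hyper-parameter: samples per update\n",
+    "                    epochs=5,              # hyper-parameter: passes over the data\n",
+    "                    validation_split=0.2)  # hold out 20% to watch for over-fitting"
+   ]
+  }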
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

File diff suppressed because it is too large
+ 494 - 0
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Tropical_Cyclone_Intensity_Estimation/Approach_to_the_Problem_&_Inspecting_and_Cleaning_the_Required_Data.ipynb


+ 0 - 0
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Tropical_Cyclone_Intensity_Estimation/Competition.ipynb


Some files were not shown because too many files changed in this diff