
Purged solutions

Aswinkumar 3 years ago
parent
commit
47a0b3b1c1

+ 0 - 662
ai/DeepStream/English/python/jupyter_notebook/Multi-stream_Multi_DNN_Solution.ipynb

@@ -1,662 +0,0 @@
-{
- "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "# Hackathon Solution : Multi-stream - Multi-DNN pipeline\n",
-    "\n",
-    "In this notebook, you will build an Multi-stream Multi-DNN pipeline using the concepts learned from the previous notebooks. \n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Building the pipeline\n",
-    "\n",
-    "We will the using batched on the Multi-DNN network from [Notebook 3](Introduction_to_Multi-DNN_pipeline.ipynb) and combine it with the knowledge learnt in [Notebook 4](Multi-stream_pipeline.ipynb). \n",
-    "\n",
-    "\n",
-    "Here are the illustrations of the Pipeline \n",
-    "![test2](images/test2.png)\n",
-    "![test3](images/test3.png)\n",
-    "\n",
-    "Let us get started with the Notebook , You will have to fill in the `TODO` parts of the code present in the Notebook to complete the pipeline. Feel free to refer to the previous notebooks for the commands."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Import required libraries \n",
-    "import sys\n",
-    "sys.path.append('../source_code')\n",
-    "import gi\n",
-    "import configparser\n",
-    "gi.require_version('Gst', '1.0')\n",
-    "from gi.repository import GObject, Gst\n",
-    "from gi.repository import GLib\n",
-    "from ctypes import *\n",
-    "import time\n",
-    "import sys\n",
-    "import math\n",
-    "import platform\n",
-    "from common.bus_call import bus_call\n",
-    "from common.FPS import GETFPS\n",
-    "import pyds\n",
-    "\n",
-    "\n",
-    "# Define variables to be used later\n",
-    "fps_streams={}\n",
-    "\n",
-    "PGIE_CLASS_ID_VEHICLE = 0\n",
-    "PGIE_CLASS_ID_BICYCLE = 1\n",
-    "PGIE_CLASS_ID_PERSON = 2\n",
-    "PGIE_CLASS_ID_ROADSIGN = 3\n",
-    "\n",
-    "MUXER_OUTPUT_WIDTH=1920\n",
-    "MUXER_OUTPUT_HEIGHT=1080\n",
-    "\n",
-    "TILED_OUTPUT_WIDTH=1920\n",
-    "TILED_OUTPUT_HEIGHT=1080\n",
-    "OSD_PROCESS_MODE= 0\n",
-    "OSD_DISPLAY_TEXT= 0\n",
-    "pgie_classes_str= [\"Vehicle\", \"TwoWheeler\", \"Person\",\"RoadSign\"]\n",
-    "\n",
-    "################ Three Stream Pipeline ###########\n",
-    "# Define Input and output Stream information \n",
-    "num_sources = 3 \n",
-    "INPUT_VIDEO_1 = '/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264'\n",
-    "INPUT_VIDEO_2 = '/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264'\n",
-    "INPUT_VIDEO_3 = '/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264'\n",
-    "OUTPUT_VIDEO_NAME = \"../source_code/N4/ds_out.mp4\""
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We define a function `make_elm_or_print_err()` to create our elements and report any errors if the creation fails.\n",
-    "\n",
-    "Elements are created using the `Gst.ElementFactory.make()` function as part of Gstreamer library."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "## Make Element or Print Error and any other detail\n",
-    "def make_elm_or_print_err(factoryname, name, printedname, detail=\"\"):\n",
-    "  print(\"Creating\", printedname)\n",
-    "  elm = Gst.ElementFactory.make(factoryname, name)\n",
-    "  if not elm:\n",
-    "     sys.stderr.write(\"Unable to create \" + printedname + \" \\n\")\n",
-    "  if detail:\n",
-    "     sys.stderr.write(detail)\n",
-    "  return elm"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "#### Initialise GStreamer and Create an Empty Pipeline"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "for i in range(0,num_sources):\n",
-    "        fps_streams[\"stream{0}\".format(i)]=GETFPS(i)\n",
-    "\n",
-    "# Standard GStreamer initialization\n",
-    "GObject.threads_init()\n",
-    "Gst.init(None)\n",
-    "\n",
-    "# Create gstreamer elements */\n",
-    "# Create Pipeline element that will form a connection of other elements\n",
-    "print(\"Creating Pipeline \\n \")\n",
-    "pipeline = Gst.Pipeline()\n",
-    "\n",
-    "if not pipeline:\n",
-    "    sys.stderr.write(\" Unable to create Pipeline \\n\")\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "#### Create Elements that are required for our pipeline\n",
-    "\n",
-    "Compared to the first notebook , we use a lot of queues in this notebook to buffer data when it moves from one plugin to another."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "########### Create Elements required for the Pipeline ########### \n",
-    "\n",
-    "######### Defining Stream 1 \n",
-    "# Source element for reading from the file\n",
-    "source1 = make_elm_or_print_err(\"filesrc\", \"file-source-1\",'file-source-1')\n",
-    "# Since the data format in the input file is elementary h264 stream,we need a h264parser\n",
-    "h264parser1 = make_elm_or_print_err(\"h264parse\", \"h264-parser-1\",\"h264-parser-1\")\n",
-    "# Use nvdec_h264 for hardware accelerated decode on GPU\n",
-    "decoder1 = make_elm_or_print_err(\"nvv4l2decoder\", \"nvv4l2-decoder-1\",\"nvv4l2-decoder-1\")\n",
-    "   \n",
-    "##########\n",
-    "\n",
-    "########## Defining Stream 2 \n",
-    "# Source element for reading from the file\n",
-    "source2 = make_elm_or_print_err(\"filesrc\", \"file-source-2\",\"file-source-2\")\n",
-    "# Since the data format in the input file is elementary h264 stream, we need a h264parser\n",
-    "h264parser2 = make_elm_or_print_err(\"h264parse\", \"h264-parser-2\", \"h264-parser-2\")\n",
-    "# Use nvdec_h264 for hardware accelerated decode on GPU\n",
-    "decoder2 = make_elm_or_print_err(\"nvv4l2decoder\", \"nvv4l2-decoder-2\",\"nvv4l2-decoder-2\")\n",
-    "########### \n",
-    "\n",
-    "########## Defining Stream 3\n",
-    "# Source element for reading from the file\n",
-    "source3 = make_elm_or_print_err(\"filesrc\", \"file-source-3\",\"file-source-3\")\n",
-    "# Since the data format in the input file is elementary h264 stream, we need a h264parser\n",
-    "h264parser3 = make_elm_or_print_err(\"h264parse\", \"h264-parser-3\", \"h264-parser-3\")\n",
-    "# Use nvdec_h264 for hardware accelerated decode on GPU\n",
-    "decoder3 = make_elm_or_print_err(\"nvv4l2decoder\", \"nvv4l2-decoder-3\",\"nvv4l2-decoder-3\")\n",
-    "########### \n",
-    "    \n",
-    "# Create nvstreammux instance to form batches from one or more sources.\n",
-    "streammux = make_elm_or_print_err(\"nvstreammux\", \"Stream-muxer\",\"Stream-muxer\") \n",
-    "# Use nvinfer to run inferencing on decoder's output, behaviour of inferencing is set through config file\n",
-    "pgie = make_elm_or_print_err(\"nvinfer\", \"primary-inference\" ,\"pgie\")\n",
-    "# Use nvtracker to give objects unique-ids\n",
-    "tracker = make_elm_or_print_err(\"nvtracker\", \"tracker\",'tracker')\n",
-    "# Seconday inference for Finding Car Color\n",
-    "sgie1 = make_elm_or_print_err(\"nvinfer\", \"secondary1-nvinference-engine\",'sgie1')\n",
-    "# Seconday inference for Finding Car Make\n",
-    "sgie2 = make_elm_or_print_err(\"nvinfer\", \"secondary2-nvinference-engine\",'sgie2')\n",
-    "# Seconday inference for Finding Car Type\n",
-    "sgie3 = make_elm_or_print_err(\"nvinfer\", \"secondary3-nvinference-engine\",'sgie3')\n",
-    "# Creating Tiler to present more than one streams\n",
-    "tiler=make_elm_or_print_err(\"nvmultistreamtiler\", \"nvtiler\",\"nvtiler\")\n",
-    "# Use convertor to convert from NV12 to RGBA as required by nvosd\n",
-    "nvvidconv = make_elm_or_print_err(\"nvvideoconvert\", \"convertor\",\"nvvidconv\")\n",
-    "# Create OSD to draw on the converted RGBA buffer\n",
-    "nvosd = make_elm_or_print_err(\"nvdsosd\", \"onscreendisplay\",\"nvosd\")\n",
-    "# Creating queue's to buffer incoming data from pgie\n",
-    "queue1=make_elm_or_print_err(\"queue\",\"queue1\",\"queue1\")\n",
-    "# Creating queue's to buffer incoming data from tiler\n",
-    "queue2=make_elm_or_print_err(\"queue\",\"queue2\",\"queue2\")\n",
-    "# Creating queue's to buffer incoming data from nvvidconv\n",
-    "queue3=make_elm_or_print_err(\"queue\",\"queue3\",\"queue3\")\n",
-    "# Creating queue's to buffer incoming data from nvosd\n",
-    "queue4=make_elm_or_print_err(\"queue\",\"queue4\",\"queue4\")\n",
-    "# Creating queue's to buffer incoming data from nvvidconv2\n",
-    "queue5=make_elm_or_print_err(\"queue\",\"queue5\",\"queue5\")\n",
-    "# Creating queue's to buffer incoming data from nvtracker\n",
-    "queue6=make_elm_or_print_err(\"queue\",\"queue6\",\"queue6\")\n",
-    "# Creating queue's to buffer incoming data from sgie1\n",
-    "queue7=make_elm_or_print_err(\"queue\",\"queue7\",\"queue7\")\n",
-    "# Creating queue's to buffer incoming data from sgie2\n",
-    "queue8=make_elm_or_print_err(\"queue\",\"queue8\",\"queue8\")\n",
-    "# Creating queue's to buffer incoming data from sgie3\n",
-    "queue9=make_elm_or_print_err(\"queue\",\"queue9\",\"queue9\")\n",
-    "# Use convertor to convert from NV12 to RGBA as required by nvosd\n",
-    "nvvidconv2 = make_elm_or_print_err(\"nvvideoconvert\", \"convertor2\",\"nvvidconv2\")\n",
-    "# Place an encoder instead of OSD to save as video file\n",
-    "encoder = make_elm_or_print_err(\"avenc_mpeg4\", \"encoder\", \"Encoder\")\n",
-    "# Parse output from Encoder \n",
-    "codeparser = make_elm_or_print_err(\"mpeg4videoparse\", \"mpeg4-parser\", 'Code Parser')\n",
-    "# Create a container\n",
-    "container = make_elm_or_print_err(\"qtmux\", \"qtmux\", \"Container\")\n",
-    "# Create Sink for storing the output \n",
-    "sink = make_elm_or_print_err(\"filesink\", \"filesink\", \"Sink\")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Now that we have created the elements ,we can now set various properties for out pipeline at this point. The configuration files are the same as in [Multi-DNN Notebook](Introduction_to_Multi-DNN_pipeline.ipynb)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "############ Set properties for the Elements ############\n",
-    "# Set Input Video files \n",
-    "source1.set_property('location', INPUT_VIDEO_1)\n",
-    "source2.set_property('location', INPUT_VIDEO_2)\n",
-    "source3.set_property('location', INPUT_VIDEO_2)\n",
-    "# Set Input Width , Height and Batch Size \n",
-    "streammux.set_property('width', 1920)\n",
-    "streammux.set_property('height', 1080)\n",
-    "streammux.set_property('batch-size', num_sources)\n",
-    "# Timeout in microseconds to wait after the first buffer is available \n",
-    "# to push the batch even if a complete batch is not formed.\n",
-    "streammux.set_property('batched-push-timeout', 4000000)\n",
-    "# Set configuration file for nvinfer \n",
-    "# Set Congifuration file for nvinfer \n",
-    "pgie.set_property('config-file-path', \"../source_code/N4/dstest4_pgie_config.txt\")\n",
-    "sgie1.set_property('config-file-path', \"../source_code/N4/dstest4_sgie1_config.txt\")\n",
-    "sgie2.set_property('config-file-path', \"../source_code/N4/dstest4_sgie2_config.txt\")\n",
-    "sgie3.set_property('config-file-path', \"../source_code/N4/dstest4_sgie3_config.txt\")\n",
-    "#Set properties of tracker from tracker_config\n",
-    "config = configparser.ConfigParser()\n",
-    "config.read('../source_code/N4/dstest4_tracker_config.txt')\n",
-    "config.sections()\n",
-    "for key in config['tracker']:\n",
-    "    if key == 'tracker-width' :\n",
-    "        tracker_width = config.getint('tracker', key)\n",
-    "        tracker.set_property('tracker-width', tracker_width)\n",
-    "    if key == 'tracker-height' :\n",
-    "        tracker_height = config.getint('tracker', key)\n",
-    "        tracker.set_property('tracker-height', tracker_height)\n",
-    "    if key == 'gpu-id' :\n",
-    "        tracker_gpu_id = config.getint('tracker', key)\n",
-    "        tracker.set_property('gpu_id', tracker_gpu_id)\n",
-    "    if key == 'll-lib-file' :\n",
-    "        tracker_ll_lib_file = config.get('tracker', key)\n",
-    "        tracker.set_property('ll-lib-file', tracker_ll_lib_file)\n",
-    "    if key == 'll-config-file' :\n",
-    "        tracker_ll_config_file = config.get('tracker', key)\n",
-    "        tracker.set_property('ll-config-file', tracker_ll_config_file)\n",
-    "    if key == 'enable-batch-process' :\n",
-    "        tracker_enable_batch_process = config.getint('tracker', key)\n",
-    "        tracker.set_property('enable_batch_process', tracker_enable_batch_process)\n",
-    "        \n",
-    "## Set batch size \n",
-    "pgie_batch_size=pgie.get_property(\"batch-size\")\n",
-    "print(\"PGIE batch size :\",end='')\n",
-    "print(pgie_batch_size)\n",
-    "if(pgie_batch_size != num_sources):\n",
-    "    print(\"WARNING: Overriding infer-config batch-size\",pgie_batch_size,\" with number of sources \", num_sources,\" \\n\")\n",
-    "    pgie.set_property(\"batch-size\",num_sources)\n",
-    "    \n",
-    "## Set batch size \n",
-    "sgie1_batch_size=sgie1.get_property(\"batch-size\")\n",
-    "print(\"SGIE1 batch size :\",end='')\n",
-    "print(sgie1_batch_size)\n",
-    "if(sgie1_batch_size != num_sources):\n",
-    "    print(\"WARNING: Overriding infer-config batch-size\",sgie1_batch_size,\" with number of sources \", num_sources,\" \\n\")\n",
-    "    sgie1.set_property(\"batch-size\",num_sources)\n",
-    "    \n",
-    "## Set batch size \n",
-    "sgie2_batch_size=sgie2.get_property(\"batch-size\")\n",
-    "print(\"SGIE2 batch size :\",end='')\n",
-    "print(sgie2_batch_size)\n",
-    "if(sgie2_batch_size != num_sources):\n",
-    "    print(\"WARNING: Overriding infer-config batch-size\",sgie2_batch_size,\" with number of sources \", num_sources,\" \\n\")\n",
-    "    sgie2.set_property(\"batch-size\",num_sources)\n",
-    "\n",
-    "## Set batch size \n",
-    "sgie3_batch_size=sgie3.get_property(\"batch-size\")\n",
-    "print(\"SGIE3 batch size :\",end='')\n",
-    "print(sgie3_batch_size)\n",
-    "if(sgie3_batch_size != num_sources):\n",
-    "    print(\"WARNING: Overriding infer-config batch-size\",sgie3_batch_size,\" with number of sources \", num_sources,\" \\n\")\n",
-    "    sgie3.set_property(\"batch-size\",num_sources)\n",
-    "    \n",
-    "# Set display configurations for nvmultistreamtiler    \n",
-    "tiler_rows=int(2)\n",
-    "tiler_columns=int(2)\n",
-    "tiler.set_property(\"rows\",tiler_rows)\n",
-    "tiler.set_property(\"columns\",tiler_columns)\n",
-    "tiler.set_property(\"width\", TILED_OUTPUT_WIDTH)\n",
-    "tiler.set_property(\"height\", TILED_OUTPUT_HEIGHT)\n",
-    "\n",
-    "# Set encoding properties and Sink configs\n",
-    "encoder.set_property(\"bitrate\", 2000000)\n",
-    "sink.set_property(\"location\", OUTPUT_VIDEO_NAME)\n",
-    "sink.set_property(\"sync\", 0)\n",
-    "sink.set_property(\"async\", 0)\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We now link all the elements in the order we prefer and create Gstreamer bus to feed all messages through it. "
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "########## Add and Link ELements in the Pipeline ########## \n",
-    "\n",
-    "print(\"Adding elements to Pipeline \\n\")\n",
-    "pipeline.add(source1)\n",
-    "pipeline.add(h264parser1)\n",
-    "pipeline.add(decoder1)\n",
-    "pipeline.add(source2)\n",
-    "pipeline.add(h264parser2)\n",
-    "pipeline.add(decoder2)\n",
-    "pipeline.add(source3)\n",
-    "pipeline.add(h264parser3)\n",
-    "pipeline.add(decoder3)\n",
-    "pipeline.add(streammux)\n",
-    "pipeline.add(pgie)\n",
-    "pipeline.add(tracker)\n",
-    "pipeline.add(sgie1)\n",
-    "pipeline.add(sgie2)\n",
-    "pipeline.add(sgie3)\n",
-    "pipeline.add(tiler)\n",
-    "pipeline.add(nvvidconv)\n",
-    "pipeline.add(nvosd)\n",
-    "pipeline.add(queue1)\n",
-    "pipeline.add(queue2)\n",
-    "pipeline.add(queue3)\n",
-    "pipeline.add(queue4)\n",
-    "pipeline.add(queue5)\n",
-    "pipeline.add(queue6)\n",
-    "pipeline.add(queue7)\n",
-    "pipeline.add(queue8)\n",
-    "pipeline.add(queue9)\n",
-    "pipeline.add(nvvidconv2)\n",
-    "pipeline.add(encoder)\n",
-    "pipeline.add(codeparser)\n",
-    "pipeline.add(container)\n",
-    "pipeline.add(sink)\n",
-    "\n",
-    "print(\"Linking elements in the Pipeline \\n\")\n",
-    "\n",
-    "source1.link(h264parser1)\n",
-    "h264parser1.link(decoder1)\n",
-    "\n",
-    "\n",
-    "###### Create Sink pad and connect to decoder's source pad \n",
-    "sinkpad1 = streammux.get_request_pad(\"sink_0\")\n",
-    "if not sinkpad1:\n",
-    "    sys.stderr.write(\" Unable to get the sink pad of streammux \\n\")\n",
-    "    \n",
-    "srcpad1 = decoder1.get_static_pad(\"src\")\n",
-    "if not srcpad1:\n",
-    "    sys.stderr.write(\" Unable to get source pad of decoder \\n\")\n",
-    "    \n",
-    "srcpad1.link(sinkpad1)\n",
-    "\n",
-    "######\n",
-    "\n",
-    "###### Create Sink pad and connect to decoder's source pad \n",
-    "source2.link(h264parser2)\n",
-    "h264parser2.link(decoder2)\n",
-    "\n",
-    "sinkpad2 = streammux.get_request_pad(\"sink_1\")\n",
-    "if not sinkpad2:\n",
-    "    sys.stderr.write(\" Unable to get the sink pad of streammux \\n\")\n",
-    "    \n",
-    "srcpad2 = decoder2.get_static_pad(\"src\")\n",
-    "if not srcpad2:\n",
-    "    sys.stderr.write(\" Unable to get source pad of decoder \\n\")\n",
-    "    \n",
-    "srcpad2.link(sinkpad2)\n",
-    "\n",
-    "######\n",
-    "\n",
-    "###### Create Sink pad and connect to decoder's source pad \n",
-    "source3.link(h264parser3)\n",
-    "h264parser3.link(decoder3)\n",
-    "\n",
-    "sinkpad3 = streammux.get_request_pad(\"sink_2\")\n",
-    "if not sinkpad2:\n",
-    "    sys.stderr.write(\" Unable to get the sink pad of streammux \\n\")\n",
-    "    \n",
-    "srcpad3 = decoder3.get_static_pad(\"src\")\n",
-    "if not srcpad3:\n",
-    "    sys.stderr.write(\" Unable to get source pad of decoder \\n\")\n",
-    "    \n",
-    "srcpad3.link(sinkpad3)\n",
-    "\n",
-    "######\n",
-    "\n",
-    "\n",
-    "streammux.link(queue1)\n",
-    "queue1.link(pgie)\n",
-    "pgie.link(queue2)\n",
-    "queue2.link(tracker)\n",
-    "tracker.link(queue3)\n",
-    "queue3.link(sgie1)\n",
-    "sgie1.link(queue4)\n",
-    "queue4.link(sgie2)\n",
-    "sgie2.link(queue5)\n",
-    "queue5.link(sgie3)\n",
-    "sgie3.link(queue6)\n",
-    "queue6.link(tiler)\n",
-    "tiler.link(queue7)\n",
-    "queue7.link(nvvidconv)\n",
-    "nvvidconv.link(queue8)\n",
-    "queue8.link(nvosd)\n",
-    "nvosd.link(queue9)\n",
-    "queue9.link(nvvidconv2)\n",
-    "nvvidconv2.link(encoder)\n",
-    "encoder.link(codeparser)\n",
-    "codeparser.link(container)\n",
-    "container.link(sink)\n"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# create an event loop and feed gstreamer bus mesages to it\n",
-    "loop = GObject.MainLoop()\n",
-    "bus = pipeline.get_bus()\n",
-    "bus.add_signal_watch()\n",
-    "bus.connect (\"message\", bus_call, loop)\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Our pipeline now carries the metadata forward but we have not done anything with it until now, but as mentoioned in the above pipeline diagram , we will now create a callback function to write relevant data on the frame once called and create a sink pad in the nvosd element to call the function. \n",
-    "\n",
-    "This callback function is the same as used in the previous notebook."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# tiler_sink_pad_buffer_probe  will extract metadata received on OSD sink pad\n",
-    "# and update params for drawing rectangle, object information etc.\n",
-    "def tiler_src_pad_buffer_probe(pad,info,u_data):\n",
-    "    #Intiallizing object counter with 0.\n",
-    "    obj_counter = {\n",
-    "        PGIE_CLASS_ID_VEHICLE:0,\n",
-    "        PGIE_CLASS_ID_PERSON:0,\n",
-    "        PGIE_CLASS_ID_BICYCLE:0,\n",
-    "        PGIE_CLASS_ID_ROADSIGN:0\n",
-    "    }\n",
-    "    # Set frame_number & rectangles to draw as 0 \n",
-    "    frame_number=0\n",
-    "    num_rects=0\n",
-    "    \n",
-    "    gst_buffer = info.get_buffer()\n",
-    "    if not gst_buffer:\n",
-    "        print(\"Unable to get GstBuffer \")\n",
-    "        return\n",
-    "\n",
-    "    # Retrieve batch metadata from the gst_buffer\n",
-    "    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the\n",
-    "    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)\n",
-    "    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))\n",
-    "    l_frame = batch_meta.frame_meta_list\n",
-    "    while l_frame is not None:\n",
-    "        try:\n",
-    "            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta\n",
-    "            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)\n",
-    "        except StopIteration:\n",
-    "            break\n",
-    "        \n",
-    "        # Get frame number , number of rectables to draw and object metadata\n",
-    "        frame_number=frame_meta.frame_num\n",
-    "        num_rects = frame_meta.num_obj_meta\n",
-    "        l_obj=frame_meta.obj_meta_list\n",
-    "        \n",
-    "        while l_obj is not None:\n",
-    "            try:\n",
-    "                # Casting l_obj.data to pyds.NvDsObjectMeta\n",
-    "                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)\n",
-    "            except StopIteration:\n",
-    "                break\n",
-    "            # Increment Object class by 1 and Set Box border to Red color     \n",
-    "            obj_counter[obj_meta.class_id] += 1\n",
-    "            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)\n",
-    "            try: \n",
-    "                l_obj=l_obj.next\n",
-    "            except StopIteration:\n",
-    "                break\n",
-    "        ################## Setting Metadata Display configruation ############### \n",
-    "        # Acquiring a display meta object.\n",
-    "        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)\n",
-    "        display_meta.num_labels = 1\n",
-    "        py_nvosd_text_params = display_meta.text_params[0]\n",
-    "        # Setting display text to be shown on screen\n",
-    "        py_nvosd_text_params.display_text = \"Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}\".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])\n",
-    "        # Now set the offsets where the string should appear\n",
-    "        py_nvosd_text_params.x_offset = 10\n",
-    "        py_nvosd_text_params.y_offset = 12\n",
-    "        # Font , font-color and font-size\n",
-    "        py_nvosd_text_params.font_params.font_name = \"Serif\"\n",
-    "        py_nvosd_text_params.font_params.font_size = 10\n",
-    "        # Set(red, green, blue, alpha); Set to White\n",
-    "        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)\n",
-    "        # Text background color\n",
-    "        py_nvosd_text_params.set_bg_clr = 1\n",
-    "        # Set(red, green, blue, alpha); set to Black\n",
-    "        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)\n",
-    "        # Using pyds.get_string() to get display_text as string to print in notebook\n",
-    "        print(pyds.get_string(py_nvosd_text_params.display_text))\n",
-    "        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)\n",
-    "        \n",
-    "        ############################################################################\n",
-    "        # Get frame rate through this probe\n",
-    "        fps_streams[\"stream{0}\".format(frame_meta.pad_index)].get_fps()\n",
-    "        try:\n",
-    "            l_frame=l_frame.next\n",
-    "        except StopIteration:\n",
-    "            break\n",
-    "\n",
-    "    return Gst.PadProbeReturn.OK\n"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "tiler_src_pad=sgie3.get_static_pad(\"src\")\n",
-    "if not tiler_src_pad:\n",
-    "    sys.stderr.write(\" Unable to get src pad \\n\")\n",
-    "else:\n",
-    "    tiler_src_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_src_pad_buffer_probe, 0)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Now with everything defined , we can start the playback and listen the events."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# List the sources\n",
-    "print(\"Now playing...\")\n",
-    "start_time = time.time()\n",
-    "print(\"Starting pipeline \\n\")\n",
-    "# start play back and listed to events\t\t\n",
-    "pipeline.set_state(Gst.State.PLAYING)\n",
-    "try:\n",
-    "    loop.run()\n",
-    "except:\n",
-    "    pass\n",
-    "# cleanup\n",
-    "print(\"Exiting app\\n\")\n",
-    "pipeline.set_state(Gst.State.NULL)\n",
-    "print(\"--- %s seconds ---\" % (time.time() - start_time))"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Convert video profile to be compatible with Jupyter notebook\n",
-    "!ffmpeg -loglevel panic -y -an -i ../source_code/N4/ds_out.mp4 -vcodec libx264 -pix_fmt yuv420p -profile:v baseline -level 3 ../source_code/N4/output.mp4"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Display the Output\n",
-    "from IPython.display import HTML\n",
-    "HTML(\"\"\"\n",
-    " <video width=\"960\" height=\"540\" controls>\n",
-    " <source src=\"../source_code/N4/output.mp4\"\n",
-    " </video>\n",
-    "\"\"\".format())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Licensing\n",
-    "  \n",
-    "This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)."
-   ]
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": "Python 3",
-   "language": "python",
-   "name": "python3"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 3
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": "python",
-   "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.6.2"
-  }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}

The file diff has been suppressed because it is too large
+ 0 - 1233
hpc_ai/ai_science_cfd/English/python/jupyter_notebook/CFD/Solution.ipynb


The file diff has been suppressed because it is too large
+ 0 - 458
hpc_ai/ai_science_climate/English/python/jupyter_notebook/Tropical_Cyclone_Intensity_Estimation/Solutions.ipynb