{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "     \n", "     \n", "     \n", "     \n", "     \n", "  \n", "[Home Page](Start_Here.ipynb)\n", " \n", " \n", "\n", "     \n", "     \n", "     \n", "     \n", "     \n", "   \n", "[1]\n", "[2](Performance_Analysis_using_NSight_systems.ipynb)\n", "[3](Performance_Analysis_using_NSight_systems_Continued.ipynb)\n", "     \n", "     \n", "     \n", "     \n", "[Next Notebook](Performance_Analysis_using_NSight_systems.ipynb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to Performance analysis\n", "\n", "\n", "In this notebook, we will get introduced to the various metrics used to measure the performance of a DeepStream pipeline and improve the performance of a DeepStream pipeline.\n", "\n", "- [Latency, Throughput, and GPU Metrics](#Latency,-Throughput,-and-GPU-Metrics)\n", " - [Latency](#Latency)\n", " - [GPU Metrics](#GPU-Metrics)\n", " - [Throughput](#Throughput)\n", "- [Case 1 : Multi-stream cascaded network pipeline](#Case-1:-Multi-stream-cascaded-network-pipeline.)\n", " - [Bench-marking with GST Probes](#Benchmarking-with-GST-Probes)\n", " - [Effects on OSD,Tiler & Queues](#Effects-on-OSD,-Tiler,-and-Queues)\n", "- [Summary](#Summary)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Latency, Throughput, and GPU Metrics\n", "\n", "\n", "### Latency\n", "\n", "Latency is important for real-time pipelines that are time-critical. Latency in a DeepStream pipeline can be measured using GStreamer debugging capabilities. By setting the `GST-DEBUG` environment variable to `GST_SCHEDULING:7`, we get a trace log that contains details on when the buffers are modified from which we can obtain detailed information about our pipeline." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#To make sure that right paths to the NVidia Libraries are added run this cell first\n", "!rm ~/.cache/gstreamer-1.0/registry.x86_64.bin\n", "!export LD_LIBRARY_PATH=/opt/tensorrtserver/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/.singularity.d/libs:$LD_LIBRARY_PATH" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!GST_DEBUG=\"GST_SCHEDULING:7\" GST_DEBUG_FILE=../source_code/trace.log \\\n", "python3 ../source_code/deepstream-app-1/deepstream_test_1.py '/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `trace.log` file is huge, and here is a small portion of the file that highlights the time a buffer entered the decoder plugin and the time the buffer enters the next input.\n", "\n", "```txt\n", "0:00:01.641136185 GST_SCHEDULING gstpad.c:4320:gst_pad_chain_data_unchecked:\u001b[00m calling chainfunction &gst_video_decoder_chain with buffer buffer: 0x7ff010028d90, pts 99:99:99.999999999, dts 0:00:02.966666637, dur 0:00:00.033333333, size 30487, offset 947619, offset_end 1013155, flags 0x2000\n", "\n", "00:01.648137739 GST_SCHEDULING gstpad.c:4320:gst_pad_chain_data_unchecked:\u001b[00m calling chainfunction &gst_nvstreammux_chain with buffer buffer: 0x7ff01001c5f0, pts 0:00:02.966666637, dts 99:99:99.999999999, dur 0:00:00.033333333, size 64, offset none, offset_end none, flags 0x0\n", "```\n", "\n", "Here latency can be calculated by looking at the time difference between the stream entering one element to the other in the pipeline. In the output shown above, it is ~7ms (00:01.648137739 - 0:00:01.641136185) , it is these timestamps that help us denote the latency. \n", "\n", "For more details, check [GStreamer's documentation on Latency](https://gstreamer.freedesktop.org/documentation/additional/design/latency.html?gi-language=c)\n", "\n", "### GPU Metrics\n", "\n", "We can use `nvidia-smi` to explore the GPU performance metrics while our application is running. GPU utilization is something we want to pay attention to, and we will discuss it below. Run the cell below to re-run the application while logging the results of `nvidia-smi`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!nvidia-smi dmon -i 0 -s ucmt -c 8 > ../source_code/smi.log & \\\n", "python3 ../source_code/deepstream-app-1/deepstream_test_1.py '/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can open the `smi.log` file to investigate our utilization metrics. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!cat ../source_code/smi.log" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Understanding nvidia-smi\n", "The cell block above passed the following arguments to `nvidia-smi` :\n", "\n", "- `dmon -i 0` \n", "\n", " - Reports default metrics (device monitoring) for the devices selected by comma-separated device list. In this case, we are reporting default metrics for GPU with index 0 since that is the GPU we are using.\n", "- `-s ucmt` : \n", " - We can choose which metrics we want to display. In this case, we supplied ucmt to indicate we want metrics for\n", " - u: Utilization (SM, Memory, Encoder and Decoder Utilization in %) \n", " - c: Proc and Mem Clocks (in MHz)\n", " - m: Frame Buffer and Bar1 memory usage (in MB)\n", " - t: PCIe Rx and Tx Throughput in MB/s (Maxwell and above)\n", "- `-c 8`\n", " - We can configure the number of iterations for which we are monitoring. In this case, we choose 8 iterations.\n", "\n", "Let's dive a bit deeper into a few of the metrics that we've selected since they are particularly useful to\n", "monitor.\n", "\n", "Utilization metrics report how busy each GPU is over time and can be used to determine how much an application is using the GPUs in the system. In particular, the `sm` column tracks the percent of the time over the past sample period during which one or more kernels were executing on the GPU. `fb` reports the GPU's frame buffer memory usage.\n", "\n", "### Throughput \n", "\n", "The Throughput of the pipeline gives us an idea of the dataflow, which helps us understand how many Streams it can process concurrently at a required FPS. In this set of notebooks, we would mainly concentrate on increasing our pipelines' FPS using various optimizations.\n", "\n", "\n", "## Case 1: Multi-stream cascaded network pipeline.\n", "\n", "In this section, we will optimize a Multi-stream network that was part of the problem statement in the Introduction to DeepStream notebooks.\n", "\n", "We will utilize our `deepstream-test-2-app` to include multi-stream functionalities using the `Streammux` plugin.\n", "\n", "\n", "![Pipeline](images/app-2.png)\n", "\n", "\n", "### Benchmarking with GST-Probes\n", "\n", "\n", "Here we'll import the `GETFPS` Class and use the `get_fps()` method inside it to calculate the average FPS of our stream. This is part of [DeepStream Python Apps Github Repository](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps), here we have modified the average FPS output time from 5s to 1s for benchmarking purposes.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import required libraries \n", "import sys\n", "sys.path.append('../source_code')\n", "import gi\n", "import configparser\n", "gi.require_version('Gst', '1.0')\n", "from gi.repository import GObject, Gst\n", "from gi.repository import GLib\n", "from ctypes import *\n", "import time\n", "import sys\n", "import math\n", "import platform\n", "from common.bus_call import bus_call\n", "from common.FPS import GETFPS\n", "import pyds\n", "\n", "\n", "# Define variables to be used later\n", "fps_streams={}\n", "\n", "PGIE_CLASS_ID_VEHICLE = 0\n", "PGIE_CLASS_ID_BICYCLE = 1\n", "PGIE_CLASS_ID_PERSON = 2\n", "PGIE_CLASS_ID_ROADSIGN = 3\n", "\n", "MUXER_OUTPUT_WIDTH=1920\n", "MUXER_OUTPUT_HEIGHT=1080\n", "\n", "TILED_OUTPUT_WIDTH=1920\n", "TILED_OUTPUT_HEIGHT=1080\n", "OSD_PROCESS_MODE= 0\n", "OSD_DISPLAY_TEXT= 0\n", "pgie_classes_str= [\"Vehicle\", \"TwoWheeler\", \"Person\",\"RoadSign\"]\n", "\n", "################ Three Stream Pipeline ###########\n", "# Define Input and output Stream information \n", "num_sources = 3 \n", "INPUT_VIDEO_1 = '/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264'\n", "INPUT_VIDEO_2 = '/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264'\n", "INPUT_VIDEO_3 = '/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264'\n", "OUTPUT_VIDEO_NAME = \"../source_code/N1/ds_out.mp4\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We define a function `make_elm_or_print_err()` to create our elements and report any errors if the creation fails.\n", "\n", "Elements are created using the `Gst.ElementFactory.make()` function as part of Gstreamer library." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Make Element or Print Error and any other detail\n", "def make_elm_or_print_err(factoryname, name, printedname, detail=\"\"):\n", " print(\"Creating\", printedname)\n", " elm = Gst.ElementFactory.make(factoryname, name)\n", " if not elm:\n", " sys.stderr.write(\"Unable to create \" + printedname + \" \\n\")\n", " if detail:\n", " sys.stderr.write(detail)\n", " return elm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Initialise GStreamer and Create an Empty Pipeline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in range(0,num_sources):\n", " fps_streams[\"stream{0}\".format(i)]=GETFPS(i)\n", "\n", "# Standard GStreamer initialization\n", "Gst.init(None)\n", "\n", "# Create gstreamer elements */\n", "# Create Pipeline element that will form a connection of other elements\n", "print(\"Creating Pipeline \\n \")\n", "pipeline = Gst.Pipeline()\n", "\n", "if not pipeline:\n", " sys.stderr.write(\" Unable to create Pipeline \\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create Elements that are required for our pipeline\n", "\n", "Compared to the first notebook , we use a lot of queues in this notebook to buffer data when it moves from one plugin to another." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "########### Create Elements required for the Pipeline ########### \n", "\n", "######### Defining Stream 1 \n", "# Source element for reading from the file\n", "source1 = make_elm_or_print_err(\"filesrc\", \"file-source-1\",'file-source-1')\n", "# Since the data format in the input file is elementary h264 stream,we need a h264parser\n", "h264parser1 = make_elm_or_print_err(\"h264parse\", \"h264-parser-1\",\"h264-parser-1\")\n", "# Use nvdec_h264 for hardware accelerated decode on GPU\n", "decoder1 = make_elm_or_print_err(\"nvv4l2decoder\", \"nvv4l2-decoder-1\",\"nvv4l2-decoder-1\")\n", " \n", "##########\n", "\n", "########## Defining Stream 2 \n", "# Source element for reading from the file\n", "source2 = make_elm_or_print_err(\"filesrc\", \"file-source-2\",\"file-source-2\")\n", "# Since the data format in the input file is elementary h264 stream, we need a h264parser\n", "h264parser2 = make_elm_or_print_err(\"h264parse\", \"h264-parser-2\", \"h264-parser-2\")\n", "# Use nvdec_h264 for hardware accelerated decode on GPU\n", "decoder2 = make_elm_or_print_err(\"nvv4l2decoder\", \"nvv4l2-decoder-2\",\"nvv4l2-decoder-2\")\n", "########### \n", "\n", "########## Defining Stream 3\n", "# Source element for reading from the file\n", "source3 = make_elm_or_print_err(\"filesrc\", \"file-source-3\",\"file-source-3\")\n", "# Since the data format in the input file is elementary h264 stream, we need a h264parser\n", "h264parser3 = make_elm_or_print_err(\"h264parse\", \"h264-parser-3\", \"h264-parser-3\")\n", "# Use nvdec_h264 for hardware accelerated decode on GPU\n", "decoder3 = make_elm_or_print_err(\"nvv4l2decoder\", \"nvv4l2-decoder-3\",\"nvv4l2-decoder-3\")\n", "########### \n", " \n", "# Create nvstreammux instance to form batches from one or more sources.\n", "streammux = make_elm_or_print_err(\"nvstreammux\", \"Stream-muxer\",\"Stream-muxer\") \n", "# Use nvinfer to run inferencing on decoder's output, behaviour of inferencing is set through config file\n", "pgie = make_elm_or_print_err(\"nvinfer\", \"primary-inference\" ,\"pgie\")\n", "# Use nvtracker to give objects unique-ids\n", "tracker = make_elm_or_print_err(\"nvtracker\", \"tracker\",'tracker')\n", "# Seconday inference for Finding Car Color\n", "sgie1 = make_elm_or_print_err(\"nvinfer\", \"secondary1-nvinference-engine\",'sgie1')\n", "# Seconday inference for Finding Car Make\n", "sgie2 = make_elm_or_print_err(\"nvinfer\", \"secondary2-nvinference-engine\",'sgie2')\n", "# Seconday inference for Finding Car Type\n", "sgie3 = make_elm_or_print_err(\"nvinfer\", \"secondary3-nvinference-engine\",'sgie3')\n", "# Creating Tiler to present more than one streams\n", "tiler=make_elm_or_print_err(\"nvmultistreamtiler\", \"nvtiler\",\"nvtiler\")\n", "# Use convertor to convert from NV12 to RGBA as required by nvosd\n", "nvvidconv = make_elm_or_print_err(\"nvvideoconvert\", \"convertor\",\"nvvidconv\")\n", "# Create OSD to draw on the converted RGBA buffer\n", "nvosd = make_elm_or_print_err(\"nvdsosd\", \"onscreendisplay\",\"nvosd\")\n", "# Use convertor to convert from NV12 to RGBA as required by nvosd\n", "nvvidconv2 = make_elm_or_print_err(\"nvvideoconvert\", \"convertor2\",\"nvvidconv2\")\n", "# Place an encoder instead of OSD to save as video file\n", "encoder = make_elm_or_print_err(\"avenc_mpeg4\", \"encoder\", \"Encoder\")\n", "# Parse output from Encoder \n", "codeparser = make_elm_or_print_err(\"mpeg4videoparse\", \"mpeg4-parser\", 'Code Parser')\n", "# Create a container\n", "container = make_elm_or_print_err(\"qtmux\", \"qtmux\", \"Container\")\n", "# Create Sink for storing the output \n", "sink = make_elm_or_print_err(\"filesink\", \"filesink\", \"Sink\")\n", "\n", "# # Create Sink for storing the output \n", "# fksink = make_elm_or_print_err(\"fakesink\", \"fakesink\", \"Sink\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have created the elements ,we can now set various properties for out pipeline at this point. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "############ Set properties for the Elements ############\n", "# Set Input Video files \n", "source1.set_property('location', INPUT_VIDEO_1)\n", "source2.set_property('location', INPUT_VIDEO_2)\n", "source3.set_property('location', INPUT_VIDEO_3)\n", "# Set Input Width , Height and Batch Size \n", "streammux.set_property('width', 1920)\n", "streammux.set_property('height', 1080)\n", "streammux.set_property('batch-size', 1)\n", "# Timeout in microseconds to wait after the first buffer is available \n", "# to push the batch even if a complete batch is not formed.\n", "streammux.set_property('batched-push-timeout', 4000000)\n", "# Set configuration file for nvinfer \n", "# Set Congifuration file for nvinfer \n", "pgie.set_property('config-file-path', \"../source_code/N1/dstest4_pgie_config.txt\")\n", "sgie1.set_property('config-file-path', \"../source_code/N1/dstest4_sgie1_config.txt\")\n", "sgie2.set_property('config-file-path', \"../source_code/N1/dstest4_sgie2_config.txt\")\n", "sgie3.set_property('config-file-path', \"../source_code/N1/dstest4_sgie3_config.txt\")\n", "#Set properties of tracker from tracker_config\n", "config = configparser.ConfigParser()\n", "config.read('../source_code/N1/dstest4_tracker_config.txt')\n", "config.sections()\n", "for key in config['tracker']:\n", " if key == 'tracker-width' :\n", " tracker_width = config.getint('tracker', key)\n", " tracker.set_property('tracker-width', tracker_width)\n", " if key == 'tracker-height' :\n", " tracker_height = config.getint('tracker', key)\n", " tracker.set_property('tracker-height', tracker_height)\n", " if key == 'gpu-id' :\n", " tracker_gpu_id = config.getint('tracker', key)\n", " tracker.set_property('gpu_id', tracker_gpu_id)\n", " if key == 'll-lib-file' :\n", " tracker_ll_lib_file = config.get('tracker', key)\n", " tracker.set_property('ll-lib-file', tracker_ll_lib_file)\n", " if key == 'll-config-file' :\n", " tracker_ll_config_file = config.get('tracker', key)\n", " tracker.set_property('ll-config-file', tracker_ll_config_file)\n", " if key == 'enable-batch-process' :\n", " tracker_enable_batch_process = config.getint('tracker', key)\n", " tracker.set_property('enable_batch_process', tracker_enable_batch_process)\n", " \n", " \n", "# Set display configurations for nvmultistreamtiler \n", "tiler_rows=int(2)\n", "tiler_columns=int(2)\n", "tiler.set_property(\"rows\",tiler_rows)\n", "tiler.set_property(\"columns\",tiler_columns)\n", "tiler.set_property(\"width\", TILED_OUTPUT_WIDTH)\n", "tiler.set_property(\"height\", TILED_OUTPUT_HEIGHT)\n", "\n", "# Set encoding properties and Sink configs\n", "encoder.set_property(\"bitrate\", 2000000)\n", "sink.set_property(\"location\", OUTPUT_VIDEO_NAME)\n", "sink.set_property(\"sync\", 0)\n", "sink.set_property(\"async\", 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now link all the elements in the order we prefer and create Gstreamer bus to feed all messages through it. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "########## Add and Link ELements in the Pipeline ########## \n", "\n", "print(\"Adding elements to Pipeline \\n\")\n", "pipeline.add(source1)\n", "pipeline.add(h264parser1)\n", "pipeline.add(decoder1)\n", "pipeline.add(source2)\n", "pipeline.add(h264parser2)\n", "pipeline.add(decoder2)\n", "pipeline.add(source3)\n", "pipeline.add(h264parser3)\n", "pipeline.add(decoder3)\n", "pipeline.add(streammux)\n", "pipeline.add(pgie)\n", "pipeline.add(tracker)\n", "pipeline.add(sgie1)\n", "pipeline.add(sgie2)\n", "pipeline.add(sgie3)\n", "pipeline.add(tiler)\n", "pipeline.add(nvvidconv)\n", "pipeline.add(nvosd)\n", "pipeline.add(nvvidconv2)\n", "pipeline.add(encoder)\n", "pipeline.add(codeparser)\n", "pipeline.add(container)\n", "pipeline.add(sink)\n", "\n", "\n", "print(\"Linking elements in the Pipeline \\n\")\n", "\n", "source1.link(h264parser1)\n", "h264parser1.link(decoder1)\n", "\n", "\n", "###### Create Sink pad and connect to decoder's source pad \n", "sinkpad1 = streammux.get_request_pad(\"sink_0\")\n", "if not sinkpad1:\n", " sys.stderr.write(\" Unable to get the sink pad of streammux \\n\")\n", " \n", "srcpad1 = decoder1.get_static_pad(\"src\")\n", "if not srcpad1:\n", " sys.stderr.write(\" Unable to get source pad of decoder \\n\")\n", " \n", "srcpad1.link(sinkpad1)\n", "\n", "######\n", "\n", "###### Create Sink pad and connect to decoder's source pad \n", "source2.link(h264parser2)\n", "h264parser2.link(decoder2)\n", "\n", "sinkpad2 = streammux.get_request_pad(\"sink_1\")\n", "if not sinkpad2:\n", " sys.stderr.write(\" Unable to get the sink pad of streammux \\n\")\n", " \n", "srcpad2 = decoder2.get_static_pad(\"src\")\n", "if not srcpad2:\n", " sys.stderr.write(\" Unable to get source pad of decoder \\n\")\n", " \n", "srcpad2.link(sinkpad2)\n", "\n", "######\n", "\n", "###### Create Sink pad and connect to decoder's source pad \n", "source3.link(h264parser3)\n", "h264parser3.link(decoder3)\n", "\n", "sinkpad3 = streammux.get_request_pad(\"sink_2\")\n", "if not sinkpad2:\n", " sys.stderr.write(\" Unable to get the sink pad of streammux \\n\")\n", " \n", "srcpad3 = decoder3.get_static_pad(\"src\")\n", "if not srcpad3:\n", " sys.stderr.write(\" Unable to get source pad of decoder \\n\")\n", " \n", "srcpad3.link(sinkpad3)\n", "\n", "######\n", "\n", "\n", "streammux.link(pgie)\n", "pgie.link(tracker)\n", "tracker.link(sgie1)\n", "sgie1.link(sgie2)\n", "sgie2.link(sgie3)\n", "sgie3.link(tiler)\n", "tiler.link(nvvidconv)\n", "nvvidconv.link(nvosd)\n", "nvosd.link(nvvidconv2)\n", "nvvidconv2.link(encoder)\n", "encoder.link(codeparser)\n", "codeparser.link(container)\n", "container.link(sink)\n", "\n", "# create an event loop and feed gstreamer bus mesages to it\n", "loop = GLib.MainLoop()\n", "bus = pipeline.get_bus()\n", "bus.add_signal_watch()\n", "bus.connect (\"message\", bus_call, loop)\n", "\n", "print(\"Added and Linked elements to pipeline\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our pipeline now carries the metadata forward, but we have not done anything with it until now. And as mentioned in the above pipeline diagram, we will create a callback function to write relevant data on the frame once called and create a sink pad in the nvosd element to call the function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# tiler_sink_pad_buffer_probe will extract metadata received on OSD sink pad\n", "# and update params for drawing rectangle, object information etc.\n", "def tiler_src_pad_buffer_probe(pad,info,u_data):\n", " #Intiallizing object counter with 0.\n", " obj_counter = {\n", " PGIE_CLASS_ID_VEHICLE:0,\n", " PGIE_CLASS_ID_PERSON:0,\n", " PGIE_CLASS_ID_BICYCLE:0,\n", " PGIE_CLASS_ID_ROADSIGN:0\n", " }\n", " # Set frame_number & rectangles to draw as 0 \n", " frame_number=0\n", " num_rects=0\n", " \n", " gst_buffer = info.get_buffer()\n", " if not gst_buffer:\n", " print(\"Unable to get GstBuffer \")\n", " return\n", "\n", " # Retrieve batch metadata from the gst_buffer\n", " # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the\n", " # C address of gst_buffer as input, which is obtained with hash(gst_buffer)\n", " batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))\n", " l_frame = batch_meta.frame_meta_list\n", " while l_frame is not None:\n", " try:\n", " # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta\n", " frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)\n", " except StopIteration:\n", " break\n", " \n", " # Get frame number , number of rectables to draw and object metadata\n", " frame_number=frame_meta.frame_num\n", " num_rects = frame_meta.num_obj_meta\n", " l_obj=frame_meta.obj_meta_list\n", " \n", " while l_obj is not None:\n", " try:\n", " # Casting l_obj.data to pyds.NvDsObjectMeta\n", " obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)\n", " except StopIteration:\n", " break\n", " # Increment Object class by 1 and Set Box border to Red color \n", " obj_counter[obj_meta.class_id] += 1\n", " obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)\n", " try: \n", " l_obj=l_obj.next\n", " except StopIteration:\n", " break\n", " ################## Setting Metadata Display configruation ############### \n", " # Acquiring a display meta object.\n", " display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)\n", " display_meta.num_labels = 1\n", " py_nvosd_text_params = display_meta.text_params[0]\n", " # Setting display text to be shown on screen\n", " py_nvosd_text_params.display_text = \"Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}\".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])\n", " # Now set the offsets where the string should appear\n", " py_nvosd_text_params.x_offset = 10\n", " py_nvosd_text_params.y_offset = 12\n", " # Font , font-color and font-size\n", " py_nvosd_text_params.font_params.font_name = \"Serif\"\n", " py_nvosd_text_params.font_params.font_size = 10\n", " # Set(red, green, blue, alpha); Set to White\n", " py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)\n", " # Text background color\n", " py_nvosd_text_params.set_bg_clr = 1\n", " # Set(red, green, blue, alpha); set to Black\n", " py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)\n", " # Using pyds.get_string() to get display_text as string to print in notebook\n", " print(pyds.get_string(py_nvosd_text_params.display_text))\n", " pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)\n", " \n", " ############################################################################\n", " # FPS Probe \n", " fps_streams[\"stream{0}\".format(frame_meta.pad_index)].get_fps()\n", " try:\n", " l_frame=l_frame.next\n", " except StopIteration:\n", " break\n", "\n", " return Gst.PadProbeReturn.OK\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tiler_src_pad=sgie3.get_static_pad(\"src\")\n", "if not tiler_src_pad:\n", " sys.stderr.write(\" Unable to get src pad \\n\")\n", "else:\n", " tiler_src_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_src_pad_buffer_probe, 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now with everything defined , we can start the playback and listen to the events." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List the sources\n", "print(\"Now playing...\")\n", "print(\"Starting pipeline \\n\")\n", "# start play back and listed to events\t\t\n", "pipeline.set_state(Gst.State.PLAYING)\n", "start_time = time.time()\n", "try:\n", " loop.run()\n", "except:\n", " pass\n", "# cleanup\n", "print(\"Exiting app\\n\")\n", "pipeline.set_state(Gst.State.NULL)\n", "Gst.Object.unref(pipeline)\n", "Gst.Object.unref(bus)\n", "print(\"--- %s seconds ---\" % (time.time() - start_time))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Convert video profile to be compatible with Jupyter notebook\n", "!ffmpeg -loglevel panic -y -an -i ../source_code/N1/ds_out.mp4 -vcodec libx264 -pix_fmt yuv420p -profile:v baseline -level 3 ../source_code/N1/output.mp4" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Display the Output\n", "from IPython.display import HTML\n", "HTML(\"\"\"\n", "