This repository contains the code for the *AUTOSAR C++ compliant deep learning inference with TensorRT* blog post.
All code was tested on a Jetson AGX Xavier 16 GB Developer Kit running JetPack 4.6 (rev 3), the latest release at the time of writing.
Kernel version:

```
agx@agx-desktop:~/agxnvme/pyTensorRT$ uname -r
4.9.253-tegra
```
PyTorch is needed to generate the ONNX model with torchvision. Please refer to our Python tutorial and use segmodel_to_onnx.py to generate the ONNX file, then save it as segmodel.onnx in the build directory.
Tested PyTorch and torchvision versions:

```python
>>> import torch
>>> import torchvision
>>> torch.__version__
'1.9.0'
>>> torchvision.__version__
'0.10.0'
```
Use the official guide to install PyTorch 1.9 if necessary.
Convert the torchvision model to ONNX format with:

```bash
# First go to the code for the Python tutorial and generate the ONNX file
cd {path-where-cloned}/learnopencv/industrial_cv_TensorRT_python
python3 segmodel_to_onnx.py
# this will generate the ONNX file
```
Ramp up the GPU frequency on the Jetson:

```bash
sudo su  # type your password
echo 1377000000 > /sys/devices/gpu.0/devfreq/17000000.gv11b/min_freq
# set the minimum frequency to ~1.4 GHz, the maximum supported by the Jetson AGX
exit  # exit superuser mode
```
Copy the ONNX file, then compile and run the C++ code:

```bash
cd {path-where-cloned}/learnopencv/industrial_cv_TensorRT_cpp
mkdir build
cp ../industrial_cv_TensorRT_python/segmodel.onnx ./build
# copy the ONNX file to the build directory
cd build
cmake -DCMAKE_BUILD_TYPE=Debug ../
make
./trt_test ./segmodel.onnx
# this will reproduce the FPS numbers from the Python tutorial for FP16
```
These numbers are the same as those we got with the Python API. Refer to our TensorRT Python tutorial for details.
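For orientation, the sketch below shows how an FP16 TensorRT engine is typically built from an ONNX file with the TensorRT C++ API, which is the API this repository's inferutils.cpp and sample.cpp are built on. It is a minimal, self-contained example assuming TensorRT 8.0 as shipped with JetPack 4.6; it is not the repository's actual code, and the class and variable names are illustrative.

```cpp
// Minimal sketch (not the repository's actual inferutils.cpp/sample.cpp):
// parse an ONNX file and build an FP16 TensorRT engine with the C++ API.
// Assumes TensorRT 8.0 headers and libraries (JetPack 4.6).
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <iostream>
#include <memory>

// Logger required by the TensorRT builder and ONNX parser.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        std::cerr << "usage: " << argv[0] << " <model.onnx>" << std::endl;
        return 1;
    }

    Logger logger;

    // Create the builder and an explicit-batch network (required for ONNX models).
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    const auto explicitBatch =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(explicitBatch));

    // Parse the ONNX file produced by segmodel_to_onnx.py.
    auto parser = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile(argv[1], static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse " << argv[1] << std::endl;
        return 1;
    }

    // Enable FP16 when the hardware supports it (Xavier does).
    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    config->setMaxWorkspaceSize(1ULL << 30);  // 1 GiB scratch space for tactic selection
    if (builder->platformHasFastFp16())
        config->setFlag(nvinfer1::BuilderFlag::kFP16);

    // Build the optimized engine; this is the slow, one-time step.
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(builder->buildEngineWithConfig(*network, *config));
    if (!engine)
    {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }
    std::cout << "FP16 engine built with " << engine->getNbBindings() << " bindings" << std::endl;
    return 0;
}
```

Setting BuilderFlag::kFP16 is what allows TensorRT to pick half-precision kernels on Xavier, which is where the FP16 throughput gain over FP32 comes from.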
Want to become an expert in AI? AI Courses by OpenCV is a great place to start.