
Converting PyTorch Models to TensorRT to Accelerate Inference

Date: 2023-05-21
一、What ONNX and TensorRT are

ONNX

You can train your model in any framework of your choice and then convert it to the ONNX format.
The huge benefit of having a common format is that the software or hardware that loads your model at run time only needs to be compatible with ONNX.
Models from different frameworks (PyTorch, TensorFlow, MXNet, etc.) can all be converted to one common format (ONNX), which makes it easy to load them on different software and hardware platforms.

TensorRT

NVIDIA’s TensorRT is an SDK for high performance deep learning inference.
It provides APIs to do inference for pre-trained models and generates optimized runtime engines for your platform.
It accelerates model inference along several axes: numerical precision, GPU memory usage, and hardware-specific optimization.

二、Environment

Install PyTorch, ONNX, and OpenCV
Install TensorRT
Download and install NVIDIA CUDA 10.0 or later following the official instructions: link
Download and extract the cuDNN library for your CUDA version (login required): link
Download and extract the NVIDIA TensorRT library for your CUDA version (login required): link. The minimum required version is 6.0.1.5. Please follow the Installation Guide for your system and don't forget to install the Python components.
Add the absolute paths of the CUDA, TensorRT, and cuDNN libraries to the environment variable PATH or LD_LIBRARY_PATH
Install PyCUDA
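As a quick sanity check, here is a minimal sketch (assuming the Python packages torch, onnx, tensorrt, and pycuda were installed as above) that prints the installed versions and confirms the GPU is visible:

import torch
import onnx
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates and activates a CUDA context

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("ONNX:", onnx.__version__)
print("TensorRT:", trt.__version__)
print("GPU:", cuda.Device(0).name())

If every import succeeds and the TensorRT version printed is at least 6.0.1.5, the environment is ready.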

三、Convert

1、Load and launch a pre-trained model using PyTorch
2、Convert the PyTorch model to ONNX format
3、Visualize the ONNX model
4、Initialize the model in TensorRT

(Steps 1-3 are sketched below; step 4 is covered in the next paragraph.)
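A minimal sketch of steps 1-3. The resnet50 model, the 1x3x224x224 input shape, and the file name resnet50.onnx are illustrative assumptions, not fixed by this article:

import torch
import torchvision

# step 1: load and launch a pre-trained model
model = torchvision.models.resnet50(pretrained=True)
model.eval()

# step 2: convert to ONNX; a dummy input fixes the graph's input shape
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet50.onnx",
                  export_params=True,
                  input_names=["input"], output_names=["output"])

# step 3: open resnet50.onnx in a viewer such as Netron to inspect the graph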

Now it's time to parse the ONNX model and initialize the TensorRT Context and Engine. To do this we need to create an instance of Builder. The builder can create a Network and generate an Engine (optimized for your platform/hardware) from this network. When we create the Network we can define its structure through flags, but in our case it's enough to use the default flag, which means all tensors have an implicit batch dimension. With the Network definition we can create an instance of Parser and, finally, parse our ONNX file.
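A minimal sketch of this Builder/Network/Parser flow, written against the TensorRT 6/7 Python API matching the minimum version above (TensorRT 8+ deprecates build_cuda_engine; the file name resnet50.onnx continues the sketch from the previous section):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()  # default flags: implicit batch dimension
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("resnet50.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

builder.max_batch_size = 1
builder.max_workspace_size = 1 << 30  # scratch space TensorRT may use per layer

engine = builder.build_cuda_engine(network)  # the slow, platform-specific step
context = engine.create_execution_context()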
Tips: Initialization can take a lot of time because TensorRT tries to find the best and fastest way to execute your network on your platform. To do this only once and then reuse the already created engine, you can serialize the engine. Serialized engines are not portable across different GPU models, platforms, or TensorRT versions; engines are specific to the exact hardware and software they were built on.
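A sketch of the serialize/deserialize round trip (resnet50.engine is an assumed file name; the runtime path skips the expensive builder step entirely):

# build once, then persist the optimized engine
with open("resnet50.engine", "wb") as f:
    f.write(engine.serialize())

# on later runs, deserialize instead of rebuilding
runtime = trt.Runtime(TRT_LOGGER)
with open("resnet50.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

Remember the caveat above: an engine serialized on one GPU model or TensorRT version will generally fail to deserialize on another.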

5、Main pipeline
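A minimal sketch of the pipeline with PyCUDA, continuing from the engine and context created above (binding 0 as input, binding 1 as output, and float32 buffers are assumptions that hold for the single-input classifier sketched earlier):

import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context

# page-locked host buffers sized from the engine's bindings
h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
stream = cuda.Stream()

h_input[:] = np.random.rand(h_input.size)  # stand-in for a preprocessed image

# host -> device, run the network, device -> host, then wait for the stream
cuda.memcpy_htod_async(d_input, h_input, stream)
context.execute_async(batch_size=1,
                      bindings=[int(d_input), int(d_output)],
                      stream_handle=stream.handle)
cuda.memcpy_dtoh_async(h_output, d_output, stream)
stream.synchronize()

print("predicted class:", int(np.argmax(h_output)))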

References (recommended careful reading):
https://learnopencv.com/how-to-convert-a-model-from-pytorch-to-tensorrt-and-speed-up-inference/
https://www.cnblogs.com/mrlonely2018/p/14842107.html
https://learnopencv.com/how-to-run-inference-using-tensorrt-c-api/
https://blog.csdn.net/yanggg1997/article/details/111587687
