
Wulingtian / yolov5_tensorrt_int8_tools

Licence: other
TensorRT int8 quantization of YOLOv5 ONNX models


Projects that are alternatives to or similar to yolov5_tensorrt_int8_tools

Deepstream Project
This is a highly separated deployment project based on Deepstream , including the full range of Yolo and continuously expanding deployment projects such as Ocr.
Stars: ✭ 120 (+14.29%)
Mutual labels:  tensorrt, onnx, yolov5
yolov5 tensorrt int8
TensorRT int8 quantized deployment of the yolov5s model, measured at 3.3 ms per frame!
Stars: ✭ 112 (+6.67%)
Mutual labels:  tensorrt, int8, yolov5
YOLOv5-Lite
🍅🍅🍅YOLOv5-Lite: lighter, faster and easier to deploy. Evolved from yolov5 and the size of model is only 930+kb (int8) and 1.7M (fp16). It can reach 10+ FPS on the Raspberry Pi 4B when the input size is 320×320~
Stars: ✭ 1,230 (+1071.43%)
Mutual labels:  tensorrt, yolov5
pytorch YOLO OpenVINO demo
No description or website provided.
Stars: ✭ 73 (-30.48%)
Mutual labels:  onnx, yolov5
yolov5-deepsort-tensorrt
A C++ implementation of YOLOv5 and DeepSORT
Stars: ✭ 207 (+97.14%)
Mutual labels:  tensorrt, yolov5
Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
Stars: ✭ 4,012 (+3720.95%)
Mutual labels:  tensorrt, onnx
nanodet tensorrt int8
nanodet int8 quantization, measured inference of 2 ms per frame!
Stars: ✭ 37 (-64.76%)
Mutual labels:  tensorrt, int8
deepvac
PyTorch Project Specification.
Stars: ✭ 507 (+382.86%)
Mutual labels:  tensorrt, onnx
onnx2tensorRt
tensorRt-inference darknet2onnx pytorch2onnx mxnet2onnx python version
Stars: ✭ 14 (-86.67%)
Mutual labels:  tensorrt, onnx
mediapipe plus
The purpose of this project is to apply mediapipe to more AI chips.
Stars: ✭ 38 (-63.81%)
Mutual labels:  tensorrt, onnx
ros-yolo-sort
YOLO v3, v4, v5, v6, v7 + SORT tracking + ROS platform. Supporting: YOLO with Darknet, OpenCV(DNN), OpenVINO, TensorRT(tkDNN). SORT supports python(original) and C++. (Not Deep SORT)
Stars: ✭ 162 (+54.29%)
Mutual labels:  tensorrt, yolov5
YOLOv4MLNet
Use the YOLO v4 and v5 (ONNX) models for object detection in C# using ML.Net
Stars: ✭ 61 (-41.9%)
Mutual labels:  onnx, yolov5
Tensorrtx
Implementation of popular deep learning networks with TensorRT network definition API
Stars: ✭ 3,456 (+3191.43%)
Mutual labels:  tensorrt, yolov5
Pytorch Yolov4
PyTorch, ONNX and TensorRT implementation of YOLOv4
Stars: ✭ 3,690 (+3414.29%)
Mutual labels:  tensorrt, onnx
flexible-yolov5
More readable and flexible yolov5 with more backbones (resnet, shufflenet, mobilenet, efficientnet, hrnet, swin-transformer), extra modules (cbam, dcn, and so on), and tensorrt
Stars: ✭ 282 (+168.57%)
Mutual labels:  tensorrt, yolov5
epic-awesome-gamer
🍷 Gracefully claim weekly free games and monthly content from Epic Store.
Stars: ✭ 600 (+471.43%)
Mutual labels:  onnx, yolov5
yolov5 deepsort tensorrt cpp
This repo is a C++ version of yolov5_deepsort_tensorrt. Packing all C++ programs into .so files, using Python script to call C++ programs further.
Stars: ✭ 21 (-80%)
Mutual labels:  tensorrt, yolov5
RepVGG TensorRT int8
RepVGG TensorRT int8 quantization, measured inference under 1 ms per frame!
Stars: ✭ 35 (-66.67%)
Mutual labels:  tensorrt, int8
torch-model-compression
An automated model-structure analysis and modification toolset for PyTorch models, including a model-compression algorithm library that analyzes model structure automatically
Stars: ✭ 126 (+20%)
Mutual labels:  tensorrt, onnx
InsightFace-REST
InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.
Stars: ✭ 308 (+193.33%)
Mutual labels:  tensorrt, onnx

Environment setup

Ubuntu: 18.04

CUDA: 11.0

cuDNN: 8.0

TensorRT: 7.2.1.6

OpenCV: 3.4.2

Prebuilt installation packages for CUDA, cuDNN, TensorRT, and OpenCV (already compiled; you can also download and build them yourself from the official sites) are available at: https://pan.baidu.com/s/1dpMRyzLivnBAca2c_DIgGw password: 0rct

CUDA installation

If an NVIDIA driver is already installed on the system, remove it with:

sudo apt-get purge nvidia*

Disable the nouveau driver by running:

sudo vim /etc/modprobe.d/blacklist.conf

Append the following line at the end of the file:

blacklist nouveau

Then run:

sudo update-initramfs -u

chmod +x cuda_11.0.2_450.51.05_linux.run

sudo ./cuda_11.0.2_450.51.05_linux.run

When asked whether to accept the license agreement, type accept.

Then select Install.

Finally, press Enter to start the installation.

vim ~/.bashrc and add the following lines:

export PATH=/usr/local/cuda-11.0/bin:$PATH

export LD_LIBRARY_PATH=/usr/local/cuda-11.0/lib64:$LD_LIBRARY_PATH

source ~/.bashrc to apply the changes

cuDNN installation

tar -xzvf cudnn-11.0-linux-x64-v8.0.4.30.tgz

cd cuda/include

sudo cp *.h /usr/local/cuda-11.0/include

cd ../lib64

sudo cp libcudnn* /usr/local/cuda-11.0/lib64

TensorRT and OpenCV installation

Go to your home directory:

tar -xzvf TensorRT-7.2.1.6.Ubuntu-18.04.x86_64-gnu.cuda-11.0.cudnn8.0.tar.gz

cd TensorRT-7.2.1.6/python; this directory contains TensorRT wheels for four Python versions

sudo pip3 install tensorrt-7.2.1.6-cp37-none-linux_x86_64.whl (pick the wheel that matches your Python version)

pip install pycuda to install the Python CUDA bindings
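
As a quick sanity check (not part of the original instructions), you can verify that the tensorrt and pycuda Python bindings import correctly; the printed version depends on the wheel you installed:

    import tensorrt as trt
    import pycuda.driver as cuda
    import pycuda.autoinit  # creates a CUDA context on import

    # If both imports succeed, the Python bindings are installed correctly.
    print("TensorRT version:", trt.__version__)
    print("CUDA device 0:", cuda.Device(0).name())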

Go to your home directory:

unzip opencv-3.4.2.zip so it is ready for the inference step

Converting the yolov5s model to ONNX

pip install onnx

pip install onnx-simplifier

git clone https://github.com/ultralytics/yolov5.git

cd yolov5/models

vim common.py

In the BottleneckCSP class, replace the activation function with ReLU, i.e. change it to self.act = nn.ReLU(inplace=True). TensorRT int8 quantization is unstable with LeakyReLU (this is a deep pitfall, so make sure to avoid it).
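
A minimal sketch of the kind of change intended, using a simplified stand-in for the real BottleneckCSP class (the actual class in models/common.py has more layers; only the activation line matters here):

    import torch.nn as nn

    class BottleneckCSPStub(nn.Module):  # simplified stand-in, not the real yolov5 class
        def __init__(self, channels=64):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
            self.bn = nn.BatchNorm2d(channels)
            # was: self.act = nn.LeakyReLU(0.1, inplace=True)
            self.act = nn.ReLU(inplace=True)  # ReLU quantizes more reliably under TensorRT int8
        def forward(self, x):
            return self.act(self.bn(self.conv(x)))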

After training your model:

cd yolov5

python models/export.py --weights <path to your trained weights> --img-size <training input size>

python3 -m onnxsim <exported onnx model name> yolov5s-simple.onnx to obtain the final simplified ONNX model
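
If you prefer to simplify the model from Python instead of the CLI above, onnx-simplifier also exposes a simplify() function; the file names below are placeholders:

    import onnx
    from onnxsim import simplify

    model = onnx.load("yolov5s.onnx")              # model exported by models/export.py
    model_simp, ok = simplify(model)               # fold constants, remove redundant nodes
    assert ok, "simplified ONNX model failed the validation check"
    onnx.save(model_simp, "yolov5s-simple.onnx")   # final simplified model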

Converting the ONNX model to an int8 TensorRT engine

git clone https://github.com/Wulingtian/yolov5_tensorrt_int8_tools.git (a star is appreciated)

cd yolov5_tensorrt_int8_tools

vim convert_trt_quant.py and edit the following parameters (example values are sketched after the list):

BATCH_SIZE: number of images fed in per calibration batch

BATCH: number of calibration batches to run

height, width: input image height and width

CALIB_IMG_DIR: path to the training images used for calibration

onnx_model_path: path to the ONNX model
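
As a rough illustration, the edited section of convert_trt_quant.py might look like the following; the values are placeholders, so substitute your own paths and sizes:

    BATCH_SIZE = 1                               # images per calibration batch
    BATCH = 100                                  # number of calibration batches
    height = 640                                 # network input height
    width = 640                                  # network input width
    CALIB_IMG_DIR = "/path/to/train/images"      # training images used for calibration
    onnx_model_path = "yolov5s-simple.onnx"      # simplified ONNX model from the previous step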

python convert_trt_quant.py; the quantized model is saved to the models_save directory
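
Optionally, you can sanity-check the generated engine from Python by deserializing it and listing its bindings; the engine file name below is a placeholder, so use the .trt file actually written to models_save:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    # placeholder engine file name; use the .trt file produced by convert_trt_quant.py
    with open("models_save/yolov5s-simple_int8.trt", "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())

    # Print every input/output binding with its shape and dtype.
    for i in range(engine.num_bindings):
        print(engine.get_binding_name(i), engine.get_binding_shape(i), engine.get_binding_dtype(i))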

TensorRT model inference

git clone https://github.com/Wulingtian/yolov5_tensorrt_int8.git (a star is appreciated)

cd yolov5_tensorrt_int8

vim CMakeLists.txt

Set the USER_DIR variable to your own home directory.

vim yolov5s_infer.cc and edit the following parameters:

output_name1, output_name2, output_name3: the yolov5 model has three outputs

You can inspect the model's output names with netron.

pip install netron to install netron

vim netron_yolov5s.py and paste in the following:

    import netron

    netron.start('path to your simplified onnx model here', port=3344)

python netron_yolov5s.py to view the model's output names
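
Alternatively, a short snippet using the onnx package can list the output names without a GUI; the model path is a placeholder:

    import onnx

    model = onnx.load("yolov5s-simple.onnx")      # simplified model from the export step
    print([output.name for output in model.graph.output])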

trt_model_path: the quantized TensorRT inference engine (the file with the .trt suffix in the models_save directory)

test_img: path to the test image

INPUT_W, INPUT_H: input image width and height

NUM_CLASS: number of classes the model was trained on

NMS_THRESH: NMS threshold

CONF_THRESH: confidence threshold

With the parameters configured:

mkdir build

cd build

cmake ..

make

./YoloV5sEngine prints the average inference time and saves the predicted image to the current directory. Deployment is complete!