
indra4837 / yolov4_trt_ros

License: MIT
YOLOv4 object detector using TensorRT engine

Programming Languages

  • Python: 139,335 projects (#7 most used programming language)
  • C++: 36,643 projects (#6 most used programming language)
  • CUDA: 1,817 projects
  • Shell: 77,523 projects

Projects that are alternatives of or similar to yolov4_trt_ros

Pytorch Yolov4
PyTorch, ONNX and TensorRT implementation of YOLOv4
Stars: ✭ 3,690 (+4046.07%)
Mutual labels:  tensorrt, yolov3, yolov4, yolov4-tiny
Tensorflow Yolov4 Tflite
YOLOv4, YOLOv4-tiny, YOLOv3 and YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Converts YOLOv4 .weights to TensorFlow, TensorRT and TFLite.
Stars: ✭ 1,881 (+2013.48%)
Mutual labels:  tensorrt, yolov3, yolov3-tiny, yolov4
go-darknet
Go bindings for Darknet (YOLO v4 / v3)
Stars: ✭ 56 (-37.08%)
Mutual labels:  yolov3, yolov3-tiny, yolov4
Tensorrtx
Implementation of popular deep learning networks with TensorRT network definition API
Stars: ✭ 3,456 (+3783.15%)
Mutual labels:  tensorrt, yolov3, yolov4
ScaledYOLOv4
Scaled-YOLOv4: Scaling Cross Stage Partial Network
Stars: ✭ 1,944 (+2084.27%)
Mutual labels:  yolov3, yolov4, yolov4-tiny
Open-Source-Models
Address book for computer vision models.
Stars: ✭ 30 (-66.29%)
Mutual labels:  yolov3, yolov4, yolov4-tiny
yolov4-triton-tensorrt
This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server
Stars: ✭ 224 (+151.69%)
Mutual labels:  tensorrt, yolov4, yolov4-tiny
ros-yolo-sort
YOLO v3, v4, v5, v6, v7 + SORT tracking + ROS platform. Supporting: YOLO with Darknet, OpenCV(DNN), OpenVINO, TensorRT(tkDNN). SORT supports python(original) and C++. (Not Deep SORT)
Stars: ✭ 162 (+82.02%)
Mutual labels:  tensorrt, yolov4
Jetson Inference
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Stars: ✭ 5,191 (+5732.58%)
Mutual labels:  jetson, tensorrt
LibtorchTutorials
A code repository for PyTorch C++ (libtorch) tutorials.
Stars: ✭ 463 (+420.22%)
Mutual labels:  yolov4, yolov4-tiny
pytorch YOLO OpenVINO demo
No description or website provided.
Stars: ✭ 73 (-17.98%)
Mutual labels:  yolov3, yolov4
Paddlex
PaddlePaddle End-to-End Development Toolkit (a full-workflow deep-learning development tool)
Stars: ✭ 3,399 (+3719.1%)
Mutual labels:  jetson, yolov3
deepstream tao apps
Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
Stars: ✭ 274 (+207.87%)
Mutual labels:  tensorrt, yolov3
Torch-TensorRT
PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT
Stars: ✭ 1,216 (+1266.29%)
Mutual labels:  jetson, tensorrt
tensorrt-examples
TensorRT Examples (TensorRT, Jetson Nano, Python, C++)
Stars: ✭ 31 (-65.17%)
Mutual labels:  jetson, tensorrt
mediapipe plus
The purpose of this project is to apply mediapipe to more AI chips.
Stars: ✭ 38 (-57.3%)
Mutual labels:  jetson, tensorrt
play with tensorrt
Sample projects for TensorRT in C++
Stars: ✭ 39 (-56.18%)
Mutual labels:  jetson, tensorrt
Deepstream Project
A highly decoupled deployment project based on DeepStream, covering the full range of YOLO models and continuously expanding to other deployments such as OCR.
Stars: ✭ 120 (+34.83%)
Mutual labels:  tensorrt, yolov3
edge-tpu-tiny-yolo
Run Tiny YOLO-v3 on Google's Edge TPU USB Accelerator.
Stars: ✭ 94 (+5.62%)
Mutual labels:  yolov3, yolov3-tiny
object-detection-indonesian-traffic-signs-using-yolo-algorithm
Detection of Indonesia-specific traffic signs using a custom dataset and the You Only Look Once v4 deep-learning algorithm
Stars: ✭ 26 (-70.79%)
Mutual labels:  yolov3, yolov4

YOLOv4 with TensorRT engine

This package contains the yolov4_trt_node, which performs inference using NVIDIA's TensorRT engine.

The package works with both YOLOv3 and YOLOv4. Adjust the commands below according to the YOLO model you are using.



Setting up the environment

Install dependencies

Current Environment:

  • Jetson Xavier AGX
  • ROS Melodic
  • Ubuntu 18.04
  • JetPack 4.4
  • TensorRT 7+

Dependencies:

  • OpenCV 3.x
  • numpy 1.15.1
  • protobuf 3.8.0
  • pycuda 2019.1.2
  • onnx 1.4.1 (depends on protobuf)

Install all dependencies with the commands below (a quick sanity check follows the notes).

# Install pycuda (takes a while)
$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/dependencies
$ ./install_pycuda.sh

# Install protobuf (takes a while)
$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/dependencies
$ ./install_protobuf-3.8.0.sh

# Install onnx (depends on protobuf above)
$ sudo pip3 install onnx==1.4.1
  • Please also install jetson-inference.
  • Note: This package uses nodes similar to those in the ros_deep_learning package. Place a CATKIN_IGNORE file in that package to avoid a catkin_make error caused by duplicate node names.
  • If these scripts do not work for you, refer to the repository by jefflgaol for installing the above packages (and more) on Jetson ARM devices.
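
To confirm the installs succeeded before building, a quick import check like the one below can help (a minimal sketch; the expected versions are the ones listed above):

import numpy
import onnx
import google.protobuf
import pycuda.driver as cuda

print("numpy   :", numpy.__version__)            # expect 1.15.1
print("onnx    :", onnx.__version__)             # expect 1.4.1
print("protobuf:", google.protobuf.__version__)  # expect 3.8.0

cuda.init()  # fails if the CUDA driver is not usable
print("pycuda  :", cuda.Device.count(), "CUDA device(s) visible")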

Setting up the package

1. Clone the project into catkin_ws/src and build it

$ cd ~/catkin_ws/src
$ git clone https://github.com/indra4837/yolov4_trt_ros.git
$ cd ~/catkin_ws && catkin_make
$ source devel/setup.bash
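
If the build and sourcing worked, ROS should now be able to locate the package. A quick check (a sketch using rospkg, which ships with ROS Melodic):

import rospkg

# raises rospkg.ResourceNotFound if the workspace was not sourced correctly
path = rospkg.RosPack().get_path("yolov4_trt_ros")
print("yolov4_trt_ros found at:", path)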

2. Make libyolo_layer.so

$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/plugins
$ make

This will generate a libyolo_layer.so file, the plugin that implements the YOLO output layer for TensorRT.
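
To verify the plugin built into a loadable shared library, it can be opened with ctypes (a minimal sketch; the path assumes the default catkin_ws layout used above):

import ctypes
import os

so_path = os.path.expanduser(
    "~/catkin_ws/src/yolov4_trt_ros/plugins/libyolo_layer.so")
ctypes.CDLL(so_path)  # raises OSError if the library is missing or broken
print("libyolo_layer.so loaded OK")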

3. Place your YOLO .weights and .cfg files in the yolo folder

$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/yolo

Please name the weights and config files exactly as follows:

  • yolov4.weights
  • yolov4.cfg

Run the conversion script to convert the model to a TensorRT engine file:

$ ./convert_yolo_trt
  • Input the appropriate arguments
  • This conversion might take a while
  • The optimised TensorRT engine will be saved as yolov3-416.trt / yolov4-416.trt
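
A quick way to confirm the engine was written correctly is to deserialize it with the TensorRT Python API (a sketch; the filename and the relative plugin path are assumptions based on the layout above):

import ctypes
import tensorrt as trt

# the engine uses the custom YOLO layer plugin, so load it first
ctypes.CDLL("../plugins/libyolo_layer.so")

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("yolov4-416.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

assert engine is not None, "engine failed to deserialize"
print("engine loaded with", engine.num_bindings, "bindings")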

4. Change the class labels

$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/utils
$ vim yolo_classes.py
  • Change the class labels to suit your model
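
For reference, in jkjung-avt's original utils (which this package adapts) the labels live in a plain Python list in yolo_classes.py. A customised version might look like this sketch (the label names here are purely illustrative):

CUSTOM_CLASSES_LIST = [
    "traffic_light",   # class id 0
    "stop_sign",       # class id 1
    "pedestrian",      # class id 2
]

def get_cls_dict(category_num):
    """Map class IDs to label names; category_num should match the model."""
    return {i: n for i, n in enumerate(CUSTOM_CLASSES_LIST[:category_num])}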

5. Change the video_input and topic_name

$ cd ${HOME}/catkin_ws/src/yolov4_trt_ros/launch
  • yolov3_trt.launch: change the topic_name

  • yolov4_trt.launch: change the topic_name

  • video_source.launch: change the input format (refer to the jetson-inference documentation for the supported input formats)

    • video_source.launch requires jetson-inference to be installed
    • The default input is the CSI camera

Using the package

Running the package

Note: Run the launch files separately in different terminals

1. Run the video_source

# For CSI input
$ roslaunch yolov4_trt_ros video_source.launch input:=csi://0

# For video input
$ roslaunch yolov4_trt_ros video_source.launch input:=/path_to_video/video.mp4

# For USB camera
$ roslaunch yolov4_trt_ros video_source.launch input:=v4l2://0

2. Run the yolo detector

# For YOLOv3 (single input)
$ roslaunch yolov4_trt_ros yolov3_trt.launch

# For YOLOv4 (single input)
$ roslaunch yolov4_trt_ros yolov4_trt.launch

# For YOLOv4 (multiple inputs)
$ roslaunch yolov4_trt_ros yolov4_trt_batch.launch
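
Once the detector is running, its output can be consumed from any ROS node. The snippet below is only a sketch: the topic name and message type are assumptions, so check the launch files and node source for the names your setup actually uses.

import rospy
from vision_msgs.msg import Detection2DArray  # assumed message type

def on_detections(msg):
    # each detection carries a bounding box and per-class scores
    rospy.loginfo("received %d detections", len(msg.detections))

rospy.init_node("detection_listener")
rospy.Subscriber("/detections", Detection2DArray, on_detections)  # assumed topic
rospy.spin()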

3. For maximum performance

$ cd /usr/bin/
$ sudo ./nvpmodel -m 0	# Set the maximum-performance power mode
$ sudo ./jetson_clocks	# Maximise CPU/GPU clock speeds
  • These commands are referenced in this forum post
  • Please ensure the Jetson device is cooled appropriately to prevent overheating

Parameters

  • str model = "yolov3" or "yolov4"

  • str model_path = "/abs_path_to_model/"

  • int input_shape = 288/416/608

  • int category_num = 80

  • double conf_th = 0.5

  • bool show_img = True

  • Default input FPS from the CSI camera = 30.0

  • To change this, edit jetson-inference/utils/camera/gstCamera.cpp:

// In line 359, change this line
mOptions.frameRate = 15;

// to the desired frame rate
mOptions.frameRate = desired_frame_rate;
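
For orientation, a ROS node would typically read the parameters above along these lines (an illustrative sketch; the authoritative parameter names and defaults live in the launch files):

import rospy

rospy.init_node("yolov4_trt_example")

model        = rospy.get_param("~model", "yolov4")        # "yolov3" or "yolov4"
model_path   = rospy.get_param("~model_path", "/abs_path_to_model/")
input_shape  = rospy.get_param("~input_shape", 416)       # 288, 416 or 608
category_num = rospy.get_param("~category_num", 80)
conf_th      = rospy.get_param("~conf_th", 0.5)
show_img     = rospy.get_param("~show_img", True)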

Results obtained

Inference Results

Single Camera Input

Model        Hardware      FPS     Inference Time (s)
YOLOv4-416   Xavier AGX    40.0    0.025
YOLOv4-416   Jetson TX2    16.0    0.0625
  • Inference tests for YOLOv3/YOLOv4 and YOLOv3-tiny/YOLOv4-tiny on other Jetson devices, as well as multiple-camera-input tests, will be added in the future

1. Screenshots

Video_in: .mp4 video (640x360)

Tests Done on Xavier AGX

Average FPS: ~38

(Demo recordings: Video_Result1, Video_Result2, Video_Result3)


Licenses and References

1. TensorRT samples from jkjung-avt

Many thanks to jkjung-avt for his TensorRT samples project. I have referenced his source code and adapted it to ROS for robotics applications.

I also used the pycuda and protobuf installation scripts from his project.

That code is under the MIT License.

2. jetson-inference from dusty-nv

Many thanks to dusty-nv for his work on jetson-inference with ROS. I used the video_source input from his project for capturing video inputs.

That code is under the NVIDIA License.
