
walktree / Libtorch Yolov3

A Libtorch implementation of the YOLO v3 object detection algorithm

Projects that are alternatives to or similar to Libtorch Yolov3

Mobilenetv2 Yolov3
yolov3 with mobilenetv2 and efficientnet
Stars: ✭ 258 (-35.01%)
Mutual labels:  yolov3
Tensorrt
TensorRT-7 network library, covering common object detection, keypoint detection, face detection, OCR, etc.; can be trained on your own data
Stars: ✭ 294 (-25.94%)
Mutual labels:  yolov3
Tensorflow Yolov3
🔥 TensorFlow Code for technical report: "YOLOv3: An Incremental Improvement"
Stars: ✭ 3,498 (+781.11%)
Mutual labels:  yolov3
Cortex License Plate Reader Client
A client to connect to cortex-provisioned infrastructure on AWS to do license plate identification in real time.
Stars: ✭ 268 (-32.49%)
Mutual labels:  yolov3
Pytorch Yolo V3
A PyTorch implementation of the YOLO v3 object detection algorithm
Stars: ✭ 3,148 (+692.95%)
Mutual labels:  yolov3
Yolo Pytorch
YOLO for object detection tasks
Stars: ✭ 301 (-24.18%)
Mutual labels:  yolov3
PP-YOLO
The PP-YOLO object detection model implemented in PaddlePaddle
Stars: ✭ 59 (-85.14%)
Mutual labels:  yolov3
Msnhnet
🔥 A mini PyTorch inference framework inspired by Darknet (yolov3, yolov4, yolov5, unet, ...)
Stars: ✭ 357 (-10.08%)
Mutual labels:  yolov3
Fastmot
High-performance multiple object tracking based on YOLO, Deep SORT, and optical flow
Stars: ✭ 284 (-28.46%)
Mutual labels:  yolov3
Deep Sort Yolov4
People detection and optional tracking with a TensorFlow backend.
Stars: ✭ 306 (-22.92%)
Mutual labels:  yolov3
Pytorch Yolov4
PyTorch, ONNX and TensorRT implementation of YOLOv4
Stars: ✭ 3,690 (+829.47%)
Mutual labels:  yolov3
Pytorch 0.4 Yolov3
Yet another implementation of YOLOv3 with PyTorch 0.4.1 on Python 3
Stars: ✭ 284 (-28.46%)
Mutual labels:  yolov3
Tensorflow 2.x Yolov3
YOLOv3 implementation in TensorFlow 2.3.1
Stars: ✭ 300 (-24.43%)
Mutual labels:  yolov3
Mmdetection To Tensorrt
Convert mmdetection models to TensorRT; supports fp16, int8, batch input, dynamic shapes, etc.
Stars: ✭ 262 (-34.01%)
Mutual labels:  yolov3
Yoloface
Deep learning-based Face detection using the YOLOv3 algorithm (https://github.com/sthanhng/yoloface)
Stars: ✭ 339 (-14.61%)
Mutual labels:  yolov3
YOLOv3-Cloud-Tutorial
Everything you need in order to get YOLOv3 up and running in the cloud. Learn to train your custom YOLOv3 object detector in the cloud for free!
Stars: ✭ 68 (-82.87%)
Mutual labels:  yolov3
Simple Hrnet
Multi-person human pose estimation with HRNet in PyTorch
Stars: ✭ 299 (-24.69%)
Mutual labels:  yolov3
Invoice
VAT invoice OCR built on a Flask microservice architecture. Recognized invoice types: electronic general VAT invoice, general VAT invoice, special VAT invoice. Recognized fields: invoice code, invoice number, issue date, check code, after-tax amount, etc.
Stars: ✭ 381 (-4.03%)
Mutual labels:  yolov3
Pytorchnethub
Project code annotations + paper reproductions + algorithm competitions
Stars: ✭ 341 (-14.11%)
Mutual labels:  yolov3
Alturos.yolo
C# Yolo Darknet Wrapper (real-time object detection)
Stars: ✭ 308 (-22.42%)
Mutual labels:  yolov3

libtorch-yolov3

A Libtorch implementation of the YOLO v3 object detection algorithm, written in pure C++. It is fast, easy to integrate into your production code, and supports both CPU and GPU. Enjoy ~

This project is inspired by the PyTorch version; I rewrote it in C++.

Requirements

  1. LibTorch v1.0.0
  2. CUDA
  3. OpenCV (used only in the example)

To compile

  1. cmake3
  2. gcc 5.4+

mkdir build && cd build
cmake3 -DCMAKE_PREFIX_PATH="your libtorch path" ..

# if there are multiple versions of gcc, tell cmake which one you want to use, e.g.:
cmake3 -DCMAKE_PREFIX_PATH="your libtorch path" -DCMAKE_C_COMPILER=/usr/local/bin/gcc -DCMAKE_CXX_COMPILER=/usr/local/bin/g++ ..
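
For orientation, a minimal CMakeLists.txt for a LibTorch + OpenCV executable looks roughly like the sketch below. This is illustrative rather than the exact file shipped in this repository; the target name yolo-app matches the binary used later, and main.cpp is a placeholder source name.

# minimal sketch, not the repository's actual CMakeLists.txt
cmake_minimum_required(VERSION 3.0)
project(libtorch-yolov3)

find_package(Torch REQUIRED)   # located via -DCMAKE_PREFIX_PATH above
find_package(OpenCV REQUIRED)  # needed only for the example

add_executable(yolo-app main.cpp)                       # main.cpp is a placeholder
target_link_libraries(yolo-app ${TORCH_LIBRARIES} ${OpenCV_LIBS})
set_property(TARGET yolo-app PROPERTY CXX_STANDARD 11)  # matches gcc 5.4+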

Running the detector

The first thing you need to do is download the YOLO v3 weights file:

cd models
wget https://pjreddie.com/media/files/yolov3.weights 

On a single image:

./yolo-app ../imgs/person.jpg
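
The example uses OpenCV only to read and preprocess the input image. As a rough illustration of that step (the helper name load_image and the plain resize are hypothetical, not this repository's actual code), the usual LibTorch pattern looks like this:

#include <torch/torch.h>
#include <opencv2/opencv.hpp>

// Hypothetical helper: read an image and turn it into the 1x3x416x416
// float tensor a YOLO v3 network expects. Illustrative only.
torch::Tensor load_image(const std::string& path, int size = 416) {
    cv::Mat img = cv::imread(path);
    cv::cvtColor(img, img, cv::COLOR_BGR2RGB);  // OpenCV loads images as BGR
    cv::resize(img, img, cv::Size(size, size));
    img.convertTo(img, CV_32F, 1.0 / 255);      // scale pixel values to [0, 1]
    // from_blob does not own the data, so clone before img goes out of scope
    auto t = torch::from_blob(img.data, {size, size, 3}, torch::kFloat).clone();
    return t.permute({2, 0, 1}).unsqueeze(0);   // HWC -> NCHW with batch dim
}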

In my tests, it takes about 25 ms per image on a GTX 1080 Ti GPU. Please run the inference job more than once and average the cost; the first run includes warm-up overhead such as CUDA initialization, so a single measurement overstates the steady-state latency.
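
A minimal sketch of such an averaged measurement, assuming CUDA is available (dummy_forward is a stand-in for the real detector's forward pass):

#include <torch/torch.h>
#include <chrono>
#include <iostream>

int main() {
    torch::NoGradGuard no_grad;  // inference only, no autograd bookkeeping
    auto input = torch::rand({1, 3, 416, 416}, torch::kCUDA);  // YOLO v3's default 416x416 input
    // stand-in for the real forward pass; copying to CPU forces GPU synchronization
    auto dummy_forward = [&] { (input * 2).to(torch::kCPU); };

    dummy_forward();  // warm-up run, not timed
    const int iters = 20;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) dummy_forward();
    auto end = std::chrono::steady_clock::now();
    std::cout << "average: "
              << std::chrono::duration<double, std::milli>(end - start).count() / iters
              << " ms" << std::endl;
}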
