
Peterisfar / YOLOV3

License: MIT
YOLOv3 implemented in PyTorch



YOLOV3


Introduction

This is my own YOLOv3 implementation written in PyTorch, and also the first object detection model I have reproduced. The dataset used is PASCAL VOC, and the evaluation tool is the VOC2010 metric. The mAP now reaches the target score.

Going forward, I will continue to update the code to make it more concise and to add new, efficient tricks.

Note: this repository now supports model compression in the new branch model_compression.


Results

name            Train Dataset                Val Dataset   mAP(others)   mAP(mine)       notes
YOLOV3-448-544  2007trainval + 2012trainval  2007test      0.769         0.768 | -       baseline (augment + step lr)
YOLOV3-*-544    2007trainval + 2012trainval  2007test      0.793         0.803 | -       +multi-scale training
YOLOV3-*-544    2007trainval + 2012trainval  2007test      0.806         0.811 | -       +focal loss (note: conf_loss is lower at the start)
YOLOV3-*-544    2007trainval + 2012trainval  2007test      0.808         0.813 | -       +GIoU loss
YOLOV3-*-544    2007trainval + 2012trainval  2007test      0.812         0.821 | -       +label smooth
YOLOV3-*-544    2007trainval + 2012trainval  2007test      0.822         0.826 | -       +mixup
YOLOV3-*-544    2007trainval + 2012trainval  2007test      0.833         0.832 | 0.840   +cosine lr
YOLOV3-*-*      2007trainval + 2012trainval  2007test      0.858         0.858 | 0.860   +multi-scale test and flip, NMS threshold 0.45

Note :

  • YOLOV3-448-544 means the train image size is 448 and the test image size is 544; "*" means multi-scale.
  • mAP(mine) is given in the format (use_difficult mAP | no_difficult mAP).
  • At test time, the NMS threshold is 0.5 (except the last row, which uses 0.45; 0.45 increases the mAP) and the confidence threshold is 0.01.
  • Currently only single-GPU training and testing are supported.
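The greedy NMS applied at test time with the thresholds above can be sketched as follows. This is a minimal pure-Python re-implementation for illustration, not the repository's code; the (x1, y1, x2, y2) box layout is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop
    any remaining box whose IoU with it reaches the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Lowering iou_thresh from 0.5 to 0.45 suppresses more overlapping detections, which is the change the last table row applies.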

Environment

  • Nvidia GeForce RTX 2080 Ti
  • CUDA10.0
  • CUDNN7.0
  • ubuntu 16.04
  • python 3.5
# install packages
pip3 install -r requirements.txt --user

Brief

  • [x] Data Augment (RandomHorizontalFlip, RandomCrop, RandomAffine, Resize)
  • [x] Step lr Schedule
  • [x] Multi-scale Training (320 to 640)
  • [x] focal loss
  • [x] GIOU
  • [x] Label smooth
  • [x] Mixup
  • [x] cosine lr
  • [x] Multi-scale Test and Flip
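Among the tricks above, the GIoU loss can be sketched as below: GIoU extends IoU with a penalty based on the smallest enclosing box, so disjoint boxes still get a useful gradient. A minimal pure-Python sketch for (x1, y1, x2, y2) boxes, not the repository's exact code:

```python
def giou(a, b):
    """Generalized IoU of two (x1, y1, x2, y2) boxes: IoU minus the
    fraction of the smallest enclosing box not covered by the union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou_val = inter / union if union > 0 else 0.0
    # smallest axis-aligned box enclosing both a and b
    c_area = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    if c_area <= 0:
        return iou_val
    return iou_val - (c_area - union) / c_area

def giou_loss(a, b):
    """GIoU regression loss: 0 for identical boxes, up to 2 for far-apart ones."""
    return 1.0 - giou(a, b)
```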

Prepared work

1、Git clone the YOLOV3 repository

git clone https://github.com/Peterisfar/YOLOV3.git

Then update "PROJECT_PATH" in params.py.

2、Download dataset

  • Download the Pascal VOC dataset: VOC2012 trainval, VOC2007 trainval, and VOC2007 test. Put them in one directory and update "DATA_PATH" in params.py.
  • Convert the data format: convert the Pascal VOC *.xml annotations to the custom format (Image_path0   xmin0,ymin0,xmax0,ymax0,class0   xmin1,ymin1...)
cd YOLOV3 && mkdir data
cd utils
python3 voc.py # get train_annotation.txt and test_annotation.txt in data/
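Reading the converted annotation lines back might look like the sketch below. The field layout (an image path followed by space-separated "xmin,ymin,xmax,ymax,class" boxes) is inferred from the format string above; this is an illustration, not the repository's loader.

```python
def parse_annotation_line(line):
    """Split one line of train_annotation.txt / test_annotation.txt into
    an image path and a list of (xmin, ymin, xmax, ymax, class) tuples."""
    fields = line.strip().split()
    image_path, raw_boxes = fields[0], fields[1:]
    boxes = []
    for box in raw_boxes:
        xmin, ymin, xmax, ymax, cls = (int(v) for v in box.split(","))
        boxes.append((xmin, ymin, xmax, ymax, cls))
    return image_path, boxes
```

Note this assumes the image path itself contains no spaces.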

3、Download weight file

Make a weight/ directory in YOLOV3 and put the weight file in it.


Train

Run the following command to start training; the details are in config/yolov3_config_voc.py.

WEIGHT_PATH=weight/darknet53_448.weights

CUDA_VISIBLE_DEVICES=0 nohup python3 -u train.py --weight_path $WEIGHT_PATH --gpu_id 0 > nohup.log 2>&1 &

Notes:

  • During training, run "cat nohup.log" (or "tail -f nohup.log") to view the log.
  • Resuming training is supported: add --resume and it will load last.pt automatically.
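For the cosine learning-rate schedule used during training (the "+cosine lr" row in the table), a minimal sketch is below. The schedule shape is standard (optional linear warmup, then a half-cosine decay); the function name and the lr_init / lr_end / warmup_steps parameters are assumptions for illustration, not the repository's actual config values.

```python
import math

def cosine_lr(step, total_steps, lr_init=1e-3, lr_end=1e-6, warmup_steps=0):
    """Linear warmup for warmup_steps, then decay from lr_init to lr_end
    along half a cosine over the remaining steps."""
    if warmup_steps and step < warmup_steps:
        return lr_init * step / warmup_steps
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_end + 0.5 * (lr_init - lr_end) * (1 + math.cos(math.pi * t))
```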

Test

You should set your weight file path WEIGHT_PATH and your test data path DATA_TEST:

WEIGHT_PATH=weight/best.pt
DATA_TEST=./data/test # your own images

CUDA_VISIBLE_DEVICES=0 python3 test.py --weight_path $WEIGHT_PATH --gpu_id 0 --visiual $DATA_TEST --eval

The result images can be seen in data/.


TODO

  • [ ] Mish
  • [ ] OctConv
  • [ ] Custom data

Reference
