
kunnnnethan / R-YOLOv4

Licence: other
This is a PyTorch-based R-YOLOv4 implementation that combines the YOLOv4 model with the loss function from R3Det for arbitrary-oriented object detection.

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives of or similar to R-YOLOv4

Streamlit-Applications
Deep Learning and Computer Vision Applications using Streamlit
Stars: ✭ 55 (-9.84%)
Mutual labels:  yolov4
Comet.Box
Collection of Object Detection and Segmentation Pipelines 🛸🚀
Stars: ✭ 24 (-60.66%)
Mutual labels:  yolov4
onnx tensorrt project
Support Yolov5(4.0)/Yolov5(5.0)/YoloR/YoloX/Yolov4/Yolov3/CenterNet/CenterFace/RetinaFace/Classify/Unet. use darknet/libtorch/pytorch/mxnet to onnx to tensorrt
Stars: ✭ 145 (+137.7%)
Mutual labels:  yolov4
YOLOv4MLNet
Use the YOLO v4 and v5 (ONNX) models for object detection in C# using ML.Net
Stars: ✭ 61 (+0%)
Mutual labels:  yolov4
go-darknet
Go bindings for Darknet (YOLO v4 / v3)
Stars: ✭ 56 (-8.2%)
Mutual labels:  yolov4
yolov4-opencv-cpp-python
Example of using YOLO v4 with OpenCV, C++ and Python
Stars: ✭ 38 (-37.7%)
Mutual labels:  yolov4
yolov34-cpp-opencv-dnn
Four YOLO object detectors based on OpenCV, implemented in both C++ and Python; they depend only on the OpenCV library to run
Stars: ✭ 152 (+149.18%)
Mutual labels:  yolov4
onnx2tensorRt
tensorRt-inference darknet2onnx pytorch2onnx mxnet2onnx python version
Stars: ✭ 14 (-77.05%)
Mutual labels:  yolov4
Deep-Learning-with-GoogleColab
Deep Learning Applications (Darknet - YOLOv3, YOLOv4 | DeOldify - Image Colorization, Video Colorization | Face-Recognition) with Google Colaboratory - on the free Tesla K80/Tesla T4/Tesla P100 GPU - using Keras, Tensorflow and PyTorch.
Stars: ✭ 63 (+3.28%)
Mutual labels:  yolov4
YOLOv4-Hat-detection
Safety helmet wearing detection based on YOLOv4
Stars: ✭ 57 (-6.56%)
Mutual labels:  yolov4
simpleAICV-pytorch-ImageNet-COCO-training
SimpleAICV:pytorch training example on ImageNet(ILSVRC2012)/COCO2017/VOC2007+2012 datasets.Include ResNet/DarkNet/RetinaNet/FCOS/CenterNet/TTFNet/YOLOv3/YOLOv4/YOLOv5/YOLOX.
Stars: ✭ 276 (+352.46%)
Mutual labels:  yolov4
OrientedRepPoints DOTA
Oriented Object Detection: Oriented RepPoints + Swin Transformer/ReResNet
Stars: ✭ 62 (+1.64%)
Mutual labels:  oriented-object-detection
Pruned-OpenVINO-YOLO
Deploy the pruned YOLOv3/v4/v4-tiny/v4-tiny-3l model on OpenVINO embedded devices
Stars: ✭ 46 (-24.59%)
Mutual labels:  yolov4
ScaledYOLOv4
Scaled-YOLOv4: Scaling Cross Stage Partial Network
Stars: ✭ 1,944 (+3086.89%)
Mutual labels:  yolov4
GGHL
This is the implementation of GGHL (A General Gaussian Heatmap Label Assignment for Arbitrary-Oriented Object Detection)
Stars: ✭ 309 (+406.56%)
Mutual labels:  oriented-object-detection
YOLOv4-PyTorch
PyTorch re-implementation of YOLOv4 architecture
Stars: ✭ 44 (-27.87%)
Mutual labels:  yolov4
odam
ODAM - Object detection and Monitoring
Stars: ✭ 16 (-73.77%)
Mutual labels:  yolov4
JDet
JDet is an object detection benchmark based on Jittor. Mainly focus on aerial image object detection (oriented object detection).
Stars: ✭ 81 (+32.79%)
Mutual labels:  oriented-object-detection
vehicles-counting-yolov4-deepsort
A project for counting vehicles using YOLOv4 + DeepSORT + Flask + Ngrok + TF2
Stars: ✭ 23 (-62.3%)
Mutual labels:  yolov4
multi-camera-pig-tracking
Official Implementation of "Tracking Grow-Finish Pigs Across Large Pens Using Multiple Cameras"
Stars: ✭ 25 (-59.02%)
Mutual labels:  yolov4

R-YOLOv4

This is a PyTorch-based R-YOLOv4 implementation that combines the YOLOv4 model with the loss function from R3Det for arbitrary-oriented object detection. (Final project for the NCKU Introduction to Artificial Intelligence course.)

Introduction

The objective of this project is to adapt the YOLOv4 model to detect oriented objects. As a result, the original loss function of the model has to be modified. I obtained good results by increasing the number of anchor boxes with different rotation angles and by combining the smooth-L1-IoU loss function proposed in R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object into the original bounding-box loss.
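
For illustration only, the "more anchor boxes with different rotation angles" idea can be sketched as follows: each base (w, h) anchor is replicated at several angles. The angle set and data layout below are assumptions made for this example, not the repository's actual anchor configuration.

    import math

    def rotated_anchors(base_anchors,
                        angles=(-math.pi / 3, -math.pi / 6, 0.0,
                                math.pi / 6, math.pi / 3, math.pi / 2)):
        """Replicate each (w, h) anchor at several rotation angles (illustrative)."""
        return [(w, h, a) for (w, h) in base_anchors for a in angles]

    # Example: three base anchors become 3 * 6 = 18 rotated anchors.
    anchors = rotated_anchors([(12, 16), (19, 36), (40, 28)])
    print(len(anchors))  # 18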

Features


Loss Function (only for x, y, w, h, theta)

(figure: loss)

(figure: angle)


Scheduler

Cosine Annealing with Warmup (Reference: Cosine Annealing with Warmup for PyTorch)
(figure: scheduler)
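
A minimal sketch of a cosine-annealing-with-warmup schedule, using PyTorch's built-in LambdaLR rather than the third-party scheduler referenced above (whose restart/cycle behaviour may differ), is:

    import math
    import torch

    def cosine_warmup_lambda(warmup_steps, total_steps, min_ratio=0.01):
        """Return an LR multiplier: linear warmup, then cosine decay to min_ratio."""
        def multiplier(step):
            if step < warmup_steps:
                return (step + 1) / max(1, warmup_steps)
            progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
            return min_ratio + (1.0 - min_ratio) * 0.5 * (1.0 + math.cos(math.pi * progress))
        return multiplier

    model = torch.nn.Linear(8, 4)  # placeholder model for the example
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=cosine_warmup_lambda(warmup_steps=500, total_steps=6000))
    # call optimizer.step() and then scheduler.step() once per training iteration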


Recall

(figure: recall)

As the paper suggests, I obtained better results with **f(ariou) = exp(1 - ariou) - 1**, so I used it in my loss function.
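
For intuition, here is a minimal sketch (not the repository's actual code) of a smooth-L1 regression loss over (x, y, w, h, theta) scaled by f(ariou) = exp(1 - ariou) - 1, where ariou is assumed to be the approximate rotated IoU between the predicted and target boxes, computed elsewhere:

    import torch
    import torch.nn.functional as F

    def smooth_l1_iou_loss(pred, target, ariou, beta=1.0):
        """Smooth-L1 over the five box parameters, scaled by f(ariou).

        pred, target: (N, 5) tensors of (x, y, w, h, theta).
        ariou: (N,) approximate rotated IoU in [0, 1], computed elsewhere.
        Illustration of the idea only, not this repository's implementation.
        """
        reg = F.smooth_l1_loss(pred, target, beta=beta, reduction="none").sum(dim=-1)
        modulation = torch.exp(1.0 - ariou) - 1.0  # f(ariou) = exp(1 - ariou) - 1
        return (reg * modulation).mean()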

Usage

  1. Clone and Setup Environment

    $ git clone https://github.com/kunnnnethan/R-YOLOv4.git
    $ cd R-YOLOv4/
    

    Create Conda Environment

    $ conda env create -f environment.yml
    

    Or Create a Python Virtual Environment

    $ python3.8 -m venv your-environment-name
    $ source your-environment-name/bin/activate
    $ pip3 install torch torchvision torchaudio
    $ pip install -r requirements.txt
    
  2. Download pretrained weights
    weights

  3. Make sure your file arrangement looks like the following
    Note that each dataset folder in data should be split into three subfolders, namely train, test, and detect.

    R-YOLOv4/
    ├── train.py
    ├── test.py
    ├── detect.py
    ├── xml2txt.py
    ├── environment.yml
    ├── requirements.txt
    ├── model/
    ├── datasets/
    ├── lib/
    ├── outputs/
    ├── weights/
    │   ├── pretrained/ (for training)
    │   └── UCAS-AOD/ (for testing and detection)
    └── data/
        └── UCAS-AOD/
            ├── class.names
            ├── train/
            │   ├── ...png
            │   └── ...txt
            ├── test/
            │   ├── ...png
            │   └── ...txt
            └── detect/
                └── ...png
    
  4. Train, Test, and Detect
    Please refer to lib/options.py to check out all the arguments.

Train

I have implemented methods for loading and training three different datasets: UCAS-AOD, DOTA, and a custom dataset. You can check out how these datasets are loaded into the model in /datasets. The angle of each bounding box is limited to (-pi/2, pi/2], and the height of each bounding box is always longer than its width.

You can run experiments/display_inputs.py to visualize whether your data is loaded successfully.
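
As an illustration of the convention above (not necessarily the exact code in /datasets), a rotated box can be normalized so that its height is the longer side and its angle falls in (-pi/2, pi/2]:

    import math

    def normalize_rbox(cx, cy, w, h, theta):
        """Enforce h >= w and theta in (-pi/2, pi/2] for a rotated box."""
        if w > h:
            w, h = h, w                # make h the longer side...
            theta += math.pi / 2       # ...and rotate the angle to compensate
        while theta <= -math.pi / 2:   # wrap theta into (-pi/2, pi/2]
            theta += math.pi
        while theta > math.pi / 2:
            theta -= math.pi
        return cx, cy, w, h, theta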

UCAS-AOD dataset

Please refer to this repository to rearrange the files so that they can be loaded and trained by this model.
You can download the weights that I trained on UCAS-AOD.

While training, please specify which dataset you are using.
$ python train.py --dataset UCAS_AOD

DOTA dataset

Download the official dataset from here. The original files should load and train with this model as provided.

While training, please specify which dataset you are using.
$ python train.py --dataset DOTA

Train with custom dataset

  1. Use labelImg2 to help label your data. labelImg2 is capable of labeling rotated objects.
  2. Move your data folder into the R-YOLOv4/data folder.
  3. Run xml2txt.py (a rough sketch of the underlying conversion is shown after this list)
    1. generate txt files: python xml2txt.py --data_folder your-path --action gen_txt
    2. delete xml files: python xml2txt.py --data_folder your-path --action del_xml
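
A rough, hypothetical sketch of what such a conversion does is shown below. It assumes labelImg2 writes roLabelImg-style <robndbox> elements with cx/cy/w/h/angle fields and that each output line has the form "class cx cy w h angle"; check xml2txt.py for the actual format used by this repository.

    import xml.etree.ElementTree as ET

    def xml_to_txt(xml_path, class_names):
        """Convert one rotated-box XML annotation into plain-text label lines.

        Assumed input: roLabelImg-style <robndbox> with cx, cy, w, h, angle.
        Assumed output (illustration only): "class cx cy w h angle" per object.
        """
        lines = []
        root = ET.parse(xml_path).getroot()
        for obj in root.iter("object"):
            name = obj.find("name").text
            box = obj.find("robndbox")
            if box is None:
                continue  # skip non-rotated boxes in this sketch
            values = [box.find(k).text for k in ("cx", "cy", "w", "h", "angle")]
            lines.append(" ".join([str(class_names.index(name))] + values))
        return lines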

A small custom dataset of trash that I made, along with the weights trained on it, is provided for your convenience.

While training, please specify which dataset you are using.
$ python train.py --dataset custom

Training Log

---- [Epoch 2/2] ----
+---------------+--------------------+---------------------+---------------------+----------------------+
| Step: 596/600 | loss               | reg_loss            | conf_loss           | cls_loss             |
+---------------+--------------------+---------------------+---------------------+----------------------+
| YoloLayer1    | 0.4302629232406616 | 0.32991039752960205 | 0.09135108441114426 | 0.009001442231237888 |
| YoloLayer2    | 0.7385762333869934 | 0.5682911276817322  | 0.15651139616966248 | 0.013773750513792038 |
| YoloLayer3    | 1.5002599954605103 | 1.1116538047790527  | 0.36262497305870056 | 0.025981156155467033 |
+---------------+--------------------+---------------------+---------------------+----------------------+
Total Loss: 2.669099, Runtime: 404.888372

Tensorboard

If you would like to use TensorBoard to track the training process:

  • Open an additional terminal in the same folder where you are running the program.
  • Run command $ tensorboard --logdir='weights/your_model_name/logs' --port=6006
  • Go to http://localhost:6006/

Results

UCAS_AOD

| Method                | Plane | Car   | mAP   |
| --------------------- | ----- | ----- | ----- |
| YOLOv4 (smoothL1-iou) | 98.05 | 92.05 | 95.05 |

(figure: car)

(figure: plane)

DOTA

DOTA has not been tested yet. (It is quite difficult to test because of the large resolution of the images.)

(figures: DOTA)

trash (custom dataset)

| Method                | Plane  | Car    | mAP    |
| --------------------- | ------ | ------ | ------ |
| YOLOv4 (smoothL1-iou) | 100.00 | 100.00 | 100.00 |

(figure: garbage1)

(figure: garbage2)

TODO

  • Mosaic Augmentation
  • Mixup Augmentation

References

yangxue0827/RotationDetection
eriklindernoren/PyTorch-YOLOv3
Tianxiaomo/pytorch-YOLOv4
ultralytics/yolov5

YOLOv4: Optimal Speed and Accuracy of Object Detection

Alexey Bochkovskiy, Chien-Yao Wang, Hong-Yuan Mark Liao

Abstract There are a huge number of features which are said to improve Convolutional Neural Network (CNN) accuracy. Practical testing of combinations of such features on large datasets, and theoretical justification of the result, is required. Some features operate on certain models exclusively and for certain problems exclusively, or only for small-scale datasets; while some features, such as batch-normalization and residual-connections, are applicable to the majority of models, tasks, and datasets...

@article{yolov4,
  title={YOLOv4: Optimal Speed and Accuracy of Object Detection},
  author={Alexey Bochkovskiy and Chien-Yao Wang and Hong-Yuan Mark Liao},
  journal = {arXiv},
  year={2020}
}

R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object

Xue Yang, Junchi Yan, Ziming Feng, Tao He

Abstract Rotation detection is a challenging task due to the difficulties of locating the multi-angle objects and separating them effectively from the background. Though considerable progress has been made, for practical settings, there still exist challenges for rotating objects with large aspect ratio, dense distribution and category extremely imbalance. In this paper, we propose an end-to-end refined single-stage rotation detector for fast and accurate object detection by using a progressive regression approach from coarse to fine granularity...

@article{r3det,
  title={R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object},
  author={Xue Yang and Junchi Yan and Ziming Feng and Tao He},
  journal = {arXiv},
  year={2019}
}