Cli98 / Dmnet

Official implementation of DMNet: Density Map Guided Object Detection in Aerial Images (CVPR 2020 EarthVision Workshop)

Projects that are alternatives of or similar to Dmnet

Awesome Gee
A curated list of Google Earth Engine resources
Stars: ✭ 292 (+630%)
Mutual labels:  remote-sensing
Qgis Earthengine Examples
A collection of 300+ Python examples for using Google Earth Engine in QGIS
Stars: ✭ 482 (+1105%)
Mutual labels:  remote-sensing
Phenofit
R package: A state-of-the-art Vegetation Phenology extraction package, phenofit
Stars: ✭ 31 (-22.5%)
Mutual labels:  remote-sensing
Dota Doai
This repo is the codebase for our team to participate in DOTA related competitions, including rotation and horizontal detection.
Stars: ✭ 326 (+715%)
Mutual labels:  remote-sensing
Deepnetsforeo
Deep networks for Earth Observation
Stars: ✭ 393 (+882.5%)
Mutual labels:  remote-sensing
Sentinelsat
Search and download Copernicus Sentinel satellite images
Stars: ✭ 576 (+1340%)
Mutual labels:  remote-sensing
Geospatial Machine Learning
A curated list of resources focused on Machine Learning in Geospatial Data Science.
Stars: ✭ 289 (+622.5%)
Mutual labels:  remote-sensing
Pytesmo
python Toolbox for the Evaluation of Soil Moisture Observations
Stars: ✭ 33 (-17.5%)
Mutual labels:  remote-sensing
Awesome Remote Sensing Change Detection
List of datasets, codes, and contests related to remote sensing change detection
Stars: ✭ 414 (+935%)
Mutual labels:  remote-sensing
Pyint
Python & GAMMA based interferometry toolbox for single or time-series InSAR data processing.
Stars: ✭ 30 (-25%)
Mutual labels:  remote-sensing
Notebooks
interactive notebooks from Planet Engineering
Stars: ✭ 339 (+747.5%)
Mutual labels:  remote-sensing
Label Maker
Data Preparation for Satellite Machine Learning
Stars: ✭ 377 (+842.5%)
Mutual labels:  remote-sensing
Earthengine Py Notebooks
A collection of 360+ Jupyter Python notebook examples for using Google Earth Engine with interactive mapping
Stars: ✭ 807 (+1917.5%)
Mutual labels:  remote-sensing
Custom Scripts
A repository of custom scripts to be used with Sentinel Hub
Stars: ✭ 322 (+705%)
Mutual labels:  remote-sensing
Pymasker
generate masks from Landsat and MODIS land product QA band
Stars: ✭ 31 (-22.5%)
Mutual labels:  remote-sensing
Spectral
Python module for hyperspectral image processing
Stars: ✭ 290 (+625%)
Mutual labels:  remote-sensing
R2cnn faster Rcnn tensorflow
Rotational region detection based on Faster-RCNN.
Stars: ✭ 548 (+1270%)
Mutual labels:  remote-sensing
Freenet
FPGA: Fast Patch-Free Global Learning Framework for Fully End-to-End Hyperspectral Image Classification (TGRS 2020) https://ieeexplore.ieee.org/document/9007624
Stars: ✭ 40 (+0%)
Mutual labels:  remote-sensing
Geemap
A Python package for interactive mapping with Google Earth Engine, ipyleaflet, and folium
Stars: ✭ 959 (+2297.5%)
Mutual labels:  remote-sensing
Wetland Hydro Gee
Mapping wetland hydrological dynamics using Google Earth Engine (GEE)
Stars: ✭ 20 (-50%)
Mutual labels:  remote-sensing

Density Map Guided Object Detection in Aerial Images

Introduction

Object detection in high-resolution aerial images is challenging because of 1) the large variation in object size, and 2) the non-uniform distribution of objects. A common solution is to divide the large aerial image into small (uniform) crops and then apply object detection to each crop. In this paper, we investigate the image cropping strategy to address these challenges. Specifically, we propose a Density-Map guided object detection Network (DMNet), inspired by the observation that the object density map of an image describes, through its pixel intensities, how objects are distributed. As pixel intensity varies, one can tell whether a region contains objects, which in turn provides statistical guidance for cropping. DMNet has three key components: a density map generation module, an image cropping module, and an object detector. DMNet generates a density map and learns scale information from the density intensities to form cropping regions. Extensive experiments show that DMNet achieves state-of-the-art performance on two popular aerial image datasets, i.e. VisDrone and UAVDT.

If you are interested in more details, please feel free to check the paper.

Demo

Here we provide a video demo for DMNet. The video comes from the VisDrone 2018 dataset, a typical benchmark for aerial image object detection.

If you find this repository useful in your project, please consider citing:

@InProceedings{Li_2020_CVPR_Workshops,
    author = {Li, Changlin and Yang, Taojiannan and Zhu, Sijie and Chen, Chen and Guan, Shanyue},
    title = {Density Map Guided Object Detection in Aerial Images},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month = {June},
    year = {2020}
}

Requirement

- Python >= 3.5, OpenCV, NumPy, Matplotlib, tqdm
- PyTorch >= 1.0
- mmdetection >=2.0

Density map generation

There are already state-of-the-art algorithms that achieve satisfying results on density map generation. In DMNet, the density map generation module uses MCNN for this task. Many models now beat MCNN in terms of mean absolute error; how much a stronger generator would improve DMNet remains a direction for further research.

We use the MCNN code from Ma (see Reference) to train the density map generator. Since that code base is available online, we do not republish it in DMNet.

The pretrained weights can be accessed here.
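
For orientation, ground-truth density maps for training a generator such as MCNN are commonly built by placing a Gaussian kernel at each annotated object center, so that the map integrates to the object count. A minimal NumPy/SciPy sketch under that assumption (not code from this repository; the (x, y, w, h) box format is illustrative):

import numpy as np
from scipy.ndimage import gaussian_filter

def ground_truth_density_map(image_shape, boxes, sigma=4.0):
    # image_shape: (height, width); boxes: (x, y, w, h) in pixel coordinates.
    density = np.zeros(image_shape, dtype=np.float32)
    for x, y, w, h in boxes:
        cx = min(int(x + w / 2), image_shape[1] - 1)
        cy = min(int(y + h / 2), image_shape[0] - 1)
        density[cy, cx] += 1.0  # one unit of "mass" per object center
    # Gaussian smoothing spreads each unit of mass; the total count is
    # approximately preserved, so density.sum() ~ number of objects.
    return gaussian_filter(density, sigma=sigma)

# Example: two objects on a 200x300 image.
dmap = ground_truth_density_map((200, 300), [(50, 40, 30, 20), (120, 90, 40, 40)])
print(dmap.sum())  # ~2.0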

Image cropping

Once you obtain the density map predictions from the density map generation module, collect all of them and place them in your dataset to run the image cropping module.

The data should be arranged in the following structure before you call any function within this script:

dataset(Train/val/test)

--------images

--------dens (short for density map)

--------Annotations (optional; only absent when you run the inference steps)

Sample running command:

python density_slide_window_official.py . HEIGHT_WIDTH THRESHOLD --output_folder Output_FolderName --mode val
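
To illustrate the idea behind this script (a sketch of the mechanism as the paper describes it, not the script's exact implementation), density-guided cropping slides a fixed HEIGHT x WIDTH window over the predicted density map and keeps every window whose accumulated density exceeds THRESHOLD:

import numpy as np

def density_guided_windows(density_map, window=(70, 70), stride=35, threshold=0.08):
    # Returns candidate crop regions as (x1, y1, x2, y2) tuples.
    h, w = density_map.shape
    win_h, win_w = window
    regions = []
    for y in range(0, max(h - win_h, 0) + 1, stride):
        for x in range(0, max(w - win_w, 0) + 1, stride):
            # Keep the window if enough object "mass" falls inside it.
            if density_map[y:y + win_h, x:x + win_w].sum() > threshold:
                regions.append((x, y, x + win_w, y + win_h))
    return regions

# Synthetic example: one dense spot yields a handful of overlapping windows.
dmap = np.zeros((200, 300), dtype=np.float32)
dmap[80:100, 150:170] = 0.01
print(density_guided_windows(dmap))

Overlapping windows kept this way would then be merged into connected regions, so that a single object cluster ends up in one density crop.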

Object detection

After you obtain your density crops, collect the crop images and annotations (txt format in VisDrone 2018) and convert them into proper annotations (COCO or VOC format). Then you can select any state-of-the-art object detection algorithm to train your model.

For DMNet, MMDetection is used to train Faster R-CNN detectors.

The pretrained weights can be accessed here.

If you are not familiar with the process of transforming txt annotations to VOC/COCO format, please check the following scripts:

  1. create_VOC_annotation_official.py

This script loads the txt annotations and transforms them into VOC format. The resulting images and annotations will be saved to the indicated folders.

The data should be arranged in the following structure before you call any function within this script:

dataset(Train/val/test)

--------images

--------Annotations (optional; only absent when you run the inference steps)

Sample command line to run:

python create_VOC_annotation_official.py ./mcnn_0.08_train_data --h_split 2 --w_split 3 --output_folder FolderName --mode train
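
For reference, each line of a VisDrone 2018 txt annotation is a comma-separated record whose first four fields give the box (left, top, width, height) and whose sixth field is the category id. A minimal standard-library sketch of the txt-to-VOC mapping (illustrative only; the real script also splits images according to --h_split/--w_split):

import xml.etree.ElementTree as ET

def visdrone_txt_to_voc(txt_lines, image_name, image_size, class_names):
    # txt line: left,top,width,height,score,category,truncation,occlusion
    # image_size: (width, height); class_names: {category_id: name}
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_name
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(image_size[0])
    ET.SubElement(size, "height").text = str(image_size[1])
    for line in txt_lines:
        left, top, w, h, _score, category = [int(v) for v in line.split(",")[:6]]
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = class_names[category]
        bndbox = ET.SubElement(obj, "bndbox")
        ET.SubElement(bndbox, "xmin").text = str(left)
        ET.SubElement(bndbox, "ymin").text = str(top)
        ET.SubElement(bndbox, "xmax").text = str(left + w)
        ET.SubElement(bndbox, "ymax").text = str(top + h)
    return ET.tostring(root, encoding="unicode")

print(visdrone_txt_to_voc(["100,50,30,60,1,4,0,0"], "0001.jpg", (1360, 765),
                          {4: "car"}))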

  2. VOC2coco_official.py

This script loads VOC annotations and transforms them into COCO format.

Normally it is enough to convert your annotations to VOC format, which is supported by various object detection frameworks. However, some workflows require annotations in COCO format, and this script handles that conversion.

The data should be arranged in the following structure before you call any function within this script:

dataset(Train/val/test)

--------images

--------Annotations (XML format; optional, only absent when you run the inference steps)

Sample command line to run:

python VOC2coco_official.py Folder_Name --mode train
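
A COCO annotation file is a single JSON object with images, annotations (boxes in [x, y, width, height] form, each with an area and its own id), and categories. A minimal sketch of the mapping this conversion performs, assuming VOC XML strings like those produced in the previous step:

import json
import xml.etree.ElementTree as ET

def voc_to_coco(xml_strings, class_names):
    # Build a COCO-style dict from a list of VOC XML annotation strings.
    coco = {"images": [], "annotations": [],
            "categories": [{"id": i, "name": n} for i, n in enumerate(class_names)]}
    name_to_id = {n: i for i, n in enumerate(class_names)}
    ann_id = 0
    for image_id, xml_string in enumerate(xml_strings):
        root = ET.fromstring(xml_string)
        size = root.find("size")
        coco["images"].append({
            "id": image_id,
            "file_name": root.findtext("filename"),
            "width": int(size.findtext("width")),
            "height": int(size.findtext("height")),
        })
        for obj in root.iter("object"):
            box = obj.find("bndbox")
            xmin, ymin = int(box.findtext("xmin")), int(box.findtext("ymin"))
            xmax, ymax = int(box.findtext("xmax")), int(box.findtext("ymax"))
            w, h = xmax - xmin, ymax - ymin
            coco["annotations"].append({
                "id": ann_id, "image_id": image_id,
                "category_id": name_to_id[obj.findtext("name")],
                "bbox": [xmin, ymin, w, h], "area": w * h, "iscrowd": 0,
            })
            ann_id += 1
    return coco

# json.dumps(voc_to_coco([...], ["pedestrian", "car"])) yields the final file.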

  3. fusion_detection_result_official.py

This script conducts global-local fusion detection; that is, it fuses detections from both the original (global) image and the density crops. To use this script, prepare your data in the following structure:

dataset(Train/val/test)

-----mode(Train/val/test)

------Global

--------images

--------Annotations (optional; only absent when you run the inference steps)

------Density

--------images

--------Annotations (optional; only absent when you run the inference steps)

Sample command line to run:

python fusion_detection_result_official.py
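
Conceptually, fusion shifts every detection from a density crop back into global image coordinates by adding the crop's top-left offset, pools those boxes with the detections on the original image, and removes duplicates with non-maximum suppression (NMS). A self-contained single-class sketch of that geometry (an illustration of the idea; the actual script reads saved detection results and may apply NMS per class):

import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy non-maximum suppression on (x1, y1, x2, y2) boxes.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(xx2 - xx1, 0) * np.maximum(yy2 - yy1, 0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]
    return keep

def fuse_global_local(global_dets, crop_dets, crop_offsets):
    # global_dets: (boxes, scores) on the full image.
    # crop_dets: list of (boxes, scores) per crop; crop_offsets: (x, y) per crop.
    boxes, scores = [list(b) for b in global_dets[0]], list(global_dets[1])
    for (crop_boxes, crop_scores), (ox, oy) in zip(crop_dets, crop_offsets):
        for (x1, y1, x2, y2), s in zip(crop_boxes, crop_scores):
            boxes.append([x1 + ox, y1 + oy, x2 + ox, y2 + oy])
            scores.append(s)
    boxes, scores = np.array(boxes, float), np.array(scores, float)
    keep = nms(boxes, scores)
    return boxes[keep], scores[keep]

# Example: one global detection and the same object found inside a crop
# at offset (100, 50); NMS keeps only the higher-scoring duplicate.
g = (np.array([[100, 50, 130, 110.0]]), np.array([0.9]))
c = [(np.array([[0, 0, 30, 60.0]]), np.array([0.8]))]
print(fuse_global_local(g, c, [(100, 50)]))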

Reference

- https://github.com/CommissarMa/MCNN-pytorch
- https://github.com/open-mmlab/mmdetection