S-DCNet and SS-DCNet

This is an unofficial PyTorch implementation of S-DCNet and SS-DCNet.

The corresponding papers are listed in the References section at the bottom of this page.

Discussions indicate that the authors did not plan to release the training code (as of October 2019).

This repository contains code for model training and evaluation on the published ShanghaiTech Part_A and Part_B datasets. Code for inference on standalone, user-provided images is also included. Only the classification-based counter (C-Counter) is implemented; the regression-based counter (R-Counter) is not.

Environment

Install the required packages according to requirements.txt.

All of the scripts intended to be run by the user (gen_density_maps.py, train.py, evaluate.py, inference.py) are powered by the Hydra configuration system. Run any script with --help to see the available options. The configuration files are in the conf/ directory.
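For example (assuming a standard pip-based environment):

pip install -r requirements.txt
python train.py --help   # lists the available Hydra options and their defaults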

Datasets

Download the ShanghaiTech dataset using the links from this repo or from Kaggle. After unpacking the archive, you will have the following directory structure:

./
└── ShanghaiTech/
    ├── part_A/
    │   ├── test_data/
    │   │   ├── ground-truth/GT_IMG_{1,2,3,...,182}.mat
    │   │   └── images/IMG_{1,2,3,...,182}.jpg
    │   └── train_data/
    │       ├── ground-truth/GT_IMG_{1,2,3,...,300}.mat
    │       └── images/IMG_{1,2,3,...,300}.jpg
    └── part_B/
        ├── test_data/
        │   ├── ground-truth/GT_IMG_{1,2,3,...,316}.mat
        │   └── images/IMG_{1,2,3,...,316}.jpg
        └── train_data/
            ├── ground-truth/GT_IMG_{1,2,3,...,400}.mat
            └── images/IMG_{1,2,3,...,400}.jpg
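Each GT_IMG_<N>.mat file stores the head annotations for the corresponding image. A minimal sketch for inspecting one of them, assuming the usual image_info layout of the ShanghaiTech ground-truth files:

import scipy.io as sio

# assumes the standard ShanghaiTech annotation layout ('image_info' struct)
gt = sio.loadmat("ShanghaiTech/part_B/train_data/ground-truth/GT_IMG_1.mat")
points = gt["image_info"][0, 0][0, 0][0]  # N x 2 array of (x, y) head positions
print("annotated heads:", points.shape[0])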

Ground truth density maps

Generate ground truth density maps by running a command like

python gen_density_maps.py dataset=ShanghaiTech_part_B
# and/or similarly for part_A

Files with the names density_maps_part_{A|B}_{train,test}.npz will appear in the current directory.
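To double-check the result, the archives can be opened with NumPy. A small sketch (the per-image key names inside the .npz are an assumption, so adjust them to whatever gen_density_maps.py actually writes):

import numpy as np

dmaps = np.load("density_maps_part_B_train.npz")
for name in list(dmaps.files)[:3]:
    dmap = dmaps[name]
    # the integral of a density map approximates the head count for that image
    print(name, dmap.shape, "count ~", round(float(dmap.sum()), 1))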

The generated density maps can be visualized and compared to the pre-calculated density maps provided by the official repo (only for the test sets of ShanghaiTech Part_A and Part_B). In order to do so, download the archive Test_Data.zip using the links in the Data section of the README in the official repo. After unpacking the archive, you will have the following directory structure:

./
└── Test_Data/
    ├── SH_partA_Density_map/
    │   ├── test/
    │   │   ├── gtdens/IMG_{1,2,3,...,182}.mat
    │   │   └── images/IMG_{1,2,3,...,182}.jpg
    │   └── rgbstate.mat
    └── SH_partB_Density_map/
        ├── test/
        │   ├── gtdens/IMG_{1,2,3,...,316}.mat
        │   └── images/IMG_{1,2,3,...,316}.jpg
        └── rgbstate.mat

Next, run gen_density_maps.py with the path to the gtdens directory:

python gen_density_maps.py \
    dataset=ShanghaiTech_part_B \
    dataset.xhp_gt_dmaps_dir=./Test_Data/SH_partB_Density_map/test/gtdens
# and/or similarly for part_A

A directory named cmp_dmaps_part_{A|B}_test_<some_random_string> containing pairs of images (named IMG_<N>_my.png / IMG_<N>_xhp.png) will be created.

Training

train.py is the script for training a model. Launch it with a command like this:

python train.py dataset=ShanghaiTech_part_B
# and/or similarly for part_A

Fine-tuning is supported (check the option train.pretrained_ckpt).
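For example, resuming from a previously saved checkpoint might look like this (the checkpoint path format matches the one used in the evaluation example below):

python train.py \
    dataset=ShanghaiTech_part_B \
    train.pretrained_ckpt=outputs/<date>/<time>/checkpoints/epoch_<N>.pth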

The logs and checkpoints generated during training are placed in a folder named like outputs/<launch_date>/<launch_time>. Plots of MAE and MSE versus epoch number can be visualized with TensorBoard:

tensorboard --logdir outputs/<date>/<time>

Evaluation

evaluate.py is the script for evaluating a checkpoint. Select a checkpoint for epoch N and run a command like this:

python evaluate.py \
    dataset=ShanghaiTech_part_B \
    test.trained_ckpt_for_inference=outputs/<date>/<time>/checkpoints/epoch_<N>.pth

You will get an output like this for part_A:

Evaluating on the (whole) train data and on the test data (in ./ShanghaiTech/part_A)
Metrics on the (whole) train data: MAE: 21.52, MSE: 64.51
Metrics on the test data:          MAE: 61.16, MSE: 105.23

or like this for part_B:

Evaluating on the (whole) train data and on the test data (in ./ShanghaiTech/part_B)
Metrics on the (whole) train data: MAE: 4.84, MSE: 8.61
Metrics on the test data:          MAE: 8.22, MSE: 13.65

The error values shown above were obtained after training on a machine rented from vast.ai, so the training time was limited. The error values are higher than those reported in the original paper for the SS-DCNet C-Counter (MAE = 56.1, MSE = 88.9 for the part_A test set; MAE = 6.6, MSE = 10.8 for the part_B test set).

dataset                    MAE      MSE       checkpoint
ShanghaiTech part A test   61.16    105.23    link_A
ShanghaiTech part B test   8.22     13.65     link_B
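For reference, MAE and MSE here are the usual image-level crowd-counting metrics, where MSE conventionally denotes the root of the mean squared count error. A minimal sketch of how they can be computed from predicted and ground-truth counts:

import numpy as np

def mae_mse(pred_counts, gt_counts):
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.abs(pred - gt).mean()
    mse = np.sqrt(((pred - gt) ** 2).mean())  # root of the mean squared count error
    return mae, mse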

You can visualize the predictions of a model on a test set by adding test.visualize=True to the evaluation command line. Combined images showing the ground-truth counts, predicted counts, absolute / relative errors, and coarse-grained density maps will be placed in a folder named outputs/<date>/<time>/visualized_part_{A|B}_test_set_predictions/.

Inference

To perform inference on user-specified images, run a command like this:

python inference.py \
    dataset=ShanghaiTech_part_B \
    test.trained_ckpt_for_inference=./outputs/<date>/<time>/checkpoints/epoch_<N>.pth \
    test.imgs_for_inference_dir=./dir_for_inference_B \
    test.visualize=True

where the ./dir_for_inference_B folder contains the input images. The visualized predictions will be placed in the newly created directory outputs/<date2>/<time2>/visualized_predictions. The image file names and the corresponding total count values will also be printed to stdout.
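For example, the input folder can be prepared like this (the image names are placeholders):

mkdir -p dir_for_inference_B
cp /path/to/crowd_photo_1.jpg /path/to/crowd_photo_2.jpg dir_for_inference_B/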

Export

To try to export a checkpoint to the ONNX, torch.jit.trace, and torch.jit.script formats, run a command like this:

python export.py \
    dataset=ShanghaiTech_part_B \
    test.trained_ckpt_for_inference=./outputs/<date>/<time>/checkpoints/epoch_<N>.pth

The generated files will be placed in the newly created directory outputs/<date3>/<time3>/.
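As a quick smoke test, the exported ONNX file can be loaded with onnxruntime. A sketch, where the file name, input shape, and preprocessing are assumptions (check the actual output of export.py and the model's input spec):

import numpy as np
import onnxruntime as ort

# "model.onnx" is a hypothetical file name; use the file that export.py produced
sess = ort.InferenceSession("outputs/<date3>/<time3>/model.onnx")
inp = sess.get_inputs()[0]
# assumed NCHW float32 input; the spatial size must match what the model expects
dummy = np.random.rand(1, 3, 384, 384).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])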

References

The exact references for the original S-DCNet and SS-DCNet papers are, respectively:

@inproceedings{xhp2019SDCNet,
    title={From Open Set to Closed Set: Counting Objects by Spatial Divide-and-Conquer},
    author={Xiong, Haipeng and Lu, Hao and Liu, Chengxin and Liu, Liang and Cao, Zhiguo and Shen, Chunhua},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year={2019}
}

@misc{xiong2020open,
    title={From Open Set to Closed Set: Supervised Spatial Divide-and-Conquer for Object Counting},
    author={Haipeng Xiong and Hao Lu and Chengxin Liu and Liang Liu and Chunhua Shen and Zhiguo Cao},
    year={2020},
    eprint={2001.01886},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}