
RQuispeC / pytorch-ACSCP

License: MIT License
Unofficial implementation of "Crowd Counting via Adversarial Cross-Scale Consistency Pursuit" with pytorch - CVPR 2018

Programming Languages

python
139335 projects - #7 most used programming language
Makefile
30231 projects

Projects that are alternatives of or similar to pytorch-ACSCP

ACSCP cGAN
Code implementation for the paper "ACSCP: Crowd Counting via Adversarial Cross-Scale Consistency Pursuit"; a crowd counting method based on conditional generative adversarial networks
Stars: ✭ 36 (+100%)
Mutual labels:  conditional-gan, crowd-counting
Pytorch Generative Model Collections
Collection of generative models implemented in PyTorch.
Stars: ✭ 2,296 (+12655.56%)
Mutual labels:  gan, conditional-gan
ADL2019
Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (+11.11%)
Mutual labels:  gan
HistoGAN
Reference code for the paper HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms (CVPR 2021).
Stars: ✭ 158 (+777.78%)
Mutual labels:  gan
Unsupervised-Anomaly-Detection-with-Generative-Adversarial-Networks
Unsupervised Anomaly Detection with Generative Adversarial Networks on MIAS dataset
Stars: ✭ 95 (+427.78%)
Mutual labels:  gan
Introduction-to-GAN
Introduction to Generative Adversarial Networks
Stars: ✭ 21 (+16.67%)
Mutual labels:  gan
DLSS
Deep Learning Super Sampling with Deep Convolutional Generative Adversarial Networks.
Stars: ✭ 88 (+388.89%)
Mutual labels:  gan
MUST-GAN
Pytorch implementation of CVPR2021 paper "MUST-GAN: Multi-level Statistics Transfer for Self-driven Person Image Generation"
Stars: ✭ 39 (+116.67%)
Mutual labels:  gan
AsymmetricGAN
[ACCV 2018 Oral] Dual Generator Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Stars: ✭ 42 (+133.33%)
Mutual labels:  gan
ezgan
An extremely simple generative adversarial network, built with TensorFlow
Stars: ✭ 36 (+100%)
Mutual labels:  gan
dcgan anime avatars
Automatically generate anime avatars with a DCGAN, based on Keras
Stars: ✭ 37 (+105.56%)
Mutual labels:  gan
keras-3dgan
Keras implementation of 3D Generative Adversarial Network.
Stars: ✭ 20 (+11.11%)
Mutual labels:  gan
MNIST-invert-color
Invert the color of MNIST images with PyTorch
Stars: ✭ 13 (-27.78%)
Mutual labels:  gan
VSGAN
VapourSynth Single Image Super-Resolution Generative Adversarial Network (GAN)
Stars: ✭ 124 (+588.89%)
Mutual labels:  gan
SLE-GAN
Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis
Stars: ✭ 53 (+194.44%)
Mutual labels:  gan
Tensorflow DCGAN
Study Friendly Implementation of DCGAN in Tensorflow
Stars: ✭ 22 (+22.22%)
Mutual labels:  gan
gan-image-similarity
InfoGAN inspired neural network trained on zap50k images (using Tensorflow + tf-slim). Intermediate layers of the discriminator network are used to do image similarity.
Stars: ✭ 111 (+516.67%)
Mutual labels:  gan
DeepFlow
Pytorch implementation of "DeepFlow: History Matching in the Space of Deep Generative Models"
Stars: ✭ 24 (+33.33%)
Mutual labels:  gan
cgan-face-generator
Face generator from sketches using cGAN (pix2pix) model
Stars: ✭ 52 (+188.89%)
Mutual labels:  gan
TransGAN
This is a re-implementation of TransGAN: Two Pure Transformers Can Make One Strong GAN (CVPR 2021) in PyTorch.
Stars: ✭ 57 (+216.67%)
Mutual labels:  gan

Crowd Counting via Adversarial Cross-Scale Consistency Pursuit IN PYTORCH (Unofficial)

Implementation of the CVPR 2018 paper Crowd Counting via Adversarial Cross-Scale Consistency Pursuit.

1. Environment

We used the following environment:

  • Python 3
  • PyTorch
  • OpenCV
  • NumPy
  • Matplotlib
  • Ubuntu 16.04

You can also run the code using the ufoym/deepo Docker image.
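
A minimal sketch of running inside that image, assuming a host with Docker 19.03+ and NVIDIA drivers; the /workspace mount point is an arbitrary choice:

docker run --gpus all -it --rm -v "$(pwd)":/workspace -w /workspace ufoym/deepo bash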

2. Preparing data

We make the UCF-CC-50 and ShanghaiTech datasets available here; download and unzip them into the root of the repo. Directories should have the following hierarchy:

ROOT_OF_REPO
    data
        ucf_cc_50
            UCF_CC_50
                images
                labels
        ShanghaiTech
            part_A
                train_data
                    images
                    ground-truth
                test_data
                    images
                    ground-truth
            part_B
                train_data
                    images
                    ground-truth
                test_data
                    images
                    ground-truth
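
For example, assuming the downloaded archives are named UCF_CC_50.zip and ShanghaiTech.zip (hypothetical names; adjust to whatever the download actually provides), the layout above could be produced with:

mkdir -p data/ucf_cc_50
unzip UCF_CC_50.zip -d data/ucf_cc_50
unzip ShanghaiTech.zip -d data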

The code computes data augmentation before any other step and stores the results on disk, so the first run will take quite a long time. Augmented data is stored with the following hierarchy (the directory name reflects the --people-thr and --gt-mode values used):

ROOT_OF_REPO
    data
        ucf_cc_50
            people_thr_0_gt_mode_same
        ShanghaiTech
            part_A
                people_thr_0_gt_mode_same
            part_B
                people_thr_0_gt_mode_same

3. Training

To train using UCF-CC-50 (with all folds) and save the results log in log/ACSCP, you can run:

python3 train.py -d ucf-cc-50 --gt-mode same --people-thr 0 --train-batch 24 --save-dir log/ACSCP

If you want to run a specific fold or part, use the --units flag; check the Makefile for more examples.

The training log is stored in log_train.txt inside the corresponding log/fold/part directory.
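
For instance, a variant of the command above that trains with the people threshold of 20 used for the results quoted in the notes below (the log directory name log/ACSCP-thr20 is only an example):

python3 train.py -d ucf-cc-50 --gt-mode same --people-thr 20 --train-batch 24 --save-dir log/ACSCP-thr20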

4. Testing

After training, you can reload the trained weights (using the --resume flag) and use them for testing:

python3 train.py -d ucf-cc-50 --save-dir log/multi-stream --resume log/ACSCP/ucf-cc-50_people_thr_0_gt_mode_same --evaluate-only

The testing log is stored in log_test.txt inside the corresponding log/fold/part directory. You can also generate plots of the predictions with the --save-plots flag; the results are stored in the plot-results-test directory inside the corresponding log/fold/part directory.
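
For example, to evaluate the same checkpoint and also save the prediction plots (combining the documented --evaluate-only and --save-plots flags):

python3 train.py -d ucf-cc-50 --save-dir log/multi-stream --resume log/ACSCP/ucf-cc-50_people_thr_0_gt_mode_same --evaluate-only --save-plots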

5. Final notes / TODO:

  • Results for UCF_CC_50 with this code are MAE 281.73 and MSE 415.56 (--people-thr 20). The results reported by the authors are MAE 291.0 and MSE 404.6.
  • Validation on other datasets may be done in the future.
  • Batch normalization is not used because it led to unstable learning.
  • Ground truth is generated using a Gaussian of fixed size.
  • The number of channels in the autoencoder's middle layer is changed to 3 instead of 4.
  • The network receives 1-channel images instead of 3-channel ones.
  • You can use the --overlap-test flag to overlap the sliding windows used for testing (as implemented by the authors); see the example below.
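
For example, assuming --overlap-test can be combined with --evaluate-only, a testing run with overlapping sliding windows would look like:

python3 train.py -d ucf-cc-50 --save-dir log/multi-stream --resume log/ACSCP/ucf-cc-50_people_thr_0_gt_mode_same --evaluate-only --overlap-test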