gjy3035 / PCC-Net

Licence: other


PCC Net: Perspective Crowd Counting via Spatial Convolutional Network

This is the official implementation of the paper "PCC Net: Perspective Crowd Counting via Spatial Convolutional Network".

Figure: PCC Net.

In the paper, the experiments are conducted on three popular datasets: ShanghaiTech, UCF_CC_50, and WorldExpo'10. Specifically, ShanghaiTech Part B contains crowd images with the same resolution. For easier data preparation, this repo only releases the pre-trained model on the ShanghaiTech Part B dataset.

Branches

  1. ori_pt0.2_py2: the original version (PyTorch 0.2, Python 2).
  2. ori_pt1_py3: the current version (PyTorch 1.x, Python 3).
  3. vgg_pt1_py3: PCC Net with a VGG backbone (higher performance).

Requirements

  • Python 3.x
  • PyTorch 1.x
  • TensorboardX (pip)
  • torchvision (pip)
  • easydict (pip)
  • pandas (pip)

Data preparation

  1. Download the original ShanghaiTech Dataset [Link: Dropbox / BaiduNetdisk]
  2. Resize the images and the locations of key points.
  3. Generate the density maps by using the code.
  4. Generate the segmentation maps.

We also provide the processed Part B dataset for training. [Link]
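Steps 2–4 above can be sketched as follows. This is a minimal illustration of the standard approach, not the repo's own preprocessing script: annotation points are rescaled with the image, a normalized Gaussian kernel is stamped at each head location so the density map integrates to the crowd count, and the segmentation map is obtained by thresholding the density map. All function names here are hypothetical.

```python
import numpy as np

def resize_points(points, orig_size, new_size):
    """Step 2: scale (x, y) head annotations when the image is resized.
    Sizes are (width, height) tuples; assumed convention."""
    sx = new_size[0] / orig_size[0]
    sy = new_size[1] / orig_size[1]
    return [(x * sx, y * sy) for x, y in points]

def gaussian_kernel(size, sigma):
    """A (size x size) Gaussian kernel normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def generate_density_map(shape, points, sigma=4.0, ksize=15):
    """Step 3: stamp a unit-mass Gaussian at each head location.
    `shape` is (height, width); the result sums to the head count."""
    h, w = shape
    density = np.zeros(shape, dtype=np.float32)
    kernel = gaussian_kernel(ksize, sigma)
    r = ksize // 2
    for x, y in points:
        row, col = int(round(y)), int(round(x))
        if not (0 <= row < h and 0 <= col < w):
            continue
        # Clip the kernel window at the image borders.
        r0, r1 = max(row - r, 0), min(row + r + 1, h)
        c0, c1 = max(col - r, 0), min(col + r + 1, w)
        density[r0:r1, c0:c1] += kernel[r0 - (row - r):r1 - (row - r),
                                        c0 - (col - r):c1 - (col - r)]
    return density

def density_to_segmentation(density, threshold=1e-3):
    """Step 4: a simple foreground mask derived from the density map."""
    return (density > threshold).astype(np.uint8)
```

A fixed sigma is used here for simplicity; geometry-adaptive kernels (sigma varying with local head spacing) are a common alternative on dense scenes.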

Training model

  1. Run the training script: python train_lr.py.
  2. Monitor the training outputs: tensorboard --logdir=exp --port=6006.

In the experiments, training and testing for 800 epochs take about 21 hours on a GTX 1080Ti.

Experimental results

Quantitative results

We show the TensorBoard visualization results below: detailed information during the training phase. The MAE and MSE curves are results on the test set; the other curves are training losses.
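For reference, MAE and MSE in crowd counting are computed over predicted versus ground-truth counts per test image; note that "MSE" in this literature conventionally denotes the root of the mean squared error. A sketch (the repo's evaluation code may differ):

```python
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    """Standard crowd-counting metrics over per-image counts."""
    pred = np.asarray(pred_counts, dtype=np.float64)
    gt = np.asarray(gt_counts, dtype=np.float64)
    mae = np.abs(pred - gt).mean()
    # "MSE" as reported in crowd-counting papers is the root mean squared error.
    mse = np.sqrt(((pred - gt) ** 2).mean())
    return mae, mse
```

The predicted count for one image is simply the sum of its predicted density map.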

Visualization results

Visualization results on the test set are shown below. Column 1: input image; Column 2: density map GT; Column 3: density map prediction; Column 4: segmentation map GT; Column 5: segmentation map prediction.

Citation

If you use the code, please cite the following paper:
