Person-reid-GAN-pytorch

A PyTorch implementation of "Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro" (ICCV 2017). The official code (in MATLAB) is available here.

We achieve Rank@1 = 93.55% and mAP = 90.67% with only a very simple model.

Random Erasing is added as a data-augmentation method to help training; the details of Random Erasing are available here.

A re-ranking strategy is applied to the initial result; the details of the re-ranking method are available here.

Model Structure (we simply adapt the models from ResNet and DenseNet)

You may learn more from model.py. We add one linear layer (bottleneck), one batch-normalization layer and a ReLU.
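
As a rough, hedged illustration (not the exact code in model.py), the head described above could look like the following sketch, assuming a DenseNet-121 backbone whose pooled feature is 1024-dimensional and a hypothetical 512-dimensional bottleneck:

import torch.nn as nn
from torchvision import models

class ClassBlock(nn.Module):
    # Bottleneck head: linear -> batchnorm -> ReLU -> classifier (illustrative only).
    def __init__(self, input_dim, num_classes, bottleneck_dim=512):
        super(ClassBlock, self).__init__()
        self.bottleneck = nn.Sequential(
            nn.Linear(input_dim, bottleneck_dim),  # the added linear (bottleneck) layer
            nn.BatchNorm1d(bottleneck_dim),        # the added batchnorm layer
            nn.ReLU(inplace=True),                 # the added ReLU
        )
        self.classifier = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, x):
        return self.classifier(self.bottleneck(x))

class DenseNetReID(nn.Module):
    # DenseNet-121 backbone with the bottleneck head attached (sketch).
    def __init__(self, num_classes):
        super(DenseNetReID, self).__init__()
        self.features = models.densenet121(pretrained=True).features
        self.pool = nn.AdaptiveAvgPool2d((1, 1))
        self.head = ClassBlock(1024, num_classes)  # DenseNet-121 pooled features are 1024-d

    def forward(self, x):
        x = self.pool(self.features(x)).view(x.size(0), -1)
        return self.head(x)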

Prerequisites

  • Python 2.7
  • GPU
  • NumPy
  • PyTorch
  • Torchvision

Getting started

Installation

  • Install PyTorch (version 0.2.0_3) from http://pytorch.org/
  • Install Torchvision from source:
git clone https://github.com/pytorch/vision
cd vision
python setup.py install

Dataset & Preparation

Download Market1501 Dataset

Preparation: put the images with the same ID into one folder. You may use

python prepare.py

Remember to change the dataset path to your own path.

Preparation: rename each folder to 0 through n-1 (where n is the number of classes, i.e. the number of identities); the folder name is the label of each person (all pictures in the same folder belong to the same person):

python changeIndex.py
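
For reference, a minimal sketch of what these two steps do, assuming the Market-1501 naming convention in which the first four characters of each file name are the person ID (the helper names are hypothetical; the real logic lives in prepare.py and changeIndex.py):

import os
import shutil

def group_by_id(src_dir, dst_dir):
    # Copy each image into a sub-folder named after its person ID (roughly what prepare.py does).
    for name in os.listdir(src_dir):
        if not name.endswith('.jpg'):
            continue
        pid = name[:4]  # Market-1501: the first 4 characters are the person ID
        id_dir = os.path.join(dst_dir, pid)
        if not os.path.isdir(id_dir):
            os.makedirs(id_dir)
        shutil.copyfile(os.path.join(src_dir, name), os.path.join(id_dir, name))

def relabel_folders(root):
    # Rename the ID folders to 0 .. n-1 so that the folder name is the class label
    # (roughly what changeIndex.py does).
    for label, pid in enumerate(sorted(os.listdir(root))):
        os.rename(os.path.join(root, pid), os.path.join(root, str(label)))

# Example usage (paths are illustrative):
# group_by_id('Market-1501/bounding_box_train', 'Market-1501/pytorch/train')
# relabel_folders('Market-1501/pytorch/train')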

The usage of the DCGAN model is described in the DCGAN folder: first train the DCGAN on Market-1501, then generate images with the trained model; you can then add different numbers of generated images when training the re-ID model with the LSRO loss.

The generated images are placed in the gen_0000 folder; copy this folder into your training set. For more details, refer to the DCGAN-TENSORFLOW folder.
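
The core idea of LSRO (label smoothing regularization for outliers) in the paper is that a real image keeps its one-hot identity label, while a GAN-generated image is assigned a uniform label distribution 1/K over all K identities. A minimal sketch of such a loss in modern PyTorch (not the exact code in this repo; the function name and arguments are hypothetical):

import torch
import torch.nn.functional as F

def lsro_loss(logits, labels, is_generated):
    # logits: (batch, K) classifier outputs; labels: (batch,) identity labels;
    # is_generated: (batch,) bool mask, True for GAN-generated samples.
    log_probs = F.log_softmax(logits, dim=1)
    loss = logits.new_zeros(())
    real = ~is_generated
    if real.any():
        # Real images: standard cross-entropy with the one-hot identity label.
        loss = loss + F.nll_loss(log_probs[real], labels[real], reduction='sum')
    if is_generated.any():
        # Generated images: cross-entropy against the uniform distribution,
        # i.e. -(1/K) * sum_k log p_k for each generated sample.
        loss = loss - log_probs[is_generated].mean(dim=1).sum()
    return loss / logits.size(0)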

Our baseline is simply fine-tuned from ResNet or DenseNet. Using a pretrained DenseNet as the baseline, the achieved results are as follows:

Rank@1  mAP    Note
0.921   0.793  ----
0.934   0.907  re-rank

Using the LSRO loss and adding images generated by the DCGAN model, the achieved results are as follows:

Batchsize  Multi/Single GPU  Rank@1  mAP     Note
32         Single            0.9162  0.7887  add 0 generated images
32         Single            0.9355  0.9067  after re-rank
32         Multi             0.8367  0.6442  add 0 generated images
32         Multi             0.8655  0.8143  after re-rank
64         Multi             0.843   0.646   add 0 generated images
64         Multi             0.872   0.815   after re-rank
32         Single            0.919   0.798   add 6000 generated images
32         Single            0.932   0.9012  after re-rank
64         Multi             0.909   0.779   add 12000 generated images
64         Multi             0.931   0.896   after re-rank
32         Single            0.925   0.801   add 12000 generated images
32         Single            0.939   0.904   after re-rank
64         Multi             0.915   0.790   add 18000 generated images
64         Multi             0.933   0.899   after re-rank
64         Multi             0.909   0.773   add 24000 generated images
64         Multi             0.924   0.887   after re-rank
32         Single            0.918   0.790   add 24000 generated images
32         Single            0.932   0.899   after re-rank

To save the trained model, create a directory:

mkdir model 

Train

Train the baseline by

python train_baseline.py --use_dense

--name the name of the model (ResNet or DenseNet).

--data_dir the path of the training data.

--batchsize batch size.

--erasing_p random erasing probability.
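
For example, a full training command might look like the following (the model name and parameter values are only illustrative):

python train_baseline.py --use_dense --name dense_baseline --data_dir /path/to/Market-1501/pytorch --batchsize 32 --erasing_p 0.5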

Test

Use trained model to extract feature by

python test.py   --which_epoch 99  --use_dense

--gpu_ids which gpu to run.

--name the dir name of trained model.

--which_epoch select the i-th model.

--data_dir the path of the testing data.

--batchsize batch size.
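
For example (the model name and parameter values are only illustrative):

python test.py --gpu_ids 0 --name dense_baseline --which_epoch 99 --data_dir /path/to/Market-1501/pytorch --batchsize 32 --use_dense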

Evaluation

python evaluate.py

It will output Rank@1, Rank@5, Rank@10 and mAP results.

For the mAP calculation, you can also refer to the C++ code for the Oxford Buildings dataset. We use the triangle (trapezoidal) mAP calculation, consistent with the original Market-1501 code.
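
For reference, a minimal sketch of the trapezoidal ("triangle") average-precision computation for a single query, assuming a ranked gallery list and a boolean array marking the correct matches (the repo's own evaluate.py is the authoritative version):

import numpy as np

def average_precision(good_mask):
    # good_mask: boolean numpy array over the ranked gallery list,
    # True where the gallery image has the same identity as the query.
    num_good = int(np.sum(good_mask))
    if num_good == 0:
        return 0.0
    ap, hits = 0.0, 0
    old_recall, old_precision = 0.0, 1.0
    for rank, is_good in enumerate(good_mask):
        if not is_good:
            continue
        hits += 1
        recall = hits / float(num_good)
        precision = hits / float(rank + 1)
        # area of the trapezoid between the previous and current (recall, precision) point
        ap += (recall - old_recall) * (old_precision + precision) / 2.0
        old_recall, old_precision = recall, precision
    return ap

mAP is then simply the mean of this per-query AP over all queries.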

re-ranking

python evaluate_rerank.py

It may take more than 10 GB of memory to run, so use a powerful machine if possible.

It will output Rank@1, Rank@5, Rank@10 and mAP results.

Conclusion

When the baseline result is not very high, the generated images can help model training (compare the multi-GPU results with and without GAN images) and thus improve performance and robustness. When the baseline result is already high (Rank@1 = 0.934), it is difficult to improve further. A batch size of 32 gives the best results, and single-GPU training achieves better results than multi-GPU training.

Thanks

  • Many, many thanks to layumi for his great work!

P.S. If you have any questions, you can open an issue. This repo is no longer maintained, for personal reasons.
