xuehy / pytorch-GaitGAN

Licence: other
GaitGAN: Invariant Gait Feature Extraction Using Generative Adversarial Networks

Projects that are alternatives of or similar to pytorch-GaitGAN

STEP
Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits
Stars: ✭ 39 (-13.33%)
Mutual labels:  gait, gait-analysis
sensormotion
Package for analyzing human motion data (e.g. PA, gait)
Stars: ✭ 73 (+62.22%)
Mutual labels:  gait-analysis
gaitutils
Extract and visualize gait data
Stars: ✭ 28 (-37.78%)
Mutual labels:  gait-analysis
Cross-View-Gait-Based-Human-Identification-with-Deep-CNNs
Code for 2016 TPAMI(IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE) A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs
Stars: ✭ 21 (-53.33%)
Mutual labels:  gait
GaitRecognition
Gait demo for tutorial of ICPR 2016
Stars: ✭ 61 (+35.56%)
Mutual labels:  gait
GaitAnalysisToolKit
Tools for the Cleveland State Human Motion and Control Lab
Stars: ✭ 85 (+88.89%)
Mutual labels:  gait
TraND
This is the code for the paper "Jinkai Zheng, Xinchen Liu, Chenggang Yan, Jiyong Zhang, Wu Liu, Xiaoping Zhang and Tao Mei: TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition. ISCAS 2021" (Best Paper Award - Honorable Mention)
Stars: ✭ 32 (-28.89%)
Mutual labels:  gait

GaitGAN

A pytorch implementation of GaitGAN: Invariant Gait Feature Extraction Using Generative Adversarial Networks.

Yu, Shiqi, et al. "Gaitgan: invariant gait feature extraction using generative adversarial networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2017.

Dependency

  • Python 3
  • PyTorch
  • visdom (for monitoring training, see below)

Training

To train the model, put the CASIA-B silhouette dataset under the repository root. Then go to the src dir and run

python3 train.py
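The model is trained on Gait Energy Images (GEIs), i.e. temporal averages of aligned binary silhouettes. As a hedged illustration (not the repository's actual preprocessing code), a GEI can be computed like this:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouettes into one GEI.

    silhouettes: array-like of shape (T, H, W) with values in {0, 1}.
    Returns a float array of shape (H, W) with values in [0, 1].
    """
    frames = np.asarray(silhouettes, dtype=np.float64)
    return frames.mean(axis=0)

# Toy example: two 2x2 "silhouette" frames.
gei = gait_energy_image([[[1, 0], [1, 1]],
                         [[1, 0], [0, 1]]])
# gei == [[1.0, 0.0], [0.5, 1.0]]
```

Bright pixels in a GEI mark body parts that are static across the gait cycle, while intermediate values capture limb motion, which is why GEIs are a common compact input for gait recognition.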

The model will be saved into the execution dir every 500 iterations. You can change the interval in train.py.
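The periodic-save logic can be sketched as follows; `maybe_save` and the checkpoint filename pattern are hypothetical stand-ins, not the exact code in train.py:

```python
SAVE_INTERVAL = 500  # adjust this to change how often checkpoints are written

def maybe_save(iteration, save_fn, interval=SAVE_INTERVAL):
    """Call save_fn with a checkpoint name every `interval` iterations.

    save_fn would typically wrap torch.save(model.state_dict(), name).
    Returns True when a checkpoint was written at this iteration.
    """
    if iteration > 0 and iteration % interval == 0:
        save_fn(f"checkpoint_{iteration}.pth")
        return True
    return False

# Which iterations out of the first 2000 would trigger a save:
saved = [i for i in range(1, 2001) if maybe_save(i, lambda name: None)]
# saved == [500, 1000, 1500, 2000]
```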

Monitor the performance

  • Install visdom.
  • Start the visdom server with python3 -m visdom.server 5274, or use any port you like (change the port in train.py and test.py accordingly).
  • Open this URL in your browser: http://localhost:5274. You will see the loss curve as well as image examples.

After 19k iterations, the results look as follows (each 3x1 block shows, in order, the generated side view, the ground-truth side view, and the input-view GEI):

(image: generated results after 19k iterations)

The loss curve is:

(image: loss curve after 19k iterations)

Testing

  • Go to the src dir and run python3 test.py
  • Open this URL in your browser: http://localhost:5274. You will see the results on the test set.

After 19k iterations, some of the results:

(image: test-set results after 19k iterations)

Recognition

The code for recognition is also provided.

The dataset setting is identical to the paper, but we only test ProbeMN here.

  • Go to the src dir and run mkdir transformed_28500
  • Run python3 generate.py
  • Run python3 knn_class.py; you'll get the average accuracy with KNN (k=1) on ProbeMN.
  • Run python3 knn_class_per_angle.py; you'll get the results for different gallery views and probe views.
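The KNN (k=1) step amounts to assigning each probe feature the label of its nearest gallery feature. A minimal numpy sketch of this idea, assuming Euclidean distance over flattened GEI features (the repo's exact metric and feature shapes may differ):

```python
import numpy as np

def knn1_accuracy(gallery_feats, gallery_labels, probe_feats, probe_labels):
    """1-nearest-neighbour classification accuracy under Euclidean distance."""
    gallery_feats = np.asarray(gallery_feats, dtype=np.float64)
    probe_feats = np.asarray(probe_feats, dtype=np.float64)
    # Pairwise distance matrix of shape (num_probe, num_gallery).
    dists = np.linalg.norm(
        probe_feats[:, None, :] - gallery_feats[None, :, :], axis=2
    )
    # Each probe takes the label of its closest gallery sample.
    pred = np.asarray(gallery_labels)[dists.argmin(axis=1)]
    return float((pred == np.asarray(probe_labels)).mean())

# Toy example with two subjects "a" and "b":
acc = knn1_accuracy([[0.0, 0.0], [1.0, 1.0]], ["a", "b"],
                    [[0.1, 0.0], [0.9, 1.0]], ["a", "b"])
# acc == 1.0
```

The per-angle script presumably repeats this computation for each (gallery view, probe view) pair rather than pooling all views together.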