GestureGAN

[ACM MM 2018 Oral] GestureGAN for Hand Gesture-to-Gesture Translation in the Wild



Contents

  • GestureGAN for Controllable Image-to-Image Translation
  • License
  • Installation
  • Dataset Preparation
  • Generating Images Using Pretrained Model
  • Training New Models
  • Testing
  • Code Structure
  • Evaluation
  • Acknowledgments
  • Related Projects
  • Citation
  • Contributions

GestureGAN demo: hand gesture-to-gesture translation. Given an image and some novel hand skeletons, GestureGAN is able to generate the same person with different hand gestures.

GestureGAN demo: cross-view image translation. Given an image and some novel semantic maps, GestureGAN is able to generate the same scene from different viewpoints.

GestureGAN for Controllable Image-to-Image Translation

GestureGAN Framework

Framework

Comparison with State-of-the-Art Image-to-Image Translation Methods

Framework Comparison

Conference paper | Extended paper | Project page | Slides | Poster

GestureGAN for Hand Gesture-to-Gesture Translation in the Wild.
Hao Tang1, Wei Wang1,2, Dan Xu1,3, Yan Yan4 and Nicu Sebe1.
1University of Trento, Italy, 2EPFL, Switzerland, 3University of Oxford, UK, 4Texas State University, USA.
In ACM MM 2018 (Oral & Best Paper Candidate).
This repository offers the official PyTorch implementation of our paper.

License

Creative Commons License
Copyright (C) 2018 University of Trento, Italy.

All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International)

The code is released for academic research use only. For commercial use, please contact [email protected].

Installation

Clone this repo.

git clone https://github.com/Ha0Tang/GestureGAN
cd GestureGAN/

This code requires PyTorch 0.4.1 and Python 3.6+. Please install the dependencies by running

pip install -r requirements.txt (for pip users)

or

./scripts/conda_deps.sh (for Conda users)

To reproduce the results reported in the paper, you would need two NVIDIA GeForce GTX 1080 Ti GPUs or two NVIDIA TITAN Xp GPUs.
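
After installing the dependencies, an optional sanity check is to confirm that the expected PyTorch version and the GPUs are picked up (the code targets PyTorch 0.4.1):

python -c "import torch; print(torch.__version__)"         # should print 0.4.1
python -c "import torch; print(torch.cuda.is_available())"  # should print True on a GPU machine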

Dataset Preparation

For the hand gesture-to-gesture translation task, we use the NTU Hand Digit and Creative Senz3D datasets. For the cross-view image translation task, we use the Dayton and CVUSA datasets. These datasets must be downloaded beforehand; please download them from their respective webpages. In addition, we include a few sample images in this code repo. Please cite the corresponding papers if you use the data.

Preparing NTU Hand Digit Dataset. The dataset can be downloaded from this paper. After downloading it, we adopt OpenPose to generate hand skeletons and use them as training and testing data in our experiments. Note that we filter out failure cases of hand gesture estimation for training and testing. Please cite their papers if you use this dataset. Train/Test splits for the NTU Hand Digit dataset can be downloaded from here. Download the images and the corresponding extracted hand skeletons of this dataset:

bash ./datasets/download_gesturegan_dataset.sh ntu_image_skeleton

Then run the following MATLAB script to generate training and testing data:

cd datasets/
matlab -nodesktop -nosplash -r "prepare_ntu_data"

Preparing Creative Senz3D Dataset. The dataset can be downloaded here. After downloading it, we adopt OpenPose to generate hand skeletons and use them as training data in our experiments. Note that we filter out failure cases of hand gesture estimation for training and testing. Please cite their papers if you use this dataset. Train/Test splits for the Creative Senz3D dataset can be downloaded from here. Download the images and the corresponding extracted hand skeletons of this dataset:

bash ./datasets/download_gesturegan_dataset.sh senz3d_image_skeleton

Then run the following MATLAB script to generate training and testing data:

cd datasets/
matlab -nodesktop -nosplash -r "prepare_senz3d_data"

Preparing Dayton Dataset. The dataset can be downloaded here. In particular, you will need to download dayton.zip. Ground-truth semantic maps are not available for this dataset. We adopt RefineNet trained on the Cityscapes dataset to generate semantic maps and use them as training data in our experiments. Please cite their papers if you use this dataset. Train/Test splits for the Dayton dataset can be downloaded from here.

Preparing CVUSA Dataset. The dataset can be downloaded here, from this page. After unzipping the dataset, prepare the training and testing data as discussed in SelectionGAN. We also convert the semantic maps to color ones using this script. Since there are no semantic maps for the aerial images in this dataset, we use black images as aerial semantic maps for placeholder purposes.
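
If you prepare CVUSA yourself, these black placeholder maps can be created with any image tool; for example, a minimal ImageMagick sketch (the 256x256 size is an assumption, match it to your aerial images):

# Hypothetical helper: create one all-black placeholder semantic map.
convert -size 256x256 xc:black aerial_semantic_placeholder.png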

Alternatively, you can directly download the prepared Dayton and CVUSA data from here.

Preparing Your Own Datasets. Each training sample in the dataset contains {Ix, Iy, Cx, Cy}, where Ix is image x, Iy is image y, Cx is the controllable structure of image x, and Cy is the controllable structure of image y. Of course, you can use GestureGAN for your own datasets and tasks, such as landmark-guided facial expression translation and keypoint-guided person image generation.
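
The exact on-disk layout is defined by the aligned data loader in data/. As a purely hypothetical sketch, assuming a pix2pix-style aligned format in which the images of one sample are concatenated horizontally into a single file, a sample could be assembled with ImageMagick as follows (check data/ for the order and layout the loader actually expects):

# Hypothetical example: pack Ix, Cx, Iy, Cy side by side into one aligned training image.
# File names and concatenation order are assumptions, not the repository's documented format.
convert Ix.jpg Cx.jpg Iy.jpg Cy.jpg +append ./datasets/mydataset/train/0001.jpg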

Generating Images Using Pretrained Model

Once the dataset is ready, the result images can be generated using pretrained models.

  1. You can download a pretrained model (e.g. ntu_gesturegan_twocycle) with the following script:
bash ./scripts/download_gesturegan_model.sh ntu_gesturegan_twocycle

The pretrained model is saved at ./checkpoints/[type]_pretrained. Check here for all the available GestureGAN models.

  2. Generate images using the pretrained model.
python test.py --dataroot [path_to_dataset] \
	--name [type]_pretrained \
	--model [gesturegan_model] \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize [BS] \
	--loadSize [LS] \
	--fineSize [FS] \
	--no_flip

[path_to_dataset] is the path to the dataset, which can be one of ntu, senz3d, dayton_a2g, dayton_g2a and cvusa. [type]_pretrained is the directory name of the checkpoint file downloaded in Step 1, which should be one of ntu_gesturegan_twocycle_pretrained, senz3d_gesturegan_twocycle_pretrained, dayton_a2g_64_gesturegan_onecycle_pretrained, dayton_g2a_64_gesturegan_onecycle_pretrained, dayton_a2g_gesturegan_onecycle_pretrained, dayton_g2a_gesturegan_onecycle_pretrained and cvusa_gesturegan_onecycle_pretrained. [gesturegan_model] is the name of the GestureGAN model, which should be either gesturegan_twocycle or gesturegan_onecycle. If you are running in CPU mode, change --gpu_ids 0 to --gpu_ids -1. For [BS], [LS] and [FS], please see the Training and Testing sections.

Note that testing requires a large amount of disk space. If you don't have enough space, append --saveDisk to the command line.
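
For example, assuming the NTU pretrained checkpoint downloaded in Step 1, the placeholders can be filled in as follows (the sizes match the NTU commands in the Training and Testing sections):

python test.py --dataroot ./datasets/ntu \
	--name ntu_gesturegan_twocycle_pretrained \
	--model gesturegan_twocycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip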

  3. The output images are stored in ./results/[type]_pretrained/ by default. You can view them using the autogenerated HTML file in that directory.

Training New Models

New models can be trained with the following commands.

  1. Prepare dataset.

  2. Train.

For NTU dataset:

export CUDA_VISIBLE_DEVICES=3,4;
python train.py --dataroot ./datasets/ntu \
	--name ntu_gesturegan_twocycle \
	--model gesturegan_twocycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0,1 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip \
	--lambda_L1 800 \
	--cyc_L1 0.1 \
	--lambda_identity 0.01 \
	--lambda_feat 1000 \
	--display_id 0 \
	--niter 10 \
	--niter_decay 10

For Senz3D dataset:

export CUDA_VISIBLE_DEVICES=5,7;
python train.py --dataroot ./datasets/senz3d \
	--name senz3d_gesturegan_twocycle \
	--model gesturegan_twocycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0,1 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip \
	--lambda_L1 800 \
	--cyc_L1 0.1 \
	--lambda_identity 0.01 \
	--lambda_feat 1000 \
	--display_id 0 \
	--niter 10 \
	--niter_decay 10

For CVUSA dataset:

export CUDA_VISIBLE_DEVICES=0;
python train.py --dataroot ./dataset/cvusa \
	--name cvusa_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip \
	--cyc_L1 0.1 \
	--lambda_identity 100 \
	--lambda_feat 100 \
	--display_id 0 \
	--niter 15 \
	--niter_decay 15

For Dayton (a2g direction, 256) dataset:

export CUDA_VISIBLE_DEVICES=0;
python train.py --dataroot ./datasets/dayton_a2g \
	--name dayton_a2g_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip \
	--cyc_L1 0.1 \
	--lambda_identity 100 \
	--lambda_feat 100 \
	--display_id 0 \
	--niter 20 \
	--niter_decay 15

For Dayton (g2a direction, 256) dataset:

export CUDA_VISIBLE_DEVICES=1;
python train.py --dataroot ./datasets/dayton_g2a \
	--name dayton_g2a_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip \
	--cyc_L1 0.1 \
	--lambda_identity 100 \
	--lambda_feat 100 \
	--display_id 0 \
	--niter 20 \
	--niter_decay 15

For Dayton (a2g direction, 64) dataset:

export CUDA_VISIBLE_DEVICES=0;
python train.py --dataroot ./datasets/dayton_a2g \
	--name dayton_a2g_64_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 16 \
	--loadSize 72 \
	--fineSize 64 \
	--no_flip \
	--cyc_L1 0.1 \
	--lambda_identity 100 \
	--lambda_feat 100 \
	--display_id 0 \
	--niter 50 \
	--niter_decay 50

For Dayton (g2a direction, 64) dataset:

export CUDA_VISIBLE_DEVICES=1;
python train.py --dataroot ./datasets/dayton_g2a \
	--name dayton_g2a_64_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 16 \
	--loadSize 72 \
	--fineSize 64 \
	--no_flip \
	--cyc_L1 0.1 \
	--lambda_identity 100 \
	--lambda_feat 100 \
	--display_id 0 \
	--niter 50 \
	--niter_decay 50

There are many options you can specify; please run python train.py --help to see them. The specified options are printed to the console. To specify the GPUs to use, set export CUDA_VISIBLE_DEVICES=[GPU_ID]. Note that training gesturegan_onecycle only needs one GPU, while training gesturegan_twocycle needs two GPUs.

To view training results and loss plots on a local computer, set --display_id to a non-zero value, run python -m visdom.server in a new terminal, and open the URL http://localhost:8097. On a remote server, replace localhost with your server's name, such as http://server.trento.cs.edu:8097.
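
For example, a hypothetical two-terminal setup for the NTU run could look like this (--display_id 1 is just an example non-zero value):

# Terminal 1: start the visdom server.
python -m visdom.server

# Terminal 2: train with the display enabled.
python train.py --dataroot ./datasets/ntu \
	--name ntu_gesturegan_twocycle \
	--model gesturegan_twocycle \
	--display_id 1 \
	[remaining options identical to the NTU training command above]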

Can I continue/resume my training?

To fine-tune a pretrained model, or to resume a previous training run, use the --continue_train --which_epoch <int> --epoch_count <int+1> flags. The program will then load the model saved at epoch <int>, as specified by --which_epoch <int>. Set --epoch_count <int+1> to specify the starting epoch count for the resumed run.
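
For instance, to resume the NTU run from epoch 10 (the epoch numbers here are only examples):

python train.py --dataroot ./datasets/ntu \
	--name ntu_gesturegan_twocycle \
	--model gesturegan_twocycle \
	--continue_train \
	--which_epoch 10 \
	--epoch_count 11 \
	[remaining options identical to the original NTU training command]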

Testing

Testing is similar to testing pretrained models.

For NTU dataset:

python test.py --dataroot ./datasets/ntu \
	--name ntu_gesturegan_twocycle \
	--model gesturegan_twocycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip

For Senz3D dataset:

python test.py --dataroot ./datasets/senz3d \
	--name senz3d_gesturegan_twocycle \
	--model gesturegan_twocycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip

For CVUSA dataset:

python test.py --dataroot ./datasets/cvusa \
	--name cvusa_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip

For Dayton (a2g direction, 256) dataset:

python test.py --dataroot ./datasets/dayton_a2g \
	--name dayton_a2g_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip

For Dayton (g2a direction, 256) dataset:

python test.py --dataroot ./datasets/dayton_g2a \
	--name dayton_g2a_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 4 \
	--loadSize 286 \
	--fineSize 256 \
	--no_flip

For Dayton (a2g direction, 64) dataset:

python test.py --dataroot ./datasets/dayton_a2g \
	--name dayton_a2g_64_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 16 \
	--loadSize 72 \
	--fineSize 64 \
	--no_flip

For Dayton (g2a direction, 64) dataset:

python test.py --dataroot ./datasets/dayton_g2a \
	--name dayton_g2a_64_gesturegan_onecycle \
	--model gesturegan_onecycle \
	--which_model_netG resnet_9blocks \
	--which_direction AtoB \
	--dataset_mode aligned \
	--norm instance \
	--gpu_ids 0 \
	--batchSize 16 \
	--loadSize 72 \
	--fineSize 64 \
	--no_flip

Use --how_many to specify the maximum number of test images to generate. By default, the latest checkpoint is loaded; this can be changed with --which_epoch.
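
For example, to generate at most 50 images from the checkpoint saved at epoch 10 of the NTU model (both numbers are just examples):

python test.py --dataroot ./datasets/ntu \
	--name ntu_gesturegan_twocycle \
	--model gesturegan_twocycle \
	--which_epoch 10 \
	--how_many 50 \
	[remaining options identical to the NTU testing command above]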

Code Structure

  • train.py, test.py: the entry points for training and testing.
  • models/gesturegan_onecycle_model.py, models/gesturegan_twocycle_model.py: create the networks and compute the losses.
  • models/networks/: defines the architectures of all models used by GestureGAN.
  • options/: creates option lists using the argparse package.
  • data/: defines the classes for loading images and controllable structures.
  • scripts/evaluation: evaluation scripts.

Evaluation

We use several metrics to evaluate the quality of the generated images; the corresponding evaluation scripts can be found in scripts/evaluation.

Acknowledgments

This source code is inspired by Pix2pix and SelectionGAN. We thank the NVIDIA Corporation for the donation of the TITAN Xp GPUs used in this work.

Related Projects

BiGraphGAN | XingGAN | C2GAN | SelectionGAN | Guided-I2I-Translation-Papers

Citation

If you use this code for your research, please cite our papers.

GestureGAN

@article{tang2019unified,
  title={Unified Generative Adversarial Networks for Controllable Image-to-Image Translation},
  author={Tang, Hao and Liu, Hong and Sebe, Nicu},
  journal={IEEE Transactions on Image Processing (TIP)},
  year={2020}
}

@inproceedings{tang2018gesturegan,
  title={GestureGAN for Hand Gesture-to-Gesture Translation in the Wild},
  author={Tang, Hao and Wang, Wei and Xu, Dan and Yan, Yan and Sebe, Nicu},
  booktitle={ACM MM},
  year={2018}
}

If you use the original BiGraphGAN, XingGAN, C2GAN, or SelectionGAN models, please cite the following papers:

BiGraphGAN

@inproceedings{tang2020bipartite,
  title={Bipartite Graph Reasoning GANs for Person Image Generation},
  author={Tang, Hao and Bai, Song and Torr, Philip HS and Sebe, Nicu},
  booktitle={BMVC},
  year={2020}
}

XingGAN

@inproceedings{tang2020xinggan,
  title={XingGAN for Person Image Generation},
  author={Tang, Hao and Bai, Song and Zhang, Li and Torr, Philip HS and Sebe, Nicu},
  booktitle={ECCV},
  year={2020}
}

C2GAN

@inproceedings{tang2019cycleincycle,
  title={Cycle In Cycle Generative Adversarial Networks for Keypoint-Guided Image Generation},
  author={Tang, Hao and Xu, Dan and Liu, Gaowen and Wang, Wei and Sebe, Nicu and Yan, Yan},
  booktitle={ACM MM},
  year={2019}
}

SelectionGAN

@inproceedings{tang2019multi,
  title={Multi-channel attention selection gan with cascaded semantic guidance for cross-view image translation},
  author={Tang, Hao and Xu, Dan and Sebe, Nicu and Wang, Yanzhi and Corso, Jason J and Yan, Yan},
  booktitle={CVPR},
  year={2019}
}

@article{tang2020multi,
  title={Multi-channel attention selection gans for guided image-to-image translation},
  author={Tang, Hao and Xu, Dan and Yan, Yan and Corso, Jason J and Torr, Philip HS and Sebe, Nicu},
  journal={arXiv preprint arXiv:2002.01048},
  year={2020}
}

Contributions

If you have any questions, comments, or bug reports, feel free to open a GitHub issue, submit a pull request, or e-mail the author Hao Tang ([email protected]).
