
vt-vl-lab / Guided Pix2pix

Licence: other
[ICCV 2019] Guided Image-to-Image Translation with Bi-Directional Feature Transformation

Programming Languages

python

Labels

pix2pix

Projects that are alternatives of or similar to Guided Pix2pix

Person remover
People removal in images using Pix2Pix and YOLO.
Stars: ✭ 96 (-42.86%)
Mutual labels:  pix2pix
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (-16.07%)
Mutual labels:  pix2pix
Ai Art
PyTorch (and PyTorch Lightning) implementation of Neural Style Transfer, Pix2Pix, CycleGAN, and Deep Dream!
Stars: ✭ 153 (-8.93%)
Mutual labels:  pix2pix
Ganotebooks
wgan, wgan2(improved, gp), infogan, and dcgan implementation in lasagne, keras, pytorch
Stars: ✭ 1,446 (+760.71%)
Mutual labels:  pix2pix
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+6407.74%)
Mutual labels:  pix2pix
Unit
Unsupervised Image-to-Image Translation
Stars: ✭ 1,809 (+976.79%)
Mutual labels:  pix2pix
Ashpy
TensorFlow 2.0 library for distributed training, evaluation, model selection, and fast prototyping.
Stars: ✭ 82 (-51.19%)
Mutual labels:  pix2pix
Cyclegan Vc2
Voice Conversion by CycleGAN (voice cloning / voice conversion): CycleGAN-VC2
Stars: ✭ 158 (-5.95%)
Mutual labels:  pix2pix
Chainer Pix2pix
chainer implementation of pix2pix
Stars: ✭ 130 (-22.62%)
Mutual labels:  pix2pix
Pix2pixbegan.pytorch
A pytorch implementation of pix2pix + BEGAN (Boundary Equilibrium Generative Adversarial Networks)
Stars: ✭ 148 (-11.9%)
Mutual labels:  pix2pix
Deepnudecli
DeepNude Command Line Version With Watermark Removed
Stars: ✭ 112 (-33.33%)
Mutual labels:  pix2pix
Sketch To Art
🖼 Create artwork from your casual sketch with GAN and style transfer
Stars: ✭ 115 (-31.55%)
Mutual labels:  pix2pix
Tensorflow Pix2pix
A lightweight pix2pix Tensorflow implementation.
Stars: ✭ 143 (-14.88%)
Mutual labels:  pix2pix
Tensorflow2.0 Examples
🙄 Difficult algorithm, Simple code.
Stars: ✭ 1,397 (+731.55%)
Mutual labels:  pix2pix
Zi2zi
Learning Chinese Character style with conditional GAN
Stars: ✭ 1,988 (+1083.33%)
Mutual labels:  pix2pix
Neuralnetworkpostprocessing
Unity Post Processing with Convolution Neural Network
Stars: ✭ 81 (-51.79%)
Mutual labels:  pix2pix
Starnet
StarNet
Stars: ✭ 141 (-16.07%)
Mutual labels:  pix2pix
Pix2pix Film
An implementation of Pix2Pix in Tensorflow for use with frames from films
Stars: ✭ 162 (-3.57%)
Mutual labels:  pix2pix
Deblurgan
Image Deblurring using Generative Adversarial Networks
Stars: ✭ 2,033 (+1110.12%)
Mutual labels:  pix2pix
P2pala
Page to PAGE Layout Analysis Tool
Stars: ✭ 147 (-12.5%)
Mutual labels:  pix2pix

Guided Image-to-Image Translation with Bi-Directional Feature Transformation

[Project | Paper]

Official PyTorch implementation of Guided Image-to-Image Translation with Bi-Directional Feature Transformation. Please contact Badour AlBahar ([email protected]) if you have any questions.

Prerequisites

This codebase was developed and tested with:

  • Python 2.7
  • PyTorch 0.4.1.post2
  • CUDA 8.0
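
A minimal environment sketch, assuming a conda-based setup (the environment name and package pins below are assumptions derived from the versions above, not commands from the authors):

conda create -n guided-pix2pix python=2.7
source activate guided-pix2pix
conda install pytorch=0.4.1 cuda80 -c pytorch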

Datasets

Train

1. Pose transfer:

python train.py --dataroot /root/DeepFashion/ --name exp_name --netG bFT_resnet --dataset_mode pose --input_nc 3 --guide_nc 18 --output_nc 3 --lr 0.0002 --niter 100 --niter_decay 0 --batch_size 8 --use_GAN --netD basic --beta1 0.9 --checkpoints_dir ./pose_checkpoints

2. Texture transfer:

python train.py --dataroot /root/training_handbags_pretrain/ --name exp_name --netG bFT_unet --dataset_mode texture --input_nc 1 --guide_nc 4 --output_nc 3 --niter 100 --niter_decay 0 --batch_size 256 --lr 0.0002 --use_GAN --netD basic --n_layers 7 --beta1 0.9 --checkpoints_dir ./texture_checkpoints

3. Depth Upsampling:

python train.py --dataroot /root/NYU_RGBD_matfiles/ --name exp_name --netG bFT_resnet --dataset_mode depth --input_nc 1 --guide_nc 3 --output_nc 1 --lr 0.0002 --niter 500 --niter_decay 0 --batch_size 2 --checkpoints_dir ./depth_checkpoints --depthTask_scale [4, 8, or 16]
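
Here --depthTask_scale takes a single value (4, 8, or 16). For example, to train the 4x upsampling model:

python train.py --dataroot /root/NYU_RGBD_matfiles/ --name exp_name --netG bFT_resnet --dataset_mode depth --input_nc 1 --guide_nc 3 --output_nc 1 --lr 0.0002 --niter 500 --niter_decay 0 --batch_size 2 --checkpoints_dir ./depth_checkpoints --depthTask_scale 4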

Test

You can choose which epoch to test with --epoch; by default, the latest epoch is used. Results will be saved in --results_dir. An example follows the commands below.

1. Pose transfer:

python test.py --dataroot /root/DeepFashion/ --name exp_name --netG bFT_resnet --dataset_mode pose --input_nc 3 --guide_nc 18 --output_nc 3 --checkpoints_dir ./pose_checkpoints --task pose --results_dir ./pose_results

2. Texture transfer:

python test.py --dataroot /root/training_handbags_pretrain/ --name exp_name --netG bFT_unet --n_layers 7 --dataset_mode texture --input_nc 1 --guide_nc 4 --output_nc 3 --checkpoints_dir ./texture_checkpoints --task texture --results_dir ./texture_results

3. Depth Upsampling:

python test.py --dataroot /root/NYU_RGBD_matfiles/ --name exp_name --netG bFT_resnet --dataset_mode depth --input_nc 1 --guide_nc 3 --output_nc 1 --checkpoints_dir ./depth_checkpoints --task depth --depthTask_scale [4, 8, or 16] --results_dir ./depth_results
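
For example, to test the pose transfer model at a specific saved epoch instead of the latest one (the epoch number below is illustrative):

python test.py --dataroot /root/DeepFashion/ --name exp_name --netG bFT_resnet --dataset_mode pose --input_nc 3 --guide_nc 18 --output_nc 3 --checkpoints_dir ./pose_checkpoints --task pose --results_dir ./pose_results --epoch 50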

Pretrained checkpoints

  • Download the pretrained checkpoints here.

  • Test: For example, to test the depth upsampling task with scale 16:

python test.py --dataroot /root/NYU_RGBD_matfiles/ --name depth_16 --netG bFT_resnet --dataset_mode depth --input_nc 1 --guide_nc 3 --output_nc 1 --checkpoints_dir ./checkpoints/pretrained/ --task depth --depthTask_scale 16 --results_dir ./depth_results

Evaluate

You can choose which epoch to evaluate with --epoch; by default, the latest epoch is used. Results will be saved in --results_dir.

1. Pose transfer:
Please note that the Inception Score (IS) evaluation requires TensorFlow. We evaluate with TensorFlow 1.4.0.

python evaluate.py --dataroot /root/DeepFashion/ --name pose --netG bFT_resnet --dataset_mode pose --input_nc 3 --guide_nc 18 --output_nc 3 --checkpoints_dir ./checkpoints/pretrained/ --task pose --results_dir ./pose_results

This will save the results in --results_dir and compute both SSIM and IS metrics.
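
evaluate.py already reports these numbers; if you want to sanity-check SSIM on the saved images independently, a minimal sketch using scikit-image follows (compare_ssim is the pre-0.16 scikit-image API, and the file names are placeholders, not paths produced by this repo):

from skimage.io import imread
from skimage.measure import compare_ssim  # structural_similarity in scikit-image >= 0.16

fake = imread('fake.png')  # generated image (placeholder file name)
real = imread('real.png')  # ground truth (placeholder file name)

# multichannel=True treats the trailing axis as color channels.
print('SSIM: %.4f' % compare_ssim(real, fake, multichannel=True))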

2. Texture transfer: Please download the pretrained textureGAN models (bags, shoes, and clothes) into ./resources. For example, to evaluate the pretrained texture transfer model on the bags dataset:

python evaluate.py --dataroot /root/training_handbags_pretrain/ --name texture_bags --netG bFT_unet --n_layers 7 --dataset_mode texture --input_nc 1 --guide_nc 4 --output_nc 3 --checkpoints_dir ./checkpoints/pretrained/ --task texture --results_dir ./texture_results

This will save the output of bFT and textureGAN in --results_dir for 10 random input texture patches per test image. The results can then be used to compute FID and LPIPS.
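
Neither metric is computed by this codebase. As one option, the standalone lpips package (not part of this repo) can score the saved image pairs; the file names below are placeholders:

import torch
import lpips

# LPIPS with the AlexNet backbone, the variant most commonly reported.
loss_fn = lpips.LPIPS(net='alex')

# load_image reads an image; im2tensor converts it to a (1, 3, H, W) tensor in [-1, 1].
img0 = lpips.im2tensor(lpips.load_image('bFT_output.png'))    # placeholder file name
img1 = lpips.im2tensor(lpips.load_image('ground_truth.png'))  # placeholder file name

with torch.no_grad():
    print('LPIPS: %.4f' % loss_fn(img0, img1).item())

FID can likewise be computed with an external tool such as pytorch-fid, run over the folder of generated images and a folder of real images.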

3. Depth Upsampling:

python evaluate.py --dataroot /root/NYU_RGBD_matfiles/ --name depth_16 --netG bFT_resnet --dataset_mode depth --input_nc 1 --guide_nc 3 --output_nc 1 --checkpoints_dir ./checkpoints/pretrained/ --task depth --depthTask_scale 16 --results_dir ./depth_results

This will save the results in --results_dir and compute the RMSE metric.
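
RMSE here is the root-mean-square error between the predicted and ground-truth depth maps. For reference, a minimal NumPy sketch (placeholder inputs, not this repo's evaluation code):

import numpy as np

pred = np.load('pred_depth.npy')  # predicted depth map (placeholder file name)
gt = np.load('gt_depth.npy')      # ground-truth depth map, same shape (placeholder file name)

rmse = np.sqrt(np.mean((pred - gt) ** 2))
print('RMSE: %.4f' % rmse)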

Acknowledgments

This code is heavily borrowed from CycleGAN and pix2pix in PyTorch. We thank Shih-Yang Su for the code review.
