
leehomyc / Img2Img-Translation-Networks

License: MIT
TensorFlow implementation of the paper "Unsupervised Image-to-Image Translation Networks"

Projects that are alternatives of or similar to Img2Img-Translation-Networks

Paper-Notes
Paper notes in deep learning/machine learning and computer vision
Stars: ✭ 37 (-44.78%)
Mutual labels:  generative-adversarial-network
progressive-growing-of-gans.pytorch
Unofficial PyTorch implementation of "Progressive Growing of GANs for Improved Quality, Stability, and Variation".
Stars: ✭ 51 (-23.88%)
Mutual labels:  generative-adversarial-network
MAD-GAN-MLCAMP
Repository for MAD-GAN Paper done in ML CAMP Jeju
Stars: ✭ 17 (-74.63%)
Mutual labels:  generative-adversarial-network
GAN-LTH
[ICLR 2021] "GANs Can Play Lottery Too" by Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen
Stars: ✭ 24 (-64.18%)
Mutual labels:  generative-adversarial-network
SMILE
SMILE: Semantically-guided Multi-attribute Image and Layout Editing, ICCV Workshops 2021.
Stars: ✭ 28 (-58.21%)
Mutual labels:  generative-adversarial-network
Deep-Learning-Pytorch
A repo containing code covering various aspects of deep learning on Pytorch. Great for beginners and intermediate in the field
Stars: ✭ 59 (-11.94%)
Mutual labels:  generative-adversarial-network
HashGAN
HashGAN: Deep Learning to Hash with Pair Conditional Wasserstein GAN
Stars: ✭ 63 (-5.97%)
Mutual labels:  generative-adversarial-network
bmusegan
Code for “Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation”
Stars: ✭ 58 (-13.43%)
Mutual labels:  generative-adversarial-network
DCGAN-CIFAR10
An implementation of DCGAN (Deep Convolutional Generative Adversarial Networks) for CIFAR10 images
Stars: ✭ 18 (-73.13%)
Mutual labels:  generative-adversarial-network
timegan-pytorch
This repository is a non-official implementation of TimeGAN (Yoon et al., NIPS2019) using PyTorch.
Stars: ✭ 46 (-31.34%)
Mutual labels:  generative-adversarial-network
Pytorch-conditional-GANs
Implementation of Conditional Generative Adversarial Networks in PyTorch
Stars: ✭ 91 (+35.82%)
Mutual labels:  generative-adversarial-network
Self-Supervised-GANs
Tensorflow Implementation for paper "self-supervised generative adversarial networks"
Stars: ✭ 34 (-49.25%)
Mutual labels:  generative-adversarial-network
MMD-GAN
Improving MMD-GAN training with repulsive loss function
Stars: ✭ 82 (+22.39%)
Mutual labels:  generative-adversarial-network
gans-2.0
Generative Adversarial Networks in TensorFlow 2.0
Stars: ✭ 76 (+13.43%)
Mutual labels:  generative-adversarial-network
Pytorch models
PyTorch study
Stars: ✭ 14 (-79.1%)
Mutual labels:  generative-adversarial-network
speech-enhancement-WGAN
speech enhancement GAN on waveform/log-power-spectrum data using Improved WGAN
Stars: ✭ 35 (-47.76%)
Mutual labels:  generative-adversarial-network
text2imageNet
Generate image from text with Generative Adversarial Network
Stars: ✭ 26 (-61.19%)
Mutual labels:  generative-adversarial-network
CIPS-3D
3D-aware GANs based on NeRF (arXiv).
Stars: ✭ 562 (+738.81%)
Mutual labels:  generative-adversarial-network
Deep-Fakes
No description or website provided.
Stars: ✭ 88 (+31.34%)
Mutual labels:  generative-adversarial-network
text2image-benchmark
Performance comparison of existing GAN based Text To Image algorithms. (GAN-CLS, StackGAN, TAC-GAN)
Stars: ✭ 25 (-62.69%)
Mutual labels:  generative-adversarial-network

Unsupervised Image to Image Translation Networks

This is a TensorFlow implementation, by Harry Yang, of the NIPS 2017 paper "Unsupervised Image-to-Image Translation Networks".

(Figure: network architecture.)

Disclaimer: this was our own research project, but it shares the same idea as the paper, so we are making the code publicly available.

Introduction

In particular, we tried the 'pix2pix' model, which is the auto-encoder model described in the paper, and also a 'resnet' model made up of nine resnet blocks (the middle blocks are shared between the two domains). We found that the 'resnet' model gives better results than the auto-encoder and slightly better results than CycleGAN.
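The weight sharing between the two domains' middle blocks can be sketched as follows. This is a minimal NumPy illustration of the mechanism only, not the repository's actual TensorFlow code; the block count, dimensions, and names are made up for the example.

```python
import numpy as np

def residual_block(x, weight):
    """One toy residual block: x + ReLU(x @ weight)."""
    return x + np.maximum(0.0, x @ weight)

rng = np.random.default_rng(0)
dim = 8

# Separate (per-domain) weights for the outer blocks ...
w_private_a = rng.normal(size=(dim, dim)) * 0.1
w_private_b = rng.normal(size=(dim, dim)) * 0.1
# ... and a single set of weights reused by both domains in the middle.
w_shared = rng.normal(size=(dim, dim)) * 0.1

def generator_a(x):
    """Domain-A branch: private outer block, then the shared middle block."""
    return residual_block(residual_block(x, w_private_a), w_shared)

def generator_b(x):
    """Domain-B branch: its own outer block, same shared middle block."""
    return residual_block(residual_block(x, w_private_b), w_shared)
```

Because both generators reference the same `w_shared` array, any update to it affects both translation directions at once, which is the point of sharing the middle resnet blocks.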

Below is a snapshot of our results at the 44th epoch using the 'resnet' model with default parameters on the horse-zebra dataset. From left to right: real horse, real zebra, fake zebra, fake horse, cycled horse, cycled zebra.

(Figure: sample translation results.)

Getting Started

Prepare dataset

Download a horse-zebra dataset (or any other dataset) and create CSV files containing the paths of the image pairs in the dataset.

  • Download an image dataset (e.g. horse2zebra) and unzip it:

bash ./download_datasets.sh horse2zebra

  • Create the CSV files:

python -m dataset.create_single_domain_filelist --input_path=path/to/horse/train --output_file=/data/img2img/horse.csv
python -m dataset.create_single_domain_filelist --input_path=path/to/zebra/train --output_file=/data/img2img/zebra.csv
python -m dataset.create_image_pair_list --first_dataset=/data/img2img/horse.csv --second_dataset=/data/img2img/zebra.csv --output_file=/data/img2img/horse_zebra.csv

  • Modify config.py based on the location of the CSV files (the trained models will also be saved in that folder).
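If you prefer not to use the helper scripts, such file lists are straightforward to produce by hand. The sketch below assumes a plain one-path-per-row format for the single-domain lists and a two-column format for the pair list; check the repository's dataset scripts for the exact format they expect.

```python
import csv
import os

def write_filelist(image_dir, output_file):
    """Write one image path per row for a single domain."""
    paths = sorted(
        os.path.join(image_dir, name)
        for name in os.listdir(image_dir)
        if name.lower().endswith((".jpg", ".jpeg", ".png"))
    )
    with open(output_file, "w", newline="") as f:
        writer = csv.writer(f)
        for path in paths:
            writer.writerow([path])

def write_pair_list(first_csv, second_csv, output_file):
    """Zip two single-domain lists into (domain A, domain B) rows."""
    with open(first_csv) as fa, open(second_csv) as fb:
        rows_a = [row[0] for row in csv.reader(fa)]
        rows_b = [row[0] for row in csv.reader(fb)]
    with open(output_file, "w", newline="") as f:
        writer = csv.writer(f)
        for a, b in zip(rows_a, rows_b):
            writer.writerow([a, b])
```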

Train CycleGAN

  • Use the resnet model with default parameters:
python -m main --split_name='horse_zebra' --cycle_lambda=15 --rec_lambda=1 --num_separate_layers_g=2 --num_separate_layers_d=5 --num_no_skip_layers=0 --lsgan_lambda_a=1 --lsgan_lambda_b=1 --network_structure='resnet'
  • Use the auto-encoder model with default parameters:
python -m main --split_name='horse_zebra' --cycle_lambda=30 --rec_lambda=10 --num_separate_layers_g=3 --num_separate_layers_d=5 --num_no_skip_layers=0 --lsgan_lambda_a=2 --lsgan_lambda_b=2
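The lambda flags above weight the individual loss terms for the generator. A minimal NumPy sketch of how such terms might be combined, assuming an LSGAN adversarial loss and L1 cycle-consistency and reconstruction losses (the actual definitions live in the repository's loss code, and only one translation direction is shown):

```python
import numpy as np

def lsgan_gen_loss(fake_logits):
    """Least-squares GAN generator loss: push D(fake) toward 1."""
    return np.mean((fake_logits - 1.0) ** 2)

def l1_loss(x, y):
    return np.mean(np.abs(x - y))

def total_generator_loss(real_a, cycle_a, rec_a, fake_b_logits,
                         cycle_lambda=15.0, rec_lambda=1.0,
                         lsgan_lambda_b=1.0):
    """Weighted sum mirroring the command-line flags (A-to-B direction).

    cycle_a: A -> B -> A round-trip of real_a.
    rec_a:   A -> A reconstruction of real_a through the shared latent.
    """
    return (lsgan_lambda_b * lsgan_gen_loss(fake_b_logits)
            + cycle_lambda * l1_loss(real_a, cycle_a)
            + rec_lambda * l1_loss(real_a, rec_a))
```

With perfect reconstructions and a fully fooled discriminator every term vanishes, so the total loss is zero; raising `cycle_lambda` (15 for the resnet run, 30 for the auto-encoder run) puts more pressure on the round-trip consistency.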

Restore from previous checkpoints

python -m main --split_name='horse_zebra' --cycle_lambda=15 --rec_lambda=1 --num_separate_layers_g=2 --num_separate_layers_d=5 --num_no_skip_layers=0 --lsgan_lambda_a=1 --lsgan_lambda_b=1 --network_structure='resnet' --checkpoint_dir=path/to/saved/checkpoint

TensorBoard Output

tensorboard

Visualization

At the end of each epoch, an HTML file is saved for easier visualization of the results.
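Such an HTML export can be reproduced with a few lines of standard-library Python. The sketch below, with made-up file names, simply lays out the six images of each sample (real horse, real zebra, fake zebra, fake horse, cycled horse, cycled zebra) as one table row per sample:

```python
import html
import os

def save_epoch_html(image_rows, output_file, epoch):
    """Write an HTML table with one row of images per sample.

    image_rows: list of lists of image paths, e.g. the six images
    produced for each sample in an epoch.
    """
    rows = []
    for paths in image_rows:
        cells = "".join(
            '<td><img src="{}" width="128"></td>'.format(html.escape(p))
            for p in paths
        )
        rows.append("<tr>{}</tr>".format(cells))
    page = ("<html><body><h1>Epoch {}</h1>"
            "<table>{}</table></body></html>").format(epoch, "\n".join(rows))
    with open(output_file, "w") as f:
        f.write(page)
```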
