
prakashpandey9 / BicycleGAN

License: MIT
TensorFlow implementation of the NIPS paper "Toward Multimodal Image-to-Image Translation"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to BicycleGAN

BicycleGAN-pytorch
Pytorch implementation of BicycleGAN with implementation details
Stars: ✭ 99 (+230%)
Mutual labels:  generative-adversarial-network, image-translation, bicyclegan
Gesturegan
[ACM MM 2018 Oral] GestureGAN for Hand Gesture-to-Gesture Translation in the Wild
Stars: ✭ 136 (+353.33%)
Mutual labels:  generative-adversarial-network, image-translation
Chainer Pix2pix
chainer implementation of pix2pix
Stars: ✭ 130 (+333.33%)
Mutual labels:  generative-adversarial-network, pix2pix
Cocosnet
Cross-domain Correspondence Learning for Exemplar-based Image Translation. (CVPR 2020 Oral)
Stars: ✭ 211 (+603.33%)
Mutual labels:  generative-adversarial-network, image-translation
pytorch-CycleGAN
Pytorch implementation of CycleGAN.
Stars: ✭ 39 (+30%)
Mutual labels:  generative-adversarial-network, image-translation
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+36343.33%)
Mutual labels:  generative-adversarial-network, pix2pix
P2pala
Page to PAGE Layout Analysis Tool
Stars: ✭ 147 (+390%)
Mutual labels:  generative-adversarial-network, pix2pix
Bicyclegan
Toward Multimodal Image-to-Image Translation
Stars: ✭ 1,215 (+3950%)
Mutual labels:  generative-adversarial-network, pix2pix
pytorch-gans
PyTorch implementation of GANs (Generative Adversarial Networks). DCGAN, Pix2Pix, CycleGAN, SRGAN
Stars: ✭ 21 (-30%)
Mutual labels:  generative-adversarial-network, pix2pix
chainer-pix2pix
Chainer implementation for Image-to-Image Translation Using Conditional Adversarial Networks
Stars: ✭ 40 (+33.33%)
Mutual labels:  pix2pix, image-translation
Pix2Pix
Image to Image Translation using Conditional GANs (Pix2Pix) implemented using Tensorflow 2.0
Stars: ✭ 29 (-3.33%)
Mutual labels:  pix2pix, image-translation
Pixel2style2pixel
Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation"
Stars: ✭ 1,395 (+4550%)
Mutual labels:  generative-adversarial-network, image-translation
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (+223.33%)
Mutual labels:  generative-adversarial-network, image-translation
tiny-pix2pix
Redesigning the Pix2Pix model for small datasets with fewer parameters and different PatchGAN architecture
Stars: ✭ 21 (-30%)
Mutual labels:  generative-adversarial-network, pix2pix
Alice
NIPS 2017: ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching
Stars: ✭ 80 (+166.67%)
Mutual labels:  generative-adversarial-network, image-translation
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (+370%)
Mutual labels:  generative-adversarial-network, pix2pix
multitask-CycleGAN
Pytorch implementation of multitask CycleGAN with auxiliary classification loss
Stars: ✭ 88 (+193.33%)
Mutual labels:  generative-adversarial-network, image-translation
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+29116.67%)
Mutual labels:  generative-adversarial-network, pix2pix
Sparsely Grouped Gan
Code for paper "Sparsely Grouped Multi-task Generative Adversarial Networks for Facial Attribute Manipulation"
Stars: ✭ 68 (+126.67%)
Mutual labels:  generative-adversarial-network, image-translation
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+54823.33%)
Mutual labels:  generative-adversarial-network, pix2pix

Multimodal Image-to-Image Translation

This is a TensorFlow implementation of the NIPS paper "Toward Multimodal Image-to-Image Translation". The aim is to generate a distribution of output images given a single input image. Essentially, it is an extension of the image-to-image translation model (pix2pix) based on Conditional Generative Adversarial Networks.

The idea is to learn a low-dimensional latent representation of the target images using an encoder network, i.e., a distribution P(z) over latent codes z that could have generated the target images. In this model, the mapping from latent vector to output image and from output image to latent vector is bijective. The overall architecture consists of two cycles, B -> z -> B' and z -> B' -> z', hence the name BicycleGAN.
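
For illustration, here is a minimal NumPy sketch of the two cycles, using stand-in encoder and generator functions. The function names, shapes, latent dimension, and loss terms below are assumptions made for this sketch, not the repository's actual code.

import numpy as np

Z_DIM = 8  # latent dimension (assumed for this sketch)

def encoder(image):
    # Stand-in for E(.): returns the mean and log-variance of q(z | image).
    flat = image.reshape(image.shape[0], -1)
    return flat[:, :Z_DIM] * 0.0, flat[:, :Z_DIM] * 0.0  # placeholder statistics

def generator(image_a, z):
    # Stand-in for G(A, z): a real generator maps (A, z) to a fake output image B'.
    return image_a  # placeholder

A = np.zeros((1, 256, 256, 3), dtype=np.float32)  # input image
B = np.zeros((1, 256, 256, 3), dtype=np.float32)  # ground-truth output image

# Cycle 1 (B -> z -> B'): encode B, sample z via the reparameterization trick,
# then reconstruct the output image.
mu, log_var = encoder(B)
z = mu + np.exp(0.5 * log_var) * np.random.randn(1, Z_DIM)
B_prime = generator(A, z)
image_recon_loss = np.abs(B - B_prime).mean()  # L1 image reconstruction term

# Cycle 2 (z -> B' -> z'): sample z from the prior, generate B', then recover z.
z_sampled = np.random.randn(1, Z_DIM)
B_prime2 = generator(A, z_sampled)
z_recovered, _ = encoder(B_prime2)
latent_recon_loss = np.abs(z_sampled - z_recovered).mean()  # L1 latent recovery term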

Model Architecture

Image source: "Toward Multimodal Image-to-Image Translation" paper

Description

  • The model consists of three networks: a) a Discriminator, b) an Encoder, and c) a Generator.
  • A cVAE-GAN (Conditional Variational Autoencoder GAN) is used to encode the ground-truth output image B into a latent vector z, which is then used to reconstruct the output image B', i.e., B -> z -> B'.
  • For the inverse mapping (z -> B' -> z'), a cLR-GAN (Conditional Latent Regressor GAN) is used, in which the Generator produces B' from the input image A and a randomly sampled latent code z.
  • Combining these two models gives BicycleGAN.
  • The Generator has the same architecture as U-Net: an encoder and a decoder network connected by symmetric skip connections (a minimal sketch follows this list).
  • The Encoder uses several residual blocks for an efficient encoding of the input image.
  • The model is trained with the Adam optimizer and Batch Normalization, using a batch size of 1.
  • The LReLU (Leaky ReLU) activation function is used in all of the networks.
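
As a rough illustration of the U-Net-style Generator described above, here is a minimal tf.keras sketch. The depth, filter counts, and the way the latent code z is injected are assumptions made for this sketch; the repository's actual generator differs in detail.

import tensorflow as tf
from tensorflow.keras import layers

def build_unet_generator(img_size=256, z_dim=8):
    image = tf.keras.Input((img_size, img_size, 3), name="input_A")
    z = tf.keras.Input((z_dim,), name="latent_z")

    # Tile z spatially and concatenate it with the input image (one common choice).
    z_map = layers.Reshape((1, 1, z_dim))(z)
    z_map = layers.UpSampling2D(size=img_size)(z_map)
    x = layers.Concatenate()([image, z_map])

    # Encoder: strided convolutions with LReLU activations.
    d1 = layers.LeakyReLU(0.2)(layers.Conv2D(64, 4, strides=2, padding="same")(x))    # 128x128
    d2 = layers.LeakyReLU(0.2)(layers.Conv2D(128, 4, strides=2, padding="same")(d1))  # 64x64
    d3 = layers.LeakyReLU(0.2)(layers.Conv2D(256, 4, strides=2, padding="same")(d2))  # 32x32

    # Decoder: transposed convolutions with symmetric skip connections.
    u1 = layers.LeakyReLU(0.2)(layers.Conv2DTranspose(128, 4, strides=2, padding="same")(d3))  # 64x64
    u1 = layers.Concatenate()([u1, d2])
    u2 = layers.LeakyReLU(0.2)(layers.Conv2DTranspose(64, 4, strides=2, padding="same")(u1))   # 128x128
    u2 = layers.Concatenate()([u2, d1])
    u3 = layers.Conv2DTranspose(32, 4, strides=2, padding="same")(u2)                          # 256x256

    out = layers.Conv2D(3, 3, padding="same", activation="tanh")(u3)
    return tf.keras.Model(inputs=[image, z], outputs=out, name="unet_generator")

In practice, a Batch Normalization layer would typically follow each convolution; it is omitted here for brevity.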

Requirements

  • Python 2.7
  • NumPy
  • TensorFlow
  • SciPy
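
Assuming pip is available, the dependencies can typically be installed with the command below; the repository may require specific versions compatible with Python 2.7.

$ pip install numpy scipy tensorflow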

Training / Testing

After cloning this repository, you can train the network by running the following commands.

$ mkdir test_results
$ python main.py

References

  • Toward Multimodal Image-to-Image Translation (paper)
  • pix2pix (paper)

License

MIT
