
soumik12345 / Pix2Pix

Licence: other
Image-to-Image Translation using Conditional GANs (Pix2Pix), implemented in TensorFlow 2.0

Programming Languages

Jupyter Notebook
11667 projects

Projects that are alternatives of or similar to Pix2Pix

pix2pix
PyTorch implementation of Image-to-Image Translation with Conditional Adversarial Nets (pix2pix)
Stars: ✭ 36 (+24.14%)
Mutual labels:  pix2pix, image-translation, cityscapes
ganslate
Simple and extensible GAN image-to-image translation framework. Supports natural and medical images.
Stars: ✭ 17 (-41.38%)
Mutual labels:  pix2pix, image-translation
coursera-gan-specialization
Programming assignments and quizzes from all courses within the GANs specialization offered by deeplearning.ai
Stars: ✭ 277 (+855.17%)
Mutual labels:  pix2pix, conditional-gan
BicycleGAN
Tensorflow implementation of the NIPS paper "Toward Multimodal Image-to-Image Translation"
Stars: ✭ 30 (+3.45%)
Mutual labels:  pix2pix, image-translation
pix2pix-tensorflow
A minimal tensorflow implementation of pix2pix (Image-to-Image Translation with Conditional Adversarial Nets - https://phillipi.github.io/pix2pix/).
Stars: ✭ 22 (-24.14%)
Mutual labels:  pix2pix, image-translation
Munit
Multimodal Unsupervised Image-to-Image Translation
Stars: ✭ 2,404 (+8189.66%)
Mutual labels:  pix2pix, image-translation
Img2imggan
Implementation of the paper : "Toward Multimodal Image-to-Image Translation"
Stars: ✭ 49 (+68.97%)
Mutual labels:  pix2pix, image-translation
chainer-pix2pix
Chainer implementation for Image-to-Image Translation Using Conditional Adversarial Networks
Stars: ✭ 40 (+37.93%)
Mutual labels:  pix2pix, image-translation
Pytorch Pix2pix
Pytorch implementation of pix2pix for various datasets.
Stars: ✭ 74 (+155.17%)
Mutual labels:  pix2pix, image-translation
Unit
Unsupervised Image-to-Image Translation
Stars: ✭ 1,809 (+6137.93%)
Mutual labels:  pix2pix, image-translation
Everybody-dance-now
Implementation of paper everybody dance now for Deep learning course project
Stars: ✭ 22 (-24.14%)
Mutual labels:  pix2pix, conditional-gan
ember-google-maps
A friendly Ember addon for working with Google Maps.
Stars: ✭ 93 (+220.69%)
Mutual labels:  google-maps
toronto-apartment-finder
[really old and probably doesn't work] Slack bot to post relevant Toronto apartment listings from Kijiji & Craigslist
Stars: ✭ 23 (-20.69%)
Mutual labels:  google-maps
rubymap
Find out what's going on in your local Ruby community
Stars: ✭ 44 (+51.72%)
Mutual labels:  google-maps
deep-learning-for-document-dewarping
An application of high resolution GANs to dewarp images of perturbed documents
Stars: ✭ 100 (+244.83%)
Mutual labels:  pix2pix
gans-2.0
Generative Adversarial Networks in TensorFlow 2.0
Stars: ✭ 76 (+162.07%)
Mutual labels:  conditional-gan
svelte-googlemaps
Svelte Google Maps Components
Stars: ✭ 62 (+113.79%)
Mutual labels:  google-maps
cityscapes-to-coco-conversion
Cityscapes to CoCo Format Conversion Tool for Mask-RCNN and Detectron
Stars: ✭ 40 (+37.93%)
Mutual labels:  cityscapes
jquery-google-reviews
simple jquery Plugin that utilizes Google API to get data from a Place on Google Maps
Stars: ✭ 33 (+13.79%)
Mutual labels:  google-maps
geocoder
Geocoder is a Typescript library which helps you build geo-aware applications by providing a powerful abstraction layer for geocoding manipulations
Stars: ✭ 28 (-3.45%)
Mutual labels:  google-maps

Pix2Pix

Badges: Binder · Papers with Code · HitCount

TensorFlow 2.0 implementation of the paper Image-to-Image Translation with Conditional Adversarial Networks (pix2pix) by Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros.

Architecture

Generator

  • The Generator is a U-Net-like model with skip connections between the encoder and decoder.
  • Each encoder block is Convolution -> BatchNormalization -> Activation (LeakyReLU)
  • Each decoder block is Conv2DTranspose -> BatchNormalization -> Dropout (optional) -> Activation (ReLU); see the code sketch below

Generator Architecture
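
For concreteness, here is a minimal sketch of these encoder and decoder blocks in TensorFlow 2 / Keras. The function names, 4×4 kernels, and 0.5 dropout rate are illustrative assumptions, not necessarily the exact settings used in this repository.

```python
import tensorflow as tf

def encoder_block(filters, size=4, apply_batchnorm=True):
    """Downsampling block: Conv2D -> (BatchNormalization) -> LeakyReLU."""
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2D(filters, size, strides=2,
                                     padding='same', use_bias=False))
    if apply_batchnorm:
        block.add(tf.keras.layers.BatchNormalization())
    block.add(tf.keras.layers.LeakyReLU())
    return block

def decoder_block(filters, size=4, apply_dropout=False):
    """Upsampling block: Conv2DTranspose -> BatchNormalization -> (Dropout) -> ReLU."""
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
                                              padding='same', use_bias=False))
    block.add(tf.keras.layers.BatchNormalization())
    if apply_dropout:
        block.add(tf.keras.layers.Dropout(0.5))
    block.add(tf.keras.layers.ReLU())
    return block

# In the full U-Net, each decoder block's output is concatenated with the
# matching encoder block's output (the skip connection) before the next block.
```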

Discriminator

  • PatchGAN Discriminator
  • Each discriminator block is Convolution -> BatchNormalization -> Activation (LeakyReLU); a PatchGAN sketch follows the architecture diagram below

Discriminator Architecture
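
A rough sketch of the PatchGAN discriminator under the same assumptions (the 256×256×3 input shape, filter counts, and number of blocks are illustrative). It concatenates the input image with the target or generated image and ends with a one-channel map in which each value classifies one patch as real or fake.

```python
import tensorflow as tf

def build_patchgan_discriminator(input_shape=(256, 256, 3)):
    inp = tf.keras.layers.Input(shape=input_shape, name='input_image')
    tar = tf.keras.layers.Input(shape=input_shape, name='target_image')

    # Condition on the input image by channel-wise concatenation.
    x = tf.keras.layers.Concatenate()([inp, tar])

    # Stack of discriminator blocks: Conv2D -> BatchNormalization -> LeakyReLU.
    for filters in (64, 128, 256):
        x = tf.keras.layers.Conv2D(filters, 4, strides=2,
                                   padding='same', use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.LeakyReLU()(x)

    # One logit per patch instead of a single real/fake score for the whole image.
    patch_logits = tf.keras.layers.Conv2D(1, 4, strides=1, padding='same')(x)
    return tf.keras.Model(inputs=[inp, tar], outputs=patch_logits)
```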

Loss Functions

Generator Loss

Generator Loss Equation

The loss function can also be boiled down to

Loss = GAN_Loss + Lambda * L1_Loss

where GAN_Loss is the sigmoid cross-entropy between the discriminator's output on the generated images and an array of ones, L1_Loss is the mean absolute error between the generated image and the target, and Lambda = 100 (the weight chosen by the authors).
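
A sketch of this generator loss in TensorFlow 2, assuming the discriminator returns raw logits; the names generator_loss and LAMBDA are illustrative, not taken from this repository.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 100  # weight on the L1 term, as chosen by the authors

def generator_loss(disc_generated_output, gen_output, target):
    # GAN term: the generator wants the discriminator to label its output as real (ones).
    gan_loss = bce(tf.ones_like(disc_generated_output), disc_generated_output)
    # L1 term: pixel-wise distance between the generated image and the target.
    l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
    return gan_loss + LAMBDA * l1_loss
```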

Discriminator Loss

The discriminator loss function can be written as

Loss = disc_loss(real_output, array of ones) + disc_loss(generated_output, array of zeros)

where disc_loss is the sigmoid cross-entropy loss, real_output is the discriminator's output on real (input, target) pairs, and generated_output is its output on (input, generated) pairs.
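
A matching sketch of the discriminator loss under the same assumptions (raw logits, illustrative names):

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(disc_real_output, disc_generated_output):
    # Real (input, target) pairs should be classified as ones...
    real_loss = bce(tf.ones_like(disc_real_output), disc_real_output)
    # ...and (input, generated) pairs as zeros.
    generated_loss = bce(tf.zeros_like(disc_generated_output), disc_generated_output)
    return real_loss + generated_loss
```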

Experiments with Standard Architecture

Experiment 1

Resource Credits: Trained on Nvidia Quadro M4000 provided by Paperspace Gradient.

Dataset: Facades

Result:

Experiment 1 Result

Experiment 2

Resource Credits: Trained on Nvidia Quadro P5000 provided by Paperspace Gradient.

Dataset: Maps

Result:

Experiment 2 Result

Experiment 3

Resource Credits: Trained on Nvidia Tesla V100 provided by DeepWrex Technologies.

Dataset: Cityscapes

Result:

Experiment 3 Result

Experiments with Mish Activation Function

Experiment 1 Mish

Resource Credits: Trained on Nvidia Quadro P5000 provided by Paperspace Gradient.

Dataset: Facades

Generator Architecture:

  • The Generator is a U-Net-like model with skip connections between the encoder and decoder.
  • Each encoder block is Convolution -> BatchNormalization -> Activation (Mish)
  • Each decoder block is Conv2DTranspose -> BatchNormalization -> Dropout (optional) -> Activation (Mish)

Discriminator:

  • PatchGAN Discriminator
  • Each discriminator block is Convolution -> BatchNormalization -> Activation (Mish); a sketch of the Mish activation follows this list
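
Mish is defined as mish(x) = x * tanh(softplus(x)). A minimal TensorFlow 2 sketch of dropping it into a block as the activation (the exact implementation used in these experiments may differ):

```python
import tensorflow as tf

def mish(x):
    """Mish activation: x * tanh(softplus(x))."""
    return x * tf.math.tanh(tf.math.softplus(x))

# Example: an encoder block with Mish in place of LeakyReLU.
encoder_block_mish = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 4, strides=2, padding='same', use_bias=False),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Activation(mish),
])
```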

Result:

Experiment 1 Mish Result

Experiment 2 Mish

Resource Credits: Trained on Nvidia Tesla P100 provided by Google Colab.

Dataset: Facades

Generator Architecture:

  • The Generator is a U-Net-like model with skip connections between the encoder and decoder.
  • Each encoder block is Convolution -> BatchNormalization -> Activation (Mish)
  • Each decoder block is Conv2DTranspose -> BatchNormalization -> Dropout (optional) -> Activation (Mish)

Discriminator:

  • PatchGAN Discriminator
  • Each discriminator block is Convolution -> BatchNormalization -> Activation (ReLU)

Result:

Experiment 2 Mish Result

Experiment 3 Mish

Resource Credits: Trained on Nvidia Quadro P5000 provided by Paperspace Gradient.

Dataset: Facades

Generator Architecture:

  • The Generator is a Unet-Like model with skip connections between encoder and decoder.
  • Encoder Block is Convolution -> BatchNormalization -> Activation (Mish)
  • Decode Blocks is Conv2DTranspose -> BatchNormalization -> Dropout (optional) -> Activation (Mish) for the first three blocks are Conv2DTranspose -> BatchNormalization -> Dropout (optional) -> Activation (ReLU)

Discriminator:

  • PatchGAN Discriminator
  • Each discriminator block is Convolution -> BatchNormalization -> Activation (ReLU)

Result:

Experiment 3 Mish Result

