
bowenc0221 / MXNet-GAN

Licence: other
MXNet Implementation of DCGAN, Conditional GAN, pix2pix

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to MXNet-GAN

coursera-gan-specialization
Programming assignments and quizzes from all courses within the GANs specialization offered by deeplearning.ai
Stars: ✭ 277 (+1104.35%)
Mutual labels:  dcgan, pix2pix
pytorch-gans
PyTorch implementation of GANs (Generative Adversarial Networks). DCGAN, Pix2Pix, CycleGAN, SRGAN
Stars: ✭ 21 (-8.7%)
Mutual labels:  dcgan, pix2pix
Generative-Model
Repository for implementation of generative models with Tensorflow 1.x
Stars: ✭ 66 (+186.96%)
Mutual labels:  dcgan, cgan
Voice-Denoising-AN
A Conditional Generative Adversarial Network (cGAN) was adapted for the task of source denoising of noisy voice auditory images. The base architecture is adapted from Pix2Pix.
Stars: ✭ 42 (+82.61%)
Mutual labels:  pix2pix, cgan
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+38008.7%)
Mutual labels:  dcgan, pix2pix
Pytorch-conditional-GANs
Implementation of Conditional Generative Adversarial Networks in PyTorch
Stars: ✭ 91 (+295.65%)
Mutual labels:  dcgan, cgan
GANs-Keras
GANs Implementations in Keras
Stars: ✭ 24 (+4.35%)
Mutual labels:  dcgan, cgan
cgan-face-generator
Face generator from sketches using cGAN (pix2pix) model
Stars: ✭ 52 (+126.09%)
Mutual labels:  pix2pix, cgan
Matlab Gan
MATLAB implementations of Generative Adversarial Networks -- from GAN to Pixel2Pixel, CycleGAN
Stars: ✭ 63 (+173.91%)
Mutual labels:  dcgan, pix2pix
Deepnude An Image To Image Technology
DeepNude's algorithm and general image generation theory and practice research, including pix2pix, CycleGAN, UGATIT, DCGAN, SinGAN, ALAE, mGANprior, StarGAN-v2 and VAE models (TensorFlow2 implementation).
Stars: ✭ 4,029 (+17417.39%)
Mutual labels:  dcgan, pix2pix
Advanced Models
Provides various well-known neural network models (DCGAN, VAE, ResNet, etc.).
Stars: ✭ 48 (+108.7%)
Mutual labels:  dcgan, cgan
Pytorch cpp
Deep Learning sample programs using PyTorch in C++
Stars: ✭ 114 (+395.65%)
Mutual labels:  dcgan, pix2pix
Igan
Interactive Image Generation via Generative Adversarial Networks
Stars: ✭ 3,845 (+16617.39%)
Mutual labels:  dcgan, pix2pix
Ganotebooks
wgan, wgan2(improved, gp), infogan, and dcgan implementation in lasagne, keras, pytorch
Stars: ✭ 1,446 (+6186.96%)
Mutual labels:  dcgan, pix2pix
Pytorch-Basic-GANs
Simple Pytorch implementations of most used Generative Adversarial Network (GAN) varieties.
Stars: ✭ 101 (+339.13%)
Mutual labels:  dcgan, cgan
djl
An Engine-Agnostic Deep Learning Framework in Java
Stars: ✭ 3,080 (+13291.3%)
Mutual labels:  mxnet
deep-learning-for-document-dewarping
An application of high resolution GANs to dewarp images of perturbed documents
Stars: ✭ 100 (+334.78%)
Mutual labels:  pix2pix
mxnet-retrain
Create an MXNet finetuner (retrain) for Mac/Linux; no need to install Docker. Supports CPU and GPU (eGPU/cuDNN) and the Inception, ResNet, SqueezeNet, MobileNet... networks.
Stars: ✭ 32 (+39.13%)
Mutual labels:  mxnet
gluon-faster-rcnn
Faster R-CNN implementation with MXNet Gluon API
Stars: ✭ 31 (+34.78%)
Mutual labels:  mxnet
FCOS GluonCV
FCOS: Fully Convolutional One-Stage Object Detection.
Stars: ✭ 24 (+4.35%)
Mutual labels:  mxnet

pix2pix in MXNet

MXNet implementations of various GANs, including DCGAN [1], CGAN [2], and image-to-image translation [3] (a.k.a. pix2pix).

The main focus of this repo is to implement an MXNet version of pix2pix for research purposes.
Please refer to the paper by Isola et al. for more details.
The original code was implemented in Torch and PyTorch.

This is a working repo that initially served as the final project for UIUC ECE544NA.
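
For context, the pix2pix objective from Isola et al. combines a conditional adversarial loss with an L1 reconstruction term weighted by lambda (100 in the paper). The snippet below is a minimal sketch of the generator-side loss written with MXNet NDArray operations; it is not this repo's exact code and the variable names are illustrative.

    import mxnet as mx

    # Minimal sketch of the pix2pix generator objective (Isola et al.):
    # adversarial loss + lambda * L1 reconstruction loss (lambda = 100 in the paper).
    # Not this repo's exact code; names are illustrative.
    def generator_loss(d_fake, fake_B, real_B, lam=100.0):
        # d_fake: discriminator probabilities (after sigmoid) for generated pairs
        gan_loss = -mx.nd.mean(mx.nd.log(d_fake + 1e-12))
        l1_loss = mx.nd.mean(mx.nd.abs(fake_B - real_B))
        return gan_loss + lam * l1_loss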

Prerequisites

  • Linux (Tested in Ubuntu 16.04)
  • Python 2 (you may need to modify some code if you are using Python 3)
  • CPU or NVIDIA GPU + CUDA and cuDNN

Getting Started

Installation

  • Build MXNet from source (tested with MXNet v0.11.1).
    git clone --recursive https://github.com/apache/incubator-mxnet mxnet
    cd mxnet
    cp make/config.mk .
    vim config.mk  # change the configuration to enable CUDA and cuDNN; see the config.mk sketch after this list
    make -j8
    cd python
    sudo python setup.py install
  • Clone this repo:
    git clone https://github.com/bowenc0221/MXNet-GAN
    cd MXNet-GAN
  • Put the MXNet Python package into ./external/mxnet/$(MXNET_VERSION) and set MXNET_VERSION in ./experiments/*.yaml to $(YOUR_MXNET_PACKAGE).
  • Install python packages.
    pip install Cython
    pip install EasyDict
    pip install opencv-python
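
For reference, enabling GPU support in the MXNet build usually comes down to flipping the CUDA/cuDNN flags in config.mk. The lines below are the relevant settings from the stock make/config.mk template; adjust USE_CUDA_PATH to your local CUDA installation.

    # In mxnet/config.mk (copied from make/config.mk), change:
    USE_CUDA = 1
    USE_CUDA_PATH = /usr/local/cuda
    USE_CUDNN = 1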

DCGAN train/test

  • Train
    python dcgan/train.py --cfg experiments/dcgan/mnist_dcgan.yaml
  • Test
    python dcgan/test.py --cfg experiments/dcgan/mnist_dcgan.yaml
  • Warning
    • I only implemented DCGAN for MNIST. You may need to write your own data iterator for other datasets (a minimal sketch follows this section).
    • I did not tune parameters for DCGAN; I only trained for 1 epoch!
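
If you want to train DCGAN on something other than MNIST, a simple way to get started is to wrap your images in an mx.io.NDArrayIter. The sketch below is not part of this repo; it only shows the general shape of such an iterator, with images rescaled to [-1, 1] as a tanh-output generator expects.

    import numpy as np
    import mxnet as mx

    # Minimal sketch (not part of this repo) of a data iterator for a custom
    # image dataset: images are an (N, C, H, W) array in [0, 255], rescaled
    # to [-1, 1] to match a tanh generator output, then wrapped in NDArrayIter.
    def get_custom_iter(images, batch_size=64):
        data = images.astype(np.float32) / 127.5 - 1.0
        return mx.io.NDArrayIter(data=data, batch_size=batch_size, shuffle=True)

    # Example usage with random data standing in for a real dataset:
    dummy_images = np.random.randint(0, 256, size=(1000, 3, 64, 64))
    train_iter = get_custom_iter(dummy_images)
    for batch in train_iter:
        real = batch.data[0]   # NDArray of shape (64, 3, 64, 64)
        break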

CGAN train/test

  • Train
    python cgan/train.py --cfg experiments/cgan/mnist_cgan.yaml
  • Test
    python cgan/test.py --cfg experiments/cgan/mnist_cgan.yaml
  • Warning
    • I only implemented CGAN for MNIST. You may need to write your own data iterator for other datasets.
    • I did not tune parameters for CGAN; I only trained for 1 epoch!
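
The key difference from DCGAN is that the networks also see the class label. A common way to condition the generator, sketched below with the MXNet symbol API, is to concatenate a one-hot label vector with the noise vector before the first layer; the actual layer names and sizes in this repo may differ.

    import mxnet as mx

    # Illustrative sketch of conditioning the generator on a label (CGAN):
    # the one-hot label is concatenated with the noise vector, and the rest
    # of the network is a standard DCGAN-style generator. Names and sizes
    # are assumptions, not this repo's exact code.
    z_dim, num_classes = 100, 10

    noise = mx.sym.Variable('noise')           # shape: (batch, z_dim)
    label = mx.sym.Variable('label_onehot')    # shape: (batch, num_classes)
    g_in = mx.sym.Concat(noise, label, dim=1)  # shape: (batch, z_dim + num_classes)

    h = mx.sym.FullyConnected(g_in, num_hidden=1024, name='g_fc1')
    h = mx.sym.Activation(h, act_type='relu')
    # ... followed by the usual deconvolution stack up to a 28x28 MNIST image.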

pix2pix train/test

  • Download a pix2pix dataset (e.g. facades):
    bash ./datasets/download_pix2pix_dataset.sh facades
    Please refer to pytorch-CycleGAN-and-pix2pix for dataset information.
  • Train a model:
    • AtoB
      python pix2pix/train.py --cfg experiments/pix2pix/facades_pix2pix_AtoB.yaml
    • BtoA
      python pix2pix/train.py --cfg experiments/pix2pix/facades_pix2pix_BtoA.yaml
  • Test a model:
    • AtoB
      python pix2pix/test.py --cfg experiments/pix2pix/facades_pix2pix_AtoB.yaml
    • BtoA
      python pix2pix/test.py --cfg experiments/pix2pix/facades_pix2pix_BtoA.yaml
  • PatchGAN
    • You can use any PatchGAN listed in the paper by changing netD in the configuration to 'n_layers' and setting n_layers to any number from 0 to 6 (a sketch of this discriminator family follows this list).
    • n_layers = 0: pixelGAN 1x1 discriminator
    • n_layers = 1: patchGAN 16x16 discriminator
    • n_layers = 3: patchGAN 70x70 discriminator (default setting in the paper)
    • n_layers = 6: imageGAN 256x256 discriminator
  • Train pix2pix on your own dataset
    • I only implemented pix2pix for the cityscapes and facades datasets, but you can easily generalize to your own dataset.
    • Prepare pix2pix datasets according to this link.
    • Modify num_train and num_val in ./data/generate_train_val.py and run the script.
    • In the configuration file, modify the dataset section as well as the batch size and number of epochs.
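
To make the n_layers option concrete, the sketch below shows how an n_layers PatchGAN discriminator is typically assembled (following the architecture in the pix2pix paper and its reference implementations): n stride-2 4x4 convolutions, one stride-1 convolution, and a final 1-channel stride-1 convolution, which yields the 16x16 / 70x70 / whole-image receptive fields listed above. This is an illustrative reconstruction, not the exact code in this repo.

    import mxnet as mx

    # Illustrative n_layers PatchGAN discriminator (not this repo's exact code).
    # n_layers=1 -> 16x16 receptive field, n_layers=3 -> 70x70 (paper default),
    # n_layers=6 -> receptive field covering the whole 256x256 image.
    def patchgan(data, n_layers=3, ndf=64):
        if n_layers == 0:
            # pixelGAN: 1x1 convolutions only, i.e. a per-pixel discriminator
            x = mx.sym.Convolution(data, kernel=(1, 1), num_filter=ndf, name='d_conv0')
            x = mx.sym.LeakyReLU(x, act_type='leaky', slope=0.2)
            return mx.sym.Convolution(x, kernel=(1, 1), num_filter=1, name='d_out')
        x = mx.sym.Convolution(data, kernel=(4, 4), stride=(2, 2), pad=(1, 1),
                               num_filter=ndf, name='d_conv0')
        x = mx.sym.LeakyReLU(x, act_type='leaky', slope=0.2)
        for i in range(1, n_layers):
            x = mx.sym.Convolution(x, kernel=(4, 4), stride=(2, 2), pad=(1, 1),
                                   num_filter=ndf * min(2 ** i, 8), name='d_conv%d' % i)
            x = mx.sym.BatchNorm(x, fix_gamma=False, name='d_bn%d' % i)
            x = mx.sym.LeakyReLU(x, act_type='leaky', slope=0.2)
        x = mx.sym.Convolution(x, kernel=(4, 4), stride=(1, 1), pad=(1, 1),
                               num_filter=ndf * min(2 ** n_layers, 8),
                               name='d_conv%d' % n_layers)
        x = mx.sym.BatchNorm(x, fix_gamma=False, name='d_bn%d' % n_layers)
        x = mx.sym.LeakyReLU(x, act_type='leaky', slope=0.2)
        # 1-channel "patch" output; each spatial position scores one patch
        return mx.sym.Convolution(x, kernel=(4, 4), stride=(1, 1), pad=(1, 1),
                                  num_filter=1, name='d_out')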

Results

facades: Ground Truth / AtoB / BtoA (result images)

cityscapes: Ground Truth / AtoB / BtoA (result images)

edges2shoes: Ground Truth / AtoB / BtoA (result images)

Citation

If you use this code for your research, here is a list of papers you can refer to:

@inproceedings{goodfellow2014generative,
  title={Generative adversarial nets},
  author={Goodfellow, Ian and Pouget-Abadie, Jean and Mirza, Mehdi and Xu, Bing and Warde-Farley, David and Ozair, Sherjil and Courville, Aaron and Bengio, Yoshua},
  booktitle={Advances in neural information processing systems},
  pages={2672--2680},
  year={2014}
}
@article{mirza2014conditional,
  title={Conditional generative adversarial nets},
  author={Mirza, Mehdi and Osindero, Simon},
  journal={arXiv preprint arXiv:1411.1784},
  year={2014}
}
@article{radford2015unsupervised,
  title={Unsupervised representation learning with deep convolutional generative adversarial networks},
  author={Radford, Alec and Metz, Luke and Chintala, Soumith},
  journal={arXiv preprint arXiv:1511.06434},
  year={2015}
}
@article{pix2pix2016,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  journal={arXiv preprint arXiv:1611.07004},
  year={2016}
}

Reference

[1] DCGAN: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
[2] CGAN: Conditional Generative Adversarial Nets
[3] pix2pix: Image-to-Image Translation with Conditional Adversarial Networks

Acknowledgments

Code is inspired by:
[1] MXNet GAN Tutorial
[2] MXNet DCGAN Example
[3] A MXNet W-GAN Code
[4] pytorch-CycleGAN-and-pix2pix
[5] Gluon GAN Tutorials
