kvmanohar22 / Img2imggan

License: MIT
Implementation of the paper: "Toward Multimodal Image-to-Image Translation"

Projects that are alternatives of or similar to Img2imggan

Matlab Gan
MATLAB implementations of Generative Adversarial Networks -- from GAN to Pixel2Pixel, CycleGAN
Stars: ✭ 63 (+28.57%)
Mutual labels:  gans, image-generation, pix2pix
Selectiongan
[CVPR 2019 Oral] Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation
Stars: ✭ 366 (+646.94%)
Mutual labels:  gans, image-generation, image-translation
Attentiongan
AttentionGAN for Unpaired Image-to-Image Translation & Multi-Domain Image-to-Image Translation
Stars: ✭ 341 (+595.92%)
Mutual labels:  gans, image-generation, image-translation
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+22212.24%)
Mutual labels:  gans, image-generation, pix2pix
Gesturegan
[ACM MM 2018 Oral] GestureGAN for Hand Gesture-to-Gesture Translation in the Wild
Stars: ✭ 136 (+177.55%)
Mutual labels:  gans, image-generation, image-translation
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+33526.53%)
Mutual labels:  gans, image-generation, pix2pix
Fq Gan
Official implementation of FQ-GAN
Stars: ✭ 137 (+179.59%)
Mutual labels:  gans, image-generation, image-translation
CoCosNet-v2
CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation
Stars: ✭ 312 (+536.73%)
Mutual labels:  image-generation, gans, image-translation
TriangleGAN
TriangleGAN, ACM MM 2019.
Stars: ✭ 28 (-42.86%)
Mutual labels:  image-generation, image-translation
Text To Image Synthesis
Pytorch implementation of Generative Adversarial Text-to-Image Synthesis paper
Stars: ✭ 288 (+487.76%)
Mutual labels:  gans, image-generation
Gan Compression
[CVPR 2020] GAN Compression: Efficient Architectures for Interactive Conditional GANs
Stars: ✭ 800 (+1532.65%)
Mutual labels:  gans, pix2pix
CoMoGAN
CoMoGAN: continuous model-guided image-to-image translation. CVPR 2021 oral.
Stars: ✭ 139 (+183.67%)
Mutual labels:  gans, image-translation
pix2pix-tensorflow
A minimal tensorflow implementation of pix2pix (Image-to-Image Translation with Conditional Adversarial Nets - https://phillipi.github.io/pix2pix/).
Stars: ✭ 22 (-55.1%)
Mutual labels:  pix2pix, image-translation
Contrastive Unpaired Translation
Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)
Stars: ✭ 822 (+1577.55%)
Mutual labels:  gans, image-generation
pix2pix
PyTorch implementation of Image-to-Image Translation with Conditional Adversarial Nets (pix2pix)
Stars: ✭ 36 (-26.53%)
Mutual labels:  pix2pix, image-translation
AODA
Official implementation of "Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis"(WACV 2022/CVPRW 2021)
Stars: ✭ 44 (-10.2%)
Mutual labels:  image-generation, gans
Pix2depth
DEPRECATED: Depth Map Estimation from Monocular Images
Stars: ✭ 293 (+497.96%)
Mutual labels:  gans, pix2pix
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (+648.98%)
Mutual labels:  gans, image-generation
Awesome Image Translation
A collection of awesome resources image-to-image translation.
Stars: ✭ 408 (+732.65%)
Mutual labels:  image-generation, image-translation
Deepnude An Image To Image Technology
DeepNude's algorithm and general image generation theory and practice research, including pix2pix, CycleGAN, UGATIT, DCGAN, SinGAN, ALAE, mGANprior, StarGAN-v2 and VAE models (TensorFlow2 implementation). DeepNude的算法以及通用生成对抗网络(GAN,Generative Adversarial Network)图像生成的理论与实践研究。
Stars: ✭ 4,029 (+8122.45%)
Mutual labels:  image-generation, pix2pix

img2ImgGAN

Implementation of the paper: "Toward Multimodal Image-to-Image Translation"

Results

The first column shows the input and the second column the ground truth. The third column is the image generated by cLR-GAN, and the last column the image generated by cVAE-GAN. Results were obtained on the validation set.

Model Architecture Visualization

  • Network

Fig 1: Structure of BicycleGAN. (Image taken from the paper)

  • Tensorboard visualization of the entire network

cVAE-GAN Network
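
The two training cycles in Fig. 1 can be sketched with toy linear maps standing in for the real networks. Everything below (names E, G, the dimensions) is illustrative only, not the repo's actual API; it just shows the data flow of the cVAE-GAN and cLR-GAN cycles:

```python
import numpy as np

rng = np.random.default_rng(0)
Z_DIM, IMG_DIM = 8, 64  # toy sizes, not the paper's

# Stand-ins for the real networks: E encodes an image to a latent
# code, G generates an image from an input image plus a latent code.
W_e = rng.normal(size=(IMG_DIM, Z_DIM))
W_g = rng.normal(size=(IMG_DIM + Z_DIM, IMG_DIM))

def E(b):     # encoder: image B -> latent code z
    return b @ W_e

def G(a, z):  # generator: (image A, latent z) -> image B_hat
    return np.concatenate([a, z]) @ W_g

a = rng.normal(size=IMG_DIM)  # input image (domain A)
b = rng.normal(size=IMG_DIM)  # ground-truth image (domain B)

# cVAE-GAN cycle: B -> z -> B_hat (reconstruct B from its own code)
z_enc = E(b)
b_hat = G(a, z_enc)

# cLR-GAN cycle: z -> B_hat -> z_hat (recover the sampled code)
z = rng.normal(size=Z_DIM)
z_hat = E(G(a, z))
```

BicycleGAN simply trains both cycles jointly, which is why the graph generated below contains both sub-networks.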

Dependencies

  • tensorflow (1.4.0)
  • numpy (1.13.3)
  • scikit-image (0.13.1)
  • scipy (1.0.0)

To install the above dependencies, run:

$ sudo pip install -r requirements.txt

Structure

 -img2imgGAN/
            -nnet
            -utils
            -data/
                  -edges2handbags
                  -edges2shoes
                  -facades
                  -maps

Setup

  • Download the datasets from the following links

  • To generate numpy files for the datasets,

    $ python main.py --create <dataset_name>
    

    This creates train.npy and val.npy in the corresponding dataset directory. These files can be very large; as an alternative, the next step reads images at run time during training.
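
The --create step can be approximated as follows. This is a minimal sketch: the file name train.npy matches the README, but the stacking logic is an assumption about the repo's format, and dummy arrays stand in for images read with scikit-image:

```python
import os
import tempfile
import numpy as np

# Stand-in for a dataset: four dummy 32x32 RGB "images" instead of
# real files loaded from disk.
images = [np.zeros((32, 32, 3), dtype=np.uint8) for _ in range(4)]

dataset_dir = tempfile.mkdtemp()

# Stack all images into one array and save it, as --create does.
train = np.stack(images)
np.save(os.path.join(dataset_dir, "train.npy"), train)

loaded = np.load(os.path.join(dataset_dir, "train.npy"))
print(loaded.shape)  # (4, 32, 32, 3)
```

Saving one array per split is what makes the files so large: every image is held uncompressed in a single .npy.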

  • As an alternative to the above step, you can read the images at run time during training. To do this, create files containing the paths to the images by running the following script from the root of this repo:

    $ bash setup_dataset.sh
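
The script is roughly equivalent to the following sketch (illustrative only; the actual file names written by setup_dataset.sh may differ). It writes one image path per line so the loader can read images lazily at run time:

```python
import os
import tempfile

# Create a dummy dataset directory with a few fake image files.
root = tempfile.mkdtemp()
train_dir = os.path.join(root, "facades", "train")
os.makedirs(train_dir)
for i in range(3):
    open(os.path.join(train_dir, f"{i}.jpg"), "w").close()

# Write one absolute path per line for the run-time image loader.
list_path = os.path.join(root, "facades", "train_paths.txt")
with open(list_path, "w") as f:
    for name in sorted(os.listdir(train_dir)):
        f.write(os.path.join(train_dir, name) + "\n")

n_paths = sum(1 for _ in open(list_path))
print(n_paths)  # 3
```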
    

Usage

  • Generating graph:

    To visualize the connections between the graph nodes, generate the graph with the archi flag. This is useful for verifying that the connections are correct. By default, this generates the graph for BicycleGAN:

    $ python main.py --archi
    

    To generate the model graph for cvae-gan,

    $ python main.py --model cvae-gan --archi
    

    Possible models are: cvae-gan, clr-gan, bicycle (default)

    To visualize the graph on tensorboard, run the following command:

    $ tensorboard --logdir=logs/summary/Run_1 --host=127.0.0.1
    

    Replace Run_1 with the name of the latest run directory.

  • Complete list of options:

    $ python main.py --help
    

  • Training the network

    To train model (say cvae-gan) on dataset (say facades) from scratch,

    $ python main.py --train --model cvae-gan --dataset facades
    

    By default, the above command trains a model that generates images from the distribution of domain B conditioned on images from the distribution of domain A. To switch the direction:

    $ python main.py --train --model cvae-gan --dataset facades --direction b2a
    

    To resume the training from a checkpoint,

    $ python main.py --resume <path_to_checkpoint> --model cvae-gan
    

  • Testing the network

    • Download the checkpoint file from here and place it in the ckpt directory

    To test with the provided trained models (by default, the model generates 5 different images by sampling 5 different noise vectors):

    $ ./test.sh <dataset_name> <test_image_path>
    

    To generate multiple output samples,

    $ ./test.sh <dataset_name> <test_image_path> <num_samples>
    

    Try it with some of the test samples present in the directory imgs/test
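
Sampling several outputs for one input amounts to drawing several latent codes and running the generator once per code. A toy version (the real generator, noise dimension, and sample count live in the repo's code; 5 mirrors test.sh's default):

```python
import numpy as np

rng = np.random.default_rng(0)
Z_DIM, N_SAMPLES = 8, 5  # 5 samples, matching test.sh's default

def generator(a, z):
    # Toy stand-in for the trained generator network.
    return a + z.sum()

a = np.ones((4, 4))  # dummy input image

# One output image per sampled noise vector.
outputs = [generator(a, rng.normal(size=Z_DIM)) for _ in range(N_SAMPLES)]
print(len(outputs))  # 5
```

Because each z is drawn independently, the outputs differ from each other, which is exactly the multimodality the paper is after.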

Visualizations

Loss of the discriminator and generator as a function of iterations on the edges2shoes dataset.

TODO

  • [x] Residual Encoder
  • [ ] Multiple discriminators for cVAE-GAN and cLR-GAN
  • [ ] Inducing noise to all the layers of the generator
  • [ ] Train the model on rest of the datasets

License

Released under the MIT license
