
SunnerLi / RecycleGAN

Licence: other
The simplest implementation toward the idea of Re-cycle GAN

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to RecycleGAN

Pix2pixhd
Synthesizing and manipulating 2048x1024 images with conditional GANs
Stars: ✭ 5,553 (+8066.18%)
Mutual labels:  generative-adversarial-network, image-to-image-translation
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+12789.71%)
Mutual labels:  generative-adversarial-network, image-to-image-translation
DeepSIM
Official PyTorch implementation of the paper: "DeepSIM: Image Shape Manipulation from a Single Augmented Training Sample" (ICCV 2021 Oral)
Stars: ✭ 389 (+472.06%)
Mutual labels:  generative-adversarial-network, image-to-image-translation
gan-weightnorm-resnet
Generative Adversarial Network with Weight Normalization + ResNet
Stars: ✭ 19 (-72.06%)
Mutual labels:  generative-adversarial-network
domain adapt
Domain adaptation networks for digit recognitioning
Stars: ✭ 14 (-79.41%)
Mutual labels:  generative-adversarial-network
TextBoxGAN
Generate text boxes from input words with a GAN.
Stars: ✭ 50 (-26.47%)
Mutual labels:  generative-adversarial-network
GAN-Ensemble-for-Anomaly-Detection
This repository is the PyTorch implementation of GAN Ensemble for Anomaly Detection.
Stars: ✭ 26 (-61.76%)
Mutual labels:  generative-adversarial-network
CsiGAN
An implementation for our paper: CsiGAN: Robust Channel State Information-based Activity Recognition with GANs (IEEE Internet of Things Journal, 2019), which is the semi-supervised Generative Adversarial Network (GAN) for Channel State Information (CSI) -based activity recognition.
Stars: ✭ 23 (-66.18%)
Mutual labels:  generative-adversarial-network
subjectiveqe-esrgan
PyTorch implementation of ESRGAN (ECCVW 2018) for compressed image subjective quality enhancement.
Stars: ✭ 12 (-82.35%)
Mutual labels:  generative-adversarial-network
tt-vae-gan
Timbre transfer with variational autoencoding and cycle-consistent adversarial networks. Able to transfer the timbre of an audio source to that of another.
Stars: ✭ 37 (-45.59%)
Mutual labels:  generative-adversarial-network
projects
things I help(ed) to build
Stars: ✭ 47 (-30.88%)
Mutual labels:  generative-adversarial-network
skip-thought-gan
Generating Text through Adversarial Training(GAN) using Skip-Thought Vectors
Stars: ✭ 44 (-35.29%)
Mutual labels:  generative-adversarial-network
DeepFlow
Pytorch implementation of "DeepFlow: History Matching in the Space of Deep Generative Models"
Stars: ✭ 24 (-64.71%)
Mutual labels:  generative-adversarial-network
gans-collection.torch
Torch implementation of various types of GAN (e.g. DCGAN, ALI, Context-encoder, DiscoGAN, CycleGAN, EBGAN, LSGAN)
Stars: ✭ 53 (-22.06%)
Mutual labels:  generative-adversarial-network
celeba-gan-pytorch
Generative Adversarial Networks in PyTorch
Stars: ✭ 35 (-48.53%)
Mutual labels:  generative-adversarial-network
AvatarGAN
Generate Cartoon Images using Generative Adversarial Network
Stars: ✭ 24 (-64.71%)
Mutual labels:  generative-adversarial-network
ezgan
An extremely simple generative adversarial network, built with TensorFlow
Stars: ✭ 36 (-47.06%)
Mutual labels:  generative-adversarial-network
ADL2019
Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-70.59%)
Mutual labels:  generative-adversarial-network
Deep-Learning
It contains the coursework and the practice I have done while learning Deep Learning.🚀 👨‍💻💥 🚩🌈
Stars: ✭ 21 (-69.12%)
Mutual labels:  generative-adversarial-network
MNIST-invert-color
Invert the color of MNIST images with PyTorch
Stars: ✭ 13 (-80.88%)
Mutual labels:  generative-adversarial-network

Re-cycle GAN

A re-implementation of the Recycle-GAN idea


Abstract

This repository tries to reproduce the idea of Recycle-GAN [1], which was proposed by CMU. However, since CMU has not released the source code or the collected dataset, we only extracted simple white-flower and orange-flower videos to train the model. Note that this is not the official implementation. The idea of Recycle-GAN is very similar to vid2vid; here we provide a simple version whose idea can be traced much more easily. For simplicity, this repository does not provide multi-GPU training or inference.

Branch Explanation

  • master: This branch contains the fully commented code. Refer to this branch if you do not understand some part of the implementation.
  • develop: This branch tracks the latest version of the repository.
  • clear: Since the full comments are lengthy and hard to follow, this branch provides the code with minimal comments in recycle_gan.py. Some redundant checks are also removed to improve readability, making it the shortest version to read.

Usage

The details can be found here. First, download the dataset from the following link:

https://drive.google.com/drive/folders/1mmWND9ZLK9nZwa8lMQWOVjN5sU_rrWD0?usp=sharing

And you can simply use the following command:

# For training
$ python3 train.py --A <domain_A_path> --B <domain_B_path> --T 3 --resume result.pkl --record_iter 500 --n_iter 30000
# For inference
$ python3 demo.py --in <video_path> --direction a2b --resume result.pkl
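The `--T 3` flag above suggests the model consumes windows of three consecutive frames. As a minimal sketch (the actual data loader in this repo may differ; `temporal_windows` is a hypothetical helper, not a function from the codebase), the slicing could look like:

```python
def temporal_windows(frames, T=3):
    """Yield overlapping windows of T consecutive frames from a video.

    `frames` can be any sequence (e.g. a list of frame tensors or indices).
    """
    for i in range(len(frames) - T + 1):
        yield frames[i:i + T]

# Slicing a 5-frame "video" of frame indices into windows of T=3:
list(temporal_windows([0, 1, 2, 3, 4], T=3))
# -> [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
```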

Result


The above image shows both domains. The left column is the original image in each domain, the middle column is the rendered result using the linear-smoothing function from the paper, and the right column is the reconstruction result. In our experiments, we do not use the usual cycle-consistency loss; we use the recycle loss instead.
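To make the difference concrete, here is a hedged sketch of the two losses, with NumPy arrays standing in for images. `G_ab`, `G_ba`, and the temporal predictor `P_b` are stand-in callables for the real networks, and the function names are illustrative, not taken from this repo's code:

```python
import numpy as np

def l1(x, y):
    """Mean absolute (L1) distance between two frames."""
    return float(np.mean(np.abs(x - y)))

def cycle_loss(x_t, G_ab, G_ba):
    # Usual cycle consistency: a frame mapped A -> B -> A
    # should return to itself.
    return l1(G_ba(G_ab(x_t)), x_t)

def recycle_loss(frames_a, G_ab, G_ba, P_b):
    # Recycle loss: map a history of domain-A frames into B,
    # predict the next B frame from that mapped history, map the
    # prediction back to A, and compare with the true next A frame.
    history_b = [G_ab(f) for f in frames_a[:-1]]
    predicted_b = P_b(history_b)
    return l1(G_ba(predicted_b), frames_a[-1])
```

With identity generators and an averaging predictor on a constant video, both losses evaluate to zero; on real data the recycle loss additionally penalizes temporally inconsistent generations, which the plain cycle loss cannot.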


We only show a single flower-to-flower transformation result. In the first domain, the flower consists of a green stem and a white bundle; in the second domain, the flower is orange. The above GIF shows that the generator can render the whole image in a fiery tone, coloring the plant orange.


Another example is shown above. In the opposite direction, the generator recognizes the petals in both domains and renders them white, while the stem remains green. As time passes, the flower opens with a green tone. Most notably, there are no discontinuity artifacts between consecutive frames in the time series.

Reference

[1] A. Bansal, S. Ma, D. Ramanan, and Y. Sheikh, "Recycle-GAN: Unsupervised Video Retargeting," arXiv preprint arXiv:1808.05174, 2018.
