aayushbansal / Recycle Gan

License: MIT
Unsupervised Video Retargeting (e.g. face to face, flower to flower, clouds and winds, sunrise and sunset)

Programming Languages

Python

Projects that are alternatives of or similar to Recycle Gan

All About The Gan
All About the GANs(Generative Adversarial Networks) - Summarized lists for GAN
Stars: ✭ 630 (+71.66%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
3dpose gan
The authors' implementation of Unsupervised Adversarial Learning of 3D Human Pose from 2D Joint Locations
Stars: ✭ 124 (-66.21%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Context Encoder
[CVPR 2016] Unsupervised Feature Learning by Image Inpainting using GANs
Stars: ✭ 731 (+99.18%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Hidt
Official repository for the paper "High-Resolution Daytime Translation Without Domain Labels" (CVPR2020, Oral)
Stars: ✭ 513 (+39.78%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Transmomo.pytorch
This is the official PyTorch implementation of the CVPR 2020 paper "TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting".
Stars: ✭ 225 (-38.69%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Hypergan
Composable GAN framework with api and user interface
Stars: ✭ 1,104 (+200.82%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Marta Gan
MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification
Stars: ✭ 75 (-79.56%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (-48.5%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Gan Sandbox
Vanilla GAN implemented on top of keras/tensorflow enabling rapid experimentation & research. Branches correspond to implementations of stable GAN variations (e.g. ACGAN, InfoGAN) and other promising variations of GANs like conditional and Wasserstein.
Stars: ✭ 210 (-42.78%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Iseebetter
iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (-44.96%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
catgan pytorch
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
Stars: ✭ 50 (-86.38%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Improved-Wasserstein-GAN-application-on-MRI-images
Improved Wasserstein GAN (WGAN-GP) application on medical (MRI) images
Stars: ✭ 23 (-93.73%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
UEGAN
[TIP2020] Pytorch implementation of "Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network"
Stars: ✭ 68 (-81.47%)
Mutual labels:  generative-adversarial-network, unsupervised-learning
Jellyfin Kodi
Jellyfin Plugin for Kodi
Stars: ✭ 332 (-9.54%)
Mutual labels:  videos
Cyclegan
Tensorflow implementation of CycleGAN
Stars: ✭ 348 (-5.18%)
Mutual labels:  generative-adversarial-network
Deepfashion try on
Official code for "Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content",CVPR‘20 https://arxiv.org/abs/2003.05863
Stars: ✭ 332 (-9.54%)
Mutual labels:  generative-adversarial-network
Pytorch Adda
A PyTorch implementation for Adversarial Discriminative Domain Adaptation
Stars: ✭ 329 (-10.35%)
Mutual labels:  generative-adversarial-network
Cc
Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation
Stars: ✭ 348 (-5.18%)
Mutual labels:  unsupervised-learning
Generative Adversarial Networks Roadmap
The Roadmap to Learn Generative Adversarial Networks (GANs)
Stars: ✭ 346 (-5.72%)
Mutual labels:  generative-adversarial-network
Beta Vae
Pytorch implementation of β-VAE
Stars: ✭ 326 (-11.17%)
Mutual labels:  unsupervised-learning

Recycle-GAN: Unsupervised Video Retargeting

This repository provides the code for our work on unsupervised video retargeting.

@inproceedings{Recycle-GAN,
  author    = {Aayush Bansal and
               Shugao Ma and
               Deva Ramanan and
               Yaser Sheikh},
  title     = {Recycle-GAN: Unsupervised Video Retargeting},
  booktitle = {ECCV},
  year      = {2018},
}

Acknowledgements: This code borrows heavily from the PyTorch implementation of CycleGAN and Pix2Pix. A huge thanks to them!

John Oliver to Stephen Colbert (click above to see the video!)

Video by CMU folks (click above to see the video!)

Introduction

We use this formulation in our ECCV'18 paper on unsupervised video retargeting for domains where both spatial and temporal information matter, such as face retargeting. Without any manual annotation, our approach learns retargeting from one domain to another.
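As a schematic illustration (not the repository's code), the recycle loss at the heart of this formulation can be sketched as follows. Here G maps domain X to Y, F maps back, and P_Y is the temporal predictor in Y; the toy identity mappings and the "copy last frame" predictor are stand-ins for the learned networks, and the L1 norm is chosen for simplicity.

```python
import numpy as np

def recycle_loss(x_t, x_t1, x_t2, G, F, P_Y):
    """Schematic recycle loss: ||x_{t+2} - F(P_Y(G(x_t), G(x_{t+1})))||_1."""
    y_t, y_t1 = G(x_t), G(x_t1)   # map two consecutive frames into domain Y
    y_t2_pred = P_Y(y_t, y_t1)    # predict the next frame in domain Y
    x_t2_rec = F(y_t2_pred)       # map the prediction back into domain X
    return np.abs(x_t2 - x_t2_rec).mean()

# Toy stand-ins: identity mappings and a "copy last frame" predictor.
G = F = lambda x: x
P_Y = lambda a, b: b

# Frames t, t+1, t+2 filled with the constants 0, 1, 2.
frames = [np.full((4, 4), float(t)) for t in range(3)]
loss = recycle_loss(*frames, G, F, P_Y)  # |2 - 1| averaged = 1.0
```

A perfect predictor would drive this loss to zero; training jointly optimizes the mappings and the predictor so that a full map-predict-map-back round trip reconstructs the true future frame.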

Using the Code

The repository contains the code for training a network to retarget from one domain to another, and for using a trained model for this task. Consider the following when using this code:

Requirements

  • Linux or macOS
  • Python 3
  • PyTorch 0.4
  • NVIDIA GPU + CUDA cuDNN

Python Dependencies

  • numpy 1.15.0
  • torch 0.4.1.post2
  • torchvision 0.2.2
  • visdom
  • dominate

Run the following command to install the dependencies automatically: pip install -r requirements.txt
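The pinned versions from the dependency list above can be captured in a requirements.txt like the following (entries listed without a version above are left unpinned):

```
numpy==1.15.0
torch==0.4.1.post2
torchvision==0.2.2
visdom
dominate
```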

Data pre-processing

For each task, create a new folder in the "dataset/" directory. Images from the two domains are placed in "trainA/" and "trainB/" respectively. Each training image consists of three horizontally concatenated frames, "{t, t+1, t+2}", from the video. Test images are placed in "testA/" and "testB/". Since we do not use temporal information at test time, the test data consists of single frames "{t}".
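The triplet construction can be sketched with a few lines of NumPy. This is an illustrative helper, not part of the repository; in practice you would load the extracted video frames, build the triplets, and write each result into "trainA/" or "trainB/".

```python
import numpy as np

def make_triplets(frames):
    """Given a list of H x W x C frames, return the horizontally
    concatenated {t, t+1, t+2} images used as training samples."""
    return [np.concatenate(frames[t:t + 3], axis=1)
            for t in range(len(frames) - 2)]

# Example: five 64x64 RGB frames yield three 64x192 triplet images.
frames = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(5)]
triplets = make_triplets(frames)
```

Each triplet is three frames wide, so a video of N frames yields N - 2 training images; test images are written as single frames without this step.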

Training

There are two training modules in the "scripts/" directory: (1) Recycle-GAN and (2) ReCycle-GAN.

Recycle-GAN

Recycle-GAN is the model described in the paper and is used for most of the examples: face to face, flower to flower, clouds and wind synthesis, and sunrise and sunset.

ReCycle-GAN

ReCycle-GAN is largely similar to Recycle-GAN; in addition, it uses the vanilla cycle losses from CycleGAN between corresponding source and target frames. We found this module useful for tasks such as unpaired image-to-labels and labels-to-image on the VIPER dataset, and image-to-normals and normals-to-image on the NYU-v2 depth dataset.

Prediction Model

There are two prediction models used in this work: (1) a simple U-Net and (2) a higher-capacity prediction module.

unet-128, unet-256

To use this prediction module, set the flag "--which_model_netP" to "unet_128" or "unet_256" respectively.

prediction

The higher-capacity version of the prediction module is selected by setting the flag "--which_model_netP" to "prediction".
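A full training invocation might look like the sketch below. Only "--which_model_netP" is documented here; the entry point and the remaining flag names ("--dataroot", "--name") are assumed from the CycleGAN/pix2pix codebase this repository builds on, and the dataset path and experiment name are placeholders.

```shell
# Illustrative only: flags other than --which_model_netP are assumed
# from the CycleGAN/pix2pix codebase, not verified against scripts/.
train_cmd="python train.py --dataroot ./dataset/faces --name faces_recycle --which_model_netP prediction"
echo "$train_cmd"
```

Swap "prediction" for "unet_128" or "unet_256" to use the simpler U-Net predictor instead.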

Observation about training:

We observed that the model converges in 20-40 epochs when a sufficiently large dataset is used. For smaller datasets (roughly 1,000 images or fewer), it is best to train for longer.

Test

At test time, we run inference per image (as mentioned above). The test code is based on CycleGAN.

Data & Trained Models:

Please use the following links to download the Face, Flowers, and VIPER data:

  1. Faces (10 GB)
  2. Flowers (1.6 GB)
  3. VIPER (3.17 GB)

Please contact Aayush Bansal for any specific data or trained models, or for any other information.
