
ioangatop / srVAE

License: MIT
VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to srVAE

Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+646.43%)
Mutual labels:  generative-model, vae, representation-learning, unsupervised-learning, variational-autoencoder
Disentangling Vae
Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (+610.71%)
Mutual labels:  vae, representation-learning, unsupervised-learning, variational-autoencoder
Variational Autoencoder
Variational autoencoder implemented in TensorFlow and PyTorch (including inverse autoregressive flow)
Stars: ✭ 807 (+1341.07%)
Mutual labels:  vae, unsupervised-learning, variational-autoencoder
Tensorflow Generative Model Collections
Collection of generative models in TensorFlow
Stars: ✭ 3,785 (+6658.93%)
Mutual labels:  generative-model, vae, variational-autoencoder
ladder-vae-pytorch
Ladder Variational Autoencoders (LVAE) in PyTorch
Stars: ✭ 59 (+5.36%)
Mutual labels:  vae, representation-learning, unsupervised-learning
Vae protein function
Protein function prediction using a variational autoencoder
Stars: ✭ 57 (+1.79%)
Mutual labels:  generative-model, vae, variational-autoencoder
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+2062.5%)
Mutual labels:  vae, variational-autoencoder, vae-pytorch
Tf Vqvae
TensorFlow implementation of the paper "Neural Discrete Representation Learning" (VQ-VAE): https://arxiv.org/abs/1711.00937
Stars: ✭ 226 (+303.57%)
Mutual labels:  generative-model, vae, cifar10
Variational Ladder Autoencoder
Implementation of VLAE
Stars: ✭ 196 (+250%)
Mutual labels:  generative-model, representation-learning, unsupervised-learning
Vae For Image Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its latent space visualization on MNIST and CIFAR10 datasets
Stars: ✭ 87 (+55.36%)
Mutual labels:  generative-model, vae, variational-autoencoder
soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"
Stars: ✭ 170 (+203.57%)
Mutual labels:  vae, variational-autoencoder, vae-pytorch
pyroVED
Invariant representation learning from imaging and spectral data
Stars: ✭ 23 (-58.93%)
Mutual labels:  vae, variational-autoencoder, unsupervised-machine-learning
State-Representation-Learning-An-Overview
Simplified version of "State Representation Learning for Control: An Overview" bibliography
Stars: ✭ 32 (-42.86%)
Mutual labels:  representation-learning, unsupervised-learning
rl singing voice
Unsupervised Representation Learning for Singing Voice Separation
Stars: ✭ 18 (-67.86%)
Mutual labels:  representation-learning, unsupervised-learning
proto
Proto-RL: Reinforcement Learning with Prototypical Representations
Stars: ✭ 67 (+19.64%)
Mutual labels:  representation-learning, unsupervised-learning
char-VAE
Inspired by the neural style algorithm in the computer vision field, we propose a high-level language model with the aim of adapting the linguistic style.
Stars: ✭ 18 (-67.86%)
Mutual labels:  generative-model, vae
continuous Bernoulli
C programs for simulating, transforming, and computing test statistics of the continuous Bernoulli distribution; also covers the continuous Binomial and continuous Trinomial distributions.
Stars: ✭ 22 (-60.71%)
Mutual labels:  vae, variational-autoencoder
SimCLR
Pytorch implementation of "A Simple Framework for Contrastive Learning of Visual Representations"
Stars: ✭ 65 (+16.07%)
Mutual labels:  representation-learning, unsupervised-learning
disent
🧶 Modular VAE disentanglement framework for python built with PyTorch Lightning ▸ Including metrics and datasets ▸ With strongly supervised, weakly supervised and unsupervised methods ▸ Easily configured and run with Hydra config ▸ Inspired by disentanglement_lib
Stars: ✭ 41 (-26.79%)
Mutual labels:  vae, representation-learning
generative deep learning
Generative Deep Learning Sessions led by Anugraha Sinha (Machine Learning Tokyo)
Stars: ✭ 24 (-57.14%)
Mutual labels:  generative-model, vae

VAE and Super-Resolution VAE in PyTorch

Python 3.6 · PyTorch 1.3 · MIT License

Code release for Super-Resolution Variational Auto-Encoders

Abstract

The framework of Variational Auto-Encoders (VAEs) provides a principled manner of reasoning in latent-variable models using variational inference. However, the main drawback of this approach is the blurriness of the generated images. Some studies link this effect to the objective function, namely, the (negative) log-likelihood. Here, we propose to enhance VAEs by adding a random variable that is a downscaled version of the original image, while still using the log-likelihood as the learning objective. Further, we provide the downscaled image as an input to the decoder and use it in a manner similar to super-resolution. We show empirically that the proposed approach performs comparably to VAEs in terms of the negative log-likelihood while obtaining a better FID score.
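
To make the two-stage idea concrete, here is a minimal PyTorch sketch of the forward pass. It is an illustration, not the repository's actual architecture: the four modules, their signatures, and the bilinear 2x downscaling are all assumptions. The downscaled image y is treated as an observed variable, modeled by a first VAE, and then fed to a second, super-resolution-style decoder that reconstructs the full image x.

```python
import torch
import torch.nn.functional as F

def srvae_forward(x, enc_u, dec_y, enc_z, dec_x):
    """Minimal sketch of a two-stage srVAE forward pass.

    All four modules are hypothetical placeholders, not the repo's classes:
      enc_u / dec_y -- VAE over the downscaled image y,
      enc_z / dec_x -- VAE over x, conditioned on y (super-resolution step).
    x: batch of images, e.g. (B, 3, 32, 32) for CIFAR-10.
    """
    # y is an *observed* auxiliary variable: a downscaled copy of x
    # (a bilinear 2x downscaling is an assumption for illustration).
    y = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)

    # First stage: latent-variable model of the small image, p(y | u).
    mu_u, logvar_u = enc_u(y)
    u = mu_u + torch.randn_like(mu_u) * (0.5 * logvar_u).exp()
    y_recon = dec_y(u)

    # Second stage: p(x | y, z) -- the decoder sees the downscaled image
    # and "super-resolves" it, with z supplying the missing local detail.
    mu_z, logvar_z = enc_z(x)
    z = mu_z + torch.randn_like(mu_z) * (0.5 * logvar_z).exp()
    x_recon = dec_x(z, y)

    # Training maximizes an ELBO on log p(x, y): two reconstruction
    # terms plus the KL terms for u and z.
    return x_recon, y_recon, (mu_u, logvar_u), (mu_z, logvar_z)
```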

Features

  • Models

    • VAE
    • Super-resolution VAE (srVAE)
  • Priors

    • Standard (unimodal) Gaussian
    • Mixture of Gaussians
    • RealNVP (see the coupling-layer sketch after this list)
  • Reconstruction Loss

    • Discretized Mixture of Logistics Loss
  • Neural Networks

    • DenseNet
  • Datasets

    • CIFAR-10
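
For reference, a RealNVP prior is built from affine coupling layers, which transform half of the latent dimensions conditioned on the other half so that the Jacobian determinant stays tractable. Below is a minimal, self-contained sketch of one such layer; it illustrates the standard technique, not the repository's implementation, and the MLP conditioner and hidden width are assumptions.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP affine coupling layer (illustrative sketch; the hidden
    width and MLP conditioner are assumptions, not the repo's network)."""

    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        # Predict per-dimension scale and translation from the first half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)                  # keep scales numerically tame
        out2 = z2 * s.exp() + t            # affine transform of second half
        log_det = s.sum(dim=1)             # log|det J|, needed for log-density
        return torch.cat([z1, out2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)
        z2 = (y2 - t) * (-s).exp()         # exact inverse of the forward map
        return torch.cat([y1, z2], dim=1)
```

Stacking several such layers with the halves swapped between layers, on top of a base Gaussian, gives a prior whose density can be evaluated exactly via the change-of-variables formula.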

Quantitative results

Model   nll (bits/dim)
VAE     3.51
srVAE   3.65

Results on CIFAR-10. The negative log-likelihood (nll) was estimated using 500 importance-weighted samples on the test set (10k images).
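
For context, an estimate with K weighted samples typically refers to the importance-weighted bound log p(x) ≈ logsumexp_k(log w_k) - log K, where w_k = p(x, z_k) / q(z_k | x) and z_k ~ q(z | x). A hedged sketch of this computation (the encode, log_joint, and log_q callables are hypothetical placeholders, not this repository's API):

```python
import math
import torch

def iw_nll(x, encode, log_joint, log_q, k=500):
    """Importance-weighted NLL estimate (sketch only; `encode`, `log_joint`,
    and `log_q` are hypothetical callables, not this repository's API)."""
    mu, logvar = encode(x)
    log_w = []
    for _ in range(k):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # log w_k = log p(x, z_k) - log q(z_k | x)
        log_w.append(log_joint(x, z) - log_q(z, mu, logvar))
    log_w = torch.stack(log_w, dim=0)                      # (k, batch)
    log_px = torch.logsumexp(log_w, dim=0) - math.log(k)   # IWAE-style bound
    return -log_px
```

Dividing the result (in nats) by the number of dimensions times ln 2 converts it to the bits-per-dimension figures reported above.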

Qualitative results

VAE

Results from the VAE with RealNVP prior, trained on CIFAR-10.

Interpolations

Reconstructions.

Unconditional generations.

Super-Resolution VAE

Results from the Super-Resolution VAE, trained on CIFAR-10.

Interpolations

Super-Resolution results of the srVAE on CIFAR-10

Unconditional generations. Left: the generations of the first step, compressed representations that capture the global structure. Right: the final result after enhancing the images with local content.

Requirements

The code is compatible with:

  • python 3.6
  • pytorch 1.3

Usage

  • To run the VAE with RealNVP prior on CIFAR-10, please execute:
python main.py --model VAE --network densenet32 --prior RealNVP
  • Otherwise, to run srVAE:
python main.py --model srVAE --network densenet16x32 --prior RealNVP

Cite

Please cite our paper if you use this code in your own work:

@misc{gatopoulos2020superresolution,
    title={Super-resolution Variational Auto-Encoders},
    author={Ioannis Gatopoulos and Maarten Stol and Jakub M. Tomczak},
    year={2020},
    eprint={2006.05218},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Acknowledgements

This work was supported and funded by the University of Amsterdam and BrainCreators B.V.

Repo Author

Ioannis Gatopoulos, 2020
