
taldatech / soft-intro-vae-pytorch

License: Apache-2.0
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"

Programming Languages: Jupyter Notebook, Python

Projects that are alternatives of or similar to soft-intro-vae-pytorch

Vae For Image Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its latent space visualization on MNIST and CIFAR10 datasets
Stars: ✭ 87 (-48.82%)
Mutual labels:  vae, image-generation, variational-autoencoder
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+612.35%)
Mutual labels:  vae, variational-autoencoder, vae-pytorch
srVAE
VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-67.06%)
Mutual labels:  vae, variational-autoencoder, vae-pytorch
Cada Vae Pytorch
Official implementation of the paper "Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders" (CVPR 2019)
Stars: ✭ 198 (+16.47%)
Mutual labels:  vae, variational-autoencoder
Vae protein function
Protein function prediction using a variational autoencoder
Stars: ✭ 57 (-66.47%)
Mutual labels:  vae, variational-autoencoder
Python World
Stars: ✭ 98 (-42.35%)
Mutual labels:  vae, variational-autoencoder
Tensorflow Mnist Vae
Tensorflow implementation of variational auto-encoder for MNIST
Stars: ✭ 422 (+148.24%)
Mutual labels:  vae, variational-autoencoder
S Vae Tf
Tensorflow implementation of Hyperspherical Variational Auto-Encoders
Stars: ✭ 198 (+16.47%)
Mutual labels:  vae, variational-autoencoder
Smrt
Handle class imbalance intelligently by using variational auto-encoders to generate synthetic observations of your minority class.
Stars: ✭ 102 (-40%)
Mutual labels:  vae, variational-autoencoder
Vae Tensorflow
A Tensorflow implementation of a Variational Autoencoder for the deep learning course at the University of Southern California (USC).
Stars: ✭ 117 (-31.18%)
Mutual labels:  vae, variational-autoencoder
Video prediction
Stochastic Adversarial Video Prediction
Stars: ✭ 247 (+45.29%)
Mutual labels:  vae, variational-autoencoder
Variational Autoencoder
PyTorch implementation of "Auto-Encoding Variational Bayes"
Stars: ✭ 25 (-85.29%)
Mutual labels:  vae, variational-autoencoder
Tensorflow Mnist Cvae
Tensorflow implementation of conditional variational auto-encoder for MNIST
Stars: ✭ 139 (-18.24%)
Mutual labels:  vae, variational-autoencoder
Vae Lagging Encoder
PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders" (ICLR 2019)
Stars: ✭ 153 (-10%)
Mutual labels:  vae, image-generation
MIDI-VAE
No description or website provided.
Stars: ✭ 56 (-67.06%)
Mutual labels:  vae, variational-autoencoder
Variational Autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)
Stars: ✭ 807 (+374.71%)
Mutual labels:  vae, variational-autoencoder
Mojitalk
Code for "MojiTalk: Generating Emotional Responses at Scale" https://arxiv.org/abs/1711.04090
Stars: ✭ 107 (-37.06%)
Mutual labels:  vae, variational-autoencoder
Vae Cvae Mnist
Variational Autoencoder and Conditional Variational Autoencoder on MNIST in PyTorch
Stars: ✭ 229 (+34.71%)
Mutual labels:  vae, variational-autoencoder
Disentangling Vae
Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (+134.12%)
Mutual labels:  vae, variational-autoencoder
Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+145.88%)
Mutual labels:  vae, variational-autoencoder

soft-intro-vae-pytorch


[CVPR 2021 Oral] Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders

Tal Daniel, Aviv Tamar

Official repository of the paper


Project Website | Video

Open In Colab

Soft-IntroVAE

Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders
Tal Daniel, Aviv Tamar

Abstract: The recently introduced introspective variational autoencoder (IntroVAE) exhibits outstanding image generations, and allows for amortized inference using an image encoder. The main idea in IntroVAE is to train a VAE adversarially, using the VAE encoder to discriminate between generated and real data samples. However, the original IntroVAE loss function relied on a particular hinge-loss formulation that is very hard to stabilize in practice, and its theoretical convergence analysis ignored important terms in the loss. In this work, we take a step towards better understanding of the IntroVAE model, its practical implementation, and its applications. We propose the Soft-IntroVAE, a modified IntroVAE that replaces the hinge-loss terms with a smooth exponential loss on generated samples. This change significantly improves training stability, and also enables theoretical analysis of the complete algorithm. Interestingly, we show that the IntroVAE converges to a distribution that minimizes a sum of KL distance from the data distribution and an entropy term. We discuss the implications of this result, and demonstrate that it induces competitive image generation and reconstruction. Finally, we describe two applications of Soft-IntroVAE to unsupervised image translation and out-of-distribution detection, and demonstrate compelling results.
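The key change described above — replacing the hinge on the generated samples' ELBO with a smooth exponential — can be illustrated as follows. This is a schematic NumPy sketch of the shape of the two penalty terms only, not the paper's full training objective; `margin` and `alpha` are illustrative hyperparameters:

```python
import numpy as np

def hinge_term(elbo_fake, margin=1.0):
    """IntroVAE-style hinge penalty (schematic): the gradient vanishes
    once the fakes' ELBO drops below -margin, which makes the margin
    hard to tune in practice."""
    return np.maximum(0.0, margin + elbo_fake)

def soft_exp_term(elbo_fake, alpha=2.0):
    """Soft-IntroVAE-style smooth exponential penalty (schematic):
    differentiable everywhere, with no margin hyperparameter."""
    return np.exp(alpha * elbo_fake) / alpha

elbo = np.linspace(-3.0, 1.0, 5)
print(hinge_term(elbo))     # kinked at elbo = -margin, flat below it
print(soft_exp_term(elbo))  # smooth and strictly increasing
```

The smooth term still pushes the encoder to assign low ELBO to generated samples, but does so with a well-behaved gradient everywhere, which is the source of the improved training stability.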

Citation

Daniel, Tal, and Aviv Tamar. "Soft-IntroVAE: Analyzing and Improving the Introspective Variational Autoencoder." arXiv preprint arXiv:2012.13253 (2020).

@InProceedings{Daniel_2021_CVPR,
author    = {Daniel, Tal and Tamar, Aviv},
title     = {Soft-IntroVAE: Analyzing and Improving the Introspective Variational Autoencoder},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month     = {June},
year      = {2021},
pages     = {4391-4400}
}

Preprint on arXiv: 2012.13253

Prerequisites

  • For your convenience, we provide an environment.yml file which installs the required packages in a conda environment named torch.
    • From a terminal or an Anaconda Prompt, run: conda env create -f environment.yml.
  • For Style-SoftIntroVAE, additional packages are required; they are listed in the style_soft_intro_vae directory.
| Library | Version |
|---|---|
| Python | 3.6 (Anaconda) |
| torch | >= 1.2 (tested on 1.7) |
| torchvision | >= 0.4 |
| matplotlib | >= 2.2.2 |
| numpy | >= 1.17 |
| opencv | >= 3.4.2 |
| tqdm | >= 4.36.1 |
| scipy | >= 1.3.1 |
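The versions above correspond to a conda environment file of roughly this shape (an illustrative sketch only — the repository's actual environment.yml may differ in channels and pinning):

```yaml
name: torch
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.6
  - pytorch>=1.2
  - torchvision>=0.4
  - matplotlib>=2.2.2
  - numpy>=1.17
  - opencv>=3.4.2
  - tqdm>=4.36.1
  - scipy>=1.3.1
```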

Repository Organization

| File name | Content |
|---|---|
| /soft_intro_vae | implementation for image data |
| /soft_intro_vae_2d | implementations for 2D datasets |
| /soft_intro_vae_3d | implementations for 3D point-cloud data |
| /soft_intro_vae_bootstrap | implementation for image data using bootstrapping (a target decoder) |
| /style_soft_intro_vae | implementation for image data using ALAE's style-based architecture |
| /soft_intro_vae_tutorials | Jupyter Notebook tutorials for the various types of Soft-IntroVAE |

Related Projects

  • March 2022: augmentation-enhanced-soft-intro-vae - GitHub - uses differentiable augmentations to improve the image-generation FID score.

Credits

  • Adversarial Latent Autoencoders, Pidhorskyi et al., CVPR 2020 - Code, Paper.
  • FID is calculated natively in PyTorch using Seitzer's implementation - Code
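For reference, the FID reported here is the Fréchet distance between two Gaussians fitted to InceptionV3 activations of real and generated images; the pytorch-fid package handles the activation extraction, and the distance itself reduces to a short closed form. A minimal NumPy/SciPy sketch of that formula:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))."""
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary components from numerical noise
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# identical Gaussians give distance 0
mu, sigma = np.zeros(3), np.eye(3)
print(frechet_distance(mu, sigma, mu, sigma))
```

In practice, the statistics `mu` and `sigma` are the mean and covariance of Inception activations over each image set; this snippet only shows the final distance computation.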