
vithursant / VAE-Gumbel-Softmax

License: Apache-2.0
A TensorFlow implementation (tested on r1.5, CPU and GPU) of a Variational Autoencoder with the Gumbel-Softmax reparameterization trick (ICLR 2017).


Projects that are alternatives of or similar to VAE-Gumbel-Softmax

Tensorflow Generative Model Collections
Collection of generative models in Tensorflow
Stars: ✭ 3,785 (+5634.85%)
Mutual labels:  mnist, vae, variational-autoencoder
Vae Cvae Mnist
Variational Autoencoder and Conditional Variational Autoencoder on MNIST in PyTorch
Stars: ✭ 229 (+246.97%)
Mutual labels:  mnist, vae, variational-autoencoder
Tensorflow Mnist Cvae
Tensorflow implementation of conditional variational auto-encoder for MNIST
Stars: ✭ 139 (+110.61%)
Mutual labels:  mnist, vae, variational-autoencoder
vae-concrete
Keras implementation of a Variational Auto Encoder with a Concrete Latent Distribution
Stars: ✭ 51 (-22.73%)
Mutual labels:  vae, variational-autoencoder, gumbel-softmax
Disentangling Vae
Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (+503.03%)
Mutual labels:  mnist, vae, variational-autoencoder
Tensorflow Mnist Vae
Tensorflow implementation of variational auto-encoder for MNIST
Stars: ✭ 422 (+539.39%)
Mutual labels:  mnist, vae, variational-autoencoder
continuous Bernoulli
C programs for simulating, transforming, and computing the test statistic of the continuous Bernoulli distribution; the accompanying book also covers the continuous Binomial and continuous Trinomial distributions.
Stars: ✭ 22 (-66.67%)
Mutual labels:  vae, deeplearning, variational-autoencoder
MIDI-VAE
No description or website provided.
Stars: ✭ 56 (-15.15%)
Mutual labels:  vae, variational-autoencoder
Fun-with-MNIST
Playing with MNIST. Machine Learning. Generative Models.
Stars: ✭ 23 (-65.15%)
Mutual labels:  mnist, vae
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+1734.85%)
Mutual labels:  vae, variational-autoencoder
precision-recall-distributions
Assessing Generative Models via Precision and Recall (official repository)
Stars: ✭ 80 (+21.21%)
Mutual labels:  vae, variational-autoencoder
Gordon cnn
A small convolutional neural network deep learning framework implemented in C++.
Stars: ✭ 241 (+265.15%)
Mutual labels:  mnist, deeplearning
Vq Vae
Minimalist implementation of VQ-VAE in Pytorch
Stars: ✭ 224 (+239.39%)
Mutual labels:  mnist, vae
VAE-Latent-Space-Explorer
Interactive exploration of MNIST variational autoencoder latent space with React and tensorflow.js.
Stars: ✭ 30 (-54.55%)
Mutual labels:  mnist, variational-autoencoder
Variational-Autoencoder-pytorch
Implementation of a convolutional Variational-Autoencoder model in pytorch.
Stars: ✭ 65 (-1.52%)
Mutual labels:  vae, variational-autoencoder
soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"
Stars: ✭ 170 (+157.58%)
Mutual labels:  vae, variational-autoencoder
Tf Vqvae
Tensorflow Implementation of the paper [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937) (VQ-VAE).
Stars: ✭ 226 (+242.42%)
Mutual labels:  mnist, vae
pyroVED
Invariant representation learning from imaging and spectral data
Stars: ✭ 23 (-65.15%)
Mutual labels:  vae, variational-autoencoder
Bagel
IPCCC 2018: Robust and Unsupervised KPI Anomaly Detection Based on Conditional Variational Autoencoder
Stars: ✭ 45 (-31.82%)
Mutual labels:  vae, variational-autoencoder
Keras-Generating-Sentences-from-a-Continuous-Space
Text Variational Autoencoder inspired by the paper 'Generating Sentences from a Continuous Space' Bowman et al. https://arxiv.org/abs/1511.06349
Stars: ✭ 32 (-51.52%)
Mutual labels:  vae, variational-autoencoder

VAE with Gumbel-Softmax

TensorFlow implementation of a Variational Autoencoder with the Gumbel-Softmax distribution. Refer to the following papers:

  • Categorical Reparameterization with Gumbel-Softmax. Jang, Gu and Poole. ICLR 2017. https://arxiv.org/abs/1611.01144
  • The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. Maddison, Mnih and Teh. ICLR 2017. https://arxiv.org/abs/1611.00712

Also included is a Jupyter notebook which shows how the Gumbel-Max trick for sampling discrete variables relates to Concrete distributions.
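
At its core, the Gumbel-Softmax relaxation is only a few lines of code. The sketch below is a minimal illustration written against the TensorFlow 1.x API this repository targets; the function names are illustrative and not taken from the source:

import tensorflow as tf

def sample_gumbel(shape, eps=1e-20):
    # Gumbel(0, 1) noise via inverse transform sampling of Uniform(0, 1).
    u = tf.random_uniform(shape, minval=0, maxval=1)
    return -tf.log(-tf.log(u + eps) + eps)

def gumbel_softmax_sample(logits, temperature):
    # Perturb the logits with Gumbel noise, then relax the hard argmax of
    # the Gumbel-Max trick into a temperature-scaled softmax.
    y = logits + sample_gumbel(tf.shape(logits))
    return tf.nn.softmax(y / temperature)

As the temperature approaches zero the samples approach one-hot categorical draws; at higher temperatures they stay smooth, which is what keeps the sampling step differentiable.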

Table of Contents

  • Installation
  • Anaconda
  • Docker
  • Results
  • Citing VAE-Gumbel-Softmax

Installation

The program requires the following dependencies (easy to install using pip, Anaconda, or Docker):

  • python 2.7/3.5
  • tensorflow (tested with r1.1 and r1.5)
  • numpy
  • holoviews
  • jupyter
  • pandas
  • matplotlib
  • seaborn
  • tqdm

Anaconda

Anaconda: CPU Installation

To install VAE-Gumbel-Softmax in a TensorFlow 1.5 CPU / Python 2.7 environment:

conda env create -f tf_py26_cpu_env.yml

To activate the Anaconda environment:

source activate tf-py26-cpu-env

Anaconda: GPU Installation

To install VAE-Gumbel-Softmax in a TensorFlow 1.5 GPU / Python 3.5 environment:

conda env create -f tf_py35_gpu_env.yml

To activate the Anaconda environment:

source activate tf-py35-gpu-env

Anaconda: Train

Train the VAE-Gumbel-Softmax model on the local machine using the MNIST dataset:

python vae_gumbel_softmax.py

Docker

Train the VAE-Gumbel-Softmax model on the MNIST dataset using Docker:

docker build -t vae-gs .
docker run vae-gs

Note: the current Dockerfile is set up for TensorFlow 1.5 CPU training.

Results

Hyperparameters

Batch Size:                         100
Number of Iterations:               50000
Learning Rate:                      0.001
Initial Temperature:                1.0
Minimum Temperature:                0.5
Anneal Rate:                        0.00003
Straight-Through Gumbel-Softmax:    False
KL-divergence:                      Relaxed
Learnable Temperature:              False
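
The temperature is annealed exponentially from the initial value toward the minimum over training, following the schedule suggested in Jang et al. (2017). The sketch below is consistent with the values above; the names and the uniform-prior KL helper are illustrative, not taken from the source:

import numpy as np

INIT_TEMP = 1.0
MIN_TEMP = 0.5
ANNEAL_RATE = 0.00003

def annealed_temperature(step):
    # Exponentially decay the softmax temperature, clipping from below so
    # the relaxation never becomes too sharp to backpropagate through.
    return max(MIN_TEMP, INIT_TEMP * np.exp(-ANNEAL_RATE * step))

def relaxed_kl(q_y, n_classes):
    # "Relaxed" KL: evaluated on the softmax probabilities q(y|x) against
    # a uniform categorical prior, rather than on discretized samples.
    return np.sum(q_y * (np.log(q_y + 1e-20) - np.log(1.0 / n_classes)), axis=-1)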

MNIST

Ground-truth MNIST digits alongside their reconstructions (sample images omitted).

Citing VAE-Gumbel-Softmax

If you use VAE-Gumbel-Softmax in a scientific publication, I would appreciate references to the source code.

BibLaTeX entry:

@misc{VAEGumbelSoftmax,
  author = {Thangarasa, Vithursan},
  title = {VAE-Gumbel-Softmax},
  year = {2017},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/vithursant/VAE-Gumbel-Softmax}}
}