
Schlumberger / Joint Vae

License: MIT
PyTorch implementation of JointVAE, a framework for jointly disentangling continuous and discrete factors of variation 🌟

Projects that are alternatives to or similar to Joint Vae

Variational Autoencoder
PyTorch implementation of "Auto-Encoding Variational Bayes"
Stars: ✭ 25 (-93.81%)
Mutual labels:  jupyter-notebook, vae
Vae Tensorflow
A Tensorflow implementation of a Variational Autoencoder for the deep learning course at the University of Southern California (USC).
Stars: ✭ 117 (-71.04%)
Mutual labels:  jupyter-notebook, vae
Pytorch Mnist Vae
Stars: ✭ 32 (-92.08%)
Mutual labels:  jupyter-notebook, vae
Vae protein function
Protein function prediction using a variational autoencoder
Stars: ✭ 57 (-85.89%)
Mutual labels:  jupyter-notebook, vae
Tf Vqvae
Tensorflow Implementation of the paper [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937) (VQ-VAE).
Stars: ✭ 226 (-44.06%)
Mutual labels:  jupyter-notebook, vae
Deeplearningmugenknock
An implementation cheat sheet for practicing deep learning endlessly
Stars: ✭ 684 (+69.31%)
Mutual labels:  jupyter-notebook, vae
Cross Lingual Voice Cloning
Tacotron 2 - PyTorch implementation with faster-than-realtime inference modified to enable cross lingual voice cloning.
Stars: ✭ 106 (-73.76%)
Mutual labels:  jupyter-notebook, vae
Generative Models
Annotated, understandable, and visually interpretable PyTorch implementations of: VAE, BIRVAE, NSGAN, MMGAN, WGAN, WGANGP, LSGAN, DRAGAN, BEGAN, RaGAN, InfoGAN, fGAN, FisherGAN
Stars: ✭ 438 (+8.42%)
Mutual labels:  jupyter-notebook, vae
Pytorch Vq Vae
PyTorch implementation of VQ-VAE by Aäron van den Oord et al.
Stars: ✭ 204 (-49.5%)
Mutual labels:  jupyter-notebook, vae
Pytorch Vae
A CNN Variational Autoencoder (CNN-VAE) implemented in PyTorch
Stars: ✭ 181 (-55.2%)
Mutual labels:  jupyter-notebook, vae
Deep Learning With Python
Example projects I completed to understand Deep Learning techniques with Tensorflow. Please note that I no longer maintain this repository.
Stars: ✭ 134 (-66.83%)
Mutual labels:  jupyter-notebook, vae
Neural Ode
Jupyter notebook with Pytorch implementation of Neural Ordinary Differential Equations
Stars: ✭ 335 (-17.08%)
Mutual labels:  jupyter-notebook, vae
Human body prior
VPoser: Variational Human Pose Prior
Stars: ✭ 244 (-39.6%)
Mutual labels:  jupyter-notebook, vae
Dsprites Dataset
Dataset to assess the disentanglement properties of unsupervised learning methods
Stars: ✭ 340 (-15.84%)
Mutual labels:  jupyter-notebook, vae
Bloom Contrib
Making carbon footprint data available to everyone.
Stars: ✭ 398 (-1.49%)
Mutual labels:  jupyter-notebook
Triplet recommendations keras
An example of doing MovieLens recommendations using triplet loss in Keras
Stars: ✭ 400 (-0.99%)
Mutual labels:  jupyter-notebook
Icnet Tensorflow
TensorFlow-based implementation of "ICNet for Real-Time Semantic Segmentation on High-Resolution Images".
Stars: ✭ 396 (-1.98%)
Mutual labels:  jupyter-notebook
Pytorch Vqvae
Vector Quantized VAEs - PyTorch Implementation
Stars: ✭ 396 (-1.98%)
Mutual labels:  vae
Namedtensor
Named Tensor implementation for Torch
Stars: ✭ 403 (-0.25%)
Mutual labels:  jupyter-notebook
Trainyourownyolo
Train a state-of-the-art YOLOv3 object detector from scratch!
Stars: ✭ 399 (-1.24%)
Mutual labels:  jupyter-notebook

Learning Disentangled Joint Continuous and Discrete Representations

PyTorch implementation of Learning Disentangled Joint Continuous and Discrete Representations (NIPS 2018).

This repo contains an implementation of JointVAE, an unsupervised framework for jointly disentangling continuous and discrete factors of variation in data.
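
For context, the objective JointVAE optimizes (as described in the paper) is a capacity-controlled ELBO with separate, gradually increased capacities for the continuous latents z and the discrete latents c:

\mathcal{L}(\theta,\phi) = \mathbb{E}_{q_\phi(z,c|x)}\big[\log p_\theta(x|z,c)\big]
  - \gamma \left| \mathrm{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) - C_z \right|
  - \gamma \left| \mathrm{KL}\big(q_\phi(c|x)\,\|\,p(c)\big) - C_c \right|

The cont_capacity and disc_capacity arguments in the usage example below control how C_z and C_c are annealed during training.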

Examples

MNIST

CelebA

FashionMNIST

dSprites

Discrete and continuous factors on MNIST

dSprites comparisons

Usage

The train_model.ipynb notebook contains code for training a JointVAE model.

The load_model.ipynb notebook contains code for loading a trained model.

Example usage

from jointvae.models import VAE
from jointvae.training import Trainer
from torch.optim import Adam
from viz.visualize import Visualizer as Viz

# Build a dataloader for your data
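# (get_my_dataloader is a placeholder -- any PyTorch DataLoader
# yielding batches of images will work here)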
dataloader = get_my_dataloader(batch_size=32)

# Define latent distribution
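# A joint distribution of 20 continuous (Gaussian) latent variables and
# 4 discrete (Gumbel-Softmax) variables with 10, 5, 5 and 2 categories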
latent_spec = {'cont': 20, 'disc': [10, 5, 5, 2]}

# Build a Joint-VAE model
model = VAE(img_size=(3, 64, 64), latent_spec=latent_spec)

# Build a trainer and train model
optimizer = Adam(model.parameters())
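# Assumed capacity format: [min, max, num_iters, gamma] -- the KL
# capacity is annealed from 0 to 5 nats over 25000 iterations, with
# the capacity term weighted by gamma = 30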
trainer = Trainer(model, optimizer,
                  cont_capacity=[0., 5., 25000, 30.],
                  disc_capacity=[0., 5., 25000, 30.])
trainer.train(dataloader, epochs=10)

# Visualize samples from the model
viz = Viz(model)
samples = viz.samples()

# Do all sorts of fun things with model
...

Trained models

The trained models referenced in the paper are included in the trained_models folder. The load_model.ipynb notebook provides code to load and use them.
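
As a rough sketch of what loading looks like (assuming the checkpoints are standard PyTorch state dicts and reusing the latent_spec from the example above; the actual file names and per-model specs live in load_model.ipynb):

import torch
from jointvae.models import VAE

# Hypothetical checkpoint path and latent spec, for illustration only;
# see load_model.ipynb for the spec each trained model actually uses
latent_spec = {'cont': 20, 'disc': [10, 5, 5, 2]}
model = VAE(img_size=(3, 64, 64), latent_spec=latent_spec)
model.load_state_dict(torch.load('trained_models/model.pt', map_location='cpu'))
model.eval()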

Data sources

The MNIST and FashionMNIST datasets can be automatically downloaded using torchvision.
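
For example, a minimal torchvision setup (a sketch; paths, transforms and batch size are your choice):

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Downloads MNIST to data/mnist on first run; use
# datasets.FashionMNIST for FashionMNIST
mnist = datasets.MNIST('data/mnist', train=True, download=True,
                       transform=transforms.ToTensor())
dataloader = DataLoader(mnist, batch_size=32, shuffle=True)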

CelebA

All CelebA images were resized to be 64 by 64. Data can be found here.

Chairs

All Chairs images were center cropped and resized to 64 by 64. Data can be found here.
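
A sketch of this preprocessing with torchvision transforms (the crop size below is an assumed value for illustration, not taken from the repo; for CelebA, drop the crop step):

from torchvision import transforms

# Center crop, then resize to 64x64; CenterCrop(300) is an assumption
preprocess = transforms.Compose([
    transforms.CenterCrop(300),
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])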

Applications

Image editing

Inferring unlabelled quantities

Citing

If you find this work useful in your research, please cite using:

@inproceedings{dupont2018learning,
  title={Learning disentangled joint continuous and discrete representations},
  author={Dupont, Emilien},
  booktitle={Advances in Neural Information Processing Systems},
  pages={707--717},
  year={2018}
}

More examples

License

MIT
