
mobeets / classifying-vae-lstm

Licence: other
music generation with a classifying variational autoencoder (VAE) and LSTM

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to classifying-vae-lstm

Deep Learning With Python
Example projects I completed to understand Deep Learning techniques with Tensorflow. Please note that I no longer maintain this repository.
Stars: ✭ 134 (+396.3%)
Mutual labels:  lstm, vae, variational-autoencoder
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+4385.19%)
Mutual labels:  vae, variational-autoencoder
soft-intro-vae-pytorch
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"
Stars: ✭ 170 (+529.63%)
Mutual labels:  vae, variational-autoencoder
precision-recall-distributions
Assessing Generative Models via Precision and Recall (official repository)
Stars: ✭ 80 (+196.3%)
Mutual labels:  vae, variational-autoencoder
Numpy Ml
Machine learning, in numpy
Stars: ✭ 11,100 (+41011.11%)
Mutual labels:  lstm, vae
MIDI-VAE
No description or website provided.
Stars: ✭ 56 (+107.41%)
Mutual labels:  vae, variational-autoencoder
vae-concrete
Keras implementation of a Variational Auto Encoder with a Concrete Latent Distribution
Stars: ✭ 51 (+88.89%)
Mutual labels:  vae, variational-autoencoder
Cada Vae Pytorch
Official implementation of the paper "Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders" (CVPR 2019)
Stars: ✭ 198 (+633.33%)
Mutual labels:  vae, variational-autoencoder
Bagel
IPCCC 2018: Robust and Unsupervised KPI Anomaly Detection Based on Conditional Variational Autoencoder
Stars: ✭ 45 (+66.67%)
Mutual labels:  vae, variational-autoencoder
pyroVED
Invariant representation learning from imaging and spectral data
Stars: ✭ 23 (-14.81%)
Mutual labels:  vae, variational-autoencoder
Keras-Generating-Sentences-from-a-Continuous-Space
Text Variational Autoencoder inspired by the paper 'Generating Sentences from a Continuous Space' Bowman et al. https://arxiv.org/abs/1511.06349
Stars: ✭ 32 (+18.52%)
Mutual labels:  vae, variational-autoencoder
Video prediction
Stochastic Adversarial Video Prediction
Stars: ✭ 247 (+814.81%)
Mutual labels:  vae, variational-autoencoder
Vae Cvae Mnist
Variational Autoencoder and Conditional Variational Autoencoder on MNIST in PyTorch
Stars: ✭ 229 (+748.15%)
Mutual labels:  vae, variational-autoencoder
Variational Recurrent Autoencoder Tensorflow
A tensorflow implementation of "Generating Sentences from a Continuous Space"
Stars: ✭ 228 (+744.44%)
Mutual labels:  vae, variational-autoencoder
VAE-Gumbel-Softmax
An implementation of a Variational Autoencoder using the Gumbel-Softmax reparametrization trick (ICLR 2017) in TensorFlow (tested on r1.5, CPU and GPU).
Stars: ✭ 66 (+144.44%)
Mutual labels:  vae, variational-autoencoder
InpaintNet
Code accompanying ISMIR'19 paper titled "Learning to Traverse Latent Spaces for Musical Score Inpainting"
Stars: ✭ 48 (+77.78%)
Mutual labels:  vae, music-generation
Pytorch Vae
A CNN Variational Autoencoder (CNN-VAE) implemented in PyTorch
Stars: ✭ 181 (+570.37%)
Mutual labels:  vae, variational-autoencoder
S Vae Tf
Tensorflow implementation of Hyperspherical Variational Auto-Encoders
Stars: ✭ 198 (+633.33%)
Mutual labels:  vae, variational-autoencoder
Variational-Autoencoder-pytorch
Implementation of a convolutional Variational-Autoencoder model in pytorch.
Stars: ✭ 65 (+140.74%)
Mutual labels:  vae, variational-autoencoder
continuous Bernoulli
C language programs for the simulator, transformation, and test statistic of the continuous Bernoulli distribution; the accompanying book also covers the continuous Binomial and continuous Trinomial distributions.
Stars: ✭ 22 (-18.52%)
Mutual labels:  vae, variational-autoencoder

A Classifying Variational Autoencoder with Application to Polyphonic Music Generation

This is the implementation of the Classifying VAE and Classifying VAE+LSTM models, as described in "A Classifying Variational Autoencoder with Application to Polyphonic Music Generation" by Jay A. Hennig, Akash Umakantha, and Ryan C. Williamson.

These models extend the standard VAE and VAE+LSTM to the case where there is a latent discrete category. In the case of music generation, for example, we may wish to infer the key of a song so that we can generate notes consistent with that key. These discrete latents are modeled with a Logistic Normal distribution, so that random samples from the distribution remain compatible with the reparameterization trick during training.
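
A minimal NumPy sketch of that idea (illustrative only; the variable names and shapes are assumptions, not taken from the repository's Keras code): the encoder produces a mean and log-variance for a Gaussian over unconstrained logits, a sample is drawn with the reparameterization trick, and a softmax maps it onto the probability simplex, giving a Logistic Normal sample over the discrete categories (e.g., keys).

import numpy as np

def sample_logistic_normal(mu, log_var, rng=None):
    """Differentiable sample on the probability simplex via reparameterization."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(np.shape(mu))        # noise independent of the parameters
    z = mu + np.exp(0.5 * log_var) * eps           # reparameterized Gaussian sample of logits
    w = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return w / np.sum(w, axis=-1, keepdims=True)   # softmax -> Logistic Normal sample

# Example: a 2-class key variable (e.g., major vs. minor)
mu = np.array([0.5, -0.5])
log_var = np.array([-1.0, -1.0])
print(sample_logistic_normal(mu, log_var))         # entries are positive and sum to 1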

Code for these models (in Keras) can be found here.

Training data for the JSB Chorales and Piano-midi corpuses can be found in data/input. Songs have been transposed into C major or C minor (*_Cs.pickle), for comparison to previous work, or kept in their original keys (*_all.pickle).
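
The exact layout of these pickle files isn't documented in this README, so the snippet below only shows one way to peek at a training file before running the scripts (the path is taken from the training example further down; adjust it to where you keep the data).

import pickle

with open('data/input/JSB Chorales_Cs.pickle', 'rb') as f:
    data = pickle.load(f)

print(type(data))
if isinstance(data, dict):
    print(list(data.keys()))   # e.g. train/valid/test splits, if present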

Generated music samples

Samples from the models trained on the JSB Chorales and Piano-midi corpuses, with songs in their original keys, can be found below, or in data/samples.

JSB Chorales (all keys):

  • VAE
  • Classifying VAE (inferred key)
  • VAE+LSTM
  • Classifying VAE+LSTM (inferred key)

Piano-midi (all keys):

  • VAE
  • Classifying VAE (inferred key)
  • Classifying VAE (given key)

Training new models

Example of training a Classifying VAE with 4 latent dimensions on JSB Chorales in two keys, and then generating a sample from this model:

$ python cl_vae/train.py run1 --use_x_prev --latent_dim 4 --train_file '../data/input/JSB Chorales_Cs.pickle'
$ python cl_vae/sample.py outfile --model_file ../data/models/run1.h5 --train_file '../data/input/JSB Chorales_Cs.pickle'