Variational LSTM-Autoencoder

This project implements a variational LSTM sequence-to-sequence architecture for a sentence auto-encoding task. It broadly follows the papers "Variational Recurrent Auto-Encoders" and "Generating Sentences from a Continuous Space". Most of the variational-layer implementation is adapted from "y0ast/VAE-torch".

Description

Following the two papers above, the variational layer is inserted only between the last hidden state of the encoder and the first hidden state of the decoder, via the following steps:

  1. Compute the mean and variance of the approximate posterior q(z|x) from the encoder's last hidden state, using a 2-layer MLP encoder

  2. Compute the KL-divergence loss between the approximate posterior q(z|x) and the enforced prior p(z)

  3. Draw a noise sample using the reparameterization trick

  4. Compute the decoder's first hidden state from the sample with a 2-layer MLP decoder
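The four steps above can be sketched as follows. This is an illustrative NumPy sketch with a standard Gaussian prior and hypothetical, randomly initialised layer weights — not the project's actual Torch7/Lua code:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, latent_dim = 8, 4
h_enc = rng.standard_normal(hidden_dim)   # encoder's final hidden state

def linear(n_in, n_out):
    # Hypothetical randomly initialised affine layer, for illustration only.
    W = rng.standard_normal((n_in, n_out)) * 0.1
    b = np.zeros(n_out)
    return lambda x: x @ W + b

# Step 1: 2-layer MLP encoder producing mean and log-variance of q(z|x).
enc_hidden = np.tanh(linear(hidden_dim, hidden_dim)(h_enc))
mu = linear(hidden_dim, latent_dim)(enc_hidden)
log_var = linear(hidden_dim, latent_dim)(enc_hidden)

# Step 2: closed-form KL divergence between q(z|x) = N(mu, sigma^2)
# and the prior p(z) = N(0, I).
kld = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))

# Step 3: reparameterization trick, z = mu + sigma * eps with eps ~ N(0, I),
# so gradients can flow through mu and log_var.
eps = rng.standard_normal(latent_dim)
z = mu + np.exp(0.5 * log_var) * eps

# Step 4: 2-layer MLP decoder mapping z to the decoder's first hidden state.
dec_hidden = np.tanh(linear(latent_dim, hidden_dim)(z))
h_dec0 = np.tanh(linear(hidden_dim, hidden_dim)(dec_hidden))
```

The KL term acts as a regularizer pulling q(z|x) toward the prior, and in this closed form it is always non-negative; at training time it is added to the usual reconstruction loss.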

Dependencies

This code requires Torch7 and nngraph.

Usage

  • training on GPU: th VLSTM-Autoencoder.lua -gpuid 0
  • sampling on GPU: th sample.lua -gpuid 0 -cv cv/checkpoint -data dataset/test