
wblgers / Tensorflow_stacked_denoising_autoencoder

Implementation of the stacked denoising autoencoder in Tensorflow


Projects that are alternatives to, or similar to, Tensorflow_stacked_denoising_autoencoder

Smrt
Handle class imbalance intelligently by using variational auto-encoders to generate synthetic observations of your minority class.
Stars: ✭ 102 (-32.45%)
Mutual labels:  autoencoder
Calc
Convolutional Autoencoder for Loop Closure
Stars: ✭ 119 (-21.19%)
Mutual labels:  autoencoder
Kate
Code & data accompanying the KDD 2017 paper "KATE: K-Competitive Autoencoder for Text"
Stars: ✭ 135 (-10.6%)
Mutual labels:  autoencoder
Repo 2016
R, Python and Mathematica Codes in Machine Learning, Deep Learning, Artificial Intelligence, NLP and Geolocation
Stars: ✭ 103 (-31.79%)
Mutual labels:  autoencoder
Pytorch cpp
Deep Learning sample programs using PyTorch in C++
Stars: ✭ 114 (-24.5%)
Mutual labels:  autoencoder
Deeptime
Deep learning meets molecular dynamics.
Stars: ✭ 123 (-18.54%)
Mutual labels:  autoencoder
Deep Autoencoders For Collaborative Filtering
Using Deep Autoencoders for predictions of movie ratings.
Stars: ✭ 101 (-33.11%)
Mutual labels:  autoencoder
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (-6.62%)
Mutual labels:  autoencoder
Lstm Autoencoders
Anomaly detection for streaming data using autoencoders
Stars: ✭ 113 (-25.17%)
Mutual labels:  autoencoder
Pt Dec
PyTorch implementation of DEC (Deep Embedding Clustering)
Stars: ✭ 132 (-12.58%)
Mutual labels:  autoencoder
Deepai
Detection of Accounting Anomalies using Deep Autoencoder Neural Networks - A lab we prepared for NVIDIA's GPU Technology Conference 2018 that will walk you through the detection of accounting anomalies using deep autoencoder neural networks. The majority of the lab content is based on Jupyter Notebook, Python and PyTorch.
Stars: ✭ 104 (-31.13%)
Mutual labels:  autoencoder
Gpnd
Generative Probabilistic Novelty Detection with Adversarial Autoencoders
Stars: ✭ 112 (-25.83%)
Mutual labels:  autoencoder
Srl Zoo
State Representation Learning (SRL) zoo with PyTorch - Part of S-RL Toolbox
Stars: ✭ 125 (-17.22%)
Mutual labels:  autoencoder
Sdcn
Structural Deep Clustering Network
Stars: ✭ 103 (-31.79%)
Mutual labels:  autoencoder
Splitbrainauto
Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction. In CVPR, 2017.
Stars: ✭ 137 (-9.27%)
Mutual labels:  autoencoder
Segmentation
Tensorflow implementation : U-net and FCN with global convolution
Stars: ✭ 101 (-33.11%)
Mutual labels:  autoencoder
Rectorch
rectorch is a pytorch-based framework for state-of-the-art top-N recommendation
Stars: ✭ 121 (-19.87%)
Mutual labels:  autoencoder
Libsdae Autoencoder Tensorflow
A simple Tensorflow based library for deep and/or denoising AutoEncoder.
Stars: ✭ 147 (-2.65%)
Mutual labels:  autoencoder
Tensorflow Mnist Cvae
Tensorflow implementation of conditional variational auto-encoder for MNIST
Stars: ✭ 139 (-7.95%)
Mutual labels:  autoencoder
Tybalt
Training and evaluating a variational autoencoder for pan-cancer gene expression data
Stars: ✭ 126 (-16.56%)
Mutual labels:  autoencoder

tensorflow_stacked_denoising_autoencoder

0. Setup Environment

To run the scripts, at least the following packages are required:

  • Python 3.5.2
  • TensorFlow 1.6.0
  • NumPy 1.14.1

You can use Anaconda to install these required packages. For TensorFlow, use the following command for a quick installation under Windows:

pip install tensorflow
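
The code targets the TensorFlow 1.x API, so pinning the version listed above (a compatibility assumption on my part, not a stated project requirement) avoids breakage with newer releases:

pip install tensorflow==1.6.0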

1. Content

This project implements several kinds of autoencoders. The base Python class is library/Autoencoder.py; set the value of "ae_para" in the constructor of Autoencoder to select the corresponding variant.

  • ae_para[0]: the corruption level for the input of the autoencoder. If ae_para[0] > 0, it is a denoising autoencoder (see the sketch below);
  • ae_para[1]: the coefficient of the sparsity regularization term. If ae_para[1] > 0, it is a sparse autoencoder.
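
As an illustration only (not the repository's exact code), the masking corruption controlled by ae_para[0] can be written directly in TensorFlow 1.x:

import tensorflow as tf

# Illustrative masking noise: zero out roughly a fraction
# `corruption_level` of the input units, the usual corruption
# used by denoising autoencoders.
def corrupt(x, corruption_level):
    keep_mask = tf.cast(
        tf.random_uniform(tf.shape(x)) >= corruption_level, tf.float32)
    return x * keep_mask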

1.1 autoencoder

Follow the code sample below to construct an autoencoder:

import tensorflow as tf
from library.Autoencoder import Autoencoder  # base class in library/Autoencoder.py

corruption_level = 0
sparse_reg = 0

# network architecture
n_inputs = 784
n_hidden = 400
n_outputs = 10
lr = 0.001

# define the autoencoder
ae = Autoencoder(n_layers=[n_inputs, n_hidden],
                 transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                 ae_para=[corruption_level, sparse_reg])
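
The class is based on the autoencoder in the TensorFlow official models (see the reference section), which trains through a partial_fit method that runs one optimization step per mini-batch. Assuming that interface is preserved here (an assumption, not something this README states), a minimal training loop might look like:

from tensorflow.examples.tutorials.mnist import input_data

# Hypothetical training loop; partial_fit/calc_total_cost follow the
# interface of the TensorFlow models autoencoder this class is based on.
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
n_epochs = 20
batch_size = 128

for epoch in range(n_epochs):
    for _ in range(mnist.train.num_examples // batch_size):
        batch_x, _ = mnist.train.next_batch(batch_size)
        ae.partial_fit(batch_x)  # one gradient step on this batch
    # monitor reconstruction cost on the validation split
    print('epoch %d, cost %.4f'
          % (epoch, ae.calc_total_cost(mnist.validation.images)))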

To visualize the extracted features and the reconstructed images, check the code in visualize_ae.py.

  • Extracted features on MNIST:

[image: extracted features on MNIST]

  • Reconstructed noisy images after the input->encoder->decoder pipeline:

[image: reconstructed noisy images]

1.2 denoising autoencoder

Follow the code sample below to construct a denoising autoencoder:

corruption_level = 0.3
sparse_reg = 0

# network architecture
n_inputs = 784
n_hidden = 400
n_outputs = 10
lr = 0.001

# define the denoising autoencoder (non-zero corruption level)
ae = Autoencoder(n_layers=[n_inputs, n_hidden],
                 transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                 ae_para=[corruption_level, sparse_reg])

Test results:

  • Extracted features on MNIST:

[image: extracted features on MNIST]

  • Reconstructed noisy images after the input->encoder->decoder pipeline:

[image: reconstructed noisy images]

1.3 sparse autoencoder

Follow the code sample below to construct a sparse autoencoder:

corruption_level = 0
sparse_reg = 2

# network architecture
n_inputs = 784
n_hidden = 400
n_outputs = 10
lr = 0.001

# define the sparse autoencoder (non-zero sparsity coefficient)
ae = Autoencoder(n_layers=[n_inputs, n_hidden],
                 transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                 ae_para=[corruption_level, sparse_reg])
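
For reference, the UFLDL tutorial linked in the reference section defines the sparsity term as a KL divergence between a target activation rho and the mean hidden activation, scaled by the sparsity coefficient (sparse_reg here). A sketch of that penalty in TensorFlow 1.x, which may differ from the exact formulation in library/Autoencoder.py:

import tensorflow as tf

# Illustrative KL-divergence sparsity penalty (UFLDL formulation).
# Assumes hidden activations lie in (0, 1), e.g. sigmoid units;
# `hidden` is a batch of hidden-layer activations.
def kl_sparsity(hidden, rho=0.05):
    rho_hat = tf.reduce_mean(hidden, axis=0)  # mean activation per unit
    return tf.reduce_sum(
        rho * tf.log(rho / rho_hat)
        + (1 - rho) * tf.log((1 - rho) / (1 - rho_hat)))

# total loss would then be: reconstruction_loss + sparse_reg * kl_sparsity(hidden)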

1.4 stacked (denoising) autoencoder

A stacked autoencoder contains more than one autoencoder. In the script "SAE_Softmax_MNIST.py", two autoencoders are defined:

corruption_level = 0.3
sparse_reg = 0

# network architecture
n_inputs = 784
n_hidden = 400
n_hidden2 = 100
n_outputs = 10
lr = 0.001

# define the two stacked autoencoders
ae = Autoencoder(n_layers=[n_inputs, n_hidden],
                 transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                 ae_para=[corruption_level, sparse_reg])
ae_2nd = Autoencoder(n_layers=[n_hidden, n_hidden2],
                     transfer_function=tf.nn.relu,
                     optimizer=tf.train.AdamOptimizer(learning_rate=lr),
                     ae_para=[corruption_level, sparse_reg])

For the training of the SAE on the MNIST classification task, there are four sequential parts:

  1. Training of the first autoencoder;
  2. Training of the second autoencoder on the output of the first;
  3. Training of the output layer (normally a softmax layer) on the sequential output of the first and second autoencoders;
  4. Fine-tuning of the whole network.

Detailed code can be found in the script "SAE_Softmax_MNIST.py".
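
A condensed sketch of those four phases, reusing the hypothetical partial_fit/transform interface assumed in section 1.1 (the actual script may organize this differently):

# Hypothetical sketch of greedy layer-wise pretraining plus fine-tuning;
# method names are assumptions, see SAE_Softmax_MNIST.py for the real code.

# 1. Train the first autoencoder on the raw images.
for _ in range(n_steps):
    batch_x, _ = mnist.train.next_batch(batch_size)
    ae.partial_fit(batch_x)

# 2. Train the second autoencoder on the first one's hidden output.
for _ in range(n_steps):
    batch_x, _ = mnist.train.next_batch(batch_size)
    ae_2nd.partial_fit(ae.transform(batch_x))

# 3. Train a softmax output layer on the stacked features, then
# 4. fine-tune the whole encoder + softmax network on the labels
#    (both steps minimize cross-entropy; see the script for details).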

2. Reference

Class "autoencoder" are based on the tensorflow official models: https://github.com/tensorflow/models/tree/master/research/autoencoder/autoencoder_models

For the theory behind autoencoders and sparse autoencoders, please refer to: http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/

3. My blog for this project

An informal discussion of autoencoders: denoising autoencoders, sparse autoencoders, and stacked autoencoders (with TensorFlow implementations) (in Chinese)
