snatch59 / Keras Autoencoders

Licence: apache-2.0
Autoencoders in Keras



keras-autoencoders

This GitHub repo was originally put together to give a full set of working examples of autoencoders taken from the code snippets in Building Autoencoders in Keras. These examples are:

  • A simple autoencoder / sparse autoencoder: simple_autoencoder.py
  • A deep autoencoder: deep_autoencoder.py
  • A convolutional autoencoder: convolutional_autoencoder.py
  • An image denoising autoencoder: image_desnoising.py
  • A variational autoencoder (VAE): variational_autoencoder.py
  • A variational autoencoder with deconvolutional layers: variational_autoencoder_deconv.py

All the scripts use the ubiquitous MNIST handwritten digit data set, and have been run under Python 3.5 and Keras 2.1.4 with a TensorFlow 1.5 backend and numpy 1.14.1. Note that Keras 2.1.4 or later is required, or else the VAE example doesn't work.

Latent Space Visualization

In order to bring a bit of added value, each autoencoder script saves the autoencoder's latent space/features/bottleneck in a pickle file.
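Saving and reloading such features is a one-liner each way with pickle. A minimal sketch (the array of random values here is just a stand-in for real 32-d bottleneck activations, and the filename is illustrative):

```python
import pickle

import numpy as np

# Stand-in for 32-d bottleneck activations for 1000 MNIST digits.
latent_features = np.random.rand(1000, 32).astype(np.float32)

# Save the features to a pickle file...
with open("simple_autoencoder_latent.pickle", "wb") as f:
    pickle.dump(latent_features, f)

# ...and load them back for later visualization.
with open("simple_autoencoder_latent.pickle", "rb") as f:
    restored = pickle.load(f)

print(restored.shape)  # (1000, 32)
```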

An autoencoder is made of two components: the encoder and the decoder. The encoder compresses the high-dimensional input down to a bottleneck layer, where the number of neurons is smallest. The decoder then takes this encoded input and converts it back to the original input shape, in this case an image. The latent space is the space in which the data lies in the bottleneck layer.

The latent space contains a compressed representation of the image, which is the only information the decoder is allowed to use to try to reconstruct the input as faithfully as possible. To perform well, the network has to learn to extract the most relevant features in the bottleneck.
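The dimension flow described above can be sketched with plain numpy: a flattened 28x28 MNIST image (784 values) is squeezed through a 32-neuron bottleneck and expanded back to 784 values. The weights here are random and untrained, purely to illustrate the shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a 784 -> 32 -> 784 autoencoder (untrained, shapes only).
W_enc = rng.standard_normal((784, 32)) * 0.01
W_dec = rng.standard_normal((32, 784)) * 0.01

def relu(x):
    return np.maximum(x, 0.0)

x = rng.random((1, 784))                 # one flattened 28x28 image
z = relu(x @ W_enc)                      # latent code in the bottleneck: (1, 32)
x_hat = 1.0 / (1.0 + np.exp(-(z @ W_dec)))  # sigmoid reconstruction: (1, 784)

print(z.shape, x_hat.shape)  # (1, 32) (1, 784)
```

Training would adjust W_enc and W_dec to minimize the reconstruction error between x and x_hat, which is what forces the bottleneck to keep only the most relevant features.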

Autoencoder latent space

A great explanation of latent space visualization by Julien Despois can be found here, and that's where I nicked the above explanation and diagram from!

The visualizations are created by carrying out dimensionality reduction on the 32-d (or 128-d) features using t-distributed stochastic neighbor embedding (t-SNE), transforming them into 2-d features which are easy to visualize.
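The t-SNE step itself is a short scikit-learn call. A hedged sketch, using random values as a stand-in for the 32-d features loaded from one of the pickle files (the perplexity value is just a common default, not necessarily what the script uses):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for 32-d latent features loaded from a pickle file.
features = np.random.rand(200, 32)

# Reduce the 32-d features to 2-d for scatter plotting.
tsne = TSNE(n_components=2, perplexity=30, random_state=0)
embedding = tsne.fit_transform(features)

print(embedding.shape)  # (200, 2)
```

The resulting 2-d embedding can then be scatter-plotted, coloring each point by its digit label to see how well the latent space separates the classes.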

visualize_latent_space.py loads the appropriate feature, carries out the t-SNE, saves the t-SNE and plots the scatter graph. Note that at the moment you have to do some commenting/uncommenting to run it on the appropriate feature :-( .

Here are some 32-d examples:

simple autoencoder latent space

sparse autoencoder latent space

deep autoencoder latent space

And here is the output from the 2-d VAE latent space:

variational autoencoder latent space

variational autoencoder latent space
