
Kaixhin / Autoencoders

Licence: MIT
Torch implementations of various types of autoencoders

Programming Languages

lua

Projects that are alternatives to or similar to Autoencoders

time-series-autoencoder
📈 PyTorch dual-attention LSTM-autoencoder for multivariate Time Series 📈
Stars: ✭ 198 (-52.97%)
Mutual labels:  autoencoder
SAE-NAD
The implementation of "Point-of-Interest Recommendation: Exploiting Self-Attentive Autoencoders with Neighbor-Aware Influence"
Stars: ✭ 48 (-88.6%)
Mutual labels:  autoencoder
Alae
[CVPR2020] Adversarial Latent Autoencoders
Stars: ✭ 3,178 (+654.87%)
Mutual labels:  autoencoder
abae-pytorch
PyTorch implementation of "An Unsupervised Neural Attention Model for Aspect Extraction" by He et al., ACL 2017
Stars: ✭ 52 (-87.65%)
Mutual labels:  autoencoder
AE-CNN
ICVGIP '18 oral paper - Classification of thoracic diseases on the ChestX-Ray14 dataset
Stars: ✭ 33 (-92.16%)
Mutual labels:  autoencoder
Cifar-Autoencoder
A look at some simple autoencoders for the Cifar10 dataset, including a denoising autoencoder. Python code included.
Stars: ✭ 42 (-90.02%)
Mutual labels:  autoencoder
keras-adversarial-autoencoders
Experiments with Adversarial Autoencoders using Keras
Stars: ✭ 20 (-95.25%)
Mutual labels:  autoencoder
Deepsvg
[NeurIPS 2020] Official code for the paper "DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation". Includes a PyTorch library for deep learning with SVG data.
Stars: ✭ 403 (-4.28%)
Mutual labels:  autoencoder
video autoencoder
Video LSTM autoencoder built with PyTorch. https://arxiv.org/pdf/1502.04681.pdf
Stars: ✭ 32 (-92.4%)
Mutual labels:  autoencoder
Hands On Deep Learning Algorithms With Python
Master Deep Learning Algorithms with Extensive Math by Implementing them using TensorFlow
Stars: ✭ 272 (-35.39%)
Mutual labels:  autoencoder
AutoEncoders
Variational autoencoder, denoising autoencoder and other variations of autoencoders, implemented in Keras
Stars: ✭ 14 (-96.67%)
Mutual labels:  autoencoder
mirapy
MiraPy: A Python package for Deep Learning in Astronomy
Stars: ✭ 40 (-90.5%)
Mutual labels:  autoencoder
Keras Autoencoder
Autoencoders using Keras
Stars: ✭ 74 (-82.42%)
Mutual labels:  autoencoder
TensorFlow-Autoencoders
Implementations of autoencoder, generative adversarial networks, variational autoencoder and adversarial variational autoencoder
Stars: ✭ 29 (-93.11%)
Mutual labels:  autoencoder
Pointnet Autoencoder
Autoencoder for Point Clouds
Stars: ✭ 291 (-30.88%)
Mutual labels:  autoencoder
Wavelet-like-Auto-Encoder
No description or website provided.
Stars: ✭ 61 (-85.51%)
Mutual labels:  autoencoder
T3
[EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
Stars: ✭ 25 (-94.06%)
Mutual labels:  autoencoder
Tensorflow Tutorial
TensorFlow tutorial from basic to advanced; Chinese-language AI tutorials by 莫烦Python
Stars: ✭ 4,122 (+879.1%)
Mutual labels:  autoencoder
Zhihu
This repo contains the source code from my personal column (https://zhuanlan.zhihu.com/zhaoyeyu), implemented in Python 3.6. It includes natural language processing and computer vision projects, such as text generation, machine translation and deep convolutional GANs, among other hands-on code.
Stars: ✭ 3,307 (+685.51%)
Mutual labels:  autoencoder
Noise2Noise-audio denoising without clean training data
Source code for the paper "Speech Denoising without Clean Training Data: a Noise2Noise Approach", accepted at the INTERSPEECH 2021 conference. The paper tackles the heavy dependence of deep-learning-based audio denoising methods on clean speech data by showing that it is possible to train deep speech denoisi…
Stars: ✭ 49 (-88.36%)
Mutual labels:  autoencoder

Autoencoders

This repository is a Torch version of Building Autoencoders in Keras, containing code for reference only - please refer to the original blog post for an explanation of autoencoders. Training hyperparameters have not been tuned. The following models are implemented (a minimal Torch sketch of the basic AE follows the list):

  • AE: Fully-connected autoencoder
  • SparseAE: Sparse autoencoder
  • DeepAE: Deep (fully-connected) autoencoder
  • ConvAE: Convolutional autoencoder
  • UpconvAE: Upconvolutional autoencoder - also known as a deconvolutional or transposed-convolutional autoencoder (bonus)
  • DenoisingAE: Denoising (convolutional) autoencoder [1, 2]
  • CAE: Contractive autoencoder (bonus) [3]
  • Seq2SeqAE: Sequence-to-sequence autoencoder
  • VAE: Variational autoencoder [4, 5]
  • CatVAE: Categorical variational autoencoder (bonus) [6, 7]
  • AAE: Adversarial autoencoder (bonus) [8]
  • WTA-AE: Winner-take-all autoencoder (bonus) [9]
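
As a rough illustration, here is a minimal sketch of the fully-connected AE above in Torch. This is not the repository's code; the layer sizes, batch size and learning rate are placeholder assumptions.

-- Minimal fully-connected autoencoder on flattened 28x28 inputs
require 'nn'

local inputSize, hiddenSize = 28 * 28, 32

local model = nn.Sequential()
model:add(nn.Linear(inputSize, hiddenSize)) -- encoder: compress to 32 units
model:add(nn.ReLU())
model:add(nn.Linear(hiddenSize, inputSize)) -- decoder: reconstruct the input
model:add(nn.Sigmoid())                     -- outputs in [0, 1]

local criterion = nn.BCECriterion()         -- reconstruction loss

-- One manual SGD step on a random placeholder batch
local x = torch.rand(16, inputSize)
local xRecon = model:forward(x)
local loss = criterion:forward(xRecon, x)
model:zeroGradParameters()
model:backward(x, criterion:backward(xRecon, x))
model:updateParameters(0.01)
print(loss)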

Different models can be chosen by running th main.lua -model <modelName>.

The denoising criterion can be used in place of the standard (autoencoder) reconstruction criterion via the -denoising flag. For example, a denoising AAE (DAAE) [10] can be set up using th main.lua -model AAE -denoising. The corruption process is additive Gaussian noise ~ N(0, 0.5); a sketch of this step follows.
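
The sketch below reads N(0, 0.5) as mean 0 and standard deviation 0.5 (an assumption about the notation); the dpnn package listed under Requirements also provides a WhiteNoise layer for the same purpose. The batch shape is a placeholder.

require 'torch'

-- Corrupt the input, but reconstruct the clean target
local function corrupt(x)
  return x + x.new(x:size()):normal(0, 0.5) -- additive Gaussian noise
end

local xClean = torch.rand(16, 28 * 28) -- placeholder batch
local xNoisy = corrupt(xClean)
-- Training then pairs the corrupted input with the clean target:
-- local loss = criterion:forward(model:forward(xNoisy), xClean)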

MCMC sampling [10] can be used for VAEs, CatVAEs and AAEs with th main.lua -model <modelName> -mcmc <steps>. To see the effects of MCMC sampling with this simple setup, it is best to draw the initial samples from a Gaussian with a large standard deviation, e.g. -sampleStd 5; a sketch of the sampling loop follows.
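
The following is a hedged sketch of the sampling loop from [10]: each step decodes the latent samples and re-encodes the reconstructions, moving the chain towards latent codes that the model actually assigns to data. The encoder and decoder here are untrained stand-ins, and all sizes are placeholder assumptions.

require 'nn'

local xDim, zDim = 28 * 28, 10
-- Stand-ins for the two halves of a trained model
local encoder = nn.Sequential():add(nn.Linear(xDim, zDim))
local decoder = nn.Sequential():add(nn.Linear(zDim, xDim)):add(nn.Sigmoid())

-- Draw initial samples from a deliberately wide Gaussian (cf. -sampleStd 5)
local sampleStd, mcmcSteps = 5, 8
local z = torch.randn(16, zDim):mul(sampleStd)

for step = 1, mcmcSteps do
  local x = decoder:forward(z)   -- decode latents to data space
  z = encoder:forward(x):clone() -- re-encode (for a VAE, use the mean of q(z|x))
end

local samples = decoder:forward(z) -- final samples after the chain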

Requirements

The following luarocks packages are required (install commands follow the list):

  • mnist
  • dpnn (for DenoisingAE)
  • rnn (for Seq2SeqAE)
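
Each can be installed with luarocks, for example:

luarocks install mnist
luarocks install dpnn
luarocks install rnn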

Citation

If you find this library useful and would like to cite it, the following would be appropriate:

@misc{Autoencoders,
  author = {Arulkumaran, Kai},
  title = {Kaixhin/Autoencoders},
  url = {https://github.com/Kaixhin/Autoencoders},
  year = {2016}
}

References

[1] Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P. A. (2008, July). Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning (pp. 1096-1103). ACM.
[2] Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., & Manzagol, P. A. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec), 3371-3408.
[3] Rifai, S., Vincent, P., Muller, X., Glorot, X., & Bengio, Y. (2011). Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th international conference on machine learning (ICML-11) (pp. 833-840).
[4] Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
[5] Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of The 31st International Conference on Machine Learning (pp. 1278-1286).
[6] Jang, E., Gu, S., & Poole, B. (2016). Categorical Reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144.
[7] Maddison, C. J., Mnih, A., & Teh, Y. W. (2016). The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. arXiv preprint arXiv:1611.00712.
[8] Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., & Frey, B. (2015). Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
[9] Makhzani, A., & Frey, B. J. (2015). Winner-take-all autoencoders. In Advances in Neural Information Processing Systems (pp. 2791-2799).
[10] Arulkumaran, K., Creswell, A., & Bharath, A. A. (2016). Improving Sampling from Generative Autoencoders with Markov Chains. arXiv preprint arXiv:1610.09296.
