
iamyuanchung / seq2seq-autoencoder

Licence: other
Theano implementation of Sequence-to-Sequence Autoencoder

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to seq2seq-autoencoder

probabilistic nlg
Tensorflow Implementation of Stochastic Wasserstein Autoencoder for Probabilistic Sentence Generation (NAACL 2019).
Stars: ✭ 28 (+133.33%)
Mutual labels:  autoencoder, seq2seq
2D-and-3D-Deep-Autoencoder
Convolutional autoencoder applied to MRI images
Stars: ✭ 57 (+375%)
Mutual labels:  theano, autoencoder
seq3
Source code for the NAACL 2019 paper "SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression"
Stars: ✭ 121 (+908.33%)
Mutual labels:  autoencoder, seq2seq
Basic nns in frameworks
Several basic neural networks (MLP, autoencoder, CNN, recurrent NN, recursive NN) implemented in several NN frameworks (TensorFlow, PyTorch, Theano, Keras)
Stars: ✭ 58 (+383.33%)
Mutual labels:  theano, autoencoder
Tensorflow Tutorials
Provides source code for practicing TensorFlow step by step, from the basics to applications.
Stars: ✭ 2,096 (+17366.67%)
Mutual labels:  autoencoder, seq2seq
Repo 2016
R, Python, and Mathematica code for Machine Learning, Deep Learning, Artificial Intelligence, NLP, and Geolocation
Stars: ✭ 103 (+758.33%)
Mutual labels:  theano, autoencoder
Keras Gp
Keras + Gaussian Processes: Learning scalable deep and recurrent kernels.
Stars: ✭ 218 (+1716.67%)
Mutual labels:  theano
Seq2Seq-chatbot
TensorFlow implementation of a Twitter chatbot
Stars: ✭ 18 (+50%)
Mutual labels:  seq2seq
Alphazero gomoku
An implementation of the AlphaZero algorithm for Gomoku (also called Gobang or Five in a Row)
Stars: ✭ 2,570 (+21316.67%)
Mutual labels:  theano
Opt Mmd
Learning kernels to maximize the power of MMD tests
Stars: ✭ 181 (+1408.33%)
Mutual labels:  theano
Dandelion
A lightweight deep learning framework on top of Theano, offering a better balance between flexibility and abstraction
Stars: ✭ 15 (+25%)
Mutual labels:  theano
Unsupervised-Classification-with-Autoencoder
Using autoencoders as unsupervised machine learning algorithms for classification with deep learning.
Stars: ✭ 43 (+258.33%)
Mutual labels:  autoencoder
SemiDenseNet
Repository containing the code of one of the networks we employed in the iSEG 2017 MICCAI Grand Challenge on infant brain segmentation.
Stars: ✭ 55 (+358.33%)
Mutual labels:  theano
Rnn ctc
Recurrent Neural Network and Long Short-Term Memory (LSTM) with Connectionist Temporal Classification implemented in Theano. Includes a toy training example.
Stars: ✭ 220 (+1733.33%)
Mutual labels:  theano
TextSumma
Reimplementation of "Neural Summarization by Extracting Sentences and Words"
Stars: ✭ 16 (+33.33%)
Mutual labels:  seq2seq
Cnn Text Classification Keras
Text Classification by Convolutional Neural Network in Keras
Stars: ✭ 213 (+1675%)
Mutual labels:  theano
Theano-MPI
MPI-based parallel framework for training deep learning models built in Theano
Stars: ✭ 55 (+358.33%)
Mutual labels:  theano
Sca Cnn.cvpr17
Image Captions Generation with Spatial and Channel-wise Attention
Stars: ✭ 198 (+1550%)
Mutual labels:  theano
Naver-AI-Hackathon-Speech
2019 Clova AI Hackathon: Speech - Rank 12 / Team Kai.Lib
Stars: ✭ 26 (+116.67%)
Mutual labels:  seq2seq
tensorflow-chatbot-chinese
Web chatbot | TensorFlow implementation of a seq2seq model with Bahdanau attention and pretrained Word2Vec embeddings
Stars: ✭ 50 (+316.67%)
Mutual labels:  seq2seq

Sequence-to-Sequence Autoencoder (SA)

Theano implementation of the SA model proposed in "Audio Word2Vec: Unsupervised Learning of Audio Segment Representations using Sequence-to-Sequence Autoencoder," in Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH), 2016.
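
As a rough illustration of the model structure, the following is a minimal sketch of a sequence-to-sequence autoencoder in Theano: an RNN encoder compresses a variable-length sequence of acoustic feature vectors into a single fixed-length vector, and an RNN decoder tries to reconstruct the original sequence from that vector. This is not the code in this repository; the vanilla-RNN cells, the layer sizes (39-dimensional input frames, a 128-dimensional representation), and all names such as shared_glorot, enc_step, dec_step, train_step, and encode are illustrative assumptions.

# Illustrative sketch only -- not the repository's actual implementation.
import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(0)
n_in, n_hid = 39, 128          # assumed: 39-dim MFCC frames, 128-dim representation

def shared_glorot(n_rows, n_cols, name):
    w = rng.uniform(-0.08, 0.08, (n_rows, n_cols)).astype(theano.config.floatX)
    return theano.shared(w, name=name)

# Encoder parameters (vanilla RNN)
W_xh_e = shared_glorot(n_in, n_hid, 'W_xh_e')
W_hh_e = shared_glorot(n_hid, n_hid, 'W_hh_e')
# Decoder parameters (reconstructs the input frame by frame)
W_hh_d = shared_glorot(n_hid, n_hid, 'W_hh_d')
W_hy_d = shared_glorot(n_hid, n_in, 'W_hy_d')
params = [W_xh_e, W_hh_e, W_hh_d, W_hy_d]

x = T.matrix('x')              # (n_frames, n_in): one audio segment

def enc_step(x_t, h_prev):
    # One encoder step: fold the current frame into the hidden state.
    return T.tanh(T.dot(x_t, W_xh_e) + T.dot(h_prev, W_hh_e))

h_enc, _ = theano.scan(enc_step, sequences=x,
                       outputs_info=T.zeros((n_hid,)))
z = h_enc[-1]                  # fixed-length representation ("audio vector")

def dec_step(h_prev):
    # One decoder step: update the hidden state and emit a reconstructed frame.
    h_t = T.tanh(T.dot(h_prev, W_hh_d))
    y_t = T.dot(h_t, W_hy_d)
    return h_t, y_t

(_, y_hat), _ = theano.scan(dec_step, outputs_info=[z, None],
                            n_steps=x.shape[0])

loss = T.mean((y_hat - x) ** 2)                   # reconstruction error
grads = T.grad(loss, params)
updates = [(p, p - 0.01 * g) for p, g in zip(params, grads)]  # plain SGD

train_step = theano.function([x], loss, updates=updates)
encode = theano.function([x], z)                  # segment -> fixed vector

segment = rng.randn(50, n_in).astype(theano.config.floatX)
print(train_step(segment), encode(segment).shape)

After training on many unlabeled segments, the encode function above maps each variable-length segment to a fixed-length vector that can be used as its representation, which is the core idea of the Audio Word2Vec paper.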

If you use the code, please cite the paper as:

@inproceedings{chung2016audio2vec,
  title     = {Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder},
  author    = {Chung, Yu-An and Wu, Chao-Chung and Shen, Chia-Hao and Lee, Hung-Yi and Lee, Lin-Shan},
  booktitle = {INTERSPEECH},
  year      = {2016}
}