YinpeiDai / Seq2Seq-Models

License: MIT
Basic Seq2Seq, Attention, CopyNet

Projects that are alternatives of or similar to Seq2Seq-Models

Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction--Pytorch
A dual-stage attention mechanism model based on a relational news-extraction method, for stock prediction
Stars: ✭ 33 (+73.68%)
Mutual labels:  seq2seq
dts
A Keras library for multi-step time-series forecasting.
Stars: ✭ 130 (+584.21%)
Mutual labels:  seq2seq
torch-asg
Auto Segmentation Criterion (ASG) implemented in pytorch
Stars: ✭ 42 (+121.05%)
Mutual labels:  seq2seq
transformer
Neutron: A pytorch based implementation of Transformer and its variants.
Stars: ✭ 60 (+215.79%)
Mutual labels:  seq2seq
ttslearn
ttslearn: Library for the book Pythonで学ぶ音声合成 (Text-to-Speech with Python)
Stars: ✭ 158 (+731.58%)
Mutual labels:  seq2seq
classy
classy is a simple-to-use library for building high-performance Machine Learning models in NLP.
Stars: ✭ 61 (+221.05%)
Mutual labels:  seq2seq
Video-Cap
🎬 Video Captioning: ICCV '15 paper implementation
Stars: ✭ 44 (+131.58%)
Mutual labels:  seq2seq
TaLKConvolutions
Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020)
Stars: ✭ 26 (+36.84%)
Mutual labels:  seq2seq
MoChA-pytorch
PyTorch Implementation of "Monotonic Chunkwise Attention" (ICLR 2018)
Stars: ✭ 65 (+242.11%)
Mutual labels:  seq2seq
ai-visual-storytelling-seq2seq
Implementation of seq2seq model for Visual Storytelling Challenge (VIST) http://visionandlanguage.net/VIST/index.html
Stars: ✭ 50 (+163.16%)
Mutual labels:  seq2seq
pytorch-transformer-chatbot
A simple chitchat chatbot using the Transformer API introduced in PyTorch v1.2
Stars: ✭ 44 (+131.58%)
Mutual labels:  seq2seq
GAN-RNN Timeseries-imputation
Recurrent GAN for imputation of time series data. Implemented in TensorFlow 2 on Wikipedia Web Traffic Forecast dataset from Kaggle.
Stars: ✭ 107 (+463.16%)
Mutual labels:  seq2seq
neural-chat
An AI chatbot using seq2seq
Stars: ✭ 30 (+57.89%)
Mutual labels:  seq2seq
Neural Conversation Models
Tensorflow based Neural Conversation Models
Stars: ✭ 29 (+52.63%)
Mutual labels:  seq2seq
NeuralTextSimplification
Exploring Neural Text Simplification
Stars: ✭ 64 (+236.84%)
Mutual labels:  seq2seq
Embedding
A summary of embedding model code and study notes
Stars: ✭ 25 (+31.58%)
Mutual labels:  seq2seq
DLCV2018SPRING
Deep Learning for Computer Vision (CommE 5052) in NTU
Stars: ✭ 38 (+100%)
Mutual labels:  seq2seq
2D-LSTM-Seq2Seq
PyTorch implementation of a 2D-LSTM Seq2Seq Model for NMT.
Stars: ✭ 25 (+31.58%)
Mutual labels:  seq2seq
keras seq2seq word level
Implementation of seq2seq word-level model using keras
Stars: ✭ 12 (-36.84%)
Mutual labels:  seq2seq
chatbot
🤖️ A PyTorch-based task-oriented chatbot (supports private and Docker deployment)
Stars: ✭ 77 (+305.26%)
Mutual labels:  seq2seq

Encoder-Decoder models

In this project, the following models are implemented:

  • Basic Seq2Seq (BasicSeq2Seq)
  • Seq2Seq with attention (AttenNet)
  • CopyNet

Requirements

  • Python 3.6
  • TensorFlow 1.8

Usage

python run.py --model=BasicSeq2Seq
python run.py --model=AttenNet
python run.py --model=CopyNet

Experiment

I built a toy task, word reconstruction: e.g. 'cambridge' → Encoder → Decoder → 'cambridge'
Training data: ~20k English words
Testing data: 1K English words
Vocabulary: 'a', 'b', ..., 'w', '<EOS>', '<BOS>', '<UNK>'
OOV tokens: 'x', 'y', 'z' (mapped to '<UNK>')
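
Under this setup, 'x', 'y', and 'z' never enter the vocabulary, so they surface as '<UNK>' in the reference outputs below. A minimal sketch of the preprocessing (hypothetical helper names, not the repository's code):

```python
# Sketch of the toy-task preprocessing: map each character of a word to a
# vocabulary id, sending the OOV letters 'x', 'y', 'z' to <UNK>.
SPECIALS = ["<EOS>", "<BOS>", "<UNK>"]
LETTERS = [chr(c) for c in range(ord("a"), ord("w") + 1)]  # 'a'..'w'
VOCAB = {tok: i for i, tok in enumerate(SPECIALS + LETTERS)}
UNK = VOCAB["<UNK>"]

def encode(word):
    """Turn a word into ids, appending <EOS>; OOV chars become <UNK>."""
    return [VOCAB.get(ch, UNK) for ch in word] + [VOCAB["<EOS>"]]

def decode(ids):
    """Inverse mapping, keeping <UNK> visible for OOV positions."""
    inv = {i: tok for tok, i in VOCAB.items()}
    return [inv[i] for i in ids]

print(decode(encode("dizziness")))
# ['d', 'i', '<UNK>', '<UNK>', 'i', 'n', 'e', 's', 's', '<EOS>']
```

This mapping is why CopyNet can still recover 'x', 'y', 'z': it copies them from the source rather than generating them from the closed vocabulary.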

Results

After around one epoch of training, the results are:

Model      Accuracy
Seq2Seq    0.9544
Attention  0.9966
CopyNet    1.0000
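
The metric is presumably sequence-level exact match (the README does not say, so this is an assumption): a word counts as correct only if every predicted character matches the reference. A small sketch:

```python
def exact_match_accuracy(preds, targets):
    """Fraction of sequences whose predicted tokens all match the reference."""
    hits = sum(p == t for p, t in zip(preds, targets))
    return hits / len(targets)

# One exact match out of two sequences -> 0.5
preds   = [list("circumstances"), list("affirmatise")]
targets = [list("circumstances"), list("affirmative")]
print(exact_match_accuracy(preds, targets))  # 0.5
```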

Test samples

The following samples were tested:
'circumstances', 'affirmative', 'corresponding', 'caraphernology', 'experimentation', 'dizziness', 'harambelover', 'terrifyingly', 'axbycydxexfyzxxy'

Seq2Seq

true output: c i r c u m s t a n c e s 
pred output: c i m p h p s t a n s e s 

true output: a f f i r m a t i v e 
pred output: a p h i m m a t i s e 

true output: c o r r e s p o n d i n g 
pred output: c o r s r s p o n g i n g 

true output: c a r a p h e r n o l o g <UNK>
pred output: c a m u p p e r o o r e n e

true output: e <UNK> p e r i m e n t a t i o n
pred output: e r     p e r i s e t t a s i o n

true output: d i <UNK> <UNK> i n e s s
pred output: d i r     r     i n e s s

true output: h a r a m b e l o v e r
pred output: m a m a b b l m u t e s

true output: t e r r i f <UNK> i n g l <UNK>
pred output: t e r p l p  e    i n e l <UNK>

true output: a <UNK> b <UNK> c <UNK> d <UNK> e <UNK> f <UNK> <UNK> <UNK> <UNK>     
pred output: a   m   b   r   s <UNK> r <UNK> r  r    p  e    r      r      l 

Attention

true output: c i r c u m s t a n c e s 
pred output: c o r c u m s t a n c s s
true output: a f f i r m a t i v e 
pred output: a f f i r m a t i v e 
true output: c o r r e s p o n d i n g 
pred output: c o r r e s p o n d i n g
true output: c a r a p h e r n o l o g <UNK> 
pred output: c o e h p h e r n o l o g <UNK> 
true output: e <UNK> p e r i m e n t a t i o n 
pred output: e e     p e a e m e n t a t i o n
true output: d i <UNK> <UNK> i n e s s 
pred output: d i <UNK> <UNK> i n e s s 
true output: h a r a m b e l o v e r 
pred output: h a r a m b e l o v e r
true output: t e r r i f <UNK> i n g l <UNK>
pred output: t e r r i f <UNK> i n g l <UNK>
true output: a <UNK>   b   <UNK>   c  <UNK> d <UNK> e <UNK> f <UNK> <UNK> <UNK> <UNK> 
pred output: c <UNK> <UNK> <UNK> <UNK> d    d <UNK> e <UNK> f <UNK> <UNK> <UNK> <UNK> 
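
The jump from 0.9544 to 0.9966 comes from the decoder attending over all encoder states at every step instead of relying on a single fixed vector. A minimal NumPy sketch of dot-product attention (illustrative only; the repository's AttenNet is built in TensorFlow 1.8 and may use a different score function):

```python
import numpy as np

def attention_context(dec_state, enc_states):
    """dec_state: (d,); enc_states: (T, d). Returns context (d,) and weights (T,)."""
    scores = enc_states @ dec_state                   # dot-product score per source position
    scores = scores - scores.max()                    # numerical stability before softmax
    weights = np.exp(scores) / np.exp(scores).sum()   # attention distribution over T positions
    context = weights @ enc_states                    # weighted sum of encoder states
    return context, weights

rng = np.random.default_rng(0)
enc = rng.normal(size=(9, 8))          # e.g. 9 characters of 'cambridge', hidden size 8
context, weights = attention_context(enc[4], enc)
print(weights.shape, context.shape)    # (9,) (8,)
```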

CopyNet

true output: c i r c u m s t a n c e s 
pred output: c i r c u m s t a n c e s

true output: a f f i r m a t i v e 
pred output: a f f i r m a t i v e 

true output: c o r r e s p o n d i n g 
pred output: c o r r e s p o n d i n g 

true output: c a r a p h e r n o l o g y
pred output: c a r a p h e r n o l o g y

true output: e x p e r i m e n t a t i o n 
pred output: e x p e r i m e n t a t i o n 

true output: d i z z i n e s s 
pred output: d i z z i n e s s 

true output: h a r a m b e l o v e r
pred output: h a r a m b e l o v e r

true output: t e r r i f y i n g l y 
pred output: t e r r i f y i n g l y 

true output: a x b y c y d x e x f y z x x y 
pred output: a x b y c y d x e y d y z x x y

Almost 100% correct: CopyNet reproduces even the OOV letters 'x', 'y', 'z'.
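
CopyNet's edge on OOV letters comes from mixing a generation distribution over the closed vocabulary with a copy distribution over source positions, so a source character outside the vocabulary can still receive probability mass. A simplified pointer-generator-style sketch (CopyNet itself normalizes generate and copy scores jointly; the fixed 0.5 mixing weight below is purely illustrative):

```python
import numpy as np

def mix_generate_and_copy(p_gen_vocab, copy_scores, source_ids, vocab_size):
    """Blend generation probs over the vocab with copy attention over the
    source: each source position's copy mass is added at its token's id."""
    copy_probs = np.exp(copy_scores - copy_scores.max())
    copy_probs /= copy_probs.sum()                 # softmax over source positions
    p_copy = 0.5                                   # fixed mixing weight, illustration only
    mixed = (1 - p_copy) * p_gen_vocab.copy()
    for pos, tok in enumerate(source_ids):
        mixed[tok] += p_copy * copy_probs[pos]
    return mixed

vocab_size = 26                                    # 23 letters + 3 special tokens
p_gen = np.full(vocab_size, 1 / vocab_size)        # uniform generator for simplicity
source = [3, 23, 8]                                # middle id stands for an OOV character
scores = np.array([0.0, 5.0, 0.0])                 # copy attention favors the OOV char
mixed = mix_generate_and_copy(p_gen, scores, source, vocab_size)
print(mixed.argmax())  # 23: the copied source token dominates
```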
