jayparks / Tf Seq2seq

Sequence to sequence learning using TensorFlow.

Projects that are alternatives of or similar to Tf Seq2seq

Xmunmt
An implementation of RNNsearch using TensorFlow
Stars: ✭ 69 (-82.17%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence, nmt
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+783.2%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence, encoder-decoder
RNNSearch
An implementation of attention-based neural machine translation using PyTorch
Stars: ✭ 43 (-88.89%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence, nmt
Text summarization with tensorflow
Implementation of a seq2seq model for summarization of textual data. Demonstrated on Amazon reviews, GitHub issues, and news articles.
Stars: ✭ 226 (-41.6%)
Mutual labels:  natural-language-processing, seq2seq, sequence-to-sequence, encoder-decoder
Sockeye
Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet
Stars: ✭ 990 (+155.81%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence, encoder-decoder
Neuralmonkey
An open-source tool for sequence learning in NLP built on TensorFlow.
Stars: ✭ 400 (+3.36%)
Mutual labels:  neural-machine-translation, sequence-to-sequence, nmt, encoder-decoder
Openseq2seq
Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
Stars: ✭ 1,378 (+256.07%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence
Nmt List
A list of Neural MT implementations
Stars: ✭ 359 (-7.24%)
Mutual labels:  neural-machine-translation, sequence-to-sequence, nmt
Speech recognition with tensorflow
Implementation of a seq2seq model for Speech Recognition using the latest version of TensorFlow. Architecture similar to Listen, Attend and Spell.
Stars: ✭ 253 (-34.63%)
Mutual labels:  seq2seq, sequence-to-sequence, encoder-decoder
Chatlearner
A chatbot implemented in TensorFlow based on the seq2seq model, with certain rules integrated.
Stars: ✭ 528 (+36.43%)
Mutual labels:  beam-search, sequence-to-sequence, nmt
Nmtpytorch
Sequence-to-Sequence Framework in PyTorch
Stars: ✭ 392 (+1.29%)
Mutual labels:  seq2seq, neural-machine-translation, nmt
Transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Stars: ✭ 55,742 (+14303.62%)
Mutual labels:  natural-language-processing, natural-language-understanding, seq2seq
Word-Level-Eng-Mar-NMT
Translating English sentences to Marathi using Neural Machine Translation
Stars: ✭ 37 (-90.44%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence
Joeynmt
Minimalist NMT for educational purposes
Stars: ✭ 420 (+8.53%)
Mutual labels:  seq2seq, neural-machine-translation, nmt
minimal-nmt
A minimal NMT example to serve as a seq2seq+attention reference.
Stars: ✭ 36 (-90.7%)
Mutual labels:  seq2seq, beam-search, neural-machine-translation
Seq2seq chatbot
A simple TensorFlow implementation of a seq2seq-based dialogue system, with embedding, attention, beam_search, and other features; the dataset is Cornell Movie Dialogs.
Stars: ✭ 308 (-20.41%)
Mutual labels:  beam-search, seq2seq, nmt
classy
classy is a simple-to-use library for building high-performance Machine Learning models in NLP.
Stars: ✭ 61 (-84.24%)
Mutual labels:  seq2seq, sequence-to-sequence, natural-language-understanding
Nmt Keras
Neural Machine Translation with Keras
Stars: ✭ 501 (+29.46%)
Mutual labels:  neural-machine-translation, sequence-to-sequence, nmt
Nematus
Open-Source Neural Machine Translation in Tensorflow
Stars: ✭ 730 (+88.63%)
Mutual labels:  neural-machine-translation, sequence-to-sequence, nmt
dynmt-py
Neural machine translation implementation using dynet's python bindings
Stars: ✭ 17 (-95.61%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence

TF-seq2seq

Sequence-to-sequence (seq2seq) learning using TensorFlow.

The core building blocks are RNN encoder-decoder architectures and the attention mechanism.

The package was implemented largely with the latest (1.2) tf.contrib.seq2seq modules; a sketch of how they fit together follows the list.

  • AttentionWrapper
  • Decoder
  • BasicDecoder
  • BeamSearchDecoder

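As a rough sketch of how these modules compose at training time (the sizes, placeholder inputs, and tensor names below are illustrative assumptions, not the package's actual code):

import tensorflow as tf
from tensorflow.contrib import rnn, seq2seq
from tensorflow.python.layers.core import Dense

# Hypothetical sizes for illustration; the package's defaults may differ.
num_units, embedding_size, vocab_size, batch_size = 1024, 500, 30000, 32

# Stand-ins for tensors normally produced by the encoder / input pipeline.
encoder_outputs = tf.placeholder(tf.float32, [batch_size, None, num_units])
source_lengths = tf.placeholder(tf.int32, [batch_size])
decoder_inputs_embedded = tf.placeholder(tf.float32,
                                         [batch_size, None, embedding_size])
target_lengths = tf.placeholder(tf.int32, [batch_size])

# The attention mechanism scores encoder outputs against the decoder state.
attention = seq2seq.LuongAttention(num_units, encoder_outputs,
                                   memory_sequence_length=source_lengths)

# AttentionWrapper turns a plain RNN cell into an attentional decoder cell.
decoder_cell = seq2seq.AttentionWrapper(rnn.LSTMCell(num_units), attention,
                                        attention_layer_size=num_units)

# TrainingHelper feeds the ground-truth target embedding at each time step.
helper = seq2seq.TrainingHelper(decoder_inputs_embedded, target_lengths)

decoder = seq2seq.BasicDecoder(
    decoder_cell, helper,
    initial_state=decoder_cell.zero_state(batch_size, tf.float32),
    output_layer=Dense(vocab_size))

# dynamic_decode unrolls the decoder; rnn_output holds per-step logits.
outputs = seq2seq.dynamic_decode(decoder)[0]
logits = outputs.rnn_output  # [batch, tgt_len, vocab_size]
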
The package supports the following features (a cell-construction sketch follows the list):

  • Multi-layer GRU/LSTM
  • Residual connections
  • Dropout
  • Attention and input_feeding
  • Beam search decoding
  • Writing n-best lists

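The multi-layer, residual, and dropout options correspond to standard tf.contrib.rnn cell wrappers; a minimal sketch of the usual composition (the defaults mirror the argument list below, but the wiring is an assumption, not the package's exact code):

import tensorflow as tf
from tensorflow.contrib import rnn

def build_cell(num_units=1024, depth=2, dropout_rate=0.3, use_residual=True):
    """Stack RNN layers with optional dropout and residual connections."""
    layers = []
    for i in range(depth):
        cell = rnn.LSTMCell(num_units)  # or rnn.GRUCell(num_units)
        # DropoutWrapper applies dropout to each cell's output.
        cell = rnn.DropoutWrapper(cell, output_keep_prob=1.0 - dropout_rate)
        # ResidualWrapper adds the cell input to its output; skip the first
        # layer, whose input width (the embedding size) may differ.
        if use_residual and i > 0:
            cell = rnn.ResidualWrapper(cell)
        layers.append(cell)
    return rnn.MultiRNNCell(layers)
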
Dependencies

  • NumPy >= 1.11.1
  • TensorFlow >= 1.2

History

  • June 5, 2017: Major update
  • June 6, 2017: Supports batch beam search decoding
  • June 11, 2017: Separated training / decoding
  • June 22, 2017: Supports TF 1.2 (contrib.rnn -> python.ops.rnn_cell)

Usage Instructions

Data Preparation

To preprocess the raw parallel data sample_data.src and sample_data.trg, simply run:

cd data/
./preprocess.sh src trg sample_data ${max_seq_len}

Running the above script performs the widely used preprocessing steps for machine translation (MT); a plain-Python sketch of the final three steps follows the list.

  • Normalizing punctuation
  • Tokenizing
  • Byte-pair encoding (30,000 merge operations; Sennrich et al., 2016)
  • Removing sequence pairs longer than ${max_seq_len}
  • Shuffling
  • Building dictionaries
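
The final three steps are simple enough to sketch in plain Python. This is an illustration of what cleaning, shuffling, and dictionary building typically do, not the actual preprocess.sh internals (the earlier steps delegate to moses and subword-nmt scripts):

import random
from collections import Counter

def clean_shuffle_build_vocab(src_path, trg_path, max_seq_len,
                              vocab_size=30000):
    """Drop over-length pairs, shuffle, and build a frequency-sorted vocab."""
    with open(src_path) as fs, open(trg_path) as ft:
        pairs = [(s.split(), t.split()) for s, t in zip(fs, ft)]

    # Cleaning: keep only pairs where both sides fit within max_seq_len.
    pairs = [(s, t) for s, t in pairs
             if len(s) <= max_seq_len and len(t) <= max_seq_len]

    # Shuffling, so minibatches are not biased by corpus order.
    random.shuffle(pairs)

    # Dictionary: the most frequent source tokens, capped at vocab_size.
    counts = Counter(tok for s, _ in pairs for tok in s)
    vocab = {tok: i for i, (tok, _) in
             enumerate(counts.most_common(vocab_size))}
    return pairs, vocab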

Training

To train a seq2seq model,

$ python train.py   --cell_type 'lstm' \
                    --attention_type 'luong' \
                    --hidden_units 1024 \
                    --depth 2 \
                    --embedding_size 500 \
                    --num_encoder_symbols 30000 \
                    --num_decoder_symbols 30000 ...
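
The objective behind this command is presumably the standard masked cross-entropy over target tokens; a minimal sketch with tf.contrib.seq2seq.sequence_loss (the shapes and names are assumptions):

import tensorflow as tf
from tensorflow.contrib import seq2seq

# Assumed inputs: decoder logits, integer targets, and true target lengths.
logits = tf.placeholder(tf.float32, [None, None, 30000])
targets = tf.placeholder(tf.int32, [None, None])
target_lengths = tf.placeholder(tf.int32, [None])

# Mask padding positions so they contribute nothing to the loss.
weights = tf.sequence_mask(target_lengths, maxlen=tf.shape(targets)[1],
                           dtype=tf.float32)

# Cross-entropy averaged over the unmasked target tokens.
loss = seq2seq.sequence_loss(logits, targets, weights)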

Decoding

To run the trained model for decoding,

$ python decode.py  --beam_width 5 \
                    --decode_batch_size 30 \
                    --model_path $PATH_TO_A_MODEL_CHECKPOINT (e.g. model/translate.ckpt-100) \
                    --max_decode_step 300 \
                    --write_n_best False \
                    --decode_input $PATH_TO_DECODE_INPUT \
                    --decode_output $PATH_TO_DECODE_OUTPUT

If --beam_width=1, greedy decoding is performed at each time-step.
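
Internally, such a switch is typically made at graph-construction time; a sketch with tf.contrib.seq2seq (the decoder cell, embedding matrix, initial state, and output layer are assumed to come from the training graph, and the scalar settings are illustrative, not the project's exact decode.py):

import tensorflow as tf
from tensorflow.contrib import seq2seq

# Illustrative settings matching the command above.
beam_width, batch_size, max_decode_step = 5, 30, 300
start_token, end_token = 1, 2  # assumed special-token ids

if beam_width > 1:
    # Beam search: the encoder memory, its lengths, and the initial state
    # must first be tiled beam_width times with seq2seq.tile_batch.
    decoder = seq2seq.BeamSearchDecoder(
        cell=decoder_cell,
        embedding=embedding_matrix,
        start_tokens=tf.fill([batch_size], start_token),
        end_token=end_token,
        initial_state=initial_state,
        beam_width=beam_width,
        output_layer=output_layer)
else:
    # beam_width == 1: greedy argmax decoding at each time step.
    helper = seq2seq.GreedyEmbeddingHelper(
        embedding_matrix, tf.fill([batch_size], start_token), end_token)
    decoder = seq2seq.BasicDecoder(decoder_cell, helper, initial_state,
                                   output_layer=output_layer)

outputs = seq2seq.dynamic_decode(decoder,
                                 maximum_iterations=max_decode_step)[0]
# With beam search, outputs.predicted_ids has shape
# [batch, time, beam_width], which is what an n-best writer can serialize.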

Arguments

Data params

  • --source_vocabulary : Path to source vocabulary
  • --target_vocabulary : Path to target vocabulary
  • --source_train_data : Path to source training data
  • --target_train_data : Path to target training data
  • --source_valid_data : Path to source validation data
  • --target_valid_data : Path to target validation data

Network params

  • --cell_type : RNN cell to use for encoder and decoder (default: lstm)
  • --attention_type : Attention mechanism (bahdanau, luong), (default: bahdanau)
  • --hidden_units : Number of hidden units in each RNN layer (1024 in the training example above)
  • --depth : Number of stacked RNN layers in the encoder and decoder (default: 2)
  • --embedding_size : Embedding dimensions of encoder and decoder inputs (default: 500)
  • --num_encoder_symbols : Source vocabulary size to use (default: 30000)
  • --num_decoder_symbols : Target vocabulary size to use (default: 30000)
  • --use_residual : Use residual connection between layers (default: True)
  • --attn_input_feeding : Use input feeding method in attentional decoder (Luong et al., 2015) (default: True)
  • --use_dropout : Use dropout in rnn cell output (default: True)
  • --dropout_rate : Dropout probability for cell outputs (0.0: no dropout) (default: 0.3)

Training params

  • --learning_rate : Initial learning rate (default: 0.0002)
  • --max_gradient_norm : Clip gradients to this norm (default: 1.0; see the sketch after this list)
  • --batch_size : Batch size
  • --max_epochs : Maximum number of training epochs
  • --max_load_batches : Maximum number of batches to prefetch at one time
  • --max_seq_length : Maximum sequence length
  • --display_freq : Display training status every this many iterations
  • --save_freq : Save a model checkpoint every this many iterations
  • --valid_freq : Evaluate the model every this many iterations (valid_data needed)
  • --optimizer : Optimizer for training: (adadelta, adam, rmsprop) (default: adam)
  • --model_dir : Path to save model checkpoints
  • --model_name : File name used for model checkpoints
  • --shuffle_each_epoch : Shuffle training dataset for each epoch (default: True)
  • --sort_by_length : Sort pre-fetched minibatches by their target sequence lengths (default: True)
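
The --max_gradient_norm and --optimizer settings correspond to the usual clipped-gradient update; a minimal sketch, assuming the loss tensor from the training-objective sketch above and the defaults listed here:

import tensorflow as tf

# Assumed: `loss` as built in the sequence_loss sketch above.
optimizer = tf.train.AdamOptimizer(learning_rate=0.0002)

# Clip the global gradient norm (default 1.0) to keep RNN training stable,
# then apply the clipped update.
params = tf.trainable_variables()
gradients = tf.gradients(loss, params)
clipped, _ = tf.clip_by_global_norm(gradients, 1.0)
train_op = optimizer.apply_gradients(zip(clipped, params))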

Decoding params

  • --beam_width : Beam width used in beam search (default: 1)
  • --decode_batch_size : Batch size used in decoding
  • --max_decode_step : Maximum time step limit in decoding (default: 500)
  • --write_n_best : Write the beam search n-best list (n = beam_width) (default: False)
  • --decode_input : Input file path to decode
  • --decode_output : Output file path of decoding output

Runtime params

  • --allow_soft_placement : Allow device soft placement (see the session-config sketch after this list)
  • --log_device_placement : Log placement of ops on devices
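
Both flags map directly onto TensorFlow's session configuration; a minimal sketch:

import tensorflow as tf

# allow_soft_placement lets TensorFlow fall back to another device when the
# requested one is unavailable; log_device_placement prints op placements.
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=False)
sess = tf.Session(config=config)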

Acknowledgements

The implementation is based on the following projects:

  • nematus: Theano implementation of Neural Machine Translation; the major reference for this project
  • subword-nmt: Its subword-unit scripts are included to preprocess input data
  • moses: Its preprocessing scripts are included to preprocess input data
  • tf.seq2seq_legacy: Legacy TensorFlow seq2seq tutorial
  • tf_tutorial_plus: Nice tutorials for the tf.contrib.seq2seq API

For any comments and feedback, please email me at [email protected] or open an issue here.
