
XMUNLP / Xmunmt

License: BSD-3-Clause
An implementation of RNNsearch using TensorFlow

Programming Languages

python

Projects that are alternatives of or similar to Xmunmt

Tf Seq2seq
Sequence to sequence learning using TensorFlow.
Stars: ✭ 387 (+460.87%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence, nmt
RNNSearch
An implementation of attention-based neural machine translation using Pytorch
Stars: ✭ 43 (-37.68%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence, nmt
Neuralmonkey
An open-source tool for sequence learning in NLP built on TensorFlow.
Stars: ✭ 400 (+479.71%)
Mutual labels:  neural-machine-translation, sequence-to-sequence, nmt
Joeynmt
Minimalist NMT for educational purposes
Stars: ✭ 420 (+508.7%)
Mutual labels:  seq2seq, neural-machine-translation, nmt
Nmt Keras
Neural Machine Translation with Keras
Stars: ✭ 501 (+626.09%)
Mutual labels:  neural-machine-translation, sequence-to-sequence, nmt
Nmtpytorch
Sequence-to-Sequence Framework in PyTorch
Stars: ✭ 392 (+468.12%)
Mutual labels:  seq2seq, neural-machine-translation, nmt
Nmt List
A list of Neural MT implementations
Stars: ✭ 359 (+420.29%)
Mutual labels:  neural-machine-translation, sequence-to-sequence, nmt
Openseq2seq
Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
Stars: ✭ 1,378 (+1897.1%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+4853.62%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence
Word-Level-Eng-Mar-NMT
Translating English sentences to Marathi using Neural Machine Translation
Stars: ✭ 37 (-46.38%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence
Nematus
Open-Source Neural Machine Translation in Tensorflow
Stars: ✭ 730 (+957.97%)
Mutual labels:  neural-machine-translation, sequence-to-sequence, nmt
dynmt-py
Neural machine translation implementation using dynet's python bindings
Stars: ✭ 17 (-75.36%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence
Sockeye
Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet
Stars: ✭ 990 (+1334.78%)
Mutual labels:  seq2seq, neural-machine-translation, sequence-to-sequence
classy
classy is a simple-to-use library for building high-performance Machine Learning models in NLP.
Stars: ✭ 61 (-11.59%)
Mutual labels:  seq2seq, sequence-to-sequence
Seq2seq chatbot
A TensorFlow implementation of a simple dialogue system based on the seq2seq model, with embedding, attention, and beam search; the dataset is Cornell Movie Dialogs.
Stars: ✭ 308 (+346.38%)
Mutual labels:  seq2seq, nmt
Pytorch Chatbot
Pytorch seq2seq chatbot
Stars: ✭ 336 (+386.96%)
Mutual labels:  seq2seq, sequence-to-sequence
transformer
Neutron: A pytorch based implementation of Transformer and its variants.
Stars: ✭ 60 (-13.04%)
Mutual labels:  seq2seq, neural-machine-translation
Neural sp
End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (+491.3%)
Mutual labels:  seq2seq, sequence-to-sequence
Rnn Nmt
An encoder-decoder neural machine translation model based on a bidirectional RNN with an attention mechanism.
Stars: ✭ 46 (-33.33%)
Mutual labels:  neural-machine-translation, nmt
Chatlearner
A chatbot implemented in TensorFlow based on the seq2seq model, with certain rules integrated.
Stars: ✭ 528 (+665.22%)
Mutual labels:  sequence-to-sequence, nmt

XMUNMT

An open source Neural Machine Translation toolkit developed by the NLPLAB of Xiamen University.

Features

  • Multi-GPU support
  • Built-in validation functionality

Tutorial

This tutorial describes how to train an NMT model on WMT17's EN-DE data using this repository.

Prerequisite

You must install TensorFlow (>=1.4.0) first to use this library.
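You can check which version is installed with:
python -c "import tensorflow as tf; print(tf.__version__)"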

Download Data

The preprocessed data can be found here.

Data Preprocessing

  1. Byte Pair Encoding
  • The most common approach to achieving an open vocabulary is Byte Pair Encoding (BPE). The BPE code can be found here.
  • To encode the training corpora with BPE, you first need to learn the BPE operations. The following command creates a file named "bpe32k", which contains 32k BPE operations, along with two vocabulary files named "vocab.en" and "vocab.de". A toy sketch of the BPE merge procedure is included after the commands below.
python subword-nmt/learn_joint_bpe_and_vocab.py --input corpus.tc.en corpus.tc.de -s 32000 -o bpe32k --write-vocabulary vocab.en vocab.de
  • Next, encode the training corpora, the validation set, and the test set using the learned BPE operations and vocabularies.
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.en --vocabulary-threshold 50 < corpus.tc.en > corpus.bpe32k.en
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.de --vocabulary-threshold 50 < corpus.tc.de > corpus.bpe32k.de
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.en --vocabulary-threshold 50 < newstest2016.tc.en > newstest2016.bpe32k.en
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.de --vocabulary-threshold 50 < newstest2016.tc.de > newstest2016.bpe32k.de
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.en --vocabulary-threshold 50 < newstest2017.tc.en > newstest2017.bpe32k.en
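  • To make the BPE idea more concrete, here is a toy, illustrative sketch of the merge procedure (the subword-nmt scripts above are the real implementation): starting from words split into characters, the most frequent adjacent symbol pair is merged repeatedly, and each merge becomes one BPE operation.
# Toy illustration of BPE merge learning (illustrative only, not subword-nmt).
# Words are written as space-separated symbols with an end-of-word marker </w>.
import collections
import re

def get_pair_counts(vocab):
    """Count adjacent symbol pairs over a {word: frequency} vocabulary."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Merge every free-standing occurrence of `pair` into a single symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

vocab = {"l o w </w>": 5, "l o w e r </w>": 2,
         "n e w e s t </w>": 6, "w i d e s t </w>": 3}
for _ in range(10):                      # learn 10 merge operations
    pairs = get_pair_counts(vocab)
    best = max(pairs, key=pairs.get)     # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    print(best)                          # e.g. ('e', 's'), ('es', 't'), ...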
  2. Environment Variables
  • Before using XMUNMT, you need to add the path of XMUNMT to the PYTHONPATH environment variable. Typically, this can be done by adding the following line to the .bashrc file in your home directory.
export PYTHONPATH=/PATH/TO/XMUNMT:$PYTHONPATH
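  • To check the setup, open a new shell and make sure the package can be imported, e.g. with the command below (this assumes /PATH/TO/XMUNMT is the repository root that contains the xmunmt package).
python -c "import xmunmt"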
  3. Build vocabulary
  • To train an NMT model, you need to build the vocabularies first. To build a shared source and target vocabulary, you can use the following commands; a short sketch of what this step does is included after them.
cat corpus.bpe32k.en corpus.bpe32k.de > corpus.bpe32k.all
python XMUNMT/xmunmt/scripts/build_vocab.py corpus.bpe32k.all vocab.shared32k.txt
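  • Conceptually, the vocabulary-building step counts token frequencies in the combined corpus and writes the tokens out, most frequent first. The sketch below illustrates that idea only; the output name vocab.sketch.txt is made up, and the repository's build_vocab.py remains the script to use (it may, for instance, also add special symbols).
# Illustrative sketch of vocabulary building: count token frequencies
# and write one token per line, most frequent first. Not the project's script.
import collections

counter = collections.Counter()
with open("corpus.bpe32k.all", encoding="utf-8") as f:
    for line in f:
        counter.update(line.split())

with open("vocab.sketch.txt", "w", encoding="utf-8") as out:  # hypothetical file name
    for token, _freq in counter.most_common():
        out.write(token + "\n")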
  4. Shuffle corpus
  • It is beneficial to shuffle the training corpora before training. The source and target files must be shuffled with the same permutation so that parallel lines stay aligned; a short sketch of this idea is included after the commands below.
python XMUNMT/xmunmt/scripts/shuffle_corpus.py --corpus corpus.bpe32k.en corpus.bpe32k.de --seed 1234
  • The above command will create two new files named "corpus.bpe32k.en.shuf" and "corpus.bpe32k.de.shuf".
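  • The sketch below shows the idea: both files are shuffled with one shared random permutation, which is what keeps the parallel lines aligned (shuffle_corpus.py above is the script to actually use).
# Illustrative sketch: shuffle a parallel corpus while keeping
# source and target lines aligned. Not the repository's script.
import random

random.seed(1234)
with open("corpus.bpe32k.en", encoding="utf-8") as f_src, \
     open("corpus.bpe32k.de", encoding="utf-8") as f_tgt:
    pairs = list(zip(f_src, f_tgt))        # pair up aligned lines

random.shuffle(pairs)                      # one permutation for both sides

with open("corpus.bpe32k.en.shuf", "w", encoding="utf-8") as f_src, \
     open("corpus.bpe32k.de.shuf", "w", encoding="utf-8") as f_tgt:
    for src_line, tgt_line in pairs:
        f_src.write(src_line)
        f_tgt.write(tgt_line)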

Training

  • Finally, we can start the training stage. The recommended hyper-parameters are described below.
python XMUNMT/xmunmt/bin/trainer.py \
  --model rnnsearch \
  --output train \
  --input corpus.bpe32k.en.shuf corpus.bpe32k.de.shuf \
  --vocabulary vocab.shared32k.txt vocab.shared32k.txt \
  --validation newstest2016.bpe32k.en \
  --references newstest2016.bpe32k.de \
  --parameters=device_list=[0],eval_steps=5000,train_steps=75000,learning_rate_decay=piecewise_constant,learning_rate_values=[5e-4,25e-5,125e-6],learning_rate_boundaries=[25000,50000]
  • Change the "device_list" argument to select a different GPU or to use multiple GPUs (e.g. device_list=[0,1] for the first two GPUs). The above command will create a directory named "train". The best model can be found in "train/eval".

Decoding

  • The decoding command is quite simple.
python XMUNMT/xmunmt/bin/translator.py \
  --models rnnsearch \
  --checkpoints train/eval \
  --input newstest2017.bpe32k.en \
  --output test.txt \
  --vocabulary vocab.shared32k.txt vocab.shared32k.txt
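  • Note that the decoder output in test.txt is still BPE-segmented. Before computing BLEU, the "@@ " continuation markers introduced by subword-nmt should be removed, as in the sketch below (the output name test.tok.de is only illustrative); the result can then be scored against the tokenized reference, e.g. newstest2017.tc.de, with a tool such as Moses' multi-bleu.perl.
# Illustrative sketch: strip subword-nmt's "@@ " continuation markers
# from the decoder output so BLEU can be computed on word-level tokens.
import re

with open("test.txt", encoding="utf-8") as f_in, \
     open("test.tok.de", "w", encoding="utf-8") as f_out:  # hypothetical output name
    for line in f_in:
        f_out.write(re.sub(r"(@@ )|(@@ ?$)", "", line))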

Benchmark

The benchmark was performed on a single GTX 1080 Ti GPU with the default parameters.

Dataset        BLEU (uncased)   BLEU (cased)
WMT17 En-De    22.81            22.30
WMT17 De-En    29.01            27.69
  • More benchmarks will be added soon.

Contact

This code was written by Zhixing Tan. If you run into any problems, feel free to send an email.

LICENSE

BSD
