
904 open source projects that are alternatives to or similar to Nmt Keras

Sockeye
Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet
Stars: ✭ 990 (+97.6%)
Joeynmt
Minimalist NMT for educational purposes
Stars: ✭ 420 (-16.17%)
Nematus
Open-Source Neural Machine Translation in TensorFlow
Stars: ✭ 730 (+45.71%)
Nmt List
A list of Neural MT implementations
Stars: ✭ 359 (-28.34%)
Neuralmonkey
An open-source tool for sequence learning in NLP built on TensorFlow.
Stars: ✭ 400 (-20.16%)
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+582.24%)
parallel-corpora-tools
Tools for filtering and cleaning parallel and monolingual corpora for machine translation and other natural language processing tasks.
Stars: ✭ 35 (-93.01%)
Transformers without tears
Transformers without Tears: Improving the Normalization of Self-Attention
Stars: ✭ 80 (-84.03%)
Tf Seq2seq
Sequence to sequence learning using TensorFlow.
Stars: ✭ 387 (-22.75%)
NiuTrans.NMT
A fast neural machine translation system. It is developed in C++ and relies on NiuTensor for fast tensor APIs.
Stars: ✭ 112 (-77.64%)
transformer
Neutron: a PyTorch-based implementation of the Transformer and its variants.
Stars: ✭ 60 (-88.02%)
Pytorch Original Transformer
My implementation of the original Transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Currently includes IWSLT pretrained models.
Stars: ✭ 411 (-17.96%)
Njunmt Pytorch
Stars: ✭ 79 (-84.23%)
pynmt
A simple and complete PyTorch implementation of a neural machine translation system
Stars: ✭ 13 (-97.41%)
Mutual labels:  transformer, nmt, attention-mechanism
Transformer
A TensorFlow Implementation of the Transformer: Attention Is All You Need
Stars: ✭ 3,646 (+627.74%)
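
An aside for readers skimming the many Transformer ports in this list: the core they all share is the scaled dot-product attention from "Attention Is All You Need". A minimal PyTorch sketch of that formula (illustrative only; not code from any repository listed here):

import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # query-key similarity
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))  # block disallowed positions
    weights = torch.softmax(scores, dim=-1)  # attention distribution over keys
    return weights @ v  # weighted sum of values

Each full implementation wraps this kernel in multi-head projections, positional encodings, and encoder/decoder stacks.
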
dynmt-py
Neural machine translation implementation using dynet's python bindings
Stars: ✭ 17 (-96.61%)
Witwicky
Witwicky: An implementation of Transformer in PyTorch.
Stars: ✭ 21 (-95.81%)
Transformer
A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation"
Stars: ✭ 271 (-45.91%)
RNNSearch
An implementation of attention-based neural machine translation using PyTorch
Stars: ✭ 43 (-91.42%)
Xmunmt
An implementation of RNNsearch using TensorFlow
Stars: ✭ 69 (-86.23%)
Neural sp
End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (-18.56%)
Njunmt Tf
An open-source neural machine translation system developed by Natural Language Processing Group, Nanjing University.
Stars: ✭ 97 (-80.64%)
Subword Nmt
Unsupervised Word Segmentation for Neural Machine Translation and Text Generation
Stars: ✭ 1,819 (+263.07%)
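
The segmentation technique behind Subword-NMT is byte-pair encoding (BPE): repeatedly merge the most frequent adjacent symbol pair into a new subword unit. The learning loop from Sennrich et al.'s paper fits in a short Python sketch (toy vocabulary; names are illustrative, not the package's actual API):

import re
from collections import Counter

def pair_stats(vocab):
    # count adjacent symbol pairs, weighted by word frequency
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    # fuse every standalone occurrence of the pair into a single new symbol
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# toy vocabulary: words pre-split into characters, with an end-of-word marker
vocab = {"l o w </w>": 5, "l o w e r </w>": 2,
         "n e w e s t </w>": 6, "w i d e s t </w>": 3}
for _ in range(10):                   # learn 10 merge operations
    stats = pair_stats(vocab)
    best = max(stats, key=stats.get)  # most frequent pair wins
    vocab = merge_pair(best, vocab)
    print(best)

The recorded merge operations are then replayed, in order, to segment unseen text into subword units.
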
Npmt
Towards Neural Phrase-based Machine Translation
Stars: ✭ 175 (-65.07%)
Nmtpy
nmtpy is a Python framework, based on dl4mt-tutorial, for experimenting with neural machine translation pipelines.
Stars: ✭ 127 (-74.65%)
Linear Attention Recurrent Neural Network
A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can easily be used inside a loop on the cell state, just like any other RNN; a simplified sketch follows this entry. (LARNN)
Stars: ✭ 119 (-76.25%)
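
A much-simplified PyTorch sketch of the LARNN idea, assuming single-head dot-product attention in place of the paper's windowed multi-head variant; all names here are mine, not the repository's:

import torch
import torch.nn as nn

class WindowedAttentionCell(nn.Module):
    # simplified LARNN-style cell: an LSTM cell that attends over a
    # window of its own past cell states (single-head, for brevity)
    def __init__(self, input_size, hidden_size, window=8):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.window = window
        self.scale = hidden_size ** -0.5

    def forward(self, x, state, past_cells):
        h, c = self.cell(x, state)
        if past_cells:
            mem = torch.stack(past_cells[-self.window:], dim=1)       # (B, W, H)
            scores = (mem @ h.unsqueeze(-1)).squeeze(-1) * self.scale # (B, W)
            context = (torch.softmax(scores, -1).unsqueeze(-1) * mem).sum(1)
            c = c + context  # inject attended past cell states
        past_cells.append(c)
        return h, c

# usage inside an ordinary time loop, like any other RNN cell
cell = WindowedAttentionCell(16, 32)
h = c = torch.zeros(4, 32)
past = []
for t in range(10):
    h, c = cell(torch.randn(4, 16), (h, c), past)
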
Eeg Dl
A deep learning library for EEG task (signal) classification, based on TensorFlow.
Stars: ✭ 165 (-67.07%)
Mutual labels:  attention-mechanism, gru, transformer
Neural-Machine-Translation
Several basic neural machine translation models implemented in PyTorch & TensorFlow
Stars: ✭ 29 (-94.21%)
TS3000 TheChatBOT
It's a social networking chatbot trained on a Reddit dataset. It supports open-ended queries and is built on the concept of neural machine translation. Beware of it being sarcastic, just like its creator 😝 By the way, it uses the PyTorch framework and Python 3.
Stars: ✭ 20 (-96.01%)
SequenceToSequence
A seq2seq-with-attention dialogue/MT model implemented in TensorFlow.
Stars: ✭ 11 (-97.8%)
Nmtpytorch
Sequence-to-Sequence Framework in PyTorch
Stars: ✭ 392 (-21.76%)
Mutual labels:  neural-machine-translation, nmt
kospeech
Open-Source Toolkit for End-to-End Korean Automatic Speech Recognition leveraging PyTorch and Hydra.
Stars: ✭ 456 (-8.98%)
Word-Level-Eng-Mar-NMT
Translating English sentences to Marathi using Neural Machine Translation
Stars: ✭ 37 (-92.61%)
h-transformer-1d
Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Stars: ✭ 121 (-75.85%)
Mutual labels:  transformer, attention-mechanism
learningspoons
NLP lecture notes and source code
Stars: ✭ 29 (-94.21%)
Mutual labels:  transformer, attention-model
En-transformer
Implementation of E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention
Stars: ✭ 131 (-73.85%)
Mutual labels:  transformer, attention-mechanism
Machine-Translation-Hindi-to-english-
Machine translation is the task of converting text from one language to another. Unlike traditional phrase-based translation systems, which consist of many small sub-components that are tuned separately, neural machine translation attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation; a toy sketch follows this entry.
Stars: ✭ 19 (-96.21%)
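
A toy PyTorch sketch of that single-network, end-to-end idea (an encoder-decoder without attention; hypothetical names, purely illustrative):

import torch
import torch.nn as nn

class TinyNMT(nn.Module):
    # minimal encoder-decoder: one network reads the source sentence
    # and emits the target sentence, trained end to end
    def __init__(self, src_vocab, tgt_vocab, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.src_emb(src))     # summarize the source
        dec, _ = self.decoder(self.tgt_emb(tgt_in), state)
        return self.out(dec)                           # next-token logits

model = TinyNMT(src_vocab=1000, tgt_vocab=1000)
logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 9)))
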
Hierarchical-attention-network
My implementation of "Hierarchical Attention Networks for Document Classification" in Keras
Stars: ✭ 26 (-94.81%)
Mutual labels:  gru, attention-mechanism
Compact-Global-Descriptor
PyTorch implementation of "Compact Global Descriptor for Neural Networks" (CGD).
Stars: ✭ 22 (-95.61%)
Transformer-Transducer
PyTorch implementation of "Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss" (ICASSP 2020)
Stars: ✭ 61 (-87.82%)
dodrio
Exploring attention weights in transformer-based models with linguistic knowledge.
Stars: ✭ 233 (-53.49%)
Mutual labels:  transformer, attention-mechanism
NLP-paper
🎨🎨 An NLP (natural language processing) tutorial 🎨🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-95.41%)
Mutual labels:  transformer, attention-mechanism
Quality-Estimation1
Machine translation subtask: translation quality estimation. Reproduces the results of the Alibaba WMT2018 paper.
Stars: ✭ 19 (-96.21%)
Mutual labels:  transformer, nmt
zero
Zero -- A neural machine translation system
Stars: ✭ 121 (-75.85%)
A-Persona-Based-Neural-Conversation-Model
Stars: ✭ 22 (-95.61%)
transformer
A PyTorch Implementation of "Attention Is All You Need"
Stars: ✭ 28 (-94.41%)
Transformer Tts
A PyTorch implementation of "Neural Speech Synthesis with Transformer Network"
Stars: ✭ 418 (-16.57%)
Mutual labels:  attention-mechanism, transformer
transformer
A simple TensorFlow implementation of the Transformer
Stars: ✭ 25 (-95.01%)
CrabNet
Predict materials properties using only the composition information!
Stars: ✭ 57 (-88.62%)
Mutual labels:  transformer, attention-mechanism
minimal-nmt
A minimal NMT example to serve as a seq2seq+attention reference.
Stars: ✭ 36 (-92.81%)
Structured Self Attention
A Structured Self-attentive Sentence Embedding
Stars: ✭ 459 (-8.38%)
OverlapPredator
[CVPR 2021, Oral] PREDATOR: Registration of 3D Point Clouds with Low Overlap.
Stars: ✭ 293 (-41.52%)
Mutual labels:  transformer, attention-mechanism
transformer-slt
Sign Language Translation with Transformers (COLING'2020, ECCV'20 SLRTP Workshop)
Stars: ✭ 92 (-81.64%)
FragmentVC
Any-to-any voice conversion by end-to-end extracting and fusing fine-grained voice fragments with attention
Stars: ✭ 134 (-73.25%)
Mutual labels:  transformer, attention-mechanism
visualization
A collection of visualization functions
Stars: ✭ 189 (-62.28%)
Mutual labels:  transformer, attention-mechanism
enformer-pytorch
Implementation of Enformer, DeepMind's attention network for predicting gene expression, in PyTorch
Stars: ✭ 146 (-70.86%)
Mutual labels:  transformer, attention-mechanism
attention-mechanism-keras
An attention mechanism in Keras, usable like Dense, RNN, and other layers
Stars: ✭ 19 (-96.21%)
Transformer-in-Transformer
An Implementation of Transformer in Transformer in TensorFlow for image classification, attention inside local patches
Stars: ✭ 40 (-92.02%)
Mutual labels:  transformer, attention-mechanism
theano-recurrence
Recurrent Neural Networks (RNN, GRU, LSTM) and their Bidirectional versions (BiRNN, BiGRU, BiLSTM) for word & character level language modelling in Theano
Stars: ✭ 40 (-92.02%)
Mutual labels:  theano, gru
1-60 of 904 similar projects