Dab: Data Augmentation by Backtranslation (DAB) ヽ( •_-)ᕗ
Stars: ✭ 294 (+308.33%)
Mutual labels: attention-is-all-you-need
Attention Is All You Need Keras: A Keras + TensorFlow implementation of the Transformer from "Attention Is All You Need"
Stars: ✭ 628 (+772.22%)
Mutual labels: attention-is-all-you-need
Njunmt Pytorch
Stars: ✭ 79 (+9.72%)
Mutual labels: attention-is-all-you-need
Pytorch Original Transformer: My implementation of the original Transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. IWSLT pretrained models are currently included.
Stars: ✭ 411 (+470.83%)
Mutual labels: attention-is-all-you-need
Attention Is All You Need Pytorch: A PyTorch implementation of the Transformer model in "Attention Is All You Need"
Stars: ✭ 6,070 (+8330.56%)
Mutual labels: attention-is-all-you-need
Witwicky: An implementation of the Transformer in PyTorch.
Stars: ✭ 21 (-70.83%)
Mutual labels: attention-is-all-you-need
BangalASR: Transformer-based Bangla speech recognition
Stars: ✭ 20 (-72.22%)
Mutual labels: attention-is-all-you-need
Pytorch Transformer: A PyTorch implementation of "Attention Is All You Need"
Stars: ✭ 199 (+176.39%)
Mutual labels: attention-is-all-you-need
Awesome Fast Attention: A list of efficient attention modules
Stars: ✭ 627 (+770.83%)
Mutual labels: attention-is-all-you-need
Transformers without Tears: Improving the Normalization of Self-Attention
Stars: ✭ 80 (+11.11%)
Mutual labels: attention-is-all-you-need
Nmt Keras: Neural Machine Translation with Keras
Stars: ✭ 501 (+595.83%)
Mutual labels: attention-is-all-you-need
Speech Transformer: A PyTorch implementation of Speech Transformer, an end-to-end ASR system using the Transformer network on Mandarin Chinese.
Stars: ✭ 565 (+684.72%)
Mutual labels: attention-is-all-you-need
Sockeye: A sequence-to-sequence framework with a focus on Neural Machine Translation, based on Apache MXNet
Stars: ✭ 990 (+1275%)
Mutual labels: attention-is-all-you-need
Transformer: A TensorFlow implementation of the Transformer from "Attention Is All You Need"
Stars: ✭ 3,646 (+4963.89%)
Mutual labels: attention-is-all-you-need
Linear Attention Recurrent Neural Network (LARNN): A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer network. The LARNN cell with attention can easily be used inside a loop on the cell state, just like any other RNN cell.
Stars: ✭ 119 (+65.28%)
Mutual labels: attention-is-all-you-need
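The LARNN entry above describes a cell that attends over a window of its own past cell states. As a rough illustration only (not the repository's actual code, and simplified to a single head), windowed scaled dot-product attention over past states can be sketched in NumPy; the function name `windowed_attention` and all shapes here are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def windowed_attention(query, past_states, window=4):
    """Single-head scaled dot-product attention of `query` over the
    last `window` past states (a toy simplification of the LARNN idea)."""
    keys = past_states[-window:]                       # (w, d)
    scores = keys @ query / np.sqrt(query.shape[-1])   # (w,)
    weights = softmax(scores)                          # attention weights, sum to 1
    return weights @ keys                              # attended summary, shape (d,)

# toy usage: a query of dim 8 attends over 6 past "cell states"
rng = np.random.default_rng(0)
states = rng.standard_normal((6, 8))
q = rng.standard_normal(8)
out = windowed_attention(q, states, window=4)
```

In the full LARNN design the attended summary would be combined with the LSTM's usual gates at each step; this sketch shows only the attention lookup itself.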
Transformer: A PyTorch implementation of "Attention Is All You Need" and "Weighted Transformer Network for Machine Translation"
Stars: ✭ 271 (+276.39%)
Mutual labels: attention-is-all-you-need
Bert language understanding: Pre-training of Deep Bidirectional Transformers for Language Understanding; pre-train TextCNN
Stars: ✭ 933 (+1195.83%)
Mutual labels: attention-is-all-you-need
pytorch-transformer: A PyTorch implementation of the Transformer model from "Attention Is All You Need".
Stars: ✭ 49 (-31.94%)
Mutual labels: attention-is-all-you-need
Kospeech: Open-source toolkit for end-to-end Korean automatic speech recognition.
Stars: ✭ 190 (+163.89%)
Mutual labels: attention-is-all-you-need