Sockeye: Sequence-to-sequence framework with a focus on Neural Machine Translation, based on Apache MXNet
Stars: ✭ 990 (+265.31%)
Nmt Keras: Neural Machine Translation with Keras
Stars: ✭ 501 (+84.87%)
Pytorch Original Transformer: My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Pretrained IWSLT models are currently included.
Stars: ✭ 411 (+51.66%)
SequenceToSequence: A seq2seq-with-attention dialogue/MT model implemented in TensorFlow.
Stars: ✭ 11 (-95.94%)
Linear Attention Recurrent Neural Network: A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer network. The LARNN cell with attention can easily be used inside a loop on the cell state, just like any other RNN. (LARNN; a minimal sketch follows this entry.)
Stars: ✭ 119 (-56.09%)
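To make the mechanism concrete, here is a minimal sketch of the idea under assumed shapes and names (a hypothetical cell, not the repository's actual API): an ordinary LSTM step whose hidden state then attends over a window of its own recent cell states.

```python
import torch
import torch.nn as nn

class WindowedAttentionLSTMCell(nn.Module):
    """Hypothetical LARNN-style cell: an LSTMCell whose hidden state
    queries a window of its own past cell states via multi-head attention."""

    def __init__(self, input_size, hidden_size, window=5, heads=4):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, heads, batch_first=True)
        self.window = window

    def forward(self, x, state, past_cells):
        # Ordinary LSTM step.
        h, c = self.cell(x, state)
        # Keep only the last `window` cell states as attention memory.
        past_cells = (past_cells + [c])[-self.window:]
        memory = torch.stack(past_cells, dim=1)   # (batch, window, hidden)
        # Query the windowed memory with the current hidden state.
        attended, _ = self.attn(h.unsqueeze(1), memory, memory)
        h = h + attended.squeeze(1)               # residual combination
        return h, (h, c), past_cells
```

In use, the cell is stepped through a sequence in a plain Python loop, carrying `state` and `past_cells` forward, just like any other RNN cell.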
Transformer: A TensorFlow implementation of the Transformer from "Attention Is All You Need"
Stars: ✭ 3,646 (+1245.39%)
Witwicky: An implementation of the Transformer in PyTorch.
Stars: ✭ 21 (-92.25%)
Attention Mechanisms: Implementations of a family of attention mechanisms, suitable for a wide range of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
Stars: ✭ 203 (-25.09%)
Machine-Translation-Hindi-to-english-: Machine translation is the task of converting text in one language into another. Unlike traditional phrase-based translation systems, which consist of many small sub-components that are tuned separately, neural machine translation attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation (a minimal encoder-decoder sketch follows this entry).
Stars: ✭ 19 (-92.99%)
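As a rough illustration of that single-network idea (a minimal sketch with made-up dimensions, not this repository's code): one recurrent encoder reads the source sentence into a vector, and a decoder emits target tokens from it.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Illustrative encoder-decoder: one GRU encodes the source sentence,
    another GRU decodes the target sequence token by token."""

    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the whole source sentence; keep only the final hidden state.
        _, h = self.encoder(self.src_emb(src_ids))
        # Decode the (teacher-forced) target sequence from that state.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits
```

The whole pipeline is trained end to end with a single cross-entropy loss over the output logits, which is the contrast with separately tuned phrase-based components.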
pynmt: A simple and complete PyTorch implementation of a neural machine translation system
Stars: ✭ 13 (-95.2%)
PAM: [TPAMI 2020] Parallax Attention for Unsupervised Stereo Correspondence Learning
Stars: ✭ 62 (-77.12%)
FragmentVC: Any-to-any voice conversion by end-to-end extracting and fusing fine-grained voice fragments with attention
Stars: ✭ 134 (-50.55%)
transganformer: Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGan papers
Stars: ✭ 137 (-49.45%)
Attention: Code for several different attention mechanisms
Stars: ✭ 17 (-93.73%)
nystrom-attention: Implementation of Nyström self-attention, from the Nyströmformer paper
Stars: ✭ 83 (-69.37%)
Patient2Vec: A Personalized Interpretable Deep Representation of the Longitudinal Electronic Health Record
Stars: ✭ 85 (-68.63%)
AoA-pytorch: A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering
Stars: ✭ 33 (-87.82%)
banglanmt: This repository contains the code and data of the paper "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation", published in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), November 16-20, 2020.
Stars: ✭ 91 (-66.42%)
ADL2019: Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-92.62%)
MoChA-pytorch: PyTorch implementation of "Monotonic Chunkwise Attention" (ICLR 2018)
Stars: ✭ 65 (-76.01%)
OverlapPredator: [CVPR 2021, Oral] PREDATOR: Registration of 3D Point Clouds with Low Overlap.
Stars: ✭ 293 (+8.12%)
QuantumForest: A fast differentiable forest library combining the advantages of decision trees and neural networks
Stars: ✭ 63 (-76.75%)
Video-Cap: 🎬 Video captioning: ICCV '15 paper implementation
Stars: ✭ 44 (-83.76%)
linformer: Implementation of Linformer for PyTorch
Stars: ✭ 119 (-56.09%)
SentimentAnalysis: Sentiment analysis with a deep Bi-LSTM + attention model
Stars: ✭ 32 (-88.19%)
transformer-pytorch: A PyTorch implementation of the Transformer from "Attention Is All You Need"
Stars: ✭ 77 (-71.59%)
enformer-pytorch: Implementation of Enformer, DeepMind's attention network for predicting gene expression, in PyTorch
Stars: ✭ 146 (-46.13%)
NTUA-slp-nlp: 💻 Speech and Natural Language Processing (SLP & NLP) lab assignments for ECE NTUA
Stars: ✭ 19 (-92.99%)
co-attention: PyTorch implementation of "Dynamic Coattention Networks For Question Answering"
Stars: ✭ 54 (-80.07%)
keras-deep-learning: Various implementations and projects covering CNNs, RNNs, LSTMs, GANs, etc.
Stars: ✭ 22 (-91.88%)
BangalASR: Transformer-based Bangla speech recognition
Stars: ✭ 20 (-92.62%)
ntua-slp-semeval2018: Deep-learning models from the NTUA-SLP team, submitted to SemEval 2018 tasks 1, 2, and 3.
Stars: ✭ 79 (-70.85%)
Natural-Language-Processing: Contains various architectures and novel paper implementations for natural language processing tasks such as sequence modelling and neural machine translation.
Stars: ✭ 48 (-82.29%)
DAF3D: Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound
Stars: ✭ 60 (-77.86%)
Mirnet: Official repository for "Learning Enriched Features for Real Image Restoration and Enhancement" (ECCV 2020), with SOTA results for image denoising, super-resolution, and image enhancement.
Stars: ✭ 247 (-8.86%)
NLP Toolkit: Library of state-of-the-art models (PyTorch) for NLP tasks
Stars: ✭ 92 (-66.05%)
visualization: A collection of visualization functions
Stars: ✭ 189 (-30.26%)
vista-net: Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis" (AAAI'19)
Stars: ✭ 67 (-75.28%)
ttslearn: Library accompanying the book "Pythonで学ぶ音声合成" (Text-to-Speech with Python)
Stars: ✭ 158 (-41.7%)
SANET: Arbitrary Style Transfer with Style-Attentional Networks
Stars: ✭ 105 (-61.25%)
halonet-pytorch: Implementation of the 😇 attention layer from the paper "Scaling Local Self-Attention For Parameter Efficient Visual Backbones"
Stars: ✭ 181 (-33.21%)
apertium-html-tools: Web application providing a fully localised interface for text/website/document translation, analysis, and generation, powered by Apertium.
Stars: ✭ 36 (-86.72%)
egfr-att: Drug effect prediction using a neural network
Stars: ✭ 17 (-93.73%)
ilmulti: Tooling for experimenting with multilingual machine translation for Indian languages.
Stars: ✭ 19 (-92.99%)
galerkin-transformer: [NeurIPS 2021] Galerkin Transformer: linear attention without softmax (a sketch follows this entry)
Stars: ✭ 111 (-59.04%)
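The core trick, in a simplified sketch (the paper's full Galerkin attention also layer-normalizes the keys and values and uses learned projections): dropping the softmax lets attention be computed as Q(KᵀV), making the cost linear rather than quadratic in sequence length.

```python
import torch

def linear_attention(q, k, v):
    """Softmax-free attention sketch, q/k/v: (batch, seq_len, dim).
    Forming (k^T v) first gives O(n * d^2) cost instead of the
    O(n^2 * d) of standard softmax attention."""
    n = q.shape[1]
    context = k.transpose(1, 2) @ v   # (batch, dim, dim)
    return (q @ context) / n          # (batch, seq_len, dim)
```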
Transformer-in-Transformer: An implementation of Transformer in Transformer in TensorFlow for image classification, with attention inside local patches
Stars: ✭ 40 (-85.24%)
SelfAttentive: Implementation of "A Structured Self-Attentive Sentence Embedding"
Stars: ✭ 107 (-60.52%)
3HAN: An original implementation of "3HAN: A Deep Neural Network for Fake News Detection" (ICONIP 2017)
Stars: ✭ 29 (-89.3%)