Abstractive Summarization - Implementation of abstractive summarization using LSTM in the encoder-decoder architecture with local attention.
Stars: ✭ 128 (-54.45%)
Sockeye - Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet
Stars: ✭ 990 (+252.31%)
Image-Caption - Using LSTM or Transformer to solve Image Captioning in PyTorch
Stars: ✭ 36 (-87.19%)
Embedding - A collection of Embedding model code and study notes
Stars: ✭ 25 (-91.1%)
pynmt - A simple and complete PyTorch implementation of a neural machine translation system
Stars: ✭ 13 (-95.37%)
SentimentAnalysis - Sentiment Analysis: Deep Bi-LSTM + attention model
Stars: ✭ 32 (-88.61%)
nystrom-attention - Implementation of Nyström Self-attention, from the paper Nyströmformer
Stars: ✭ 83 (-70.46%)
Patient2Vec - A Personalized Interpretable Deep Representation of the Longitudinal Electronic Health Record
Stars: ✭ 85 (-69.75%)
laika - Experiments with satellite image data
Stars: ✭ 97 (-65.48%)
Video-Cap - 🎬 Video Captioning: ICCV '15 paper implementation
Stars: ✭ 44 (-84.34%)
galerkin-transformer - [NeurIPS 2021] Galerkin Transformer: linear attention without softmax
Stars: ✭ 111 (-60.5%)
FragmentVC - Any-to-any voice conversion by end-to-end extracting and fusing fine-grained voice fragments with attention
Stars: ✭ 134 (-52.31%)
co-attention - PyTorch implementation of "Dynamic Coattention Networks For Question Answering"
Stars: ✭ 54 (-80.78%)
NTUA-slp-nlp - 💻 Speech and Natural Language Processing (SLP & NLP) Lab Assignments for ECE NTUA
Stars: ✭ 19 (-93.24%)
Textbox - TextBox is an open-source library for building text generation systems.
Stars: ✭ 257 (-8.54%)
AoA-pytorch - A PyTorch implementation of the Attention on Attention module (both self and guided variants), for Visual Question Answering
Stars: ✭ 33 (-88.26%)
ntua-slp-semeval2018 - Deep-learning models of the NTUA-SLP team submitted to SemEval 2018 tasks 1, 2, and 3.
Stars: ✭ 79 (-71.89%)
DAF3D - Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound
Stars: ✭ 60 (-78.65%)
transganformer - Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGAN papers
Stars: ✭ 137 (-51.25%)
OverlapPredator - [CVPR 2021, Oral] PREDATOR: Registration of 3D Point Clouds with Low Overlap.
Stars: ✭ 293 (+4.27%)
nuwa-pytorch - Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in PyTorch
Stars: ✭ 347 (+23.49%)
QuantumForest - Fast differentiable forest library with the advantages of both decision trees and neural networks
Stars: ✭ 63 (-77.58%)
PAM - [TPAMI 2020] Parallax Attention for Unsupervised Stereo Correspondence Learning
Stars: ✭ 62 (-77.94%)
Encoder decoder - Four styles of encoder-decoder models in Python, Theano, Keras, and Seq2Seq
Stars: ✭ 269 (-4.27%)
Encoder-Forest - eForest: Reversible mapping between high-dimensional data and path rule identifiers using tree embedding
Stars: ✭ 22 (-92.17%)
Attention - Code for several different attention mechanisms
Stars: ✭ 17 (-93.95%)
enformer-pytorch - Implementation of Enformer, DeepMind's attention network for predicting gene expression, in PyTorch
Stars: ✭ 146 (-48.04%)
cnn-surrogate - Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification
Stars: ✭ 75 (-73.31%)
recurrent-decoding-cell - [AAAI'20] Segmenting Medical MRI via Recurrent Decoding Cell (Spotlight)
Stars: ✭ 14 (-95.02%)
ai-visual-storytelling-seq2seq - Implementation of a seq2seq model for the Visual Storytelling Challenge (VIST): http://visionandlanguage.net/VIST/index.html
Stars: ✭ 50 (-82.21%)
MidcurveNN - Computation of Midcurve of Thin Polygons using Neural Networks
Stars: ✭ 19 (-93.24%)
Transformer - A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation"
Stars: ✭ 271 (-3.56%)
keras-deep-learning - Various implementations and projects on CNN, RNN, LSTM, GAN, etc.
Stars: ✭ 22 (-92.17%)
ConvLSTM-PyTorch - ConvLSTM/ConvGRU (Encoder-Decoder) with PyTorch on Moving-MNIST
Stars: ✭ 202 (-28.11%)
linformer - Implementation of Linformer for PyTorch
Stars: ✭ 119 (-57.65%)
MoChA-pytorch - PyTorch implementation of "Monotonic Chunkwise Attention" (ICLR 2018)
Stars: ✭ 65 (-76.87%)
jzon - A correct and safe JSON parser.
Stars: ✭ 78 (-72.24%)
Da Rnn - 📃 **Unofficial** PyTorch implementation of DA-RNN (arXiv:1704.02971)
Stars: ✭ 256 (-8.9%)
visualization - A collection of visualization functions
Stars: ✭ 189 (-32.74%)
ttslearn - Library for "Pythonで学ぶ音声合成" (Text-to-Speech with Python)
Stars: ✭ 158 (-43.77%)
SANET - Arbitrary Style Transfer with Style-Attentional Networks
Stars: ✭ 105 (-62.63%)
ADL2019 - Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-92.88%)
Transformer-in-Transformer - An implementation of Transformer in Transformer in TensorFlow for image classification, with attention inside local patches
Stars: ✭ 40 (-85.77%)
halonet-pytorch - Implementation of the 😇 Attention layer from the paper "Scaling Local Self-Attention for Parameter Efficient Visual Backbones"
Stars: ✭ 181 (-35.59%)
Timesformer Pytorch - Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
Stars: ✭ 225 (-19.93%)
Rxffmpeg - 🔥💥 RxFFmpeg is a fast audio/video editing and clipping framework for the Android platform, built on FFmpeg 4.0 + X264 + mp3lame + fdk-aac + opencore-amr + openssl. Features include: video concatenation, transcoding, compression, cropping, intro/outro clips, separating audio and video, speed adjustment, static and animated GIF stickers, subtitles, filters, background music, speeding up and slowing down video, reverse playback of audio and video, audio trimming, voice changing, audio mixing, composing video from images, decoding video into images, a Douyin-style home feed, a video player, and OpenSSL HTTPS support
Stars: ✭ 3,358 (+1095.02%)