Linear Attention Recurrent Neural Network (LARNN): A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer network. The LARNN cell with attention can easily be used inside a loop on the cell state, just like any other RNN.
Stars: ✭ 119 (+83.08%)
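The LARNN entry above describes an LSTM cell that attends over a window of its own past cell states on each step. A minimal pure-Python sketch of that idea, assuming single-head scaled dot-product attention and a simple tanh update as a stand-in for the real LSTM cell (all names and sizes are illustrative, not taken from the LARNN code):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def windowed_attention(query, past_states, window=4):
    """Single-head scaled dot-product attention over the last `window` past states."""
    mem = past_states[-window:]
    scale = math.sqrt(len(query))
    weights = softmax([dot(query, m) / scale for m in mem])
    # context = attention-weighted sum of the remembered state vectors
    return [sum(w * m[i] for w, m in zip(weights, mem)) for i in range(len(query))]

# Toy recurrence: each new state attends over its own recent history.
state = [0.1, -0.2, 0.3, 0.05]
history = [state]
for _ in range(6):
    ctx = windowed_attention(state, history)
    # tanh update stands in for the LSTM cell-state update described above
    state = [math.tanh(s + c) for s, c in zip(state, ctx)]
    history.append(state)
print(len(history), len(state))  # 7 4
```

The key point is that the attention memory is the model's own past states, so the cell stays a drop-in replacement for an ordinary RNN cell inside a loop.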
Attentionalpoolingaction: Code/model release for the NIPS 2017 paper "Attentional Pooling for Action Recognition".
Stars: ✭ 248 (+281.54%)
Nmt Keras: Neural Machine Translation with Keras.
Stars: ✭ 501 (+670.77%)
Image Caption Generator: A neural network to generate captions for an image using a CNN and RNN with beam search.
Stars: ✭ 126 (+93.85%)
Compact-Global-Descriptor: PyTorch implementation of "Compact Global Descriptor for Neural Networks" (CGD).
Stars: ✭ 22 (-66.15%)
Sockeye: Sequence-to-sequence framework with a focus on neural machine translation, based on Apache MXNet.
Stars: ✭ 990 (+1423.08%)
Simgnn: A PyTorch implementation of "SimGNN: A Neural Network Approach to Fast Graph Similarity Computation" (WSDM 2019).
Stars: ✭ 351 (+440%)
Paperrobot: Code for "PaperRobot: Incremental Draft Generation of Scientific Ideas".
Stars: ✭ 372 (+472.31%)
Keras Attention: Visualizing RNNs using the attention mechanism.
Stars: ✭ 697 (+972.31%)
Attentiongan: AttentionGAN for Unpaired Image-to-Image Translation and Multi-Domain Image-to-Image Translation.
Stars: ✭ 341 (+424.62%)
Pointer summarizer: PyTorch implementation of "Get To The Point: Summarization with Pointer-Generator Networks".
Stars: ✭ 629 (+867.69%)
Keras Gat: Keras implementation of graph attention networks (GAT) by Veličković et al. (2017; https://arxiv.org/abs/1710.10903).
Stars: ✭ 334 (+413.85%)
Seq2seq chatbot: A TensorFlow implementation of a simple dialogue system based on a seq2seq model, with embedding, attention, and beam_search; the dataset is Cornell Movie Dialogs.
Stars: ✭ 308 (+373.85%)
Performer Pytorch: An implementation of Performer, a linear-attention-based Transformer, in PyTorch.
Stars: ✭ 546 (+740%)
Seq2seq Summarizer: Pointer-generator reinforced seq2seq summarization in PyTorch.
Stars: ✭ 306 (+370.77%)
Adaptiveattention: Implementation of "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning".
Stars: ✭ 303 (+366.15%)
Isab Pytorch: An implementation of the (Induced) Set Attention Block from the Set Transformer paper.
Stars: ✭ 21 (-67.69%)
Attention is all you need: A Chainer implementation of the Transformer from "Attention Is All You Need" (Vaswani et al., 2017).
Stars: ✭ 303 (+366.15%)
Vit Pytorch: Implementation of the Vision Transformer, a simple way to achieve SOTA in vision classification with only a single Transformer encoder, in PyTorch.
Stars: ✭ 7,199 (+10975.38%)
Timesformer Pytorch: Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification.
Stars: ✭ 225 (+246.15%)
Neural sp: End-to-end ASR/LM implementation with PyTorch.
Stars: ✭ 408 (+527.69%)
Chatbot cn: A chatbot for the finance and judicial domains (with some chit-chat ability). Its main modules include information extraction, NLU, NLG, and a knowledge graph; the front-end display is integrated via Django, and RESTful interfaces for the nlp and kg modules are already wrapped.
Stars: ✭ 791 (+1116.92%)
Mtan: The implementation of "End-to-End Multi-Task Learning with Attention" [CVPR 2019].
Stars: ✭ 364 (+460%)
Textclassifier: Text classifier implementing Hierarchical Attention Networks for Document Classification.
Stars: ✭ 985 (+1415.38%)
Transformer: A TensorFlow implementation of the Transformer from "Attention Is All You Need".
Stars: ✭ 3,646 (+5509.23%)
Yolo Multi Backbones Attention: Model compression for YOLOv3 with multiple lightweight backbones (ShuffleNetV2, Huawei GhostNet), attention, pruning, and quantization.
Stars: ✭ 317 (+387.69%)
Awesome Bert Nlp: A curated list of NLP resources focused on BERT, the attention mechanism, Transformer networks, and transfer learning.
Stars: ✭ 567 (+772.31%)
Fed Att: Attentive Federated Learning for Private NLM.
Stars: ✭ 34 (-47.69%)
Alphafold2: To eventually become an unofficial PyTorch implementation/replication of AlphaFold2, as details of the architecture are released.
Stars: ✭ 298 (+358.46%)
Moran v2: MORAN, a Multi-Object Rectified Attention Network for Scene Text Recognition.
Stars: ✭ 536 (+724.62%)
Transformer: A PyTorch implementation of "Attention Is All You Need" and "Weighted Transformer Network for Machine Translation".
Stars: ✭ 271 (+316.92%)
Multi Scale Attention: Code for the paper "Multi-scale Guided Attention for Medical Image Segmentation".
Stars: ✭ 281 (+332.31%)
Keras Self Attention: Attention mechanism for processing sequential data that considers the context for each timestamp.
Stars: ✭ 489 (+652.31%)
Attention ocr.pytorch: This repository implements an encoder-decoder model with attention for OCR.
Stars: ✭ 278 (+327.69%)
Da Rnn: 📃 **Unofficial** PyTorch implementation of DA-RNN (arXiv:1704.02971).
Stars: ✭ 256 (+293.85%)
Mirnet: Official repository for "Learning Enriched Features for Real Image Restoration and Enhancement" (ECCV 2020). SOTA results for image denoising, super-resolution, and image enhancement.
Stars: ✭ 247 (+280%)
Text classification: All kinds of text classification models and more, with deep learning.
Stars: ✭ 7,179 (+10944.62%)
Awesome Graph Classification: A collection of important graph embedding, classification, and representation learning papers with implementations.
Stars: ✭ 4,309 (+6529.23%)
galerkin-transformer: [NeurIPS 2021] Galerkin Transformer: linear attention without softmax.
Stars: ✭ 111 (+70.77%)
SAE-NAD: The implementation of "Point-of-Interest Recommendation: Exploiting Self-Attentive Autoencoders with Neighbor-Aware Influence".
Stars: ✭ 48 (-26.15%)
linformer: Implementation of Linformer for PyTorch.
Stars: ✭ 119 (+83.08%)
transganformer: Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGAN papers.
Stars: ✭ 137 (+110.77%)
Ag Cnn: A reimplementation of AG-CNN ("Thorax Disease Classification with Attention Guided Convolutional Neural Network", "Diagnose like a Radiologist: Attention Guided Convolutional Neural Network for Thorax Disease Classification").
Stars: ✭ 27 (-58.46%)
Transformer Tts: A PyTorch implementation of "Neural Speech Synthesis with Transformer Network".
Stars: ✭ 418 (+543.08%)