Datastories Semeval2017 Task4 - Deep-learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Stars: ✭ 184 (+138.96%)
Triplet Attention - Official PyTorch implementation of "Rotate to Attend: Convolutional Triplet Attention Module" (WACV 2021).
Stars: ✭ 222 (+188.31%)
Seq2seq chatbot new - A TensorFlow implementation of a simple seq2seq dialogue system with embedding, attention, and beam search, trained on the Cornell Movie Dialogs dataset.
Stars: ✭ 144 (+87.01%)
Attentionalpoolingaction - Code and model release for the NIPS 2017 paper "Attentional Pooling for Action Recognition".
Stars: ✭ 248 (+222.08%)
Lstm attention - Attention-based LSTM/Dense models implemented in Keras; a generic sketch of the attention-pooling pattern follows below.
Stars: ✭ 168 (+118.18%)
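As context for this and several similar entries, here is a minimal, hypothetical sketch (not this repository's code) of the usual pattern: an attention layer that scores each LSTM timestep and pools the sequence into a single vector for classification. All names and hyperparameters below are placeholders.

```python
# Minimal sketch of attention-pooled LSTM classification in Keras.
# Illustrative only; layer names and hyperparameters are placeholders.
import tensorflow as tf
from tensorflow.keras import layers

class AttentionPooling(layers.Layer):
    """Scores each timestep, then returns the softmax-weighted sum."""
    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], 1),
                                 initializer="glorot_uniform", name="w")

    def call(self, h):                                # h: (batch, time, units)
        scores = tf.matmul(tf.tanh(h), self.w)        # (batch, time, 1)
        alpha = tf.nn.softmax(scores, axis=1)         # weights over timesteps
        return tf.reduce_sum(alpha * h, axis=1)       # (batch, units)

inputs = layers.Input(shape=(100,), dtype="int32")
x = layers.Embedding(20000, 128)(inputs)
x = layers.LSTM(64, return_sequences=True)(x)         # keep per-step states
x = AttentionPooling()(x)
outputs = layers.Dense(3, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```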
Pan - "Efficient Image Super-Resolution Using Pixel Attention" (ECCV Workshop, 2020). Only 272K parameters.
Stars: ✭ 151 (+96.1%)
Neat Vision - Neat (Neural Attention) Vision is a framework-agnostic visualization tool for the attention mechanisms of deep-learning models on Natural Language Processing (NLP) tasks.
Stars: ✭ 213 (+176.62%)
Csa Inpainting - Coherent Semantic Attention for image inpainting (ICCV 2019).
Stars: ✭ 202 (+162.34%)
Adnet - Attention-guided CNN for image denoising (Neural Networks, 2020).
Stars: ✭ 135 (+75.32%)
SemiDenseNet - Code for one of the networks we employed in the iSEG Grand MICCAI Challenge 2017 on infant brain segmentation.
Stars: ✭ 55 (-28.57%)
Attentive Gan Derainnet - Unofficial TensorFlow implementation of the "Attentive Generative Adversarial Network for Raindrop Removal from a Single Image" (CVPR 2018) model. https://maybeshewill-cv.github.io/attentive-gan-derainnet/
Stars: ✭ 184 (+138.96%)
SA-DL - Sentiment analysis with deep learning models, implemented with TensorFlow and Keras.
Stars: ✭ 35 (-54.55%)
Linformer Pytorch - My take on a practical implementation of Linformer for PyTorch.
Stars: ✭ 239 (+210.39%)
Slot Attention - Implementation of Slot Attention from Google AI.
Stars: ✭ 168 (+118.18%)
Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention.
Stars: ✭ 156 (+102.6%)
Dalle Pytorch - Implementation/replication of DALL-E, OpenAI's text-to-image transformer, in PyTorch.
Stars: ✭ 3,661 (+4654.55%)
Transformers-RL - An easy PyTorch implementation of "Stabilizing Transformers for Reinforcement Learning".
Stars: ✭ 107 (+38.96%)
Document Classifier Lstm - A bidirectional LSTM with attention for multiclass/multilabel text classification.
Stars: ✭ 136 (+76.62%)
Linear Attention Transformer - Transformer based on an attention variant whose complexity is linear with respect to sequence length; see the sketch after this entry.
Stars: ✭ 205 (+166.23%)
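The trick behind this family of models is to replace softmax(QKᵀ)V with a kernel feature map φ so that φ(Q)(φ(K)ᵀV) is computed without ever forming the n×n score matrix. A rough PyTorch sketch of the non-causal case, assuming the simple φ(x) = elu(x) + 1 feature map (my illustration, not this repository's kernel):

```python
# Rough sketch of (non-causal) linear attention: O(n) in sequence length
# because K^T V is aggregated once instead of forming the n x n matrix QK^T.
# Illustrative only; the repository's actual variant may differ.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq_len, dim)
    q = F.elu(q) + 1                              # positive feature map phi
    k = F.elu(k) + 1
    kv = torch.einsum("bhnd,bhne->bhde", k, v)    # sum_n phi(k_n) v_n^T
    z = torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2))   # normalizer
    out = torch.einsum("bhnd,bhde->bhne", q, kv)
    return out / (z.unsqueeze(-1) + eps)

q = k = v = torch.randn(2, 4, 128, 32)
print(linear_attention(q, k, v).shape)            # torch.Size([2, 4, 128, 32])
```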
lstm-attention - Attention-based bidirectional LSTM for classification tasks (ICASSP).
Stars: ✭ 87 (+12.99%)
Sca Cnn.cvpr17 - Image caption generation with spatial and channel-wise attention.
Stars: ✭ 198 (+157.14%)
Hnatt - Train and visualize Hierarchical Attention Networks.
Stars: ✭ 192 (+149.35%)
Graph attention pool - Attention over nodes in Graph Neural Networks using PyTorch (NeurIPS 2019).
Stars: ✭ 186 (+141.56%)
memory-compressed-attention - Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences"; a sketch of the compression idea follows below.
Stars: ✭ 47 (-38.96%)
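The compression idea is simple to sketch: keys and values are shortened with a strided convolution before standard dot-product attention, so the score matrix shrinks from n×n to n×(n/stride). A hypothetical PyTorch illustration (not the linked code; the paper uses separate convolutions for keys and values, shared here for brevity):

```python
# Sketch of memory-compressed attention: K and V are shortened by a strided
# Conv1d before ordinary scaled dot-product attention. Illustrative only.
import torch

def compressed_attention(q, k, v, compress):
    # q, k, v: (batch, seq_len, dim); compress: a strided Conv1d over time
    k = compress(k.transpose(1, 2)).transpose(1, 2)   # (batch, n/stride, dim)
    v = compress(v.transpose(1, 2)).transpose(1, 2)
    scores = q @ k.transpose(1, 2) / q.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ v

dim = 64
conv = torch.nn.Conv1d(dim, dim, kernel_size=3, stride=3)
q = k = v = torch.randn(2, 99, dim)
print(compressed_attention(q, k, v, conv).shape)      # torch.Size([2, 99, 64])
```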
Aoanet - Code for the paper "Attention on Attention for Image Captioning" (ICCV 2019).
Stars: ✭ 242 (+214.29%)
Im2LaTeX - An implementation of the Show, Attend and Tell paper in TensorFlow, for the OpenAI Im2LaTeX suggested problem.
Stars: ✭ 16 (-79.22%)
Eeg Dl - A deep learning library for EEG signal classification tasks, based on TensorFlow.
Stars: ✭ 165 (+114.29%)
Self Attention Cv - Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Stars: ✭ 209 (+171.43%)
Gat - Graph Attention Networks (https://arxiv.org/abs/1710.10903); the core layer is sketched below.
Stars: ✭ 2,229 (+2794.81%)
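The core GAT layer is compact enough to sketch. The following single-head version with a dense adjacency mask follows the spirit of the paper's equations (masked softmax over a learned pairwise score); it is an illustration, not the linked repository's implementation:

```python
# Single-head GAT layer sketch with a dense adjacency mask.
# Illustrative only; real implementations use sparse ops and multiple heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention vector

    def forward(self, x, adj):         # x: (N, in_dim), adj: (N, N) in {0, 1}
        h = self.W(x)                                    # (N, out_dim)
        N = h.size(0)
        # Pairwise concatenation [h_i || h_j] for every candidate edge.
        pairs = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                           h.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), 0.2)  # raw scores (N, N)
        e = e.masked_fill(adj == 0, float("-inf"))        # keep real edges only
        alpha = torch.softmax(e, dim=-1)                  # weights per node
        return F.elu(alpha @ h)                           # aggregate neighbors

x, adj = torch.randn(5, 16), torch.eye(5)   # self-loops only, as a smoke test
print(GATLayer(16, 8)(x, adj).shape)        # torch.Size([5, 8])
```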
segRetino - An implementation of the research paper "Retina Blood Vessel Segmentation Using A U-Net Based Convolutional Neural Network".
Stars: ✭ 23 (-70.13%)
Picanet Implementation - PyTorch implementation of "PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection".
Stars: ✭ 157 (+103.9%)
X Transformers - A simple but complete full-attention transformer with a set of promising experimental features from various papers.
Stars: ✭ 211 (+174.03%)
question-generation - Neural Models for Key Phrase Detection and Question Generation.
Stars: ✭ 29 (-62.34%)
Hart - Hierarchical Attentive Recurrent Tracking.
Stars: ✭ 149 (+93.51%)
Lightnetplusplus - LightNet++: Boosted Light-weighted Networks for Real-time Semantic Segmentation.
Stars: ✭ 218 (+183.12%)
STAM-pytorch - Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification.
Stars: ✭ 109 (+41.56%)
Attribute Aware Attention - [ACM MM 2018] Attribute-Aware Attention Model for Fine-grained Representation Learning.
Stars: ✭ 143 (+85.71%)
Prediction Flow - Deep-learning-based CTR models implemented in PyTorch.
Stars: ✭ 138 (+79.22%)
Guided Attention Inference Network - Implementation of the Guided Attention Inference Network (GAIN) presented in "Tell Me Where to Look" (CVPR 2018). This repository applies GAIN to the FCN-8 architecture used for segmentation.
Stars: ✭ 204 (+164.94%)
NARRE - Our implementation of NARRE: Neural Attentional Regression with Review-level Explanations.
Stars: ✭ 100 (+29.87%)
amta-net - Asymmetric Multi-Task Attention Network for Prostate Bed Segmentation in CT Images.
Stars: ✭ 26 (-66.23%)
DARNN - A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction.
Stars: ✭ 90 (+16.88%)
Attention Mechanisms - Implementations of a family of attention mechanisms, suitable for a wide range of natural language processing tasks and compatible with TensorFlow 2.0 and Keras; one member of such a family is sketched below.
Stars: ✭ 203 (+163.64%)
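As an example of one member of such a family, here is a hypothetical additive (Bahdanau-style) attention layer in TensorFlow 2.0/Keras; the class and variable names are mine, not this repository's API:

```python
# Sketch of additive (Bahdanau-style) attention:
# score(q, k) = v^T tanh(Wq q + Wk k).
# Illustrative only; names and shapes are placeholders, not this library's API.
import tensorflow as tf

class AdditiveAttention(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.Wq = tf.keras.layers.Dense(units)
        self.Wk = tf.keras.layers.Dense(units)
        self.v = tf.keras.layers.Dense(1)

    def call(self, query, keys):
        # query: (batch, dim), keys: (batch, time, dim)
        q = tf.expand_dims(query, 1)                           # (batch, 1, units)
        scores = self.v(tf.tanh(self.Wq(q) + self.Wk(keys)))  # (batch, time, 1)
        alpha = tf.nn.softmax(scores, axis=1)                  # over timesteps
        context = tf.reduce_sum(alpha * keys, axis=1)          # (batch, dim)
        return context, tf.squeeze(alpha, -1)

layer = AdditiveAttention(32)
ctx, w = layer(tf.random.normal((2, 16)), tf.random.normal((2, 10, 16)))
print(ctx.shape, w.shape)   # (2, 16) (2, 10)
```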