
gan3sh500 / attention-augmented-conv

Licence: other
Implementation of the paper Attention Augmented Convolutional Networks in TensorFlow (https://arxiv.org/pdf/1904.09925v1.pdf)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to attention-augmented-conv

query-selector
Long-term series forecasting with Query Selector – efficient model of sparse attention
Stars: ✭ 63 (+34.04%)
Mutual labels:  self-attention
VAENAR-TTS
PyTorch Implementation of VAENAR-TTS: Variational Auto-Encoder based Non-AutoRegressive Text-to-Speech Synthesis.
Stars: ✭ 66 (+40.43%)
Mutual labels:  self-attention
iPerceive
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (+10.64%)
Mutual labels:  self-attention
R-MeN
Transformer-based Memory Networks for Knowledge Graph Embeddings (ACL 2020) (Pytorch and Tensorflow)
Stars: ✭ 74 (+57.45%)
Mutual labels:  self-attention
Multi-Hop-Knowledge-Paths-Human-Needs
Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs
Stars: ✭ 17 (-63.83%)
Mutual labels:  self-attention
lightweight-temporal-attention-pytorch
A PyTorch implementation of the Light Temporal Attention Encoder (L-TAE) for satellite image time series classification
Stars: ✭ 43 (-8.51%)
Mutual labels:  self-attention
Gat
Graph Attention Networks (https://arxiv.org/abs/1710.10903)
Stars: ✭ 2,229 (+4642.55%)
Mutual labels:  self-attention
Walk-Transformer
From Random Walks to Transformer for Learning Node Embeddings (ECML-PKDD 2020) (In Pytorch and Tensorflow)
Stars: ✭ 26 (-44.68%)
Mutual labels:  self-attention
AttnSleep
[IEEE TNSRE] "An Attention-based Deep Learning Approach for Sleep Stage Classification with Single-Channel EEG"
Stars: ✭ 76 (+61.7%)
Mutual labels:  self-attention
Object-and-Semantic-Part-Detection-pyTorch
Joint detection of Object and its Semantic parts using Attention-based Feature Fusion on PASCAL Parts 2010 dataset
Stars: ✭ 18 (-61.7%)
Mutual labels:  self-attention
Transformer-in-PyTorch
Transformer/Transformer-XL/R-Transformer examples and explanations
Stars: ✭ 21 (-55.32%)
Mutual labels:  self-attention
FUSION
PyTorch code for NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-61.7%)
Mutual labels:  self-attention
Relational Deep Reinforcement Learning
No description or website provided.
Stars: ✭ 44 (-6.38%)
Mutual labels:  self-attention
MASTER-pytorch
Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021)
Stars: ✭ 263 (+459.57%)
Mutual labels:  self-attention
robustness-vit
Contains code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).
Stars: ✭ 78 (+65.96%)
Mutual labels:  self-attention
seq2seq-pytorch
Sequence to Sequence Models in PyTorch
Stars: ✭ 41 (-12.77%)
Mutual labels:  self-attention
CrabNet
Predict materials properties using only the composition information!
Stars: ✭ 57 (+21.28%)
Mutual labels:  self-attention
pytorch-psetae
PyTorch implementation of the model presented in "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention"
Stars: ✭ 117 (+148.94%)
Mutual labels:  self-attention
Parallel-Tacotron2
PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
Stars: ✭ 149 (+217.02%)
Mutual labels:  self-attention
Awesome-Vision-Transformer-Collection
Variants of Vision Transformer and its downstream tasks
Stars: ✭ 124 (+163.83%)
Mutual labels:  self-attention

Attention-Augmented Convolution

A TensorFlow implementation of the paper Attention Augmented Convolutional Networks. A PyTorch implementation will be added soon, but torch.einsum is currently very slow; even on GPU it is slower than NumPy.
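For orientation, the sketch below illustrates what an attention-augmented convolution computes according to the paper: a standard convolution producing Fout - dv output channels, concatenated along the channel axis with dv channels of multi-head 2D self-attention. This is a simplified illustration, not the repo's layer.py, and it omits the paper's relative position embeddings.

import tensorflow as tf

def self_attention_2d(x, dk, dv, nh):
    """Simplified multi-head 2D self-attention over all spatial positions.
    dk and dv are the total key and value depths (each divisible by nh)."""
    b, h, w, _ = x.shape
    qkv = tf.keras.layers.Conv2D(2 * dk + dv, 1)(x)           # 1x1 conv producing q, k, v
    q, k, v = tf.split(qkv, [dk, dk, dv], axis=-1)

    def split_heads(t, depth):                                 # -> [B, nh, H*W, depth // nh]
        t = tf.reshape(t, [b, h * w, nh, depth // nh])
        return tf.transpose(t, [0, 2, 1, 3])

    q, k, v = split_heads(q, dk), split_heads(k, dk), split_heads(v, dv)
    logits = tf.matmul(q * (dk // nh) ** -0.5, k, transpose_b=True)
    weights = tf.nn.softmax(logits)                            # attention over all H*W positions
    o = tf.matmul(weights, v)                                  # [B, nh, H*W, dv // nh]
    o = tf.reshape(tf.transpose(o, [0, 2, 1, 3]), [b, h, w, dv])
    return tf.keras.layers.Conv2D(dv, 1)(o)                    # output projection

def aa_conv2d_sketch(x, fout, k, dk, dv, nh):
    """Conv features (fout - dv channels) concatenated with attention features (dv channels)."""
    conv_out = tf.keras.layers.Conv2D(fout - dv, k, padding='same')(x)
    return tf.concat([conv_out, self_attention_2d(x, dk, dv, nh)], axis=-1)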

(Figure: Attention-Augmented Convolution, from the paper)

To use the layer:

from layer import augmented_conv2d
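
A minimal usage sketch follows. The argument names and order are assumed from the pseudocode in the paper's appendix (input, output channels Fout, kernel size k, key/value depths dk/dv, number of heads Nh, and a relative-position flag); check layer.py for the actual signature before relying on it.

import tensorflow as tf
from layer import augmented_conv2d   # layer.py from this repository

x = tf.random.normal([8, 32, 32, 64])             # NHWC feature map
# Hypothetical call, following the paper's pseudocode argument order:
# (X, Fout=128, k=3, dk=40, dv=40, Nh=4, relative=True)
y = augmented_conv2d(x, 128, 3, 40, 40, 4, True)
print(y.shape)                                     # expected: (8, 32, 32, 128)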

The paper reports promising results, especially the following:

(Figure: ResNet-50 improvements, from the paper)
