
lucidrains / AoA-pytorch

License: MIT
A Pytorch implementation of Attention on Attention module (both self and guided variants), for Visual Question Answering


Projects that are alternatives to or similar to AoA-pytorch

Im2LaTeX
An implementation of the Show, Attend and Tell paper in Tensorflow, for the OpenAI Im2LaTeX suggested problem
Stars: ✭ 16 (-51.52%)
Mutual labels:  attention, attention-mechanism
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (+30.3%)
Mutual labels:  attention, attention-mechanism
hexia
Mid-level PyTorch Based Framework for Visual Question Answering.
Stars: ✭ 24 (-27.27%)
Mutual labels:  attention-mechanism, visual-question-answering
iPerceive
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (+57.58%)
Mutual labels:  attention, captioning
Linear-Attention-Mechanism
Attention mechanism
Stars: ✭ 27 (-18.18%)
Mutual labels:  attention, attention-mechanism
self critical vqa
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (+18.18%)
Mutual labels:  vqa, visual-question-answering
TRAR-VQA
[ICCV 2021] TRAR: Routing the Attention Spans in Transformers for Visual Question Answering -- Official Implementation
Stars: ✭ 49 (+48.48%)
Mutual labels:  attention, visual-question-answering
bottom-up-features
Bottom-up features extractor implemented in PyTorch.
Stars: ✭ 62 (+87.88%)
Mutual labels:  vqa, visual-question-answering
Hierarchical-Word-Sense-Disambiguation-using-WordNet-Senses
Word Sense Disambiguation using Word Specific models, All word models and Hierarchical models in Tensorflow
Stars: ✭ 33 (+0%)
Mutual labels:  attention, attention-mechanism
CrabNet
Predict materials properties using only the composition information!
Stars: ✭ 57 (+72.73%)
Mutual labels:  attention, attention-mechanism
Vqa regat
Research Code for ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
Stars: ✭ 129 (+290.91%)
Mutual labels:  vqa, attention
ntua-slp-semeval2018
Deep-learning models of NTUA-SLP team submitted in SemEval 2018 tasks 1, 2 and 3.
Stars: ✭ 79 (+139.39%)
Mutual labels:  attention, attention-mechanism
Mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Stars: ✭ 4,713 (+14181.82%)
Mutual labels:  vqa, captioning
lstm-attention
Attention-based bidirectional LSTM for Classification Task (ICASSP)
Stars: ✭ 87 (+163.64%)
Mutual labels:  attention, attention-mechanism
Mac Network
Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Stars: ✭ 444 (+1245.45%)
Mutual labels:  vqa, attention
S2VT-seq2seq-video-captioning-attention
S2VT (seq2seq) video captioning with Bahdanau & Luong attention, implemented in TensorFlow
Stars: ✭ 18 (-45.45%)
Mutual labels:  attention-mechanism, captioning
just-ask
[TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ✭ 57 (+72.73%)
Mutual labels:  vqa, visual-question-answering
FigureQA-baseline
TensorFlow implementation of the CNN-LSTM, Relation Network and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Stars: ✭ 28 (-15.15%)
Mutual labels:  vqa, visual-question-answering
h-transformer-1d
Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Stars: ✭ 121 (+266.67%)
Mutual labels:  attention, attention-mechanism
datastories-semeval2017-task6
Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-39.39%)
Mutual labels:  attention, attention-mechanism

Attention on Attention - Pytorch

A Pytorch implementation of the Attention on Attention module, from the paper An Improved Attention for Visual Question Answering. The repository includes both the self and guided (cross-attention) variants.
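
Conceptually, Attention on Attention augments ordinary attention with a gating step: the attended values are concatenated with the queries, and two linear projections produce an "information" vector and a sigmoid "attention gate" whose elementwise product is the output. The sketch below illustrates only that gating step, following the Huang et al. formulation; the class and parameter names (`AoAGate`, `w_info`, `w_gate`) are hypothetical and do not reflect the code in this repository.

```python
import torch
from torch import nn

class AoAGate(nn.Module):
    """Hypothetical minimal sketch of the AoA gating step (Huang et al., 2019).

    Not the implementation in this repository - for illustration only.
    """
    def __init__(self, dim):
        super().__init__()
        self.w_info = nn.Linear(2 * dim, dim)  # produces the information vector I
        self.w_gate = nn.Linear(2 * dim, dim)  # produces the attention gate G

    def forward(self, attended, queries):
        # attended: output of ordinary attention, shape (batch, seq, dim)
        # queries:  the original queries,         shape (batch, seq, dim)
        x = torch.cat((attended, queries), dim = -1)
        info = self.w_info(x)            # candidate information
        gate = self.w_gate(x).sigmoid()  # how much of it to pass through
        return gate * info               # gated attention output
```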

Install

```bash
$ pip install aoa-pytorch
```

Usage

Self Attention on Attention

```python
import torch
from aoa_pytorch import AoA

attn = AoA(
    dim = 512,
    heads = 8
)

x = torch.randn(1, 1024, 512)
attn(x) + x # (1, 1024, 512)
```
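
The module returns the gated attention output without a residual, which is why the example adds `x` back. A deeper encoder can be built by stacking layers with the same skip connection; a minimal sketch, assuming only the `AoA(dim, heads)` constructor shown above:

```python
import torch
from torch import nn
from aoa_pytorch import AoA

# hypothetical 4-layer self-AoA stack with residual connections
layers = nn.ModuleList([AoA(dim = 512, heads = 8) for _ in range(4)])

x = torch.randn(1, 1024, 512)
for layer in layers:
    x = layer(x) + x  # residual connection, as in the usage example

print(x.shape)  # torch.Size([1, 1024, 512])
```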

Guided Attention on Attention

```python
import torch
from aoa_pytorch import AoA

attn = AoA(
    dim = 512,
    heads = 8
)

x = torch.randn(1, 1024, 512)
context = torch.randn(1, 1024, 512)

attn(x, context = context) + x # (1, 1024, 512)
```
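
In a VQA-style pipeline, the guided variant would typically let one modality attend to the other, e.g. image region features refined by question-token features. The sketch below is a hypothetical usage, assuming (unverified for this repository) that `context` may have a different sequence length than `x`:

```python
import torch
from aoa_pytorch import AoA

attn = AoA(
    dim = 512,
    heads = 8
)

# hypothetical VQA shapes: 36 image regions, 20 question tokens
image_feats    = torch.randn(1, 36, 512)  # queries
question_feats = torch.randn(1, 20, 512)  # keys / values, passed as context

out = attn(image_feats, context = question_feats) + image_feats
# out shape: (1, 36, 512)
```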

Citations

```bibtex
@misc{rahman2020improved,
    title   = {An Improved Attention for Visual Question Answering},
    author  = {Tanzila Rahman and Shih-Han Chou and Leonid Sigal and Giuseppe Carenini},
    year    = {2020},
    eprint  = {2011.02164},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}
```

```bibtex
@misc{huang2019attention,
    title   = {Attention on Attention for Image Captioning},
    author  = {Lun Huang and Wenmin Wang and Jie Chen and Xiao-Yong Wei},
    year    = {2019},
    eprint  = {1908.06954},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}
```