Im2LaTeX: An implementation of the Show, Attend and Tell paper in TensorFlow, for the OpenAI Im2LaTeX suggested problem
Stars: ✭ 16 (-51.52%)
Mutual labels: attention, attention-mechanism
automatic-personality-prediction: [AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (+30.3%)
Mutual labels: attention, attention-mechanism
hexia: Mid-level PyTorch-based framework for Visual Question Answering
Stars: ✭ 24 (-27.27%)
Mutual labels: attention-mechanism, visual-question-answering
iPerceive: Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self-Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (+57.58%)
Mutual labels: attention, captioning
self critical vqa: Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (+18.18%)
Mutual labels: vqa, visual-question-answering
TRAR-VQA: [ICCV 2021] TRAR: Routing the Attention Spans in Transformers for Visual Question Answering -- Official Implementation
Stars: ✭ 49 (+48.48%)
Mutual labels: attention, visual-question-answering
bottom-up-features: Bottom-up features extractor implemented in PyTorch
Stars: ✭ 62 (+87.88%)
Mutual labels: vqa, visual-question-answering
CrabNet: Predict materials properties using only the composition information!
Stars: ✭ 57 (+72.73%)
Mutual labels: attention, attention-mechanism
Vqa regat: Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
Stars: ✭ 129 (+290.91%)
Mutual labels: vqa, attention
ntua-slp-semeval2018: Deep-learning models of the NTUA-SLP team submitted to SemEval 2018 Tasks 1, 2 and 3
Stars: ✭ 79 (+139.39%)
Mutual labels: attention, attention-mechanism
Mmf: A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Stars: ✭ 4,713 (+14181.82%)
Mutual labels: vqa, captioning
lstm-attention: Attention-based bidirectional LSTM for a classification task (ICASSP)
Stars: ✭ 87 (+163.64%)
Mutual labels: attention, attention-mechanism
Mac Network: Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Stars: ✭ 444 (+1245.45%)
Mutual labels: vqa, attention
S2VT-seq2seq-video-captioning-attention: S2VT (seq2seq) video captioning with Bahdanau & Luong attention, implemented in TensorFlow
Stars: ✭ 18 (-45.45%)
Mutual labels: attention-mechanism, captioning
just-ask: [TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ✭ 57 (+72.73%)
Mutual labels: vqa, visual-question-answering
FigureQA-baseline: TensorFlow implementation of the CNN-LSTM, Relation Network and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Stars: ✭ 28 (-15.15%)
Mutual labels: vqa, visual-question-answering
h-transformer-1d: Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Stars: ✭ 121 (+266.67%)
Mutual labels: attention, attention-mechanism
datastories-semeval2017-task6: Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison"
Stars: ✭ 20 (-39.39%)
Mutual labels: attention, attention-mechanism
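Most of the projects above share the "attention" or "attention-mechanism" label. As background for readers unfamiliar with the idea, here is a minimal NumPy sketch of scaled dot-product attention, the building block behind the multi-head self-attention used in several of these repositories. The function name and toy shapes are illustrative, not taken from any of the listed projects.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Return (output, weights) for queries Q, keys K, values V.

    Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v).
    Each output row is a weighted average of the value rows,
    with weights given by a softmax over scaled query-key scores.
    """
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n_queries, n_keys)
    weights = softmax(scores, axis=-1)        # rows sum to 1
    return weights @ V, weights

# toy example: 2 queries attending over 3 key/value pairs of dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
```

Variants such as Bahdanau (additive) and Luong (multiplicative) attention, mentioned in the S2VT entry above, differ only in how the scores are computed before the softmax.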