63 open-source projects that are alternatives to or similar to mmgnn_textvqa

vqa-soft
Accompanying code for the CVPR 2017 VQA workshop paper "A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models".
Stars: ✭ 14 (-65.85%)
Mutual labels:  vqa
Gnnpapers
Must-read papers on graph neural networks (GNN)
Stars: ✭ 12,293 (+29882.93%)
Mutual labels:  gnn
Conditional Batch Norm
PyTorch implementation of the NIPS 2017 paper "Modulating early visual processing by language"
Stars: ✭ 51 (+24.39%)
Mutual labels:  vqa
Nscl Pytorch Release
PyTorch implementation for the Neuro-Symbolic Concept Learner (NS-CL).
Stars: ✭ 276 (+573.17%)
Mutual labels:  vqa
PDN
The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Stars: ✭ 44 (+7.32%)
Mutual labels:  gnn
Papers
Notes on computer-vision papers I have read: image captioning, weakly supervised segmentation, etc.
Stars: ✭ 99 (+141.46%)
Mutual labels:  vqa
AoA-pytorch
A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering
Stars: ✭ 33 (-19.51%)
Mutual labels:  vqa
VideoNavQA
An alternative EQA paradigm and informative benchmark + models (BMVC 2019, ViGIL 2019 spotlight)
Stars: ✭ 22 (-46.34%)
Mutual labels:  vqa
gemnet pytorch
GemNet model in PyTorch, as proposed in "GemNet: Universal Directional Graph Neural Networks for Molecules" (NeurIPS 2021)
Stars: ✭ 80 (+95.12%)
Mutual labels:  gnn
Bottom Up Attention Vqa
An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
Stars: ✭ 667 (+1526.83%)
Mutual labels:  vqa
Tbd Nets
PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"
Stars: ✭ 345 (+741.46%)
Mutual labels:  vqa
gnn-re-ranking
A real-time GNN-based re-ranking method, from "Understanding Image Retrieval Re-Ranking: A Graph Neural Network Perspective"
Stars: ✭ 64 (+56.1%)
Mutual labels:  gnn
Vqa Mfb
Stars: ✭ 153 (+273.17%)
Mutual labels:  vqa
bottom-up-features
Bottom-up feature extractor implemented in PyTorch.
Stars: ✭ 62 (+51.22%)
Mutual labels:  vqa
ZS-F-VQA
Code and data for the paper "Zero-shot Visual Question Answering using Knowledge Graph" (ISWC 2021)
Stars: ✭ 51 (+24.39%)
Mutual labels:  vqa
DVQA dataset
DVQA dataset: a bar-chart question answering dataset presented at CVPR 2018
Stars: ✭ 20 (-51.22%)
Mutual labels:  vqa
Mullowbivqa
Hadamard Product for Low-rank Bilinear Pooling
Stars: ✭ 57 (+39.02%)
Mutual labels:  vqa
iMIX
A framework for Multimodal Intelligence research from Inspur HSSLAB.
Stars: ✭ 21 (-48.78%)
Mutual labels:  vqa
awesome-efficient-gnn
Code and resources on scalable and efficient Graph Neural Networks
Stars: ✭ 498 (+1114.63%)
Mutual labels:  gnn
mtad-gat-pytorch
PyTorch implementation of MTAD-GAT (Multivariate Time-Series Anomaly Detection via Graph Attention Networks) by Zhao et al. (2020, https://arxiv.org/abs/2009.02040).
Stars: ✭ 85 (+107.32%)
Mutual labels:  gnn
Vizwiz Vqa Pytorch
PyTorch VQA implementation that achieved top performance in the VizWiz Grand Challenge (ECCV 2018): Answering Visual Questions from Blind People
Stars: ✭ 33 (-19.51%)
Mutual labels:  vqa
GNNs-in-Network-Neuroscience
A review of papers published in 2017-2020 proposing novel GNN methods applied to brain connectivity.
Stars: ✭ 92 (+124.39%)
Mutual labels:  gnn
self critical vqa
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (-4.88%)
Mutual labels:  vqa
gnn-lspe
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Stars: ✭ 165 (+302.44%)
Mutual labels:  gnn
Mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Stars: ✭ 4,713 (+11395.12%)
Mutual labels:  vqa
Oscar
Oscar and VinVL
Stars: ✭ 396 (+865.85%)
Mutual labels:  vqa
GNN-Recommender-Systems
An index of recommendation algorithms that are based on Graph Neural Networks.
Stars: ✭ 505 (+1131.71%)
Mutual labels:  gnn
Pytorch Vqa
Strong baseline for visual question answering
Stars: ✭ 158 (+285.37%)
Mutual labels:  vqa
Awesome Visual Question Answering
A curated list of Visual Question Answering (VQA, covering image and video question answering), Visual Question Generation, Visual Dialog, Visual Commonsense Reasoning, and related areas.
Stars: ✭ 295 (+619.51%)
Mutual labels:  vqa
3DInfomax
Making self-supervised learning work on molecules by using their 3D geometry to pre-train GNNs. Implemented in DGL and PyTorch Geometric.
Stars: ✭ 107 (+160.98%)
Mutual labels:  gnn
MICCAI21 MMQ
Multiple Meta-model Quantifying for Medical Visual Question Answering
Stars: ✭ 16 (-60.98%)
Mutual labels:  vqa
Vqa regat
Research Code for ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
Stars: ✭ 129 (+214.63%)
Mutual labels:  vqa
rosita
ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration
Stars: ✭ 36 (-12.2%)
Mutual labels:  vqa
spatio-temporal-brain
A Deep Graph Neural Network Architecture for Modelling Spatio-temporal Dynamics in rs-fMRI Data
Stars: ✭ 22 (-46.34%)
Mutual labels:  gnn
FigureQA-baseline
TensorFlow implementation of the CNN-LSTM, Relation Network and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Stars: ✭ 28 (-31.71%)
Mutual labels:  vqa
Vqa Tensorflow
TensorFlow implementation of Deeper LSTM + Normalized CNN for Visual Question Answering
Stars: ✭ 98 (+139.02%)
Mutual labels:  vqa
just-ask
[TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ✭ 57 (+39.02%)
Mutual labels:  vqa
GraphMix
Code for reproducing the results in the GraphMix paper
Stars: ✭ 64 (+56.1%)
Mutual labels:  gnn
probnmn-clevr
Code for ICML 2019 paper "Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering" [long-oral]
Stars: ✭ 63 (+53.66%)
Mutual labels:  vqa
Vqa
CloudCV Visual Question Answering Demo
Stars: ✭ 57 (+39.02%)
Mutual labels:  vqa
Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network, including examples for DETR and VQA.
Stars: ✭ 484 (+1080.49%)
Mutual labels:  vqa
Causing
Causing: CAUsal INterpretation using Graphs
Stars: ✭ 47 (+14.63%)
Mutual labels:  gnn
gnn
TensorFlow GNN is a library to build Graph Neural Networks on the TensorFlow platform.
Stars: ✭ 558 (+1260.98%)
Mutual labels:  gnn
Bottom Up Attention
Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
Stars: ✭ 989 (+2312.2%)
Mutual labels:  vqa
Meta-Fine-Tuning
[CVPR 2020 VL3] The repository for meta fine-tuning in cross-domain few-shot learning.
Stars: ✭ 29 (-29.27%)
Mutual labels:  gnn
neuro-symbolic-ai-soc
Neuro-Symbolic Visual Question Answering on Sort-of-CLEVR using PyTorch
Stars: ✭ 41 (+0%)
Mutual labels:  vqa
Literatures-on-GNN-Acceleration
A reading list for deep graph learning acceleration.
Stars: ✭ 50 (+21.95%)
Mutual labels:  gnn
Visual Question Answering
📷 ❓ Visual Question Answering Demo and Algorithmia API
Stars: ✭ 18 (-56.1%)
Mutual labels:  vqa
VectorNet
PyTorch implementation of the CVPR 2020 paper "VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation"
Stars: ✭ 88 (+114.63%)
Mutual labels:  gnn
hcrn-videoqa
Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral)
Stars: ✭ 111 (+170.73%)
Mutual labels:  vqa
stagin
STAGIN: Spatio-Temporal Attention Graph Isomorphism Network
Stars: ✭ 34 (-17.07%)
Mutual labels:  gnn
Vqa.pytorch
Visual Question Answering in PyTorch
Stars: ✭ 602 (+1368.29%)
Mutual labels:  vqa
GCL
List of Publications in Graph Contrastive Learning
Stars: ✭ 25 (-39.02%)
Mutual labels:  gnn
Openvqa
A lightweight, scalable, and general framework for visual question answering research
Stars: ✭ 198 (+382.93%)
Mutual labels:  vqa
Mac Network
Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Stars: ✭ 444 (+982.93%)
Mutual labels:  vqa
ncem
Learning cell communication from spatial graphs of cells
Stars: ✭ 77 (+87.8%)
Mutual labels:  gnn
Awesome-Federated-Learning-on-Graph-and-GNN-papers
Federated learning on graphs, especially with graph neural networks (GNNs), knowledge graphs, and private GNNs.
Stars: ✭ 206 (+402.44%)
Mutual labels:  gnn
cfvqa
[CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
Stars: ✭ 96 (+134.15%)
Mutual labels:  vqa
Clipbert
[CVPR 2021 Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning for image-text and video-text tasks.
Stars: ✭ 168 (+309.76%)
Mutual labels:  vqa
Awesome Vqa
Visual Q&A reading list
Stars: ✭ 403 (+882.93%)
Mutual labels:  vqa