99 open-source projects that are alternatives to, or similar to, self_critical_vqa

Pytorch Grad Cam
Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM.
Stars: ✭ 3,814 (+9679.49%)
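The core of Grad-CAM is compact enough to state in a few lines: channel weights come from globally average-pooling the gradients of the target score with respect to a convolutional layer's activations, and the heatmap is the ReLU of the weighted sum of activation channels. A minimal NumPy sketch of that computation (not the library's API; the toy activations and gradients below are made up for illustration):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM. Both inputs have shape (C, H, W).

    Channel weights are the spatially averaged gradients; the map is
    the ReLU of the weighted sum of activation channels.
    """
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)                          # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam

# Toy example: 2 channels on a 2x2 map (illustration only).
acts = np.array([[[1.0, 0.0], [0.0, 0.0]],
                 [[0.0, 2.0], [0.0, 0.0]]])
grads = np.array([[[0.4, 0.4], [0.4, 0.4]],   # mean gradient 0.4
                  [[0.2, 0.2], [0.2, 0.2]]])  # mean gradient 0.2
cam = grad_cam(acts, grads)
```

In practice the library hooks a real model to capture the activations and gradients; this sketch only shows how the map itself is built from them.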
Transformer-MM-Explainability
[ICCV 2021, Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (+1141.03%)
Mutual labels:  vqa, explainable-ai
zennit
Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (+46.15%)
Mutual labels:  interpretable-ai, explainable-ai
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021.
Stars: ✭ 47 (+20.51%)
ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
Stars: ✭ 62 (+58.97%)
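For context on what such networks approximate: the exact Shapley value of a feature averages its marginal contribution over all coalitions of the remaining features, which costs O(2^n) model evaluations; learned explainers such as the paper's amortize that cost. A brute-force reference implementation (illustrative only; `f`, `x`, and `baseline` are hypothetical, and absent features are replaced by baseline values):

```python
import math
from itertools import combinations

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f by enumerating feature subsets.

    Exponential in the number of features -- illustration only.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                weight = (math.factorial(k) * math.factorial(n - k - 1)
                          / math.factorial(n))
                with_i = [x[j] if j in subset or j == i else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# For an additive model, feature i's Shapley value is its own term.
f = lambda v: 2 * v[0] + 3 * v[1]
phi = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
```

A useful sanity check is the efficiency axiom: the attributions sum to f(x) - f(baseline).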
bottom-up-features
Bottom-up feature extractor implemented in PyTorch.
Stars: ✭ 62 (+58.97%)
Mutual labels:  vqa, visual-question-answering
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+182.05%)
Interpret
Fit interpretable models. Explain black-box machine learning.
Stars: ✭ 4,352 (+11058.97%)
Mutual labels:  interpretable-ai, explainable-ai
path explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
Stars: ✭ 151 (+287.18%)
FigureQA-baseline
TensorFlow implementation of the CNN-LSTM, Relation Network and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Stars: ✭ 28 (-28.21%)
Mutual labels:  vqa, visual-question-answering
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+6064.1%)
just-ask
[TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ✭ 57 (+46.15%)
Mutual labels:  vqa, visual-question-answering
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-61.54%)
Mutual labels:  interpretable-ai, explainable-ai
WhiteBox-Part1
Introduces and experiments with ways to interpret and evaluate models in the image domain (PyTorch).
Stars: ✭ 34 (-12.82%)
Mutual labels:  interpretable-ai, explainable-ai
AoA-pytorch
A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering.
Stars: ✭ 33 (-15.38%)
Mutual labels:  vqa, visual-question-answering
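The Attention on Attention idea can be summarized as a learned gate over the usual attention output: from the query and the attended vector, the module builds an "information" vector and a sigmoid gate, then multiplies them elementwise to suppress irrelevant attention results. A hedged NumPy sketch (matrix names and sizes are illustrative, not the repository's API):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_on_attention(q, v_att, W_i, W_g):
    """Gate the attended vector: build an information vector and a
    sigmoid gate from [q; v_att], then combine them elementwise."""
    h = np.concatenate([q, v_att])       # query and attention result
    return sigmoid(W_g @ h) * (W_i @ h)  # gate * information

# Toy dimensions, chosen only for the sketch.
rng = np.random.default_rng(0)
q, v_att = rng.standard_normal(4), rng.standard_normal(4)
W_i, W_g = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
out = attention_on_attention(q, v_att, W_i, W_g)
```

Because the gate lies in (0, 1), every component of the output is attenuated relative to the raw information vector.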
iMIX
A framework for Multimodal Intelligence research from Inspur HSSLAB.
Stars: ✭ 21 (-46.15%)
Mutual labels:  vqa
Mac Network
Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Stars: ✭ 444 (+1038.46%)
Mutual labels:  vqa
hcrn-videoqa
Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral)
Stars: ✭ 111 (+184.62%)
Mutual labels:  vqa
ZS-F-VQA
Code and data for the paper: Zero-shot Visual Question Answering using Knowledge Graph [ISWC 2021]
Stars: ✭ 51 (+30.77%)
Mutual labels:  vqa
Vqa
CloudCV Visual Question Answering Demo
Stars: ✭ 57 (+46.15%)
Mutual labels:  vqa
Oscar
Oscar and VinVL
Stars: ✭ 396 (+915.38%)
Mutual labels:  vqa
neuro-symbolic-ai-soc
Neuro-Symbolic Visual Question Answering on Sort-of-CLEVR using PyTorch
Stars: ✭ 41 (+5.13%)
Mutual labels:  vqa
KVQA
Korean Visual Question Answering
Stars: ✭ 44 (+12.82%)
Mutual labels:  visual-question-answering
Awesome Visual Question Answering
A curated list of Visual Question Answering (VQA) resources, covering image/video question answering, visual question generation, visual dialog, visual commonsense reasoning, and related areas.
Stars: ✭ 295 (+656.41%)
Mutual labels:  vqa
VCML
PyTorch implementation of the paper "Visual Concept-Metaconcept Learner", NeurIPS 2019
Stars: ✭ 45 (+15.38%)
Mutual labels:  visual-question-answering
hexia
Mid-level PyTorch-based framework for Visual Question Answering.
Stars: ✭ 24 (-38.46%)
Mutual labels:  visual-question-answering
Mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Stars: ✭ 4,713 (+11984.62%)
Mutual labels:  vqa
Vqa regat
Research Code for ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
Stars: ✭ 129 (+230.77%)
Mutual labels:  vqa
Bottom Up Attention
Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
Stars: ✭ 989 (+2435.9%)
Mutual labels:  vqa
MICCAI21 MMQ
Multiple Meta-model Quantifying for Medical Visual Question Answering
Stars: ✭ 16 (-58.97%)
Mutual labels:  vqa
RelationNetworks-CLEVR
A PyTorch implementation of "A simple neural network module for relational reasoning", working on the CLEVR dataset
Stars: ✭ 83 (+112.82%)
Mutual labels:  visual-question-answering
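The relational reasoning module in that paper reduces to one aggregation pattern: run a shared function g over every ordered pair of object representations, sum the results, and pass the aggregate through f. Because the sum is order-independent, the output is invariant to how the objects are listed. A NumPy sketch with toy linear stand-ins for the paper's MLPs (all names and sizes are illustrative):

```python
import numpy as np

def relation_network(objects, g, f):
    """Core Relation Network step: apply g to every ordered pair of
    object representations, sum the results, then apply f."""
    total = sum(g(np.concatenate([oi, oj]))
                for oi in objects for oj in objects)
    return f(total)

# Toy g/f: fixed linear maps standing in for the paper's MLPs.
rng = np.random.default_rng(0)
W_g = rng.standard_normal((4, 8))  # pair of 4-d objects -> 4-d relation
W_f = rng.standard_normal((2, 4))  # aggregated relations -> 2-d output
objects = [rng.standard_normal(4) for _ in range(3)]
out = relation_network(objects, lambda p: W_g @ p, lambda h: W_f @ h)
```

The permutation invariance is easy to verify: shuffling `objects` leaves `out` unchanged.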
mmgnn textvqa
A PyTorch implementation of the CVPR 2020 paper: Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text
Stars: ✭ 41 (+5.13%)
Mutual labels:  vqa
Mullowbivqa
Hadamard Product for Low-rank Bilinear Pooling
Stars: ✭ 57 (+46.15%)
Mutual labels:  vqa
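The titular trick: a full bilinear interaction between question and image features requires a cubic-size weight tensor, but projecting both modalities into a shared low-rank space and fusing them with a Hadamard (elementwise) product approximates it cheaply. A shape-level NumPy sketch (all dimensions are hypothetical):

```python
import numpy as np

def low_rank_bilinear(q, v, U, V, P):
    """Approximate bilinear pooling: project each modality to a shared
    d-dim space, fuse with an elementwise (Hadamard) product, then
    project to the output space."""
    return P.T @ (np.tanh(U.T @ q) * np.tanh(V.T @ v))

# Toy dimensions, chosen only for the sketch.
rng = np.random.default_rng(0)
q = rng.standard_normal(6)        # question feature
v = rng.standard_normal(5)        # visual feature
U = rng.standard_normal((6, 8))   # 8 = shared low-rank dimension
V = rng.standard_normal((5, 8))
P = rng.standard_normal((8, 3))   # 3 = fused output dimension
fused = low_rank_bilinear(q, v, U, V, P)
```

The parameter count grows linearly in the low-rank dimension instead of multiplicatively in the two input dimensions, which is the point of the factorization.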
cfvqa
[CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
Stars: ✭ 96 (+146.15%)
Mutual labels:  vqa
Awesome Vqa
Visual Q&A reading list
Stars: ✭ 403 (+933.33%)
Mutual labels:  vqa
VideoNavQA
An alternative EQA paradigm and informative benchmark + models (BMVC 2019, ViGIL 2019 spotlight)
Stars: ✭ 22 (-43.59%)
Mutual labels:  vqa
Vqa Mfb
Stars: ✭ 153 (+292.31%)
Mutual labels:  vqa
convolutional-vqa
No description or website provided.
Stars: ✭ 39 (+0%)
Mutual labels:  visual-question-answering
Tbd Nets
PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"
Stars: ✭ 345 (+784.62%)
Mutual labels:  vqa
detect-shortcuts
Repo for ICCV 2021 paper: Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering
Stars: ✭ 17 (-56.41%)
Mutual labels:  visual-question-answering
Conditional Batch Norm
PyTorch implementation of the NIPS 2017 paper "Modulating early visual processing by language"
Stars: ✭ 51 (+30.77%)
Mutual labels:  vqa
TRAR-VQA
[ICCV 2021] TRAR: Routing the Attention Spans in Transformers for Visual Question Answering (official implementation)
Stars: ✭ 49 (+25.64%)
Mutual labels:  visual-question-answering
Nscl Pytorch Release
PyTorch implementation for the Neuro-Symbolic Concept Learner (NS-CL).
Stars: ✭ 276 (+607.69%)
Mutual labels:  vqa
Clipbert
[CVPR 2021 Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning for image-text and video-text tasks.
Stars: ✭ 168 (+330.77%)
Mutual labels:  vqa
Layerwise-Relevance-Propagation
Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers
Stars: ✭ 78 (+100%)
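For a single linear layer, the epsilon rule that most LRP implementations start from redistributes each output's relevance to the inputs in proportion to their contributions a_j * w_jk to the pre-activation, with a small epsilon stabilizing near-zero denominators. A minimal NumPy sketch (illustrative values; real implementations apply this layer by layer from the output backwards):

```python
import numpy as np

def lrp_epsilon(a, w, relevance_out, eps=1e-6):
    """Epsilon-rule LRP for one linear layer z = a @ w.

    Each output's relevance is split among inputs in proportion to
    their contribution a_j * w_jk to the pre-activation z_k.
    """
    z = a @ w                  # (K,) pre-activations
    z = z + eps * np.sign(z)   # stabilize small denominators
    s = relevance_out / z      # (K,) relevance per unit pre-activation
    return a * (w @ s)         # (J,) input relevances

# Toy layer: 2 inputs, 2 outputs (illustration only).
a = np.array([1.0, 2.0])
w = np.array([[1.0, 0.0],
              [1.0, 1.0]])
r_in = lrp_epsilon(a, w, relevance_out=np.array([3.0, 2.0]))
```

Up to the epsilon term, the rule conserves relevance: the input relevances sum to the output relevance.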
CNN-Units-in-NLP
✂️ Repository for our ICLR 2019 paper: Discovery of Natural Language Concepts in Individual Units of CNNs
Stars: ✭ 26 (-33.33%)
Vizwiz Vqa Pytorch
PyTorch VQA implementation that achieved top performance in the VizWiz Grand Challenge (ECCV 2018): Answering Visual Questions from Blind People
Stars: ✭ 33 (-15.38%)
Mutual labels:  vqa
rosita
ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration
Stars: ✭ 36 (-7.69%)
Mutual labels:  vqa
InterpretableCNN
No description or website provided.
Stars: ✭ 36 (-7.69%)
vqa-soft
Accompanying code for the CVPR 2017 VQA workshop paper "A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models".
Stars: ✭ 14 (-64.1%)
Mutual labels:  vqa
Papers
A collection of computer vision papers the author has read, covering topics such as image captioning and weakly supervised segmentation.
Stars: ✭ 99 (+153.85%)
Mutual labels:  vqa
Visual Question Answering
📷 ❓ Visual Question Answering Demo and Algorithmia API
Stars: ✭ 18 (-53.85%)
Mutual labels:  vqa
rl singing voice
Unsupervised Representation Learning for Singing Voice Separation
Stars: ✭ 18 (-53.85%)
DVQA dataset
DVQA Dataset: a bar-chart question answering dataset presented at CVPR 2018
Stars: ✭ 20 (-48.72%)
Mutual labels:  vqa
ISeeU
ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU
Stars: ✭ 20 (-48.72%)
Bottom Up Attention Vqa
An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
Stars: ✭ 667 (+1610.26%)
Mutual labels:  vqa
m-phate
Multislice PHATE for tensor embeddings
Stars: ✭ 54 (+38.46%)
redunet paper
Official NumPy Implementation of Deep Networks from the Principle of Rate Reduction (2021)
Stars: ✭ 49 (+25.64%)
Captum
Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+7156.41%)
Mutual labels:  interpretable-ai
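One of the attribution methods Captum ships, Integrated Gradients, is compact enough to sketch from its definition: attribute (x - baseline) times the average gradient along the straight-line path from baseline to x. A NumPy sketch of the idea using an analytic gradient (this is not Captum's API; with Captum you would attach it to a PyTorch model instead):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Integrated Gradients: (x - baseline) times the average gradient
    along the straight-line path from baseline to x, approximated with
    a midpoint Riemann sum."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean([grad_f(baseline + a * (x - baseline))
                        for a in alphas], axis=0)
    return (x - baseline) * avg_grad

# Toy function f(x) = sum(x_i^2), whose gradient is 2x.
x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(lambda v: 2 * v, x, baseline)
```

The completeness axiom makes a handy sanity check: the attributions sum to f(x) - f(baseline).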
Openvqa
A lightweight, scalable, and general framework for visual question answering research
Stars: ✭ 198 (+407.69%)
Mutual labels:  vqa
1-60 of 99 similar projects