EgoCNN: Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ⭐ 16 (-74.6%)
neuro-symbolic-ai-soc: Neuro-Symbolic Visual Question Answering on Sort-of-CLEVR using PyTorch
Stars: ⭐ 41 (-34.92%)
VideoNavQA: An alternative EQA paradigm and informative benchmark + models (BMVC 2019, ViGIL 2019 spotlight)
Stars: ⭐ 22 (-65.08%)
Awesome Visual Question Answering: A curated list of Visual Question Answering (VQA, covering image and video question answering), Visual Question Generation, Visual Dialog, Visual Commonsense Reasoning, and related areas
Stars: ⭐ 295 (+368.25%)
Vqa: CloudCV Visual Question Answering Demo
Stars: ⭐ 57 (-9.52%)
cfvqa: [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
Stars: ⭐ 96 (+52.38%)
Mac Network: Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Stars: ⭐ 444 (+604.76%)
Tbd Nets: PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"
Stars: ⭐ 345 (+447.62%)
rosita: ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration
Stars: ⭐ 36 (-42.86%)
Openvqa: A lightweight, scalable, and general framework for visual question answering research
Stars: ⭐ 198 (+214.29%)
just-ask: [TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ⭐ 57 (-9.52%)
Vqa Tensorflow: TensorFlow implementation of Deeper LSTM + normalized CNN for Visual Question Answering
Stars: ⭐ 98 (+55.56%)
probai-2019: Materials of the Nordic Probabilistic AI School 2019
Stars: ⭐ 127 (+101.59%)
Bottom Up Attention: Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
Stars: ⭐ 989 (+1469.84%)
NeuralPull: Implementation of the ICML 2021 paper "Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces"
Stars: ⭐ 149 (+136.51%)
Vqa.pytorch: Visual Question Answering in PyTorch
Stars: ⭐ 602 (+855.56%)
icml-nips-iclr-dataset: Papers, authors, and author affiliations from ICML, NeurIPS, and ICLR, 2006-2021
Stars: ⭐ 21 (-66.67%)
Oscar: Oscar and VinVL
Stars: ⭐ 396 (+528.57%)
Transformer-MM-Explainability: [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network; includes examples for DETR and VQA
Stars: ⭐ 484 (+668.25%)
MICCAI21 MMQ: Multiple Meta-model Quantifying for Medical Visual Question Answering
Stars: ⭐ 16 (-74.6%)
flowtorch-old: Separating Normalizing Flows code from Pyro and improving the API
Stars: ⭐ 36 (-42.86%)
FigureQA-baseline: TensorFlow implementation of the CNN-LSTM, Relation Network, and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Stars: ⭐ 28 (-55.56%)
RelationNetworks-CLEVR: A PyTorch implementation of "A simple neural network module for relational reasoning", working on the CLEVR dataset
Stars: ⭐ 83 (+31.75%)
Probability Theory: A quick introduction to the most important concepts of Probability Theory; only freshman-level mathematics is needed as a prerequisite
Stars: ⭐ 25 (-60.32%)
Pytorch Vqa: Strong baseline for visual question answering
Stars: ⭐ 158 (+150.79%)
probai-2021: Materials of the Nordic Probabilistic AI School 2021
Stars: ⭐ 83 (+31.75%)
Papers: A collection of computer-vision papers the author has read, on topics such as image-to-text generation and weakly supervised segmentation
Stars: ⭐ 99 (+57.14%)
NanoFlow: PyTorch implementation of the paper "NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity" (NeurIPS 2020)
Stars: ⭐ 63 (+0%)
Mullowbivqa: Hadamard Product for Low-rank Bilinear Pooling
Stars: ⭐ 57 (-9.52%)
blangSDK: Blang's software development kit
Stars: ⭐ 21 (-66.67%)
Conditional Batch Norm: PyTorch implementation of the NIPS 2017 paper "Modulating early visual processing by language"
Stars: ⭐ 51 (-19.05%)
hcrn-videoqa: Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral)
Stars: ⭐ 111 (+76.19%)
Vizwiz Vqa Pytorch: PyTorch VQA implementation that achieved top performance in the ECCV 2018 VizWiz Grand Challenge: Answering Visual Questions from Blind People
Stars: ⭐ 33 (-47.62%)
deeprob-kit: A Python library for deep probabilistic modeling
Stars: ⭐ 32 (-49.21%)
Bottom Up Attention Vqa: An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge
Stars: ⭐ 667 (+958.73%)
ACE: Code for the paper "Neural Network Attributions: A Causal Perspective" (ICML 2019)
Stars: ⭐ 47 (-25.4%)
Mmf: A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Stars: ⭐ 4,713 (+7380.95%)
multiple-objects-gan: Implementation for "Generating Multiple Objects at Spatially Distinct Locations" (ICLR 2019)
Stars: ⭐ 111 (+76.19%)
Awesome Vqa: Visual Q&A reading list
Stars: ⭐ 403 (+539.68%)
ZS-F-VQA: Code and data for the paper "Zero-shot Visual Question Answering using Knowledge Graph" (ISWC 2021)
Stars: ⭐ 51 (-19.05%)
Active-Passive-Losses: [ICML 2020] Normalized Loss Functions for Deep Learning with Noisy Labels
Stars: ⭐ 92 (+46.03%)
Nscl Pytorch Release: PyTorch implementation for the Neuro-Symbolic Concept Learner (NS-CL)
Stars: ⭐ 276 (+338.1%)
probai-2021-pyro: Repo for the tutorials of Day 1-Day 3 of the Nordic Probabilistic AI School 2021 (https://probabilistic.ai/)
Stars: ⭐ 45 (-28.57%)
bottom-up-features: Bottom-up features extractor implemented in PyTorch
Stars: ⭐ 62 (-1.59%)
MMCAcovid19.jl: Microscopic Markov Chain Approach to model the spreading of COVID-19
Stars: ⭐ 15 (-76.19%)
vqa-soft: Accompanying code for "A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models", a CVPR 2017 VQA workshop paper
Stars: ⭐ 14 (-77.78%)
self critical vqa: Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ⭐ 39 (-38.1%)
DVQA dataset: DVQA, a bar-chart question answering dataset presented at CVPR 2018
Stars: ⭐ 20 (-68.25%)
mmgnn textvqa: A PyTorch implementation of the CVPR 2020 paper "Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text"
Stars: ⭐ 41 (-34.92%)
AoA-pytorch: A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering
Stars: ⭐ 33 (-47.62%)
Clipbert: [CVPR 2021 Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning on image-text and video-text tasks
Stars: ⭐ 168 (+166.67%)
probabilistic-circuits: A curated collection of papers on probabilistic circuits, computational graphs encoding tractable probability distributions
Stars: ⭐ 33 (-47.62%)
iMIX: A framework for multimodal intelligence research from Inspur HSSLAB
Stars: ⭐ 21 (-66.67%)
TRAR-VQA: [ICCV 2021] TRAR: Routing the Attention Spans in Transformers for Visual Question Answering (official implementation)
Stars: ⭐ 49 (-22.22%)
unicornn: Official code for UnICORNN (ICML 2021)
Stars: ⭐ 21 (-66.67%)
FedScale: A scalable and extensible open-source federated learning (FL) platform
Stars: ⭐ 274 (+334.92%)
Vqa regat: Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
Stars: ⭐ 129 (+104.76%)