iMIX - A framework for Multimodal Intelligence research from Inspur HSSLAB.
Stars: ✭ 21 (-4.55%)
Mmf - A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Stars: ✭ 4,713 (+21322.73%)
Mullowbivqa - Hadamard Product for Low-rank Bilinear Pooling
Stars: ✭ 57 (+159.09%)
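The core idea behind low-rank bilinear pooling (the technique Mullowbivqa implements) is to replace a full bilinear interaction between a question vector and an image vector with a Hadamard product in a shared joint space. A minimal NumPy sketch, with toy dimensions and random matrices standing in for learned parameters (all names here are illustrative, not the repo's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): question, image, joint, and output sizes.
d_q, d_v, d_joint, d_out = 8, 10, 16, 4

# Projection matrices that would normally be learned.
U = rng.standard_normal((d_q, d_joint))
V = rng.standard_normal((d_v, d_joint))
P = rng.standard_normal((d_joint, d_out))

def low_rank_bilinear_pool(q, v):
    """Fuse a question vector q and an image vector v.

    A full bilinear map needs one d_q x d_v matrix per output unit.
    Factoring each such matrix as an outer product of two low-rank
    projections reduces the interaction to an elementwise (Hadamard)
    product in the joint space, followed by a linear output projection.
    """
    return (np.tanh(q @ U) * np.tanh(v @ V)) @ P

q = rng.standard_normal(d_q)
v = rng.standard_normal(d_v)
fused = low_rank_bilinear_pool(q, v)
print(fused.shape)  # (4,)
```

The parameter count drops from `d_out * d_q * d_v` for a full bilinear map to roughly `(d_q + d_v + d_out) * d_joint`, which is what makes this fusion practical at VQA feature sizes.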
hcrn-videoqa - Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral)
Stars: ✭ 111 (+404.55%)
DVQA dataset - A bar-chart question answering dataset presented at CVPR 2018
Stars: ✭ 20 (-9.09%)
Mac Network - Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Stars: ✭ 444 (+1918.18%)
Vqa Tensorflow - TensorFlow implementation of Deeper LSTM + normalized CNN for Visual Question Answering
Stars: ✭ 98 (+345.45%)
MICCAI21 MMQ - Multiple Meta-model Quantifying for Medical Visual Question Answering
Stars: ✭ 16 (-27.27%)
AoA-pytorch - A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering
Stars: ✭ 33 (+50%)
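Attention on Attention (the mechanism AoA-pytorch implements) runs a standard attention step, then gates the attended result: an "information vector" and a sigmoid "attention gate" are both computed from the attended vector concatenated with the query, and multiplied elementwise. A minimal NumPy sketch with toy sizes and random weights in place of learned parameters (names are illustrative, not the repo's API):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6  # feature size (toy)

# Weights that would normally be learned.
W_att = rng.standard_normal((d, d))
W_info = rng.standard_normal((2 * d, d))
W_gate = rng.standard_normal((2 * d, d))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_on_attention(q, keys, values):
    """One attention step followed by the AoA gate."""
    weights = softmax(keys @ W_att @ q)            # attention weights over keys
    v_hat = weights @ values                       # attended vector
    qv = np.concatenate([v_hat, q])                # [attended; query]
    info = qv @ W_info                             # information vector
    gate = 1.0 / (1.0 + np.exp(-(qv @ W_gate)))    # sigmoid attention gate
    return info * gate                             # gated output

q = rng.standard_normal(d)
keys = rng.standard_normal((5, d))
values = rng.standard_normal((5, d))
out = attention_on_attention(q, keys, values)
print(out.shape)  # (6,)
```

The gate lets the module suppress the attended result when the query and the attention output are poorly matched, which is the paper's remedy for attention attending to irrelevant regions.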
Bottom Up Attention Vqa - An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
Stars: ✭ 667 (+2931.82%)
mmgnn textvqa - A PyTorch implementation of the CVPR 2020 paper "Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text"
Stars: ✭ 41 (+86.36%)
FigureQA-baseline - TensorFlow implementation of the CNN-LSTM, Relation Network, and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Stars: ✭ 28 (+27.27%)
Vizwiz Vqa Pytorch - PyTorch VQA implementation that achieved top performance in the VizWiz Grand Challenge (ECCV 2018): Answering Visual Questions from Blind People
Stars: ✭ 33 (+50%)
probnmn-clevr - Code for the ICML 2019 paper "Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering" (long oral)
Stars: ✭ 63 (+186.36%)
ZS-F-VQA - Code and data for the paper "Zero-shot Visual Question Answering using Knowledge Graph" (ISWC 2021)
Stars: ✭ 51 (+131.82%)
cfvqa - [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
Stars: ✭ 96 (+336.36%)
Cmrc2018 - A span-extraction dataset for Chinese machine reading comprehension (CMRC 2018)
Stars: ✭ 238 (+981.82%)
self critical vqa - Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (+77.27%)
Papers - Notes on computer vision papers the author has read, covering image captioning, weakly supervised segmentation, and more
Stars: ✭ 99 (+350%)
Awesome Vqa - Visual Q&A reading list
Stars: ✭ 403 (+1731.82%)
Forum - Love Laravel? Become a Jedi and help other Padawans
Stars: ✭ 233 (+959.09%)
Tbd Nets - PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"
Stars: ✭ 345 (+1468.18%)
Flowqa - Implementation of the conversational QA model FlowQA (with slight improvements)
Stars: ✭ 194 (+781.82%)
Bottom Up Attention - Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
Stars: ✭ 989 (+4395.45%)
Simpletransformers - Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
Stars: ✭ 2,881 (+12995.45%)
just-ask - [TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ✭ 57 (+159.09%)
Pytorch Vqa - Strong baseline for visual question answering
Stars: ✭ 158 (+618.18%)
Awesome Kgqa - A collection of materials on knowledge graph question answering
Stars: ✭ 188 (+754.55%)
Nscl Pytorch Release - PyTorch implementation of the Neuro-Symbolic Concept Learner (NS-CL).
Stars: ✭ 276 (+1154.55%)
Openqa - Source code for the ACL 2018 paper "Denoising Distantly Supervised Open-Domain Question Answering"
Stars: ✭ 188 (+754.55%)
Transformer-MM-Explainability - [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network; includes examples for DETR and VQA.
Stars: ✭ 484 (+2100%)
tsflex - Flexible time series feature extraction & processing
Stars: ✭ 252 (+1045.45%)
Vqa.pytorch - Visual Question Answering in PyTorch
Stars: ✭ 602 (+2636.36%)
Vqa regat - Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
Stars: ✭ 129 (+486.36%)
Jack - Jack the Reader
Stars: ✭ 242 (+1000%)
Mspars
Stars: ✭ 177 (+704.55%)
Dmn Tensorflow - Dynamic Memory Networks (https://arxiv.org/abs/1603.01417) in TensorFlow
Stars: ✭ 236 (+972.73%)
cmrc2017 - The First Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC 2017)
Stars: ✭ 90 (+309.09%)
Tensorflow Dsmm - TensorFlow implementations of various Deep Semantic Matching Models (DSMM).
Stars: ✭ 217 (+886.36%)
Oscar - Oscar and VinVL
Stars: ✭ 396 (+1700%)
Kb Qa - A Chinese question answering system over a knowledge base (biLSTM)
Stars: ✭ 195 (+786.36%)
bottom-up-features - Bottom-up feature extractor implemented in PyTorch.
Stars: ✭ 62 (+181.82%)
Rat Sql - A relation-aware semantic parsing model from English to SQL
Stars: ✭ 169 (+668.18%)
Anyq - FAQ-based Question Answering System
Stars: ✭ 2,336 (+10518.18%)
Awesome Visual Question Answering - A curated list of Visual Question Answering (VQA, including image/video question answering), Visual Question Generation, Visual Dialog, Visual Commonsense Reasoning, and related areas.
Stars: ✭ 295 (+1240.91%)
Openvqa - A lightweight, scalable, and general framework for visual question answering research
Stars: ✭ 198 (+800%)
Triviaqa - Code for the TriviaQA reading comprehension dataset
Stars: ✭ 184 (+736.36%)
Hq bot - 📲 Bot to help solve HQ trivia
Stars: ✭ 167 (+659.09%)
Questgen.ai - Question generation using state-of-the-art Natural Language Processing algorithms
Stars: ✭ 169 (+668.18%)
Vqa - CloudCV Visual Question Answering Demo
Stars: ✭ 57 (+159.09%)
rosita - ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration
Stars: ✭ 36 (+63.64%)
DrFAQ - A plug-and-play question answering NLP chatbot that can be applied to any organisation's text corpora.
Stars: ✭ 29 (+31.82%)
neuro-symbolic-ai-soc - Neuro-Symbolic Visual Question Answering on Sort-of-CLEVR using PyTorch
Stars: ✭ 41 (+86.36%)
Clipbert - [CVPR 2021 Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning on image-text and video-text tasks.
Stars: ✭ 168 (+663.64%)
Conditional Batch Norm - PyTorch implementation of the NIPS 2017 paper "Modulating early visual processing by language"
Stars: ✭ 51 (+131.82%)
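Conditional batch normalization (the mechanism behind the entry above) normalizes visual features as usual, but predicts the per-channel scale and shift from a language embedding, so the question modulates early visual processing. A minimal NumPy sketch with toy sizes; the linear maps stand in for the small MLPs the paper learns, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
C = 4        # number of channels (toy)
d_lang = 8   # language embedding size (toy)

# Maps from the language embedding to per-channel deltas; normally learned.
W_gamma = rng.standard_normal((d_lang, C)) * 0.1
W_beta = rng.standard_normal((d_lang, C)) * 0.1

def conditional_batch_norm(x, lang, eps=1e-5):
    """Batch-normalize x of shape (N, C), then scale and shift each
    channel with gamma and beta predicted from the language embedding."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    gamma = 1.0 + lang @ W_gamma   # predict a delta around the usual gamma=1
    beta = lang @ W_beta
    return gamma * x_hat + beta

x = rng.standard_normal((16, C))      # a batch of per-channel features
lang = rng.standard_normal(d_lang)    # e.g. an LSTM question embedding
y = conditional_batch_norm(x, lang)
print(y.shape)  # (16, 4)
```

Predicting a delta around the identity (gamma = 1, beta = 0) keeps the layer close to ordinary batch norm at initialization, which is what makes the conditioning stable to train.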
vqa-soft - Accompanying code for the CVPR 2017 VQA workshop paper "A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models".
Stars: ✭ 14 (-36.36%)