234 Open source projects that are alternatives of or similar to hcrn-videoqa

iPerceive
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (-53.15%)
Mutual labels:  question-answering, videoqa
just-ask
[TPAMI Special Issue on ICCV 2021 Best Papers, Oral] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
Stars: ✭ 57 (-48.65%)
Mutual labels:  vqa, videoqa
VideoNavQA
An alternative EQA paradigm and informative benchmark + models (BMVC 2019, ViGIL 2019 spotlight)
Stars: ✭ 22 (-80.18%)
Mutual labels:  vqa, question-answering
DVQA dataset
DVQA Dataset: A Bar chart question answering dataset presented at CVPR 2018
Stars: ✭ 20 (-81.98%)
Mutual labels:  vqa, question-answering
MICCAI21 MMQ
Multiple Meta-model Quantifying for Medical Visual Question Answering
Stars: ✭ 16 (-85.59%)
Mutual labels:  vqa, question-answering
Vqa Tensorflow
Tensorflow Implementation of Deeper LSTM+ normalized CNN for Visual Question Answering
Stars: ✭ 98 (-11.71%)
Mutual labels:  vqa, question-answering
Mullowbivqa
Hadamard Product for Low-rank Bilinear Pooling
Stars: ✭ 57 (-48.65%)
Mutual labels:  vqa, question-answering
Mac Network
Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Stars: ✭ 444 (+300%)
Mutual labels:  vqa, question-answering
Awesome Visual Question Answering
A curated list of Visual Question Answering (VQA, covering image and video question answering), Visual Question Generation, Visual Dialog, Visual Commonsense Reasoning, and related areas.
Stars: ✭ 295 (+165.77%)
Mutual labels:  vqa
Vqa regat
Research Code for ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering"
Stars: ✭ 129 (+16.22%)
Mutual labels:  vqa
rosita
ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration
Stars: ✭ 36 (-67.57%)
Mutual labels:  vqa
Oscar
Oscar and VinVL
Stars: ✭ 396 (+256.76%)
Mutual labels:  vqa
Pytorch Vqa
Strong baseline for visual question answering
Stars: ✭ 158 (+42.34%)
Mutual labels:  vqa
Nscl Pytorch Release
PyTorch implementation for the Neuro-Symbolic Concept Learner (NS-CL).
Stars: ✭ 276 (+148.65%)
Mutual labels:  vqa
FinBERT-QA
Financial Domain Question Answering with pre-trained BERT Language Model
Stars: ✭ 70 (-36.94%)
Mutual labels:  question-answering
Papers
A collection of computer vision papers the author has read, covering image captioning, weakly supervised segmentation, and more.
Stars: ✭ 99 (-10.81%)
Mutual labels:  vqa
FigureQA-baseline
TensorFlow implementation of the CNN-LSTM, Relation Network and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Stars: ✭ 28 (-74.77%)
Mutual labels:  vqa
CPPNotes
[C++ interview prep + C++ study guide] Covers most of the core knowledge a C++ programmer needs to master.
Stars: ✭ 557 (+401.8%)
Mutual labels:  question-answering
probnmn-clevr
Code for ICML 2019 paper "Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering" [long-oral]
Stars: ✭ 63 (-43.24%)
Mutual labels:  vqa
DrFAQ
DrFAQ is a plug-and-play question answering NLP chatbot that can be generally applied to any organisation's text corpora.
Stars: ✭ 29 (-73.87%)
Mutual labels:  question-answering
Vqa
CloudCV Visual Question Answering Demo
Stars: ✭ 57 (-48.65%)
Mutual labels:  vqa
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (+336.04%)
Mutual labels:  vqa
Agriculture knowledgegraph
Agricultural Knowledge Graph (AgriKG): information retrieval, named entity recognition, relation extraction, intelligent question answering, and decision support for the agriculture domain.
Stars: ✭ 2,957 (+2563.96%)
Mutual labels:  question-answering
Bottom Up Attention
Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome
Stars: ✭ 989 (+790.99%)
Mutual labels:  vqa
Cmrc2018
A Span-Extraction Dataset for Chinese Machine Reading Comprehension (CMRC 2018)
Stars: ✭ 238 (+114.41%)
Mutual labels:  question-answering
Forum
Love Laravel? Become a Jedi and help other Padawans.
Stars: ✭ 233 (+109.91%)
Mutual labels:  question-answering
Awesome Vqa
Visual Q&A reading list
Stars: ✭ 403 (+263.06%)
Mutual labels:  vqa
Clipbert
[CVPR 2021 Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning for image-text and video-text tasks.
Stars: ✭ 168 (+51.35%)
Mutual labels:  vqa
Tbd Nets
PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"
Stars: ✭ 345 (+210.81%)
Mutual labels:  vqa
ZS-F-VQA
Code and Data for paper: Zero-shot Visual Question Answering using Knowledge Graph [ ISWC 2021 ]
Stars: ✭ 51 (-54.05%)
Mutual labels:  vqa
Vqa Mfb
Stars: ✭ 153 (+37.84%)
Mutual labels:  vqa
neuro-symbolic-ai-soc
Neuro-Symbolic Visual Question Answering on Sort-of-CLEVR using PyTorch
Stars: ✭ 41 (-63.06%)
Mutual labels:  vqa
Visual Question Answering
📷 ❓ Visual Question Answering Demo and Algorithmia API
Stars: ✭ 18 (-83.78%)
Mutual labels:  vqa
Awesome Deep Learning And Machine Learning Questions
[Updated irregularly] A curated collection of valuable questions about deep learning, machine learning, reinforcement learning, and data science, gathered from sites such as Zhihu, Quora, Reddit, and Stack Exchange.
Stars: ✭ 203 (+82.88%)
Mutual labels:  question-answering
bottom-up-features
Bottom-up features extractor implemented in PyTorch.
Stars: ✭ 62 (-44.14%)
Mutual labels:  vqa
nlp qa project
Natural Language Processing Question Answering Final Project
Stars: ✭ 61 (-45.05%)
Mutual labels:  question-answering
vqa-soft
Accompanying code for "A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models" CVPR 2017 VQA workshop paper.
Stars: ✭ 14 (-87.39%)
Mutual labels:  vqa
Flowqa
Implementation of the conversational QA model FlowQA (with slight improvements)
Stars: ✭ 194 (+74.77%)
Mutual labels:  question-answering
AoA-pytorch
A Pytorch implementation of Attention on Attention module (both self and guided variants), for Visual Question Answering
Stars: ✭ 33 (-70.27%)
Mutual labels:  vqa
Vqa.pytorch
Visual Question Answering in Pytorch
Stars: ✭ 602 (+442.34%)
Mutual labels:  vqa
Simpletransformers
Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
Stars: ✭ 2,881 (+2495.5%)
Mutual labels:  question-answering
iMIX
A framework for Multimodal Intelligence research from Inspur HSSLAB.
Stars: ✭ 21 (-81.08%)
Mutual labels:  vqa
examinee
Laravel quiz and exam system, a clone of Udemy
Stars: ✭ 151 (+36.04%)
Mutual labels:  question-answering
mmgnn textvqa
A Pytorch implementation of CVPR 2020 paper: Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text
Stars: ✭ 41 (-63.06%)
Mutual labels:  vqa
Conditional Batch Norm
Pytorch implementation of NIPS 2017 paper "Modulating early visual processing by language"
Stars: ✭ 51 (-54.05%)
Mutual labels:  vqa
Jack
Jack the Reader
Stars: ✭ 242 (+118.02%)
Mutual labels:  question-answering
cmrc2017
The First Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC 2017)
Stars: ✭ 90 (-18.92%)
Mutual labels:  question-answering
Dmn Tensorflow
Dynamic Memory Networks (https://arxiv.org/abs/1603.01417) in Tensorflow
Stars: ✭ 236 (+112.61%)
Mutual labels:  question-answering
Vizwiz Vqa Pytorch
PyTorch VQA implementation that achieved top performances in the (ECCV18) VizWiz Grand Challenge: Answering Visual Questions from Blind People
Stars: ✭ 33 (-70.27%)
Mutual labels:  vqa
Tensorflow Dsmm
Tensorflow implementations of various Deep Semantic Matching Models (DSMM).
Stars: ✭ 217 (+95.5%)
Mutual labels:  question-answering
unsupervised-qa
Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering
Stars: ✭ 47 (-57.66%)
Mutual labels:  question-answering
Kb Qa
A knowledge-base-backed Chinese question answering system (biLSTM)
Stars: ✭ 195 (+75.68%)
Mutual labels:  question-answering
Bottom Up Attention Vqa
An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
Stars: ✭ 667 (+500.9%)
Mutual labels:  vqa
Anyq
FAQ-based Question Answering System
Stars: ✭ 2,336 (+2004.5%)
Mutual labels:  question-answering
self critical vqa
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (-64.86%)
Mutual labels:  vqa
cfvqa
[CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
Stars: ✭ 96 (-13.51%)
Mutual labels:  vqa
cmrc2019
A Sentence Cloze Dataset for Chinese Machine Reading Comprehension (CMRC 2019)
Stars: ✭ 118 (+6.31%)
Mutual labels:  question-answering
rankqa
This is the PyTorch implementation of the ACL 2019 paper RankQA: Neural Question Answering with Answer Re-Ranking.
Stars: ✭ 83 (-25.23%)
Mutual labels:  question-answering
Openvqa
A lightweight, scalable, and general framework for visual question answering research
Stars: ✭ 198 (+78.38%)
Mutual labels:  vqa
Mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
Stars: ✭ 4,713 (+4145.95%)
Mutual labels:  vqa