
xanhho / Reading Comprehension Question Answering Papers

Survey on Machine Reading Comprehension


Content

Survey/Overview papers and documents worth reading on Machine Reading Comprehension

  • Fengbin Zhu et al., Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering, arXiv, 2021, paper.
  • Mokanarangan Thayaparan, Marco Valentino, and André Freitas, A Survey on Explainability in Machine Reading Comprehension, arXiv, 2020, paper.
  • Viktor Schlegel et al., Beyond Leaderboards: A survey of methods for revealing weaknesses in Natural Language Inference data and models, arXiv, 2020, paper.
  • Viktor Schlegel et al., A Framework for Evaluation of Machine Reading Comprehension Gold Standards, arXiv, 2020, paper.
  • Chengchang Zeng et al., A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics, and Benchmark Datasets, arXiv, 2020, paper.
  • Razieh Baradaran, Razieh Ghiasi, and Hossein Amirkhani, A Survey on Machine Reading Comprehension Systems, arXiv, 6 Jan 2020, paper.
  • Matthew Gardner et al., On Making Reading Comprehension More Comprehensive, aclweb, 2019, paper.
  • Shanshan Liu et al., Neural Machine Reading Comprehension: Methods and Trends, arXiv, 2019, paper.
  • Xin Zhang et al., Machine Reading Comprehension: a Literature Review, arXiv, 2019, paper.
  • Boyu Qiu et al., A Survey on Neural Machine Reading Comprehension, arXiv, 2019, paper.
  • Danqi Chen, Neural Reading Comprehension and Beyond, PhD thesis, Stanford University, 2018, paper.

Slides

  • Sebastian Riedel, Reading and Reasoning with Neural Program Interpreters, slides, MRQA 2018.
  • Phil Blunsom, Data driven reading comprehension: successes and limitations, slides, MRQA 2018.
  • Jianfeng Gao, Multi-step reasoning neural networks for question answering, slides, MRQA 2018.
  • Sameer Singh, Questioning Question Answering Answers, slides, MRQA 2018.

Evaluation papers

  • Diana Galvan, Active Reading Comprehension: A dataset for learning the Question-Answer Relationship strategy, ACL 2019, paper.
  • Divyansh Kaushik and Zachary C. Lipton, How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks, EMNLP 2018, paper.
  • Saku Sugawara et al., What Makes Reading Comprehension Questions Easier?, EMNLP 2018, paper.
  • Pramod K. Mudrakarta et al., Did the Model Understand the Question?, ACL 2018, paper.
  • Robin Jia and Percy Liang, Adversarial Examples for Evaluating Reading Comprehension Systems, EMNLP 2017, paper.
  • Saku Sugawara et al., Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability, ACL 2017, paper.
  • Saku Sugawara et al., Prerequisite Skills for Reading Comprehension: Multi-perspective Analysis of MCTest Datasets and Systems, AAAI 2017, paper.
  • Danqi Chen et al., A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task, ACL 2016, paper.
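Most of the evaluation papers above revolve around what the standard extractive-QA metrics, Exact Match and token-level F1, do and do not measure. As a point of reference, here is a minimal re-implementation of those two metrics, simplified from the official SQuAD evaluation script (normalization details may differ slightly from the original):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace
    (mirrors the normalization in the official SQuAD evaluation script)."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1 between a predicted and a gold answer string."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Because of the normalization step, `exact_match("The Eiffel Tower", "eiffel tower")` counts as a match, which is exactly the kind of metric behavior several of the papers above scrutinize.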

Basic Papers/Models

| Year | Title | Model | Datasets | Misc | Paper, Source Code |
|------|-------|-------|----------|------|--------------------|
| 2019 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | XLNet | RACE, SQuAD 1.1, SQuAD 2.0 | pretrained LM | paper, code |
| 2019 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | BERT | GLUE, SQuAD 1.1, SQuAD 2.0, SWAG | pretrained LM | paper, code |
| 2018 | S-NET: From Answer Extraction to Answer Generation for Machine Reading Comprehension | S-NET | MS-MARCO | multiple passages | paper, [code] |
| 2018 | QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension | QANet | SQuAD 1.1 | | paper, code |
| 2017 | ReasoNet: Learning to Stop Reading in Machine Comprehension | ReasoNet | CNN and Daily Mail, SQuAD 1.1 | | paper, [code] |
| 2017 | Reading Wikipedia to Answer Open-Domain Questions | DrQA | Wikipedia, SQuAD 1.1, CuratedTREC, WebQuestions, WikiMovies | OPQA, Multi-Passage MRC | paper, code |
| 2017 | R-Net: Machine Reading Comprehension with Self-Matching Networks | R-Net | SQuAD 1.1, MS-MARCO | | paper, code |
| 2017 | Machine Comprehension Using Match-LSTM and Answer Pointer | Match-LSTM + Pointer Network | SQuAD 1.1 | | paper, code |
| 2017 | Gated-Attention Readers for Text Comprehension | Gated-Attention Reader | CNN and Daily Mail, Children's Book Test, Who Did What | | paper, code |
| 2017 | Gated Self-Matching Networks for Reading Comprehension and Question Answering | Gated Self-Matching Networks | SQuAD 1.1 | | paper, [code] |
| 2017 | Dynamic Coattention Networks for Question Answering | Dynamic Coattention Networks | SQuAD 1.1 | | paper, code |
| 2017 | DCN+: Mixed Objective and Deep Residual Coattention for Question Answering | DCN+ | SQuAD 1.1 | | paper, code |
| 2017 | Bi-directional Attention Flow for Machine Comprehension | BiDAF | SQuAD 1.1 | | paper, code |
| 2017 | Attention-over-Attention Neural Networks for Reading Comprehension | Attention-over-Attention Reader | Children's Book Test, CNN and Daily Mail | | paper, code |
| 2016 | Text Understanding with the Attention Sum Reader Network | Attention Sum Reader | Children's Book Test, CNN and Daily Mail | | paper, code |
| 2016 | Multi-Perspective Context Matching for Machine Comprehension | Multi-Perspective Context Matching | SQuAD 1.1 | | paper, [code] |
| 2016 | Key-Value Memory Networks for Directly Reading Documents | Key-Value Memory Networks | WikiMovies, WikiQA | | paper, code |
| 2016 | Iterative Alternating Neural Attention for Machine Reading | Iterative Attention Reader | Children's Book Test, CNN and Daily Mail | | paper, [code] |
| 2016 | A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task | | CNN and Daily Mail | | paper, [code] |
| 2015 | Teaching Machines to Read and Comprehend | Attentive Reader | CNN and Daily Mail | | paper, code |
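The span-extraction models in the table above (Match-LSTM + Pointer Network, BiDAF, R-Net, BERT, ...) share the same decoding step: the network scores every passage token as a potential answer start and as a potential answer end, and the predicted answer is the highest-scoring valid span. A minimal sketch of that decoding step follows; the function name and the `max_len` constraint are illustrative, not taken from any one paper:

```python
def best_span(start_scores, end_scores, max_len=15):
    """Return the (start, end) token indices maximizing
    start_scores[i] + end_scores[j], subject to i <= j < i + max_len.
    This is the decoding step shared by span-extraction readers."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        # Only consider ends at or after the start, within max_len tokens.
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best

# Toy scores over a 5-token passage: the best span is tokens 1..2
# (start score 2.0 plus end score 1.8).
start = [0.1, 2.0, 0.3, 0.0, 0.1]
end = [0.2, 0.5, 1.8, 0.1, 0.0]
```

In practice the scores are log-probabilities from the model's start/end heads, and the length cap prevents degenerate whole-passage answers; the brute-force double loop here can be replaced by a vectorized upper-triangular max for long passages.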

  • KBMRC: Knowledge-based Machine Reading Comprehension
  • OPQA: Open-domain Question Answering
  • UQ: Unanswerable Questions
  • Multi-Passage MRC: Multi-Passage Machine Reading Comprehension
  • CQA: Conversational Question Answering

Datasets

| Year | Dataset | Task | Size | Source | Web/Paper | Answer type | Misc | Similar datasets |
|------|---------|------|------|--------|-----------|-------------|------|------------------|
| 2019 | ROPES | RC | 14k | Wikipedia + science textbooks | web, paper | Span extraction | background passage + situation | ShARC |
| 2019 | RC-QED | RC | 12k | Wikipedia | web, paper | Multiple choice | multi-passage | HotpotQA |
| 2019 | QUOREF | RC | 24k+ | Wikipedia | web, paper | Span extraction | coreference resolution | |
| 2019 | COSMOS | QA | 35,600 | narrative | web, paper | Multiple choice | | |
| 2019 | DROP | RC | 96k | Wikipedia | web, paper | Span extraction + numerical reasoning | multi-span answers | |
| 2019 | Natural Questions | RC | 323k | Wikipedia | paper | Span extraction | | |
| 2018 | SQuAD 2.0 | RC | 150k | Wikipedia | paper | Span extraction | no answer: 50k | NewsQA |
| 2018 | MultiRC | RC | 6k+ questions | various articles | web, paper | Multiple choice | multiple sentence reasoning | MCTest |
| 2018 | CSQA | QA | 200k dialogs, 1.6M turns | | paper | | | |
| 2018 | QuAC | RC | 100k | Wikipedia | web, paper | Span extraction | conversational questions | CoQA |
| 2018 | QAngaroo (Wikihop + Medhop) | RC | | Wikipedia + Medline | web, paper | Multiple choice | multi-passage | HotpotQA |
| 2018 | HotpotQA | RC | 113k | Wikipedia | web, paper | Span extraction | multi-passage | QAngaroo |
| 2018 | CoQA | RC | 127k | various articles | paper | Free answering | conversational questions | QuAC |
| 2018 | ComplexWebQuestions | RC | 34,689 | WebQuestionsSP | web, paper | Span extraction? | multi-passage | |
| 2018 | SWAG | QA | 113k | video caption | | Multiple choice | situational commonsense reasoning | |
| 2018 | RecipeQA | RC | 36k | various | | | multimodal comprehension | |
| 2018 | ProPara | RC | 2k | procedural text | | | | bAbI, SCoNE |
| 2018 | OpenBookQA | QA | 6k | science facts | | Multiple choice | external knowledge | ARC |
| 2018 | FEVER | | | | | | | |
| 2018 | DuReader | | | | | Free answering | | |
| 2018 | DuoRC | RC | 186k | movie plot | | Span extraction | | NarrativeQA |
| 2018 | CLOTH | RC | 99k | English exams | | Cloze test | | RACE |
| 2018 | CliCR | RC | 100k | clinical case text | | Cloze test | | |
| 2018 | ARC | RC | 8k | science exam | | | easy 5197, challenge 2590 | |
| 2017 | WikiSuggest | | | | paper | | | |
| 2017 | TriviaQA | RC | 96k question-answer pairs | Web + Wikipedia | web, paper | Span extraction | | SQuAD |
| 2017 | SQA | | | | paper | | | |
| 2017 | SearchQA | | | | paper | Free answering | | |
| 2017 | RACE | | | | paper | Multiple choice | | |
| 2017 | NarrativeQA | | | | paper | Free answering | | |
| 2016 | Who-did-What | | | | paper | Cloze test | | |
| 2016 | SQuAD 1.1 | RC | 87k training + 10k development | Wikipedia | paper | Span extraction | | NewsQA |
| 2016 | NewsQA | | | | paper | Span extraction | | |
| 2016 | MS MARCO | | | | web | Free answering | | |
| 2016 | LAMBADA | | | | paper | Cloze test | | |
| 2016 | WikiMovies | QA | | | | | | |
| 2015 | CuratedTREC | QA | | | | | | |
| 2015 | CNN and Daily Mail | RC | 93k + 220k articles | CNN + Daily Mail | web, paper | Cloze test | | |
| 2015 | Children's Book Test | RC | 108 children's books | | web | Cloze test | | |
| 2015 | bAbI | RC | | classic text adventure game | web | Free answering | 20 tasks | |
| 2013 | WebQuestions | QA | | | | | | |
| 2013 | QA4MRE | RC | | various articles | paper | Multiple choice | | |
| 2013 | MCTest | RC | 500 stories + 2k questions | fictional stories | paper | Multiple choice | open-domain | |
| 1999 | DeepRead | RC | 60 development and 60 test? | news stories | paper | Free answering | | |
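Several datasets in the table (CNN and Daily Mail, Children's Book Test, Who-did-What, CLOTH, CliCR) are cloze tests: a query is formed by deleting a token, typically an entity, from a passage-related sentence. The toy sketch below illustrates CNN/Daily Mail-style construction; the `@entityN`/`@placeholder` markers follow that paper's convention, but the `make_cloze` helper itself is hypothetical and glosses over details such as coreference-based entity detection:

```python
def make_cloze(sentence, entities):
    """Build CNN/Daily Mail-style cloze examples: anonymize each entity as
    @entityN, then mask one entity per query as @placeholder.
    Returns a list of (query, answer) pairs.
    Note: naive string replacement; a real pipeline would tokenize and
    resolve coreference before anonymizing."""
    ids = {ent: f"@entity{i}" for i, ent in enumerate(entities)}
    anonymized = sentence
    for ent, tag in ids.items():
        anonymized = anonymized.replace(ent, tag)
    # One query per entity: mask its first occurrence, keep it as the answer.
    return [(anonymized.replace(tag, "@placeholder", 1), tag)
            for tag in ids.values()]
```

For the sentence "Obama met Merkel in Berlin" with entities ["Obama", "Merkel"], this yields two queries, "@placeholder met @entity1 in Berlin" (answer @entity0) and "@entity0 met @placeholder in Berlin" (answer @entity1); the anonymization is what forces models to read the passage rather than rely on world knowledge about the entities.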

Datasets with Explanations

QA over KG

Knowledge Bases/Knowledge Sources

Question Answering Systems

  • IBM's DeepQA
  • QuASE
  • Microsoft's AskMSR
  • YodaQA
  • DrQA

Others (Misc: models, transfer learning, data augmentation, domain adaptation, cross-lingual, ...)

  • Minghao Hu, Yuxing Peng, Zhen Huang and Dongsheng Li, A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning, EMNLP 2019, paper.
  • Huazheng Wang, Zhe Gan, Xiaodong Liu, Jingjing Liu, Jianfeng Gao and Hongning Wang, Adversarial Domain Adaptation for Machine Reading Comprehension, EMNLP 2019, paper.
  • Yimin Jing, Deyi Xiong and Zhen Yan, BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels, EMNLP 2019, paper.
  • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang and Guoping Hu, Cross-Lingual Machine Reading Comprehension, EMNLP 2019, paper.
  • Todor Mihaylov and Anette Frank, Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension, EMNLP 2019, paper.
  • Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang and Juho Lee, Learning with Limited Data for Multilingual Reading Comprehension, EMNLP 2019, paper.
  • Qiu Ran, Yankai Lin, Peng Li, Jie Zhou and Zhiyuan Liu, NumNet: Machine Reading Comprehension with Numerical Reasoning, EMNLP 2019, paper.
  • Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang and Guoping Hu, A Span-Extraction Dataset for Chinese Machine Reading Comprehension, EMNLP 2019, paper.
  • Daniel Andor, Luheng He, Kenton Lee and Emily Pitler, Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension, EMNLP 2019, paper.
  • Tsung-Yuan Hsu, Chi-Liang Liu and Hung-yi Lee, Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model, EMNLP 2019, paper.
  • Kyosuke Nishida et al., Multi-style Generative Reading Comprehension, ACL 2019, paper.
  • Alon Talmor and Jonathan Berant, MultiQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension, ACL 2019, paper.
  • Yi Tay et al., Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives, ACL 2019, paper.
  • Haichao Zhu et al., Learning to Ask Unanswerable Questions for Machine Reading Comprehension, ACL 2019, paper.
  • Patrick Lewis et al., Unsupervised Question Answering by Cloze Translation, ACL 2019, paper.
  • Michael Hahn and Frank Keller, Modeling Human Reading with Neural Attention, EMNLP 2016, paper.
  • Jianpeng Cheng et al., Long Short-Term Memory-Networks for Machine Reading, EMNLP 2016, paper.

Thanks to these repositories:
