
bernhard2202 / rankqa

Licence: other
This is the PyTorch implementation of the ACL 2019 paper RankQA: Neural Question Answering with Answer Re-Ranking.

Programming Languages

python

Projects that are alternatives to or similar to rankqa

Mspars
Stars: ✭ 177 (+113.25%)
Mutual labels:  question-answering
Awesome Deep Learning And Machine Learning Questions
[Updated irregularly] A curated collection of valuable questions on deep learning, machine learning, reinforcement learning, and data science, gathered from sites such as Zhihu, Quora, Reddit, and Stack Exchange.
Stars: ✭ 203 (+144.58%)
Mutual labels:  question-answering
Agriculture knowledgegraph
Agricultural knowledge graph (AgriKG): information retrieval, named entity recognition, relation extraction, intelligent question answering, and decision support for the agricultural domain.
Stars: ✭ 2,957 (+3462.65%)
Mutual labels:  question-answering
Openqa
The source code of ACL 2018 paper "Denoising Distantly Supervised Open-Domain Question Answering".
Stars: ✭ 188 (+126.51%)
Mutual labels:  question-answering
Flowqa
Implementation of conversational QA model: FlowQA (with slight improvement)
Stars: ✭ 194 (+133.73%)
Mutual labels:  question-answering
Forum
Love Laravel? Become a Jedi and help other Padawans.
Stars: ✭ 233 (+180.72%)
Mutual labels:  question-answering
Rat Sql
A relation-aware semantic parsing model from English to SQL
Stars: ✭ 169 (+103.61%)
Mutual labels:  question-answering
VideoNavQA
An alternative EQA paradigm and informative benchmark + models (BMVC 2019, ViGIL 2019 spotlight)
Stars: ✭ 22 (-73.49%)
Mutual labels:  question-answering
Kb Qa
A Chinese question answering system based on a knowledge base (biLSTM).
Stars: ✭ 195 (+134.94%)
Mutual labels:  question-answering
Jack
Jack the Reader
Stars: ✭ 242 (+191.57%)
Mutual labels:  question-answering
Awesome Kgqa
A collection of some materials of knowledge graph question answering
Stars: ✭ 188 (+126.51%)
Mutual labels:  question-answering
Anyq
FAQ-based Question Answering System
Stars: ✭ 2,336 (+2714.46%)
Mutual labels:  question-answering
Dmn Tensorflow
Dynamic Memory Networks (https://arxiv.org/abs/1603.01417) in Tensorflow
Stars: ✭ 236 (+184.34%)
Mutual labels:  question-answering
Triviaqa
Code for the TriviaQA reading comprehension dataset
Stars: ✭ 184 (+121.69%)
Mutual labels:  question-answering
cmrc2017
The First Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC 2017)
Stars: ✭ 90 (+8.43%)
Mutual labels:  question-answering
Questgen.ai
Question generation using state-of-the-art Natural Language Processing algorithms
Stars: ✭ 169 (+103.61%)
Mutual labels:  question-answering
Tensorflow Dsmm
Tensorflow implementations of various Deep Semantic Matching Models (DSMM).
Stars: ✭ 217 (+161.45%)
Mutual labels:  question-answering
FinBERT-QA
Financial Domain Question Answering with pre-trained BERT Language Model
Stars: ✭ 70 (-15.66%)
Mutual labels:  question-answering
DrFAQ
DrFAQ is a plug-and-play question answering NLP chatbot that can be generally applied to any organisation's text corpora.
Stars: ✭ 29 (-65.06%)
Mutual labels:  question-answering
Cmrc2018
A Span-Extraction Dataset for Chinese Machine Reading Comprehension (CMRC 2018)
Stars: ✭ 238 (+186.75%)
Mutual labels:  question-answering

RankQA: Neural Question Answering with Answer Re-Ranking

This is the PyTorch implementation of the ACL 2019 paper RankQA: Neural Question Answering with Answer Re-Ranking.

The conventional paradigm in neural question answering (QA) for narrative content is limited to a two-stage process: first, relevant text passages are retrieved and, subsequently, a neural network for machine comprehension extracts the likeliest answer. However, both stages are largely isolated in the status quo and, hence, information from the two phases is never properly fused. In contrast, this work proposes RankQA: RankQA extends the conventional two-stage process in neural QA with a third stage that performs an additional answer re-ranking. The re-ranking leverages different features that are directly extracted from the QA pipeline, i.e., a combination of retrieval and comprehension features. While our intentionally simple design allows for an efficient, data-sparse estimation, it nevertheless outperforms more complex QA systems by a significant margin.
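
To make the third stage concrete, here is a minimal PyTorch sketch of an answer re-ranker that scores each candidate answer from a fixed-length feature vector combining retrieval and comprehension features. The class name, feature count, and hidden size are illustrative assumptions, not the exact architecture from the paper.

import torch
import torch.nn as nn

class AnswerReRanker(nn.Module):
    # Scores each candidate answer from a fixed-length feature vector
    # (concatenated retrieval and comprehension features).
    def __init__(self, num_features, hidden_size=256):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(num_features, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),  # one relevance score per candidate
        )

    def forward(self, features):
        # features: (num_candidates, num_features) -> scores: (num_candidates,)
        return self.scorer(features).squeeze(-1)

# Toy usage: re-rank 5 candidate answers described by 12 features each.
model = AnswerReRanker(num_features=12)
scores = model(torch.randn(5, 12))
best_candidate = torch.argmax(scores)  # index of the top-ranked answer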

Open Review and Changelog

We want to support open discourse in science and have therefore decided to publish our ACL 2019 reviews, together with a detailed list of all changes we have made since. This project is still being worked on in our free time; you can find all updates and the reviews HERE.

Quick Overview

In our paper, we use two different QA pipelines. The main experiments are based on a customized DrQA system with additional answer re-ranking. In order to demonstrate robustness, we implemented a second pipeline based on BERT, which we call BERT-QA.

RankQA: The Main Pipeline based on DrQA

We extended the DrQA pipeline with a third module that performs answer re-ranking. In this repository, the answer re-ranking module is split from the rest of the pipeline to allow faster experiments: we precomputed and aggregated features for all candidate answers, so they can be read directly from files.

Feature Generation

If you want to extract features manually for new datasets, please send us an email and we will provide the source code; we will also publish it here at a later point. You can find more information about the precomputed features for SQuAD, WebQuestions, WikiMovies, and CuratedTREC HERE.

NOTE: To support future research in this area, our precomputed candidate answers contain tokenized paragraphs, questions, and answer spans that can easily be processed by any neural network architecture.
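
Since the precomputed files are meant to be consumed directly, a small loader along the following lines may help. Note that the file layout (one JSON record per line) and the field names (question_tokens, paragraph_tokens, span, features) are assumptions for illustration, not the repository's exact schema.

import json

def load_candidates(path):
    # Hypothetical loader for a precomputed candidate file: one candidate
    # answer per line, stored as a JSON record (assumed layout).
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            yield (
                record["question_tokens"],   # tokenized question
                record["paragraph_tokens"],  # tokenized source paragraph
                record["span"],              # (start, end) answer span
                record["features"],          # aggregated retrieval/comprehension features
            )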

Re-Ranking Module

A detailed description of our implementation, how to replicate the results, and the pre-trained model can be found HERE.
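
As a rough illustration of how such a re-ranking module can be trained, the sketch below uses a pairwise margin ranking loss, where a correct candidate answer should score higher than an incorrect one for the same question. This is a common choice for re-ranking, not necessarily the exact objective used in the paper; see the linked description for the authoritative details.

import torch
import torch.nn as nn

# Illustrative pairwise training step for a simple re-ranker; the scorer
# architecture and hyperparameters below are assumptions, not the paper's.
scorer = nn.Sequential(nn.Linear(12, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
ranking_loss = nn.MarginRankingLoss(margin=0.5)

def train_step(pos_features, neg_features):
    # pos_features / neg_features: (batch, num_features) tensors for
    # correct and incorrect candidate answers of the same questions.
    pos_scores = scorer(pos_features).squeeze(-1)
    neg_scores = scorer(neg_features).squeeze(-1)
    target = torch.ones_like(pos_scores)  # positives should outrank negatives
    loss = ranking_loss(pos_scores, neg_scores, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random data (12 features per candidate, batch of 8):
loss_value = train_step(torch.randn(8, 12), torch.randn(8, 12))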

Answer Re-Ranking Based on BERT-QA

To demonstrate robustness across implementations, we built a second QA pipeline based on BERT. You can find the source code for BERT-QA HERE.

Citation

Please cite our paper if you use RankQA in your work:

@inproceedings{kratzwald2019rankqa,
  title={RankQA: Neural Question Answering with Answer Re-Ranking},
  author={Kratzwald, Bernhard and Eigenmann, Anna and Feuerriegel, Stefan},
  booktitle={Annual Meeting of the Association for Computational Linguistics (ACL)},
  year={2019}
}

Contact

Any questions left?

Please write an email to bkratzwald [AT] ethz [DOT] ch
