mandarjoshi90 / Triviaqa
Licence: apache-2.0
Code for the TriviaQA reading comprehension dataset
Stars: ✭ 184
Programming Languages
python
139335 projects - #7 most used programming language
Projects that are alternatives of or similar to Triviaqa
Knowledge Aware Reader
PyTorch implementation of the ACL 2019 paper "Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader"
Stars: ✭ 123 (-33.15%)
Mutual labels: question-answering
Pytorch Question Answering
Important paper implementations for Question Answering using PyTorch
Stars: ✭ 154 (-16.3%)
Mutual labels: question-answering
Improved Dynamic Memory Networks Dmn Plus
Theano Implementation of DMN+ (Improved Dynamic Memory Networks) from the paper by Xiong, Merity, & Socher at MetaMind, http://arxiv.org/abs/1603.01417 (Dynamic Memory Networks for Visual and Textual Question Answering)
Stars: ✭ 165 (-10.33%)
Mutual labels: question-answering
Medquad
Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites
Stars: ✭ 129 (-29.89%)
Mutual labels: question-answering
Question answering models
This repo collects and reproduces models related to question answering and machine reading comprehension
Stars: ✭ 139 (-24.46%)
Mutual labels: question-answering
Chinese Rc Datasets
Collections of Chinese reading comprehension datasets
Stars: ✭ 159 (-13.59%)
Mutual labels: question-answering
Clicr
Machine reading comprehension on clinical case reports
Stars: ✭ 123 (-33.15%)
Mutual labels: question-answering
Questgen.ai
Question generation using state-of-the-art Natural Language Processing algorithms
Stars: ✭ 169 (-8.15%)
Mutual labels: question-answering
Cape Webservices
Entrypoint for all backend cape webservices
Stars: ✭ 149 (-19.02%)
Mutual labels: question-answering
Awesomemrc
This repo is our research summary and playground for MRC. More features are coming.
Stars: ✭ 162 (-11.96%)
Mutual labels: question-answering
Kbqa Ar Smcnn
Question answering over Freebase (single-relation)
Stars: ✭ 129 (-29.89%)
Mutual labels: question-answering
Denspi
Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index (DenSPI)
Stars: ✭ 162 (-11.96%)
Mutual labels: question-answering
Dan Jurafsky Chris Manning Nlp
My solution to the Natural Language Processing course taught by Dan Jurafsky and Chris Manning in Winter 2012.
Stars: ✭ 124 (-32.61%)
Mutual labels: question-answering
Dynamic Memory Networks Plus Pytorch
Implementation of Dynamic memory networks plus in Pytorch
Stars: ✭ 123 (-33.15%)
Mutual labels: question-answering
Nspm
🤖 Neural SPARQL Machines for Knowledge Graph Question Answering.
Stars: ✭ 156 (-15.22%)
Mutual labels: question-answering
Rat Sql
A relation-aware semantic parsing model from English to SQL
Stars: ✭ 169 (-8.15%)
Mutual labels: question-answering
Rczoo
question answering, reading comprehension toolkit
Stars: ✭ 163 (-11.41%)
Mutual labels: question-answering
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
- This repo contains code for the paper: Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In Association for Computational Linguistics (ACL) 2017, Vancouver, Canada.
- The data can be downloaded from the TriviaQA website.
- Please contact Mandar Joshi (<first-name>[email protected]) for suggestions and comments.
Requirements
General
- Python 3. You should be able to run the evaluation scripts with Python 2.7 if you take care of unicode in utils.utils.py.
- BiDAF requires Python 3 -- check the original repository for more details.
Python Packages
- tensorflow (only if you want to run BiDAF, verified on r0.11)
- nltk
- tqdm
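For a plain evaluation run (without BiDAF), the two pure-Python dependencies can be installed with pip. A minimal sketch, assuming a Python 3 environment:

```shell
# Install the packages needed by the evaluation scripts.
# TensorFlow is only required for running BiDAF and is omitted here.
pip install nltk tqdm
```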
Evaluation
The --dataset_file parameter refers to files in the qa directory of the data (e.g., wikipedia-dev.json). For the file format, check out the samples directory in the repo.
python3 -m evaluation.triviaqa_evaluation --dataset_file samples/triviaqa_sample.json --prediction_file samples/sample_predictions.json
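The prediction file is plain JSON; a common convention for such scripts is a mapping from question IDs to predicted answer strings, but the exact schema should be confirmed against samples/sample_predictions.json. A minimal sketch that writes such a file (the IDs and answers below are made up for illustration):

```python
import json

# Hypothetical predictions: question ID -> predicted answer string.
# Real IDs come from the question entries in the dataset file;
# these two are invented for illustration only.
predictions = {
    "tc_1": "Sunset Boulevard",
    "tc_2": "Harper Lee",
}

with open("my_predictions.json", "w") as f:
    json.dump(predictions, f)
```

The evaluation script would then be pointed at this file via --prediction_file my_predictions.json.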
Miscellaneous
- If you have a SQuAD model and want to run it on TriviaQA, please refer to utils.convert_to_squad_format.py.
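The conversion itself lives in that script; as a rough illustration of the idea (not the repo's actual code), the reshaping maps each question/answer pair onto SQuAD's nested data/paragraphs/qas layout. The field names used here ("QuestionId", "Question", "Answer", "Value") are assumptions about the TriviaQA format and should be checked against the real files:

```python
# Illustrative-only sketch of reshaping TriviaQA-style QA entries into a
# SQuAD-like nested structure. Field names are assumptions; the actual
# conversion is done by utils.convert_to_squad_format.py in the repo.
def to_squad_like(triviaqa_entries, context):
    qas = []
    for entry in triviaqa_entries:
        answer_text = entry["Answer"]["Value"]
        qas.append({
            "id": entry["QuestionId"],
            "question": entry["Question"],
            # SQuAD answers carry the character offset of the span in context.
            "answers": [{"text": answer_text,
                         "answer_start": context.find(answer_text)}],
        })
    return {"data": [{"title": "example",
                      "paragraphs": [{"context": context, "qas": qas}]}]}

# Tiny made-up example.
entries = [{"QuestionId": "tc_1",
            "Question": "Who wrote To Kill a Mockingbird?",
            "Answer": {"Value": "Harper Lee"}}]
squad = to_squad_like(entries,
                      "To Kill a Mockingbird was written by Harper Lee.")
```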