iamyuanchung / TOEFL-QA

Licence: other
A question answering dataset for machine comprehension of spoken content

Programming Languages

python

Projects that are alternatives to or similar to TOEFL-QA

ODSQA
ODSQA: OPEN-DOMAIN SPOKEN QUESTION ANSWERING DATASET
Stars: ✭ 43 (-29.51%)
Mutual labels:  question-answering, reading-comprehension, machine-comprehension
Insuranceqa Corpus Zh
🚁 An insurance-industry corpus and chatbot
Stars: ✭ 821 (+1245.9%)
Mutual labels:  question-answering, natural-language-understanding
MSMARCO
Machine Comprehension Train on MSMARCO with S-NET Extraction Modification
Stars: ✭ 31 (-49.18%)
Mutual labels:  question-answering, machine-comprehension
cmrc2017
The First Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC 2017)
Stars: ✭ 90 (+47.54%)
Mutual labels:  question-answering, reading-comprehension
co-attention
Pytorch implementation of "Dynamic Coattention Networks For Question Answering"
Stars: ✭ 54 (-11.48%)
Mutual labels:  question-answering, reading-comprehension
Chat
A chatbot based on natural language understanding and machine learning, supporting concurrent multi-user access and customizable multi-turn dialogue
Stars: ✭ 516 (+745.9%)
Mutual labels:  question-answering, natural-language-understanding
Chatbot
A Russian-language chatbot
Stars: ✭ 106 (+73.77%)
Mutual labels:  question-answering, natural-language-understanding
cdQA-ui
⛔ [NOT MAINTAINED] A web interface for cdQA and other question answering systems.
Stars: ✭ 19 (-68.85%)
Mutual labels:  question-answering, reading-comprehension
exams-qa
A Multi-subject High School Examinations Dataset for Cross-lingual and Multilingual Question Answering
Stars: ✭ 25 (-59.02%)
Mutual labels:  question-answering, reading-comprehension
Question-Answering-based-on-SQuAD
Question Answering System using BiDAF Model on SQuAD v2.0
Stars: ✭ 20 (-67.21%)
Mutual labels:  question-answering, natural-language-understanding
explicit memory tracker
[ACL 2020] Explicit Memory Tracker with Coarse-to-Fine Reasoning for Conversational Machine Reading
Stars: ✭ 35 (-42.62%)
Mutual labels:  question-answering, reading-comprehension
Bidaf Keras
Bidirectional Attention Flow for Machine Comprehension implemented in Keras 2
Stars: ✭ 60 (-1.64%)
Mutual labels:  question-answering, natural-language-understanding
cmrc2019
A Sentence Cloze Dataset for Chinese Machine Reading Comprehension (CMRC 2019)
Stars: ✭ 118 (+93.44%)
Mutual labels:  question-answering, reading-comprehension
extractive rc by runtime mt
Code and datasets of "Multilingual Extractive Reading Comprehension by Runtime Machine Translation"
Stars: ✭ 36 (-40.98%)
Mutual labels:  question-answering, reading-comprehension
PersianQA
Persian (Farsi) Question Answering Dataset (+ Models)
Stars: ✭ 114 (+86.89%)
Mutual labels:  question-answering, reading-comprehension
mrqa
Code for EMNLP-IJCNLP 2019 MRQA Workshop Paper: "Domain-agnostic Question-Answering with Adversarial Training"
Stars: ✭ 35 (-42.62%)
Mutual labels:  question-answering
Manhattan-LSTM
Keras and PyTorch implementations of the MaLSTM model for computing Semantic Similarity.
Stars: ✭ 28 (-54.1%)
Mutual labels:  natural-language-understanding
WSDM-Cup-2019
[ACM-WSDM] 3rd place solution at WSDM Cup 2019, Fake News Classification on Kaggle.
Stars: ✭ 62 (+1.64%)
Mutual labels:  natural-language-understanding
SQUAD2.Q-Augmented-Dataset
Augmented version of SQUAD 2.0 for Questions
Stars: ✭ 31 (-49.18%)
Mutual labels:  question-answering
squadgym
Environment that can be used to evaluate reasoning capabilities of artificial agents
Stars: ✭ 27 (-55.74%)
Mutual labels:  question-answering

TOEFL-QA: A question answering dataset for machine comprehension of spoken content

Authors: Bo-Hsiang Tseng & Yu-An Chung

The dataset was originally collected by Tseng et al. (2016), and later used in Fang et al. (2016) and Chung et al. (2018). We make the dataset publicly available to encourage more research on this challenging task. If you have any questions about this dataset, do not hesitate to shoot me an email.

Introduction

Multimedia and spoken content convey richer information than plain text, but they are much harder to display on a screen and for a user to skim and select from. As a result, accessing large collections of spoken content is far more difficult and time-consuming for humans than accessing text. It is therefore highly desirable to develop machines that can automatically understand spoken content and summarize the key information for humans to browse. Toward this end, we propose a new task of machine comprehension of spoken content, and define the initial goal as the TOEFL listening comprehension test, a challenging academic English examination for learners whose native language is not English.

In this test, the subject first listens to an audio story of roughly five minutes and then answers several questions about it. The stories are related to college life, such as a conversation between a student and a professor or a lecture in class. Each question has four choices, of which only one is correct. A real example from the TOEFL examination is shown in the following figure. The questions are not easy even for humans with relatively good English, because they cannot be answered by simply matching words in the question and the choices against words in the story, and the key information is usually buried among many irrelevant utterances. To answer a question like "Why does the professor mention Isaac Newton?", the listener has to understand the whole audio story and draw inferences from it.
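To make the task format concrete, here is a minimal Python sketch of the kind of surface word-matching the paragraph above argues is insufficient. It scores each of the four choices by word overlap with the story transcript; the plain-string record layout is an assumption for illustration, not the repository's actual file format.

def word_overlap_baseline(story, question, choices):
    # Pick the choice whose words overlap most with the story transcript.
    # The question is ignored entirely, which is exactly why such surface
    # matching fails here: TOEFL answers paraphrase the story and require
    # inference, not keyword lookup.
    story_words = set(story.lower().split())
    scores = [len(story_words & set(choice.lower().split()))
              for choice in choices]
    return max(range(len(choices)), key=lambda i: scores[i])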

Data

The collected TOEFL dataset includes 963 examples in total (717 for training, 124 for validation, 122 for testing). Each example consists of a story, a question, and 4 choices.
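Assuming a simple JSON serialization of the splits (the field names and file path below are hypothetical, for illustration only), each example can be represented in Python as follows:

import json
from typing import List, NamedTuple

class ToeflExample(NamedTuple):
    story: str          # transcript of the ~5-minute audio story
    question: str       # a question about the story
    choices: List[str]  # exactly four answer options
    answer: int         # index (0-3) of the correct choice

def load_split(path):
    # Assumes the split is stored as a JSON array of
    # {"story": ..., "question": ..., "choices": ..., "answer": ...} objects.
    with open(path, encoding="utf-8") as f:
        return [ToeflExample(**record) for record in json.load(f)]

train = load_split("train.json")  # hypothetical path; 717 examples expected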

Existing Models

Evaluation metric: accuracy on the test set.

Model                        Acc. on test set
Sukhbaatar et al. (2015)     45.2
Tseng et al. (2016)          42.5
Fang et al. (2016)           49.1 (code)
Chung et al. (2018)          56.1 (code)
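Accuracy here is simply the fraction of test questions for which the predicted choice matches the gold answer; a minimal sketch:

def accuracy(predictions, gold):
    # predictions, gold: parallel lists of choice indices (0-3)
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

For example, the 56.1 of Chung et al. (2018) corresponds to roughly 68 of the 122 test questions answered correctly (0.561 × 122 ≈ 68.4).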

Citation

If you use the dataset in your work, please cite the following two papers:

@inproceedings{tseng2016towards,
  title     = {Towards machine comprehension of spoken content: Initial TOEFL listening comprehension test by machine},
  author    = {Tseng, Bo-Hsiang and Shen, Sheng-Syun and Lee, Hung-Yi and Lee, Lin-Shan},
  booktitle = {INTERSPEECH},
  year      = {2016}
}

and

@inproceedings{chung2018supervised,
  title     = {Supervised and unsupervised transfer learning for question answering},
  author    = {Chung, Yu-An and Lee, Hung-Yi and Glass, James},
  booktitle = {NAACL HLT},
  year      = {2018}
}