
HKUST-KnowComp / Mnemonicreader

License: BSD-3-Clause
A PyTorch implementation of Mnemonic Reader for the Machine Comprehension task

Programming Languages

python

Projects that are alternatives of or similar to Mnemonicreader

PersianQA
Persian (Farsi) Question Answering Dataset (+ Models)
Stars: ✭ 114 (-16.79%)
Mutual labels:  squad
Drqa
A pytorch implementation of Reading Wikipedia to Answer Open-Domain Questions.
Stars: ✭ 378 (+175.91%)
Mutual labels:  squad
Qanet
A Tensorflow implementation of QANet for machine reading comprehension
Stars: ✭ 996 (+627.01%)
Mutual labels:  squad
Medi-CoQA
Conversational Question Answering on Clinical Text
Stars: ✭ 22 (-83.94%)
Mutual labels:  squad
Learning to retrieve reasoning paths
The official implementation of ICLR 2020, "Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering".
Stars: ✭ 318 (+132.12%)
Mutual labels:  squad
R Net
Tensorflow Implementation of R-Net
Stars: ✭ 582 (+324.82%)
Mutual labels:  squad
MRC Competition Dureader
Machine reading comprehension: champion/runner-up competition code and Chinese pre-trained MRC models
Stars: ✭ 552 (+302.92%)
Mutual labels:  squad
Bi Att Flow
Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization.
Stars: ✭ 1,472 (+974.45%)
Mutual labels:  squad
R Net
A Tensorflow Implementation of R-net: Machine reading comprehension with self matching networks
Stars: ✭ 321 (+134.31%)
Mutual labels:  squad
Reading comprehension tf
Machine Reading Comprehension in Tensorflow
Stars: ✭ 37 (-72.99%)
Mutual labels:  squad
FastFusionNet
A PyTorch Implementation of FastFusionNet on SQuAD 1.1
Stars: ✭ 38 (-72.26%)
Mutual labels:  squad
co-attention
Pytorch implementation of "Dynamic Coattention Networks For Question Answering"
Stars: ✭ 54 (-60.58%)
Mutual labels:  squad
Awesome Qa
😎 A curated list of the Question Answering (QA)
Stars: ✭ 596 (+335.04%)
Mutual labels:  squad
SQUAD2.Q-Augmented-Dataset
Augmented version of SQUAD 2.0 for Questions
Stars: ✭ 31 (-77.37%)
Mutual labels:  squad
Bidaf Pytorch
An Implementation of Bidirectional Attention Flow
Stars: ✭ 42 (-69.34%)
Mutual labels:  squad
question-answering
No description or website provided.
Stars: ✭ 32 (-76.64%)
Mutual labels:  squad
Lambdahack
Haskell game engine library for roguelike dungeon crawlers; please offer feedback, e.g., after trying out the sample game with the web frontend at
Stars: ✭ 439 (+220.44%)
Mutual labels:  squad
Haystack
🔍 Haystack is an open source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search and summarization for a wide range of applications.
Stars: ✭ 3,409 (+2388.32%)
Mutual labels:  squad
Match Lstm
A PyTorch implemention of Match-LSTM, R-NET and M-Reader for Machine Reading Comprehension
Stars: ✭ 92 (-32.85%)
Mutual labels:  squad
Fusionnet
My implementation of the FusionNet for machine comprehension
Stars: ✭ 29 (-78.83%)
Mutual labels:  squad

Mnemonic Reader

The Mnemonic Reader is a deep learning model for the Machine Comprehension task. You can find the details in this paper. It combines the advantages of match-LSTM, R-Net and Document Reader, and uses a new unit, the Semantic Fusion Unit (SFU), to achieve state-of-the-art results (at that time).
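For intuition, the SFU fuses a main vector r with one or more auxiliary vectors (e.g. attended representations) through a gated composition. Below is a minimal PyTorch sketch of that formulation; it is illustrative only and not the code from this repo, and the layer names and sizes are assumptions.

import torch
import torch.nn as nn

class SFU(nn.Module):
    # Semantic Fusion Unit (sketch): o = g * r_tilde + (1 - g) * r,
    # where the candidate r_tilde and the gate g are both computed
    # from the concatenation [r; fusions].
    def __init__(self, input_size, fusion_size):
        super().__init__()
        self.linear_r = nn.Linear(input_size + fusion_size, input_size)
        self.linear_g = nn.Linear(input_size + fusion_size, input_size)

    def forward(self, r, *fusions):
        x = torch.cat((r,) + fusions, dim=-1)
        r_tilde = torch.tanh(self.linear_r(x))   # candidate fused representation
        g = torch.sigmoid(self.linear_g(x))      # fusion gate
        return g * r_tilde + (1 - g) * r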

This repo provides a PyTorch implementation of the Mnemonic Reader. PyTorch implementations of R-Net and Document Reader are also included so that they can be compared with the Mnemonic Reader. Pretrained models are available in the releases.

This repo belongs to HKUST-KnowComp and is under the BSD LICENSE.

Some of the code is based on DrQA.

Please feel free to contact Xin Liu ([email protected]) if you have any questions about this repo.

Evaluation on SQuAD

Model                                   DEV_EM  DEV_F1
Document Reader (original paper)        69.5    78.8
Document Reader (trained model)         69.4    78.6
R-Net (original paper 1)                71.1    79.5
R-Net (original paper 2)                72.3    80.6
R-Net (trained model)                   70.2    79.4
Mnemonic Reader (original paper)        71.8    81.2
Mnemonic Reader + RL (original paper)   72.1    81.6
Mnemonic Reader (trained model)         73.2    81.5

[Figure: EM_F1]

Requirements

  • Python >= 3.4
  • PyTorch >= 0.3.1
  • spaCy >= 2.0.0
  • tqdm
  • ujson
  • numpy
  • prettytable
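
One way to install the dependencies above (a sketch; these are the usual PyPI package names, versions should be pinned to match the requirements, and spaCy 2.x also needs an English model):

pip install torch spacy tqdm ujson numpy prettytable
python -m spacy download en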

Prepare

First of all, you need to download the dataset and pre-trained word vectors.

mkdir -p data/datasets
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json -O data/datasets/SQuAD-train-v1.1.json
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json -O data/datasets/SQuAD-dev-v1.1.json
mkdir -p data/embeddings
wget http://nlp.stanford.edu/data/glove.840B.300d.zip -O data/embeddings/glove.840B.300d.zip
cd data/embeddings
unzip glove.840B.300d.zip

Then, you need to preprocess these data.

python script/preprocess.py data/datasets data/datasets --split SQuAD-train-v1.1
python script/preprocess.py data/datasets data/datasets --split SQuAD-dev-v1.1

If you want to use multiple cores to speed this up, you can add --num-workers 4 to the commands.
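
For reference, the downloaded SQuAD v1.1 files are plain JSON with a fixed structure. A minimal sketch of walking one file (independent of the repo's preprocess.py, which builds tokenized features from these same fields):

import json

with open('data/datasets/SQuAD-dev-v1.1.json') as f:
    data = json.load(f)['data']

for article in data:
    for paragraph in article['paragraphs']:
        context = paragraph['context']
        for qa in paragraph['qas']:
            question = qa['question']
            answers = [a['text'] for a in qa['answers']]
            # each example pairs one context with one question and
            # one or more ground-truth answer spans
            print(qa['id'], question, answers[0])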

Train

There are a number of parameters you can set, but sensible default values are provided. If you are not interested in tuning parameters, just run:

python script/train.py

After several hours, you will get the trained model in data/models/, e.g. 20180416-acc9d06d.mdl, and you can find the log file in data/models/, e.g. 20180416-acc9d06d.txt.

Predict

To evaluate the model you trained, run the prediction script:

python script/predict.py --model data/models/20180416-acc9d06d.mdl

You need to change the model name in the command above.

The prediction script does not print the final scores directly; instead, use the official evaluate-v1.1.py in data/script:

python script/evaluate-v1.1.py data/predict/SQuAD-dev-v1.1-20180416-acc9d06d.preds data/datasets/SQuAD-dev-v1.1.json
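
For reference, the official script reports exact match (EM) and token-level F1 over normalized answers. A minimal sketch of the two metrics for a single prediction and a single ground truth (the official script additionally takes the maximum over all ground-truth answers and averages over the whole dataset):

import re
import string
from collections import Counter

def normalize_answer(s):
    # lowercase, drop punctuation and articles, collapse whitespace
    s = s.lower()
    s = ''.join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r'\b(a|an|the)\b', ' ', s)
    return ' '.join(s.split())

def exact_match(prediction, ground_truth):
    return normalize_answer(prediction) == normalize_answer(ground_truth)

def f1(prediction, ground_truth):
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)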

Interactivate

To help those who are interested in QA systems, script/interactivate.py provides a simple but handy demo.

python script/interactivate.py --model data/models/20180416-acc9d06d.mdl

Then you will drop into an interactive session. It looks like:

* Interactive Module *

* Repo: Mnemonic Reader (https://github.com/HKUST-KnowComp/MnemonicReader)

* Implement based on Facebook's DrQA

>>> process(document, question, candidates=None, top_n=1)
>>> usage()

>>> text="Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary."
>>> question = "To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?"
>>> process(text, question)

+------+----------------------------+-----------+
| Rank |            Span            |   Score   |
+------+----------------------------+-----------+
|  1   | Saint Bernadette Soubirous | 0.9875301 |
+------+----------------------------+-----------+
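
To see more than one candidate span, pass a larger top_n to the same call (per the signature shown above), e.g.:

>>> process(text, question, top_n=3)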

More parameters

If you want to tune parameters to achieve a higher score, you can see descriptions of all parameters by running:

python script/preprocess.py --help
python script/train.py --help
python script/predict.py --help
python script/interactivate.py --help

License

All code in Mnemonic Reader is under the BSD LICENSE.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].