
l11x0m7 / Question_answering_models

License: MIT

Projects that are alternatives to or similar to Question_answering_models

KrantikariQA
An information-gain-based question answering system over knowledge graphs.
Stars: ✭ 54 (-61.15%)
Mutual labels:  qa, question-answering
Tableqa
AI Tool for querying natural language on tabular data.
Stars: ✭ 109 (-21.58%)
Mutual labels:  question-answering, qa
Nlu sim
All kinds of baseline models for sentence similarity (sentence-pair semantic similarity models)
Stars: ✭ 286 (+105.76%)
Mutual labels:  question-answering, qa
dialogbot
dialogbot provides search-based, task-based, and generative dialogue models. A dialogue bot supporting web-retrieval QA, domain-knowledge QA, task-guided QA, and chit-chat, ready to use out of the box.
Stars: ✭ 96 (-30.94%)
Mutual labels:  qa, question-answering
Giveme5W
Extraction of the five journalistic W-questions (5W) from news articles
Stars: ✭ 16 (-88.49%)
Mutual labels:  qa, question-answering
Chinese-Psychological-QA-DataSet
A Chinese psychological question-answering dataset
Stars: ✭ 23 (-83.45%)
Mutual labels:  qa, question-answering
Chat
A chatbot based on natural language understanding and machine learning, supporting concurrent multi-user sessions and customizable multi-turn dialogue
Stars: ✭ 516 (+271.22%)
Mutual labels:  question-answering, qa
Bi Att Flow
Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization.
Stars: ✭ 1,472 (+958.99%)
Mutual labels:  question-answering
Knowledge Aware Reader
PyTorch implementation of the ACL 2019 paper "Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader"
Stars: ✭ 123 (-11.51%)
Mutual labels:  question-answering
Qaror
Questions & Answers platform on Rails - stackoverflow clone
Stars: ✭ 107 (-23.02%)
Mutual labels:  qa
Test Data Supplier
Extended TestNG DataProvider
Stars: ✭ 105 (-24.46%)
Mutual labels:  qa
Nlp Papers
Papers and Book to look at when starting NLP 📚
Stars: ✭ 111 (-20.14%)
Mutual labels:  qa
Dan Jurafsky Chris Manning Nlp
My solutions to the Natural Language Processing course taught by Dan Jurafsky and Chris Manning in Winter 2012.
Stars: ✭ 124 (-10.79%)
Mutual labels:  question-answering
Qrn
Query-Reduction Networks (QRN)
Stars: ✭ 137 (-1.44%)
Mutual labels:  qa
Kbqa Ar Smcnn
Question answering over Freebase (single-relation)
Stars: ✭ 129 (-7.19%)
Mutual labels:  question-answering
Clicr
Machine reading comprehension on clinical case reports
Stars: ✭ 123 (-11.51%)
Mutual labels:  question-answering
Question Answering
TensorFlow implementation of Match-LSTM and Answer pointer for the popular SQuAD dataset.
Stars: ✭ 133 (-4.32%)
Mutual labels:  question-answering
Chatbot
A Russian-language chatbot
Stars: ✭ 106 (-23.74%)
Mutual labels:  question-answering
Foot traffic
Pure Ruby DSL for Chrome scripting based on Ferrum. No Selenium required. Works from any script. Simulate web app usage scenarios in production or locally.
Stars: ✭ 123 (-11.51%)
Mutual labels:  qa
Medquad
Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites
Stars: ✭ 129 (-7.19%)
Mutual labels:  question-answering

Question_Answering_Models

This repo collects and reproduces models for question answering and machine reading comprehension.

It is still a work in progress and will be supplemented over time.

Community QA

Dataset

WikiQA, TrecQA, InsuranceQA

data preprocessing on WikiQA

cd cQA
bash download.sh
python preprocess_wiki.py

Siamese-NN model

This model is a simple implementation of a Siamese NN QA model trained in a pointwise manner.

See this repo for details

train model

python siamese.py --train

test model

python siamese.py --test
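As a rough sketch of the pointwise setup (an illustration, not the repo's actual code): a shared encoder maps the question and a candidate answer to vectors, a similarity score is computed between them, and the score is trained against a 0/1 relevance label. Here a mean-pool of random "word vectors" stands in for the shared NN/CNN/RNN encoder:

```python
import numpy as np

def encode(token_vecs):
    # Toy shared encoder: mean-pool word vectors into a sentence vector.
    # The repo's models use a trained NN/CNN/RNN encoder here instead.
    return token_vecs.mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def pointwise_loss(score, label):
    # Logistic loss of the match score against a 0/1 relevance label.
    prob = 1.0 / (1.0 + np.exp(-score))
    return -(label * np.log(prob) + (1 - label) * np.log(1 - prob))

rng = np.random.default_rng(0)
question = encode(rng.normal(size=(5, 8)))  # 5 question tokens, dim 8
answer = encode(rng.normal(size=(7, 8)))    # 7 answer tokens, dim 8
score = cosine(question, answer)
loss = pointwise_loss(score, label=1)
```

In the pointwise setting each (question, candidate) pair is scored independently; ranking candidates then amounts to sorting by score.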

Siamese-CNN model

This model is a simple implementation of a Siamese CNN QA model trained in a pointwise manner.

See this repo for details

train model

python siamese.py --train

test model

python siamese.py --test

Siamese-RNN model

This model is a simple implementation of a Siamese RNN/LSTM/GRU QA model trained in a pointwise manner.

See this repo for details

train model

python siamese.py --train

test model

python siamese.py --test

note

All three models above share the vanilla Siamese structure. You can easily combine these basic deep learning module cells to build your own models.

QACNN

Given a question, a positive answer, and a negative answer, this pairwise model learns to rank the correct answer higher than the incorrect one.

See this repo for details

train model

python qacnn.py --train

test model

python qacnn.py --test
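The pairwise objective can be sketched as a margin-based hinge loss over the two answers' match scores (a minimal illustration; the margin value here is an arbitrary assumption, not necessarily the repo's setting):

```python
def pairwise_hinge_loss(pos_score, neg_score, margin=0.5):
    # Zero loss once the positive answer outscores the negative one by at
    # least `margin`; otherwise the violation is penalized linearly.
    return max(0.0, margin - pos_score + neg_score)

# A correctly ranked pair (gap >= margin) contributes no loss;
# a mis-ranked pair does.
ok = pairwise_hinge_loss(0.9, 0.1)
bad = pairwise_hinge_loss(0.3, 0.4)
```

Training consumes (question, positive answer, negative answer) triples, so the model never needs absolute relevance labels, only relative preferences.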

Refer to:

Decomposable Attention Model

See this repo for details

train model

python decomp_att.py --train

test model

python decomp_att.py --test
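The core of the decomposable attention model is the attend step: token-level alignment scores between the two sentences, softmax-normalized in each direction. A minimal numpy sketch (the paper applies a feed-forward transform F before scoring; the identity is used here for brevity):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_align(a, b):
    # Attend step: unnormalized alignment scores e[i, j] = a_i . b_j.
    e = a @ b.T
    beta = softmax(e, axis=1) @ b     # sub-phrase of b aligned to each a_i
    alpha = softmax(e, axis=0).T @ a  # sub-phrase of a aligned to each b_j
    return beta, alpha

rng = np.random.default_rng(0)
a = rng.normal(size=(3, 4))  # question tokens: 3 tokens, dim 4
b = rng.normal(size=(5, 4))  # answer tokens: 5 tokens, dim 4
beta, alpha = soft_align(a, b)
```

The subsequent compare and aggregate steps run a small feed-forward net over each (token, aligned sub-phrase) pair and pool the results.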

Refer to:

Compare-Aggregate Model with Multi-Compare

See this repo for details

train model

python seq_match_seq.py --train

test model

python seq_match_seq.py --test

Refer to:

BiMPM

See this repo for details

train model

python bimpm.py --train

test model

python bimpm.py --test

Refer to:

Machine Reading Comprehension

Dataset

CNN/Daily mail, CBT, SQuAD, MS MARCO, RACE

GA Reader

To be done


Refer to:

SA Reader

To be done


Refer to:

AoA Reader

To be done


Refer to:

  • Attention-over-Attention Neural Networks for Reading Comprehension

BiDAF

See this repo for details

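BiDAF's key idea, context-to-query and query-to-context attention without early summarization, can be sketched as follows (a toy dot-product similarity stands in for the paper's trainable similarity function alpha(h, u, h * u)):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_attention(H, U):
    # H: context encodings (T, d); U: query encodings (J, d).
    S = H @ U.T                             # similarity matrix (T, J)
    c2q = softmax(S, axis=1) @ U            # context-to-query attention
    b = softmax(S.max(axis=1))              # query-to-context weights
    q2c = np.tile(b @ H, (H.shape[0], 1))   # attended context, tiled over T
    # Query-aware context representation G: (T, 4d), no early summarization.
    return np.concatenate([H, c2q, H * c2q, H * q2c], axis=1)

rng = np.random.default_rng(0)
G = bidaf_attention(rng.normal(size=(6, 8)), rng.normal(size=(4, 8)))
```

G then feeds the modeling and answer-pointer layers that predict the span boundaries.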

The result on the dev set (single model) under my experimental environment is as follows:

| training step | batch size | hidden size | EM (%) | F1 (%) | speed | device |
| --- | --- | --- | --- | --- | --- | --- |
| 12W | 32 | 75 | 67.7 | 77.3 | 3.40 it/s | 1 GTX 1080 Ti |
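EM and F1 in these tables are the standard SQuAD answer-span metrics. Roughly (the official evaluation script additionally normalizes answers by lowercasing and stripping articles and punctuation, omitted here):

```python
from collections import Counter

def exact_match(prediction, ground_truth):
    # EM: 1 if the predicted span matches the gold span exactly, else 0.
    return float(prediction == ground_truth)

def f1_score(prediction, ground_truth):
    # Token-overlap F1 between predicted and gold answer spans.
    pred, gold = prediction.split(), ground_truth.split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```

Both metrics are averaged over all questions, taking the maximum over the gold answers for each question.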

Refer to:

RNet

See this repo for details


The result on the dev set (single model) under my experimental environment is as follows:

| training step | batch size | hidden size | EM (%) | F1 (%) | speed | device | RNN type |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 12W | 32 | 75 | 69.1 | 78.2 | 1.35 it/s | 1 GTX 1080 Ti | cuDNNGRU |
| 6W | 64 | 75 | 66.1 | 75.6 | 2.95 s/it | 1 GTX 1080 Ti | SRU |

RNet trained with cuDNNGRU:

RNet trained with SRU (without optimization of operation efficiency):
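The speed difference between the two RNN types comes from their recurrences. Below is a minimal numpy sketch of the SRU recurrence (Lei et al., "Simple Recurrent Units"), assuming equal input and hidden sizes for the highway connection; parameter shapes are illustrative, not the repo's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(x, W, Wf, bf, Wr, br):
    # All matrix products below are time-independent, so they can be
    # batched across the whole sequence up front; only the cheap
    # elementwise recurrence over c runs sequentially.
    xt = x @ W                    # candidate states, all steps at once
    f = sigmoid(x @ Wf + bf)      # forget gates
    r = sigmoid(x @ Wr + br)      # reset (highway) gates
    c = np.zeros(W.shape[1])
    hs = []
    for t in range(x.shape[0]):
        c = f[t] * c + (1.0 - f[t]) * xt[t]
        hs.append(r[t] * np.tanh(c) + (1.0 - r[t]) * x[t])  # highway output
    return np.stack(hs)

rng = np.random.default_rng(0)
d = 6
x = rng.normal(size=(4, d))
h = sru_layer(x, rng.normal(size=(d, d)), rng.normal(size=(d, d)),
              np.zeros(d), rng.normal(size=(d, d)), np.zeros(d))
```

cuDNNGRU instead fuses the full GRU recurrence into optimized GPU kernels, which is why the unoptimized SRU run above is slower despite SRU's simpler recurrence.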

Refer to:

QANet

See this repo for details


The result on the dev set (single model) under my experimental environment is as follows:

| training step | batch size | attention heads | hidden size | EM (%) | F1 (%) | speed | device |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 6W | 32 | 1 | 96 | 70.2 | 79.7 | 2.4 it/s | 1 GTX 1080 Ti |
| 12W | 32 | 1 | 75 | 70.1 | 79.4 | 2.4 it/s | 1 GTX 1080 Ti |

Experimental records for the first experiment:

Experimental records for the second experiment (without smoothing):

Refer to:

  • QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
  • GitHub repo of NLPLearn/QANet

Hybrid Network

See this repo for details

This part contains my experiments and attempts at MRC problems, and I'm still working on it.

| training step | batch size | hidden size | EM (%) | F1 (%) | speed | device | description |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 12W | 32 | 100 | 70.1 | 78.9 | 1.6 it/s | 1 GTX 1080 Ti | \ |
| 12W | 32 | 75 | 70.0 | 79.1 | 1.8 it/s | 1 GTX 1080 Ti | \ |
| 12W | 32 | 75 | 69.5 | 78.8 | 1.8 it/s | 1 GTX 1080 Ti | with spatial dropout on embeddings |

Experimental records for the first experiment (without smoothing):

Experimental records for the second experiment (without smoothing):

Information

For more information, please visit http://skyhigh233.com/blog/2018/04/26/cqa-intro/.
