
Kyung-Min / CompareModels_TRECQA

Licence: other
Compare six baseline deep learning models on TrecQA


Projects that are alternatives of or similar to CompareModels TRECQA

Pytorch Question Answering
Important paper implementations for Question Answering using PyTorch
Stars: ✭ 154 (+152.46%)
Mutual labels:  question-answering, attention-mechanism, nlp-machine-learning
Datastories Semeval2017 Task4
Deep-learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Stars: ✭ 184 (+201.64%)
Mutual labels:  attention-mechanism, nlp-machine-learning
co-attention
Pytorch implementation of "Dynamic Coattention Networks For Question Answering"
Stars: ✭ 54 (-11.48%)
Mutual labels:  question-answering, attention-mechanism
Deeppavlov
An open source library for deep learning end-to-end dialog systems and chatbots.
Stars: ✭ 5,525 (+8957.38%)
Mutual labels:  question-answering, nlp-machine-learning
Dan Jurafsky Chris Manning Nlp
My solution to the Natural Language Processing course made by Dan Jurafsky, Chris Manning in Winter 2012.
Stars: ✭ 124 (+103.28%)
Mutual labels:  question-answering, nlp-machine-learning
NTUA-slp-nlp
💻Speech and Natural Language Processing (SLP & NLP) Lab Assignments for ECE NTUA
Stars: ✭ 19 (-68.85%)
Mutual labels:  attention-mechanism, nlp-machine-learning
Tapas
End-to-end neural table-text understanding models.
Stars: ✭ 583 (+855.74%)
Mutual labels:  question-answering, nlp-machine-learning
datastories-semeval2017-task6
Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-67.21%)
Mutual labels:  attention-mechanism, nlp-machine-learning
Question-Answering-based-on-SQuAD
Question Answering System using BiDAF Model on SQuAD v2.0
Stars: ✭ 20 (-67.21%)
Mutual labels:  question-answering, nlp-machine-learning
MLH-Quizzet
This is a smart Quiz Generator that generates a dynamic quiz from any uploaded text/PDF document using NLP. This can be used for self-analysis, question paper generation, and evaluation, thus reducing human effort.
Stars: ✭ 23 (-62.3%)
Mutual labels:  question-answering, nlp-machine-learning
SentimentAnalysis
Sentiment Analysis: Deep Bi-LSTM+attention model
Stars: ✭ 32 (-47.54%)
Mutual labels:  attention-mechanism, nlp-machine-learning
Image-Caption
Using LSTM or Transformer to solve Image Captioning in Pytorch
Stars: ✭ 36 (-40.98%)
Mutual labels:  attention-mechanism
QuantumForest
Fast Differentiable Forest lib with the advantages of both decision trees and neural networks
Stars: ✭ 63 (+3.28%)
Mutual labels:  attention-mechanism
unanswerable qa
The official implementation for ACL 2021 "Challenges in Information Seeking QA: Unanswerable Questions and Paragraph Retrieval".
Stars: ✭ 21 (-65.57%)
Mutual labels:  question-answering
PAM
[TPAMI 2020] Parallax Attention for Unsupervised Stereo Correspondence Learning
Stars: ✭ 62 (+1.64%)
Mutual labels:  attention-mechanism
VoiceNET.Library
.NET library to easily create Voice Command Control feature.
Stars: ✭ 14 (-77.05%)
Mutual labels:  cnn-model
easyNLP
Do NLP without coding!
Stars: ✭ 19 (-68.85%)
Mutual labels:  nlp-machine-learning
KrantikariQA
An InformationGain based Question Answering over knowledge Graph system.
Stars: ✭ 54 (-11.48%)
Mutual labels:  question-answering
Video-Cap
🎬 Video Captioning: ICCV '15 paper implementation
Stars: ✭ 44 (-27.87%)
Mutual labels:  attention-mechanism
mrqa
Code for EMNLP-IJCNLP 2019 MRQA Workshop Paper: "Domain-agnostic Question-Answering with Adversarial Training"
Stars: ✭ 35 (-42.62%)
Mutual labels:  question-answering

CompareModels_TRECQA

In a QA system that must draw answers from an unstructured corpus, one challenge is choosing the sentence that contains the best answer information for a given question.

These files provide six baseline models, namely average pooling, RNN, CNN, RNN+CNN, QA-LSTM/CNN with attention (Tan et al., 2015; state of the art in 2015), and AP-LSTM/CNN (dos Santos et al., 2016; state of the art in 2016), for the TrecQA answer-selection task (Wang et al., 2007).
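To illustrate the task setup, the simplest of these baselines (average pooling) can be sketched as scoring each candidate sentence by the cosine similarity between averaged word vectors. This is only a minimal sketch of the idea, not the repository's implementation; the toy 4-dimensional embeddings below are invented for illustration.

```python
import numpy as np

def avg_pool(vectors):
    """Average word vectors into a single fixed-size sentence vector."""
    return np.mean(vectors, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "embeddings", invented for illustration only.
emb = {
    "who":         np.array([0.1, 0.9, 0.0, 0.2]),
    "wrote":       np.array([0.7, 0.1, 0.3, 0.0]),
    "hamlet":      np.array([0.2, 0.2, 0.8, 0.5]),
    "shakespeare": np.array([0.3, 0.1, 0.9, 0.4]),
    "penned":      np.array([0.6, 0.2, 0.4, 0.1]),
    "it":          np.array([0.1, 0.1, 0.1, 0.1]),
    "paris":       np.array([0.9, 0.0, 0.1, 0.8]),
    "is":          np.array([0.1, 0.2, 0.0, 0.1]),
    "nice":        np.array([0.0, 0.8, 0.2, 0.3]),
}

def score(question, answer):
    """Score a candidate answer sentence against the question."""
    q = avg_pool([emb[w] for w in question.split()])
    a = avg_pool([emb[w] for w in answer.split()])
    return cosine(q, a)

question = "who wrote hamlet"
candidates = ["shakespeare penned it", "paris is nice"]
best = max(candidates, key=lambda c: score(question, c))
```

The stronger models in the table below replace the averaging step with CNN/RNN encoders and (for the attention variants) question-conditioned weighting, but the scoring setup is the same.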

Model Comparison

All models were trained on train-all using Keras 2.1.2.
Pre-trained GloVe embeddings can be downloaded from http://nlp.stanford.edu/data/glove.6B.zip
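The files in the glove.6B archive are plain text: one token per line, followed by its vector components separated by spaces. A minimal loader sketch (the two sample lines below use invented values, in the same format as glove.6B.*.txt):

```python
import io
import numpy as np

def load_glove(file_obj):
    """Parse GloVe's plain-text format into a {token: vector} dict."""
    vectors = {}
    for line in file_obj:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# Stand-in for open("glove.6B.300d.txt", encoding="utf-8"); values invented.
sample = io.StringIO("the 0.1 0.2 0.3\nquestion 0.4 0.5 0.6\n")
glove = load_glove(sample)
```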
Batch normalization (Ioffe & Szegedy, 2015) was used to improve the models' performance over the results of pasky's experiments:
https://github.com/brmson/dataset-sts/tree/master/data/anssel/wang
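For reference, the batch-normalization transform standardizes each feature over the batch and then applies a learned scale and shift. A minimal NumPy sketch of the training-time forward pass (the gamma, beta, and eps values here are illustrative defaults, not the settings used in these experiments; Keras's BatchNormalization layer additionally tracks running statistics for inference):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-3):
    """Normalize each feature over the batch axis, then scale and shift."""
    mean = x.mean(axis=0)          # per-feature batch mean
    var = x.var(axis=0)            # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A toy batch of 3 examples with 2 features on very different scales.
batch = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
out = batch_norm(batch)            # each column now has ~zero mean, ~unit std
```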

For other published results on this dataset, see https://aclweb.org/aclwiki/Question_Answering_(State_of_the_art)

| Model | devMRR | testMRR | Settings |
|---|---|---|---|
| Avg. pooling | 0.855998 | 0.810032 | pdim=0.5, Ddim=1 |
| CNN | 0.865507 | 0.859114 | pdim=0.5, p_layers=1, Ddim=1 |
| RNN (LSTM) | 0.842302 | 0.827154 | sdim=5~7, rnn=CuDNNLSTM, rnnbidi_mode=concatenate, Ddim=2, proj=False |
| RNN+CNN | 0.862692 | 0.803874 | Ddim=2, p_layers=2, pdim=0.5, rnn=CuDNNLSTM, rnnbidi_mode=concatenate, sdim=1 |
| QA-LSTM/CNN+attention | 0.875321 | 0.832281 | Ddim=[1, 1/2], p_layers=2, pdim=0.5, rnn=CuDNNLSTM, rnnbidi_mode=concatenate, sdim=1, adim=0.5 (state of the art 2015) |
| AP-LSTM/CNN (attentive pooling) | 0.883974 | 0.850000 | Ddim=0.1, p_layers=1, pdim=0.5, rnn=CuDNNLSTM, rnnbidi_mode=concatenate, sdim=5, w_feat_model=rnn, sdim=4 (state of the art 2016) |
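The devMRR and testMRR columns report mean reciprocal rank: for each question, the candidate answer sentences are sorted by model score, and the metric averages the reciprocal rank of the first correct sentence. A minimal reference implementation of the metric (not the repository's evaluation code):

```python
def mean_reciprocal_rank(ranked_labels):
    """ranked_labels: one list per question of 0/1 labels (1 = correct
    answer sentence), already sorted by model score, best first.
    Questions with no correct candidate contribute 0."""
    total = 0.0
    for labels in ranked_labels:
        for rank, label in enumerate(labels, start=1):
            if label == 1:
                total += 1.0 / rank
                break
    return total / len(ranked_labels)

# Two toy questions: correct answer ranked 1st and 2nd -> (1 + 1/2) / 2 = 0.75
mrr = mean_reciprocal_rank([[1, 0, 0], [0, 1, 0]])
```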

New results from this year (2017), on the to-do list to implement:

| Model | testMRR | Reference |
|---|---|---|
| HyperQA | 0.865 | Tay et al. (2017) |
| BiMPM | 0.875 | Wang et al. (2017) |
| Compare-Aggregate | 0.899 | Bian et al. (2017) |
| IWAN | 0.889 | Shen et al. (2017) |

Reference

  • Wang, Mengqiu, Smith, Noah A., and Mitamura, Teruko. 2007. What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA. In EMNLP-CoNLL 2007.
  • Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. LSTM-Based Deep Learning Models for Non-factoid Answer Selection. arXiv preprint arXiv:1511.04108.
  • Sergey Ioffe and Christian Szegedy. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML 2015.
  • Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive Pooling Networks. arXiv preprint arXiv:1602.03609.
  • Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2017. Enabling Efficient Question Answer Retrieval via Hyperbolic Neural Networks. arXiv preprint arXiv:1707.07847.
  • Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral Multi-Perspective Matching for Natural Language Sentences. arXiv preprint arXiv:1702.03814.
  • Weijie Bian, Si Li, Zhao Yang, Guang Chen, and Zhiqing Lin. 2017. A Compare-Aggregate Model with Dynamic-Clip Attention for Answer Selection. In CIKM 2017.
  • Gehui Shen, Yunlun Yang, and Zhi-Hong Deng. 2017. Inter-Weighted Alignment Network for Sentence Pair Modeling. In EMNLP 2017.