david-yoon / QA_HRDE_LTC

License: MIT
TensorFlow implementation of "Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering," NAACL-18

Programming Languages

  • Python
  • Jupyter Notebook
  • Shell

Projects that are alternatives to or similar to QA_HRDE_LTC

pair2vec
pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference
Stars: ✭ 62 (+113.79%)
Mutual labels:  question-answering
TransTQA
Author: Wenhao Yu ([email protected]). EMNLP'20. Transfer Learning for Technical Question Answering.
Stars: ✭ 12 (-58.62%)
Mutual labels:  question-answering
verseagility
Ramp up your custom natural language processing (NLP) task, allowing you to bring your own data, use your preferred frameworks and bring models into production.
Stars: ✭ 23 (-20.69%)
Mutual labels:  question-answering
Instahelp
Instahelp is a Q&A portal website similar to Quora
Stars: ✭ 21 (-27.59%)
Mutual labels:  question-answering
TeBaQA
A question answering system which utilises machine learning.
Stars: ✭ 17 (-41.38%)
Mutual labels:  question-answering
FreebaseQA
The release of the FreebaseQA data set (NAACL 2019).
Stars: ✭ 55 (+89.66%)
Mutual labels:  question-answering
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (+100%)
Mutual labels:  question-answering
intergo
A package for interleaving / multileaving ranking generation in Go
Stars: ✭ 30 (+3.45%)
Mutual labels:  ranking-algorithm
HHH-An-Online-Question-Answering-System-for-Medical-Questions
HBAM: Hierarchical Bi-directional Word Attention Model
Stars: ✭ 44 (+51.72%)
Mutual labels:  question-answering
QANet
A TensorFlow implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension"
Stars: ✭ 31 (+6.9%)
Mutual labels:  question-answering
Dynamic-Coattention-Network-for-SQuAD
Tensorflow implementation of DCN for question answering on the Stanford Question Answering Dataset (SQuAD)
Stars: ✭ 14 (-51.72%)
Mutual labels:  question-answering
dialogbot
dialogbot provides search-based, task-based, and generative dialogue models. A dialogue bot implemented with QA-style, task-oriented, and chit-chat dialogue models; it supports web-search QA, domain-knowledge QA, task-guided QA, and casual chat, and works out of the box.
Stars: ✭ 96 (+231.03%)
Mutual labels:  question-answering
calcipher
Calculates the best possible answer for multiple-choice questions using techniques to maximize accuracy without any other outside resources or knowledge.
Stars: ✭ 15 (-48.28%)
Mutual labels:  question-answering
Question-Answering-based-on-SQuAD
Question Answering System using BiDAF Model on SQuAD v2.0
Stars: ✭ 20 (-31.03%)
Mutual labels:  question-answering
explicit memory tracker
[ACL 2020] Explicit Memory Tracker with Coarse-to-Fine Reasoning for Conversational Machine Reading
Stars: ✭ 35 (+20.69%)
Mutual labels:  question-answering
RL
A set of RL experiments. Currently including: (1) the MDP rank experiment, based on policy gradient algorithm
Stars: ✭ 22 (-24.14%)
Mutual labels:  ranking-algorithm
BERT-for-Chinese-Question-Answering
No description or website provided.
Stars: ✭ 75 (+158.62%)
Mutual labels:  question-answering
extractive rc by runtime mt
Code and datasets of "Multilingual Extractive Reading Comprehension by Runtime Machine Translation"
Stars: ✭ 36 (+24.14%)
Mutual labels:  question-answering
gated-attention-reader
Tensorflow/Pytorch implementation of Gated Attention Reader
Stars: ✭ 37 (+27.59%)
Mutual labels:  question-answering
Shukongdashi
An expert system for fault diagnosis in the CNC (computer numerical control) domain, built in Python using knowledge graphs, natural language processing, and convolutional neural networks.
Stars: ✭ 109 (+275.86%)
Mutual labels:  question-answering

This repository contains the source code and data corpus for the models used in the following paper:

Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering, NAACL-18 (paper)


[requirements]

tensorflow==1.14 (tested)
python==2.7
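
  • for reference, a minimal environment setup might look like the following (a sketch assuming virtualenv and pip are available; only the pinned versions above come from this repository):

 virtualenv --python=python2.7 venv   # isolated Python 2.7 environment
 source venv/bin/activate             # activate it
 pip install tensorflow==1.14         # the version reported as tested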

[download dataset]

  • the data corpus is available from the "releases" tab
  • place each data corpus into the following paths in the project (a setup sketch follows this list):
/data/ubuntu_v1/
/data/ubuntu_v2/
/data/samsungQA/
  • note that ubuntu_v1/v2 are originally from the following GitHub repositories: ubuntu-v1, ubuntu-v2
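
The expected layout can be prepared as below (a sketch; the archive name is hypothetical and depends on the actual release assets):

 mkdir -p data/ubuntu_v1 data/ubuntu_v2 data/samsungQA   # expected layout
 # download each corpus from the "releases" tab, then extract it into its folder, e.g.:
 tar -xzf ubuntu_v1.tar.gz -C data/ubuntu_v1             # hypothetical archive name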

[source code path]

/ data           : contains the datasets (ubuntu v1/v2, samsungQA)
/ src_ubuntu_v1  : source code for ubuntu v1 data
/ src_ubuntu_v2  : source code for ubuntu v2 data
/ src_samsungQA  : source code for samsung QA data

[Training]

  • each source code folder contains training scripts << for example >>
/src_ubuntu_v1/
./run_RDE.sh      : train on the ubuntu_v1 dataset with the RDE model
./run_RDE_LTC.sh  : train on the ubuntu_v1 dataset with the RDE-LTC model
./run_HRDE.sh     : train on the ubuntu_v1 dataset with the HRDE model
./run_HRDE_LTC.sh : train on the ubuntu_v1 dataset with the HRDE-LTC model
  • the best model will be stored in the save folder << for example >>
/src_ubuntu_v1/save/
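
For example, to train the HRDE-LTC model on the ubuntu_v1 dataset:

 cd src_ubuntu_v1
 ./run_HRDE_LTC.sh    # best checkpoint is written to ./save/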

[Inference]

  • each source code folder contains inference code
    << execution example >> /src_ubuntu_v1/
 python eval_RDE.py       : run inference on the ubuntu_v1 test set with the RDE model
 python eval_RDE_LTC.py   : run inference on the ubuntu_v1 test set with the RDE-LTC model
 python eval_HRDE.py      : run inference on the ubuntu_v1 test set with the HRDE model
 python eval_HRDE_LTC.py  : run inference on the ubuntu_v1 test set with the HRDE-LTC model
  • the inference code uses the saved model in the 'save' folder
  • inference results will be stored in the 'save' folder << example >>
/src_ubuntu_v1/save/result_RDE.txt
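
For example, to evaluate a trained RDE model on the ubuntu_v1 test set:

 cd src_ubuntu_v1
 python eval_RDE.py   # loads the checkpoint from ./save/ and writes save/result_RDE.txt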

[cite]

  • Please cite our paper when you use our code, dataset, or model.
@inproceedings{yoon2018learning,
  title={Learning to Rank Question-Answer Pairs Using Hierarchical Recurrent Encoder with Latent Topic Clustering},
  author={Yoon, Seunghyun and Shin, Joongbo and Jung, Kyomin},
  booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  volume={1},
  pages={1575--1584},
  year={2018}
}