SJTU-lqiu / QA4IE

License: MIT
Original implementation of QA4IE

Programming Languages

  • Python
  • Jupyter Notebook
  • Shell

Projects that are alternatives to or similar to QA4IE

ReQuest
Indirect Supervision for Relation Extraction Using Question-Answer Pairs (WSDM'18)
Stars: ✭ 26 (+8.33%)
Mutual labels:  information-extraction, question-answering
Dan Jurafsky Chris Manning Nlp
My solutions to the Natural Language Processing course taught by Dan Jurafsky and Chris Manning in Winter 2012.
Stars: ✭ 124 (+416.67%)
Mutual labels:  information-extraction, question-answering
slotminer
Tool for slot extraction from text
Stars: ✭ 15 (-37.5%)
Mutual labels:  information-extraction
MLH-Quizzet
This is a smart Quiz Generator that generates a dynamic quiz from any uploaded text/PDF document using NLP. This can be used for self-analysis, question paper generation, and evaluation, thus reducing human effort.
Stars: ✭ 23 (-4.17%)
Mutual labels:  question-answering
minie
An open information extraction system that provides compact extractions
Stars: ✭ 83 (+245.83%)
Mutual labels:  information-extraction
neji
Flexible and powerful platform for biomedical information extraction from text
Stars: ✭ 37 (+54.17%)
Mutual labels:  information-extraction
NCE-CNN-Torch
Noise-Contrastive Estimation for Question Answering with Convolutional Neural Networks (Rao et al. CIKM 2016)
Stars: ✭ 54 (+125%)
Mutual labels:  question-answering
InformationExtractionSystem
An information extraction system that can perform NLP tasks such as Named Entity Recognition, Sentence Simplification, and Relation Extraction.
Stars: ✭ 27 (+12.5%)
Mutual labels:  information-extraction
GAR
Code and resources for papers "Generation-Augmented Retrieval for Open-Domain Question Answering" and "Reader-Guided Passage Reranking for Open-Domain Question Answering", ACL 2021
Stars: ✭ 38 (+58.33%)
Mutual labels:  question-answering
trinity-ie
Information extraction pipeline containing coreference resolution, named entity linking, and relationship extraction
Stars: ✭ 59 (+145.83%)
Mutual labels:  information-extraction
nested-ner-tacl2020-flair
Implementation of Nested Named Entity Recognition using Flair
Stars: ✭ 23 (-4.17%)
Mutual labels:  information-extraction
strategy
Improving Machine Reading Comprehension with General Reading Strategies
Stars: ✭ 35 (+45.83%)
Mutual labels:  question-answering
TabInOut
Framework for information extraction from tables
Stars: ✭ 37 (+54.17%)
Mutual labels:  information-extraction
ODSQA
ODSQA: OPEN-DOMAIN SPOKEN QUESTION ANSWERING DATASET
Stars: ✭ 43 (+79.17%)
Mutual labels:  question-answering
gated-attention-reader
TensorFlow/PyTorch implementation of the Gated Attention Reader
Stars: ✭ 37 (+54.17%)
Mutual labels:  question-answering
patrick-wechat
⭐️🐟 A questionnaire WeChat mini-program built with taro, taro-ui, and heart.
Stars: ✭ 74 (+208.33%)
Mutual labels:  question-answering
Deep-NLP-Resources
Curated list of all NLP Resources
Stars: ✭ 65 (+170.83%)
Mutual labels:  information-extraction
QA HRDE LTC
TensorFlow implementation of "Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering," NAACL-18
Stars: ✭ 29 (+20.83%)
Mutual labels:  question-answering
simple NER
Simple rule-based named entity recognition
Stars: ✭ 29 (+20.83%)
Mutual labels:  information-extraction
algorithm-implementations
Implementations of competitive programming algorithms
Stars: ✭ 11 (-54.17%)
Mutual labels:  implementation-of-algorithms

PyTorch Version QA4IE Code (Journal Version)

This repository contains the original implementation for the following series of publications on QA4IE.

This branch maintains the code for the journal version of QA4IE. The ISWC conference version is maintained in the iswc branch. The Chinese benchmark and the corresponding code will also be released soon.

Requirements

  • torch == 1.7.1
  • wandb
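
For example, assuming a standard pip environment (the exact torch build may need to match your CUDA version), the dependencies can be installed with:

python3 -m pip install torch==1.7.1 wandb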

Pretrained Models

Download the preprocessed data and pretrained models here.

Uncompress them into ./data and ./out, respectively.
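
For example, assuming the downloads are gzipped tarballs (the archive names below are placeholders for whatever the actual files are called):

mkdir -p data out
tar -xzf qa4ie_data.tar.gz -C ./data
tar -xzf qa4ie_models.tar.gz -C ./out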

Training & Evaluation Scripts

Training and evaluation scripts are provided in ./scripts. Note that you need to execute the dump_SS.sh script before training the QA module, and the dump_QA.sh script before training the AT module; see the sketch below.
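
A possible end-to-end order, assuming each module has a corresponding training script in ./scripts (only dump_SS.sh and dump_QA.sh are named above; the train_*.sh names here are hypothetical):

bash scripts/dump_SS.sh    # dump SS-module outputs; required before QA training
bash scripts/train_QA.sh   # hypothetical name for the QA training script
bash scripts/dump_QA.sh    # dump QA-module outputs; required before AT training
bash scripts/train_AT.sh   # hypothetical name for the AT training script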

IE-setting Evaluation

Evaluate in the IE setting with different types of scorers:

  • mean: average probability of answer sequence
  • prod: product of answer sequence probabilities
  • AT: use the output of the AT module as the score

python3 eval_ie.py --scorer <mean|prod|AT>
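
For intuition, here is a minimal sketch of how the mean and prod scorers could combine the per-token probabilities of a predicted answer (an illustration only, not the repo's actual implementation; the AT scorer relies on a trained AT module and is not reproduced):

import math

def score_answer(token_probs, scorer="mean"):
    """Combine the per-token probabilities of a predicted answer into one score.

    scorer="mean" averages the probabilities; scorer="prod" multiplies them,
    which penalizes longer answers more heavily. The AT scorer from the paper
    uses the output of a trained AT module instead and is not reproduced here.
    """
    if scorer == "mean":
        return sum(token_probs) / len(token_probs)
    if scorer == "prod":
        # product computed in log space for numerical stability
        return math.exp(sum(math.log(p) for p in token_probs))
    raise ValueError(f"unknown scorer: {scorer}")

# Example with a hypothetical 3-token answer:
print(score_answer([0.9, 0.8, 0.95], scorer="mean"))  # ~0.883
print(score_answer([0.9, 0.8, 0.95], scorer="prod"))  # ~0.684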

Issues on Experimental Results

  • Note that the results obtained by running the code in this repo will be slightly better than those reported in the paper. The main reasons are the use of a better-suited optimizer, a larger batch size, and a learning rate scheduler in the new implementation.

Cite Us

@inproceedings{qiu2018qa4ie,
  title={{QA4IE}: A question answering based framework for information extraction},
  author={Qiu, Lin and Zhou, Hao and Qu, Yanru and Zhang, Weinan and Li, Suoheng and Rong, Shu and Ru, Dongyu and Qian, Lihua and Tu, Kewei and Yu, Yong},
  booktitle={International Semantic Web Conference},
  pages={198--216},
  year={2018},
  organization={Springer}
}

@article{qiu2020qa4ie,
  title={{QA4IE}: A question answering based system for document-level general information extraction},
  author={Qiu, Lin and Ru, Dongyu and Long, Quanyu and Zhang, Weinan and Yu, Yong},
  journal={IEEE Access},
  volume={8},
  pages={29677--29689},
  year={2020},
  publisher={IEEE}
}

@inproceedings{ru2020quachie,
  title={{QuAChIE}: Question Answering based Chinese Information Extraction System},
  author={Ru, Dongyu and Wang, Zhenghui and Qiu, Lin and Zhou, Hao and Li, Lei and Zhang, Weinan and Yu, Yong},
  booktitle={Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages={2177--2180},
  year={2020}
}