
WindChimeRan / Pytorch_multi_head_selection_re

BERT + reproduce "Joint entity recognition and relation extraction as a multi-head selection problem" for Chinese and English IE

Programming Languages

python

Projects that are alternatives of or similar to Pytorch_multi_head_selection_re

Oie Resources
A curated list of Open Information Extraction (OIE) resources: papers, code, data, etc.
Stars: ✭ 283 (+169.52%)
Mutual labels:  relation-extraction, information-extraction
Open Entity Relation Extraction
Knowledge triples extraction and knowledge base construction based on dependency syntax for open domain text.
Stars: ✭ 350 (+233.33%)
Mutual labels:  relation-extraction, information-extraction
Gcn Over Pruned Trees
Graph Convolution over Pruned Dependency Trees Improves Relation Extraction (authors' PyTorch implementation)
Stars: ✭ 312 (+197.14%)
Mutual labels:  relation-extraction, information-extraction
knowledge-graph-nlp-in-action
Hands-on knowledge graph (Knowledge Graph) and natural language processing (NLP), from model training to deployment. Involves TensorFlow, BERT+Bi-LSTM+CRF, Neo4j, etc., and covers tasks such as Named Entity Recognition, Text Classification, Information Extraction, and Relation Extraction.
Stars: ✭ 58 (-44.76%)
Mutual labels:  information-extraction, relation-extraction
Zhopenie
Chinese Open Information Extraction (Tree-based Triple Relation Extraction Module)
Stars: ✭ 98 (-6.67%)
Mutual labels:  chinese, relation-extraction
Multiple Relations Extraction Only Look Once
Multiple-Relations-Extraction-Only-Look-Once. Just look at the sentence once and extract the multiple pairs of entities and their corresponding relations. An end-to-end joint multi-relation extraction model, applicable to information extraction at http://lic2019.ccf.org.cn/kg.
Stars: ✭ 269 (+156.19%)
Mutual labels:  relation-extraction, information-extraction
Casrel
A Novel Cascade Binary Tagging Framework for Relational Triple Extraction. Accepted by ACL 2020.
Stars: ✭ 329 (+213.33%)
Mutual labels:  relation-extraction, information-extraction
CogIE
CogIE: An Information Extraction Toolkit for Bridging Text and CogNet. ACL 2021
Stars: ✭ 47 (-55.24%)
Mutual labels:  information-extraction, relation-extraction
Deepke
An open-source Chinese relation extraction framework based on deep learning.
Stars: ✭ 525 (+400%)
Mutual labels:  chinese, relation-extraction
Chinesenre
Chinese entity relation extraction in PyTorch, BiLSTM + attention.
Stars: ✭ 463 (+340.95%)
Mutual labels:  chinese, relation-extraction
PSPE
Pretrained Span and Span Pair Encoder; code for "Pre-training Entity Relation Encoder with Intra-span and Inter-span Information", EMNLP 2020. It is based on our NERE toolkit (https://github.com/Receiling/NERE).
Stars: ✭ 17 (-83.81%)
Mutual labels:  information-extraction, relation-extraction
Distre
[ACL 19] Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction
Stars: ✭ 75 (-28.57%)
Mutual labels:  relation-extraction, information-extraction
IE Paper Notes
Paper notes for Information Extraction, including Relation Extraction (RE), Named Entity Recognition (NER), Entity Linking (EL), Event Extraction (EE), Named Entity Disambiguation (NED).
Stars: ✭ 14 (-86.67%)
Mutual labels:  information-extraction, relation-extraction
Tacred Relation
PyTorch implementation of the position-aware attention model for relation extraction
Stars: ✭ 271 (+158.1%)
Mutual labels:  relation-extraction, information-extraction
InformationExtractionSystem
Information Extraction System can perform NLP tasks like Named Entity Recognition, Sentence Simplification, Relation Extraction etc.
Stars: ✭ 27 (-74.29%)
Mutual labels:  information-extraction, relation-extraction
Aggcn
Attention Guided Graph Convolutional Networks for Relation Extraction (authors' PyTorch implementation for the ACL19 paper)
Stars: ✭ 318 (+202.86%)
Mutual labels:  relation-extraction, information-extraction
ReQuest
Indirect Supervision for Relation Extraction Using Question-Answer Pairs (WSDM'18)
Stars: ✭ 26 (-75.24%)
Mutual labels:  information-extraction, relation-extraction
DocuNet
Code and dataset for the IJCAI 2021 paper "Document-level Relation Extraction as Semantic Segmentation".
Stars: ✭ 84 (-20%)
Mutual labels:  information-extraction, relation-extraction
Usc Ds Relationextraction
Distantly Supervised Relation Extraction
Stars: ✭ 378 (+260%)
Mutual labels:  relation-extraction, information-extraction
Lightnlp
A deep learning framework for natural language processing based on PyTorch and torchtext.
Stars: ✭ 739 (+603.81%)
Mutual labels:  chinese, relation-extraction

"Joint entity recognition and relation extraction as a multi-head selection problem" (Expert Syst. Appl, 2018)

paper

official TensorFlow version

This model is extremely useful for real-world RE applications. I originally reimplemented it for a competition (Chinese IE), and have since added the CoNLL04 dataset and a BERT model (results below).
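For orientation, here is a minimal sketch of the paper's selection layer in PyTorch. All names and sizes are illustrative, not the repo's actual code: for every token pair (i, j) and every relation r, the model scores whether j is a head of i under r, with an independent sigmoid per cell so one token can take part in several relations at once.

import torch
import torch.nn as nn

class MultiHeadSelection(nn.Module):
    # Sketch of the selection layer: score every (token i, token j, relation r)
    # triple; "multi-head" means each token may select several heads.
    def __init__(self, hidden_size, num_relations, selection_size=150):
        super().__init__()
        self.u = nn.Linear(hidden_size, selection_size)    # projects candidate head tokens
        self.w = nn.Linear(hidden_size, selection_size)    # projects candidate tail tokens
        self.v = nn.Linear(selection_size, num_relations)  # one logit per relation

    def forward(self, h):
        # h: (batch, seq_len, hidden_size) token encodings from the sentence encoder
        head = self.u(h).unsqueeze(2)      # (batch, seq, 1, sel)
        tail = self.w(h).unsqueeze(1)      # (batch, 1, seq, sel)
        pair = torch.tanh(head + tail)     # broadcast: one vector per (i, j) pair
        return self.v(pair)                # (batch, seq, seq, num_relations) logits

# Training treats each (i, j, r) cell as an independent binary label:
# loss = nn.BCEWithLogitsLoss()(logits, gold_selection_matrix)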

Requirements

  • python 3.7
  • pytorch 1.10

Dataset

Chinese IE

Chinese Information Extraction Competition link

Unzip the *.json files into ./raw_data/chinese/

CoNLL04

We use the data as processed by the official version.

It is already included in ./raw_data/CoNLL04/.

Run

python main.py --mode preprocessing --exp_name chinese_selection_re
python main.py --mode train --exp_name chinese_selection_re 
python main.py --mode evaluation --exp_name chinese_selection_re

To try other experiments, set exp_name to conll_selection_re or conll_bert_re.
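For reference, the command-line interface used above boils down to two arguments. The following is a hypothetical reconstruction of the argument parsing, not the repo's actual main.py:

import argparse

# Hypothetical reconstruction of the CLI shown above; the real main.py may differ.
parser = argparse.ArgumentParser()
parser.add_argument('--mode', required=True,
                    choices=['preprocessing', 'train', 'evaluation'])
parser.add_argument('--exp_name', required=True,
                    choices=['chinese_selection_re', 'conll_selection_re', 'conll_bert_re'])
args = parser.parse_args()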

Result

Chinese

Training speed: 10 min/epoch

               precision   recall    f1
Ours (dev)     0.7443      0.6960    0.7194
Winner (test)  0.8975      0.8886    0.893

CoNLL04

Test set:

                     precision   recall    f1
Ours (LSTM)          0.6531      0.3153    0.4252
Ours (BERT-freeze)   0.5233      0.4975    0.5101
Official             0.6375      0.6043    0.6204

We use the strictest setting: a triplet is correct only if the relation and all tokens of the head and tail entities are correct.
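Under that definition, strict micro-averaged scores can be computed as below. Representing triples as hashable tuples is my assumption for illustration, not necessarily how the repo stores them:

def strict_f1(pred_triples, gold_triples):
    # Strictest setting: a predicted (head tokens, relation, tail tokens)
    # triple counts as correct only on an exact match with a gold triple.
    pred, gold = set(pred_triples), set(gold_triples)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example:
# strict_f1({(("New", "York"), "located_in", ("USA",))},
#           {(("New", "York"), "located_in", ("USA",))})  ->  (1.0, 1.0, 1.0)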

Details

The model was originally built for Chinese IE, so it differs slightly from the official paper: they use pretrained char-word embeddings while we use randomly initialized word embeddings, and they use a 3-layer LSTM while we use a 1-layer LSTM.
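Concretely, the encoder on our side would look roughly like the sketch below; the sizes are illustrative guesses, not the repo's configuration:

import torch.nn as nn

class Encoder(nn.Module):
    # Randomly initialized word embeddings + a single-layer BiLSTM,
    # versus the paper's pretrained char-word embeddings + 3-layer LSTM.
    def __init__(self, vocab_size, emb_size=300, hidden_size=150):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)  # trained from scratch
        self.lstm = nn.LSTM(emb_size, hidden_size, num_layers=1,
                            bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, seq_len, 2 * hidden_size)
        out, _ = self.lstm(self.embedding(token_ids))
        return out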

TODO

  • Tune the hyperparameters for CoNLL04