
limteng-rpi / neural_name_tagging

Licence: other
Code for "Reliability-aware Dynamic Feature Composition for Name Tagging" (ACL2019)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to neural_name_tagging

Ner Bert Pytorch
PyTorch solution for the named entity recognition task using Google AI's pre-trained BERT model.
Stars: ✭ 249 (+538.46%)
Mutual labels:  information-extraction, named-entity-recognition, ner
Snips Nlu
Snips Python library to extract meaning from text
Stars: ✭ 3,583 (+9087.18%)
Mutual labels:  information-extraction, named-entity-recognition, ner
Ncrfpp
NCRF++, a Neural Sequence Labeling Toolkit. Easy to use for any sequence labeling task (e.g. NER, POS, Segmentation). It includes character LSTM/CNN, word LSTM/CNN and softmax/CRF components.
Stars: ✭ 1,767 (+4430.77%)
Mutual labels:  named-entity-recognition, ner, lstm-crf
simple NER
Simple rule-based named entity recognition
Stars: ✭ 29 (-25.64%)
Mutual labels:  information-extraction, named-entity-recognition, ner
Dan Jurafsky Chris Manning Nlp
My solution to the Natural Language Processing course taught by Dan Jurafsky and Chris Manning in Winter 2012.
Stars: ✭ 124 (+217.95%)
Mutual labels:  information-extraction, named-entity-recognition, ner
IE Paper Notes
Paper notes for Information Extraction, including Relation Extraction (RE), Named Entity Recognition (NER), Entity Linking (EL), Event Extraction (EE), Named Entity Disambiguation (NED).
Stars: ✭ 14 (-64.1%)
Mutual labels:  information-extraction, named-entity-recognition
LNEx
📍 🏢 🏦 🏣 🏪 🏬 LNEx: Location Name Extractor
Stars: ✭ 21 (-46.15%)
Mutual labels:  information-extraction, named-entity-recognition
lima
The Libre Multilingual Analyzer, a Natural Language Processing (NLP) C++ toolkit.
Stars: ✭ 75 (+92.31%)
Mutual labels:  information-extraction, named-entity-recognition
neji
Flexible and powerful platform for biomedical information extraction from text
Stars: ✭ 37 (-5.13%)
Mutual labels:  information-extraction, named-entity-recognition
knowledge-graph-nlp-in-action
From model training to deployment: a hands-on knowledge graph (Knowledge Graph) and natural language processing (NLP) project. Uses TensorFlow, BERT + Bi-LSTM + CRF, Neo4j, etc., and covers tasks such as Named Entity Recognition, Text Classification, Information Extraction, and Relation Extraction.
Stars: ✭ 58 (+48.72%)
Mutual labels:  information-extraction, named-entity-recognition
PhoNER COVID19
COVID-19 Named Entity Recognition for Vietnamese (NAACL 2021)
Stars: ✭ 55 (+41.03%)
Mutual labels:  named-entity-recognition, ner
nested-ner-tacl2020-flair
Implementation of Nested Named Entity Recognition using Flair
Stars: ✭ 23 (-41.03%)
Mutual labels:  information-extraction, named-entity-recognition
Daguan 2019 rank9
datagrand 2019 information extraction competition rank9
Stars: ✭ 121 (+210.26%)
Mutual labels:  information-extraction, ner
KoBERT-NER
NER Task with KoBERT (with Naver NLP Challenge dataset)
Stars: ✭ 76 (+94.87%)
Mutual labels:  named-entity-recognition, ner
trinity-ie
Information extraction pipeline containing coreference resolution, named entity linking, and relationship extraction
Stars: ✭ 59 (+51.28%)
Mutual labels:  information-extraction, named-entity-recognition
Nested Ner Tacl2020 Transformers
Implementation of Nested Named Entity Recognition using BERT
Stars: ✭ 76 (+94.87%)
Mutual labels:  information-extraction, named-entity-recognition
Awesome Hungarian Nlp
A curated list of NLP resources for Hungarian
Stars: ✭ 121 (+210.26%)
Mutual labels:  information-extraction, named-entity-recognition
InformationExtractionSystem
Information Extraction System can perform NLP tasks like Named Entity Recognition, Sentence Simplification, Relation Extraction etc.
Stars: ✭ 27 (-30.77%)
Mutual labels:  information-extraction, named-entity-recognition
slotminer
Tool for slot extraction from text
Stars: ✭ 15 (-61.54%)
Mutual labels:  information-extraction, named-entity-recognition
Understanding Financial Reports Using Natural Language Processing
Investigate how mutual funds leverage credit derivatives by studying their routine filings to the SEC using NLP techniques 📈🤑
Stars: ✭ 36 (-7.69%)
Mutual labels:  information-extraction, named-entity-recognition

Dynamic Feature Composition for Name Tagging

Code for our ACL2019 paper Reliability-aware Dynamic Feature Composition for Name Tagging.

Input Data Set Directory Structure

  • <input_dir>
    • embed.vocab.tsv (embedding vocab file, 1st column: token, 2nd column: index)
    • embed.count.tsv (embedding token frequency file, 1st column: token, 2nd column: frequency)
    • bc
      • train.tsv (training set)
      • dev.tsv (development set)
      • test.tsv (test set)
      • token.vocab.tsv (token vocab file, 1st column: token, 2nd column: index)
      • char.vocab.tsv (character vocab file, 1st column: character, 2nd column: index)
      • label.vocab.tsv (label vocab file, 1st column: label, 2nd column: index)
    • bn
    • mz
    • nw
    • tc
    • wb

Note:

  • Other subsets have train.tsv, dev.tsv, test.tsv, token.vocab.tsv, char.vocab.tsv, and label.vocab.tsv in their directories.
  • In our experiments, we generated the *.vocab.tsv files from a merged data set of all subsets (the two-column format is illustrated in the loader sketch after these notes).
  • In our experiments, we used CoNLL-format files generated from OntoNotes 5.0 with Pradhan et al.'s scripts, which can be found at https://cemantix.org/data/ontonotes.html.
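
Every *.vocab.tsv file uses the same two-column layout: a token and an integer index separated by a tab. A minimal loader sketch is shown below; the helper name and paths are hypothetical, and this is not code from the repository.

def load_vocab(path):
    # Read a two-column *.vocab.tsv file (token<TAB>index) into a dict.
    vocab = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.rstrip('\n')
            if not line:
                continue
            token, index = line.split('\t')
            vocab[token] = int(index)
    return vocab

# Example usage (paths are placeholders):
# token_vocab = load_vocab('<input_dir>/bc/token.vocab.tsv')
# char_vocab = load_vocab('<input_dir>/bc/char.vocab.tsv')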

Pre-processing

The following functions in proprocess.py can be used to create the vocab and frequency files (an illustrative sketch of the embedding vocab step follows the list).

  • build_all_vocabs takes a list of CoNLL-format files as input and generates {token,char,label}.vocab.tsv in output_dir.
  • build_embed_vocab takes a pre-trained embedding file as input and returns the embedding vocab.
  • build_embed_token_count takes a pre-trained embedding file as input and generates an embedding token frequency file.
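
The exact signatures of these functions are defined in proprocess.py. Purely to illustrate the expected embed.vocab.tsv format, a stand-alone sketch (assuming a whitespace-separated text embedding file with one "<token> <values...>" entry per line; not the repository's implementation) could look like this:

def write_embed_vocab(embed_file, out_path):
    # Write embed.vocab.tsv: 1st column = token, 2nd column = index.
    # Assumes a text-format pre-trained embedding file (e.g. GloVe-style).
    with open(embed_file, encoding='utf-8') as fin, \
         open(out_path, 'w', encoding='utf-8') as fout:
        for index, line in enumerate(fin):
            token = line.rstrip('\n').split(' ')[0]
            fout.write('{}\t{}\n'.format(token, index))

The embed.count.tsv file has the same two-column shape, with a token frequency in the second column instead of an index.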

Train LSTM-CNN

python train_lstmcnn_all.py -d 0 -i <input_dir> -o <output_dir> -e <embedding_file>
  --embed_vocab <embedding_vocab_file> --char_dim 50 --seed <random_seed>

This script trains a model for each subset (the subsets can be specified with the --datasets argument) and reports within-subset (within-genre) and cross-subset (cross-genre) performance.
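
For orientation, the following is a compressed PyTorch sketch of an LSTM-CNN tagger of the kind trained here: character-level CNN features are concatenated with word embeddings and fed to a BiLSTM that scores each token's label. Layer sizes are illustrative, sequence-level decoding (e.g. a CRF layer) is omitted, and this is not the repository's exact implementation.

import torch
import torch.nn as nn

class LstmCnnTagger(nn.Module):
    def __init__(self, word_vocab_size, char_vocab_size, num_labels,
                 word_dim=100, char_dim=50, char_filters=50, hidden_size=200):
        super(LstmCnnTagger, self).__init__()
        self.word_embed = nn.Embedding(word_vocab_size, word_dim)
        self.char_embed = nn.Embedding(char_vocab_size, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, hidden_size,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_size, num_labels)

    def forward(self, words, chars):
        # words: (batch, seq_len); chars: (batch, seq_len, max_word_len)
        b, s, w = chars.size()
        char_emb = self.char_embed(chars.view(b * s, w)).transpose(1, 2)
        char_feat, _ = self.char_cnn(char_emb).max(dim=2)  # max-pool over characters
        char_feat = char_feat.view(b, s, -1)
        word_feat = self.word_embed(words)
        feats, _ = self.lstm(torch.cat([word_feat, char_feat], dim=2))
        return self.out(feats)  # (batch, seq_len, num_labels)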

Train LSTM-CNN with Dynamic Feature Composition

python train_lstmcnn_dfc_all.py -d 0 -i <input_dir> -o <output_dir> -e <embedding_file>
  --embed_vocab <embedding_vocab_file> --embed_count <embedding_freq_file> --char_dim 50 --seed <random_seed>
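
The dynamic feature composition model additionally consumes the embedding token frequency file passed via --embed_count. A heavily simplified sketch of the underlying idea, mixing word-level and character-level features with a gate conditioned on the features and on how frequently (hence how reliably) the word was seen in the embedding training data, is shown below. It illustrates the gating concept only and is not the paper's exact formulation.

import torch
import torch.nn as nn

class DynamicFeatureGate(nn.Module):
    def __init__(self, word_dim, char_dim, freq_dim=1):
        super(DynamicFeatureGate, self).__init__()
        # Project character features to the word-feature size so the two
        # streams can be mixed element-wise.
        self.proj_char = nn.Linear(char_dim, word_dim)
        self.gate = nn.Linear(2 * word_dim + freq_dim, word_dim)

    def forward(self, word_feat, char_feat, log_freq):
        # word_feat: (batch, seq_len, word_dim)
        # char_feat: (batch, seq_len, char_dim)
        # log_freq:  (batch, seq_len, 1), log-scaled embedding-token frequency
        char_feat = self.proj_char(char_feat)
        g = torch.sigmoid(self.gate(torch.cat([word_feat, char_feat, log_freq], dim=2)))
        return g * word_feat + (1.0 - g) * char_feat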

Requirements

  • Python 3.5+
  • PyTorch 1.0

Resources

Reference

Lin, Y., Liu, L., Ji, H., Yu, D., and Han, J. (2019). Reliability-aware Dynamic Feature Composition for Name Tagging. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.

@inproceedings{lin2019reliability,
  title={Reliability-aware Dynamic Feature Composition for Name Tagging},
  author={Lin, Ying and Liu, Liyuan and Ji, Heng and Yu, Dong and Han, Jiawei},
  booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)},
  year={2019}
}