LiyuanLucasLiu / LD-Net

License: Apache-2.0
Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling

Programming Languages

Python

Projects that are alternatives of or similar to LD-Net

Bert Sklearn
A sklearn wrapper for Google's BERT model
Stars: ✭ 182 (+22.97%)
Mutual labels:  named-entity-recognition, ner, language-model
Kashgari
Kashgari is a production-level NLP Transfer learning framework built on top of tf.keras for text-labeling and text-classification, includes Word2Vec, BERT, and GPT2 Language Embedding.
Stars: ✭ 2,235 (+1410.14%)
Mutual labels:  named-entity-recognition, ner, sequence-labeling
Phonlp
PhoNLP: A BERT-based multi-task learning toolkit for part-of-speech tagging, named entity recognition and dependency parsing (NAACL 2021)
Stars: ✭ 56 (-62.16%)
Mutual labels:  named-entity-recognition, ner, language-model
Lm Lstm Crf
Empower Sequence Labeling with Task-Aware Language Model
Stars: ✭ 778 (+425.68%)
Mutual labels:  ner, language-model, sequence-labeling
Autoner
Learning Named Entity Tagger from Domain-Specific Dictionary
Stars: ✭ 357 (+141.22%)
Mutual labels:  named-entity-recognition, ner, sequence-labeling
CrossNER
CrossNER: Evaluating Cross-Domain Named Entity Recognition (AAAI-2021)
Stars: ✭ 87 (-41.22%)
Mutual labels:  named-entity-recognition, ner, sequence-labeling
Ncrfpp
NCRF++, a neural sequence labeling toolkit. Easy to apply to any sequence labeling task (e.g., NER, POS, segmentation). It includes character LSTM/CNN, word LSTM/CNN, and softmax/CRF components.
Stars: ✭ 1,767 (+1093.92%)
Mutual labels:  named-entity-recognition, ner, sequence-labeling
Bertweet
BERTweet: A pre-trained language model for English Tweets (EMNLP-2020)
Stars: ✭ 282 (+90.54%)
Mutual labels:  named-entity-recognition, ner, language-model
Cluener2020
CLUENER2020: Chinese fine-grained named entity recognition
Stars: ✭ 689 (+365.54%)
Mutual labels:  named-entity-recognition, ner, sequence-labeling
Named entity recognition
Chinese named entity recognition (with concrete implementations of several models: HMM, CRF, BiLSTM, BiLSTM+CRF)
Stars: ✭ 995 (+572.3%)
Mutual labels:  named-entity-recognition, ner, sequence-labeling
Ntagger
Reference PyTorch code for named entity tagging
Stars: ✭ 58 (-60.81%)
Mutual labels:  ner, sequence-labeling
Torchcrf
An implementation of CRF (Conditional Random Fields) in PyTorch 1.0
Stars: ✭ 58 (-60.81%)
Mutual labels:  named-entity-recognition, ner
Bnlp
BNLP is a natural language processing toolkit for the Bengali language.
Stars: ✭ 127 (-14.19%)
Mutual labels:  named-entity-recognition, ner
Tner
Language model fine-tuning for NER with an easy interface and cross-domain evaluation. We released NER models fine-tuned on various domains via the Hugging Face model hub.
Stars: ✭ 54 (-63.51%)
Mutual labels:  named-entity-recognition, language-model
Turkish Bert Nlp Pipeline
BERT-based NLP pipeline for Turkish: NER, sentiment analysis, question answering, etc.
Stars: ✭ 85 (-42.57%)
Mutual labels:  named-entity-recognition, ner
Ner blstm Crf
LSTM-CRF for NER with the CoNLL-2002 dataset
Stars: ✭ 51 (-65.54%)
Mutual labels:  named-entity-recognition, ner
Bond
BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
Stars: ✭ 96 (-35.14%)
Mutual labels:  named-entity-recognition, ner
Jointre
End-to-end neural relation extraction using deep biaffine attention (ECIR 2019)
Stars: ✭ 41 (-72.3%)
Mutual labels:  named-entity-recognition, ner
Bi Lstm Crf Ner Tf2.0
Named Entity Recognition (NER) task using a Bi-LSTM-CRF model implemented in TensorFlow 2.0+
Stars: ✭ 93 (-37.16%)
Mutual labels:  named-entity-recognition, ner
Neuronblocks
NLP DNN Toolkit - Building Your NLP DNN Models Like Playing Lego
Stars: ✭ 1,356 (+816.22%)
Mutual labels:  model-compression, sequence-labeling

LD-Net


Check Out Our New NER Toolkits 🚀🚀🚀

  • Inference:
    • LightNER: efficient inference with models pre-trained or trained by any of the tools below.
  • Training:
    • LD-Net: train NER models with efficient contextualized representations.
    • VanillaNER: train vanilla NER models with pre-trained embeddings.
  • Distant Training:
    • AutoNER: train NER models without line-by-line annotations while reaching competitive performance.

LD-Net provides sequence labeling models featuring:

  • Efficiency: constructing efficient contextualized representations without retraining language models (see the pruning sketch below).
  • Portability: well-organized, easy-to-modify and well-documented.
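
The layer-pruning idea can be pictured with a short sketch. The class below is a simplified illustration, not the released model_seq code, and every name in it is invented for this example: each layer in a densely connected LSTM stack is scaled by a learnable gate, and layers whose gates collapse toward zero can be dropped from the tagger without retraining the underlying language model.

    import torch
    import torch.nn as nn

    class GatedDenseLSTM(nn.Module):
        """Densely connected LSTM stack with one scalar gate per layer."""

        def __init__(self, input_dim, hidden_dim, num_layers):
            super().__init__()
            # layer i consumes the input plus the outputs of all earlier layers
            self.layers = nn.ModuleList(
                nn.LSTM(input_dim + i * hidden_dim, hidden_dim, batch_first=True)
                for i in range(num_layers)
            )
            self.gates = nn.Parameter(torch.ones(num_layers))

        def forward(self, x):
            outputs = [x]
            for gate, layer in zip(self.gates, self.layers):
                out, _ = layer(torch.cat(outputs, dim=-1))
                outputs.append(gate * out)
            # the contextualized representation concatenates every layer's output
            return torch.cat(outputs[1:], dim=-1)

        def layers_to_keep(self, threshold=0.1):
            # layers whose gates stay near zero are candidates for pruning
            return [i for i, g in enumerate(self.gates) if abs(g.item()) >= threshold]

For instance, GatedDenseLSTM(100, 50, 10)(torch.randn(2, 7, 100)) yields a (2, 7, 500) representation; after training with a sparsity penalty on the gates, layers_to_keep() reports which layers would survive a pruning pass.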

Remarkably, our pre-trained NER model achieves:

  • 92.08 test F1 on the CoNLL03 NER task.
  • a decoding speed of 160K words/sec (a 6× speedup over the unpruned model).

Details about LD-Net can be found at https://arxiv.org/abs/1804.07827.

Model Notes

[Figure: the LD-Net framework]

Benchmarks

Model for CoNLL03 NER                      #FLOPs   Mean(F1)   Std(F1)
Vanilla NER w.o. LM                        3 M      90.78      0.24
LD-Net (w.o. pruning)                      51 M     91.86      0.15
LD-Net (origin, picked based on dev F1)    51 M     91.95      –
LD-Net (pruned)                            5 M      91.84      0.14

Model for CoNLL00 chunking                 #FLOPs   Mean(F1)   Std(F1)
Vanilla NP w.o. LM                         3 M      94.42      0.08
LD-Net (w.o. pruning)                      51 M     96.01      0.07
LD-Net (origin, picked based on dev F1)    51 M     96.13      –
LD-Net (pruned)                            10 M     95.66      0.04

Pretrained Models

Here we provide both pre-trained language models and pre-trained sequence labeling models.

Language Models

Our pre-trained language model consists of a word embedding layer, a 10-layer densely connected LSTM, and an adaptive softmax; it achieves an average perplexity (PPL) of 50.06 on the One Billion Word Benchmark.
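
For concreteness, here is a minimal PyTorch sketch of a model with that shape: embedding, densely connected LSTM stack, and adaptive softmax. It uses torch.nn.AdaptiveLogSoftmaxWithLoss from modern PyTorch rather than whatever the pinned torch==0.4.1 provides, and the dimensions and cutoffs are placeholders, not the released configuration.

    import torch
    import torch.nn as nn

    class DenseLSTMLanguageModel(nn.Module):
        def __init__(self, vocab_size=100000, emb_dim=300, hidden_dim=300, num_layers=10):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.layers = nn.ModuleList(
                nn.LSTM(emb_dim + i * hidden_dim, hidden_dim, batch_first=True)
                for i in range(num_layers)
            )
            # adaptive softmax keeps the output layer affordable on a huge vocabulary
            self.out = nn.AdaptiveLogSoftmaxWithLoss(
                emb_dim + num_layers * hidden_dim, vocab_size,
                cutoffs=[2000, 10000, 50000],
            )

        def forward(self, tokens, targets):
            outputs = [self.embed(tokens)]            # (batch, seq, emb_dim)
            for layer in self.layers:
                out, _ = layer(torch.cat(outputs, dim=-1))
                outputs.append(out)
            hidden = torch.cat(outputs, dim=-1)       # dense connections
            # AdaptiveLogSoftmaxWithLoss takes (N, in_features) inputs and (N,) targets
            flat = hidden.reshape(-1, hidden.size(-1))
            return self.out(flat, targets.reshape(-1)).loss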

Forward Language Model Backward Language Model
Download Link Download Link

Named Entity Recognition

The original pre-trained named entity tagger achieves 91.95 F1; the pruned tagger achieves 92.08 F1.

Original Tagger Pruned Tagger
Download Link Download Link

Chunking

The original pre-trained chunking model achieves 96.13 F1; the pruned model achieves 95.79 F1.

Original Tagger Pruned Tagger
Download Link Download Link
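
If you want to inspect a downloaded checkpoint before wiring it into the code, a generic PyTorch load works; the filename below is hypothetical, and the archive layout is an assumption to verify against the documentation.

    import torch

    # hypothetical filename; substitute the file you actually downloaded
    checkpoint = torch.load("ldnet_conll03_pruned.th", map_location="cpu")

    # inspect the structure before assuming any particular layout
    print(type(checkpoint))
    if isinstance(checkpoint, dict):
        print(list(checkpoint.keys()))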

Training

Demo Scripts

To prune the original LD-Net for CoNLL03 NER, run:

bash ldnet_ner_prune.sh

To prune the original LD-Net for CoNLL00 chunking, run:

bash ldnet_np_prune.sh

Dependencies

Our package is based on Python 3.6 and the following packages (a setup command follows the list):

numpy
tqdm
torch-scope
torch==0.4.1
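
Since these requirements pin an old PyTorch release (whose wheels may not exist for every platform), an isolated Python 3.6 environment is the safest route. One possible setup command, assuming pip:

    pip install numpy tqdm torch-scope torch==0.4.1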

Data

Pre-processing scripts are available in pre_seq and pre_word_ada, and the pre-processed data is available at:

NER Chunking
Download Link Download Link

Model

Our implementations are available in model_seq and model_word_ada, and the documentation is hosted on Read the Docs.

NER Chunking
Download Link Download Link

Inference

For model inference, please check our LightNER package.
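
As a rough illustration, LightNER exposes a Python decoding interface along these lines; the exact entry points (decoder_wrapper, decode) are recalled from its README and should be treated as assumptions to verify against the LightNER documentation.

    # assumed LightNER usage; verify the API against the LightNER docs
    from lightner import decoder_wrapper

    model = decoder_wrapper()  # assumption: loads a default pre-trained decoder
    print(model.decode(["LD", "-", "Net", "prunes", "language", "models", "."]))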

Citation

If you find the implementation useful, please cite the following paper: Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling

@inproceedings{liu2018efficient,
  title = "{Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling}", 
  author = {Liu, Liyuan and Ren, Xiang and Shang, Jingbo and Peng, Jian and Han, Jiawei}, 
  booktitle = {EMNLP}, 
  year = 2018, 
}