chakki-works / Seqeval

License: MIT
A Python framework for sequence labeling evaluation (named-entity recognition, POS tagging, etc.)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Seqeval

Neuronlp2
Deep neural models for core NLP tasks (Pytorch version)
Stars: ✭ 397 (-21.85%)
Mutual labels:  natural-language-processing, named-entity-recognition, sequence-labeling
Flair
A very simple framework for state-of-the-art Natural Language Processing (NLP)
Stars: ✭ 11,065 (+2078.15%)
Mutual labels:  natural-language-processing, named-entity-recognition, sequence-labeling
Anago
Bidirectional LSTM-CRF and ELMo for Named-Entity Recognition, Part-of-Speech Tagging and so on.
Stars: ✭ 1,392 (+174.02%)
Mutual labels:  natural-language-processing, named-entity-recognition, sequence-labeling
Ncrfpp
NCRF++, a Neural Sequence Labeling Toolkit. Easy to use for any sequence labeling task (e.g. NER, POS, segmentation). It includes character LSTM/CNN, word LSTM/CNN and softmax/CRF components.
Stars: ✭ 1,767 (+247.83%)
Mutual labels:  natural-language-processing, named-entity-recognition, sequence-labeling
AlpacaTag
AlpacaTag: An Active Learning-based Crowd Annotation Framework for Sequence Tagging (ACL 2019 Demo)
Stars: ✭ 126 (-75.2%)
Mutual labels:  named-entity-recognition, sequence-labeling
Pytorch-NLU
Pytorch-NLU, a Chinese text classification and sequence labeling toolkit. It supports multi-class and multi-label classification of Chinese long and short texts, and sequence labeling tasks such as Chinese named entity recognition, part-of-speech tagging, and word segmentation.
Stars: ✭ 151 (-70.28%)
Mutual labels:  named-entity-recognition, sequence-labeling
Awesome Persian Nlp Ir
Curated List of Persian Natural Language Processing and Information Retrieval Tools and Resources
Stars: ✭ 460 (-9.45%)
Mutual labels:  natural-language-processing, named-entity-recognition
Gector
Official implementation of the paper “GECToR – Grammatical Error Correction: Tag, Not Rewrite”, published at the BEA Workshop (co-located with ACL 2020): https://www.aclweb.org/anthology/2020.bea-1.16.pdf
Stars: ✭ 287 (-43.5%)
Mutual labels:  natural-language-processing, sequence-labeling
Pytorch Bert Crf Ner
A Korean named-entity recognizer built with KoBERT and CRF (BERT+CRF based Named Entity Recognition model for Korean)
Stars: ✭ 236 (-53.54%)
Mutual labels:  natural-language-processing, named-entity-recognition
Chatbot ner
chatbot_ner: Named Entity Recognition for chatbots.
Stars: ✭ 273 (-46.26%)
Mutual labels:  natural-language-processing, named-entity-recognition
Slot filling and intent detection of slu
Slot filling, intent detection, joint training, ATIS & SNIPS datasets, Facebook's multilingual dataset, MIT corpus, E-commerce Shopping Assistant (ECSA) dataset, CoNLL-2003 NER, ELMo, BERT, XLNet
Stars: ✭ 298 (-41.34%)
Mutual labels:  named-entity-recognition, sequence-labeling
CrossNER
CrossNER: Evaluating Cross-Domain Named Entity Recognition (AAAI-2021)
Stars: ✭ 87 (-82.87%)
Mutual labels:  named-entity-recognition, sequence-labeling
sequence labeling tf
Sequence labeling in TensorFlow
Stars: ✭ 18 (-96.46%)
Mutual labels:  named-entity-recognition, sequence-labeling
CrowdLayer
A neural network layer that enables training of deep neural networks directly from crowdsourced labels (e.g. from Amazon Mechanical Turk) or, more generally, labels from multiple annotators with different biases and levels of expertise.
Stars: ✭ 45 (-91.14%)
Mutual labels:  named-entity-recognition, sequence-labeling
pyner
🌈 Implementation of Neural Network based Named Entity Recognizer (Lample+, 2016) using Chainer.
Stars: ✭ 45 (-91.14%)
Mutual labels:  named-entity-recognition, sequence-labeling
Ner
Named Entity Recognition
Stars: ✭ 288 (-43.31%)
Mutual labels:  natural-language-processing, named-entity-recognition
Spacy Streamlit
👑 spaCy building blocks and visualizers for Streamlit apps
Stars: ✭ 360 (-29.13%)
Mutual labels:  natural-language-processing, named-entity-recognition
Autoner
Learning Named Entity Tagger from Domain-Specific Dictionary
Stars: ✭ 357 (-29.72%)
Mutual labels:  named-entity-recognition, sequence-labeling
Nlp Progress
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
Stars: ✭ 19,518 (+3742.13%)
Mutual labels:  natural-language-processing, named-entity-recognition
Spacy Lookup
Named Entity Recognition based on dictionaries
Stars: ✭ 212 (-58.27%)
Mutual labels:  natural-language-processing, named-entity-recognition

seqeval

seqeval is a Python framework for sequence labeling evaluation. seqeval can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, semantic role labeling and so on.

seqeval is well tested against the Perl script conlleval, which can be used to measure the performance of a system that has processed the CoNLL-2000 shared task data.

Supported features

seqeval supports the following schemes (illustrated after the list):

  • IOB1
  • IOB2
  • IOE1
  • IOE2
  • IOBES (only in strict mode)
  • BILOU (only in strict mode)
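
As an illustration, here is how the same two entities of a hypothetical sentence are encoded under three of these schemes (the tokens and label sequences are constructed examples, not part of seqeval's API):

>>> tokens = ['John', 'Smith', 'visited', 'Paris']
>>> iob2  = ['B-PER', 'I-PER', 'O', 'B-LOC']   # every chunk starts with B-
>>> iobes = ['B-PER', 'E-PER', 'O', 'S-LOC']   # E- ends a chunk, S- marks a single-token chunk
>>> bilou = ['B-PER', 'L-PER', 'O', 'U-LOC']   # L- is the last token, U- a unit-length chunk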

It also supports the following metrics (a minimal usage sketch follows the list):

  • accuracy_score(y_true, y_pred): Compute the accuracy.
  • precision_score(y_true, y_pred): Compute the precision.
  • recall_score(y_true, y_pred): Compute the recall.
  • f1_score(y_true, y_pred): Compute the F1 score, also known as the balanced F-score or F-measure.
  • classification_report(y_true, y_pred, digits=2): Build a text report showing the main classification metrics. digits sets the number of digits used to format floating-point output values (default: 2).
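
All of these metrics take the gold and predicted labels as nested lists of tags, one inner list per sentence. A minimal sketch of the shared call interface (the input lists are examples constructed here; the perfect scores follow from the identical inputs):

>>> from seqeval.metrics import accuracy_score, precision_score, recall_score
>>> y_true = [['B-PER', 'I-PER', 'O']]
>>> y_pred = [['B-PER', 'I-PER', 'O']]
>>> accuracy_score(y_true, y_pred)
1.0
>>> precision_score(y_true, y_pred)
1.0
>>> recall_score(y_true, y_pred)
1.0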

Usage

seqeval supports two evaluation modes. You can specify either of the following modes for each metric:

  • default
  • strict

The default mode is compatible with conlleval. If you want to use the default mode, you don't need to specify it:

>>> from seqeval.metrics import accuracy_score
>>> from seqeval.metrics import classification_report
>>> from seqeval.metrics import f1_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> f1_score(y_true, y_pred)
0.5
>>> classification_report(y_true, y_pred)
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2
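
The 0.5 comes from entity-level counting: y_true contains two entities (a MISC span and a PER span), and y_pred also contains two, but the predicted MISC span starts one token too early, so only the PER span matches exactly. With one true positive, one false positive, and one false negative, precision = recall = 1/2 and F1 = 2 * 0.5 * 0.5 / (0.5 + 0.5) = 0.5, which the scalar metrics confirm:

>>> from seqeval.metrics import precision_score, recall_score
>>> precision_score(y_true, y_pred)
0.5
>>> recall_score(y_true, y_pred)
0.5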

In strict mode, the inputs are evaluated according to the specified scheme. The behavior of strict mode differs from the default mode, which is designed to simulate conlleval. To use strict mode, specify both the mode='strict' and scheme arguments:

>>> from seqeval.scheme import IOB2
>>> classification_report(y_true, y_pred, mode='strict', scheme=IOB2)
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2

A minimal example showing the difference between the default and strict modes:

>>> from seqeval.metrics import classification_report
>>> from seqeval.scheme import IOB2
>>> y_true = [['B-NP', 'I-NP', 'O']]
>>> y_pred = [['I-NP', 'I-NP', 'O']]
>>> classification_report(y_true, y_pred)
              precision    recall  f1-score   support

          NP       1.00      1.00      1.00         1

   micro avg       1.00      1.00      1.00         1
   macro avg       1.00      1.00      1.00         1
weighted avg       1.00      1.00      1.00         1

>>> classification_report(y_true, y_pred, mode='strict', scheme=IOB2)
              precision    recall  f1-score   support

          NP       0.00      0.00      0.00         1

   micro avg       0.00      0.00      0.00         1
   macro avg       0.00      0.00      0.00         1
weighted avg       0.00      0.00      0.00         1
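
The divergence comes from how each mode treats a chunk that begins with I-NP: the default mode, like conlleval, still counts ['I-NP', 'I-NP'] as an NP chunk, while strict IOB2 requires every chunk to start with B-, so the prediction contains no valid entity. Because the mode can be specified for each metric, the same arguments apply to the scalar metrics as well, for example:

>>> from seqeval.metrics import f1_score
>>> f1_score(y_true, y_pred)
1.0
>>> f1_score(y_true, y_pred, mode='strict', scheme=IOB2)
0.0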

Installation

To install seqeval, simply run:

pip install seqeval

License

MIT

Citation

@misc{seqeval,
  title={{seqeval}: A Python framework for sequence labeling evaluation},
  url={https://github.com/chakki-works/seqeval},
  note={Software available from https://github.com/chakki-works/seqeval},
  author={Hiroki Nakayama},
  year={2018},
}