
davidsbatista / Ner Evaluation

License: MIT
An implementation of full named-entity evaluation metrics based on SemEval'13 Task 9 - not at the tag/token level, but considering all the tokens that are part of the named entity

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Ner Evaluation

Nlp Experiments In Pytorch
PyTorch repository for text categorization and NER experiments in Turkish and English.
Stars: ✭ 35 (-72.22%)
Mutual labels:  named-entity-recognition, ner
Phonlp
PhoNLP: A BERT-based multi-task learning toolkit for part-of-speech tagging, named entity recognition and dependency parsing (NAACL 2021)
Stars: ✭ 56 (-55.56%)
Mutual labels:  named-entity-recognition, ner
Named entity recognition
Chinese named entity recognition (with concrete implementations of several models: HMM, CRF, BiLSTM, BiLSTM+CRF)
Stars: ✭ 995 (+689.68%)
Mutual labels:  named-entity-recognition, ner
Entity Recognition Datasets
A collection of corpora for named entity recognition (NER) and entity recognition tasks. These annotated datasets cover a variety of languages, domains and entity types.
Stars: ✭ 891 (+607.14%)
Mutual labels:  named-entity-recognition, ner
Multilstm
Keras attentional bi-LSTM-CRF for joint NLU (slot filling and intent detection) with ATIS
Stars: ✭ 122 (-3.17%)
Mutual labels:  named-entity-recognition, ner
Chinesener
Chinese named entity recognition and entity extraction, in TensorFlow and PyTorch, with BiLSTM+CRF
Stars: ✭ 938 (+644.44%)
Mutual labels:  named-entity-recognition, ner
Ner blstm Crf
LSTM-CRF for NER with the CoNLL-2002 dataset
Stars: ✭ 51 (-59.52%)
Mutual labels:  named-entity-recognition, ner
Bert Multitask Learning
BERT for Multitask Learning
Stars: ✭ 380 (+201.59%)
Mutual labels:  named-entity-recognition, ner
Bi Lstm Crf Ner Tf2.0
Named Entity Recognition (NER) using a Bi-LSTM-CRF model implemented in TensorFlow 2.0+
Stars: ✭ 93 (-26.19%)
Mutual labels:  named-entity-recognition, ner
Turkish Bert Nlp Pipeline
BERT-based NLP pipeline for Turkish: NER, sentiment analysis, question answering, etc.
Stars: ✭ 85 (-32.54%)
Mutual labels:  named-entity-recognition, ner
Yedda
YEDDA: A Lightweight Collaborative Text Span Annotation Tool. Code for ACL 2018 Best Demo Paper Nomination.
Stars: ✭ 704 (+458.73%)
Mutual labels:  named-entity-recognition, ner
Dan Jurafsky Chris Manning Nlp
My solutions to the Natural Language Processing course taught by Dan Jurafsky and Chris Manning in Winter 2012.
Stars: ✭ 124 (-1.59%)
Mutual labels:  named-entity-recognition, ner
Cluener2020
CLUENER2020: Chinese Fine-Grained Named Entity Recognition
Stars: ✭ 689 (+446.83%)
Mutual labels:  named-entity-recognition, ner
Tf ner
Simple and efficient TensorFlow implementations of NER models with tf.estimator and tf.data
Stars: ✭ 876 (+595.24%)
Mutual labels:  named-entity-recognition, ner
Lightkg
A deep learning framework for knowledge graphs, based on PyTorch and torchtext.
Stars: ✭ 452 (+258.73%)
Mutual labels:  named-entity-recognition, ner
Jointre
End-to-end neural relation extraction using deep biaffine attention (ECIR 2019)
Stars: ✭ 41 (-67.46%)
Mutual labels:  named-entity-recognition, ner
Autoner
Learning Named Entity Tagger from Domain-Specific Dictionary
Stars: ✭ 357 (+183.33%)
Mutual labels:  named-entity-recognition, ner
Spacy Streamlit
👑 spaCy building blocks and visualizers for Streamlit apps
Stars: ✭ 360 (+185.71%)
Mutual labels:  named-entity-recognition, ner
Torchcrf
An implementation of CRF (Conditional Random Fields) in PyTorch 1.0
Stars: ✭ 58 (-53.97%)
Mutual labels:  named-entity-recognition, ner
Bond
BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision
Stars: ✭ 96 (-23.81%)
Mutual labels:  named-entity-recognition, ner

Named Entity Evaluation as in SemEval 2013 task 9.1

My own implementation, with lots of input from Matt Upson, of the named-entity recognition evaluation metrics as defined in SemEval 2013, task 9.1.

These evaluation metrics go beyond a simple token/tag-based scheme: they consider different scenarios based on whether all the tokens that belong to a named entity were classified or not, and also on whether the correct entity type was assigned.
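For instance, as a toy illustration (not code from this repository): a two-token entity that is only half-tagged still scores well at the token level, while at the entity level it is at best a partial match.

gold_tags = ["B-LOC", "I-LOC", "O"]  # "New York ." -> one two-token LOC entity
pred_tags = ["B-LOC", "O", "O"]      # only "New" was recognised

# Token level: 2 of 3 tags agree, which looks reasonable...
print(sum(g == p for g, p in zip(gold_tags, pred_tags)) / len(gold_tags))  # 0.666...
# ...yet the entity "New York" was never fully recovered: at the entity
# level this is only a partial (boundary) match, which is exactly the
# kind of distinction these metrics capture.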

You can find a more detailed explanation in the following blog post:

Notes:

In scenarios IV and VI the entity type of the true and predicted entities does not match; in both cases we score only against the true entity, not the predicted one. One could argue that the predicted entity should also be scored as spurious, but according to the definition of spurious:

  • Spurious (SPU): the system produces a response which doesn't exist in the golden annotation;

In this case an annotation does exist, just with a different entity type, so we count it only as incorrect.
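To make the five possible outcomes concrete, here is a minimal sketch of the bucketing logic described above. It is not the repository's actual implementation; the Entity tuple and the semeval_counts function are invented for illustration:

from collections import namedtuple

# Hypothetical span representation: entity type plus token offsets.
Entity = namedtuple("Entity", ["e_type", "start", "end"])

def semeval_counts(true_entities, pred_entities):
    counts = {"correct": 0, "incorrect": 0, "partial": 0,
              "missed": 0, "spurious": 0}
    matched_true = set()
    for pred in pred_entities:
        # Exact boundary match against some gold entity?
        exact = next((t for t in true_entities
                      if (t.start, t.end) == (pred.start, pred.end)), None)
        if exact is not None:
            matched_true.add(exact)
            if exact.e_type == pred.e_type:
                counts["correct"] += 1    # boundaries and type both agree
            else:
                # Scenarios IV/VI: scored once, against the true entity only.
                counts["incorrect"] += 1
            continue
        # Overlapping boundaries count as a partial match.
        overlap = next((t for t in true_entities
                        if t.start <= pred.end and pred.start <= t.end), None)
        if overlap is not None:
            matched_true.add(overlap)
            counts["partial"] += 1
        else:
            counts["spurious"] += 1       # no gold annotation for this span at all
    # Gold entities never touched by any prediction are missed.
    counts["missed"] = sum(1 for t in true_entities if t not in matched_true)
    return counts

gold = [Entity("LOC", 0, 1), Entity("PER", 5, 6)]
pred = [Entity("ORG", 0, 1),     # right span, wrong type -> incorrect, not spurious
        Entity("MISC", 10, 10)]  # no gold annotation here -> spurious
print(semeval_counts(gold, pred))
# {'correct': 0, 'incorrect': 1, 'partial': 0, 'missed': 1, 'spurious': 1}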

Example:

You can see a working example in the following notebook:

Note that in order to run that example you need to have the following installed:

  • sklearn
  • nltk
  • sklearn_crfsuite

For testing you will need:

  • pytest
  • coverage

These dependencies can be installed by running:

pip3 install -r requirements.txt
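Since the notebook link is not preserved here, the following is only a rough sketch of the kind of call the package supports; the module path, the Evaluator class, and the returned values below are assumptions on my part, so check the example notebook for the actual API:

# Hypothetical usage sketch -- names and return values are assumptions;
# consult the example notebook for the real API.
from ner_evaluation.ner_eval import Evaluator

# True and predicted tag sequences in BIO format, one list per sentence.
true = [["O", "B-LOC", "I-LOC", "O"]]
pred = [["O", "B-LOC", "O", "O"]]

evaluator = Evaluator(true, pred, tags=["LOC", "PER", "ORG", "MISC"])
results, results_by_tag = evaluator.evaluate()
print(results)  # counts per matching scheme (e.g. strict, exact, partial, type)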

Code tests and test coverage:

To run tests:

coverage run --rcfile=setup.cfg -m pytest

To produce a coverage report:

coverage report
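The --rcfile=setup.cfg flag above tells coverage.py to read its configuration from setup.cfg, where coverage settings live in sections prefixed with coverage:. A minimal hypothetical section, which may differ from the file actually shipped with the project, could look like:

# hypothetical example, not necessarily the repository's actual config
[coverage:run]
source = ner_evaluation
omit = */tests/*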
