
vinid / quica

License: MIT
quica is a tool to run inter coder agreement pipelines in an easy and effective way. Multiple measures are run and the results are collected in a single table that can be easily exported to LaTeX.

Programming Languages

python
139335 projects - #7 most used programming language
Makefile
30231 projects

Projects that are alternatives of or similar to quica

PySODEvalToolkit
PySODEvalToolkit: A Python-based Evaluation Toolbox for Salient Object Detection and Camouflaged Object Detection
Stars: ✭ 59 (+180.95%)
Mutual labels:  evaluation-metrics, evaluation-framework
powerflows-dmn
Power Flows DMN - Powerful decisions and rules engine
Stars: ✭ 46 (+119.05%)
Mutual labels:  evaluation-framework
precision-recall-distributions
Assessing Generative Models via Precision and Recall (official repository)
Stars: ✭ 80 (+280.95%)
Mutual labels:  evaluation-metrics
CrowdFlow
Optical Flow Dataset and Benchmark for Visual Crowd Analysis
Stars: ✭ 87 (+314.29%)
Mutual labels:  evaluation-framework
cvpr18-caption-eval
Learning to Evaluate Image Captioning. CVPR 2018
Stars: ✭ 79 (+276.19%)
Mutual labels:  evaluation-metrics
NLP-tools
Useful python NLP tools (evaluation, GUI interface, tokenization)
Stars: ✭ 39 (+85.71%)
Mutual labels:  evaluation-metrics
f1-communities
A novel approach to evaluate community detection algorithms on ground truth
Stars: ✭ 20 (-4.76%)
Mutual labels:  evaluation-metrics
nervaluate
Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13
Stars: ✭ 40 (+90.48%)
Mutual labels:  evaluation-metrics
MusDr
Evaluation metrics for machine-composed symbolic music. Paper: "The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-Composed Music through Quantitative Measures", ISMIR 2020
Stars: ✭ 38 (+80.95%)
Mutual labels:  evaluation-metrics
easse
Easier Automatic Sentence Simplification Evaluation
Stars: ✭ 109 (+419.05%)
Mutual labels:  evaluation-metrics
BIRL
BIRL: Benchmark on Image Registration methods with Landmark validations
Stars: ✭ 66 (+214.29%)
Mutual labels:  evaluation-framework
PyDGN
A research library for Deep Graph Networks
Stars: ✭ 158 (+652.38%)
Mutual labels:  evaluation-framework
efda
Evaluation Framework for Dependency Analysis (EFDA)
Stars: ✭ 34 (+61.9%)
Mutual labels:  evaluation-framework

Quick Inter Coder Agreement in Python

Quica (Quick Inter Coder Agreement in Python) is a tool to run inter coder agreement pipelines in an easy and effective way. Multiple measures are run and the results are collected in a single table that can be easily exported to LaTeX. Quica supports both two coders and multiple coders.



Installation

pip install -U quica

Jump start Tutorial

Name                                 Link
Different possible usages of QUICA   Open In Colab

Get Quick Agreement

If you already have a pandas DataFrame, you can run Quica with a few lines of code! Let's assume you have two coders; we will create a pandas DataFrame just to show how to use the library. For now, we support only integer values, and weighting is not yet included.

from quica.quica import Quica
import pandas as pd

coder_1 = [0, 1, 0, 1, 0, 1]
coder_3 = [0, 1, 0, 1, 0, 0]

dataframe = pd.DataFrame({"coder1": coder_1,
                          "coder3": coder_3})

quica = Quica(dataframe=dataframe)
print(quica.get_results())

This is the expected output:

Out[1]:
             score
names
krippendorff  0.685714
fleiss        0.666667
scotts        0.657143
raw           0.833333
mace          0.426531
cohen         0.666667
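
As a quick sanity check, the cohen row can be reproduced by hand from the usual definition of Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e). The snippet below is an illustrative sketch, not part of quica's API, using the same toy data as above:

from collections import Counter

coder_1 = [0, 1, 0, 1, 0, 1]
coder_3 = [0, 1, 0, 1, 0, 0]
n = len(coder_1)

# Observed agreement: fraction of items the two coders label identically.
p_o = sum(a == b for a, b in zip(coder_1, coder_3)) / n  # 5/6

# Expected agreement under independence, from each coder's label marginals.
m1, m3 = Counter(coder_1), Counter(coder_3)
p_e = sum((m1[label] / n) * (m3[label] / n) for label in set(coder_1) | set(coder_3))

kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 6))  # 0.666667, matching the cohen row above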

It was pretty easy to get all the scores, right? But what if we do not have a pandas DataFrame? What if we want to directly get the LaTeX table to put into the paper? Worry not, my friend: it's easier done than said!

from quica.measures.irr import *
from quica.dataset.dataset import IRRDataset
from quica.quica import Quica

coder_1 = [0, 1, 0, 1, 0, 1]
coder_3 = [0, 1, 0, 1, 0, 0]

disagreeing_coders = [coder_1, coder_3]
disagreeing_dataset = IRRDataset(disagreeing_coders)

quica = Quica(disagreeing_dataset)

print(quica.get_results())
print(quica.get_latex())

You should get the following output; note that the LaTeX table requires the booktabs package:

Out[1]:
             score
names
krippendorff  0.685714
fleiss        0.666667
scotts        0.657143
raw           0.833333
mace          0.426531
cohen         0.666667

Out[2]:

\begin{tabular}{lr}
\toprule
{} &     score \\
names        &           \\
\midrule
krippendorff &  0.685714 \\
fleiss       &  0.666667 \\
scotts       &  0.657143 \\
raw          &  0.833333 \\
mace         &  0.426531 \\
cohen        &  0.666667 \\
\bottomrule
\end{tabular}
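
To drop this table straight into a paper, one option is to write the output of get_latex() to a .tex file and \input{} it from the manuscript; the file name below is just an example, and the LaTeX preamble must load \usepackage{booktabs}.

# Write the generated table to a file that the paper can \input{}.
# Remember to add \usepackage{booktabs} to the LaTeX preamble.
with open("agreement_table.tex", "w") as f:
    f.write(quica.get_latex())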

Features

from quica.measures.irr import *
from quica.dataset.dataset import IRRDataset
from quica.quica import Quica

coder_1 = [0, 1, 0, 1, 0, 1]
coder_2 = [0, 1, 0, 1, 0, 1]
coder_3 = [0, 1, 0, 1, 0, 0]

agreeing_coders = [coder_1, coder_2]
agreeing_dataset = IRRDataset(agreeing_coders)

disagreeing_coders = [coder_1, coder_3]
disagreeing_dataset = IRRDataset(disagreeing_coders)

kri = Krippendorff()
cohen = CohensK()

assert kri.compute_irr(agreeing_dataset) == 1
assert cohen.compute_irr(agreeing_dataset) == 1
assert kri.compute_irr(disagreeing_dataset) < 1
assert cohen.compute_irr(disagreeing_dataset) < 1

Supported Algorithms

  • MACE (Multi-Annotator Competence Estimation)
    • Hovy, D., Berg-Kirkpatrick, T., Vaswani, A., & Hovy, E. (2013, June). Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 1120-1130).
    • We define the inter coder agreement as the average competence of the users.
  • Krippendorff's Alpha
  • Cohen's K
  • Fleiss' K
  • Scott's Pi
  • Raw Agreement: standard accuracy (see the sketch after this list)
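
For instance, the raw row in the tables above is plain standard accuracy: the fraction of items on which the two coders pick the same label. A minimal stand-alone sketch (not quica's implementation) with the example coders used earlier:

coder_1 = [0, 1, 0, 1, 0, 1]
coder_3 = [0, 1, 0, 1, 0, 0]

# Raw agreement = standard accuracy: share of identically labelled items.
raw_agreement = sum(a == b for a, b in zip(coder_1, coder_3)) / len(coder_1)
print(round(raw_agreement, 6))  # 0.833333, the raw row in the tables above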

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template. Thanks to Pietro Lesci and Dirk Hovy for their implementation of MACE.
