cvangysel / pytrec_eval

License: MIT
pytrec_eval is an Information Retrieval evaluation tool for Python, based on the popular trec_eval.

Projects that are alternatives to or similar to pytrec_eval

naacl2018-fever
Fact Extraction and VERification baseline published in NAACL2018
Stars: ✭ 109 (-4.39%)
Mutual labels:  information-retrieval, evaluation
Vec4ir
Word Embeddings for Information Retrieval
Stars: ✭ 188 (+64.91%)
Mutual labels:  information-retrieval, evaluation
Drepl
A REPL for D
Stars: ✭ 70 (-38.6%)
Mutual labels:  evaluation
Evo
Python package for the evaluation of odometry and SLAM
Stars: ✭ 1,373 (+1104.39%)
Mutual labels:  evaluation
Evaluate
A version of eval for R that returns more information about what happened
Stars: ✭ 88 (-22.81%)
Mutual labels:  evaluation
Textrank Keyword Extraction
Keyword extraction using TextRank algorithm after pre-processing the text with lemmatization, filtering unwanted parts-of-speech and other techniques.
Stars: ✭ 79 (-30.7%)
Mutual labels:  information-retrieval
Sypht Java Client
A Java client for the Sypht API
Stars: ✭ 93 (-18.42%)
Mutual labels:  information-retrieval
Pert
A simple command line (bash/shell) utility to estimate tasks using PERT [Program Evaluation and Review Technique]
Stars: ✭ 66 (-42.11%)
Mutual labels:  evaluation
Tf Exercise Gan
TensorFlow implementation of different GANs and their comparisons
Stars: ✭ 110 (-3.51%)
Mutual labels:  evaluation
Solrplugins
Dice Solr Plugins from Simon Hughes of Dice.com
Stars: ✭ 86 (-24.56%)
Mutual labels:  information-retrieval
Sert
Semantic Entity Retrieval Toolkit
Stars: ✭ 100 (-12.28%)
Mutual labels:  information-retrieval
Pyndri
pyndri is a Python interface to the Indri search engine.
Stars: ✭ 85 (-25.44%)
Mutual labels:  information-retrieval
Vidvrd Helper
To keep up to date with the VRU Grand Challenge, please use https://github.com/NExTplusplus/VidVRD-helper
Stars: ✭ 81 (-28.95%)
Mutual labels:  evaluation
Enmf
This is our implementation of ENMF: Efficient Neural Matrix Factorization (TOIS 38, 2020). It also provides a fair evaluation of existing state-of-the-art recommendation models.
Stars: ✭ 96 (-15.79%)
Mutual labels:  evaluation
Vectorsinsearch
Dice.com repo to accompany the dice.com 'Vectors in Search' talk by Simon Hughes, from the Activate 2018 search conference, and the 'Searching with Vectors' talk from Haystack 2019 (US). Builds upon my conceptual search and semantic search work from 2015.
Stars: ✭ 71 (-37.72%)
Mutual labels:  information-retrieval
Ds2i
A library of inverted index data structures
Stars: ✭ 104 (-8.77%)
Mutual labels:  information-retrieval
Evalne
Source code for EvalNE, a Python library for evaluating Network Embedding methods.
Stars: ✭ 67 (-41.23%)
Mutual labels:  evaluation
Eval Sql.net
SQL Eval Function | Dynamically Evaluate Expression in SQL Server using C# Syntax
Stars: ✭ 84 (-26.32%)
Mutual labels:  evaluation
Forte
Forte is a flexible and powerful NLP builder FOR TExt. This is part of the CASL project: http://casl-project.ai/
Stars: ✭ 89 (-21.93%)
Mutual labels:  information-retrieval
Expressive
Expressive is a cross-platform expression parsing and evaluation framework. The cross-platform nature is achieved through compiling for .NET Standard so it will run on practically any platform.
Stars: ✭ 113 (-0.88%)
Mutual labels:  evaluation

pytrec_eval

pytrec_eval is a Python interface to TREC's evaluation tool, trec_eval. It is an attempt to stop the cultivation of custom implementations of Information Retrieval evaluation measures for the Python programming language.

Requirements

The module was developed using Python 3.5. You need a Python distribution that comes with development headers. In addition to the Python standard library, numpy and scipy are required.

Installation

Installation is simple and should be relatively painless if your Python environment is functioning correctly (see below for FAQs).

pip install pytrec_eval
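
To verify the build, it can help to import the module and inspect which measures it exposes. A quick sanity check along these lines, assuming pytrec_eval's supported_measures attribute (the set of measure names inherited from trec_eval):

python -c "import pytrec_eval; print(sorted(pytrec_eval.supported_measures))"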

Examples

Check out the examples that simulate the standard trec_eval front-end and that compute statistical significance between two runs; a sketch of the significance test follows the example below.

To get a grasp of how simple the module is to use, check this out:

import pytrec_eval
import json

# Relevance judgments (qrels): query id -> document id -> relevance grade.
qrel = {
    'q1': {
        'd1': 0,
        'd2': 1,
        'd3': 0,
    },
    'q2': {
        'd2': 1,
        'd3': 1,
    },
}

# System output (run): query id -> document id -> retrieval score.
run = {
    'q1': {
        'd1': 1.0,
        'd2': 0.0,
        'd3': 1.5,
    },
    'q2': {
        'd1': 1.5,
        'd2': 0.2,
        'd3': 0.5,
    }
}

# Request MAP and NDCG for every query in the qrels.
evaluator = pytrec_eval.RelevanceEvaluator(
    qrel, {'map', 'ndcg'})

print(json.dumps(evaluator.evaluate(run), indent=1))

The above snippet prints a data structure that contains the requested evaluation measures for queries q1 and q2:

{
    "q1": {
        "ndcg": 0.5,
        "map": 0.3333333333333333
    },
    "q2": {
        "ndcg": 0.6934264036172708,
        "map": 0.5833333333333333
    }
}
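
Because evaluate returns per-query scores, comparing two runs with a paired significance test takes only a few lines. A minimal sketch, reusing qrel and run from the example above; run_b is a hypothetical second system, and scipy.stats.ttest_rel performs a two-sided paired t-test:

import scipy.stats

# A second, hypothetical run to compare against the run defined above.
run_b = {
    'q1': {'d1': 0.2, 'd2': 1.2, 'd3': 0.4},
    'q2': {'d1': 0.1, 'd2': 1.0, 'd3': 0.3},
}

evaluator = pytrec_eval.RelevanceEvaluator(qrel, {'map'})
results_a = evaluator.evaluate(run)
results_b = evaluator.evaluate(run_b)

# Pair the per-query MAP scores and apply a two-sided paired t-test.
query_ids = sorted(results_a.keys())
scores_a = [results_a[q]['map'] for q in query_ids]
scores_b = [results_b[q]['map'] for q in query_ids]
print(scipy.stats.ttest_rel(scores_a, scores_b))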

For more like this, see the example that uses parametrized evaluation measures.
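
As a rough illustration of the idea, assuming trec_eval's parameter syntax, in which parameters are appended after a dot and results are reported with underscores:

# Measures can carry parameters, e.g. NDCG and precision at rank cutoffs.
evaluator = pytrec_eval.RelevanceEvaluator(
    qrel, {'ndcg_cut.5,10', 'P.5'})

for query_id, measures in evaluator.evaluate(run).items():
    # Parameterized results come back under keys such as
    # 'ndcg_cut_5', 'ndcg_cut_10' and 'P_5'.
    print(query_id, measures['ndcg_cut_5'], measures['P_5'])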

Frequently Asked Questions

Since the module's initial release, no questions have been asked so frequently that they deserve a spot in this section.

Citation

If you use pytrec_eval to produce results for your scientific publication, please cite our SIGIR paper:

@inproceedings{VanGysel2018pytreceval,
  title={Pytrec\_eval: An Extremely Fast Python Interface to trec\_eval},
  author={Van Gysel, Christophe and de Rijke, Maarten},
  publisher={ACM},
  booktitle={SIGIR},
  year={2018},
}

License

pytrec_eval is licensed under the MIT license. Please note that trec_eval is licensed separately. If you modify pytrec_eval in any way, please link back to this repository.
