maumueller / ann-benchmarks

Licence: other
Benchmarking approximate nearest neighbors. Note: this is an archived version from our SISAP 2017 paper; see below.

Projects that are alternatives of or similar to ann-benchmarks

load-testing-toolkit
Collection of open-source tools for debugging, benchmarking, load and stress testing your code or services.
Stars: ✭ 65 (+116.67%)
Mutual labels:  benchmarking
benchee html
Draws pretty micro-benchmarking charts in HTML and allows exporting them as PNG, for benchee
Stars: ✭ 50 (+66.67%)
Mutual labels:  benchmarking
BARS
Towards open benchmarking for recommender systems https://openbenchmark.github.io/BARS
Stars: ✭ 157 (+423.33%)
Mutual labels:  benchmarking
grandma
👵 fully programmable stress testing framework
Stars: ✭ 20 (-33.33%)
Mutual labels:  benchmarking
annoy.rb
annoy-rb provides Ruby bindings for Annoy (Approximate Nearest Neighbors Oh Yeah).
Stars: ✭ 23 (-23.33%)
Mutual labels:  nearest-neighbor-search
AMBER
AMBER: Assessment of Metagenome BinnERs
Stars: ✭ 18 (-40%)
Mutual labels:  benchmarking
Rayuela.jl
Code for my PhD thesis. Library of quantization-based methods for fast similarity search in high dimensions. Presented at ECCV 18.
Stars: ✭ 54 (+80%)
Mutual labels:  nearest-neighbor-search
Unchase.FluentPerformanceMeter
🔨 Make the exact performance measurements of the public methods for public classes using this NuGet Package with fluent interface. Requires .Net Standard 2.0+. It is an Open Source project under Apache-2.0 License.
Stars: ✭ 33 (+10%)
Mutual labels:  benchmarking
bench
⏱️ Reliable performance measurement for Go programs. All in one design.
Stars: ✭ 33 (+10%)
Mutual labels:  benchmarking
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (+453.33%)
Mutual labels:  benchmarking
blockchain-load-testing
Code for load testing the Stellar network.
Stars: ✭ 36 (+20%)
Mutual labels:  benchmarking
elm-benchmark
Benchmarking for Elm
Stars: ✭ 48 (+60%)
Mutual labels:  benchmarking
LAP-solvers
Benchmarking Linear Assignment Problem Solvers
Stars: ✭ 69 (+130%)
Mutual labels:  benchmarking
reframe
A powerful Python framework for writing and running portable regression tests and benchmarks for HPC systems.
Stars: ✭ 154 (+413.33%)
Mutual labels:  benchmarking
erlbench
Erlang Performance Measurements
Stars: ✭ 34 (+13.33%)
Mutual labels:  benchmarking
benchmark-trend
Measure performance trends of Ruby code
Stars: ✭ 60 (+100%)
Mutual labels:  benchmarking
prettyBenching
🦕 A small lib to make your Deno benchmarking progress and results look pretty
Stars: ✭ 23 (-23.33%)
Mutual labels:  benchmarking
knn-cpp
A header-only C++ library for k nearest neighbor search with Eigen3.
Stars: ✭ 25 (-16.67%)
Mutual labels:  nearest-neighbor-search
Dolphinn
High Dimensional Approximate Near(est) Neighbor
Stars: ✭ 32 (+6.67%)
Mutual labels:  nearest-neighbor-search
FLEXS
Fitness landscape exploration sandbox for biological sequence design.
Stars: ✭ 92 (+206.67%)
Mutual labels:  benchmarking

Benchmarking nearest neighbors

This project contains some tools to benchmark various implementations of approximate nearest neighbor (ANN) search for different metrics. Visit http://sss.projects.itu.dk/ann-benchmarks to see the results of this benchmark.

Note: The main development of ann-benchmarks takes place at https://github.com/erikbern/ann-benchmarks. The version provided here is the one used in the paper "ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms" by M. Aumüller, E. Bernhardsson, and A. Faithfull (SISAP 2017), http://www.itu.dk/people/maau/additional/sisap2017-preprint.pdf.

Evaluated

  • Euclidean space
  • Hamming distance
  • Set similarity
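
For reference, minimal Python versions of the three measures (illustrative only, not code from this repository):

    import numpy as np

    def euclidean(a, b):
        # a, b: float vectors of equal length
        return np.linalg.norm(a - b)

    def hamming(a, b):
        # a, b: binary (0/1) vectors of equal length
        return np.count_nonzero(a != b)

    def jaccard_distance(a, b):
        # a, b: Python sets; 0 means identical, 1 means disjoint
        return 1.0 - len(a & b) / len(a | b)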

Data sets

Euclidean

Angular/Cosine

Hamming space

We used spherical hashing to generate Hamming-space versions of the following data sets (a sketch of the encoding step follows the list):

  • SIFT
  • NYTimes
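
Spherical hashing (Heo et al., CVPR 2012) encodes a vector with one bit per hypersphere: bit k is 1 iff the vector lies within radius r_k of pivot p_k. Below is a minimal sketch of that encoding step only, assuming pivots and radii are already given; the iterative optimization that spherical hashing uses to learn them is omitted, and all names are illustrative:

    import numpy as np

    def spherical_hash_encode(X, pivots, radii):
        # Bit k of a point's code is 1 iff the point lies inside the
        # hypersphere of radius radii[k] around pivots[k].
        dists = np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=2)
        return (dists <= radii).astype(np.uint8)

    # Illustrative setup: sample pivots from the data and set each radius
    # to the median distance, so each sphere covers about half the points.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 128)).astype(np.float32)  # stand-in for SIFT
    pivots = X[rng.choice(len(X), size=64, replace=False)]   # 64-bit codes
    radii = np.median(
        np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=2), axis=0)
    codes = spherical_hash_encode(X, pivots, radii)          # shape (1000, 64)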

Set Similarity

We use the following three datasets from http://ssjoin.dbresearch.uni-salzburg.at/

  • Flickr
  • AOL
  • Kosarak

Motivation

Fast nearest-neighbor search in high-dimensional spaces is an increasingly important problem, but so far little effort has gone into comparing methods objectively.

Install

Clone the repo and run bash install.sh. This installs all libraries and can take a while. It has been tested on Ubuntu 16.04. We advise running it only in a VM or a Docker container (see our Dockerfile).

Downloading and preprocessing the data sets is done by running the install/data-*.sh scripts, e.g., run bash install/data-glove.sh or bash install/data-sift.sh.

Experiment Setup

Running a set of algorithms with specific parameters works as follows:

  • Check that algos.yaml contains the parameter settings that you want to test
  • To run experiments on SIFT, invoke python ann_benchmarks/main --dataset sift-data --query-dataset sift-query --distance euclidean. See python ann_benchmarks/main --help for more information on possible settings. Note that experiments can take a long time.
  • To process the results, use either python plot.py or python createwebsite.py (the recall measure behind plots such as recall/time is sketched after this list). The website/ directory contains the CSS and JS files needed to display the websites generated by createwebsite.py. An example call: python createwebsite.py --plottype recall/time --latex --scatter --outputdir website/.
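
For reference, the recall behind plots such as recall/time is the fraction of the true k nearest neighbors (from exact search) that an algorithm returns, averaged over all queries. A minimal illustration, not the repository's own evaluation code:

    def recall_at_k(approx_ids, true_ids, k):
        # approx_ids, true_ids: per-query lists of neighbor indices,
        # with true_ids computed by exact (brute-force) search.
        hits = sum(len(set(a[:k]) & set(t[:k]))
                   for a, t in zip(approx_ids, true_ids))
        return hits / (k * len(true_ids))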

Including Your Algorithm

You have two choices for including your own algorithm. If your algorithm has a Python wrapper (or is entirely written in Python), all you need to do is add a small wrapper to ann_benchmarks/algorithms, along the lines of the sketch below.
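
For example, a minimal wrapper around Annoy might look like the following sketch; the exact interface is defined by the existing wrappers in ann_benchmarks/algorithms, and the class name here is illustrative:

    import annoy

    class AnnoyWrapper(object):
        # Hypothetical wrapper; the existing wrappers in
        # ann_benchmarks/algorithms are the authoritative template.
        def __init__(self, metric, n_trees):
            self._metric = metric      # e.g. 'euclidean' or 'angular'
            self._n_trees = n_trees
            self.name = 'Annoy(n_trees=%d)' % n_trees

        def fit(self, X):
            # Build the index over the training points (numpy array X).
            self._index = annoy.AnnoyIndex(X.shape[1], self._metric)
            for i, x in enumerate(X):
                self._index.add_item(i, x.tolist())
            self._index.build(self._n_trees)

        def query(self, v, n):
            # Return the ids of the n approximate nearest neighbors of v.
            return self._index.get_nns_by_vector(v.tolist(), n)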

If your algorithm does not provide a Python wrapper, you can include it using the SubProcess system. Detailed documentation on how to do this can be found here [TBD]; alternatively, check out the wrappers written for Annoy-Hamming, Dolphinn, and MIH in the install directory.

Principles

  • Everyone is welcome to submit pull requests with tweaks and changes to how each library is being used.
  • In particular: if you are the author of any of these libraries, and you think the benchmark can be improved, consider making the improvement and submitting a pull request.
  • This is meant to be an ongoing project that represents the current state of the field.
  • Make everything easy to replicate, including installing and preparing the datasets.
  • Try many different values of parameters for each library and ignore the points that are not on the precision-performance frontier (see the sketch after this list).
  • Focus on high-dimensional datasets with approximately 100-1000 dimensions. This is challenging but also realistic. We avoid more than 1000 dimensions because such problems should probably be solved by applying dimensionality reduction separately.
  • No batching of queries, use single queries by default. ANN-Benchmarks saturates CPU cores by using a thread pool.
  • Avoid extremely costly index building (more than several hours).
  • Focus on datasets that fit in RAM. Out of core ANN could be the topic of a later comparison.
  • We currently support CPU-based ANN algorithms. GPU support is planned as future work.
  • Use a proper train/test split of index data and query points.
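
To illustrate the frontier rule above: a parameter setting is kept only if no other setting achieves both higher recall and higher throughput. A minimal sketch, not the repository's plotting code:

    import numpy as np

    def pareto_frontier(recalls, qps):
        # Indices of runs not dominated by a run with both higher
        # recall and higher queries-per-second.
        recalls, qps = np.asarray(recalls), np.asarray(qps)
        order = np.lexsort((-qps, -recalls))  # recall desc, ties: qps desc
        keep, best_qps = [], -np.inf
        for i in order:
            if qps[i] > best_qps:
                keep.append(i)
                best_qps = qps[i]
        return keep

    # Example: the middle run is dominated (same recall, lower qps) and dropped.
    print(pareto_frontier([0.50, 0.90, 0.90], [100.0, 40.0, 50.0]))  # [2, 0]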

Results

See http://sss.projects.itu.dk/ann-benchmarks.

Note that NMSLIB saves indices in the directory indices. If the tests are re-run using a different seed and/or a different number of queries, the content of this directory should be deleted.

Testing

The project is fully tested using Travis, with unit tests run for all different libraries and algorithms.
