
erikbern / Ann Benchmarks

License: MIT
Benchmarks of approximate nearest neighbor libraries in Python

Programming Languages

Python
139,335 projects - #7 most used programming language
HTML
75,241 projects

Projects that are alternatives to or similar to Ann Benchmarks

Kubestone
Performance benchmarks for Kubernetes
Stars: ✭ 159 (-94.02%)
Mutual labels:  benchmark
Regex Benchmark
It's just a simple regex benchmark of different programming languages.
Stars: ✭ 171 (-93.57%)
Mutual labels:  benchmark
Train Ticket
Train Ticket - A Benchmark Microservice System
Stars: ✭ 180 (-93.23%)
Mutual labels:  benchmark
Are We Fast Yet
Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays
Stars: ✭ 161 (-93.94%)
Mutual labels:  benchmark
Dlbench
Benchmarking State-of-the-Art Deep Learning Software Tools
Stars: ✭ 169 (-93.64%)
Mutual labels:  benchmark
Hand pose action
Dataset and code for the paper "First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations", CVPR 2018.
Stars: ✭ 173 (-93.49%)
Mutual labels:  benchmark
Sv Benchmarks
Collection of Verification Tasks
Stars: ✭ 158 (-94.06%)
Mutual labels:  benchmark
Gotemplatebenchmark
comparing the performance of different template engines
Stars: ✭ 180 (-93.23%)
Mutual labels:  benchmark
Pantheon
Pantheon of Congestion Control
Stars: ✭ 170 (-93.6%)
Mutual labels:  benchmark
Tsung
Tsung is a high-performance benchmark framework for various protocols including HTTP, XMPP, LDAP, etc.
Stars: ✭ 2,185 (-17.8%)
Mutual labels:  benchmark
Uibench
UI Benchmark
Stars: ✭ 163 (-93.87%)
Mutual labels:  benchmark
Pytorch Retraining
Transfer Learning Shootout for PyTorch's model zoo (torchvision)
Stars: ✭ 167 (-93.72%)
Mutual labels:  benchmark
Tpcds Kit
TPC-DS benchmark kit with some modifications/fixes
Stars: ✭ 176 (-93.38%)
Mutual labels:  benchmark
D Optimizer
Make Dota 2 fps great again
Stars: ✭ 161 (-93.94%)
Mutual labels:  benchmark
Jmh Visualizer
Visually explore your JMH Benchmarks
Stars: ✭ 180 (-93.23%)
Mutual labels:  benchmark
Blue benchmark
BLUE benchmark consists of five different biomedicine text-mining tasks with ten corpora.
Stars: ✭ 159 (-94.02%)
Mutual labels:  benchmark
Dbnet
DBNet: A Large-Scale Dataset for Driving Behavior Learning, CVPR 2018
Stars: ✭ 172 (-93.53%)
Mutual labels:  benchmark
Jax Rs Performance Comparison
⚡️ Performance Comparison of Jax-RS implementations and embedded containers
Stars: ✭ 181 (-93.19%)
Mutual labels:  benchmark
Sangrenel
Apache Kafka load testing "...basically a cloth bag filled with small jagged pieces of scrap iron"
Stars: ✭ 180 (-93.23%)
Mutual labels:  benchmark
Gbm Perf
Performance of various open source GBM implementations
Stars: ✭ 177 (-93.34%)
Mutual labels:  benchmark

Benchmarking nearest neighbors


Fast nearest-neighbor search in high-dimensional spaces is an increasingly important problem, but so far there have been few empirical attempts to compare approaches in an objective way.

This project contains tools to benchmark various implementations of approximate nearest neighbor (ANN) search for different metrics. We have pregenerated datasets (in HDF5 format) and Docker containers for each algorithm, and a test suite that makes sure every algorithm works.

Evaluated

Data sets

We provide a number of precomputed data sets. All data sets are pre-split into train/test and come with ground truth in the form of the top 100 neighbors. We store them in HDF5 format (a minimal loading sketch follows the table):

| Dataset       | Dimensions | Train size | Test size | Neighbors | Distance  | Download     |
|---------------|-----------:|-----------:|----------:|----------:|-----------|--------------|
| DEEP1B        | 96         | 9,990,000  | 10,000    | 100       | Angular   | HDF5 (3.6GB) |
| Fashion-MNIST | 784        | 60,000     | 10,000    | 100       | Euclidean | HDF5 (217MB) |
| GIST          | 960        | 1,000,000  | 1,000     | 100       | Euclidean | HDF5 (3.6GB) |
| GloVe         | 25         | 1,183,514  | 10,000    | 100       | Angular   | HDF5 (121MB) |
| GloVe         | 50         | 1,183,514  | 10,000    | 100       | Angular   | HDF5 (235MB) |
| GloVe         | 100        | 1,183,514  | 10,000    | 100       | Angular   | HDF5 (463MB) |
| GloVe         | 200        | 1,183,514  | 10,000    | 100       | Angular   | HDF5 (918MB) |
| Kosarak       | 27,983     | 74,962     | 500       | 100       | Jaccard   | HDF5 (2.0GB) |
| MNIST         | 784        | 60,000     | 10,000    | 100       | Euclidean | HDF5 (217MB) |
| NYTimes       | 256        | 290,000    | 10,000    | 100       | Angular   | HDF5 (301MB) |
| SIFT          | 128        | 1,000,000  | 10,000    | 100       | Euclidean | HDF5 (501MB) |
| Last.fm       | 65         | 292,385    | 50,000    | 100       | Angular   | HDF5 (135MB) |
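If you just want to inspect one of these files outside the benchmark harness, they can be opened with h5py. This is a minimal sketch; the key names (train, test, neighbors, distances) and the example filename are assumptions based on the layout described above, so check f.keys() against your copy of the file.

```python
# Minimal sketch: inspect a pregenerated dataset file with h5py.
# Assumed keys ("train", "test", "neighbors", "distances") mirror the
# train/test split and top-100 ground truth described above; verify with
# list(f.keys()) if your file differs.
import h5py
import numpy as np

with h5py.File("glove-100-angular.hdf5", "r") as f:
    train = np.array(f["train"])          # vectors to be indexed
    test = np.array(f["test"])            # query vectors
    neighbors = np.array(f["neighbors"])  # ids of the true top-100 neighbors per query
    distances = np.array(f["distances"])  # distances to those neighbors

print(train.shape, test.shape, neighbors.shape, distances.shape)
```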

Results

Interactive plots can be found at http://ann-benchmarks.com. The following results are as of 2020-07-12, with all benchmarks run on a c5.4xlarge machine on AWS with --parallelism set to 3:

Plots are available for the following datasets: glove-100-angular, sift-128-euclidean, fashion-mnist-784-euclidean, lastfm-64-dot, nytimes-256-angular and glove-25-angular.

Install

The only prerequisites are Python (tested with 3.6) and Docker.

  1. Clone the repo.
  2. Run pip install -r requirements.txt.
  3. Run python install.py to build all the libraries inside Docker containers (this can take a while, like 10-30 minutes).

Running

  1. Run python run.py (this can take an extremely long time, potentially days)
  2. Run python plot.py or python create_website.py to plot results.

You can customize the algorithms and datasets if you want to:

  • Check that algos.yaml contains the parameter settings that you want to test
  • To run experiments on a single dataset, invoke e.g. python run.py --dataset glove-100-angular. See python run.py --help for more information on possible settings. Note that experiments can take a long time.
  • To process the results, either use python plot.py --dataset glove-100-angular or python create_website.py. An example call: python create_website.py --plottype recall/time --latex --scatter --outputdir website/.

Including your algorithm

  1. Add your algorithm into ann_benchmarks/algorithms by providing a small Python wrapper (a minimal sketch follows this list).
  2. Add a Dockerfile for it in install/.
  3. Add it to algos.yaml.
  4. Add it to .github/workflows/benchmarks.yml.
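As a rough illustration of what such a wrapper looks like, here is a hypothetical brute-force example. The BaseANN base class and the fit/query method names are assumptions modeled on the existing wrappers in ann_benchmarks/algorithms; copy an existing wrapper (for example the Annoy one) to get the exact interface.

```python
# Hypothetical wrapper sketch, modeled on the existing wrappers in
# ann_benchmarks/algorithms. BaseANN and the fit/query signatures are
# assumptions; mirror an existing wrapper for the exact interface.
import sklearn.neighbors

from ann_benchmarks.algorithms.base import BaseANN


class MyBruteForce(BaseANN):
    def __init__(self, metric):
        # Map the benchmark's metric names onto sklearn's (assumption: "angular" ~ cosine).
        self._metric = {"angular": "cosine", "euclidean": "euclidean"}.get(metric, metric)
        self.name = "MyBruteForce(metric=%s)" % metric

    def fit(self, X):
        # Build the "index" over the training vectors.
        self._nbrs = sklearn.neighbors.NearestNeighbors(
            algorithm="brute", metric=self._metric).fit(X)

    def query(self, v, n):
        # Return the ids of the n nearest neighbors of a single query vector.
        return self._nbrs.kneighbors(
            [v], n_neighbors=n, return_distance=False)[0]
```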

Principles

  • Everyone is welcome to submit pull requests with tweaks and changes to how each library is being used.
  • In particular: if you are the author of any of these libraries, and you think the benchmark can be improved, consider making the improvement and submitting a pull request.
  • This is meant to be an ongoing project that represents the current state.
  • Make everything easy to replicate, including installing and preparing the datasets.
  • Try many different values of parameters for each library and ignore the points that are not on the precision-performance frontier (a minimal recall sketch follows this list).
  • High-dimensional datasets with approximately 100-1000 dimensions, which is challenging but also realistic. We do not go beyond 1000 dimensions because such problems should probably be tackled with separate dimensionality reduction first.
  • Single queries are used by default. ANN-Benchmarks enforces that only one CPU is saturated during experimentation, i.e., no multi-threading. A batch mode is available that provides all queries to the implementations at once. Add the flag --batch to run.py and plot.py to enable batch mode.
  • Avoid extremely costly index building (more than several hours).
  • Focus on datasets that fit in RAM. For billion-scale benchmarks, see the related big-ann-benchmarks project.
  • We mainly support CPU-based ANN algorithms. GPU support exists for FAISS, but it has to be compiled with GPU support locally and experiments must be run using the flags --local --batch.
  • Maintain a proper train/test split between index data and query points.
  • Set-similarity datasets are considered sparse, so each user's set is passed to the algorithms as a sorted array of integers.
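To make the precision-performance frontier concrete: precision here is measured as recall against the stored top-100 ground truth. The sketch below shows one way to compute recall@k; it is an illustration under the dataset layout described above, not the project's exact evaluation code.

```python
# Illustrative recall computation against the stored ground truth (not the
# project's exact evaluation code): for each query, count how many returned
# ids appear among the true top-k neighbors.
import numpy as np

def recall_at_k(approx_ids, true_ids, k):
    """approx_ids, true_ids: (n_queries, >= k) arrays of neighbor ids."""
    hits = sum(len(set(a[:k]) & set(t[:k])) for a, t in zip(approx_ids, true_ids))
    return hits / (len(approx_ids) * k)

# e.g. recall_at_k(results_from_some_index, neighbors, k=10),
# where `neighbors` is the ground-truth array loaded from the HDF5 file above.
```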

Authors

Built by Erik Bernhardsson with significant contributions from Martin Aumüller and Alexander Faithfull.

Related Publication

The following publication details design principles behind the benchmarking framework:

  • M. Aumüller, E. Bernhardsson, A. Faithfull: ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms. Information Systems 87 (2020).

Related Projects

  • big-ann-benchmarks: a companion benchmarking effort aimed at billion-scale datasets (see Principles above).
