
hpatches / Hpatches Benchmark

Licence: BSD-2-Clause
Python & Matlab code for local feature descriptor evaluation with the HPatches dataset.

Programming Languages

matlab

Projects that are alternatives of or similar to Hpatches Benchmark

Medmnist
[ISBI'21] MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis
Stars: ✭ 338 (+162.02%)
Mutual labels:  dataset, benchmark
Awesome Semantic Segmentation
🤘 awesome-semantic-segmentation
Stars: ✭ 8,831 (+6745.74%)
Mutual labels:  evaluation, benchmark
Dukemtmc Reid evaluation
ICCV2017 The Person re-ID Evaluation Code for DukeMTMC-reID Dataset (Including Dataset Download)
Stars: ✭ 344 (+166.67%)
Mutual labels:  dataset, evaluation
Nas Benchmark
"NAS evaluation is frustratingly hard", ICLR2020
Stars: ✭ 126 (-2.33%)
Mutual labels:  evaluation, benchmark
Fashion Mnist
A MNIST-like fashion product database. Benchmark 👇
Stars: ✭ 9,675 (+7400%)
Mutual labels:  dataset, benchmark
Deeperforensics 1.0
[CVPR 2020] A Large-Scale Dataset for Real-World Face Forgery Detection
Stars: ✭ 338 (+162.02%)
Mutual labels:  dataset, benchmark
Okutama Action
Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection
Stars: ✭ 36 (-72.09%)
Mutual labels:  dataset, benchmark
Superpixel Benchmark
An extensive evaluation and comparison of 28 state-of-the-art superpixel algorithms on 5 datasets.
Stars: ✭ 275 (+113.18%)
Mutual labels:  evaluation, benchmark
Vidvrd Helper
To keep updates with VRU Grand Challenge, please use https://github.com/NExTplusplus/VidVRD-helper
Stars: ✭ 81 (-37.21%)
Mutual labels:  dataset, evaluation
Evalne
Source code for EvalNE, a Python library for evaluating Network Embedding methods.
Stars: ✭ 67 (-48.06%)
Mutual labels:  evaluation, benchmark
Datasets
A repository of pretty cool datasets that I collected for network science and machine learning research.
Stars: ✭ 302 (+134.11%)
Mutual labels:  dataset, benchmark
Evo
Python package for the evaluation of odometry and SLAM
Stars: ✭ 1,373 (+964.34%)
Mutual labels:  evaluation, benchmark
Tape
Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology.
Stars: ✭ 295 (+128.68%)
Mutual labels:  dataset, benchmark
Pcam
The PatchCamelyon (PCam) deep learning classification benchmark.
Stars: ✭ 340 (+163.57%)
Mutual labels:  dataset, benchmark
Text2sql Data
A collection of datasets that pair questions with SQL queries.
Stars: ✭ 287 (+122.48%)
Mutual labels:  dataset, evaluation
Caffenet Benchmark
Evaluation of the CNN design choices performance on ImageNet-2012.
Stars: ✭ 700 (+442.64%)
Mutual labels:  dataset, benchmark
MaskedFaceRepresentation
Masked face recognition focuses on identifying people using their facial features while they are wearing masks. We introduce benchmarks on face verification based on masked face images for the development of COVID-safe protocols in airports.
Stars: ✭ 17 (-86.82%)
Mutual labels:  benchmark, dataset
Semantic Kitti Api
SemanticKITTI API for visualizing dataset, processing data, and evaluating results.
Stars: ✭ 272 (+110.85%)
Mutual labels:  dataset, evaluation
View Finding Network
A deep ranking network that learns to find good compositions in a photograph.
Stars: ✭ 57 (-55.81%)
Mutual labels:  dataset, evaluation
Core50
CORe50: a new Dataset and Benchmark for Continual Learning
Stars: ✭ 91 (-29.46%)
Mutual labels:  dataset, benchmark


Homography patches dataset

This repository contains the code for evaluating feature descriptors on the HPatches dataset. For more information on the methods and the evaluation protocols please check [1].

Benchmark implementations

We provide two implementations for computing results on the HPatches dataset: one in Python and one in MATLAB.

  • python: details
  • matlab: details

Benchmark tasks

Details about the benchmark tasks (patch verification, image matching and patch retrieval) can be found here.
For a more in-depth description, please see the CVPR 2017 paper [1].
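
All three tasks report their results as (mean) average precision. The following minimal Python sketch illustrates the verification-style scoring on synthetic data only; it is not the benchmark's own evaluation code, and the descriptor dimensionality, noise model and use of scikit-learn are assumptions made purely for illustration.

# Minimal illustration of verification-style scoring with average precision.
# This is NOT the benchmark's evaluation code; the data below is synthetic.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
dim = 128                                        # descriptor dimensionality (illustrative)
ref = rng.normal(size=(1000, dim))               # descriptors of reference patches
pos = ref + 0.1 * rng.normal(size=ref.shape)     # matching patches (small perturbation)
neg = rng.normal(size=ref.shape)                 # non-matching patches

# Higher score should mean "more likely the same patch", so negate the L2 distance.
pos_scores = -np.linalg.norm(ref - pos, axis=1)
neg_scores = -np.linalg.norm(ref - neg, axis=1)

labels = np.concatenate([np.ones_like(pos_scores), np.zeros_like(neg_scores)])
scores = np.concatenate([pos_scores, neg_scores])
print('verification AP: %.3f' % average_precision_score(labels, scores))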

Getting the dataset

The data required for the benchmarks are saved in the ./data folder, and are shared between the two implementations.

To download the HPatches image dataset, run the provided shell script with the hpatches argument.

sh download.sh hpatches

To download the pre-computed files of a baseline descriptor X on the HPatches dataset, run the provided download.sh script with the descr X argument.

To see a list of all the currently available pre-computed baseline descriptors, run the script with only the descr argument.

sh download.sh descr       # prints all the currently available baseline pre-computed descriptors
sh download.sh descr sift  # downloads the pre-computed descriptors for sift

The HPatches dataset is saved in ./data/hpatches-release and the pre-computed descriptor files are saved in ./data/descriptors.
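
As a quick sanity check after downloading, the Python sketch below loads one of the baseline descriptor files. The layout assumed here (./data/descriptors/<descriptor>/<sequence>/<patch-image>.csv, one descriptor per row) should be verified against the files you actually downloaded, e.g. with sh download.sh descr sift.

# Sketch: inspect one of the downloaded baseline descriptor files.
# The per-sequence CSV layout assumed below should be checked against the
# files obtained with `sh download.sh descr sift`.
import os
import numpy as np

descr_dir = os.path.join('data', 'descriptors', 'sift')
seq = sorted(os.listdir(descr_dir))[0]               # first sequence folder
csv_path = os.path.join(descr_dir, seq, 'ref.csv')   # descriptors for ref.png

D = np.loadtxt(csv_path, delimiter=',')              # shape: (num_patches, descriptor_dim)
print(seq, D.shape)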

Dataset description

After download, the folder ./data/hpatches-release contains all the patches from the 116 sequences. The sequence folders are named using the following convention:

  • i_X: patches extracted from image sequences with illumination changes
  • v_X: patches extracted from image sequences with viewpoint changes

For each image sequence, we provide a set of reference patches, ref.png. For each of the remaining 5 images in the sequence, we provide three patch sets, eK.png, hK.png and tK.png, containing the patches corresponding to ref.png as found in the K-th image, with increasing amounts of geometric noise (e < h < t).
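
A minimal Python sketch of reading the data follows, assuming the HPatches release format in which each .png stores its patches stacked vertically as 65x65 crops; the patch size, the chosen sequence and the use of Pillow are illustrative assumptions, not part of this repository's code.

# Sketch: list sequences by type and load the patches of one sequence.
# The 65x65 patch size and the vertical stacking of patches inside each .png
# are assumptions about the release format; adjust if your files differ.
import os
import numpy as np
from PIL import Image

data_dir = os.path.join('data', 'hpatches-release')
seqs = sorted(os.listdir(data_dir))
illum = [s for s in seqs if s.startswith('i_')]   # illumination-change sequences
view  = [s for s in seqs if s.startswith('v_')]   # viewpoint-change sequences
print(len(illum), 'illumination sequences,', len(view), 'viewpoint sequences')

PATCH = 65

def load_patches(path):
    strip = np.array(Image.open(path).convert('L'))   # tall strip of stacked patches
    return strip.reshape(-1, PATCH, strip.shape[1])

seq_dir = os.path.join(data_dir, view[0])              # pick one viewpoint sequence
ref = load_patches(os.path.join(seq_dir, 'ref.png'))
e1 = load_patches(os.path.join(seq_dir, 'e1.png'))     # "easy" patches from image 1
print(ref.shape, e1.shape)                             # e.g. (N, 65, 65) each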

Please see the patch extraction method details for more information about the extraction process.

References

[1] HPatches: A benchmark and evaluation of handcrafted and learned local descriptors, Vassileios Balntas*, Karel Lenc*, Andrea Vedaldi and Krystian Mikolajczyk, CVPR 2017. *Authors contributed equally.

You might also be interested in the 3D reconstruction benchmark by Schönberger et al., also presented at CVPR 2017.
