xuyan1115 / Similarity-Adaptive-Deep-Hashing

Licence: other
Unsupervised Deep Hashing with Similarity-Adaptive and Discrete Optimization (TPAMI2018)

Projects that are alternatives of or similar to Similarity-Adaptive-Deep-Hashing

Caffe Deepbinarycode
Supervised Semantics-preserving Deep Hashing (TPAMI18)
Stars: ✭ 206 (+1044.44%)
Mutual labels:  hashing, caffe, image-retrieval
Change Detection Review
A review of change detection methods, including codes and open data sets for deep learning. From paper: change detection based on artificial intelligence: state-of-the-art and challenges.
Stars: ✭ 248 (+1277.78%)
Mutual labels:  caffe, unsupervised-learning
Netron
Visualizer for neural network, deep learning, and machine learning models
Stars: ✭ 17,193 (+95416.67%)
Mutual labels:  caffe, deeplearning
cisip-FIRe
Fast Image Retrieval (FIRe) is an open source project to promote image retrieval research. It implements most of the major binary hashing methods to date, together with different popular backbone networks and public datasets.
Stars: ✭ 40 (+122.22%)
Mutual labels:  hashing, image-retrieval
Hidden Two Stream
Caffe implementation for "Hidden Two-Stream Convolutional Networks for Action Recognition"
Stars: ✭ 179 (+894.44%)
Mutual labels:  caffe, unsupervised-learning
Pixelnet
The repository contains source code and models to use PixelNet architecture used for various pixel-level tasks. More details can be accessed at <http://www.cs.cmu.edu/~aayushb/pixelNet/>.
Stars: ✭ 194 (+977.78%)
Mutual labels:  caffe, unsupervised-learning
Deep Mihash
Code for papers "Hashing with Mutual Information" (TPAMI 2019) and "Hashing with Binary Matrix Pursuit" (ECCV 2018)
Stars: ✭ 13 (-27.78%)
Mutual labels:  hashing, image-retrieval
Vehicle Retrieval Kcnns
vehicle image retrieval using k CNNs ensemble method
Stars: ✭ 81 (+350%)
Mutual labels:  caffe, image-retrieval
GPQ
Generalized Product Quantization Network For Semi-supervised Image Retrieval - CVPR 2020
Stars: ✭ 60 (+233.33%)
Mutual labels:  hashing, image-retrieval
IJCAI2018 SSDH
Semantic Structure-based Unsupervised Deep Hashing IJCAI2018
Stars: ✭ 38 (+111.11%)
Mutual labels:  hashing, unsupervised-learning
learning2hash.github.io
Website for "A survey of learning to hash for Computer Vision" https://learning2hash.github.io
Stars: ✭ 14 (-22.22%)
Mutual labels:  hashing, deeplearning
Splitbrainauto
Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction. In CVPR, 2017.
Stars: ✭ 137 (+661.11%)
Mutual labels:  caffe, unsupervised-learning
Xlearning
AI on Hadoop
Stars: ✭ 1,709 (+9394.44%)
Mutual labels:  caffe, deeplearning
Liteflownet2
A Lightweight Optical Flow CNN - Revisiting Data Fidelity and Regularization, TPAMI 2020
Stars: ✭ 195 (+983.33%)
Mutual labels:  caffe, deeplearning
Mobilenet Ssd
MobileNet-SSD(MobileNetSSD) + Neural Compute Stick(NCS) Faster than YoloV2 + Explosion speed by RaspberryPi · Multiple moving object detection with high accuracy.
Stars: ✭ 84 (+366.67%)
Mutual labels:  caffe, deeplearning
caffe
Caffe: a Fast framework for deep learning. Custom version with built-in sparse inputs, segmentation, object detection, class weights, and custom layers
Stars: ✭ 36 (+100%)
Mutual labels:  caffe, deeplearning
Ssd Models
Lower the barrier to fast detectors — make lightweight caffe-ssd great again
Stars: ✭ 62 (+244.44%)
Mutual labels:  caffe, deeplearning
Pwc Net
PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume, CVPR 2018 (Oral)
Stars: ✭ 1,142 (+6244.44%)
Mutual labels:  caffe, deeplearning
GuidedNet
Caffe implementation for "Guided Optical Flow Learning"
Stars: ✭ 28 (+55.56%)
Mutual labels:  caffe, unsupervised-learning
DLInfBench
CNN model inference benchmarks for some popular deep learning frameworks
Stars: ✭ 51 (+183.33%)
Mutual labels:  caffe, deeplearning

Similarity-Adaptive Deep Hashing (SADH)

Unsupervised Deep Hashing with Similarity-Adaptive and Discrete Optimization

Created by Fumin Shen, Yan Xu, Li Liu, Yang Yang, Zi Huang, Heng Tao Shen

The details can be found in the TPAMI 2018 paper.
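Like most deep hashing methods, SADH maps an image to a real-valued network output and binarizes it into a k-bit code. As a rough illustration of the binarization step only (a generic sketch, not the authors' code — the paper's discrete optimization is more involved than a plain sign function):

```python
import numpy as np

def binarize(features):
    """Map real-valued network outputs to {-1, +1} hash codes via sign.

    This is the generic sign-based binarization used across deep hashing;
    it stands in here for the full similarity-adaptive discrete optimization.
    """
    return np.where(features >= 0, 1, -1)

# Example: a batch of two 4-bit feature vectors.
feats = np.array([[0.3, -1.2, 0.0, 2.1],
                  [-0.5, 0.4, -0.1, -3.0]])
codes = binarize(feats)
```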

Contents

Prerequisites

  1. Requirements for Caffe, pycaffe and matcaffe (see: Caffe installation instructions).

  2. Prerequisites for datasets.

    Note: In our experiments, we horizontally flip the training images manually for data augmentation. If your training set is small (< 100K images, e.g. CIFAR-10 or MNIST), you should perform this step.

    We also provide our flipping code in cifar10/flip_img.m; you can run it on your own datasets.

  3. Download the VGG-16 model pre-trained on the ILSVRC12 dataset, and save it in the caffemodels directory.
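The flipping step in item 2 above is provided as MATLAB code (cifar10/flip_img.m); a hypothetical Python equivalent of the same operation, for readers without MATLAB, would look like:

```python
import numpy as np

def hflip(image):
    """Horizontally flip an H x W x C image array (mirror left-right).

    Appending the flipped copy to the training set doubles a small
    dataset such as CIFAR-10, as the flip_img.m script does.
    """
    return image[:, ::-1, :]

# Tiny 2x2 "RGB image" stand-in; real inputs would be 32x32x3 CIFAR images.
img = np.arange(12).reshape(2, 2, 3)
flipped = hflip(img)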

Installation

Enter the caffe directory:

    cd caffe/

Modify Makefile.config, then build Caffe with the following commands:

    make all -j8
    make pycaffe
    make matcaffe

Usage

We only supply the code to train 16-bit SADH on the CIFAR-10 dataset.

The training and test steps are integrated in the bash script train.sh; run it as follows:

    ./train.sh [ROOT_FOLDER] [GPU_ID]
    # ROOT_FOLDER is the root folder of image datasets, e.g. ./cifar10/
    # GPU_ID is the GPU you want to train on
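Once the 16-bit codes are learned, image retrieval reduces to ranking the database by Hamming distance to the query code. A minimal sketch of that ranking step (an assumed post-training workflow, not a script from this repository):

```python
import numpy as np

def hamming_rank(query, database):
    """Rank database items by Hamming distance to the query code.

    Codes are {-1, +1} vectors of length k; for such codes the Hamming
    distance equals (k - query . item) / 2, so a single matrix-vector
    product ranks the whole database.
    """
    k = len(query)
    dists = (k - database @ query) // 2
    return np.argsort(dists, kind="stable"), dists

query = np.array([1, -1, 1, 1])
db = np.array([[1, -1, 1, 1],     # identical to the query
               [-1, -1, 1, 1],    # one bit differs
               [-1, 1, -1, -1]])  # all four bits differ
order, dists = hamming_rank(query, db)
```

In practice k would be 16 (the code length trained here) and the database codes would come from the trained network; the ranking logic is unchanged.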

Resources

We supply a horizontally flipped version of the CIFAR-10 dataset, which you can download via the following link:

  • CIFAR-10 dataset (png format): BaiduYun (Updated).

Citation

If you find our approach useful in your research, please consider citing:

@article{shen2018tpami,
    author   = {Fumin Shen and Yan Xu and Li Liu and Yang Yang and Zi Huang and Heng Tao Shen},
    journal  = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)}, 
    title    = {Unsupervised Deep Hashing with Similarity-Adaptive and Discrete Optimization},
    year     = {2018}
}