
sayakpaul / A-Barebones-Image-Retrieval-System

License: MIT
This project presents a simple framework to retrieve images similar to a query image.

Programming Languages

Jupyter Notebook

Projects that are alternatives to or similar to A-Barebones-Image-Retrieval-System

GLOM-TensorFlow
An attempt at the implementation of GLOM, Geoffrey Hinton's paper for emergent part-whole hierarchies from data
Stars: ✭ 32 (+28%)
Mutual labels:  representation-learning, tensorflow2
potato-disease-classification
Potato Disease Classification - Training, REST APIs, and Frontend to test.
Stars: ✭ 95 (+280%)
Mutual labels:  tensorflow2
Representation-Learning-for-Information-Extraction
PyTorch implementation of the Google Research paper "Representation Learning for Information Extraction from Form-like Documents".
Stars: ✭ 82 (+228%)
Mutual labels:  representation-learning
anatome
Ἀνατομή is a PyTorch library to analyze representation of neural networks
Stars: ✭ 50 (+100%)
Mutual labels:  representation-learning
3D-GuidedGradCAM-for-Medical-Imaging
This repo contains an implementation for generating Guided Grad-CAM for 3D medical imaging from NIfTI files in TensorFlow 2.0. Different input files can be used; in that case, edit the input to the Guided Grad-CAM model.
Stars: ✭ 60 (+140%)
Mutual labels:  tensorflow2
FUSION
PyTorch code for NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-28%)
Mutual labels:  representation-learning
manning tf2 in action
The official code repository for "TensorFlow in Action" by Manning.
Stars: ✭ 61 (+144%)
Mutual labels:  tensorflow2
point-cloud-segmentation
TF2 implementation of PointNet for segmenting point clouds
Stars: ✭ 33 (+32%)
Mutual labels:  tensorflow2
Bearcat captcha
Panda recognition of variable-length captchas, based on tensorflow2.2 (tensorflow2.3 also works); you can easily train a decent model.
Stars: ✭ 67 (+168%)
Mutual labels:  tensorflow2
image embeddings
Using efficientnet to provide embeddings for retrieval
Stars: ✭ 107 (+328%)
Mutual labels:  representation-learning
PC3-pytorch
Predictive Coding for Locally-Linear Control (ICML-2020)
Stars: ✭ 16 (-36%)
Mutual labels:  representation-learning
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (+224%)
Mutual labels:  representation-learning
Learning-From-Rules
Implementation of experiments in paper "Learning from Rules Generalizing Labeled Exemplars" to appear in ICLR2020 (https://openreview.net/forum?id=SkeuexBtDr)
Stars: ✭ 46 (+84%)
Mutual labels:  representation-learning
TensorFlow2.0 SSD
A TensorFlow 2.0 implementation of SSD (Single Shot MultiBox Detector).
Stars: ✭ 83 (+232%)
Mutual labels:  tensorflow2
golang-tf
Working golang + tensorflow
Stars: ✭ 21 (-16%)
Mutual labels:  tensorflow2
object-aware-contrastive
Object-aware Contrastive Learning for Debiased Scene Representation (NeurIPS 2021)
Stars: ✭ 44 (+76%)
Mutual labels:  representation-learning
VQ-APC
Vector Quantized Autoregressive Predictive Coding (VQ-APC)
Stars: ✭ 34 (+36%)
Mutual labels:  representation-learning
PCC-pytorch
A pytorch implementation of the paper "Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control"
Stars: ✭ 57 (+128%)
Mutual labels:  representation-learning
doctr
docTR (Document Text Recognition) - a seamless, high-performing & accessible library for OCR-related tasks powered by Deep Learning.
Stars: ✭ 1,409 (+5536%)
Mutual labels:  tensorflow2
Adaptive-Gradient-Clipping
Minimal implementation of adaptive gradient clipping (https://arxiv.org/abs/2102.06171) in TensorFlow 2.
Stars: ✭ 74 (+196%)
Mutual labels:  tensorflow2

A Barebones Image Retrieval System

This project presents a simple framework to retrieve images similar to a query image. The framework is as follows:

  • Train a CNN model (A) on a set of labeled images with Triplet Loss (I used this one).
  • Use the trained CNN model (A) to extract features from the validation set.
  • Train a kNN model (B) on these extracted features, with k set to the desired number of neighbors.
  • Grab an image (I) from the validation set and extract its features using the same CNN model (A).
  • Use the same kNN model (B) to find the nearest neighbors of I (see the sketch after this list).
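
A minimal sketch of this pipeline, assuming a trained embedding model and scikit-learn's `NearestNeighbors` for the kNN step (the function and variable names below are illustrative, not taken from the original notebooks):

```python
from sklearn.neighbors import NearestNeighbors

def build_index(embedding_model, val_images, n_neighbors=5):
    """Steps 2-3: extract features with the trained CNN (A) and fit a kNN model (B)."""
    val_features = embedding_model.predict(val_images)
    return NearestNeighbors(n_neighbors=n_neighbors).fit(val_features)

def retrieve(embedding_model, knn, query_image):
    """Steps 4-5: embed a query image (I) and look up its nearest neighbors."""
    query_features = embedding_model.predict(query_image[None, ...])
    distances, indices = knn.kneighbors(query_features)
    return distances[0], indices[0]
```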

I used the Flowers dataset for the experiments. I applied the above approach in a scenario where I had only 184 examples from the Flowers dataset, and it worked well.
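
The data-loading code isn't shown here; one way to reproduce the low-data setting is with TensorFlow Datasets (the split percentages below are illustrative — roughly 5% of tf_flowers' 3,670 images ≈ 184 examples):

```python
import tensorflow_datasets as tfds

# tf_flowers ships 3,670 images across 5 classes; take a small slice to mimic
# the ~184-example training setting used in this project.
train_ds, val_ds = tfds.load(
    "tf_flowers",
    split=["train[:5%]", "train[85%:]"],
    as_supervised=True,  # yields (image, label) pairs
)
```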

Here's a sample result:

Training specifics

I fine-tuned pre-trained models to minimize the Triplet Loss. I experimented with the following pre-trained models (a representative fine-tuning setup is sketched after the list):

  • VGG16
  • MobileNetV2
  • ResNet50
  • BigTransfer (also referred to as BiT), which is essentially a ResNet but pre-trained on a larger dataset with additional modifications
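
The exact triplet loss the project links to isn't reproduced here; as a representative setup, the sketch below fine-tunes a MobileNetV2 backbone with TensorFlow Addons' `TripletSemiHardLoss` (the embedding size and optimizer settings are illustrative):

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Pre-trained backbone (MobileNetV2 shown; VGG16 / ResNet50 plug in the same way).
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)

embedding_model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(128),                                         # embedding size is illustrative
    tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)),  # triplet loss expects normalized embeddings
])

embedding_model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss=tfa.losses.TripletSemiHardLoss(),  # mines semi-hard triplets within each batch from integer labels
)
# embedding_model.fit(train_ds.batch(32), epochs=30)  # assumes images resized to 224x224
```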

While training with the first three models, I used the following learning rate callback (from the Transformer paper):

The code for this callback is adapted from here.
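
The callback code itself isn't included in this text version; for reference, the schedule from the Transformer paper ("Attention Is All You Need") can be written as a Keras `LearningRateSchedule` like this (the d_model and warmup_steps values are illustrative):

```python
import tensorflow as tf

class TransformerSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    """lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)"""

    def __init__(self, d_model=512, warmup_steps=4000):
        super().__init__()
        self.d_model = tf.cast(d_model, tf.float32)
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        warmup = step * (self.warmup_steps ** -1.5)  # linear warm-up
        decay = tf.math.rsqrt(step)                  # inverse square-root decay
        return tf.math.rsqrt(self.d_model) * tf.math.minimum(decay, warmup)

optimizer = tf.keras.optimizers.Adam(TransformerSchedule(), beta_1=0.9, beta_2=0.98, epsilon=1e-9)
```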

While fine-tuning the BiT model, I used what is referred to as the BiT-HyperRule. BiT models come in different variants; I used the m-r50x1 variant. Refer to this blog post to learn more about BiT and the BiT-HyperRule.
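
A rough sketch of what fine-tuning BiT m-r50x1 under the BiT-HyperRule looks like: SGD with momentum 0.9, a base learning rate of 0.003 scaled linearly by batch_size / 512, and 10x decays at 30%, 60%, and 90% of the training steps. The step count, batch size, and embedding head below are illustrative, and the triplet-loss setup mirrors the sketch above:

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_addons as tfa

# BiT-m R50x1 feature extractor from TF Hub.
bit_backbone = hub.KerasLayer("https://tfhub.dev/google/bit/m-r50x1/1", trainable=True)

embedding_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    bit_backbone,
    tf.keras.layers.Dense(128),                                         # illustrative embedding size
    tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)),
])

# BiT-HyperRule schedule: base LR 0.003 * batch_size / 512, decayed 10x at 30%/60%/90% of steps.
total_steps, batch_size = 500, 32  # illustrative numbers for a tiny dataset
base_lr = 0.003 * batch_size / 512
boundaries = [int(total_steps * f) for f in (0.3, 0.6, 0.9)]
values = [base_lr, base_lr / 10, base_lr / 100, base_lr / 1000]
lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(boundaries, values)

embedding_model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9),
    loss=tfa.losses.TripletSemiHardLoss(),
)
```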

Visualization of the embedding space of a limited validation set

(The models were trained on 184 examples)
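
The plots themselves aren't reproduced in this text version; a typical way to generate such a visualization is to project the validation-set embeddings to 2D, e.g. with scikit-learn's t-SNE (a sketch, not the original plotting code):

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embedding_space(val_features, val_labels):
    """Project embeddings (from the trained CNN) to 2D and color points by class."""
    projected = TSNE(n_components=2, perplexity=15).fit_transform(val_features)
    scatter = plt.scatter(projected[:, 0], projected[:, 1], c=val_labels, cmap="tab10", s=15)
    plt.legend(*scatter.legend_elements(), title="class")
    plt.title("2D projection of validation-set embeddings")
    plt.show()
```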

Training progress

(The models were trained on 184 examples)

The improvements with BiT are quite prominent. This indeed suggests that bigger models like BiT can be sample-efficient.

A few observations

Consider the following results (although they come from the model fine-tuned from VGG16):

We see that tulips and roses are being treated similarly, and so are dandelions and daisies. There is indeed an overlap between their shapes and textures, which is likely why this happens. When dealing with problems where very few samples are available per class, it helps to have rich, representative samples that are distinct and indicative of each class.

References

Different model weights

Available here.

Feedback

Via GitHub issues.
