
yassouali / SCL

License: MIT
πŸ“„ Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives to or similar to SCL

AdCo
AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries
Stars: ✭ 148 (+252.38%)
Mutual labels:  self-supervised-learning, contrastive-learning
Pytorch Metric Learning
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
Stars: ✭ 3,936 (+9271.43%)
Mutual labels:  self-supervised-learning, contrastive-learning
GCA
[WWW 2021] Source code for "Graph Contrastive Learning with Adaptive Augmentation"
Stars: ✭ 69 (+64.29%)
Mutual labels:  self-supervised-learning, contrastive-learning
GCL
List of Publications in Graph Contrastive Learning
Stars: ✭ 25 (-40.48%)
Mutual labels:  self-supervised-learning, contrastive-learning
LibFewShot
LibFewShot: A Comprehensive Library for Few-shot Learning.
Stars: ✭ 629 (+1397.62%)
Mutual labels:  image-classification, few-shot-learning
SoCo
[NeurIPS 2021 Spotlight] Aligning Pretraining for Detection via Object-Level Contrastive Learning
Stars: ✭ 125 (+197.62%)
Mutual labels:  self-supervised-learning, contrastive-learning
PIC
Parametric Instance Classification for Unsupervised Visual Feature Learning, NeurIPS 2020
Stars: ✭ 41 (-2.38%)
Mutual labels:  self-supervised-learning, contrastive-learning
TCE
This repository contains the code implementation used in the paper Temporally Coherent Embeddings for Self-Supervised Video Representation Learning (TCE).
Stars: ✭ 51 (+21.43%)
Mutual labels:  self-supervised-learning, contrastive-learning
SimMIM
This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".
Stars: ✭ 717 (+1607.14%)
Mutual labels:  image-classification, self-supervised-learning
Transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning etc. Papers, codes, datasets, applications, tutorials. ("迁移学习" is Chinese for transfer learning.)
Stars: ✭ 8,481 (+20092.86%)
Mutual labels:  few-shot-learning, self-supervised-learning
awesome-graph-self-supervised-learning-based-recommendation
A curated list of awesome graph & self-supervised-learning-based recommendation.
Stars: ✭ 37 (-11.9%)
Mutual labels:  self-supervised-learning, contrastive-learning
Parametric-Contrastive-Learning
Parametric Contrastive Learning (ICCV2021)
Stars: ✭ 155 (+269.05%)
Mutual labels:  image-classification, contrastive-learning
info-nce-pytorch
PyTorch implementation of the InfoNCE loss for self-supervised learning.
Stars: ✭ 160 (+280.95%)
Mutual labels:  self-supervised-learning, contrastive-learning
G-SimCLR
This is the code base for paper "G-SimCLR : Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling" by Souradip Chakraborty, Aritra Roy Gosthipaty and Sayak Paul.
Stars: ✭ 69 (+64.29%)
Mutual labels:  self-supervised-learning, contrastive-learning
GeDML
Generalized Deep Metric Learning.
Stars: ✭ 30 (-28.57%)
Mutual labels:  self-supervised-learning, contrastive-learning
simclr-pytorch
PyTorch implementation of SimCLR: supports multi-GPU training and closely reproduces results
Stars: ✭ 89 (+111.9%)
Mutual labels:  self-supervised-learning, contrastive-learning
DisCont
Code for the paper "DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors".
Stars: ✭ 13 (-69.05%)
Mutual labels:  self-supervised-learning, contrastive-learning
CLMR
Official PyTorch implementation of Contrastive Learning of Musical Representations
Stars: ✭ 216 (+414.29%)
Mutual labels:  self-supervised-learning, contrastive-learning
Simclr
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
Stars: ✭ 2,720 (+6376.19%)
Mutual labels:  self-supervised-learning, contrastive-learning
sinkhorn-label-allocation
Sinkhorn Label Allocation is a label assignment method for semi-supervised self-training algorithms. The SLA algorithm is described in full in this ICML 2021 paper: https://arxiv.org/abs/2102.08622.
Stars: ✭ 49 (+16.67%)
Mutual labels:  image-classification, few-shot-learning

Spatial Contrastive Learning for Few-Shot Classification (SCL)

Paper πŸ“ƒ

This repo contains the official implementation of Spatial Contrastive Learning for Few-Shot Classification (SCL), which presents a novel contrastive learning method applied to few-shot image classification in order to learn more general-purpose embeddings and facilitate test-time adaptation to novel visual categories.

Highlights πŸ”₯

(1) Contrastive Learning for Few-Shot Classification.
We explore contrastive learning as an auxiliary pre-training objective to learn more transferable features and facilitate test-time adaptation for few-shot classification.

(2) Spatial Contrastive Learning (SCL).
We propose a novel Spatial Contrastive (SC) loss that promotes the encoding of the relevant spatial information into the learned representations, and further promotes class-independent discriminative patterns.

(3) Contrastive Distillation for Few-Shot Classification.
We introduce a novel contrastive distillation objective to reduce the compactness of the features in the embedding space and provide additional refinement of the representations.

Requirements πŸ”§

This repo was tested with CentOS 7.7.1908, Python 3.7.7, PyTorch 1.6.0, and CUDA 10.2. However, we expect the provided code to be compatible with both older and newer versions.

The required packages are pytorch and torchvision, together with PIL and scikit-learn for data preprocessing and evaluation, tqdm for showing the training progress, and some additional modules. To set up the necessary modules, simply run:

pip install -r requirements.txt
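
If you prefer to pin the tested versions first, one possible setup is sketched below (the conda environment name scl is arbitrary and not part of the repo; torchvision 0.7.0 is the release that pairs with PyTorch 1.6.0):

conda create -n scl python=3.7.7
conda activate scl
pip install torch==1.6.0 torchvision==0.7.0
pip install -r requirements.txt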

Datasets πŸ’½

Standard Few-shot Setting

For the standard few-shot experiments, we used the ImageNet derivatives miniImageNet and tieredImageNet, in addition to the CIFAR-100 derivatives FC100 and CIFAR-FS. These datasets were preprocessed by the MetaOptNet repo, renamed and re-uploaded by RFS, and can be downloaded from here: [DropBox]

After downloading all of the datasets, place them in the same folder, which we refer to as DATA_PATH, with each dataset in its own subfolder, e.g., DATA_PATH/FC100. Then, during training, set the training argument data_root to DATA_PATH.
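
For instance, the resulting layout could look like the following (folder names other than FC100 are illustrative and depend on the downloaded archives):

DATA_PATH/
├── miniImageNet/
├── tieredImageNet/
├── FC100/
└── CIFAR-FS/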

Cross-domain Few-shot Setting

In the cross-domain setting, we train on miniImageNet but test on a different dataset. Specifically, we consider 4 datasets: cub, cars, places and plantae. All of them can be downloaded as follows:

cd dataset/download
python download.py DATASET_NAME DATA_PATH

where DATASET_NAME refers to one of the 4 datasets (cub, cars, places and plantae) and DATA_PATH refers to the path where the data will be downloaded and saved, which can be the same path as used for the standard datasets above.
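
For example, to fetch the CUB dataset into DATA_PATH:

cd dataset/download
python download.py cub DATA_PATH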

Running βŒ›

All of the commands necessary to reproduce the results of the paper can be found in scripts/run.sh.

In general, to use the proposed method for few-shot classification, there is a two-stage approach to follow: (1) train the model on the merged meta-training set using train_contrastive.py, then (2) evaluate the pre-trained embedding model on the meta-testing set using eval_fewshot.py. Note that an optional distillation step can also be applied after the first pre-training stage using train_distillation.py.
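
As a rough sketch of the pipeline (the exact arguments are listed in scripts/run.sh; only data_root is named above, and the remaining flags are elided here):

# (1) contrastive pre-training on the merged meta-training set
python train_contrastive.py --data_root DATA_PATH ...

# (2) optional: contrastive distillation of the model from step (1)
python train_distillation.py ...

# (3) few-shot evaluation of the pre-trained (or distilled) embedding model on the meta-testing set
python eval_fewshot.py ...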

Other Use Cases

The proposed SCL method is not specific to few-shot classification, and can also be used for standard supervised or self-supervised training for image classification. For instance, this can be done as follows:

import torch

from losses import ContrastiveLoss
from models.attention import AttentionSimilarity

attention_module = AttentionSimilarity(hidden_size=128) # hidden_size depends on the encoder
contrast_criterion = ContrastiveLoss(temperature=10) # inverse temp is used (0.1)

....

# apply some augmentations
aug_inputs1, aug_inputs2 = augment(inputs) 
aug_inputs = torch.cat([aug_inputs1, aug_inputs2], dim=0)

# forward pass
features = encoder(aug_inputs)

# supervised case
loss_contrast = contrast_criterion(features, attention=attention_module, labels=labels)

# unsupervised case
loss_contrast = contrast_criterion(features, attention=attention_module, labels=None)

....
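
In the supervised case, one natural option is to use the contrastive term as an auxiliary loss alongside standard cross-entropy. A minimal sketch, assuming a hypothetical classifier head and a tunable weight lambda_contrast (neither is prescribed by the repo):

import torch.nn.functional as F

# hypothetical classification head over the (pooled) encoder features
logits = classifier(features)

# both augmented views keep the labels of the original batch
ce_labels = torch.cat([labels, labels], dim=0)

# auxiliary-loss combination; lambda_contrast is a tunable weight
loss = F.cross_entropy(logits, ce_labels) + lambda_contrast * loss_contrast
loss.backward()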

Citation πŸ“

If you find this repo useful for your research, please consider citing the paper as follows:

@article{ouali2020spatial,
  title={Spatial Contrastive Learning for Few-Shot Classification},
  author={Ouali, Yassine and Hudelot, C{\'e}line and Tami, Myriam},
  journal={arXiv preprint arXiv:2012.13831},
  year={2020}
}

For any questions, please contact Yassine Ouali.

Acknowledgements

  • The code structure is based on the RFS repo.
  • The cross-domain datasets code is based on the CrossDomainFewShot repo.