htdt / self-supervised

Licence: other
Whitening for Self-Supervised Representation Learning | Official repository

Programming Languages

  • python
  • Dockerfile

Projects that are alternatives to or similar to self-supervised

  • EgoNet: Official project website for the CVPR 2021 paper "Exploring intermediate representation for monocular vehicle pose estimation". Stars: ✭ 111 (+33.73%). Mutual labels: representation-learning, self-supervised-learning
  • awesome-contrastive-self-supervised-learning: A comprehensive list of awesome contrastive self-supervised learning papers. Stars: ✭ 748 (+801.2%). Mutual labels: representation-learning, self-supervised-learning
  • Transferlearning: Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, tutorials. Stars: ✭ 8,481 (+10118.07%). Mutual labels: representation-learning, self-supervised-learning
  • object-aware-contrastive: Object-aware Contrastive Learning for Debiased Scene Representation (NeurIPS 2021). Stars: ✭ 44 (-46.99%). Mutual labels: representation-learning, self-supervised-learning
  • simclr-pytorch: PyTorch implementation of SimCLR; supports multi-GPU training and closely reproduces the published results. Stars: ✭ 89 (+7.23%). Mutual labels: representation-learning, self-supervised-learning
  • TCE: Code for the paper "Temporally Coherent Embeddings for Self-Supervised Video Representation Learning" (TCE). Stars: ✭ 51 (-38.55%). Mutual labels: representation-learning, self-supervised-learning
  • Simclr: SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners. Stars: ✭ 2,720 (+3177.11%). Mutual labels: representation-learning, self-supervised-learning
  • Revisiting-Contrastive-SSL: Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations (NeurIPS 2021). Stars: ✭ 81 (-2.41%). Mutual labels: representation-learning, self-supervised-learning
  • VQ-APC: Vector Quantized Autoregressive Predictive Coding (VQ-APC). Stars: ✭ 34 (-59.04%). Mutual labels: representation-learning, self-supervised-learning
  • awesome-graph-self-supervised-learning: Awesome Graph Self-Supervised Learning. Stars: ✭ 805 (+869.88%). Mutual labels: representation-learning, self-supervised-learning
  • MSF: Official code for "Mean Shift for Self-Supervised Learning". Stars: ✭ 42 (-49.4%). Mutual labels: representation-learning, self-supervised-learning
  • DIG: A library for graph deep learning research. Stars: ✭ 1,078 (+1198.8%). Mutual labels: self-supervised-learning
  • SelfTask-GNN: Implementation of the paper "Self-supervised Learning on Graphs: Deep Insights and New Directions". Stars: ✭ 78 (-6.02%). Mutual labels: self-supervised-learning
  • cpnet: Learning Video Representations from Correspondence Proposals (CVPR 2019 Oral). Stars: ✭ 93 (+12.05%). Mutual labels: representation-learning
  • al-fk-self-supervision: Official PyTorch code for the CVPR 2020 paper "Deep Active Learning for Biased Datasets via Fisher Kernel Self-Supervision". Stars: ✭ 28 (-66.27%). Mutual labels: self-supervised-learning
  • visual syntactic embedding video captioning: Source code of the paper "Improving Video Captioning with Temporal Composition of a Visual-Syntactic Embedding". Stars: ✭ 23 (-72.29%). Mutual labels: representation-learning
  • GCA: Source code for "Graph Contrastive Learning with Adaptive Augmentation" (WWW 2021). Stars: ✭ 69 (-16.87%). Mutual labels: self-supervised-learning
  • AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries. Stars: ✭ 148 (+78.31%). Mutual labels: self-supervised-learning
  • factorized: Learning Factorized Multimodal Representations (ICLR 2019). Stars: ✭ 49 (-40.96%). Mutual labels: representation-learning
  • awesome-multimodal-ml: Reading list for research topics in multimodal machine learning. Stars: ✭ 3,125 (+3665.06%). Mutual labels: representation-learning

Self-Supervised Representation Learning

Official repository of the paper Whitening for Self-Supervised Representation Learning
arXiv:2007.06346

It includes 3 types of losses:

  • W-MSE (the whitening-based loss proposed in the paper; a simplified sketch of the idea follows the dataset list below)
  • Contrastive
  • BYOL

And 5 datasets:

  • CIFAR-10 and CIFAR-100
  • STL-10
  • Tiny ImageNet
  • ImageNet-100
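
For intuition, the following is a minimal sketch of the whitening-MSE idea: the embeddings of a batch of positive pairs are whitened (decorrelated and scaled to unit variance), and a normalized MSE is then applied to each positive pair. It is an illustration only, not the repository's implementation, which among other details exposes the --w_size and --w_iter options used in the commands below.

import torch
import torch.nn.functional as F

def whiten(z, eps=1e-4):
    # Center the batch and compute its (regularized) covariance matrix.
    z = z - z.mean(dim=0)
    cov = z.T @ z / (z.shape[0] - 1)
    cov = cov + eps * torch.eye(z.shape[1], device=z.device)
    # Cholesky whitening: with cov = L L^T, the product z @ inv(L).T has identity covariance.
    L = torch.linalg.cholesky(cov)
    return z @ torch.linalg.inv(L).T

def w_mse_loss(z1, z2):
    # Whiten both augmented views jointly, then pull each positive pair together
    # with a normalized MSE, which equals 2 - 2 * cosine similarity.
    z = whiten(torch.cat([z1, z2], dim=0))
    z1w, z2w = z.chunk(2, dim=0)
    return (2 - 2 * F.cosine_similarity(z1w, z2w, dim=-1)).mean()

# Toy usage with random embeddings of two views of the same 256 images.
z1, z2 = torch.randn(256, 64), torch.randn(256, 64)
print(w_mse_loss(z1, z2))

Note that no negative pairs are needed: whitening decorrelates the embedding dimensions, which is what prevents the representations from collapsing to a constant.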

Checkpoints are saved in the data directory every 100 epochs during training.

The implementation is optimized for a single GPU, although multiple GPUs are also supported. It includes fast evaluation: embeddings are pre-computed for the entire dataset, and a classifier is then trained on top of them. Evaluating the ResNet-18 encoder takes about one minute.
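
As a rough illustration of this evaluation scheme (not the repository's actual code; the helper names below are hypothetical), the frozen encoder's embeddings can be cached once and a linear classifier fitted on the cached features:

import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def compute_embeddings(encoder, loader, device="cuda"):
    # Run the frozen encoder over the whole dataset once and cache the features.
    encoder.eval()
    feats, labels = [], []
    for x, y in loader:
        feats.append(encoder(x.to(device)).cpu())
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)

def linear_eval(train_z, train_y, test_z, test_y, epochs=500, lr=1e-2):
    # Fit a linear classifier on the cached embeddings; no images are reloaded.
    clf = nn.Linear(train_z.shape[1], int(train_y.max()) + 1)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(clf(train_z), train_y).backward()
        opt.step()
    return (clf(test_z).argmax(dim=1) == test_y).float().mean().item()

Because the expensive forward passes through the encoder happen only once, the subsequent classifier training is comparatively cheap.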

Installation

The implementation is based on PyTorch, and logging is handled through wandb.ai. See docker/Dockerfile for the environment.

ImageNet-100

To get this dataset, take the original ImageNet and keep only this subset of classes. Augmentations are not used during testing, and loading large images with on-the-fly resizing is slow, so the classifier train and test images can be preprocessed in advance. We recommend mogrify for this. First, resize the images to 256 (just like torchvision.transforms.Resize(256)) and then center-crop them to 224 (like torchvision.transforms.CenterCrop(224)). Finally, put the original images in train, and the resized ones in clf and test.
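
If you prefer Python over mogrify, a rough offline preprocessing sketch using exactly those torchvision transforms could look as follows (the folder layout and the .JPEG extension are illustrative assumptions):

from pathlib import Path
from PIL import Image
from torchvision import transforms

# Resize the shorter side to 256, then center-crop to 224x224,
# mirroring torchvision.transforms.Resize(256) + CenterCrop(224).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
])

src = Path("imagenet100/original")  # hypothetical source folder with class subfolders
dst = Path("imagenet100/clf")       # hypothetical destination; repeat for the test split
for img_path in src.rglob("*.JPEG"):
    out_path = dst / img_path.relative_to(src)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    preprocess(Image.open(img_path).convert("RGB")).save(out_path)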

Usage

The detailed settings are good by default; to see all options:

python -m train --help
python -m test --help

To reproduce the results from Table 1 of the paper:

W-MSE 4

python -m train --dataset cifar10 --epoch 1000 --lr 3e-3 --num_samples 4 --bs 256 --emb 64 --w_size 128
python -m train --dataset cifar100 --epoch 1000 --lr 3e-3 --num_samples 4 --bs 256 --emb 64 --w_size 128
python -m train --dataset stl10 --epoch 2000 --lr 2e-3 --num_samples 4 --bs 256 --emb 128 --w_size 256
python -m train --dataset tiny_in --epoch 1000 --lr 2e-3 --num_samples 4 --bs 256 --emb 128 --w_size 256

W-MSE 2

python -m train --dataset cifar10 --epoch 1000 --lr 3e-3 --emb 64 --w_size 128
python -m train --dataset cifar100 --epoch 1000 --lr 3e-3 --emb 64 --w_size 128
python -m train --dataset stl10 --epoch 2000 --lr 2e-3 --emb 128 --w_size 256 --w_iter 4
python -m train --dataset tiny_in --epoch 1000 --lr 2e-3 --emb 128 --w_size 256 --w_iter 4

Contrastive

python -m train --dataset cifar10 --epoch 1000 --lr 3e-3 --emb 64 --method contrastive
python -m train --dataset cifar100 --epoch 1000 --lr 3e-3 --emb 64 --method contrastive
python -m train --dataset stl10 --epoch 2000 --lr 2e-3 --emb 128 --method contrastive
python -m train --dataset tiny_in --epoch 1000 --lr 2e-3 --emb 128 --method contrastive

BYOL

python -m train --dataset cifar10 --epoch 1000 --lr 3e-3 --emb 64 --method byol
python -m train --dataset cifar100 --epoch 1000 --lr 3e-3 --emb 64 --method byol
python -m train --dataset stl10 --epoch 2000 --lr 2e-3 --emb 128 --method byol
python -m train --dataset tiny_in --epoch 1000 --lr 2e-3 --emb 128 --method byol

ImageNet-100

python -m train --dataset imagenet --epoch 240 --lr 2e-3 --emb 128 --w_size 256 --crop_s0 0.08 --cj0 0.8 --cj1 0.8 --cj2 0.8 --cj3 0.2 --gs_p 0.2
python -m train --dataset imagenet --epoch 240 --lr 2e-3 --num_samples 4 --bs 256 --emb 128 --w_size 256 --crop_s0 0.08 --cj0 0.8 --cj1 0.8 --cj2 0.8 --cj3 0.2 --gs_p 0.2

Use --no_norm to disable normalization (for Euclidean distance).

Citation

@article{ermolov2020whitening,
  title={Whitening for Self-Supervised Representation Learning}, 
  author={Aleksandr Ermolov and Aliaksandr Siarohin and Enver Sangineto and Nicu Sebe},
  journal={arXiv preprint arXiv:2007.06346},
  year={2020}
}