
Spijkervet / BYOL

Licence: other
Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning

Programming Languages

python

Projects that are alternatives of or similar to BYOL

awesome-graph-self-supervised-learning
Awesome Graph Self-Supervised Learning
Stars: ✭ 805 (+689.22%)
Mutual labels:  self-supervised-learning
Awesome-Vision-Transformer-Collection
Variants of Vision Transformer and its downstream tasks
Stars: ✭ 124 (+21.57%)
Mutual labels:  self-supervised-learning
newt
Natural World Tasks
Stars: ✭ 24 (-76.47%)
Mutual labels:  self-supervised-learning
TCE
This repository contains the code implementation used in the paper Temporally Coherent Embeddings for Self-Supervised Video Representation Learning (TCE).
Stars: ✭ 51 (-50%)
Mutual labels:  self-supervised-learning
info-nce-pytorch
PyTorch implementation of the InfoNCE loss for self-supervised learning.
Stars: ✭ 160 (+56.86%)
Mutual labels:  self-supervised-learning
mae-scalable-vision-learners
A TensorFlow 2.x implementation of Masked Autoencoders Are Scalable Vision Learners
Stars: ✭ 54 (-47.06%)
Mutual labels:  self-supervised-learning
SimSiam
Exploring Simple Siamese Representation Learning
Stars: ✭ 52 (-49.02%)
Mutual labels:  self-supervised-learning
MSF
Official code for "Mean Shift for Self-Supervised Learning"
Stars: ✭ 42 (-58.82%)
Mutual labels:  self-supervised-learning
awesome-graph-self-supervised-learning-based-recommendation
A curated list of awesome graph & self-supervised-learning-based recommendation.
Stars: ✭ 37 (-63.73%)
Mutual labels:  self-supervised-learning
SoCo
[NeurIPS 2021 Spotlight] Aligning Pretraining for Detection via Object-Level Contrastive Learning
Stars: ✭ 125 (+22.55%)
Mutual labels:  self-supervised-learning
mmselfsup
OpenMMLab Self-Supervised Learning Toolbox and Benchmark
Stars: ✭ 2,315 (+2169.61%)
Mutual labels:  self-supervised-learning
byol-a
BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation
Stars: ✭ 147 (+44.12%)
Mutual labels:  byol
esvit
EsViT: Efficient self-supervised Vision Transformers
Stars: ✭ 323 (+216.67%)
Mutual labels:  self-supervised-learning
self6dpp
Self6D++: Occlusion-Aware Self-Supervised Monocular 6D Object Pose Estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2021.
Stars: ✭ 45 (-55.88%)
Mutual labels:  self-supervised-learning
simsiam-cifar10
Code to train the SimSiam model on cifar10 using PyTorch
Stars: ✭ 33 (-67.65%)
Mutual labels:  self-supervised-learning
latent-pose-reenactment
The authors' implementation of the "Neural Head Reenactment with Latent Pose Descriptors" (CVPR 2020) paper.
Stars: ✭ 132 (+29.41%)
Mutual labels:  self-supervised-learning
pillar-motion
Self-Supervised Pillar Motion Learning for Autonomous Driving (CVPR 2021)
Stars: ✭ 98 (-3.92%)
Mutual labels:  self-supervised-learning
CVPR21 PASS
PyTorch implementation of our CVPR2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Stars: ✭ 55 (-46.08%)
Mutual labels:  self-supervised-learning
BossNAS
(ICCV 2021) BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Stars: ✭ 125 (+22.55%)
Mutual labels:  self-supervised-learning
GCL
List of Publications in Graph Contrastive Learning
Stars: ✭ 25 (-75.49%)
Mutual labels:  self-supervised-learning

BYOL - Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning

PyTorch implementation of "Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning" by J.B. Grill et al.

Link to paper

This repository includes a practical implementation of BYOL with:

  • Distributed Data Parallel training
  • Benchmarks on vision datasets (CIFAR-10 / STL-10)
  • Support for PyTorch <= 1.5.0
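For reference, the core BYOL update can be sketched in a few lines of PyTorch. This is an illustrative sketch, not this repository's exact code: the module definitions, hidden dimensions, and hyperparameters are assumptions (the 256-dimensional projection output is taken from the results table below).

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

# Online network: ResNet18 encoder + MLP projector; a predictor sits on top of it.
# (Architecture details here are illustrative, not necessarily the repo's exact setup.)
encoder = torchvision.models.resnet18()
encoder.fc = nn.Identity()
projector = nn.Sequential(nn.Linear(512, 4096), nn.BatchNorm1d(4096), nn.ReLU(), nn.Linear(4096, 256))
online_net = nn.Sequential(encoder, projector)
predictor = nn.Sequential(nn.Linear(256, 4096), nn.BatchNorm1d(4096), nn.ReLU(), nn.Linear(4096, 256))
optimizer = torch.optim.Adam(list(online_net.parameters()) + list(predictor.parameters()), lr=3e-4)

# Target network: an exponential-moving-average copy of the online network, no gradients.
target_net = copy.deepcopy(online_net)
for p in target_net.parameters():
    p.requires_grad = False

def byol_loss(p, z):
    # Negative cosine similarity (equivalently, squared L2 distance of normalized vectors).
    p, z = F.normalize(p, dim=-1), F.normalize(z, dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()

def training_step(x1, x2, momentum=0.99):
    # x1, x2 are two augmented views of the same batch of images.
    p1, p2 = predictor(online_net(x1)), predictor(online_net(x2))
    with torch.no_grad():
        z1, z2 = target_net(x1), target_net(x2)
    loss = byol_loss(p1, z2) + byol_loss(p2, z1)  # symmetrized loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # EMA update of the target network towards the online network.
    with torch.no_grad():
        for po, pt in zip(online_net.parameters(), target_net.parameters()):
            pt.data = momentum * pt.data + (1 - momentum) * po.data
    return loss.item()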

Open BYOL in Google Colab Notebook


Results

These are the top-1 accuracies of linear classifiers trained on the (frozen) representations learned by BYOL:

| Method | Batch size | Image size | ResNet | Projection output dim. | Pre-training epochs | Optimizer | STL-10 | CIFAR-10 |
|---|---|---|---|---|---|---|---|---|
| BYOL + linear eval. | 192 | 224x224 | ResNet18 | 256 | 100 | Adam | - | 0.832 |
| Logistic Regression | - | - | - | - | - | - | 0.358 | 0.389 |

Installation

git clone https://github.com/spijkervet/byol --recurse-submodules -j8
pip3 install -r requirements.txt
python3 main.py

Usage

Using a pre-trained model

The following commands will train a logistic regression model on a pre-trained ResNet18, yielding a top-1 accuracy of 83.2% on CIFAR-10.

curl https://github.com/Spijkervet/BYOL/releases/download/1.0/resnet18-CIFAR10-final.pt -L -O
rm features.p
python3 logistic_regression.py --model_path resnet18-CIFAR10-final.pt
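Conceptually, this step extracts features with the frozen pre-trained encoder and fits a linear classifier on top of them. The sketch below is illustrative only; the checkpoint key layout, the train_loader, and the use of nn.Identity to drop the classification head are assumptions, not the script's exact interface.

import torch
import torchvision
from torch import nn

# Load the pre-trained ResNet18 and strip its classification head (assumption about checkpoint layout).
encoder = torchvision.models.resnet18()
encoder.fc = nn.Identity()  # keep the 512-d features
state = torch.load("resnet18-CIFAR10-final.pt", map_location="cpu")
encoder.load_state_dict(state, strict=False)  # strict=False because the exact key names are an assumption
encoder.eval()

# Linear classifier ("logistic regression") on top of the frozen features.
classifier = nn.Linear(512, 10)  # CIFAR-10 has 10 classes
optimizer = torch.optim.Adam(classifier.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

for x, y in train_loader:  # train_loader: a standard CIFAR-10 DataLoader (assumed, not defined here)
    with torch.no_grad():
        features = encoder(x)  # frozen representations
    logits = classifier(features)
    loss = criterion(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()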

Pre-training

To run pre-training using BYOL with the default arguments (1 node, 1 GPU), use:

python3 main.py

This is equivalent to:

python3 main.py --nodes 1 --gpus 1

The pre-trained models are saved every --checkpoint_epochs epochs as *.pt files, with the final model saved as model-final.pt.
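A checkpointing loop along these lines produces that naming scheme (a rough sketch; model, loader, train_one_epoch, and args are stand-ins for the repository's actual objects):

import torch

for epoch in range(1, args.num_epochs + 1):
    train_one_epoch(model, loader)  # hypothetical training routine
    if epoch % args.checkpoint_epochs == 0:
        torch.save(model.state_dict(), f"model-{epoch}.pt")  # periodic checkpoint (filename illustrative)
torch.save(model.state_dict(), "model-final.pt")  # final model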

Finetuning

Fine-tuning a model ('linear evaluation') on top of the pre-trained, frozen ResNet can be done using:

python3 logistic_regression.py --model_path=./model_final.pt

Here, model_final.pt is the file containing the pre-trained network from the pre-training stage.

Multi-GPU / Multi-node training

Use python3 main.py --gpus 2 to train on e.g. 2 GPUs, and python3 main.py --gpus 2 --nodes 2 to train with 2 GPUs per node on 2 nodes. See https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html for an excellent explanation of distributed training in PyTorch.
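Under the hood, these flags typically combine into a global process rank for PyTorch DistributedDataParallel, following the pattern from the linked tutorial. The sketch below shows that standard setup and is not necessarily this repository's exact code; build_model and the master address/port defaults are placeholders.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(local_rank, args):
    # Global rank = node index * GPUs per node + local GPU index.
    rank = args.nr * args.gpus + local_rank
    world_size = args.nodes * args.gpus
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")  # set to the head node's address for multi-node runs
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)
    model = build_model().cuda(local_rank)  # build_model is a stand-in for the repo's model setup
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
    # ... training loop as usual ...

# One process per GPU on this node:
# mp.spawn(worker, nprocs=args.gpus, args=(args,))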

Arguments

--image_size, default=224, "Image size."
--learning_rate, default=3e-4, "Initial learning rate."
--batch_size, default=42, "Batch size for training."
--num_epochs, default=100, "Number of epochs to train for."
--checkpoint_epochs, default=10, "Number of epochs between checkpoints/summaries."
--dataset_dir, default="./datasets", "Directory where the dataset is stored."
--num_workers, default=8, "Number of data loading workers (caution with nodes!)."
--nodes, default=1, "Number of nodes."
--gpus, default=1, "Number of GPUs per node."
--nr, default=0, "Rank of this node among the nodes."
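These options map naturally onto an argparse parser. The sketch below simply mirrors the defaults listed above; it is illustrative and may differ from the actual script.

import argparse

parser = argparse.ArgumentParser(description="BYOL pre-training")
parser.add_argument("--image_size", type=int, default=224, help="Image size")
parser.add_argument("--learning_rate", type=float, default=3e-4, help="Initial learning rate")
parser.add_argument("--batch_size", type=int, default=42, help="Batch size for training")
parser.add_argument("--num_epochs", type=int, default=100, help="Number of epochs to train for")
parser.add_argument("--checkpoint_epochs", type=int, default=10, help="Number of epochs between checkpoints/summaries")
parser.add_argument("--dataset_dir", type=str, default="./datasets", help="Directory where the dataset is stored")
parser.add_argument("--num_workers", type=int, default=8, help="Number of data loading workers (caution with nodes!)")
parser.add_argument("--nodes", type=int, default=1, help="Number of nodes")
parser.add_argument("--gpus", type=int, default=1, help="Number of GPUs per node")
parser.add_argument("--nr", type=int, default=0, help="Rank of this node among the nodes")
args = parser.parse_args()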