IgorSusmelj / barlowtwins

License: MIT
Implementation of Barlow Twins paper

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to barlowtwins

Simclr
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
Stars: ✭ 2,720 (+3138.1%)
Mutual labels:  self-supervised-learning
object-aware-contrastive
Object-aware Contrastive Learning for Debiased Scene Representation (NeurIPS 2021)
Stars: ✭ 44 (-47.62%)
Mutual labels:  self-supervised-learning
S2-BNN
S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021)
Stars: ✭ 53 (-36.9%)
Mutual labels:  self-supervised-learning
form2fit
[ICRA 2020] Train generalizable policies for kit assembly with self-supervised dense correspondence learning.
Stars: ✭ 78 (-7.14%)
Mutual labels:  self-supervised-learning
temporal-ssl
Video Representation Learning by Recognizing Temporal Transformations. In ECCV, 2020.
Stars: ✭ 46 (-45.24%)
Mutual labels:  self-supervised-learning
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (-3.57%)
Mutual labels:  self-supervised-learning
Transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, tutorials.
Stars: ✭ 8,481 (+9996.43%)
Mutual labels:  self-supervised-learning
Self-Supervised-GANs
Tensorflow Implementation for paper "self-supervised generative adversarial networks"
Stars: ✭ 34 (-59.52%)
Mutual labels:  self-supervised-learning
CLSA
Official implementation of "Contrastive Learning with Stronger Augmentations"
Stars: ✭ 48 (-42.86%)
Mutual labels:  self-supervised-learning
video-pace
code for our ECCV-2020 paper: Self-supervised Video Representation Learning by Pace Prediction
Stars: ✭ 95 (+13.1%)
Mutual labels:  self-supervised-learning
SimMIM
This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".
Stars: ✭ 717 (+753.57%)
Mutual labels:  self-supervised-learning
3DInfomax
Making self-supervised learning work on molecules by using their 3D geometry to pre-train GNNs. Implemented in DGL and Pytorch Geometric.
Stars: ✭ 107 (+27.38%)
Mutual labels:  self-supervised-learning
SelfGNN
A PyTorch implementation of "SelfGNN: Self-supervised Graph Neural Networks without explicit negative sampling" paper, which appeared in The International Workshop on Self-Supervised Learning for the Web (SSL'21) @ the Web Conference 2021 (WWW'21).
Stars: ✭ 24 (-71.43%)
Mutual labels:  self-supervised-learning
SCL
📄 Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).
Stars: ✭ 42 (-50%)
Mutual labels:  self-supervised-learning
VQ-APC
Vector Quantized Autoregressive Predictive Coding (VQ-APC)
Stars: ✭ 34 (-59.52%)
Mutual labels:  self-supervised-learning
Sfmlearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
Stars: ✭ 1,661 (+1877.38%)
Mutual labels:  self-supervised-learning
naru
Neural Relation Understanding: neural cardinality estimators for tabular data
Stars: ✭ 76 (-9.52%)
Mutual labels:  self-supervised-learning
exponential-moving-average-normalization
PyTorch implementation of EMAN for self-supervised and semi-supervised learning: https://arxiv.org/abs/2101.08482
Stars: ✭ 76 (-9.52%)
Mutual labels:  self-supervised-learning
ccgl
CCGL: Contrastive Cascade Graph Learning (TKDE 2022).
Stars: ✭ 20 (-76.19%)
Mutual labels:  self-supervised-learning
libai
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
Stars: ✭ 284 (+238.1%)
Mutual labels:  self-supervised-learning

barlowtwins

PyTorch Implementation of Barlow Twins paper: Barlow Twins: Self-Supervised Learning via Redundancy Reduction

This is currently a work in progress. The code is a modified version of the SimSiam implementation here.

  • Time per epoch is around 40 seconds on a V100 GPU
  • GPU memory usage is around 9 GB
  • The current version reaches around 84.7% test accuracy
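For reference, the Barlow Twins objective pushes the cross-correlation matrix between the embeddings of two augmented views toward the identity: the diagonal term enforces invariance, the off-diagonal term reduces redundancy. A minimal sketch of that loss (the `lambda_param` trade-off value is illustrative and may differ from what this repo uses):

```python
import torch

def barlow_twins_loss(z_a, z_b, lambda_param=5e-3):
    """Barlow Twins loss on two batches of embeddings of shape (N, D)."""
    N, D = z_a.shape
    # standardize each embedding dimension across the batch
    z_a_norm = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b_norm = (z_b - z_b.mean(0)) / z_b.std(0)
    # empirical cross-correlation matrix, shape (D, D)
    c = (z_a_norm.T @ z_b_norm) / N
    # invariance term: drive the diagonal toward 1
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # redundancy-reduction term: drive off-diagonal entries toward 0
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()
    return on_diag + lambda_param * off_diag
```

In practice `z_a` and `z_b` are the projector outputs for two random augmentations of the same image batch.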

Todo:

  • Warm up the learning rate from 0
  • Report results on CIFAR-10
  • Create a PR to add to lightly
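The learning-rate warmup item above could be implemented with a scheduler that ramps linearly from 0 and then decays; this is a hypothetical sketch, not code from this repo, using a cosine decay after the warmup phase:

```python
import math
import torch

def warmup_cosine_lr(optimizer, warmup_epochs, max_epochs):
    """Linear warmup from 0 followed by cosine decay (illustrative helper)."""
    def factor(epoch):
        if epoch < warmup_epochs:
            # linear ramp: 0 at epoch 0, 1 at warmup_epochs
            return epoch / max(1, warmup_epochs)
        # cosine decay from 1 toward 0 over the remaining epochs
        progress = (epoch - warmup_epochs) / max(1, max_epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=factor)
```

Calling `scheduler.step()` once per epoch multiplies the optimizer's base learning rate by the returned factor.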

Installation

pip install -r requirements.txt

Dependencies

  • PyTorch
  • PyTorch Lightning
  • Torchvision
  • lightly

Benchmarks

We benchmark the BarlowTwins model on the CIFAR-10 dataset following the KNN evaluation protocol. Currently, the best run achieves a test accuracy of 84.7%.
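The KNN evaluation protocol classifies each test image by a weighted vote over its nearest neighbors in a feature bank built from the training set. A minimal sketch, assuming L2-normalized features; `k` and the temperature `t` are illustrative defaults, not necessarily this repo's settings:

```python
import torch

def knn_predict(feature_bank, bank_labels, query_features, k=200, num_classes=10, t=0.1):
    """Weighted KNN classification on L2-normalized features (sketch)."""
    # cosine similarity of each query against the whole bank, shape (Q, B)
    sim = query_features @ feature_bank.T
    # keep the k most similar bank entries per query
    sim_k, idx_k = sim.topk(k=k, dim=-1)          # (Q, k)
    labels_k = bank_labels[idx_k]                 # (Q, k)
    # temperature-scaled similarity weights
    weights = (sim_k / t).exp()
    # accumulate weighted class votes and pick the best class
    one_hot = torch.nn.functional.one_hot(labels_k, num_classes).float()
    scores = (one_hot * weights.unsqueeze(-1)).sum(dim=1)
    return scores.argmax(dim=-1)
```

The feature bank is typically filled with backbone embeddings of the training set after each epoch, so accuracy can be tracked without training a classifier head.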

(Plots: test accuracy and training loss over epochs.)

Paper

Barlow Twins: Self-Supervised Learning via Redundancy Reduction