
ariG23498 / G-SimCLR

License: Apache-2.0
This is the code base for the paper "G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling" by Souradip Chakraborty, Aritra Roy Gosthipaty and Sayak Paul.

Programming Languages

Jupyter Notebook

Projects that are alternatives to, or similar to, G-SimCLR

Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (+17.39%)
Mutual labels:  clustering, self-supervised-learning, contrastive-learning
TCE
This repository contains the code implementation used in the paper Temporally Coherent Embeddings for Self-Supervised Video Representation Learning (TCE).
Stars: ✭ 51 (-26.09%)
Mutual labels:  self-supervised-learning, contrastive-learning
GLOM-TensorFlow
An attempt at implementing GLOM, Geoffrey Hinton's proposal for emergent part-whole hierarchies from data
Stars: ✭ 32 (-53.62%)
Mutual labels:  keras-tensorflow, tensorflow2
tf-faster-rcnn
TensorFlow 2 Faster R-CNN implementation from scratch, supporting batch processing, with MobileNetV2 and VGG16 backbones
Stars: ✭ 88 (+27.54%)
Mutual labels:  keras-tensorflow, tensorflow2
ViCC
[WACV'22] Code repository for the paper "Self-supervised Video Representation Learning with Cross-Stream Prototypical Contrasting", https://arxiv.org/abs/2106.10137.
Stars: ✭ 33 (-52.17%)
Mutual labels:  self-supervised-learning, contrastive-learning
DisCont
Code for the paper "DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors".
Stars: ✭ 13 (-81.16%)
Mutual labels:  self-supervised-learning, contrastive-learning
GeDML
Generalized Deep Metric Learning.
Stars: ✭ 30 (-56.52%)
Mutual labels:  self-supervised-learning, contrastive-learning
object-aware-contrastive
Object-aware Contrastive Learning for Debiased Scene Representation (NeurIPS 2021)
Stars: ✭ 44 (-36.23%)
Mutual labels:  self-supervised-learning, contrastive-learning
awesome-graph-self-supervised-learning-based-recommendation
A curated list of awesome graph & self-supervised-learning-based recommendation.
Stars: ✭ 37 (-46.38%)
Mutual labels:  self-supervised-learning, contrastive-learning
mae-scalable-vision-learners
A TensorFlow 2.x implementation of Masked Autoencoders Are Scalable Vision Learners
Stars: ✭ 54 (-21.74%)
Mutual labels:  self-supervised-learning, tensorflow2
GCL
List of Publications in Graph Contrastive Learning
Stars: ✭ 25 (-63.77%)
Mutual labels:  self-supervised-learning, contrastive-learning
potato-disease-classification
Potato Disease Classification - Training, REST APIs, and a frontend for testing.
Stars: ✭ 95 (+37.68%)
Mutual labels:  keras-tensorflow, tensorflow2
S2-BNN
S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021)
Stars: ✭ 53 (-23.19%)
Mutual labels:  self-supervised-learning, contrastive-learning
CLMR
Official PyTorch implementation of Contrastive Learning of Musical Representations
Stars: ✭ 216 (+213.04%)
Mutual labels:  self-supervised-learning, contrastive-learning
GrouProx
FedGroup, a clustered federated learning framework based on TensorFlow
Stars: ✭ 20 (-71.01%)
Mutual labels:  clustering, tensorflow2
labml
🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱
Stars: ✭ 1,213 (+1657.97%)
Mutual labels:  keras-tensorflow, tensorflow2
SCL
📄 Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).
Stars: ✭ 42 (-39.13%)
Mutual labels:  self-supervised-learning, contrastive-learning
CLSA
Official implementation of "Contrastive Learning with Stronger Augmentations"
Stars: ✭ 48 (-30.43%)
Mutual labels:  self-supervised-learning, contrastive-learning
info-nce-pytorch
PyTorch implementation of the InfoNCE loss for self-supervised learning.
Stars: ✭ 160 (+131.88%)
Mutual labels:  self-supervised-learning, contrastive-learning
SoCo
[NeurIPS 2021 Spotlight] Aligning Pretraining for Detection via Object-Level Contrastive Learning
Stars: ✭ 125 (+81.16%)
Mutual labels:  self-supervised-learning, contrastive-learning

G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling

Official TensorFlow implementation of G-SimCLR (Guided-SimCLR), as described in the paper G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling by Souradip Chakraborty*, Aritra Roy Gosthipaty* and Sayak Paul*.

*Equal contribution.

The paper was accepted at ICDM 2020 for the Deep Learning for Knowledge Transfer (DLKT) workshop. A presentation deck is available here.

Abstract:

In computer vision, it is evident that deep neural networks perform better in a supervised setting with a large amount of labeled data. The representations learned with supervision are not only of high quality but also help the model enhance its accuracy. However, the collection and annotation of a large dataset are costly and time-consuming. To avoid this, there has been a lot of research in the field of unsupervised visual representation learning, especially in a self-supervised setting. Among the recent advancements in self-supervised methods for visual recognition, Chen et al. show in SimCLR that good quality representations can indeed be learned without explicit supervision. In SimCLR, the authors maximize the similarity between augmentations of the same image and minimize the similarity between augmentations of different images. A linear classifier trained on the representations learned with this approach yields 76.5% top-1 accuracy on the ImageNet ILSVRC-2012 dataset. In this work, we propose that, with the normalized temperature-scaled cross-entropy (NT-Xent) loss function (as used in SimCLR), it is beneficial not to have images of the same category in the same batch. In an unsupervised setting, information about which images belong to the same category is missing. We therefore use the latent-space representations of a denoising autoencoder trained on the unlabeled dataset and cluster them with k-means to obtain pseudo labels. With this a priori information, we construct batches in which no two images come from the same category. We report comparable performance enhancements on the CIFAR10 dataset and a subset of the ImageNet dataset. We refer to our method as G-SimCLR.
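The NT-Xent loss referred to in the abstract can be sketched framework-agnostically. The sketch below is a NumPy illustration of the loss as described in the SimCLR paper, not the authors' TensorFlow implementation from this repository:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) projections of two augmented views of the same
    N images. Returns the scalar loss averaged over all 2N anchors.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # an anchor never matches itself
    # the positive for anchor i is its other augmented view: i + n (mod 2n)
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), positives] - logsumexp)
    return loss.mean()
```

Intuitively, the loss is low when the two views of the same image are far more similar to each other than to every other sample in the batch, which is why same-category negatives in one batch (the problem G-SimCLR addresses) hurt training.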

Datasets Used:

  1. CIFAR10
  2. A subset of ImageNet (ILSVRC-2012)

Architectures used:

  1. ResNet20 used for CIFAR10.
  2. ResNet50 used for ImageNet subset.
  3. Denoising Autoencoder built from scratch.
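Given pseudo labels from k-means clustering of the denoising autoencoder's latent representations, the guided-batching step builds batches in which no two images share a pseudo label. The sketch below illustrates only that batching step; the sampling policy (uniform draws over non-empty clusters, with leftover samples dropped) is an illustrative assumption, not taken from the notebooks in this repository:

```python
import numpy as np

def guided_batches(pseudo_labels, batch_size, seed=0):
    """Yield index batches in which no two samples share a pseudo label.

    pseudo_labels: (N,) integer cluster assignments, e.g. from k-means
    on denoising-autoencoder latents. batch_size must not exceed the
    number of distinct labels. Leftover samples (once too few clusters
    remain) are dropped, mirroring a drop-remainder training loop.
    """
    rng = np.random.default_rng(seed)
    # one shuffled pool of sample indices per pseudo-label cluster
    pools = {c: list(rng.permutation(np.flatnonzero(pseudo_labels == c)))
             for c in np.unique(pseudo_labels)}
    while True:
        nonempty = [c for c, idx in pools.items() if idx]
        if len(nonempty) < batch_size:
            break
        # draw batch_size distinct clusters, take one sample from each
        chosen = rng.choice(len(nonempty), size=batch_size, replace=False)
        yield np.array([pools[nonempty[c]].pop() for c in chosen])
```

Every yielded batch is guaranteed to contain at most one sample per pseudo-label cluster, so (to the extent the pseudo labels track true categories) the NT-Xent negatives within a batch never come from the anchor's own category.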

Folder Structure:

.
├── CIFAR10
│   ├── Autoencoder.ipynb
│   ├── SimCLR_Pseudo_Labels
│   │   ├── Fine_Tune_10_Perc.ipynb
│   │   ├── Linear_Evaluation.ipynb
│   │   └── SimCLR_Pseudo_Labels_Training.ipynb
│   ├── Supervised_Training_CIFAR10.ipynb
│   └── Vanilla_SimCLR
│       ├── Fine_tune_10Perc.ipynb
│       ├── Linear_Evaluation.ipynb
│       └── SimCLR_Training.ipynb
├── Imagenet_Subset
│   ├── Autoencoder
│   │   ├── Deep_Autoencoder.ipynb
│   │   └── Shallow_Autoencoder.ipynb
│   ├── SimCLR_Pseudo_Labels
│   │   ├── Deep Autoencoder
│   │   │   ├── Fine_Tune_10Perc.ipynb
│   │   │   ├── Linear_Evaluation.ipynb
│   │   │   └── SimCLR_Pseudo_Labels_Training.ipynb
│   │   └── Shallow Autoencoder
│   │       ├── Fine_tune_10Perc.ipynb
│   │       ├── Linear_Evaluation.ipynb
│   │       └── SimCLR_Pseudo_Labels_Training.ipynb
│   ├── Supervised_Training_Imagenet_Subset.ipynb
│   └── Vanilla_SimCLR
│       ├── Fine_tune_10Perc.ipynb
│       ├── Linear_Evaluation.ipynb
│       └── SimCLR_Training.ipynb
└── README.md

Loss Curves:

Loss (NT-Xent) curves obtained from G-SimCLR training on the CIFAR10 and ImageNet Subset datasets, respectively.

Pretrained Weights:

Results Reported:

Linear Evaluation

                                         CIFAR 10   ImageNet Subset
  Fully supervised                       73.62      67.6
  SimCLR with minor modifications   P1   37.69      52.8
                                    P2   39.4       48.4
                                    P3   39.92      52.4
  G-SimCLR (ours)                   P1   38.15      56.4
                                    P2   41.01      56.8
                                    P3   40.5       60

Fine-tuning (10% labeled data)

                                         CIFAR 10   ImageNet Subset
  Fully supervised                       73.62      67.6
  SimCLR with minor modifications        42.21      49.2
  G-SimCLR (ours)                        43.1       56

where,

  • P1 denotes the feature backbone network plus the entire non-linear projection head minus its final layer
  • P2 denotes the feature backbone network plus the entire non-linear projection head minus its final two layers
  • P3 denotes the feature backbone network only
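As an illustration of the three protocols, here is a NumPy sketch with a hypothetical three-layer projection head; the layer widths and ReLU activations are stand-ins for illustration, not the head actually used in the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Stand-in weights: a backbone and a hypothetical 3-layer projection head.
rng = np.random.default_rng(0)
Wb = rng.normal(size=(128, 64))   # backbone f(.)
W1 = rng.normal(size=(64, 48))    # projection head g(.), layer 1
W2 = rng.normal(size=(48, 40))    # projection head g(.), layer 2
W3 = rng.normal(size=(40, 32))    # projection head g(.), final layer

def features(x, protocol):
    """Representation fed to the linear classifier under P1/P2/P3."""
    h = relu(x @ Wb)              # backbone features
    if protocol == "P3":          # backbone only
        return h
    a1 = relu(h @ W1)
    if protocol == "P2":          # head minus its final two layers
        return a1
    a2 = relu(a1 @ W2)
    if protocol == "P1":          # head minus its final layer
        return a2
    return relu(a2 @ W3)         # full head (used during pretraining)
```

The linear-evaluation classifier is then trained on `features(x, protocol)` with the backbone and head frozen; only the evaluation head's cut point changes between P1, P2, and P3.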