
sayakpaul / Supervised-Contrastive-Learning-in-TensorFlow-2

Licence: other
Implements the ideas presented in https://arxiv.org/pdf/2004.11362v1.pdf by Khosla et al.

Programming Languages

Jupyter Notebook (11667 projects)

Projects that are alternatives of or similar to Supervised-Contrastive-Learning-in-TensorFlow-2

object-aware-contrastive
Object-aware Contrastive Learning for Debiased Scene Representation (NeurIPS 2021)
Stars: ✭ 44 (-62.39%)
Mutual labels:  representation-learning, contrastive-learning
Simclr
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
Stars: ✭ 2,720 (+2224.79%)
Mutual labels:  representation-learning, contrastive-learning
Parametric-Contrastive-Learning
Parametric Contrastive Learning (ICCV2021)
Stars: ✭ 155 (+32.48%)
Mutual labels:  contrastive-learning, supervised-contrastive-learning
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (-30.77%)
Mutual labels:  representation-learning, contrastive-learning
TCE
This repository contains the code implementation used in the paper Temporally Coherent Embeddings for Self-Supervised Video Representation Learning (TCE).
Stars: ✭ 51 (-56.41%)
Mutual labels:  representation-learning, contrastive-learning
simclr-pytorch
PyTorch implementation of SimCLR: supports multi-GPU training and closely reproduces results
Stars: ✭ 89 (-23.93%)
Mutual labels:  representation-learning, contrastive-learning
SimCLR
Pytorch implementation of "A Simple Framework for Contrastive Learning of Visual Representations"
Stars: ✭ 65 (-44.44%)
Mutual labels:  representation-learning, contrastive-learning
COCO-LM
[NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Stars: ✭ 109 (-6.84%)
Mutual labels:  representation-learning, contrastive-learning
Vae vampprior
Code for the paper "VAE with a VampPrior", J.M. Tomczak & M. Welling
Stars: ✭ 173 (+47.86%)
Mutual labels:  representation-learning
Contrastive Predictive Coding Pytorch
Contrastive Predictive Coding for Automatic Speaker Verification
Stars: ✭ 223 (+90.6%)
Mutual labels:  representation-learning
Stylealign
[ICCV 2019]Aggregation via Separation: Boosting Facial Landmark Detector with Semi-Supervised Style Transition
Stars: ✭ 172 (+47.01%)
Mutual labels:  representation-learning
Link Prediction
Representation learning for link prediction within social networks
Stars: ✭ 245 (+109.4%)
Mutual labels:  representation-learning
Deformable Kernels
Deforming kernels to adapt towards object deformation. In ICLR 2020.
Stars: ✭ 166 (+41.88%)
Mutual labels:  representation-learning
Jodie
A PyTorch implementation of ACM SIGKDD 2019 paper "Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks"
Stars: ✭ 172 (+47.01%)
Mutual labels:  representation-learning
SCL
📄 Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).
Stars: ✭ 42 (-64.1%)
Mutual labels:  contrastive-learning
Awesome Visual Representation Learning With Transformers
Awesome Transformers (self-attention) in Computer Vision
Stars: ✭ 166 (+41.88%)
Mutual labels:  representation-learning
DESOM
🌐 Deep Embedded Self-Organizing Map: Joint Representation Learning and Self-Organization
Stars: ✭ 76 (-35.04%)
Mutual labels:  representation-learning
ConDigSum
Code for EMNLP 2021 paper "Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization"
Stars: ✭ 62 (-47.01%)
Mutual labels:  contrastive-learning
Paddlehelix
Bio-Computing Platform featuring Large-Scale Representation Learning and Multi-Task Deep Learning “螺旋桨”生物计算工具集
Stars: ✭ 213 (+82.05%)
Mutual labels:  representation-learning
Pytorch Byol
PyTorch implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
Stars: ✭ 213 (+82.05%)
Mutual labels:  representation-learning

Supervised-Contrastive-Learning-in-TensorFlow-2

(Collaboratively done by Shweta Shaw and myself)

Implements the ideas presented in Supervised Contrastive Learning by Khosla et al. The authors propose a two-stage framework that enhances the performance of image classifiers and also achieves SoTA results: in the first stage, an encoder and a projection head are trained with a supervised contrastive loss; in the second stage, a classifier is trained on top of the learned encoder.

(Figures gathered from the paper)
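For a concrete picture of stage one, below is a minimal TensorFlow sketch of the supervised contrastive loss described in the paper. It assumes integer class labels and a batch of projection vectors, and it is a simplified reference rather than the exact code used in the notebooks.

import tensorflow as tf

def supervised_contrastive_loss(labels, projections, temperature=0.07):
    # Minimal sketch of the multi-positive contrastive loss (Khosla et al.).
    # `projections`: (batch, dim) projection-head outputs; `labels`: (batch,) integer class ids.
    projections = tf.math.l2_normalize(projections, axis=1)
    logits = tf.matmul(projections, projections, transpose_b=True) / temperature

    # Positives: samples sharing a label, excluding each sample's comparison with itself.
    labels = tf.reshape(labels, (-1, 1))
    positive_mask = tf.cast(tf.equal(labels, tf.transpose(labels)), tf.float32)
    self_mask = tf.eye(tf.shape(labels)[0])
    positive_mask -= self_mask

    # Log-softmax over all other samples (the anchor itself is excluded from the denominator).
    exp_logits = tf.exp(logits) * (1.0 - self_mask)
    log_prob = logits - tf.math.log(tf.reduce_sum(exp_logits, axis=1, keepdims=True))

    # Average the log-probability over each anchor's positives and negate.
    num_positives = tf.maximum(tf.reduce_sum(positive_mask, axis=1), 1.0)
    mean_log_prob_pos = tf.reduce_sum(positive_mask * log_prob, axis=1) / num_positives
    return -tf.reduce_mean(mean_log_prob_pos)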

A detailed discussion of the paper and the results of our experiments are available in this report.

This repository consists of the notebooks (runnable on Colab) showing the experiments we have done.

Acknowledgements

About the notebooks

├── Flowers
│   ├── Contrastive_Training_Flowers.ipynb
│   ├── Contrastive_Training_Flowers_Augmentation.ipynb
│   ├── Fully_Supervised_Training_Flowers.ipynb
│   └── Fully_Supervised_Training_Flowers_Augmentation.ipynb
├── ImageNet_Subset
│   ├── Contrastive_Training_Imagenet_subset_Adam.ipynb
│   ├── Contrastive_Training_Imagenet_subset_RMSprop.ipynb
│   ├── Contrastive_Training_Imagenet_subset_SGD.ipynb
│   ├── Fully_Supervised_Training_IMGNET_subset_Adam.ipynb
│   ├── Fully_Supervised_Training_IMGNET_subset_RMSprop.ipynb
│   └── Fully_Supervised_Training_IMGNET_subset_SGD.ipynb
├── Pets
│   ├── Contrastive_Training_Pets.ipynb
│   └── Fully_Supervised_Training_Pets.ipynb
├── Visualization_ImageNet_subset.ipynb
├── Visualization_Pets.ipynb
  • Contrastive_Training_*.ipynb notebooks show the supervised contrastive framework proposed in the paper (a minimal sketch of this two-stage recipe follows this list).
  • Fully_Supervised_Training_*.ipynb notebooks show typical fully supervised training on the different datasets.
  • Visualization_*.ipynb notebooks show visualizations of the embeddings learned by the supervised contrastive learning framework.
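
To make the two-stage recipe behind the Contrastive_Training_*.ipynb notebooks concrete, here is a hedged sketch: stage one trains an encoder plus a small projection head with the supervised contrastive loss sketched earlier, and stage two freezes the encoder and trains a classifier head with cross-entropy. The ResNet50 backbone, layer sizes, and five-class output below are illustrative assumptions, not necessarily what the notebooks use.

import tensorflow as tf

# Illustrative backbone; the notebooks may use a different architecture or pretrained weights.
encoder = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(224, 224, 3), pooling="avg")

# Stage 1: encoder + projection head, trained with the supervised contrastive loss.
projection_head = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128),
], name="projection_head")

inputs = tf.keras.Input((224, 224, 3))
stage1_model = tf.keras.Model(inputs, projection_head(encoder(inputs)))
stage1_model.compile(optimizer="adam", loss=supervised_contrastive_loss)
# stage1_model.fit(train_ds, epochs=50)

# Stage 2: freeze the encoder and train a classifier head with cross-entropy.
encoder.trainable = False
outputs = tf.keras.layers.Dense(5, activation="softmax")(encoder(inputs))  # 5 classes assumed
stage2_model = tf.keras.Model(inputs, outputs)
stage2_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
# stage2_model.fit(train_ds, epochs=20)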

About the datasets

We experimented with three datasets, matching the directory structure above: the Flowers dataset, the Pets dataset, and a subset of ImageNet.

Things to note

  • The authors used AutoAugment in the paper. However, we used simple augmentation operations, which worked well for the datasets we tried. Note that there is no augmentation for the Pets dataset, as we got pretty good results on it even without any data augmentation.
  • The LARS optimizer was used in the paper; we used Adam instead. We have also shown the effect of other optimizers, such as SGD and RMSprop, along with learning rate schedules (a rough sketch of this kind of setup follows this list).
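
As a rough illustration of the kind of augmentation pipeline and optimizer/learning-rate-schedule choices mentioned above (the exact operations and hyperparameters in the notebooks may differ), a tf.data setup could look like this:

import tensorflow as tf

def augment(image, label):
    # Simple augmentations in the spirit of what we used; values are illustrative.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.resize(image, (260, 260))
    image = tf.image.random_crop(image, (224, 224, 3))
    return image, label

# Assuming `train_ds` yields unbatched (image, label) pairs:
# train_ds = (train_ds
#             .map(augment, num_parallel_calls=tf.data.experimental.AUTOTUNE)
#             .batch(64)
#             .prefetch(tf.data.experimental.AUTOTUNE))

# An optimizer paired with a learning rate schedule, e.g. exponential decay.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
# Swap in SGD or RMSprop to reproduce the optimizer comparison:
# optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)
# optimizer = tf.keras.optimizers.RMSprop(learning_rate=schedule)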

Results

The above plots are from the experiments conducted on the Pets dataset. More results from the other two datasets are discussed in the report mentioned above and can be found here: https://app.wandb.ai/authors/scl.

Visualization of the embeddings learned by supervised contrastive learning
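
The plots themselves live in the Visualization_*.ipynb notebooks. As a rough sketch of one way to produce such a plot (not necessarily the exact procedure used there), embeddings from the trained encoder can be projected to 2-D with t-SNE and colored by class:

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings(embeddings, labels):
    # Project (num_samples, dim) embeddings to 2-D with t-SNE and color by class id.
    embeddings_2d = TSNE(n_components=2).fit_transform(embeddings)
    plt.figure(figsize=(8, 6))
    plt.scatter(embeddings_2d[:, 0], embeddings_2d[:, 1], c=labels, cmap="tab10", s=8)
    plt.colorbar()
    plt.title("t-SNE of encoder embeddings")
    plt.show()

# Usage (illustrative): `encoder` is the contrastively trained encoder,
# `val_images`/`val_labels` a held-out batch.
# plot_embeddings(encoder.predict(val_images), val_labels)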

About executing the notebooks

If you go to any of the notebooks listed in the repository and use an extension like "Open notebook in Google Colab" to open it, you should be able to run the experiments right off the bat.

About the library versions

At the time of performing the experiments, we used TensorFlow 2.2; we did not pin the versions of the other libraries. All of our experiments were performed on Google Colab.
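
If Colab ships a newer TensorFlow by the time you run the notebooks, you may want to pin the version first. A minimal setup cell (an assumption on our part, not something the notebooks currently include) could be:

# Colab setup cell (illustrative): pin TensorFlow to the version we used.
# !pip install -q tensorflow==2.2.0

import tensorflow as tf
print(tf.__version__)  # the experiments were run against TensorFlow 2.2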

Feedback

Via GitHub issues
