
sarthak268 / DisCont

License: MIT
Code for the paper "DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors".

Programming Languages

python

Projects that are alternatives of or similar to DisCont

TCE
This repository contains the code implementation used in the paper Temporally Coherent Embeddings for Self-Supervised Video Representation Learning (TCE).
Stars: ✭ 51 (+292.31%)
Mutual labels:  contrastive-loss, self-supervised-learning, contrastive-learning
S2-BNN
S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021)
Stars: ✭ 53 (+307.69%)
Mutual labels:  contrastive-loss, self-supervised-learning, contrastive-learning
info-nce-pytorch
PyTorch implementation of the InfoNCE loss for self-supervised learning.
Stars: ✭ 160 (+1130.77%)
Mutual labels:  contrastive-loss, self-supervised-learning, contrastive-learning
object-aware-contrastive
Object-aware Contrastive Learning for Debiased Scene Representation (NeurIPS 2021)
Stars: ✭ 44 (+238.46%)
Mutual labels:  self-supervised-learning, contrastive-learning
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (+523.08%)
Mutual labels:  self-supervised-learning, contrastive-learning
simclr-pytorch
PyTorch implementation of SimCLR: supports multi-GPU training and closely reproduces results
Stars: ✭ 89 (+584.62%)
Mutual labels:  self-supervised-learning, contrastive-learning
Pytorch Metric Learning
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
Stars: ✭ 3,936 (+30176.92%)
Mutual labels:  self-supervised-learning, contrastive-learning
Simclr
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
Stars: ✭ 2,720 (+20823.08%)
Mutual labels:  self-supervised-learning, contrastive-learning
CLSA
Official implementation of "Contrastive Learning with Stronger Augmentations"
Stars: ✭ 48 (+269.23%)
Mutual labels:  self-supervised-learning, contrastive-learning
CONTRIQUE
Official implementation for "Image Quality Assessment using Contrastive Learning"
Stars: ✭ 33 (+153.85%)
Mutual labels:  contrastive-loss, contrastive-learning
linguistic-style-transfer-pytorch
Implementation of "Disentangled Representation Learning for Non-Parallel Text Style Transfer" (ACL 2019) in PyTorch
Stars: ✭ 55 (+323.08%)
Mutual labels:  disentanglement, disentangled-representations
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (+215.38%)
Mutual labels:  disentanglement, disentangled-representations
GCA
[WWW 2021] Source code for "Graph Contrastive Learning with Adaptive Augmentation"
Stars: ✭ 69 (+430.77%)
Mutual labels:  self-supervised-learning, contrastive-learning
PIC
Parametric Instance Classification for Unsupervised Visual Feature Learning, NeurIPS 2020
Stars: ✭ 41 (+215.38%)
Mutual labels:  self-supervised-learning, contrastive-learning
AdCo
AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries
Stars: ✭ 148 (+1038.46%)
Mutual labels:  self-supervised-learning, contrastive-learning
G-SimCLR
This is the code base for the paper "G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling" by Souradip Chakraborty, Aritra Roy Gosthipaty and Sayak Paul.
Stars: ✭ 69 (+430.77%)
Mutual labels:  self-supervised-learning, contrastive-learning
GCL
List of Publications in Graph Contrastive Learning
Stars: ✭ 25 (+92.31%)
Mutual labels:  self-supervised-learning, contrastive-learning
SoCo
[NeurIPS 2021 Spotlight] Aligning Pretraining for Detection via Object-Level Contrastive Learning
Stars: ✭ 125 (+861.54%)
Mutual labels:  self-supervised-learning, contrastive-learning
SCL
📄 Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).
Stars: ✭ 42 (+223.08%)
Mutual labels:  self-supervised-learning, contrastive-learning
disent
🧶 Modular VAE disentanglement framework for python built with PyTorch Lightning ▸ Including metrics and datasets ▸ With strongly supervised, weakly supervised and unsupervised methods ▸ Easily configured and run with Hydra config ▸ Inspired by disentanglement_lib
Stars: ✭ 41 (+215.38%)
Mutual labels:  disentanglement, disentangled-representations

DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors

Paper

This repository contains code for the paper DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors. Video available here.

Abstract

Disentangling the underlying feature attributes within an image with no prior supervision is a challenging task. Models that can disentangle attributes well provide greater interpretability and control. In this paper, we propose a self-supervised framework DisCont to disentangle multiple attributes by exploiting the structural inductive biases within images. Motivated by the recent surge in contrastive learning paradigms, our model bridges the gap between self-supervised contrastive learning algorithms and unsupervised disentanglement. We evaluate the efficacy of our approach, both qualitatively and quantitatively, on four benchmark datasets.

(Figure: overview of the DisCont training pipeline.)

In case you find this work useful, consider citing:

@article{Bhagat2020DisContSV,
  title={DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors},
  author={Sarthak Bhagat and Vishaal Udandarao and Shagun Uppal},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.05895}
}

Installing Dependencies

To install the required libraries, clone our repository and run the following command:

pip install -r requirements.txt
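
End to end, the setup might look like the following (the GitHub path is inferred from the repository name above):

git clone https://github.com/sarthak268/DisCont.git
cd DisCont
pip install -r requirements.txt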

Preparing Data

In our paper, we evaluate the efficacy of our approach on four publicly available benchmark datasets. Download any one of these datasets and place it in a dedicated data directory before starting training.
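
This README does not prescribe a layout; as one hypothetical arrangement (the directory names are our own), the data could sit alongside the training script, with the data path in the training script adjusted accordingly:

DisCont/
├── data/
│   └── <dataset_name>/   # one of the downloaded benchmark datasets, extracted
├── train.py
└── requirements.txt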

Training

Begin training the DisCont model by running the following command.

python train.py

Customize training by varying the latent structure using the following flags; an example invocation appears after the list.

--z_chunk_size             Dimension of each Latent Chunk
--z_num_chunks             Number of Latent Chunks
--c_chunk_size             Dimension of each Context Vector Chunk
--c_num_chunks             Number of Context Vector Chunks
--num_specified_chunks     Number of Specified Chunks in the Latent Space
--num_unspecified_chunks   Number of Unspecified Chunks in the Latent Space
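
For example, a run that overrides the latent structure might look like this (the flag names come from the list above; the values are purely illustrative):

python train.py --z_num_chunks 4 --z_chunk_size 16 --c_num_chunks 4 --c_chunk_size 16 --num_specified_chunks 3 --num_unspecified_chunks 1

Conceptually, these flags partition the latent vector into equal-sized chunks, some of which are "specified" (attribute-carrying) while the rest are "unspecified". A minimal PyTorch sketch of that partitioning, under our own reading of the flags rather than code from this repository:

import torch

# Illustrative values matching the example flags above; repository defaults may differ.
batch_size = 8
z_num_chunks, z_chunk_size = 4, 16
num_specified_chunks = 3

z = torch.randn(batch_size, z_num_chunks * z_chunk_size)  # batch of latent vectors
chunks = z.view(batch_size, z_num_chunks, z_chunk_size)   # split into equal chunks
specified = chunks[:, :num_specified_chunks]              # specified (attribute) chunks
unspecified = chunks[:, num_specified_chunks:]            # unspecified (residual) chunks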

Evaluation

To evaluate the trained model using feature swapping, run the following command.

python style_transfer.py

To plot the latent space visualizations, run the following command.

python latent_visualization.py

Contact

If you face any problems running this code, you can contact us at {sarthak16189, vishaal16119, shagun16088}@iiitd.ac.in.

License

Copyright (c) 2020 Sarthak Bhagat, Vishaal Udandarao, Shagun Uppal.

For license information, see LICENSE or http://mit-license.org
