Pytorch Metric Learning: The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
Stars: ✭ 3,936 (+5604.35%)
DisCont: Code for the paper "DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors".
Stars: ✭ 13 (-81.16%)
object-aware-contrastive: Object-aware Contrastive Learning for Debiased Scene Representation (NeurIPS 2021)
Stars: ✭ 44 (-36.23%)
SoCo: [NeurIPS 2021 Spotlight] Aligning Pretraining for Detection via Object-Level Contrastive Learning
Stars: ✭ 125 (+81.16%)
S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021)
Stars: ✭ 53 (-23.19%)
TCE: Code implementation for the paper "Temporally Coherent Embeddings for Self-Supervised Video Representation Learning" (TCE).
Stars: ✭ 51 (-26.09%)
info-nce-pytorch: PyTorch implementation of the InfoNCE loss for self-supervised learning.
Stars: ✭ 160 (+131.88%)
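As a rough illustration of what the InfoNCE loss computes (a minimal NumPy sketch of the standard formulation, not the repo's actual API — function and argument names here are illustrative), each query embedding is scored against a batch of candidates and trained to pick out its own positive:

```python
import numpy as np

def info_nce(query, positive, temperature=0.1):
    # query, positive: (N, D) arrays; row i of `positive` is the positive
    # for row i of `query`, and every other row acts as a negative.
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    logits = q @ p.T / temperature                    # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # cross-entropy on the diagonal
```

Lowering the temperature sharpens the softmax, so well-aligned positives drive the loss toward zero while mismatched pairs keep it near log N.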
ViCC: [WACV'22] Code repository for the paper "Self-supervised Video Representation Learning with Cross-Stream Prototypical Contrasting", https://arxiv.org/abs/2106.10137.
Stars: ✭ 33 (-52.17%)
PIC: Parametric Instance Classification for Unsupervised Visual Feature Learning, NeurIPS 2020
Stars: ✭ 41 (-40.58%)
GCL: List of Publications in Graph Contrastive Learning
Stars: ✭ 25 (-63.77%)
G-SimCLR: Code base for the paper "G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling" by Souradip Chakraborty, Aritra Roy Gosthipaty, and Sayak Paul.
Stars: ✭ 69 (+0%)
SCL: 📄 Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).
Stars: ✭ 42 (-39.13%)
GRACE: [GRL+ @ ICML 2020] PyTorch implementation of "Deep Graph Contrastive Representation Learning" (https://arxiv.org/abs/2006.04131v2)
Stars: ✭ 144 (+108.7%)
simclr-pytorch: PyTorch implementation of SimCLR; supports multi-GPU training and closely reproduces results
Stars: ✭ 89 (+28.99%)
GeDML: Generalized Deep Metric Learning.
Stars: ✭ 30 (-56.52%)
Revisiting-Contrastive-SSL: Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [NeurIPS 2021]
Stars: ✭ 81 (+17.39%)
CLSA: Official implementation of "Contrastive Learning with Stronger Augmentations"
Stars: ✭ 48 (-30.43%)
3DInfomax: Making self-supervised learning work on molecules by using their 3D geometry to pre-train GNNs. Implemented in DGL and PyTorch Geometric.
Stars: ✭ 107 (+55.07%)
CLMR: Official PyTorch implementation of Contrastive Learning of Musical Representations
Stars: ✭ 216 (+213.04%)
Simclr: SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
Stars: ✭ 2,720 (+3842.03%)
awesome-efficient-gnn: Code and resources on scalable and efficient Graph Neural Networks
Stars: ✭ 498 (+621.74%)
AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries
Stars: ✭ 148 (+114.49%)
CVC: Contrastive Learning for Non-parallel Voice Conversion (INTERSPEECH 2021, in PyTorch)
Stars: ✭ 45 (-34.78%)
SimCLR: PyTorch implementation of "A Simple Framework for Contrastive Learning of Visual Representations"
Stars: ✭ 65 (-5.8%)
pillar-motion: Self-Supervised Pillar Motion Learning for Autonomous Driving (CVPR 2021)
Stars: ✭ 98 (+42.03%)
DiGCL: The PyTorch implementation of Directed Graph Contrastive Learning (DiGCL), NeurIPS 2021
Stars: ✭ 27 (-60.87%)
BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
Stars: ✭ 102 (+47.83%)
Scon-ABSA: [CIKM 2021] Enhancing Aspect-Based Sentiment Analysis with Supervised Contrastive Learning
Stars: ✭ 17 (-75.36%)
MOON: Model-Contrastive Federated Learning (CVPR 2021)
Stars: ✭ 93 (+34.78%)
SelfSupervisedLearning-DSM: Code for the AAAI 2021 paper "Enhancing Unsupervised Video Representation Learning by Decoupling the Scene and the Motion"
Stars: ✭ 26 (-62.32%)
CVPR21 PASS: PyTorch implementation of our CVPR 2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Stars: ✭ 55 (-20.29%)
VarCLR: Variable Semantic Representation Pre-training via Contrastive Learning
Stars: ✭ 30 (-56.52%)
mmselfsup: OpenMMLab Self-Supervised Learning Toolbox and Benchmark
Stars: ✭ 2,315 (+3255.07%)
contrastive loss: Experiments with supervised contrastive learning methods with different loss functions
Stars: ✭ 143 (+107.25%)
MSF: Official code for "Mean Shift for Self-Supervised Learning"
Stars: ✭ 42 (-39.13%)
GNN-Recommender-Systems: An index of recommendation algorithms that are based on Graph Neural Networks.
Stars: ✭ 505 (+631.88%)
self6dpp: Self6D++: Occlusion-Aware Self-Supervised Monocular 6D Object Pose Estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
Stars: ✭ 45 (-34.78%)
BossNAS: (ICCV 2021) Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Stars: ✭ 125 (+81.16%)
latent-pose-reenactment: The authors' implementation of the "Neural Head Reenactment with Latent Pose Descriptors" (CVPR 2020) paper.
Stars: ✭ 132 (+91.3%)
point-cloud-prediction: Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks
Stars: ✭ 97 (+40.58%)
cl-ica: Code for the paper "Contrastive Learning Inverts the Data Generating Process".
Stars: ✭ 65 (-5.8%)
FKD: A Fast Knowledge Distillation Framework for Visual Recognition
Stars: ✭ 49 (-28.99%)
gnn-lspe: Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Stars: ✭ 165 (+139.13%)
SimSiam: Exploring Simple Siamese Representation Learning
Stars: ✭ 52 (-24.64%)
simsiam-cifar10: Code to train the SimSiam model on CIFAR-10 using PyTorch
Stars: ✭ 33 (-52.17%)
lossyless: Generic image compressor for machine learning. PyTorch code for our paper "Lossy compression for lossless prediction".
Stars: ✭ 81 (+17.39%)
video repres mas: Code for the CVPR 2019 paper "Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics"
Stars: ✭ 63 (-8.7%)
newt: Natural World Tasks
Stars: ✭ 24 (-65.22%)
sesemi: Supervised and semi-supervised image classification with self-supervision (Keras)
Stars: ✭ 43 (-37.68%)
mirror-bert: [EMNLP 2021] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels.
Stars: ✭ 56 (-18.84%)
barlowtwins: Implementation of the Barlow Twins paper
Stars: ✭ 84 (+21.74%)
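To sketch the idea behind the Barlow Twins objective (a NumPy approximation of the loss described in the paper; the function name and the `lam` weight are illustrative, not the repo's API): the cross-correlation matrix between two standardized views of the same batch is pushed toward the identity, so matching dimensions agree while distinct dimensions decorrelate:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    # z1, z2: (N, D) embeddings of two augmented views of the same batch.
    z1 = (z1 - z1.mean(axis=0)) / z1.std(axis=0)   # standardize per dimension
    z2 = (z2 - z2.mean(axis=0)) / z2.std(axis=0)
    n = z1.shape[0]
    c = z1.T @ z2 / n                              # (D, D) cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()      # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy reduction
    return on_diag + lam * off_diag
```

Because neither term needs negative pairs, the loss avoids the large batch sizes that contrastive objectives like InfoNCE typically require.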
MiniVox: Code for our ACML and INTERSPEECH papers: "Speaker Diarization as a Fully Online Bandit Learning Problem in MiniVox".
Stars: ✭ 15 (-78.26%)
SubGNN: Subgraph Neural Networks (NeurIPS 2020)
Stars: ✭ 136 (+97.1%)