ContrastToDivide / C2D

License: MIT
PyTorch implementation of "Contrast to Divide: self-supervised pre-training for learning with noisy labels"

Projects that are alternatives to or similar to C2D

srVAE
VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-5.08%)
Mutual labels:  cifar-10
CIFAR10-VGG19-Tensorflow
No description or website provided.
Stars: ✭ 27 (-54.24%)
Mutual labels:  cifar-10
WhiteBox-Part1
In this part, I've introduced and experimented with ways to interpret and evaluate models in the image domain. (PyTorch)
Stars: ✭ 34 (-42.37%)
Mutual labels:  cifar-10
cifar10
Predict CIFAR-10 labels with 88% accuracy using keras.
Stars: ✭ 32 (-45.76%)
Mutual labels:  cifar-10
cifar-tensorflow
No description or website provided.
Stars: ✭ 18 (-69.49%)
Mutual labels:  cifar-10
DenseNet-Cifar10
Train DenseNet on CIFAR-10 with Keras
Stars: ✭ 39 (-33.9%)
Mutual labels:  cifar-10
NIPS-Global-Paper-Implementation-Challenge
Selective Classification For Deep Neural Networks.
Stars: ✭ 11 (-81.36%)
Mutual labels:  cifar-10
pcdarts-tf2
PC-DARTS (Partial Channel Connections for Memory-Efficient Differentiable Architecture Search, ICLR 2020) implemented in TensorFlow 2.0+. This is an unofficial implementation.
Stars: ✭ 25 (-57.63%)
Mutual labels:  cifar-10
dynamic-routing-capsule-cifar
CapsNet reference from: https://github.com/XifengGuo/CapsNet-Keras
Stars: ✭ 34 (-42.37%)
Mutual labels:  cifar-10
numpy-cnn
A NumPy-based CNN implementation for classifying images
Stars: ✭ 47 (-20.34%)
Mutual labels:  cifar-10
AlexNet
AlexNet model from ILSVRC 2012
Stars: ✭ 35 (-40.68%)
Mutual labels:  cifar-10
gans-2.0
Generative Adversarial Networks in TensorFlow 2.0
Stars: ✭ 76 (+28.81%)
Mutual labels:  cifar-10
Machine-Learning-Notebooks
15+ Machine/Deep Learning Projects in IPython Notebooks
Stars: ✭ 66 (+11.86%)
Mutual labels:  cifar-10
NLNL-Negative-Learning-for-Noisy-Labels
NLNL: Negative Learning for Noisy Labels
Stars: ✭ 70 (+18.64%)
Mutual labels:  noisy-labels
IDN
AAAI 2021: Beyond Class-Conditional Assumption: A Primary Attempt to Combat Instance-Dependent Label Noise
Stars: ✭ 21 (-64.41%)
Mutual labels:  noisy-labels
noisy label understanding utilizing
ICML 2019: Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels
Stars: ✭ 82 (+38.98%)
Mutual labels:  noisy-labels
ProSelfLC-2021
noisy labels; missing labels; semi-supervised learning; entropy; uncertainty; robustness and generalisation.
Stars: ✭ 45 (-23.73%)
Mutual labels:  noisy-labels
Active-Passive-Losses
[ICML2020] Normalized Loss Functions for Deep Learning with Noisy Labels
Stars: ✭ 92 (+55.93%)
Mutual labels:  noisy-labels
Advances-in-Label-Noise-Learning
A curated (most recent) list of resources for Learning with Noisy Labels
Stars: ✭ 360 (+510.17%)
Mutual labels:  noisy-labels
Noisy-Labels-with-Bootstrapping
Keras implementation of Training Deep Neural Networks on Noisy Labels with Bootstrapping, Reed et al. 2015
Stars: ✭ 22 (-62.71%)
Mutual labels:  noisy-labels

Contrast to Divide: self-supervised pre-training for learning with noisy labels

This is an official implementation of "Contrast to Divide: self-supervised pre-training for learning with noisy labels". The code is based on the DivideMix implementation.
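For context: DivideMix, which C2D builds on, separates clean from noisy samples by fitting a two-component Gaussian mixture model to the per-sample training losses and thresholding the posterior probability of the low-loss ("clean") component. Below is a minimal sketch of that division step, assuming scikit-learn; the function name and hyperparameters are illustrative, not the repository's API:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def divide_by_loss(losses, p_threshold=0.5):
    """Split samples into clean/noisy by fitting a 2-component GMM
    to per-sample losses (losses: array of shape (N,))."""
    losses = np.asarray(losses, dtype=np.float64)
    # Normalize losses to [0, 1] for a better-conditioned fit.
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=10, tol=1e-2, reg_covar=5e-4)
    gmm.fit(losses.reshape(-1, 1))
    probs = gmm.predict_proba(losses.reshape(-1, 1))
    # The component with the smaller mean loss models the clean samples.
    clean_prob = probs[:, gmm.means_.argmin()]
    return clean_prob > p_threshold  # boolean mask of "clean" samples
```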

Results

The following tables (shown as images in the repository) summarize the main results of the paper:

CIFAR-10 results
CIFAR-100 results
Clothing1M results
mini-WebVision results

Running the code

First, install the dependencies by running `pip install -r requirements.txt`.

You can download pretrained self-supervised models from Google Drive. Alternatively, you can train them yourself using a SimCLR implementation. Put them into the ./pretrained folder.
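If you train the models yourself, the core of SimCLR pre-training is the NT-Xent contrastive loss. Here is a minimal PyTorch sketch of that loss, not the repository's code; the function name and tensor shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss used by SimCLR. z1 and z2 are the projection-head
    outputs for two augmented views of the same batch, shape (N, d)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    # The positive for row i is the other view of the same image:
    # i + N for the first half, i - N for the second half.
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```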

Then you can run the code for CIFAR:

```
python3 main_cifar.py --r 0.8 --lambda_u 500 --dataset cifar100 --p_threshold 0.03 --data_path ./cifar-100 --experiment-name simclr_resnet18 --method selfsup --net resnet50
```
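Here, following DivideMix's conventions, --r sets the simulated noise ratio (0.8 = 80% label noise), --lambda_u weights the unsupervised loss term, and --p_threshold is the clean-probability threshold used to divide the dataset.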

for Clothing1M:

```
python3 main_clothing1M.py --data_path /path/to/clothing1m --experiment-name selfsup --method selfsup --p_threshold 0.7 --warmup 5 --num_epochs 120
```

or for mini-WebVision:

```
python3 Train_webvision.py --p_threshold 0.03 --num_class 50 --data_path /path/to/webvision --imagenet_data_path /path/to/imagenet --method selfsup
```

To run C2D with ELR+, just use the self-supervised pretrained models with the original ELR+ code.
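As a hedged illustration, loading such a checkpoint into a backbone typically looks like the sketch below; the checkpoint filename and key layout are assumptions, so adapt them to the files you actually downloaded or trained:

```python
import torch
from torchvision.models import resnet50

# Hypothetical checkpoint path; adjust to the file in ./pretrained.
ckpt = torch.load('./pretrained/simclr_resnet50.pth', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)  # some checkpoints nest weights under 'state_dict'

# Strip a possible DataParallel 'module.' prefix from the keys.
state_dict = {k.replace('module.', '', 1): v for k, v in state_dict.items()}

model = resnet50(num_classes=100)  # e.g. CIFAR-100
# strict=False: the classifier head is newly initialized and the
# SimCLR projection head (if saved) is simply ignored.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing keys:', missing)        # typically fc.weight, fc.bias
print('unexpected keys:', unexpected)  # typically projection-head weights
```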

License

This project is licensed under the terms of the MIT license.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].