License: MIT
[ICML2020] "Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training" by Xuxi Chen, Wuyang Chen, Tianlong Chen, Ye Yuan, Chen Gong, Kewei Chen, Zhangyang Wang


Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training

[ICML2020] Xuxi Chen*, Wuyang Chen*, Tianlong Chen, Ye Yuan, Chen Gong, Kewei Chen, Zhangyang Wang

Overview

We propose the Self-PU framework, which introduces self-paced, self-calibrated and self-supervised learning to positive-unlabeled (PU) learning.

Method

  • Self-paced learning: we gradually select confident samples from the unlabeled set and assign pseudo-labels to them.
  • Self-calibrated learning: we find optimal weights for unlabeled samples to obtain additional sources of supervision.
  • Self-supervised learning: we fully exploit the models' learning ability through a teacher-student structure.
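The self-paced selection step above can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: `select_confident`, the distance-to-boundary confidence measure, and the `pace` parameter are all assumed names.

```python
import numpy as np

def select_confident(probs, pace):
    """Self-paced selection sketch: pick the `pace` unlabeled samples the
    model is most confident about and assign them pseudo-labels.
    `probs` are predicted positive-class probabilities."""
    confidence = np.abs(probs - 0.5)          # distance from the decision boundary
    chosen = np.argsort(-confidence)[:pace]   # most confident samples first
    pseudo_labels = (probs[chosen] >= 0.5).astype(int)
    return chosen, pseudo_labels

# toy example: five unlabeled samples, select the two most confident
probs = np.array([0.97, 0.52, 0.08, 0.45, 0.88])
idx, labels = select_confident(probs, pace=2)
```

In an actual training loop, `pace` would grow over epochs so that more unlabeled samples are promoted to the labeled pool as the model matures.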

(Figure: overview of the Self-PU framework.)

Set-up

Environment

conda install pytorch==0.4.1 cuda92 torchvision -c pytorch
conda install matplotlib scikit-learn tqdm
pip install opencv-python

Preparing Data

Download the CIFAR-10 dataset and extract it into cifar/.

Evaluation

Pretrained Model

MNIST: Google Drive, Accuracy: 94.45%

CIFAR-10: Google Drive, Accuracy: 90.05%

Evaluation Code

MNIST:

python evaluation.py --model mnist.pth.tar 

CIFAR-10:

python evaluation.py --model cifar.pth.tar --datapath cifar --dataset cifar

Training

Baseline

MNIST

python train.py --self-paced False --mean-teacher False 

CIFAR-10

python train.py --self-paced False --mean-teacher False --dataset cifar --datapath cifar
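PU baselines of this kind are commonly trained with a non-negative PU risk estimator (nnPU, Kiryo et al. 2017); a minimal NumPy sketch under that assumption (the function and variable names here are illustrative, not taken from train.py):

```python
import numpy as np

def sigmoid_loss(z):
    """Sigmoid surrogate loss on model scores."""
    return 1.0 / (1.0 + np.exp(z))

def nnpu_risk(scores_p, scores_u, prior):
    """Non-negative PU risk sketch.
    scores_p: model scores on labeled positives;
    scores_u: model scores on unlabeled data;
    prior:    class prior P(y = +1)."""
    risk_p_pos = prior * sigmoid_loss(scores_p).mean()    # positives labeled positive
    risk_p_neg = prior * sigmoid_loss(-scores_p).mean()   # positives labeled negative
    risk_u_neg = sigmoid_loss(-scores_u).mean()           # unlabeled labeled negative
    # clamp the estimated negative risk at zero to avoid overfitting
    return risk_p_pos + max(0.0, risk_u_neg - risk_p_neg)

r = nnpu_risk(np.array([2.0]), np.array([-2.0]), prior=0.5)
```

The `max(0.0, ...)` clamp is what distinguishes nnPU from the plain unbiased PU risk, which can go negative and let the model overfit.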

Self-PU (without self-calibration)

Training with self-calibration is expensive; the following is a cheaper alternative:

MNIST

python train_2s2t.py --soft-label

CIFAR-10

python train_2s2t.py --dataset cifar --datapath cifar --soft-label
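The teacher-student component used here can be sketched as a mean-teacher-style update, where the teacher's weights track an exponential moving average of the student's. This is an illustration of the general technique, not the repository's code; `alpha` and the flat parameter lists are assumptions.

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher-style update sketch: the teacher's parameters are an
    exponential moving average of the student's parameters."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

# toy example with two scalar "weights" and a fast-moving average
teacher = ema_update([1.0, -1.0], [0.0, 0.0], alpha=0.9)
```

The slowly moving teacher then provides consistency targets for the student on unlabeled data.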

Self-PU

MNIST

python train_2s2t_mix.py --soft-label

CIFAR-10

python train_2s2t_mix.py --dataset cifar --datapath cifar --soft-label

Reproduce

| Seed | Accuracy on MNIST | Accuracy on CIFAR-10 |
|------|-------------------|----------------------|
| 3    | 93.87%            | 89.68%               |
| 13   | 94.68%            | 90.15%               |
| 23   | 94.44%            | 89.38%               |
| 33   | 93.84%            | 89.69%               |
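As a quick sanity check, the mean accuracy over the four seeds can be computed directly from the table:

```python
# Accuracies from the table above (seeds 3, 13, 23, 33)
mnist = [93.87, 94.68, 94.44, 93.84]
cifar = [89.68, 90.15, 89.38, 89.69]

mnist_mean = sum(mnist) / len(mnist)   # ≈ 94.21%
cifar_mean = sum(cifar) / len(cifar)   # ≈ 89.73%
```

These averages are consistent with the pretrained-model accuracies reported above (94.45% on MNIST, 90.05% on CIFAR-10).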