amazon-research / exponential-moving-average-normalization

License: other
PyTorch implementation of EMAN for self-supervised and semi-supervised learning: https://arxiv.org/abs/2101.08482

Projects that are alternatives of or similar to exponential-moving-average-normalization

SSL CR Histo
Official code for "Self-Supervised driven Consistency Training for Annotation Efficient Histopathology Image Analysis", published in the Medical Image Analysis (MedIA) journal, Oct. 2021.
Stars: ✭ 32 (-57.89%)
Mutual labels:  semi-supervised-learning, self-supervised-learning
improving segmentation with selfsupervised depth
[CVPR21] Implementation of our work "Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation"
Stars: ✭ 189 (+148.68%)
Mutual labels:  semi-supervised-learning, self-supervised-learning
sesemi
supervised and semi-supervised image classification with self-supervision (Keras)
Stars: ✭ 43 (-43.42%)
Mutual labels:  semi-supervised-learning, self-supervised-learning
SHOT-plus
code for our TPAMI 2021 paper "Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer"
Stars: ✭ 46 (-39.47%)
Mutual labels:  semi-supervised-learning, self-supervised-learning
semi-supervised-paper-implementation
Reproduce some methods in semi-supervised papers.
Stars: ✭ 35 (-53.95%)
Mutual labels:  semi-supervised-learning
libai
LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training
Stars: ✭ 284 (+273.68%)
Mutual labels:  self-supervised-learning
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (+6.58%)
Mutual labels:  self-supervised-learning
pywsl
Python codes for weakly-supervised learning
Stars: ✭ 118 (+55.26%)
Mutual labels:  semi-supervised-learning
SimPLE
Code for the paper: "SimPLE: Similar Pseudo Label Exploitation for Semi-Supervised Classification"
Stars: ✭ 50 (-34.21%)
Mutual labels:  semi-supervised-learning
sinkhorn-label-allocation
Sinkhorn Label Allocation is a label assignment method for semi-supervised self-training algorithms. The SLA algorithm is described in full in this ICML 2021 paper: https://arxiv.org/abs/2102.08622.
Stars: ✭ 49 (-35.53%)
Mutual labels:  semi-supervised-learning
VQ-APC
Vector Quantized Autoregressive Predictive Coding (VQ-APC)
Stars: ✭ 34 (-55.26%)
Mutual labels:  self-supervised-learning
autonormalize
python library for automated dataset normalization
Stars: ✭ 104 (+36.84%)
Mutual labels:  normalization
temporal-ensembling-semi-supervised
Keras implementation of temporal ensembling (semi-supervised learning)
Stars: ✭ 22 (-71.05%)
Mutual labels:  semi-supervised-learning
SelfGNN
A PyTorch implementation of "SelfGNN: Self-supervised Graph Neural Networks without explicit negative sampling" paper, which appeared in The International Workshop on Self-Supervised Learning for the Web (SSL'21) @ the Web Conference 2021 (WWW'21).
Stars: ✭ 24 (-68.42%)
Mutual labels:  self-supervised-learning
tape-neurips2019
Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology. (DEPRECATED)
Stars: ✭ 117 (+53.95%)
Mutual labels:  semi-supervised-learning
ORNA
Fast in-silico normalization algorithm for NGS data
Stars: ✭ 21 (-72.37%)
Mutual labels:  normalization
S2-BNN
S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021)
Stars: ✭ 53 (-30.26%)
Mutual labels:  self-supervised-learning
ccgl
TKDE 2022. CCGL: Contrastive Cascade Graph Learning.
Stars: ✭ 20 (-73.68%)
Mutual labels:  self-supervised-learning
video-pace
code for our ECCV-2020 paper: Self-supervised Video Representation Learning by Pace Prediction
Stars: ✭ 95 (+25%)
Mutual labels:  self-supervised-learning
url-normalize
URL normalization for Python
Stars: ✭ 82 (+7.89%)
Mutual labels:  normalization

EMAN: Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning

This is a PyTorch implementation of the EMAN paper. It supports three popular self-supervised and semi-supervised learning techniques: MoCo, BYOL, and FixMatch.
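
At the core of the method, standard teacher-student frameworks update the teacher's weights as an exponential moving average (EMA) of the student's, but still let the teacher's BatchNorm layers normalize with current-batch statistics. EMAN instead updates the teacher's BatchNorm statistics and affine parameters by the same EMA rule, so the teacher performs no batch-wise normalization at all. A minimal sketch of this update (illustrative only, not the repository's exact code):

import copy
import torch
import torchvision.models as models

student = models.resnet50()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
teacher.eval()  # BN layers use the EMA running stats, never batch statistics

@torch.no_grad()
def eman_update(student, teacher, momentum=0.999):
    # Usual EMA over the weights, as in MoCo/BYOL...
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
    # ...and, the EMAN change, the same EMA over the BatchNorm buffers
    # (running_mean / running_var) instead of recomputing them per batch.
    for b_s, b_t in zip(student.buffers(), teacher.buffers()):
        if b_t.dtype.is_floating_point:
            b_t.mul_(momentum).add_(b_s, alpha=1.0 - momentum)
        else:  # integer buffers such as num_batches_tracked
            b_t.copy_(b_s)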

If you use the code/model/results of this repository, please cite:

@inproceedings{cai21eman,
  author    = {Zhaowei Cai and Avinash Ravichandran and Subhransu Maji and Charless Fowlkes and Zhuowen Tu and Stefano Soatto},
  title     = {Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning},
  booktitle = {CVPR},
  year      = {2021}
}

Install

First, install PyTorch and torchvision. We have tested with version 1.7.1, but other versions should also work, e.g., 1.5.1.

$ conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch

Also install the other dependencies.

$ pip install pandas faiss-gpu

Data Preparation

Prepare the ImageNet dataset following the official PyTorch ImageNet training code, using the standard folder structure expected by torchvision's datasets.ImageFolder. For the semi-supervised learning experiments, also download the ImageNet index files. The file structure should look like this:

$ tree data
imagenet
├── train
│   ├── class1
│   │   └── *.jpeg
│   ├── class2
│   │   └── *.jpeg
│   └── ...
├── val
│   ├── class1
│   │   └── *.jpeg
│   ├── class2
│   │   └── *.jpeg
│   └── ...
└── indexes
    └── *_index.csv
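
The *_index.csv files define the labeled/unlabeled splits (e.g., 10%/90%) used by the semi-supervised experiments. As a rough sketch of how such an index can restrict an ImageFolder dataset, assuming one image path (relative to the train folder) per row (the actual schema of the downloaded index files may differ):

import os
import pandas as pd
import torchvision.datasets as datasets

root = 'imagenet/train'
# Hypothetical schema: one relative image path per row.
index = pd.read_csv('imagenet/indexes/train_10p_index.csv', header=None)
keep = set(index[0].tolist())

dataset = datasets.ImageFolder(root)
# Keep only the samples listed in the index file.
dataset.samples = [(path, target) for path, target in dataset.samples
                   if os.path.relpath(path, root) in keep]
dataset.imgs = dataset.samples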

Training

To do self-supervised pre-training of MoCo-v2 with EMAN for 200 epochs, run:

python main_moco.py \
  --arch MoCoEMAN --backbone resnet50_encoder \
  --epochs 200 --warmup-epoch 10 \
  --moco-t 0.2 --cos \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  /path/to/imagenet
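
The --cos and --warmup-epoch flags select a warmup-then-cosine learning-rate schedule. A common implementation of such a schedule looks like this (the repository's exact warmup shape may differ):

import math

def adjust_learning_rate(optimizer, epoch, base_lr, total_epochs, warmup_epochs):
    # Linear warmup for the first warmup_epochs, then cosine decay to zero.
    if epoch < warmup_epochs:
        lr = base_lr * (epoch + 1) / warmup_epochs
    else:
        progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
        lr = 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
    for group in optimizer.param_groups:
        group['lr'] = lr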

To do self-supervised pre-training of BYOL with EMAN for 200 epochs, run:

python main_byol.py \
  --arch BYOLEMAN --backbone resnet50_encoder \
  --lr 1.8 -b 512 --wd 0.000001 \
  --byol-m 0.98 \
  --epochs 200 --cos --warmup-epoch 10 \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  /path/to/imagenet

To do semi-supervised training of FixMatch with EMAN for 100 epochs, run:

python main_fixmatch.py \
  --arch FixMatch --backbone resnet50_encoder \
  --eman \
  --lr 0.03 \
  --epochs 100 --schedule 60 80 \
  --warmup-epoch 5 \
  --trainindex_x train_10p_index.csv --trainindex_u train_90p_index.csv \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  /path/to/imagenet
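
FixMatch derives a pseudo-label from a weakly augmented view and trains the model to reproduce it on a strongly augmented view, keeping only confident pseudo-labels; in the EMAN variant, the pseudo-labels come from the EMA teacher. A minimal sketch of the unlabeled loss (0.95 is the confidence threshold from the FixMatch paper; this is not the repository's exact code):

import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(teacher, student, x_weak, x_strong, threshold=0.95):
    # Pseudo-labels from the (EMA) teacher on the weakly augmented view.
    with torch.no_grad():
        probs = F.softmax(teacher(x_weak), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        mask = (confidence >= threshold).float()  # drop low-confidence labels
    # The student must match the pseudo-label on the strongly augmented view.
    loss = F.cross_entropy(student(x_strong), pseudo_labels, reduction='none')
    return (loss * mask).mean()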

Linear Classification and Finetuning

With a pre-trained model (e.g., MoCo), to train a supervised linear classifier on frozen features/weights on 10% of ImageNet, run:

python main_lincls.py \
  -a resnet50 \
  --lr 30.0 \
  --epochs 50 --schedule 30 40 \
  --eval-freq 5 \
  --trainindex train_10p_index.csv \
  --model-prefix encoder_q \
  --pretrained /path/to/model_best.pth.tar \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  /path/to/imagenet
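
Here --model-prefix encoder_q selects the MoCo query-encoder weights inside the checkpoint. A rough sketch of how such a prefixed checkpoint can be loaded into a plain torchvision ResNet-50 (the checkpoint layout is assumed and may differ from this repository's):

import torch
import torchvision.models as models

model = models.resnet50()
checkpoint = torch.load('/path/to/model_best.pth.tar', map_location='cpu')
state = checkpoint.get('state_dict', checkpoint)

prefix = 'encoder_q.'  # 'online_net.backbone.' for BYOL
backbone_state = {}
for key, value in state.items():
    if key.startswith('module.'):  # strip a possible DistributedDataParallel wrapper
        key = key[len('module.'):]
    if key.startswith(prefix):
        backbone_state[key[len(prefix):]] = value

# strict=False: the contrastive MLP head does not match ResNet's fc layer,
# so fc.weight/fc.bias stay randomly initialized for the new classifier.
missing, unexpected = model.load_state_dict(backbone_state, strict=False)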

To finetune the self-supervised pre-trained model on 10% of ImageNet, with different learning rates for the pre-trained backbone and the last classification layer, run:

python main_cls.py \
  -a resnet50 \
  --lr 0.001 --lr-classifier 0.1 \
  --epochs 50 --schedule 30 40 \
  --eval-freq 5 \
  --trainindex train_10p_index.csv \
  --model-prefix encoder_q \
  --self-pretrained /path/to/model_best.pth.tar \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  /path/to/imagenet

For BYOL, change the prefix to --model-prefix online_net.backbone. For the best performance, follow the learning rate settings in Section 5.2 of the paper.
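
The two learning rates map naturally onto optimizer parameter groups; a minimal sketch mirroring the --lr 0.001 --lr-classifier 0.1 flags above (again illustrative, not the repository's exact code):

import torch
import torchvision.models as models

model = models.resnet50()
classifier_params = list(model.fc.parameters())
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith('fc.')]

optimizer = torch.optim.SGD(
    [{'params': backbone_params, 'lr': 0.001},    # small LR for the pre-trained backbone
     {'params': classifier_params, 'lr': 0.1}],   # large LR for the fresh classifier
    momentum=0.9, weight_decay=1e-4)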

Models

Our pre-trained ResNet-50 models can be downloaded as follows:

name           epochs   acc@1% IN   acc@10% IN   acc@100% IN   model
MoCo-EMAN      200      48.9        60.5         67.7          download
MoCo-EMAN      800      55.4        64.0         70.1          download
MoCo-2X-EMAN   200      56.8        65.7         72.3          download
BYOL-EMAN      200      55.1        66.7         72.2          download

License

This project is under the CC-BY-NC 4.0 license. See LICENSE for details.
