bl0 / Moco

Unofficial PyTorch DistributedDataParallel implementation of "MoCo: Momentum Contrast for Unsupervised Visual Representation Learning"

Programming Languages

python

Projects that are alternatives to or similar to Moco

Lemniscate.pytorch
Unsupervised Feature Learning via Non-parametric Instance Discrimination
Stars: ✭ 532 (+375%)
Mutual labels:  unsupervised-learning, imagenet
Self Supervised Speech Recognition
speech to text with self-supervised learning based on wav2vec 2.0 framework
Stars: ✭ 106 (-5.36%)
Mutual labels:  unsupervised-learning
Pytorch Classification
Classification with PyTorch.
Stars: ✭ 1,268 (+1032.14%)
Mutual labels:  imagenet
Vovnet.pytorch
A pytorch implementation of VoVNet
Stars: ✭ 101 (-9.82%)
Mutual labels:  imagenet
Self Supervised Relational Reasoning
Official PyTorch implementation of the paper "Self-Supervised Relational Reasoning for Representation Learning", NeurIPS 2020 Spotlight.
Stars: ✭ 89 (-20.54%)
Mutual labels:  unsupervised-learning
Ml Lib
An extensive machine learning library, made from scratch (Python).
Stars: ✭ 102 (-8.93%)
Mutual labels:  unsupervised-learning
Tf Mobilenet V2
Mobilenet V2(Inverted Residual) Implementation & Trained Weights Using Tensorflow
Stars: ✭ 85 (-24.11%)
Mutual labels:  imagenet
Imagenetv2
A new test set for ImageNet
Stars: ✭ 109 (-2.68%)
Mutual labels:  imagenet
Resnet Imagenet Caffe
train resnet on imagenet from scratch with caffe
Stars: ✭ 105 (-6.25%)
Mutual labels:  imagenet
Ddflow
DDFlow: Learning Optical Flow with Unlabeled Data Distillation
Stars: ✭ 101 (-9.82%)
Mutual labels:  unsupervised-learning
Vizuka
Explore high-dimensional datasets and how your algo handles specific regions.
Stars: ✭ 100 (-10.71%)
Mutual labels:  unsupervised-learning
Hbonet
[ICCV 2019] Harmonious Bottleneck on Two Orthogonal Dimensions
Stars: ✭ 94 (-16.07%)
Mutual labels:  imagenet
Back2future.pytorch
Unsupervised Learning of Multi-Frame Optical Flow with Occlusions
Stars: ✭ 104 (-7.14%)
Mutual labels:  unsupervised-learning
Pysad
Streaming Anomaly Detection Framework in Python (Outlier Detection for Streaming Data)
Stars: ✭ 87 (-22.32%)
Mutual labels:  unsupervised-learning
Diff2vec
Reference implementation of Diffusion2Vec (Complenet 2018) built on Gensim and NetworkX.
Stars: ✭ 108 (-3.57%)
Mutual labels:  unsupervised-learning
Pointglr
Global-Local Bidirectional Reasoning for Unsupervised Representation Learning of 3D Point Clouds (CVPR 2020)
Stars: ✭ 86 (-23.21%)
Mutual labels:  unsupervised-learning
Awesome Transfer Learning
Best transfer learning and domain adaptation resources (papers, tutorials, datasets, etc.)
Stars: ✭ 1,349 (+1104.46%)
Mutual labels:  unsupervised-learning
Pnasnet.tf
TensorFlow implementation of PNASNet-5 on ImageNet
Stars: ✭ 102 (-8.93%)
Mutual labels:  imagenet
Paysage
Unsupervised learning and generative models in python/pytorch.
Stars: ✭ 109 (-2.68%)
Mutual labels:  unsupervised-learning
Unsupervised Depth Completion Visual Inertial Odometry
Tensorflow implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
Stars: ✭ 109 (-2.68%)
Mutual labels:  unsupervised-learning

Unofficial implementation of "MoCo: Momentum Contrast for Unsupervised Visual Representation Learning"

Highlights

  1. Effective. Important details mentioned in the paper, such as ShuffleBN and the distributed queue, are carefully implemented to reproduce the reported results (see the sketch after this list).
  2. Efficient. The implementation is based on pytorch DistributedDataParallel and Apex automatic mixed precision. Training MoCo on the ImageNet dataset takes only about 40 hours with 8 V100 GPUs, less than the 3 days reported in the original MoCo paper.
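
For reference, here is a minimal sketch of those two details (an illustrative, hypothetical rendering in PyTorch, not the exact code from this repo). ShuffleBN gathers the key batch from all GPUs, shuffles it with a permutation broadcast from rank 0 so every process agrees, and hands each GPU back a shard; the distributed queue likewise gathers keys from all GPUs before enqueueing them as negatives:

import torch
import torch.distributed as dist

# Illustrative sketch only: function names and signatures are hypothetical.
@torch.no_grad()
def shuffle_bn(x):
    # Gather the local key batches from all GPUs into one global batch.
    world_size = dist.get_world_size()
    batch_size = x.size(0)
    x_all = [torch.empty_like(x) for _ in range(world_size)]
    dist.all_gather(x_all, x)
    x_all = torch.cat(x_all, dim=0)

    # Rank 0 draws a random permutation; broadcasting keeps all GPUs in sync.
    idx_shuffle = torch.randperm(x_all.size(0), device=x.device)
    dist.broadcast(idx_shuffle, src=0)
    idx_unshuffle = torch.argsort(idx_shuffle)

    # Each GPU keeps only its own shard of the shuffled global batch.
    rank = dist.get_rank()
    idx_this = idx_shuffle[rank * batch_size:(rank + 1) * batch_size]
    return x_all[idx_this], idx_unshuffle

@torch.no_grad()
def enqueue(queue, queue_ptr, keys):
    # Distributed queue: gather keys from all GPUs so the queue of negatives
    # is filled with the full global batch, then overwrite the oldest entries.
    keys_all = [torch.empty_like(keys) for _ in range(dist.get_world_size())]
    dist.all_gather(keys_all, keys)
    keys_all = torch.cat(keys_all, dim=0)
    k = keys_all.size(0)
    ptr = int(queue_ptr)
    queue[:, ptr:ptr + k] = keys_all.t()  # queue has shape (dim, K), K % k == 0
    queue_ptr[0] = (ptr + k) % queue.size(1)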

Requirements

The following environments have been tested:

  • Anaconda with python >= 3.6
  • pytorch>=1.3, torchvision, cuda=10.1/9.2
  • others: pip install termcolor opencv-python tensorboard
  • [Optional] apex: automatic mixed precision training.

Train and eval on ImageNet

  • The pre-training stage:

    data_dir="./data/imagenet100"
    output_dir="./output/imagenet/K65536"
    python -m torch.distributed.launch --master_port 12347 --nproc_per_node=8 \
        train.py \
        --data-dir ${data_dir} \
        --dataset imagenet \
        --nce-k 65536 \
        --output-dir ${output_dir}
    

    The log, checkpoints, and tensorboard events will be saved in ${output_dir}. Set --amp-opt-level to O1, O2, or O3 to enable mixed precision training. Run python train.py --help for the full list of options.

  • The linear evaluation stage:

    python -m torch.distributed.launch --nproc_per_node=4 \
        eval.py \
        --dataset imagenet \
        --data-dir ${data_dir} \
        --pretrained-model ${output_dir}/current.pth \
        --output-dir ${output_dir}/eval
    

    The checkpoints and tensorboard log will be saved in ${output_dir}/eval. Set --amp-opt-level to O1, O2, or O3 to enable mixed precision training. Run python eval.py --help for the full list of options.

Pre-trained weights

Pre-trained model checkpoints and tensorboard logs for K = 16384 and K = 65536 on the ImageNet dataset can be downloaded from OneDrive.

The hyperparameters are also stored in the model checkpoint, so you can retrieve the full config from a checkpoint like this:

import torch

# Load the checkpoint on CPU and inspect the saved training configuration.
ckpt = torch.load('model.pth', map_location='cpu')
print(ckpt['opt'])

Performance comparison with the original paper

K        Acc@1 (ours)     Acc@1 (MoCo paper)
16384    59.89 (model)    60.4
65536    60.79 (model)    60.6

Notes

The MultiStepLR of pytorch 1.4 is broken (see https://github.com/pytorch/pytorch/issues/33229 for details). If you are using pytorch 1.4, do not set --lr-scheduler to step; use cosine instead.
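
As an illustration, a cosine schedule can be set up with PyTorch's built-in scheduler (a minimal sketch with a placeholder model and optimizer; in this repo the --lr-scheduler cosine flag takes care of this for you):

import torch

# Minimal sketch: a placeholder model and optimizer with CosineAnnealingLR,
# which decays the learning rate smoothly and avoids the MultiStepLR bug.
model = torch.nn.Linear(128, 128)
optimizer = torch.optim.SGD(model.parameters(), lr=0.03, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

for epoch in range(200):
    # train_one_epoch(model, optimizer)  # hypothetical training step
    scheduler.step()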

Acknowledgements

A lot of code is borrowed from CMC and lemniscate.
