open-mmlab / OpenSelfSup

License: Apache-2.0
Self-Supervised Learning Toolbox and Benchmark

Programming Languages

Python

Projects that are alternatives to or similar to OpenSelfSup

Tadw
An implementation of "Network Representation Learning with Rich Text Information" (IJCAI '15).
Stars: ✭ 43 (-96.53%)
Mutual labels:  unsupervised-learning
Dmgi
Unsupervised Attributed Multiplex Network Embedding (AAAI 2020)
Stars: ✭ 62 (-95%)
Mutual labels:  unsupervised-learning
Karateclub
Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020)
Stars: ✭ 1,190 (-3.95%)
Mutual labels:  unsupervised-learning
Voxelmorph
Unsupervised Learning for Image Registration
Stars: ✭ 1,057 (-14.69%)
Mutual labels:  unsupervised-learning
Dgi
TensorFlow implementation of Deep Graph Infomax
Stars: ✭ 58 (-95.32%)
Mutual labels:  unsupervised-learning
Sine
A PyTorch Implementation of "SINE: Scalable Incomplete Network Embedding" (ICDM 2018).
Stars: ✭ 67 (-94.59%)
Mutual labels:  unsupervised-learning
Susi
SuSi: Python package for unsupervised, supervised and semi-supervised self-organizing maps (SOM)
Stars: ✭ 42 (-96.61%)
Mutual labels:  unsupervised-learning
Image similarity
PyTorch Blog Post On Image Similarity Search
Stars: ✭ 80 (-93.54%)
Mutual labels:  unsupervised-learning
Weakly Supervised 3d Object Detection
Weakly Supervised 3D Object Detection from Point Clouds (VS3D), ACM MM 2020
Stars: ✭ 61 (-95.08%)
Mutual labels:  unsupervised-learning
Self Supervised Learning Overview
📜 Self-Supervised Learning from Images: Up-to-date reading list.
Stars: ✭ 73 (-94.11%)
Mutual labels:  unsupervised-learning
Lir For Unsupervised Ir
This is an implementation for the CVPR2020 paper "Learning Invariant Representation for Unsupervised Image Restoration"
Stars: ✭ 53 (-95.72%)
Mutual labels:  unsupervised-learning
Hypergan
Composable GAN framework with api and user interface
Stars: ✭ 1,104 (-10.9%)
Mutual labels:  unsupervised-learning
Insta Dm
Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021)
Stars: ✭ 67 (-94.59%)
Mutual labels:  unsupervised-learning
Php Ml
PHP-ML - Machine Learning library for PHP
Stars: ✭ 7,900 (+537.61%)
Mutual labels:  unsupervised-learning
Attention Based Aspect Extraction
Code for unsupervised aspect extraction, using Keras and its Backends
Stars: ✭ 75 (-93.95%)
Mutual labels:  unsupervised-learning
Student Teacher Anomaly Detection
Student–Teacher Anomaly Detection with Discriminative Latent Embeddings
Stars: ✭ 43 (-96.53%)
Mutual labels:  unsupervised-learning
Neuralhmm
code for unsupervised learning Neural Hidden Markov Models paper
Stars: ✭ 64 (-94.83%)
Mutual labels:  unsupervised-learning
Mug
Learning Video Object Segmentation from Unlabeled Videos (CVPR2020)
Stars: ✭ 81 (-93.46%)
Mutual labels:  unsupervised-learning
Marta Gan
MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification
Stars: ✭ 75 (-93.95%)
Mutual labels:  unsupervised-learning
Concrete Autoencoders
Stars: ✭ 68 (-94.51%)
Mutual labels:  unsupervised-learning

OpenSelfSup

News

  • Downstream tasks now support more methods (Mask R-CNN FPN, RetinaNet, Keypoint R-CNN) and more datasets (Cityscapes).
  • The 'GaussianBlur' augmentation has been switched from OpenCV to PIL, doubling MoCo v2 training speed
    (time/iter 0.35s --> 0.16s; SimCLR and BYOL also speed up).
  • OpenSelfSup now supports Mixed Precision Training (apex AMP)!
  • A bug in MoCo v2 has been fixed, and the results are now reproducible.
  • OpenSelfSup now supports BYOL!
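The PIL-based 'GaussianBlur' mentioned above can be sketched roughly as follows. This is a minimal illustration, assuming a MoCo v2-style sigma drawn uniformly from [0.1, 2.0]; it is not the repo's exact transform.

```python
# Sketch (not OpenSelfSup's exact implementation) of a PIL-based
# GaussianBlur augmentation: blur each image with a randomly
# sampled sigma, as popularized by MoCo v2 / SimCLR.
import random
from PIL import Image, ImageFilter


class GaussianBlur:
    """Apply PIL Gaussian blur with a sigma sampled uniformly per call."""

    def __init__(self, sigma_min=0.1, sigma_max=2.0):
        self.sigma_min = sigma_min
        self.sigma_max = sigma_max

    def __call__(self, img):
        sigma = random.uniform(self.sigma_min, self.sigma_max)
        return img.filter(ImageFilter.GaussianBlur(radius=sigma))
```

Because PIL's `ImageFilter.GaussianBlur` runs as a single native-code filter, a transform like this avoids the per-image conversion overhead of an OpenCV round-trip inside a PIL-based pipeline.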

Introduction

OpenSelfSup is an open-source unsupervised representation learning toolbox based on PyTorch.

The master branch works with PyTorch 1.1 or higher.

What does this repo do?

Below are the relations among Unsupervised Learning, Self-Supervised Learning, and Representation Learning. This repo focuses on the shaded area, i.e., Unsupervised Representation Learning, of which Self-Supervised Representation Learning is the major branch. Since in many cases Self-Supervised Representation Learning and Unsupervised Representation Learning are not strictly distinguished, we still name this repo OpenSelfSup.

Major features

  • All methods in one repository

    For a comprehensive comparison across all benchmarks, refer to MODEL_ZOO.md. Most of the self-supervised pretraining methods below use the batch_size=256, epochs=200 setting.

    Method             VOC07 SVM (best layer)   ImageNet (best layer)
    ImageNet           87.17                    76.17
    Random             30.54                    16.21
    Relative-Loc       64.78                    49.31
    Rotation-Pred      67.38                    54.99
    DeepCluster        74.26                    57.71
    NPID               74.50                    56.61
    ODC                78.42                    57.70
    MoCo               79.18                    60.60
    MoCo v2            84.26                    67.69
    SimCLR             78.95                    61.57
    BYOL (epoch=300)   86.58                    72.35
  • Flexibility & Extensibility

    OpenSelfSup follows a code architecture similar to MMDetection's, but is even more flexible, since it integrates various self-supervised tasks, including classification, joint clustering and feature learning, contrastive learning, tasks with a memory bank, etc.

    For existing methods in this repo, you only need to modify config files to adjust hyper-parameters. It is also simple to design your own methods; please refer to GETTING_STARTED.md.

  • Efficiency

    All methods support multi-machine, multi-GPU distributed training.

  • Standardized Benchmarks

    We standardize the benchmarks, including logistic regression, SVM / low-shot SVM on linearly probed features, semi-supervised classification, and object detection. Below are the settings of these benchmarks.

    Benchmark                                Setting            Remarks
    ImageNet Linear Classification (Multi)   goyal2019scaling   Evaluate different layers.
    ImageNet Linear Classification (Last)    MoCo               Evaluate the last layer after global pooling.
    Places205 Linear Classification          goyal2019scaling   Evaluate different layers.
    ImageNet Semi-Sup Classification
    PASCAL VOC07 SVM                         goyal2019scaling   Costs="1.0,10.0,100.0" to save evaluation time w/o change of results.
    PASCAL VOC07 Low-shot SVM                goyal2019scaling   Costs="1.0,10.0,100.0" to save evaluation time w/o change of results.
    PASCAL VOC07+12 Object Detection         MoCo
    COCO17 Object Detection                  MoCo
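The SVM cost grid in the table above can be sketched as follows. This is a hedged illustration using scikit-learn's `LinearSVC` with synthetic stand-in features, not the repo's actual evaluation code (which builds on fair_self_supervision_benchmark).

```python
# Illustrative sketch (not OpenSelfSup's evaluation code) of sweeping
# the SVM cost values C in {1.0, 10.0, 100.0} over frozen features,
# as in the PASCAL VOC07 SVM benchmark setting above.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))        # stand-in for frozen backbone features
labels = (features[:, 0] > 0).astype(int)    # stand-in binary labels

train_x, val_x = features[:150], features[150:]
train_y, val_y = labels[:150], labels[150:]

best_cost, best_acc = None, -1.0
for cost in (1.0, 10.0, 100.0):              # the Costs="1.0,10.0,100.0" grid
    clf = LinearSVC(C=cost).fit(train_x, train_y)
    acc = clf.score(val_x, val_y)
    if acc > best_acc:
        best_cost, best_acc = cost, acc
```

Restricting the grid to three cost values keeps evaluation cheap; the table's remark indicates the reduced grid does not change the reported results.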

Change Log

Please refer to CHANGELOG.md for details and release history.

[2020-10-14] OpenSelfSup v0.3.0 is released with some bugs fixed and support of new features.

[2020-06-26] OpenSelfSup v0.2.0 is released with benchmark results and support of new features.

[2020-06-16] OpenSelfSup v0.1.0 is released.

Installation

Please refer to INSTALL.md for installation and dataset preparation.

Get Started

Please see GETTING_STARTED.md for the basic usage of OpenSelfSup.
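Since existing methods are adjusted by editing config files, a hedged sketch of what such a config looks like may help; the field names below follow the mmcv "Python file as config" convention and are assumptions for illustration, not the repo's exact schema.

```python
# Hypothetical config fragment in the mmcv python-config style used by
# OpenSelfSup-like toolboxes; field names are illustrative assumptions.
optimizer = dict(type='SGD', lr=0.03, weight_decay=0.0001, momentum=0.9)
data = dict(imgs_per_gpu=32, workers_per_gpu=4)  # 32 imgs x 8 GPUs = batch size 256
total_epochs = 200
```

Tuning a hyper-parameter, e.g. the learning rate, then means changing one value here rather than touching model code.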

Benchmark and Model Zoo

Please refer to MODEL_ZOO.md for a comprehensive set of pre-trained models and benchmarks.

License

This project is released under the Apache 2.0 license.

Acknowledgement

  • This repo borrows the architecture design and part of the code from MMDetection.
  • The implementation of MoCo and the detection benchmark borrow code from moco.
  • The SVM benchmark borrows code from fair_self_supervision_benchmark.
  • openselfsup/third_party/clustering.py is borrowed from deepcluster.

Contributors

We encourage researchers interested in self-supervised learning to contribute to OpenSelfSup. Your contributions, including implementing or transferring new methods to OpenSelfSup, performing experiments, reproducing results, parameter studies, etc., will be recorded in MODEL_ZOO.md. For now, the contributors include: Xiaohang Zhan (@XiaohangZhan), Jiahao Xie (@Jiahao000), Enze Xie (@xieenze), Xiangxiang Chu (@cxxgtxy), Zijian He (@scnuhealthy).

Contact

This repo is currently maintained by Xiaohang Zhan (@XiaohangZhan), Jiahao Xie (@Jiahao000) and Enze Xie (@xieenze).
