
PaddlePaddle / PASSL

License: Apache-2.0
PASSL includes image self-supervised learning algorithms such as SimCLR, MoCo v1/v2, BYOL, CLIP, PixPro, BEiT, and MAE, as well as fundamental vision models such as Vision Transformer, DeiT, Swin Transformer, CvT, T2T-ViT, MLP-Mixer, XCiT, ConvNeXt, and PVTv2.

Programming Languages

Python

Projects that are alternatives to or similar to PASSL

PLSC
Paddle Large Scale Classification Tools; supports ArcFace, CosFace, PartialFC, and data parallel + model parallel training. Models include ResNet, ViT, DeiT, FaceViT.
Stars: ✭ 113 (-15.67%)
Mutual labels:  vit, paddle, deit
mmselfsup
OpenMMLab Self-Supervised Learning Toolbox and Benchmark
Stars: ✭ 2,315 (+1627.61%)
Mutual labels:  moco, self-supervised-learning, simclr
Paddle-Image-Models
A PaddlePaddle version image model zoo.
Stars: ✭ 131 (-2.24%)
Mutual labels:  pvt, deit, swin-transformer
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (-39.55%)
Mutual labels:  moco, self-supervised-learning
Simclr
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
Stars: ✭ 2,720 (+1929.85%)
Mutual labels:  self-supervised-learning, simclr
SimMIM
This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".
Stars: ✭ 717 (+435.07%)
Mutual labels:  self-supervised-learning, swin-transformer
HugsVision
HugsVision is an easy-to-use Hugging Face wrapper for state-of-the-art computer vision
Stars: ✭ 154 (+14.93%)
Mutual labels:  vit, deit
libai
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
Stars: ✭ 284 (+111.94%)
Mutual labels:  self-supervised-learning, vision-transformer
mobilevit-pytorch
A PyTorch implementation of "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer".
Stars: ✭ 349 (+160.45%)
Mutual labels:  vit, vision-transformer
SReT
Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer"
Stars: ✭ 51 (-61.94%)
Mutual labels:  vit, vision-transformer
TransMorph Transformer for Medical Image Registration
TransMorph: Transformer for Unsupervised Medical Image Registration (PyTorch)
Stars: ✭ 130 (-2.99%)
Mutual labels:  vision-transformer, swin-transformer
pytorch-vit
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Stars: ✭ 250 (+86.57%)
Mutual labels:  vit, vision-transformer
SimSiam
Exploring Simple Siamese Representation Learning
Stars: ✭ 52 (-61.19%)
Mutual labels:  self-supervised-learning, simclr
towhee
Towhee is a framework that is dedicated to making neural data processing pipelines simple and fast.
Stars: ✭ 821 (+512.69%)
Mutual labels:  vit, vision-transformer
LaTeX-OCR
pix2tex: Using a ViT to convert images of equations into LaTeX code.
Stars: ✭ 1,566 (+1068.66%)
Mutual labels:  vit, vision-transformer
FKD
A Fast Knowledge Distillation Framework for Visual Recognition
Stars: ✭ 49 (-63.43%)
Mutual labels:  self-supervised-learning
SimCLR-in-TensorFlow-2
(Minimally) implements SimCLR (https://arxiv.org/abs/2002.05709) in TensorFlow 2.
Stars: ✭ 75 (-44.03%)
Mutual labels:  self-supervised-learning
OrientedRepPoints DOTA
Oriented Object Detection: Oriented RepPoints + Swin Transformer/ReResNet
Stars: ✭ 62 (-53.73%)
Mutual labels:  swin-transformer
video repres mas
code for CVPR-2019 paper: Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics
Stars: ✭ 63 (-52.99%)
Mutual labels:  self-supervised-learning
GCA
[WWW 2021] Source code for "Graph Contrastive Learning with Adaptive Augmentation"
Stars: ✭ 69 (-48.51%)
Mutual labels:  self-supervised-learning

⚙️ English | 简体中文

Introduction

PASSL is a PaddlePaddle-based vision library for state-of-the-art self-supervised learning research. PASSL aims to accelerate the research cycle in self-supervised learning: from designing a new self-supervised task to evaluating the learned representations.

Key features of PASSL:

  • Reproducible implementation of SOTA in Self-Supervision

    Existing state-of-the-art self-supervised methods are implemented: SimCLR, MoCo (v1), MoCo (v2), MoCo-BYOL, BYOL, and BEiT. Supervised classification training is also supported. (A sketch of the contrastive objective shared by several of these methods follows this list.)

  • Modular Design

    It is easy to build new tasks and to reuse existing components from other tasks (trainer, models and heads, data transforms, etc.).
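
As referenced in the first bullet above, here is a minimal, self-contained sketch of the NT-Xent (InfoNCE) objective that contrastive methods such as SimCLR and MoCo build on. It is written against the public PaddlePaddle API purely for illustration; the function name, temperature default, and toy inputs are chosen for this example, and it is not PASSL's implementation.

import paddle
import paddle.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views z1, z2 of shape [N, D]."""
    n = z1.shape[0]
    z = F.normalize(paddle.concat([z1, z2], axis=0), axis=1)    # [2N, D], unit norm
    sim = paddle.matmul(z, z, transpose_y=True) / temperature   # [2N, 2N] similarities
    sim = sim - 1e9 * paddle.eye(2 * n)                         # mask self-similarity
    # For row i, the positive is the other augmented view of the same image.
    labels = paddle.concat([paddle.arange(n, 2 * n), paddle.arange(0, n)])
    return F.cross_entropy(sim, labels)

# Toy usage with random projections standing in for encoder outputs.
z1, z2 = paddle.randn([8, 128]), paddle.randn([8, 128])
print(nt_xent(z1, z2))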

🛠️ The ultimate goal of PASSL is to use self-supervised learning to provide more appropriate pre-training weights for downstream tasks while significantly reducing the cost of data annotation.

📣 Recent Update:

  • (2022-02-09): Refactored the README
  • 🔥 Now:

Implemented Models

  • Self-Supervised Learning Models

PASSL implements a series of self-supervised learning algorithms; see the Document column in the table below for details on how to use each one.

Method     Epochs  Official results  PASSL results  Backbone   Model     Document
MoCo       200     60.6              60.64          ResNet-50  download  Train MoCo
SimCLR     100     64.5              65.3           ResNet-50  download  Train SimCLR
MoCo v2    200     67.7              67.72          ResNet-50  download  Train MoCo
MoCo-BYOL  300     71.56             72.10          ResNet-50  download  Train MoCo-BYOL
BYOL       300     72.50             71.62          ResNet-50  download  Train BYOL
PixPro     100     55.1 (fp16)       57.2 (fp32)    ResNet-50  download  Train PixPro

Benchmark: linear image classification on ImageNet-1K.
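
The numbers above follow the linear evaluation protocol: the pre-trained backbone is frozen and only a linear classifier trained on top of its features is scored on ImageNet-1K. A rough sketch of that protocol with the public PaddlePaddle API is shown below; the data pipeline is omitted and the hyper-parameters are placeholders, so this is an illustration of the idea rather than PASSL's training code.

import paddle
import paddle.nn as nn
from paddle.vision.models import resnet50

backbone = resnet50(num_classes=0)     # num_classes=0 drops the classification head
backbone.eval()                        # keep BatchNorm statistics fixed
for p in backbone.parameters():
    p.stop_gradient = True             # freeze the encoder

head = nn.Linear(2048, 1000)           # ResNet-50 feature dim -> 1000 ImageNet classes
opt = paddle.optimizer.Momentum(learning_rate=0.1, momentum=0.9,
                                parameters=head.parameters())
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    with paddle.no_grad():             # features come from the frozen encoder
        feats = backbone(images).flatten(1)
    loss = loss_fn(head(feats), labels)
    loss.backward()
    opt.step()
    opt.clear_grad()
    return loss

# Toy call with random data in place of an ImageNet batch.
print(train_step(paddle.randn([4, 3, 224, 224]), paddle.randint(0, 1000, [4])))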

Coming Soon: More algorithm implementations are already planned ...
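
Several rows in the table above (MoCo, MoCo v2, MoCo-BYOL, BYOL) rely on a momentum ("target") encoder whose weights are an exponential moving average of the online encoder. The sketch below shows that update in isolation with the public PaddlePaddle API; it is an illustration of the idea only, not PASSL's code, and resnet18 is used just to keep the example small.

import paddle
from paddle.vision.models import resnet18

online = resnet18()
target = resnet18()
target.set_state_dict(online.state_dict())   # start as an exact copy
for p in target.parameters():
    p.stop_gradient = True                   # the target is never backpropagated through

def momentum_update(online, target, m=0.999):
    """target <- m * target + (1 - m) * online, parameter by parameter."""
    with paddle.no_grad():
        for p_o, p_t in zip(online.parameters(), target.parameters()):
            p_t.set_value(m * p_t + (1.0 - m) * p_o)

momentum_update(online, target)   # called once per training step in practice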

  • Classification Models

PASSL implements influential image classification algorithms such as Vision Transformer and provides the corresponding pre-trained weights. It is designed to support the construction and study of self-supervised, multimodal, and large-model algorithms. See Classification_Models_Guide.md for more usage details.

Model             Detail  Tutorial
ViT               /       PaddleEdu
Swin Transformer  /       PaddleEdu
CaiT              config  PaddleFleet
T2T-ViT           config  PaddleFleet
CvT               config  PaddleFleet
BEiT              config  unofficial
MLP-Mixer         config  PaddleFleet
ConvNeXt          config  PaddleFleet
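
ViT and the other Transformer models listed above start from the same patch embedding step: the input image is cut into fixed-size patches and each patch is linearly projected into a token. The sketch below shows that step with the public PaddlePaddle API; it is an illustration of the idea, not PASSL's implementation, and the ViT-Base defaults (16x16 patches, 768-dim tokens) are used only as an example.

import paddle
import paddle.nn as nn

class PatchEmbed(nn.Layer):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is equivalent to cutting patches and applying a linear map.
        self.proj = nn.Conv2D(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                         # x: [N, 3, 224, 224]
        x = self.proj(x)                          # [N, 768, 14, 14]
        return x.flatten(2).transpose([0, 2, 1])  # [N, 196, 768] token sequence

tokens = PatchEmbed()(paddle.randn([2, 3, 224, 224]))
print(tokens.shape)   # [2, 196, 768]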

🔥 PASSL provides a detailed dissection of the algorithm, see Tutorial for details.

Installation

See INSTALL.md.

Getting Started

Please see GETTING_STARTED.md for the basic usage of PASSL.

Awesome SSL

Self-Supervised Learning (SSL) is a rapidly growing field, and some influential papers are listed here for research use. PASSL seeks to implement self-supervised algorithms with application potential.

Contributing

PASSL is still young and may contain bugs and issues; please report them in our bug tracking system. Contributions are welcome. Besides, if you have any ideas about PASSL, please let us know.

Citation

If PASSL is helpful to your research, feel free to cite

@misc{passl,
    title={PASSL: A visual Self-Supervised Learning Library},
    author={PASSL Contributors},
    howpublished = {\url{https://github.com/PaddlePaddle/PASSL}},
    year={2022}
}

License

As shown in the LICENSE.txt file, PASSL is released under the Apache 2.0 license.
