
Sense-X / UniFormer

License: Apache-2.0
[ICLR 2022] Official implementation of UniFormer

Programming Languages

Jupyter Notebook, Python

Projects that are alternatives of or similar to UniFormer

Gluon CV
Gluon CV Toolkit
Stars: ✭ 5,001 (+771.25%)
Mutual labels:  image-classification, semantic-segmentation, pose-estimation
Swin-Transformer
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
Stars: ✭ 8,046 (+1301.74%)
Mutual labels:  image-classification, semantic-segmentation
super-gradients
Easily train or fine-tune SOTA computer vision models with one open source training library
Stars: ✭ 429 (-25.26%)
Mutual labels:  image-classification, semantic-segmentation
Involution
[CVPR 2021] Involution: Inverting the Inherence of Convolution for Visual Recognition, a brand new neural operator
Stars: ✭ 252 (-56.1%)
Mutual labels:  image-classification, semantic-segmentation
awesome-computer-vision-models
A list of popular deep learning models related to classification, segmentation and detection problems
Stars: ✭ 419 (-27%)
Mutual labels:  image-classification, semantic-segmentation
food-detection-yolov5
🍔🍟🍗 Food analysis baseline with Theseus. Integrate object detection, image classification and multi-class semantic segmentation. 🍞🍖🍕
Stars: ✭ 68 (-88.15%)
Mutual labels:  image-classification, semantic-segmentation
Awesome Computer Vision Models
A list of popular deep learning models related to classification, segmentation and detection problems
Stars: ✭ 278 (-51.57%)
Mutual labels:  image-classification, semantic-segmentation
HugsVision
HugsVision is an easy-to-use Hugging Face wrapper for state-of-the-art computer vision
Stars: ✭ 154 (-73.17%)
Mutual labels:  image-classification, semantic-segmentation
CVAT
Powerful and efficient Computer Vision Annotation Tool (CVAT)
Stars: ✭ 6,557 (+1042.33%)
Mutual labels:  image-classification, semantic-segmentation
Label Studio
Label Studio is a multi-type data labeling and annotation tool with standardized output format
Stars: ✭ 7,264 (+1165.51%)
Mutual labels:  image-classification, semantic-segmentation
Deepdenoising
Stars: ✭ 81 (-85.89%)
Mutual labels:  image-classification, semantic-segmentation
Paper-Notes
Paper notes in deep learning/machine learning and computer vision
Stars: ✭ 37 (-93.55%)
Mutual labels:  image-classification, semantic-segmentation
Paz
Hierarchical perception library in Python for pose estimation, object detection, instance segmentation, keypoint estimation, face recognition, etc.
Stars: ✭ 131 (-77.18%)
Mutual labels:  semantic-segmentation, pose-estimation
ailia-models
The collection of pre-trained, state-of-the-art AI models for ailia SDK
Stars: ✭ 1,102 (+91.99%)
Mutual labels:  image-classification, pose-estimation
label-studio-frontend
Data labeling react app that is backend agnostic and can be embedded into your applications — distributed as an NPM package
Stars: ✭ 230 (-59.93%)
Mutual labels:  semantic-segmentation, pose-estimation
Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+318.99%)
Mutual labels:  image-classification, semantic-segmentation
WB color augmenter
WB color augmenter improves the accuracy of image classification and image semantic segmentation methods by emulating different WB effects (ICCV 2019) [Python & Matlab].
Stars: ✭ 89 (-84.49%)
Mutual labels:  image-classification, semantic-segmentation
Torchdistill
PyTorch-based modular, configuration-driven framework for knowledge distillation. 🏆 18 methods including SOTA are implemented so far. 🎁 Trained models, training logs and configurations are available for ensuring reproducibility.
Stars: ✭ 177 (-69.16%)
Mutual labels:  image-classification, semantic-segmentation
STAM-pytorch
Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification
Stars: ✭ 109 (-81.01%)
Mutual labels:  video-classification
MFNet-pytorch
MFNet-pytorch, image semantic segmentation using RGB-Thermal images
Stars: ✭ 72 (-87.46%)
Mutual labels:  semantic-segmentation

UniFormer

This repo is the official implementation of "UniFormer: Unifying Convolution and Self-attention for Visual Recognition" and "UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning".

It currently includes code and models for the following tasks:

  • Image classification
  • Video classification
  • Object detection
  • Semantic segmentation
  • Pose estimation

Note!

For downstream tasks:

  • We forgot to freeze BN in the backbone; freezing it will further improve performance (a minimal sketch is shown after this list).
  • We have verified that Token Labeling largely helps the downstream tasks. Give it a try if you use UniFormer for a competition or application.
  • The head_dim of some models is 32, which leads to a large memory cost but little improvement on downstream tasks. The models with head_dim=64 are released in image_classification.
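As a minimal sketch in plain PyTorch (not the repo's own training code), freezing BN means putting every BatchNorm module in eval mode and disabling its gradients. Since model.train() re-enables running-statistics updates, call the helper again after each switch back to training mode:

```python
import torch.nn as nn

def freeze_bn(backbone: nn.Module) -> None:
    """Freeze all BatchNorm layers: fix running stats and affine parameters."""
    for m in backbone.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()                      # stop updating running mean/var
            for p in m.parameters():      # freeze the affine weight/bias
                p.requires_grad = False
```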

Updates

11/20/2022

We have released UniFormerV2, which arms pre-trained ViTs with efficient UniFormer designs. It saves a lot of training resources and achieves powerful performance on 8 popular benchmarks. Please have a try! 🎉

10/26/2022

We have provided the code for video visualizations, please see video_classification/vis.

05/24/2022

  1. Some bugs in video recognition have been fixed in Nightcrawler. We successfully adapted UniFormer for extremely dark video classification! 🎉
  2. More demos for detection and segmentation are provided. 👏😄

03/06/2022

Some models with head_dim=64 are released, which saves memory for downstream tasks.
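The saving comes from the attention maps: with the channel width fixed, head_dim=64 halves the number of heads relative to head_dim=32, and the batch × heads × N × N attention tensor shrinks in proportion. A back-of-envelope helper (illustrative only, not from the repo):

```python
def attn_map_bytes(batch: int, dim: int, head_dim: int,
                   n_tokens: int, bytes_per_el: int = 4) -> int:
    """Rough size of one layer's attention maps: batch x heads x N x N floats."""
    num_heads = dim // head_dim
    return batch * num_heads * n_tokens * n_tokens * bytes_per_el

# e.g., dim=320 on a 64x64 feature map (N=4096), batch=2:
#   head_dim=32 -> 10 heads -> ~1.34 GB of attention maps per layer
#   head_dim=64 ->  5 heads -> ~0.67 GB, i.e. half the memory
```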

02/09/2022

Some popular models and demos have been updated on Hugging Face.

02/3/2022

Integrated into Hugging Face Spaces using Gradio. Have fun!

01/21/2022

UniFormer for video is accepted by ICLR 2022 (scores 8/8/6/8, top 3%)!

01/19/2022

  1. Pretrained models on ImageNet-1K with Token Labeling.
  2. Large-resolution fine-tuning.

01/18/2022

  1. Code and models for COCO object detection.
  2. Code and models for ADE20K semantic segmentation.
  3. Code and models for COCO pose estimation.

01/13/2022

[Initial commits]:

  1. Pretrained models on ImageNet-1K, Kinetics-400, Kinetics-600, and Something-Something V1&V2.

  2. Code and models for image classification and video classification.

Introduction

UniFormer (Unified transFormer) is introduced in our arXiv papers (see the citations below). It seamlessly integrates the merits of convolution and self-attention in a concise transformer format: we adopt local MHRA (Multi-Head Relation Aggregator) in shallow layers to largely reduce the computation burden, and global MHRA in deep layers to learn long-range token relations.
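A minimal PyTorch sketch of this design (illustrative only; the module name and hyperparameters below are ours, not the repo's): the block structure stays the same across stages, but the token-relation operator is a cheap depthwise convolution in shallow stages and multi-head self-attention in deep stages.

```python
import torch
import torch.nn as nn

class RelationAggregator(nn.Module):
    """Local MHRA ~ depthwise conv over neighbors; global MHRA ~ self-attention."""
    def __init__(self, dim: int, local: bool, num_heads: int = 8):
        super().__init__()
        self.local = local
        if local:   # shallow stages: mix only nearby tokens, cost O(N * k^2)
            self.op = nn.Conv2d(dim, dim, kernel_size=5, padding=2, groups=dim)
        else:       # deep stages: relate all tokens, cost O(N^2)
            self.op = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        if self.local:
            return self.op(x)
        B, C, H, W = x.shape
        t = x.flatten(2).transpose(1, 2)                  # (B, N, C) tokens
        t, _ = self.op(t, t, t, need_weights=False)
        return t.transpose(1, 2).reshape(B, C, H, W)

# shallow stage: local mixing on a large 56x56 grid (cheap)
y1 = RelationAggregator(64, local=True)(torch.randn(1, 64, 56, 56))
# deep stage: global attention on a downsampled 14x14 grid
y2 = RelationAggregator(320, local=False)(torch.randn(1, 320, 14, 14))
```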

Without any extra training data, our UniFormer achieves 86.3 top-1 accuracy on ImageNet-1K classification. With only ImageNet-1K pre-training, it achieves state-of-the-art performance in a broad range of downstream tasks: 82.9/84.8 top-1 accuracy on Kinetics-400/600 and 60.9/71.2 top-1 accuracy on Something-Something V1/V2 video classification, 53.8 box AP and 46.4 mask AP on COCO object detection, 50.8 mIoU on ADE20K semantic segmentation, and 77.4 AP on COCO pose estimation.

[Figure: UniFormer teaser]

Main results on ImageNet-1K

Please see image_classification for more details.

More models with large resolution and token labeling will be released soon.

| Model | Pretrain | Resolution | Top-1 | #Params | FLOPs |
| --- | --- | --- | --- | --- | --- |
| UniFormer-S | ImageNet-1K | 224x224 | 82.9 | 22M | 3.6G |
| UniFormer-S† | ImageNet-1K | 224x224 | 83.4 | 24M | 4.2G |
| UniFormer-B | ImageNet-1K | 224x224 | 83.9 | 50M | 8.3G |
| UniFormer-S+TL | ImageNet-1K | 224x224 | 83.4 | 22M | 3.6G |
| UniFormer-S†+TL | ImageNet-1K | 224x224 | 83.9 | 24M | 4.2G |
| UniFormer-B+TL | ImageNet-1K | 224x224 | 85.1 | 50M | 8.3G |
| UniFormer-L+TL | ImageNet-1K | 224x224 | 85.6 | 100M | 12.6G |
| UniFormer-S+TL | ImageNet-1K | 384x384 | 84.6 | 22M | 11.9G |
| UniFormer-S†+TL | ImageNet-1K | 384x384 | 84.9 | 24M | 13.7G |
| UniFormer-B+TL | ImageNet-1K | 384x384 | 86.0 | 50M | 27.2G |
| UniFormer-L+TL | ImageNet-1K | 384x384 | 86.3 | 100M | 39.2G |
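For reference, a hedged inference sketch for these classification models. The constructor name uniformer_small and the checkpoint filename below are assumptions for illustration, not verified against the repo; see image_classification for the real entry points.

```python
import torch
from PIL import Image
from torchvision import transforms

# hypothetical import; the model definitions live under image_classification/
from models import uniformer_small

model = uniformer_small()
state = torch.load('uniformer_small_in1k.pth', map_location='cpu')  # assumed file
model.load_state_dict(state.get('model', state))
model.eval()

# standard 224x224 ImageNet preprocessing, matching the table's resolution
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open('cat.jpg')).unsqueeze(0)
with torch.no_grad():
    top1 = model(x).argmax(dim=1)   # predicted ImageNet-1K class index
```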

Main results on Kinetics video classification

Please see video_classification for more details.

| Model | Pretrain | #Frame | Sampling Stride | FLOPs | K400 Top-1 | K600 Top-1 |
| --- | --- | --- | --- | --- | --- | --- |
| UniFormer-S | ImageNet-1K | 16x1x4 | 4 | 167G | 80.8 | 82.8 |
| UniFormer-S | ImageNet-1K | 16x1x4 | 8 | 167G | 80.8 | 82.7 |
| UniFormer-S | ImageNet-1K | 32x1x4 | 4 | 438G | 82.0 | - |
| UniFormer-B | ImageNet-1K | 16x1x4 | 4 | 387G | 82.0 | 84.0 |
| UniFormer-B | ImageNet-1K | 16x1x4 | 8 | 387G | 81.7 | 83.4 |
| UniFormer-B | ImageNet-1K | 32x1x4 | 4 | 1036G | 82.9 | 84.5* |

#Frame = #input_frame x #crop x #clip
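For example, 16x1x4 denotes 16 input frames per clip, 1 spatial crop, and 4 temporal clips, so each video is evaluated on 16 × 1 × 4 = 64 frames in total; the FLOPs column appears to cover all of these views (the 16x1x4 rows cost roughly 4× a single clip).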

* Since Kinetics-600 is too large to train on a single node (>1 month with 8 A100 GPUs), we provide a model trained on multiple nodes (around 2 weeks with 32 V100 GPUs); the result is lower because the hyperparameters were not tuned.

Main results on Something-Something video classification

Please see video_classification for more details.

| Model | Pretrain | #Frame | FLOPs | SSV1 Top-1 | SSV2 Top-1 |
| --- | --- | --- | --- | --- | --- |
| UniFormer-S | K400 | 16x3x1 | 125G | 57.2 | 67.7 |
| UniFormer-S | K600 | 16x3x1 | 125G | 57.6 | 69.4 |
| UniFormer-S | K400 | 32x3x1 | 329G | 58.8 | 69.0 |
| UniFormer-S | K600 | 32x3x1 | 329G | 59.9 | 70.4 |
| UniFormer-B | K400 | 16x3x1 | 290G | 59.1 | 70.4 |
| UniFormer-B | K600 | 16x3x1 | 290G | 58.8 | 70.2 |
| UniFormer-B | K400 | 32x3x1 | 777G | 60.9 | 71.1 |
| UniFormer-B | K600 | 32x3x1 | 777G | 61.0 | 71.2 |

#Frame = #input_frame x #crop x #clip

Main results on COCO object detection

Please see object_detection for more details.

Mask R-CNN

| Backbone | Lr Schd | box mAP | mask mAP | #Params | FLOPs |
| --- | --- | --- | --- | --- | --- |
| UniFormer-Sh14 | 1x | 45.6 | 41.6 | 41M | 269G |
| UniFormer-Sh14 | 3x+MS | 48.2 | 43.4 | 41M | 269G |
| UniFormer-Bh14 | 1x | 47.4 | 43.1 | 69M | 399G |
| UniFormer-Bh14 | 3x+MS | 50.3 | 44.8 | 69M | 399G |

Cascade Mask R-CNN

| Backbone | Lr Schd | box mAP | mask mAP | #Params | FLOPs |
| --- | --- | --- | --- | --- | --- |
| UniFormer-Sh14 | 3x+MS | 52.1 | 45.2 | 79M | 747G |
| UniFormer-Bh14 | 3x+MS | 53.8 | 46.4 | 107M | 878G |

Main results on ADE20K semantic segmentation

Please see semantic_segmentation for more details.

Semantic FPN

| Backbone | Lr Schd | mIoU | #Params | FLOPs |
| --- | --- | --- | --- | --- |
| UniFormer-Sh14 | 80K | 46.3 | 25M | 172G |
| UniFormer-Bh14 | 80K | 47.0 | 54M | 328G |
| UniFormer-Sw32 | 80K | 45.6 | 25M | 183G |
| UniFormer-Sh32 | 80K | 46.2 | 25M | 199G |
| UniFormer-S | 80K | 46.6 | 25M | 247G |
| UniFormer-Bw32 | 80K | 47.0 | 54M | 310G |
| UniFormer-Bh32 | 80K | 47.7 | 54M | 350G |
| UniFormer-B | 80K | 48.0 | 54M | 471G |

UperNet

| Backbone | Lr Schd | mIoU | MS mIoU | #Params | FLOPs |
| --- | --- | --- | --- | --- | --- |
| UniFormer-Sh14 | 160K | 46.9 | 48.0 | 52M | 947G |
| UniFormer-Bh14 | 160K | 48.9 | 50.0 | 80M | 1085G |
| UniFormer-Sw32 | 160K | 46.6 | 48.4 | 52M | 939G |
| UniFormer-Sh32 | 160K | 47.0 | 48.5 | 52M | 955G |
| UniFormer-S | 160K | 47.6 | 48.5 | 52M | 1004G |
| UniFormer-Bw32 | 160K | 49.1 | 50.6 | 80M | 1066G |
| UniFormer-Bh32 | 160K | 49.5 | 50.7 | 80M | 1106G |
| UniFormer-B | 160K | 50.0 | 50.8 | 80M | 1227G |

Main results on COCO pose estimation

Please see pose_estimation for more details.

Top-Down

| Backbone | Input Size | AP | AP50 | AP75 | ARM | ARL | AR | FLOPs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UniFormer-S | 256x192 | 74.0 | 90.3 | 82.2 | 66.8 | 76.7 | 79.5 | 4.7G |
| UniFormer-S | 384x288 | 75.9 | 90.6 | 83.4 | 68.6 | 79.0 | 81.4 | 11.1G |
| UniFormer-S | 448x320 | 76.2 | 90.6 | 83.2 | 68.6 | 79.4 | 81.4 | 14.8G |
| UniFormer-B | 256x192 | 75.0 | 90.6 | 83.0 | 67.8 | 77.7 | 80.4 | 9.2G |
| UniFormer-B | 384x288 | 76.7 | 90.8 | 84.0 | 69.3 | 79.7 | 81.4 | 14.8G |
| UniFormer-B | 448x320 | 77.4 | 91.1 | 84.4 | 70.2 | 80.6 | 82.5 | 29.6G |

Cite UniFormer

If you find this repository useful, please use the following BibTeX entries for citation.

@misc{li2022uniformer,
      title={UniFormer: Unifying Convolution and Self-attention for Visual Recognition}, 
      author={Kunchang Li and Yali Wang and Junhao Zhang and Peng Gao and Guanglu Song and Yu Liu and Hongsheng Li and Yu Qiao},
      year={2022},
      eprint={2201.09450},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{li2022uniformer_video,
      title={UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning}, 
      author={Kunchang Li and Yali Wang and Peng Gao and Guanglu Song and Yu Liu and Hongsheng Li and Yu Qiao},
      year={2022},
      eprint={2201.04676},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

License

This project is released under the MIT license. Please see the LICENSE file for more information.

Contributors and Contact Information

UniFormer is maintained by Kunchang Li.

For help or issues using UniFormer, please submit a GitHub issue.

For other communications related to UniFormer, please contact Kunchang Li ([email protected]).
