
Accepted at ICCV 2019 (oral talk)!

CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features

Official PyTorch implementation of the CutMix regularizer | Paper | Pretrained Models

Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo.

Clova AI Research, NAVER Corp.

Our implementation is based on these repositories:

Abstract

Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved effective for guiding the model to attend to less discriminative parts of objects (e.g. the leg rather than the head of a person), thereby letting the network generalize better and gain better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels from training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images, and the ground-truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, yields consistent performance gains on Pascal VOC detection and MS-COCO image captioning benchmarks. We also show that CutMix improves model robustness against input corruptions and its out-of-distribution detection performance.
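The core operation described above can be sketched in a few lines. The following is an illustrative NumPy version (not the repository's exact code): a random box covering a (1 − λ) fraction of the image area is cut from a shuffled copy of the batch and pasted in, and λ is re-adjusted to the exact pasted area.

```python
import numpy as np

def rand_bbox(height, width, lam):
    """Sample a random box whose area is roughly (1 - lam) of the image."""
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(height * cut_ratio), int(width * cut_ratio)
    # Uniformly sample the box centre, then clip the box to the image bounds.
    cy, cx = np.random.randint(height), np.random.randint(width)
    y1, y2 = np.clip(cy - cut_h // 2, 0, height), np.clip(cy + cut_h // 2, 0, height)
    x1, x2 = np.clip(cx - cut_w // 2, 0, width), np.clip(cx + cut_w // 2, 0, width)
    return y1, y2, x1, x2

def cutmix(batch, labels, alpha=1.0):
    """Cut a patch from a shuffled copy of the batch (NCHW) and paste it in,
    returning both label sets and the mixing ratio lam."""
    lam = np.random.beta(alpha, alpha)
    perm = np.random.permutation(len(batch))
    y1, y2, x1, x2 = rand_bbox(batch.shape[2], batch.shape[3], lam)
    mixed = batch.copy()
    mixed[:, :, y1:y2, x1:x2] = batch[perm, :, y1:y2, x1:x2]
    # Adjust lam to the exact pasted area (clipping can shrink the box).
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (batch.shape[2] * batch.shape[3])
    return mixed, labels, labels[perm], lam
```

The mixed labels are then used by weighting the loss, e.g. `loss = lam * criterion(output, targets_a) + (1 - lam) * criterion(output, targets_b)`.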

Overview of the results of Mixup, Cutout, and CutMix.


Updates

23 May, 2019: Initial upload

Getting Started

Requirements

  • Python3
  • PyTorch (> 1.0)
  • torchvision (> 0.2)
  • NumPy

Train Examples

  • CIFAR-100: we used 2 GPUs to train on CIFAR-100.
python train.py \
--net_type pyramidnet \
--dataset cifar100 \
--depth 200 \
--alpha 240 \
--batch_size 64 \
--lr 0.25 \
--expname PyraNet200 \
--epochs 300 \
--beta 1.0 \
--cutmix_prob 0.5 \
--no-verbose
  • ImageNet: we used 4 GPUs to train on ImageNet.
python train.py \
--net_type resnet \
--dataset imagenet \
--batch_size 256 \
--lr 0.1 \
--depth 50 \
--epochs 300 \
--expname ResNet50 \
-j 40 \
--beta 1.0 \
--cutmix_prob 1.0 \
--no-verbose
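In the commands above, --beta sets the parameter of the Beta distribution from which the mixing ratio λ is sampled, and --cutmix_prob is the per-batch probability of applying CutMix at all. A minimal sketch of how such flags typically gate the augmentation (illustrative, not the repository's exact code):

```python
import random

def should_apply_cutmix(beta, cutmix_prob, rng=random):
    """Decide per batch whether to apply CutMix and, if so,
    sample the mixing ratio lam ~ Beta(beta, beta)."""
    if beta > 0 and rng.random() < cutmix_prob:
        return True, rng.betavariate(beta, beta)
    return False, 1.0  # lam = 1 leaves the batch unchanged
```

With --beta 1.0 the mixing ratio is uniform on (0, 1); --cutmix_prob 0.5 (CIFAR-100) applies CutMix to roughly half the batches, while --cutmix_prob 1.0 (ImageNet) applies it to every batch.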

Test Examples Using Pretrained Models

python test.py \
--net_type pyramidnet \
--dataset cifar100 \
--batch_size 64 \
--depth 200 \
--alpha 240 \
--pretrained /set/your/model/path/model_best.pth.tar
python test.py \
--net_type resnet \
--dataset imagenet \
--batch_size 64 \
--depth 50 \
--pretrained /set/your/model/path/model_best.pth.tar

Experimental Results and Pretrained Models

  • PyramidNet-200 pretrained on CIFAR-100 dataset:
Method | Top-1 Error (%) | Model file
PyramidNet-200 [CVPR'17] (baseline) | 16.45 | model
PyramidNet-200 + CutMix | 14.23 | model
PyramidNet-200 + ShakeDrop [arXiv'18] + CutMix | 13.81 | -
PyramidNet-200 + Mixup [ICLR'18] | 15.63 | model
PyramidNet-200 + Manifold Mixup [ICML'19] | 16.14 | model
PyramidNet-200 + Cutout [arXiv'17] | 16.53 | model
PyramidNet-200 + DropBlock [NeurIPS'18] | 15.73 | model
PyramidNet-200 + Cutout + Label smoothing | 15.61 | model
PyramidNet-200 + DropBlock + Label smoothing | 15.16 | model
PyramidNet-200 + Cutout + Mixup | 15.46 | model
  • ResNet models pretrained on ImageNet dataset:
Method | Top-1 Error (%) | Model file
ResNet-50 [CVPR'16] (baseline) | 23.68 | model
ResNet-50 + CutMix | 21.40 | model
ResNet-50 + Feature CutMix | 21.80 | model
ResNet-50 + Mixup [ICLR'18] | 22.58 | model
ResNet-50 + Manifold Mixup [ICML'19] | 22.50 | model
ResNet-50 + Cutout [arXiv'17] | 22.93 | model
ResNet-50 + AutoAugment [CVPR'19] | 22.40* | -
ResNet-50 + DropBlock [NeurIPS'18] | 21.87* | -
ResNet-101 + CutMix | 20.17 | model
ResNet-152 + CutMix | 19.20 | model
ResNeXt-101 (32x4d) + CutMix | 19.47 | model

* denotes results reported in the original papers

Transfer Learning Results

Backbone | ImageNet Cls (%) | ImageNet Loc (%) | CUB200 Loc (%) | Detection (SSD) (mAP) | Detection (Faster-RCNN) (mAP) | Image Captioning (BLEU-4)
ResNet-50 | 23.68 | 46.3 | 49.41 | 76.7 | 75.6 | 22.9
ResNet-50 + Mixup | 22.58 | 45.84 | 49.3 | 76.6 | 73.9 | 23.2
ResNet-50 + Cutout | 22.93 | 46.69 | 52.78 | 76.8 | 75.0 | 24.0
ResNet-50 + CutMix | 21.60 | 46.25 | 54.81 | 77.6 | 76.7 | 24.9

Third-party Implementations

Citation

@inproceedings{yun2019cutmix,
    title={CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features},
    author={Yun, Sangdoo and Han, Dongyoon and Oh, Seong Joon and Chun, Sanghyuk and Choe, Junsuk and Yoo, Youngjoon},
    booktitle = {International Conference on Computer Vision (ICCV)},
    year={2019},
    pubstate={published},
    tppubtype={inproceedings}
}

License

Copyright (c) 2019-present NAVER Corp.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.