
d-li14 / SAN

License: MIT
[ECCV 2020] Scale Adaptive Network: Learning to Learn Parameterized Classification Networks for Scalable Input Images

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to SAN

Pyramidnet
Torch implementation of the paper "Deep Pyramidal Residual Networks" (https://arxiv.org/abs/1610.02915).
Stars: ✭ 121 (+195.12%)
Mutual labels:  imagenet, resnet
Senet Caffe
A Caffe Re-Implementation of SENet
Stars: ✭ 169 (+312.2%)
Mutual labels:  imagenet, resnet
Imagenet
This implements training of popular model architectures, such as AlexNet, ResNet and VGG, on the ImageNet dataset (currently supporting AlexNet, VGG, ResNet, SqueezeNet and DenseNet)
Stars: ✭ 126 (+207.32%)
Mutual labels:  imagenet, resnet
Hbonet
[ICCV 2019] Harmonious Bottleneck on Two Orthogonal Dimensions
Stars: ✭ 94 (+129.27%)
Mutual labels:  imagenet, mobilenetv2
Pyramidnet Pytorch
A PyTorch implementation for PyramidNets (Deep Pyramidal Residual Networks, https://arxiv.org/abs/1610.02915)
Stars: ✭ 234 (+470.73%)
Mutual labels:  imagenet, resnet
Resnet Imagenet Caffe
train resnet on imagenet from scratch with caffe
Stars: ✭ 105 (+156.1%)
Mutual labels:  imagenet, resnet
Iresnet
Improved Residual Networks (https://arxiv.org/pdf/2004.04989.pdf)
Stars: ✭ 163 (+297.56%)
Mutual labels:  imagenet, resnet
Pretrained Models.pytorch
Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.
Stars: ✭ 8,318 (+20187.8%)
Mutual labels:  imagenet, resnet
Fusenet
Deep fusion project of deeply-fused nets, and the study on the connection to ensembling
Stars: ✭ 230 (+460.98%)
Mutual labels:  imagenet, resnet
Octconv.pytorch
PyTorch implementation of Octave Convolution with pre-trained Oct-ResNet and Oct-MobileNet models
Stars: ✭ 229 (+458.54%)
Mutual labels:  imagenet, resnet
Pytorch Classification
Classification with PyTorch.
Stars: ✭ 1,268 (+2992.68%)
Mutual labels:  imagenet, resnet
BAKE
Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification
Stars: ✭ 79 (+92.68%)
Mutual labels:  imagenet, knowledge-distillation
Caffe Model
Caffe models (including classification, detection and segmentation) and deploy files for famous networks
Stars: ✭ 1,258 (+2968.29%)
Mutual labels:  imagenet, resnet
Ir Net
This project is the PyTorch implementation of the accepted CVPR 2020 paper: Forward and Backward Information Retention for Accurate Binary Neural Networks.
Stars: ✭ 119 (+190.24%)
Mutual labels:  imagenet, resnet
Mobilenet Caffe
Caffe Implementation of Google's MobileNets (v1 and v2)
Stars: ✭ 1,217 (+2868.29%)
Mutual labels:  imagenet, mobilenetv2
Pytorch Imagenet Cifar Coco Voc Training
Training examples and results for the ImageNet (ILSVRC2012)/CIFAR100/COCO2017/VOC2007+VOC2012 datasets. Image classification/object detection. Includes ResNet/EfficientNet/VovNet/DarkNet/RegNet/RetinaNet/FCOS/CenterNet/YOLOv3.
Stars: ✭ 130 (+217.07%)
Mutual labels:  imagenet, resnet
Imagenet resnet tensorflow2.0
Train ResNet on ImageNet in TensorFlow 2.0; complete training code for ResNet on ImageNet.
Stars: ✭ 42 (+2.44%)
Mutual labels:  imagenet, resnet
Segmentationcpp
A C++ trainable semantic segmentation library based on LibTorch (PyTorch C++). Backbones: ResNet, ResNeXt. Architectures so far: FPN, U-Net, PAN, LinkNet, PSPNet, DeepLab-V3, DeepLab-V3+.
Stars: ✭ 49 (+19.51%)
Mutual labels:  imagenet, resnet
Mini Imagenet Tools
Tools for generating mini-ImageNet dataset and processing batches
Stars: ✭ 209 (+409.76%)
Mutual labels:  imagenet, meta-learning
head-network-distillation
[IEEE Access] "Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-constrained Edge Computing Systems" and [ACM MobiCom HotEdgeVideo 2019] "Distilled Split Deep Neural Networks for Edge-assisted Real-time Systems"
Stars: ✭ 27 (-34.15%)
Mutual labels:  imagenet, knowledge-distillation

Scale Adaptive Network

Official implementation of Scale Adaptive Network (SAN) as described in Learning to Learn Parameterized Classification Networks for Scalable Input Images (ECCV'20) by Duo Li, Anbang Yao and Qifeng Chen on the ILSVRC 2012 benchmark.

We present a meta-learning framework that dynamically parameterizes the main network conditioned on its input resolution at runtime, enabling efficient and flexible inference for arbitrarily switchable input resolutions.
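As a toy sketch of this mechanism (plain Python with made-up numbers; the names resolution_encoding, meta_parameterize, and modulate are illustrative, not the repository's API), a meta layer can map a scale encoding of the input resolution to per-channel modulation parameters for the main network:

```python
def resolution_encoding(size, base=224):
    # Encode the current input resolution relative to the base training size.
    return size / base

def meta_parameterize(encoding, weights, biases):
    # Toy meta layer: one linear map per channel from the scale encoding to
    # a (gamma, beta) pair that modulates the main network's features.
    return [(w * encoding + 1.0, b * encoding) for w, b in zip(weights, biases)]

def modulate(features, params):
    # Apply the generated per-channel affine parameters.
    return [gamma * f + beta for f, (gamma, beta) in zip(features, params)]

# The same main-network features receive two different effective
# parameterizations at two resolutions, with no per-size weight copies.
features = [0.5, -1.2, 2.0]
w, b = [0.1, -0.2, 0.05], [0.0, 0.3, -0.1]
out_224 = modulate(features, meta_parameterize(resolution_encoding(224), w, b))
out_96 = modulate(features, meta_parameterize(resolution_encoding(96), w, b))
```

The point of the sketch is only that a single set of meta weights yields a resolution-specific parameterization at runtime, which is the property SAN exploits for switchable-resolution inference.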

Requirements

Dependency

  • PyTorch 1.0+
  • NVIDIA-DALI (in development, not recommended)

Dataset

Download the ImageNet dataset and move validation images to labeled subfolders. To do this, you can use the following script: https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
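After running the script, a quick way to confirm the layout is to check that the validation directory contains only class subfolders (val/<wnid>/<image>.JPEG). The helper below, looks_prepared, is a hypothetical sketch, not part of the repository:

```python
import os
import tempfile

def looks_prepared(val_dir):
    # True when every top-level entry is a class subdirectory rather than
    # a flat pile of validation images.
    entries = os.listdir(val_dir)
    return bool(entries) and all(
        os.path.isdir(os.path.join(val_dir, e)) for e in entries)

# Demonstrate on a throwaway directory mimicking the expected layout.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "n01440764"))
ready = looks_prepared(root)
```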

Pre-trained Models

Baseline (individually trained on each resolution)

ResNet-18

Resolution Top-1 Acc. Download
224x224 70.974 Google Drive
192x192 69.754 Google Drive
160x160 68.482 Google Drive
128x128 66.360 Google Drive
96x96 62.560 Google Drive

ResNet-50

Resolution Top-1 Acc. Download
224x224 77.150 Google Drive
192x192 76.406 Google Drive
160x160 75.312 Google Drive
128x128 73.526 Google Drive
96x96 70.610 Google Drive

MobileNetV2

Please visit my repository mobilenetv2.pytorch.

SAN

Architecture Download
ResNet-18 Google Drive
ResNet-50 Google Drive
MobileNetV2 Google Drive

Training

ResNet-18/50

python imagenet.py \
    -a meta_resnet18/50 \
    -d <path-to-ILSVRC2012-data> \
    --epochs 120 \
    --lr-decay cos \
    -c <path-to-save-checkpoints> \
    --sizes <list-of-input-resolutions> \
    -j <num-workers> \
    --kd

The default value of --sizes is 224 192 160 128 96.

MobileNetV2

python imagenet.py \
    -a meta_mobilenetv2 \
    -d <path-to-ILSVRC2012-data> \
    --epochs 150 \
    --lr-decay cos \
    --lr 0.05 \
    --wd 4e-5 \
    -c <path-to-save-checkpoints> \
    --sizes <list-of-input-resolutions> \
    -j <num-workers> \
    --kd

The default value of --sizes is 224 192 160 128 96.

Testing

Proxy Inference (default)

python imagenet.py \
    -a <arch> \
    -d <path-to-ILSVRC2012-data> \
    --resume <checkpoint-file> \
    --sizes <list-of-input-resolutions> \
    -e \
    -j <num-workers>

Arguments are:

  • checkpoint-file: a checkpoint file previously downloaded from the Pre-trained Models section above.
  • list-of-input-resolutions: the resolutions to test, each evaluated with its own privatized BN statistics.

which gives Table 1 in the main paper and Table 5 in the supplementary materials.
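The privatized-BN idea behind proxy inference can be sketched with toy statistics (the numbers in bn_stats below are illustrative, not taken from the trained models): each training resolution keeps its own running mean/variance, and the test resolution selects which set to normalize with.

```python
# One set of running (mean, var) per training resolution (toy values).
bn_stats = {
    224: (0.40, 1.10),
    192: (0.38, 1.05),
    96: (0.25, 0.80),
}

def bn_normalize(x, size, eps=1e-5):
    # Pick the BN statistics privatized for this input resolution.
    mean, var = bn_stats[size]
    return (x - mean) / (var + eps) ** 0.5

# The same activation is normalized differently at different resolutions,
# because feature statistics shift with input size.
y_224 = bn_normalize(1.0, 224)
y_96 = bn_normalize(1.0, 96)
```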

Ideal Inference

Manually set the scale encoding here, which gives the left panel of Table 2 in the main paper.

Uncomment this line in the main script to enable post-hoc BN calibration, which gives the middle panel of Table 2 in the main paper.
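Post-hoc BN calibration itself is simple to sketch: with all weights frozen, stream a few batches at the target resolution and re-estimate the running mean and variance from them. Below is a minimal sketch with scalar activations; calibrate is a hypothetical helper, not the repository's implementation.

```python
def calibrate(batches):
    # Re-estimate BN statistics: aggregate the mean and (biased) variance
    # over all calibration activations, leaving the weights untouched.
    n, s, sq = 0, 0.0, 0.0
    for batch in batches:
        for x in batch:
            n += 1
            s += x
            sq += x * x
    mean = s / n
    var = sq / n - mean * mean
    return mean, var

# Two calibration batches of scalar activations at the target resolution.
mean, var = calibrate([[1.0, 2.0], [3.0, 4.0]])
```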

Data-Free Ideal Inference

Manually set the scale encoding here and its corresponding shift here, then uncomment this line to replace its above line, which gives Table 6 in the supplementary materials.

Comparison to MutualNet

MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution was accepted to ECCV 2020 as an oral. MutualNet and SAN are contemporary works sharing a similar motivation regarding switchable resolution during inference. We provide a comparison of top-1 validation accuracy on ImageNet at matched FLOPs, based on the common MobileNetV2 backbone.

Note that MutualNet training uses the resolution range {224, 192, 160, 128} with 4 network widths, while SAN training uses the resolution range {224, 192, 160, 128, 96} without tuning the network width, so the configurations in this comparison differ.

Method     Config (width-resolution)  MFLOPs  Top-1 Acc.
MutualNet  1.0-224   300  73.0
SAN        1.0-224   300  72.86
MutualNet  0.9-224   269  72.4
SAN        1.0-208   270  72.42
MutualNet  1.0-192   221  71.9
SAN        1.0-192   221  72.22
MutualNet  0.9-192   198  71.5
SAN        1.0-176   195  71.63
MutualNet  0.75-192  154  70.2
SAN        1.0-160   154  71.16
MutualNet  0.9-160   138  69.9
SAN        1.0-144   133  69.80
MutualNet  1.0-128    99  67.8
SAN        1.0-128    99  69.14
MutualNet  0.85-128   84  66.1
SAN        1.0-112    82  66.59
MutualNet  0.7-128    58  64.3
SAN        1.0-96     56  65.07

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{Li_2020_ECCV,
author = {Li, Duo and Yao, Anbang and Chen, Qifeng},
title = {Learning to Learn Parameterized Classification Networks for Scalable Input Images},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {August},
year = {2020}
}