gaohuang / MSDNet

License: MIT

Multi-Scale Dense Networks for Resource Efficient Image Classification (ICLR 2018 Oral)

Programming language: Lua


MSDNet

This repository provides the code for the paper Multi-Scale Dense Networks for Resource Efficient Image Classification.

Update on April 3, 2019 -- PyTorch implementation released!

A PyTorch implementation of MSDNet can be found here.

Introduction

This paper studies convolutional networks that require limited computational resources at test time. We develop a new network architecture that performs on par with state-of-the-art convolutional networks, whilst facilitating prediction in two settings: (1) an anytime-prediction setting in which the network's prediction for one example is progressively updated, facilitating the output of a prediction at any time; and (2) a batch computational budget setting in which a fixed amount of computation is available to classify a set of examples that can be spent unevenly across 'easier' and 'harder' examples.
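Both settings rely on the same early-exit mechanism: evaluate the cheapest classifier first and stop as soon as its prediction is confident enough. A minimal confidence-threshold sketch in plain Python (names and the threshold policy are illustrative; the paper's budgeted setting derives per-classifier thresholds from the computational budget rather than using a fixed one):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def anytime_predict(stage_logits, threshold=0.9):
    """Early-exit inference over a cascade of classifiers.

    stage_logits: one logit vector per intermediate classifier,
    cheapest first. Returns (exit_stage, predicted_class, confidence):
    the first classifier whose top softmax probability clears
    `threshold` wins; otherwise the final classifier decides.
    """
    for i, logits in enumerate(stage_logits):
        probs = softmax(logits)
        conf = max(probs)
        pred = probs.index(conf)
        if conf >= threshold:
            return i, pred, conf   # confident enough: exit early
    return i, pred, conf           # fell through to the last classifier
```

Under a batch budget, "easy" examples exit at early classifiers and leave more computation for "hard" ones.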

Figure 1: MSDNet layout (2D).

Figure 2: MSDNet layout (3D).

Results

(a) anytime-prediction setting

Figure 3: Anytime prediction on ImageNet.

(b) batch computational budget setting

Figure 4: Prediction under batch computational budget on ImageNet.

Figure 5: Random example images from the ImageNet classes Red wine and Volcano. Top row: images that exited from the first classification layer of an MSDNet with a correct prediction; bottom row: images that the first classifier failed to classify correctly but that were correctly predicted at the last classifier.

Usage

Our code is written under the framework of Torch ResNet (https://github.com/facebook/fb.resnet.torch). The training scripts come with several options, which can be listed with the --help flag.

th main.lua --help

Configuration

In all experiments, we use a validation set for model selection. We hold out 5,000 training images on CIFAR and 50,000 on ImageNet as the validation set.
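The holdout amounts to a shuffled index split of the training set. A minimal sketch (the function name and seed are illustrative; the pre-trained models below ship their own pre-generated index files, which should be used for reproducing the reported numbers):

```python
import random

def split_validation(num_train, num_val, seed=0):
    """Partition range(num_train) into disjoint train/validation
    index lists, holding out `num_val` images for model selection
    (e.g. 5,000 on CIFAR, 50,000 on ImageNet)."""
    rng = random.Random(seed)
    indices = list(range(num_train))
    rng.shuffle(indices)
    return indices[num_val:], indices[:num_val]  # train, validation
```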

Training recipe

Train an MSDNet with 10 classifiers attached to every other layer for anytime prediction:

th main.lua -netType msdnet -dataset cifar10 -batchSize 64 -nEpochs 300 -nBlocks 10 -stepmode even -step 2 -base 4

Train an MSDNet with 7 classifiers whose spacing increases linearly, for efficient batch computation:

th main.lua -netType msdnet -dataset cifar10 -batchSize 64 -nEpochs 300 -nBlocks 7 -stepmode lin_grow -step 1 -base 1
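Our reading of the -stepmode, -step, and -base flags can be sketched as follows (illustrative only; the repository's exact layer indexing may differ): `even` places classifiers a fixed `step` layers apart after an initial `base` layers, while `lin_grow` widens the gap by `step` after every classifier, so early exits stay cheap and later ones see much deeper features.

```python
def classifier_layers(n_blocks, base, step, mode="even"):
    """Layer depths at which the n_blocks intermediate classifiers
    are attached, for the two step modes described above."""
    layers = []
    depth = base
    for i in range(n_blocks):
        layers.append(depth)
        # even: constant gap; lin_grow: gap grows by `step` each time
        gap = step if mode == "even" else step * (i + 1)
        depth += gap
    return layers
```

For the two recipes above this gives evenly spaced depths 4, 6, 8, ... for the 10-classifier network and increasingly spaced depths 1, 2, 4, 7, ... for the 7-classifier one.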

Pre-trained ImageNet Models

  1. Download the model checkpoints and the validation set indices.

  2. Testing script: th main.lua -dataset imagenet -testOnly true -resume <path-to-.t7-model> -data <path-to-image-net-data> -gen <path-to-validation-set-indices>

FAQ

  1. How to calculate the FLOPs (or multiply-add operations) of a model?

We strongly recommend doing it automatically. Please refer to the op-counter project (LuaTorch) or the script in CondenseNet (PyTorch). The basic idea of these op counters is to register a hook on every module before the forward pass; during the forward pass, each hook records the operations implied by that layer's input and output shapes.
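The arithmetic such a hook performs per layer is simple shape bookkeeping. A plain-Python illustration of the counting rules (a sketch of the idea, not the op-counter project's code):

```python
def conv2d_muladds(in_c, out_c, k, out_h, out_w, groups=1):
    """Multiply-adds for one k x k convolution: each of the
    out_h * out_w * out_c output elements costs k*k*(in_c/groups)
    multiply-add operations."""
    return out_h * out_w * out_c * (k * k * in_c // groups)

def linear_muladds(in_features, out_features):
    """A fully connected layer is one mul-add per weight."""
    return in_features * out_features
```

A hook-based counter simply calls rules like these for every module it fires on and sums the results over one forward pass.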
