Orthogonality in CNNs

Code implementation of Restricted Isometry Property (RIP) based orthogonality regularizers, proposed for the image classification task on various state-of-the-art ResNet-based architectures.

This repository provides an introduction to, the implementation of, and the results achieved in the paper: "Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?", NeurIPS 2018 [pdf]

Introduction

Orthogonal network weights are known to be a favorable property for training deep convolutional neural networks. Through this work, we look for alternate and more effective ways to enforce orthogonality in deep CNNs. We develop novel orthogonality regularizations for training deep CNNs, utilizing advanced analytical tools such as mutual coherence and the restricted isometry property. These plug-and-play regularizations can be conveniently incorporated into the training of almost any CNN without extra hassle. We then benchmark their effects on state-of-the-art models (ResNet, WideResNet, and ResNeXt) on several of the most popular computer vision datasets: CIFAR-10, CIFAR-100, SVHN, and ImageNet. We observe consistent performance gains after applying these regularizations, in terms of both the final accuracies achieved and faster, more stable convergence.
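As a concrete illustration of the plug-and-play claim, here is a minimal sketch of how such a regularizer could be folded into an otherwise standard PyTorch training step. The function names (`train_step`, `reg_fn`) and the weight-selection heuristic are illustrative assumptions, not code from this repository.

```python
import torch.nn as nn

def train_step(model, images, labels, optimizer, reg_fn, lam=1e-4):
    """One SGD step with an added orthogonality penalty (illustrative)."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    # Apply the penalty to every multi-dimensional weight (conv / linear);
    # biases and BatchNorm parameters are left unregularized.
    penalty = sum(reg_fn(p) for name, p in model.named_parameters()
                  if 'weight' in name and p.dim() > 1)
    (loss + lam * penalty).backward()
    optimizer.step()
    return loss.item()
```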

Illustration

Figure 1. Validation curves achieved for the different proposed regularizers.

Environment and Datasets Used

  • Linux
  • PyTorch 0.4
  • Keras 2.2.4
  • CUDA 9.1
  • CIFAR-10 and CIFAR-100
  • SVHN
  • ImageNet

Architectures Used

  • ResNet
  • Wide ResNet
  • PreResNet
  • ResNeXt

Regularizers Proposed

  • Single Sided (SO)
  • Double Sided (DSO)
  • Mutual Coherence Based (MC)
  • Restricted Isometry Property (SRIP), the best performing (sketched below)
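The four penalties above can be summarized in code. The sketch below paraphrases the definitions from the paper; it is not the repository's exact implementation, and the helper names and the short power iteration used to estimate SRIP's spectral norm are our own simplifications.

```python
import torch

def _gram_minus_identity(weight):
    # Reshape a conv kernel (out, in, kh, kw) into a matrix W with one
    # column per output filter, then form the Gram residual W^T W - I.
    W = weight.reshape(weight.shape[0], -1).t()
    return W.t() @ W - torch.eye(W.shape[1], device=W.device)

def so(weight):
    # Single Sided (SO): squared Frobenius norm of W^T W - I.
    return _gram_minus_identity(weight).pow(2).sum()

def dso(weight):
    # Double Sided (DSO): penalize both W^T W - I and W W^T - I.
    W = weight.reshape(weight.shape[0], -1).t()
    res_c = W.t() @ W - torch.eye(W.shape[1], device=W.device)
    res_r = W @ W.t() - torch.eye(W.shape[0], device=W.device)
    return res_c.pow(2).sum() + res_r.pow(2).sum()

def mc(weight):
    # Mutual Coherence (MC): largest absolute entry of W^T W - I.
    return _gram_minus_identity(weight).abs().max()

def srip(weight, iters=2):
    # SRIP: spectral norm of W^T W - I, estimated by power iteration
    # (the residual is symmetric, so ||Mv|| converges to the spectral norm).
    M = _gram_minus_identity(weight)
    v = torch.randn(M.shape[1], 1, device=M.device)
    for _ in range(iters):
        v = M @ v
        v = v / (v.norm() + 1e-12)
    return (M @ v).norm()
```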

Wide ResNet CIFAR

For the CIFAR datasets, we choose a Wide ResNet architecture with a depth of 28 and a width of 10, which gives the best results among Wide ResNet models with a comparable number of parameters. To train on CIFAR-10 using 2 GPUs:

CUDA_VISIBLE_DEVICES=6,7 python train_n.py --ngpu 2

To train on CIFAR-100 using 2 GPUs:

CUDA_VISIBLE_DEVICES=6,7 python train_n.py --ngpu 2 --dataset cifar100

After the training phase, you can find the saved model in the runs folder.
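For example, a saved checkpoint can be inspected as below; the exact filename under runs/ and the keys stored in it depend on the training script's arguments, so treat both as assumptions.

```python
import torch

# Hypothetical path; substitute the file that train_n.py actually wrote.
ckpt = torch.load('runs/model_best.pth.tar', map_location='cpu')
print(ckpt.keys())  # typically a state dict plus training metadata
```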

Wide ResNet SVHN

For the SVHN dataset, we choose a Wide ResNet architecture with a depth of 16 and a width of 8, which gives the best results among Wide ResNet models with a comparable number of parameters.

CUDA_VISIBLE_DEVICES=0 python train.py --dataset svhn --model wideresnet --learning_rate 0.01 --epochs 160

Result

All numbers are test error rates (%).

Network                 CIFAR-10   CIFAR-100   SVHN
WideResNet              4.16       20.50       1.60
WideResNet + SRIP Reg   3.60       18.19       1.52

ResNet110 CIFAR

We trained the ResNet110 model on the CIFAR-10 and CIFAR-100 datasets and achieved an improvement in test accuracy compared to a model that does not use any form of regularization. The code for this part is written in Keras, using the base code from the official Keras repo (https://github.com/keras-team/keras/blob/master/examples/cifar10_resnet.py) for a bottleneck-based architecture.
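For orientation, a custom kernel regularizer is one natural way to attach an orthogonality penalty to Keras Conv2D layers. The sketch below uses the SO-style penalty and tf.keras for portability; it is an assumption about the approach, not this repo's exact code (which targets Keras 2.2.4).

```python
import tensorflow as tf
from tensorflow import keras

class SORegularizer(keras.regularizers.Regularizer):
    """SO-style orthogonality penalty ||W^T W - I||_F^2 (illustrative)."""

    def __init__(self, lam=1e-4):
        self.lam = lam

    def __call__(self, weight):
        # Flatten a (kh, kw, in, out) kernel into a matrix with one column
        # per output filter, then penalize the Gram residual W^T W - I.
        W = tf.reshape(weight, (-1, weight.shape[-1]))
        gram = tf.matmul(W, W, transpose_a=True)
        residual = gram - tf.eye(tf.shape(gram)[0])
        return self.lam * tf.reduce_sum(tf.square(residual))

# Attach the penalty to a layer; Keras adds it to the model's total loss.
conv = keras.layers.Conv2D(64, 3, padding='same',
                           kernel_regularizer=SORegularizer(1e-4))
```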

Usage

CUDA_VISIBLE_DEVICES=2 python resnet_cifar_new.py

Result

All numbers are test error rates (%).

Network                CIFAR-10
ResNet110              7.11
ResNet110 + SRIP Reg   5.46

PreResNet ImageNet

We trained ResNet-34, ResNet-50, and PreResNet-34 on the ImageNet dataset and achieved a better Top-5 accuracy compared to contemporary results. The basic code was taken from the official PyTorch site.

Usage

CUDA_VISIBLE_DEVICES=4,5,6,7 python train_n.py

Result

Network        ImageNet Top-5 Error (%)   Regularizer
PreResNet 34   9.79                       None
PreResNet 34   8.85                       SRIP
ResNet 34      9.84                       None
ResNet 34      8.392                      SRIP

Pre-Trained Networks

Pre-trained models are saved on Google Drive.

Other frameworks

Acknowledgement

References

Citation

If you find our code helpful in your research or work, please cite our paper.

@ARTICLE{2018arXiv181009102B,
  author = {{Bansal}, N. and {Chen}, X. and {Wang}, Z.},
   title = "{Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?}",
 journal = {ArXiv e-prints},
archivePrefix = "arXiv",
  eprint = {1810.09102},
keywords = {Computer Science - Machine Learning, Computer Science - Computer Vision and Pattern Recognition, Statistics - Machine Learning},
    year = 2018,
   month = oct,
  adsurl = {http://adsabs.harvard.edu/abs/2018arXiv181009102B},
 adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}