junyuseu / Pytorch Cifar Models

3.41% and 17.11% error on CIFAR-10 and CIFAR-100


Experiments on CIFAR datasets with PyTorch

Introduction

Reimplementations of state-of-the-art CNN models on the CIFAR datasets with PyTorch, currently including:

1. ResNet

2. PreActResNet

3. WideResNet

4. ResNeXt

5. DenseNet

Other models will be added later.
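To make the family of models above concrete, here is a minimal sketch of a CIFAR-style ResNet (depth = 6n + 2; resnet20 corresponds to n = 3). This is an illustrative reconstruction, not the repository's actual code in the models folder, which may differ in detail (initialization, shortcut type, naming):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """3x3 -> 3x3 residual block; 1x1 projection shortcut when shapes change."""
    def __init__(self, in_planes, planes, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes, 1, stride=stride, bias=False),
                nn.BatchNorm2d(planes),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))

class CifarResNet(nn.Module):
    """ResNet for 32x32 inputs: 3 stages of n blocks with 16/32/64 channels."""
    def __init__(self, n, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(16)
        self.layer1 = self._make_stage(16, 16, n, stride=1)
        self.layer2 = self._make_stage(16, 32, n, stride=2)
        self.layer3 = self._make_stage(32, 64, n, stride=2)
        self.fc = nn.Linear(64, num_classes)

    @staticmethod
    def _make_stage(in_planes, planes, n, stride):
        layers = [BasicBlock(in_planes, planes, stride)]
        layers += [BasicBlock(planes, planes) for _ in range(n - 1)]
        return nn.Sequential(*layers)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.layer3(self.layer2(self.layer1(out)))
        out = F.adaptive_avg_pool2d(out, 1).flatten(1)  # global average pooling
        return self.fc(out)

def resnet20_cifar(num_classes=10):
    return CifarResNet(n=3, num_classes=num_classes)
```

With n = 3 this sketch has roughly 0.27M parameters, matching the resnet20 row in the results table below.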

Requirements: software

Requirements for PyTorch

Requirements: hardware

For most experiments, one or two K40 GPUs (~11 GB of memory each) are enough, because PyTorch is very memory-efficient. However, to train DenseNet on CIFAR-10 or CIFAR-100, you need at least four K40 GPUs.

Usage

  1. Clone this repository
git clone https://github.com/junyuseu/pytorch-cifar-models.git

In this project, the network structures are defined in the models folder, and the script gen_mean_std.py calculates the per-channel mean and standard deviation of the dataset.
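The per-channel statistics can be computed as sketched below. This is a hypothetical stand-in for gen_mean_std.py (which is not shown here); the repo's script may compute the statistics differently, e.g. by averaging per-image values:

```python
import torch
from torch.utils.data import DataLoader

def channel_stats(dataset, batch_size=512):
    """Per-channel mean/std over a dataset of (C, H, W) tensors in [0, 1]."""
    loader = DataLoader(dataset, batch_size=batch_size)
    n = 0
    s = torch.zeros(3)   # running sum per channel
    s2 = torch.zeros(3)  # running sum of squares per channel
    for x, _ in loader:
        n += x.numel() // x.size(1)          # pixels per channel so far
        s += x.sum(dim=(0, 2, 3))
        s2 += (x ** 2).sum(dim=(0, 2, 3))
    mean = s / n
    std = (s2 / n - mean ** 2).sqrt()        # population std over all pixels
    return mean, std

# Usage (downloads CIFAR-10 on first run; the data root is an assumption):
# from torchvision import datasets, transforms
# train_set = datasets.CIFAR10(root='./data', train=True, download=True,
#                              transform=transforms.ToTensor())
# print(channel_stats(train_set))
```

The resulting values are what you would pass to transforms.Normalize in the training pipeline.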

  2. Edit main.py and run.sh

In main.py, you can specify the network you want to train, for example:

model = resnet20_cifar(num_classes=10)
...
fdir = 'result/resnet20_cifar10'

Then, specify the training parameters in run.sh. For resnet20:

CUDA_VISIBLE_DEVICES=0 python main.py --epoch 160 --batch-size 128 --lr 0.1 --momentum 0.9 --wd 1e-4 -ct 10
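The flags above would typically be consumed by an argparse parser in main.py. The following is a hypothetical reconstruction for illustration; the option names match the command line shown, but the real main.py may define more options and different defaults:

```python
import argparse

# Hypothetical reconstruction of the CLI that run.sh drives.
parser = argparse.ArgumentParser(description='CIFAR training')
parser.add_argument('--epoch', type=int, default=160, help='number of training epochs')
parser.add_argument('--batch-size', type=int, default=128)
parser.add_argument('--lr', type=float, default=0.1, help='initial learning rate')
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--wd', type=float, default=1e-4, help='weight decay')
parser.add_argument('-ct', '--cifar-type', type=int, default=10,
                    help='10 for CIFAR-10, 100 for CIFAR-100')
parser.add_argument('-e', '--evaluate', action='store_true',
                    help='evaluate only, no training')
parser.add_argument('--resume', type=str, default='',
                    help='path to a checkpoint to resume from')

# Parse the exact flags used in run.sh for resnet20:
args = parser.parse_args(['--epoch', '160', '--batch-size', '128', '--lr', '0.1',
                          '--momentum', '0.9', '--wd', '1e-4', '-ct', '10'])
```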
  3. Train
nohup sh run.sh > resnet20_cifar10.log &

After training, the training log is recorded in the .log file, and the best model (on the test set) is stored in fdir.
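The best-model bookkeeping can be sketched as follows. This is a common PyTorch checkpointing pattern, not the repo's verified code; the helper name, file layout, and the best_prec key are assumptions:

```python
import os
import shutil
import torch

def save_checkpoint(state, is_best, fdir):
    """Save the latest checkpoint; when it beats the best test accuracy
    so far, copy it to model_best.pth.tar inside fdir."""
    os.makedirs(fdir, exist_ok=True)
    path = os.path.join(fdir, 'checkpoint.pth.tar')
    torch.save(state, path)
    if is_best:
        shutil.copyfile(path, os.path.join(fdir, 'model_best.pth.tar'))

# Usage at the end of each epoch (model and prec1 are assumed names):
# save_checkpoint({'state_dict': model.state_dict(), 'best_prec': prec1},
#                 prec1 > best_prec, fdir)
```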

Note: on the first training run, the CIFAR-10 or CIFAR-100 dataset will be downloaded, so make sure your computer is online. Otherwise, download the dataset yourself, decompress it, and put it in the data folder.

  4. Test
CUDA_VISIBLE_DEVICES=0 python main.py -e --resume=fdir/model_best.pth.tar
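What the -e --resume combination does can be sketched as loading the saved best weights and switching the network to inference mode. The helper below is an illustrative assumption, not the repo's actual loading code:

```python
import torch

def load_for_eval(model, checkpoint_path):
    """Restore saved weights and put the model in inference mode
    (fixes BatchNorm/Dropout behaviour for evaluation)."""
    ckpt = torch.load(checkpoint_path, map_location='cpu')
    model.load_state_dict(ckpt['state_dict'])
    model.eval()
    return model

# Usage (assuming the checkpoint layout sketched above):
# model = load_for_eval(resnet20_cifar(num_classes=10),
#                       'result/resnet20_cifar10/model_best.pth.tar')
```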
  5. CIFAR-100

The default setting in the code is for CIFAR-10; to train on CIFAR-100, you need to specify it explicitly in the code.

model = resnet20_cifar(num_classes=100)

Note: you should also change fdir accordingly, and in run.sh set -ct 100.

Results

Note: each of the following results was obtained from a single experiment.

We obtained results comparable to or even better than the original papers; the experiment settings follow the original ones exactly.

ResNet

layers  #params  error (%)
20      0.27M    8.33
32      0.46M    7.36
44      0.66M    6.77
56      0.85M    6.73
110     1.7M     6.13
1202    19.4M    -

PreActResNet

dataset    network      baseline unit  pre-activation unit
CIFAR-10   ResNet-110   6.13           6.13
CIFAR-10   ResNet-164   5.84           5.35
CIFAR-10   ResNet-1001  11.27          5.13
CIFAR-100  ResNet-164   24.99          24.50
CIFAR-100  ResNet-1001  31.73          24.03

WideResNet

depth-k  #params  CIFAR-10  CIFAR-100
20-10    26.8M    4.27      19.73
26-10    36.5M    3.89      19.51

ResNeXt

network            #params  CIFAR-10  CIFAR-100
ResNeXt-29,1x64d   4.9M     4.51      22.09
ResNeXt-29,8x64d   34.4M    3.78      17.44
ResNeXt-29,16x64d  68.1M    3.69      17.11

DenseNet

network            depth  #params  CIFAR-10  CIFAR-100
DenseNet-BC(k=12)  100    0.8M     4.69      22.19
DenseNet-BC(k=24)  250    15.3M    3.44      17.17
DenseNet-BC(k=40)  190    25.6M    3.41      17.33

References:

[1] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.

[2] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.

[3] S. Zagoruyko and N. Komodakis. Wide residual networks. In BMVC, 2016.

[4] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.

[5] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, 2017.
