
BIGBALLON / Cifar 10 Cnn

License: MIT
Play deep learning with CIFAR datasets

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Cifar 10 Cnn

Chainer Cifar10
Various CNN models for CIFAR10 with Chainer
Stars: ✭ 134 (-82.28%)
Mutual labels:  convolutional-neural-networks, densenet, cifar10
Pytorch Classification
Classification with PyTorch.
Stars: ✭ 1,268 (+67.72%)
Mutual labels:  densenet, cifar10
Neural Api
CAI NEURAL API - Pascal-based neural network API optimized for the AVX, AVX2, and AVX512 instruction sets, plus OpenCL-capable devices including AMD, Intel, and NVIDIA.
Stars: ✭ 94 (-87.57%)
Mutual labels:  densenet, cifar10
Tensorflow Cifar 10
Cifar-10 CNN implementation using TensorFlow library with 20% error.
Stars: ✭ 85 (-88.76%)
Mutual labels:  convolutional-neural-networks, cifar10
Hyperdensenet
This repository contains the code of HyperDenseNet, a hyper-densely connected CNN to segment medical images in multi-modal image scenarios.
Stars: ✭ 124 (-83.6%)
Mutual labels:  convolutional-neural-networks, densenet
deeplearning-mpo
Replace the fully connected layers of FC2, LeNet-5, VGG, ResNet, and DenseNet with MPO
Stars: ✭ 26 (-96.56%)
Mutual labels:  densenet, cifar10
Pytorch Speech Commands
Speech commands recognition with PyTorch
Stars: ✭ 128 (-83.07%)
Mutual labels:  densenet, cifar10
Imagenet
Pytorch Imagenet Models Example + Transfer Learning (and fine-tuning)
Stars: ✭ 134 (-82.28%)
Mutual labels:  convolutional-neural-networks, densenet
Naszilla
Naszilla is a Python library for neural architecture search (NAS)
Stars: ✭ 181 (-76.06%)
Mutual labels:  convolutional-neural-networks, cifar10
Chexnet Keras
This project is a tool to build CheXNet-like models, written in Keras.
Stars: ✭ 254 (-66.4%)
Mutual labels:  convolutional-neural-networks, densenet
DenseNet-Cifar10
Train DenseNet on Cifar-10 based on Keras
Stars: ✭ 39 (-94.84%)
Mutual labels:  densenet, cifar10
Keras Idiomatic Programmer
Books, Presentations, Workshops, Notebook Labs, and Model Zoo for Software Engineers and Data Scientists wanting to learn the TF.Keras Machine Learning framework
Stars: ✭ 720 (-4.76%)
Mutual labels:  densenet
Densenet.pytorch
A PyTorch implementation of DenseNet.
Stars: ✭ 684 (-9.52%)
Mutual labels:  densenet
Senet Tensorflow
Simple Tensorflow implementation of "Squeeze and Excitation Networks" using Cifar10 (ResNeXt, Inception-v4, Inception-resnet-v2)
Stars: ✭ 682 (-9.79%)
Mutual labels:  densenet
Pytorch2keras
PyTorch to Keras model converter
Stars: ✭ 676 (-10.58%)
Mutual labels:  densenet
Neurec
Next RecSys Library
Stars: ✭ 731 (-3.31%)
Mutual labels:  convolutional-neural-networks
Torchio
Medical image preprocessing and augmentation toolkit for deep learning
Stars: ✭ 708 (-6.35%)
Mutual labels:  convolutional-neural-networks
Pytorch Cnn Finetune
Fine-tune pretrained Convolutional Neural Networks with PyTorch
Stars: ✭ 653 (-13.62%)
Mutual labels:  convolutional-neural-networks
Saliency
TensorFlow implementation for SmoothGrad, Grad-CAM, Guided backprop, Integrated Gradients and other saliency techniques
Stars: ✭ 648 (-14.29%)
Mutual labels:  convolutional-neural-networks
Tensorflow 101
TensorFlow 101: Introduction to Deep Learning for Python Within TensorFlow
Stars: ✭ 642 (-15.08%)
Mutual labels:  convolutional-neural-networks

Convolutional Neural Networks for CIFAR-10

This repository contains implementations of several CNN architectures for CIFAR-10.


All of these CNN models are implemented with Keras and TensorFlow.
A PyTorch version is available at CIFAR-ZOO.

Requirements

  • Python (3.5)
  • keras (>= 2.1.5)
  • tensorflow-gpu (>= 1.4.1)
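
As a quick sanity check of the environment (a minimal sketch, assuming the versions above), CIFAR-10 can be loaded directly through Keras:

```python
from keras.datasets import cifar10
from keras.utils import to_categorical

# Download (on first call) and load the CIFAR-10 train/test splits.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape, x_test.shape)  # (50000, 32, 32, 3) (10000, 32, 32, 3)

# One-hot encode the 10 class labels for categorical_crossentropy.
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
```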

Architectures and papers

  • LeNet: "Gradient-Based Learning Applied to Document Recognition"
  • Network-in-Network: "Network In Network"
  • VGG19: "Very Deep Convolutional Networks for Large-Scale Image Recognition"
  • ResNet (20/32/110): "Deep Residual Learning for Image Recognition"
  • Wide ResNet (16x8, 28x10): "Wide Residual Networks"
  • DenseNet (100x12, 100x24, 160x24): "Densely Connected Convolutional Networks"
  • ResNeXt (4x64d): "Aggregated Residual Transformations for Deep Neural Networks"
  • SENet: "Squeeze-and-Excitation Networks"

Documents & tutorials

There are also some documents and tutorials in doc & issues/3; grab them if you need them.
The accompanying articles are also available if you read Chinese.

Accuracy of all my implementations

In particular:
Change the batch size according to your GPU's memory.
Modifying the learning rate schedule may improve accuracy!

| network | GPU | params | batch size | epochs | training time | accuracy (%) |
|:--------|:----|-------:|-----------:|-------:|:--------------|-------------:|
| LeCun-Network | GTX1080TI | 62k | 128 | 200 | 30 min | 76.23 |
| Network-in-Network | GTX1080TI | 0.97M | 128 | 200 | 1 h 40 min | 91.63 |
| Vgg19-Network | GTX1080TI | 39M | 128 | 200 | 1 h 53 min | 93.53 |
| Residual-Network20 | GTX1080TI | 0.27M | 128 | 200 | 44 min | 91.82 |
| Residual-Network32 | GTX1080TI | 0.47M | 128 | 200 | 1 h 7 min | 92.68 |
| Residual-Network110 | GTX1080TI | 1.7M | 128 | 200 | 3 h 38 min | 93.93 |
| Wide-resnet 16x8 | GTX1080TI | 11.3M | 128 | 200 | 4 h 55 min | 95.13 |
| Wide-resnet 28x10 | GTX1080TI | 36.5M | 128 | 200 | 10 h 22 min | 95.78 |
| DenseNet-100x12 | GTX1080TI | 0.85M | 64 | 250 | 17 h 20 min | 94.91 |
| DenseNet-100x24 | GTX1080TI | 3.3M | 64 | 250 | 22 h 27 min | 95.30 |
| DenseNet-160x24 | 1080 x 2 | 7.5M | 64 | 250 | 50 h 20 min | 95.90 |
| ResNeXt-4x64d | GTX1080TI | 20M | 120 | 250 | 21 h 3 min | 95.19 |
| SENet(ResNeXt-4x64d) | GTX1080TI | 20M | 120 | 250 | 21 h 57 min | 95.60 |
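
The params column is easy to verify with model.count_params(). For example, a standard LeNet-style stack on 32x32x3 inputs (a sketch of what LeNet_keras.py plausibly builds, not its exact code) comes out to roughly the 62k in the table:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Classic LeNet-5 layout adapted to CIFAR-10's 32x32 RGB inputs.
model = Sequential([
    Conv2D(6, (5, 5), activation='relu', input_shape=(32, 32, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(16, (5, 5), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(120, activation='relu'),
    Dense(84, activation='relu'),
    Dense(10, activation='softmax'),
])
print(model.count_params())  # 62,006, i.e. the ~62k listed above
```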

About LeNet and CNN training tips/tricks

LeNet, proposed by LeCun, was the first CNN.
I used different training tricks to show how to train your model efficiently; a minimal sketch of these tricks follows the results table below.

  • LeNet_keras.py: the LeNet baseline
  • LeNet_dp_keras.py: adds Data Preprocessing [DP]
  • LeNet_dp_da_keras.py: adds DP and Data Augmentation [DA]
  • LeNet_dp_da_wd_keras.py: adds DP, DA, and Weight Decay [WD]

| network | GPU | DP | DA | WD | training time | accuracy (%) |
|:--------|:----|:--:|:--:|:--:|:--------------|-------------:|
| LeNet_keras | GTX1080TI | - | - | - | 5 min | 58.48 |
| LeNet_dp_keras | GTX1080TI | ✓ | - | - | 5 min | 60.41 |
| LeNet_dp_da_keras | GTX1080TI | ✓ | ✓ | - | 26 min | 75.06 |
| LeNet_dp_da_wd_keras | GTX1080TI | ✓ | ✓ | ✓ | 26 min | 76.23 |
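
For illustration, here is a minimal sketch of the three tricks in Keras. It reuses x_train/x_test from the loading snippet above; the shift range and decay factor are plausible defaults, not necessarily the repository's exact settings.

```python
from keras import regularizers
from keras.layers import Conv2D
from keras.preprocessing.image import ImageDataGenerator

# [DP] Per-channel mean/std normalization, statistics from the training set.
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
mean = x_train.mean(axis=(0, 1, 2))
std = x_train.std(axis=(0, 1, 2))
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std

# [DA] Random shifts and horizontal flips, applied on the fly while training.
datagen = ImageDataGenerator(width_shift_range=0.125,
                             height_shift_range=0.125,
                             horizontal_flip=True)

# [WD] L2 regularization on layer kernels acts as weight decay.
conv = Conv2D(32, (3, 3), padding='same',
              kernel_regularizer=regularizers.l2(1e-4))

# Training then draws augmented batches from the generator, e.g.:
# model.fit_generator(datagen.flow(x_train, y_train, batch_size=128), ...)
```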

For more CNN training tricks, see Must Know Tips/Tricks in Deep Neural Networks (by Xiu-Shen Wei)

About Learning Rate schedule

Different learning rate schedules may yield different training/testing accuracy!
See ./htd and HTD for more details.
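
Schedules are typically implemented as Keras callbacks. Below is a minimal sketch of two options, a classic step decay and an HTD-style decay; the milestones, rates, and lower/upper bounds are illustrative values, not necessarily the repository's settings.

```python
import math

from keras.callbacks import LearningRateScheduler

def step_decay(epoch):
    """Step schedule: drop the learning rate at fixed epoch milestones."""
    if epoch < 100:
        return 0.1
    if epoch < 150:
        return 0.01
    return 0.001

def htd_decay(epoch, total_epochs=200, lr_init=0.1, lower=-6.0, upper=3.0):
    """Hyperbolic-tangent decay: the rate falls smoothly from lr_init to ~0.
    `lower` and `upper` here are illustrative bounds, not tuned values."""
    return lr_init / 2.0 * (1.0 - math.tanh(
        lower + (upper - lower) * epoch / total_epochs))

# Pass one schedule to `fit`, e.g.:
# model.fit(x, y, epochs=200, callbacks=[LearningRateScheduler(step_decay)])
```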

About Multiple GPUs Training

Recent versions of Keras support keras.utils.multi_gpu_model, so you can use the following code to train your model with multiple GPUs:

```python
from keras.utils import multi_gpu_model
from keras.applications.resnet50 import ResNet50

model = ResNet50()

# Replicates `model` on 8 GPUs.
parallel_model = multi_gpu_model(model, gpus=8)
parallel_model.compile(loss='categorical_crossentropy', optimizer='adam')

# This `fit` call will be distributed on 8 GPUs.
# Since the batch size is 256, each GPU will process 32 samples.
parallel_model.fit(x, y, epochs=20, batch_size=256)
```
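
One caveat worth noting: the Keras documentation recommends saving via the template model (the `model` you passed to `multi_gpu_model`) with `save`/`save_weights`, rather than via `parallel_model`, so the weights can be reloaded independently of the number of GPUs.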

About ResNeXt & DenseNet

Since I don't have enough machines to train the larger networks, I only trained the smallest network described in each paper. For results on the larger models, see liuzhuang13/DenseNet and prlz77/ResNeXt.pytorch.

Please feel free to contact me if you have any questions!

Citation

```bibtex
@misc{bigballon2017cifar10cnn,
  author = {Wei Li},
  title = {cifar-10-cnn: Play deep learning with CIFAR datasets},
  howpublished = {\url{https://github.com/BIGBALLON/cifar-10-cnn}},
  year = {2017}
}
```