
koshian2 / ResNet-MultipleFramework

License: MIT

Programming Languages

Python

Projects that are alternatives to or similar to ResNet-MultipleFramework

Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+17078.57%)
Mutual labels:  mxnet, chainer
Deepo
Set up and customize a deep learning environment in seconds.
Stars: ✭ 6,145 (+43792.86%)
Mutual labels:  mxnet, chainer
char-rnn-text-generation
Character Embeddings Recurrent Neural Network Text Generation Models
Stars: ✭ 64 (+357.14%)
Mutual labels:  mxnet, chainer
Machine Learning Curriculum
💻 Make machines learn so that you don't have to struggle to program them; The ultimate list
Stars: ✭ 761 (+5335.71%)
Mutual labels:  mxnet, chainer
Capsnet
CapsNet (Capsule Network) from Geoffrey E. Hinton's paper "Dynamic Routing Between Capsules" - state of the art
Stars: ✭ 423 (+2921.43%)
Mutual labels:  mxnet, chainer
Mmdnn
MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. Convert models between Caffe, Keras, MXNet, TensorFlow, CNTK, PyTorch, ONNX, and CoreML.
Stars: ✭ 5,472 (+38985.71%)
Mutual labels:  mxnet
Tusimple Duc
Understanding Convolution for Semantic Segmentation
Stars: ✭ 567 (+3950%)
Mutual labels:  mxnet
See
Code for the AAAI 2018 publication "SEE: Towards Semi-Supervised End-to-End Scene Text Recognition"
Stars: ✭ 545 (+3792.86%)
Mutual labels:  chainer
Gluon Cv
Gluon CV Toolkit
Stars: ✭ 5,001 (+35621.43%)
Mutual labels:  mxnet
Chainerrl
ChainerRL is a deep reinforcement learning library built on top of Chainer.
Stars: ✭ 931 (+6550%)
Mutual labels:  chainer
Chainer Rnn Ner
Named Entity Recognition with RNN, implemented by Chainer
Stars: ✭ 19 (+35.71%)
Mutual labels:  chainer
Aws Machine Learning University Accelerated Tab
Machine Learning University: Accelerated Tabular Data Class
Stars: ✭ 718 (+5028.57%)
Mutual labels:  mxnet
Deep Learning Project Template
A best-practice template architecture for deep learning projects.
Stars: ✭ 641 (+4478.57%)
Mutual labels:  chainer
Multi Model Server
Multi Model Server is a tool for serving neural net models for inference
Stars: ✭ 770 (+5400%)
Mutual labels:  mxnet
Squeezenet v1.2
Top-1 accuracy of 61.0% on ImageNet, with no sacrifice compared with SqueezeNet v1.1.
Stars: ✭ 23 (+64.29%)
Mutual labels:  mxnet
Deeplearning
Introductory deep learning tutorials and selected articles (Deep Learning Tutorial)
Stars: ✭ 6,783 (+48350%)
Mutual labels:  mxnet
Openhabai
Train neural networks to automate your home
Stars: ✭ 19 (+35.71%)
Mutual labels:  mxnet
Deeplearningmugenknock
A cheat sheet of implementations for doing deep learning endlessly.
Stars: ✭ 684 (+4785.71%)
Mutual labels:  chainer
Test Tube
Python library to easily log experiments and parallelize hyperparameter search for neural networks
Stars: ✭ 663 (+4635.71%)
Mutual labels:  chainer
Deepcamera
Open source face recognition on Raspberry Pi. SharpAI is an open source stack for machine learning engineering with private deployment and AutoML for edge computing. DeepCamera is an application of SharpAI designed to connect computer vision models to surveillance cameras. Developers can run the same code on Raspberry Pi/Android/PC/AWS to accelerate AI production development.
Stars: ✭ 757 (+5307.14%)
Mutual labels:  mxnet

ResNet-MultipleFramework

ResNet benchmark by Keras(TensorFlow), Keras(MXNet), Chainer, PyTorch using Google Colab

Summary

Framework      N   # Layers   Min test error   s / epoch
Keras(TF)      3   20         0.0965           51.817
Keras(MXNet)   3   20         0.0963           50.207
Chainer        3   20         0.0995           35.360
PyTorch        3   20         0.0986           26.602
Keras(TF)      5   32         0.0863           75.746
Keras(MXNet)   5   32         0.0943           69.260
Chainer        5   32         0.0916           56.854
PyTorch        5   32         0.0893           40.670
Keras(TF)      7   44         0.0864           96.946
Keras(MXNet)   7   44         0.0863           86.921
Chainer        7   44         0.0892           80.935
PyTorch        7   44         0.0894           55.465
Keras(TF)      9   56         0.0816           119.361
Keras(MXNet)   9   56         0.0848           111.772
Chainer        9   56         0.0882           100.730
PyTorch        9   56         0.0895           70.834
  • N denotes the number of residual blocks (shortcuts) in each stage, as described in the ResNet paper.
  • The number of layers follows the 6N + 2 formula from the paper. Since stride-2 Conv2D layers are used for subsampling instead of pooling, the actual depth is 2 more than this value (see the sketch after this list).
  • Time per epoch excludes the first epoch and is the average over the remaining 99 epochs.
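
As an illustration of the layer counting, here is a minimal Keras sketch of the 6N + 2 architecture. This is a reconstruction for illustration only, not the code used in the benchmark:

    from tensorflow import keras
    from tensorflow.keras import layers

    def residual_block(x, channels, downsample=False):
        # Each block contributes 2 conv layers -> 3 stages * N blocks * 2 = 6N.
        stride = 2 if downsample else 1
        y = layers.Conv2D(channels, 3, strides=stride, padding="same")(x)
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(channels, 3, padding="same")(y)
        y = layers.BatchNormalization()(y)
        if downsample:
            # Stride-2 1x1 conv on the shortcut instead of pooling; the two
            # stage transitions account for the extra +2 layers noted above.
            x = layers.Conv2D(channels, 1, strides=2)(x)
        return layers.Activation("relu")(layers.Add()([x, y]))

    def build_resnet(n, num_classes=10):
        inputs = keras.Input(shape=(32, 32, 3))           # CIFAR-10-sized input
        x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)  # +1
        for stage, channels in enumerate([16, 32, 64]):
            for block in range(n):
                x = residual_block(x, channels,
                                   downsample=(stage > 0 and block == 0))
        x = layers.GlobalAveragePooling2D()(x)
        outputs = layers.Dense(num_classes, activation="softmax")(x)          # +1
        return keras.Model(inputs, outputs)

    model = build_resnet(3)  # N=3 -> the "20-layer" (6*3 + 2) configuration

With N = 3 this gives the 20-layer configuration in the first row of the table; the two stride-2 shortcut projections are the "+2" mentioned in the note.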

TPU in Colaboratory (updated on 9/30)

I also ran the same experiment using the free TPU in Google Colaboratory.

Framework       N   # Layers   Min test error   s / epoch
TF-Keras(TPU)   3   20         0.154            19.666
TF-Keras(TPU)   5   32         0.153            19.818
TF-Keras(TPU)   7   44         0.167            19.969
TF-Keras(TPU)   9   56         0.133            19.932

(TensorFlow 1.11.0-rc2, Keras 2.1.6)
The batch size was changed to 1024.
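
For reference, converting a Keras model for the Colab TPU used the TF 1.x contrib API of the time; a rough, hypothetical sketch (not the author's exact code):

    import os
    import tensorflow as tf

    # A throwaway model just to keep the sketch self-contained; the benchmark
    # used the ResNet architecture described above.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
        tf.keras.layers.Dense(10, activation="softmax")])
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

    # Resolve the Colab TPU address and convert the model
    # (tf.contrib API, removed in TF 2.x).
    resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
        tpu="grpc://" + os.environ["COLAB_TPU_ADDR"])
    strategy = tf.contrib.tpu.TPUDistributionStrategy(resolver)
    tpu_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)
    # tpu_model.fit(x_train, y_train, batch_size=1024, epochs=100)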

The TPU is extremely fast! It is at least 6x faster than the GPU version of Keras, and about 3.5x faster than PyTorch, which was the fastest of the GPU frameworks.

Sadly, LearningRateScheduler does not work on the TPU due to a bug in TensorFlow, so accuracy is a little worse than on the GPU.

See details (Japanese)

Settings

Parameters are based on the paper, with the following differences:

  • Train for 100 epochs (the original trains for 64k iterations ≈ 182 epochs).
  • No validation split is used; all 50k images are used for training.
  • The initial learning rate is 0.01, since training diverged at the original value of 0.1. The learning rate scheduler is still used (see the sketch after this list).
  • Weight decay was changed from 0.0001 to 0.0005, as training tended to overfit overall.
  • Each condition was run only once.
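
Put together, the schedule and weight decay might look like the Keras sketch below; the decay milestones (epochs 50 and 75) are assumptions here, since they are not stated in this README:

    from tensorflow.keras.callbacks import LearningRateScheduler
    from tensorflow.keras.regularizers import l2

    def step_decay(epoch):
        # Start at 0.01 (lowered from the paper's 0.1) and divide by 10
        # at the assumed milestones.
        lr = 0.01
        if epoch >= 50:
            lr /= 10
        if epoch >= 75:
            lr /= 10
        return lr

    scheduler = LearningRateScheduler(step_decay)
    # Weight decay of 0.0005 is applied per layer, e.g.:
    #   Conv2D(16, 3, padding="same", kernel_regularizer=l2(0.0005))
    # model.fit(x_train, y_train, epochs=100, callbacks=[scheduler])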

Environment

Library       Version
TensorFlow    1.10.1
Keras         2.1.6
mxnet-cu80    1.2.1
keras-mxnet   2.2.2
Chainer       4.4.0
cupy-cuda80   4.4.0
PyTorch       0.4.1
torchvision   0.2.1
  • Experiments were run on Google Colab (using a GPU).
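
To verify that a runtime matches these versions, a quick sanity check (hypothetical, not part of the repository):

    # Print the installed version of each benchmarked library.
    import tensorflow, keras, mxnet, chainer, cupy, torch, torchvision
    for mod in (tensorflow, keras, mxnet, chainer, cupy, torch, torchvision):
        print(mod.__name__, mod.__version__)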

Reference

K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016. https://arxiv.org/abs/1512.03385
