L1aoXingyu / cifar10-gluon

Licence: other
Gluon implement of Kaggle cifar10 competition

MXNet/Gluon for CIFAR-10 Dataset

Introduction

This repository contains Gluon implementations of the ResNet-164 and DenseNet architectures for the Kaggle CIFAR-10 dataset.

All of these models are implemented with MXNet/Gluon. Each single model scores near rank 1 on the leaderboard, and the ensemble model scores above rank 1. All of these ideas come from the Gluon community — welcome to join the big Gluon family.

Requirements

  • MXNet (0.12)

    a fast, flexible, and portable deep learning framework

  • tensorboardX

    for visualizing loss and accuracy

Architectures and papers

Accuracy of single model

Before training, we apply standard data augmentation: pad each image by 4 pixels, randomly crop back to 32×32, and apply a random mirror (horizontal flip) transform.
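The pad-4 / random-crop / random-mirror pipeline above can be sketched with NumPy. This is an illustrative stand-in, not the repository's actual transform code (which would typically use MXNet's image utilities); the function name `augment` is our own.

```python
import numpy as np

def augment(img, pad=4, crop=32, rng=np.random):
    """Standard CIFAR-10 augmentation: zero-pad, random crop, random mirror.

    `img` is an H x W x C array (32 x 32 x 3 for CIFAR-10).
    """
    # Zero-pad the spatial dimensions by `pad` pixels on each side.
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    # Randomly crop back to crop x crop.
    y = rng.randint(0, padded.shape[0] - crop + 1)
    x = rng.randint(0, padded.shape[1] - crop + 1)
    out = padded[y:y + crop, x:x + crop]
    # Random horizontal flip with probability 0.5.
    if rng.rand() < 0.5:
        out = out[:, ::-1]
    return out
```

Padding by 4 and cropping back to 32 lets the crop shift up to 4 pixels in any direction while keeping the output size fixed.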

Resnet164

This model is defined in resnet.py; the training file is train_resnet164.ipynb. The training strategy follows the paper: 200 epochs total, batch size 128, initial learning rate 0.1, momentum 0.9, with learning rate decay at epochs 90 and 140.
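The step schedule above can be written as a small helper. The decay factor of 0.1 is an assumption on our part (the ResNet paper divides the rate by 10 at each step); the function name `resnet_lr` is our own, and in Gluon the result would typically be applied each epoch via `trainer.set_learning_rate(...)`.

```python
def resnet_lr(epoch, base_lr=0.1, decay_epochs=(90, 140), factor=0.1):
    """Step schedule: multiply the learning rate by `factor`
    once for each milestone in `decay_epochs` that has been reached."""
    lr = base_lr
    for milestone in decay_epochs:
        if epoch >= milestone:
            lr *= factor
    return lr
```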

After 200 epochs, training accuracy is almost 100% and the Kaggle score is 0.9526.

DenseNet

This model is defined in densenet.py; the training file is train_densenet.ipynb. The training strategy is similar to ResNet's: 300 epochs total, batch size 128, initial learning rate 0.1, momentum 0.9, with learning rate decay at 50% and 75% of the total epochs.

After 300 epochs, training accuracy is almost 100% and the Kaggle score is 0.9536.

Ensemble

We can ensemble the two models: compute each model's output, then combine the outputs with a weighted average, using each model's accuracy as its weight. The ensemble code is in ensemble_submission.ipynb; the final score is 0.9616.
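The accuracy-weighted averaging described above can be sketched as follows. This is an illustrative sketch, not the notebook's actual code; the function name `ensemble_probs` is our own, and the weights simply reuse the two single-model scores.

```python
import numpy as np

def ensemble_probs(probs_resnet, probs_densenet,
                   acc_resnet=0.9526, acc_densenet=0.9536):
    """Accuracy-weighted average of two models' class probabilities.

    Each `probs_*` array has shape (num_samples, num_classes).
    Returns the predicted class index per sample.
    """
    w = np.array([acc_resnet, acc_densenet])
    w = w / w.sum()  # normalize so the weights sum to 1
    combined = w[0] * probs_resnet + w[1] * probs_densenet
    return combined.argmax(axis=1)
```

Because the two accuracies are nearly equal, this weighting is close to a plain average; the scheme matters more when the ensembled models differ widely in quality.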

Future work

  • Ensembling more models with additional data augmentation should yield a better result.

  • There is a paper on mixup reporting over 97% accuracy on CIFAR-10, so I want to try this strategy when I have time. As described in the paper, it acts like a form of data augmentation.
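For reference, the mixup idea mentioned above forms a convex combination of two training samples and their one-hot labels, with the mixing coefficient drawn from a Beta distribution. A minimal sketch, assuming `alpha=1.0` as a hypothetical default (the paper tunes this per dataset):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=np.random):
    """Mixup: blend two samples and their one-hot labels by a factor
    lam drawn from Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y
```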
