hoang-ho / Skin_Lesions_Classification_DCNNs

License: MIT
Transfer Learning with DCNNs (DenseNet, Inception V3, Inception-ResNet V2, VGG16) for skin lesions classification


Skin Lesions Classification with Deep Convolutional Neural Network

This is a 40-hour project for CIS 5526 Machine Learning. For the full description and analysis, please refer to Project_Report.pdf.

Future work on better training strategies, exploring other models such as Xception, and building a larger ensemble could improve on these results.

Files Description

  • Final report: Project_Report.pdf

  • Exploratory data analysis: Skin_Cancer_EDA.ipynb

  • Baseline model: Baseline_CNN.ipynb

  • Fine-tuning the last convolutional block of VGG16: Fine_Tuning_VGG16.ipynb

  • Fine-tuning the top 2 inception blocks of InceptionV3: Fine_Tuning_InceptionV3.ipynb

  • Fine-tuning the Inception-ResNet-C of Inception-ResNet V2: Fine_Tuning_InceptionResNet.ipynb

  • Fine-tuning the last dense block of DenseNet 201: Fine_Tuning_DenseNet.ipynb

  • Fine-tuning all layers of pretrained Inception V3 on ImageNet: Retraining_InceptionV3.ipynb

  • Fine-tuning all layers of pretrained DenseNet 201 on ImageNet: Retraining_DenseNet.ipynb

  • Ensemble model of the fully fine-tuned Inception V3 and DenseNet 201 (best result): Ensemble_Models.ipynb
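The fine-tuning notebooks above all follow the same transfer-learning pattern: load an ImageNet-pretrained base without its classifier, freeze it, and attach a new softmax head for the seven HAM10000 lesion classes. A minimal sketch of that setup, written against the modern `tf.keras` API rather than the Keras 2.2.4 used in the notebooks (`weights=None` is used here to avoid a download; the notebooks load `weights="imagenet"`):

```python
# Hedged sketch of the transfer-learning setup used in the fine-tuning
# notebooks: frozen pretrained base + new 7-way softmax classifier head.
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# The notebooks use weights="imagenet"; weights=None avoids the download here.
base = DenseNet201(weights=None, include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the pretrained feature extractor

x = GlobalAveragePooling2D()(base.output)
outputs = Dense(7, activation="softmax")(x)  # one unit per HAM10000 lesion class
model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Unfreezing only the last block (as in Fine_Tuning_DenseNet.ipynb) versus all layers (as in Retraining_DenseNet.ipynb) is then just a matter of which `layer.trainable` flags are flipped back to `True` before compiling.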

Technical Issue

I'm using Keras 2.2.4 and TensorFlow 1.11. The Batch-Norm layer in this version of Keras is implemented such that during training the network always uses the mini-batch statistics, whether or not the BN layer is frozen, while during inference it uses the previously learned statistics of the frozen BN layers. As a result, if you fine-tune the top layers, their weights are adjusted to the mean/variance of the new dataset; during inference, however, they receive data scaled with the mean/variance of the original dataset. Consequently, if you use Keras's example code for fine-tuning Inception V3, or any network with batch-norm layers, the results will be very bad. Please refer to issues #9965 and #9214. One temporary workaround is:

from keras import backend as K

# Reset the moving statistics of every BatchNormalization layer and let them
# be re-estimated during fine-tuning; freeze every other layer.
for layer in pre_trained_model.layers:
    if hasattr(layer, 'moving_mean') and hasattr(layer, 'moving_variance'):
        layer.trainable = True
        # Zero out the learned ImageNet statistics so they are recomputed
        # from the new dataset during training.
        K.eval(K.update(layer.moving_mean, K.zeros_like(layer.moving_mean)))
        K.eval(K.update(layer.moving_variance, K.zeros_like(layer.moving_variance)))
    else:
        layer.trainable = False

Results

| Model | Validation | Test | Depth | # Params |
| --- | --- | --- | --- | --- |
| Baseline | 77.48% | 76.54% | 11 layers | 2,124,839 |
| Fine-tuned VGG16 (from last block) | 79.82% | 79.64% | 23 layers | 14,980,935 |
| Fine-tuned Inception V3 (from the last 2 inception blocks) | 79.935% | 79.94% | 315 layers | 22,855,463 |
| Fine-tuned Inception-ResNet V2 (from the Inception-ResNet-C) | 80.82% | 82.53% | 784 layers | 55,127,271 |
| Fine-tuned DenseNet 201 (from the last dense block) | 85.8% | 83.9% | 711 layers | 19,309,127 |
| Fine-tuned Inception V3 (all layers) | 86.92% | 86.826% | _ | _ |
| Fine-tuned DenseNet 201 (all layers) | 86.696% | 87.725% | _ | _ |
| Ensemble of fully fine-tuned Inception V3 and DenseNet 201 | 88.8% | 88.52% | _ | _ |
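The best result comes from ensembling the two fully fine-tuned networks. A minimal sketch of probability averaging, one common way to combine two classifiers (the exact combination rule used in Ensemble_Models.ipynb is assumed here): average the two networks' softmax outputs and take the argmax.

```python
# Hypothetical sketch: combine two models' class-probability outputs
# by simple averaging, then pick the most probable class per sample.
import numpy as np

def ensemble_predict(probs_a, probs_b):
    """Average two (n_samples, n_classes) probability arrays and return labels."""
    avg = (np.asarray(probs_a) + np.asarray(probs_b)) / 2.0
    return avg.argmax(axis=1)

# toy example: 3 samples, 2 classes (probabilities are made up)
p_inception = np.array([[0.6, 0.4], [0.2, 0.8], [0.55, 0.45]])
p_densenet  = np.array([[0.4, 0.6], [0.1, 0.9], [0.70, 0.30]])
labels = ensemble_predict(p_inception, p_densenet)  # -> [0, 1, 0]
```

In practice `probs_a` and `probs_b` would be the outputs of `model.predict` from the fine-tuned Inception V3 and DenseNet 201.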

The Dataset

The HAM10000 dataset: a large collection of multi-source dermatoscopic images of common pigmented skin lesions.
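A typical first EDA step on HAM10000 (as in Skin_Cancer_EDA.ipynb, whose exact code is assumed here) is counting images per diagnosis from the dataset's metadata file, whose `dx` column holds one of seven diagnosis codes. A tiny inline sample stands in for the real CSV:

```python
# Hedged sketch: per-class counts from HAM10000-style metadata.
# The image_id values below are made up; real rows come from the
# dataset's metadata CSV.
import pandas as pd

metadata = pd.DataFrame({
    "image_id": ["ISIC_0001", "ISIC_0002", "ISIC_0003", "ISIC_0004"],
    "dx": ["nv", "mel", "nv", "bkl"],  # nevus, melanoma, benign keratosis
})
class_counts = metadata["dx"].value_counts()
print(class_counts)
```

The "nv" (melanocytic nevus) class heavily dominates the real dataset, so class imbalance is an important consideration when training and evaluating the models above.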
