
anindox8 / Ensemble-of-Multi-Scale-CNN-for-Dermatoscopy-Classification

Licence: other
Fully supervised binary classification of skin lesions from dermatoscopic images using an ensemble of diverse CNN architectures (EfficientNet-B6, Inception-V3, SEResNeXt-101, SENet-154, DenseNet-169) with multi-scale input.

Programming Languages

Jupyter Notebook
11667 projects
python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Ensemble-of-Multi-Scale-CNN-for-Dermatoscopy-Classification

Skin Lesions Classification DCNNs
Transfer Learning with DCNNs (DenseNet, Inception V3, Inception-ResNet V2, VGG16) for skin lesions classification
Stars: ✭ 47 (+88%)
Mutual labels:  ensemble-learning, densenet, skin-lesion-classification
Segmentation models
Segmentation models with pretrained backbones. Keras and TensorFlow Keras.
Stars: ✭ 3,575 (+14200%)
Mutual labels:  densenet, tensorflow-keras, efficientnet
awesome-computer-vision-models
A list of popular deep learning models related to classification, segmentation and detection problems
Stars: ✭ 419 (+1576%)
Mutual labels:  densenet, efficientnet
TensorMONK
A collection of deep learning models (PyTorch implementation)
Stars: ✭ 21 (-16%)
Mutual labels:  densenet, efficientnet
modeltime.ensemble
Time Series Ensemble Forecasting
Stars: ✭ 65 (+160%)
Mutual labels:  ensemble-learning
prostateMR 3D-CAD-csPCa
Hierarchical probabilistic 3D U-Net, with attention mechanisms (Attention U-Net, SEResNet) and a nested decoder structure with deep supervision (UNet++). Built in TensorFlow 2.5. Configured for voxel-level clinically significant prostate cancer detection in multi-channel 3D bpMRI scans.
Stars: ✭ 32 (+28%)
Mutual labels:  tensorflow-keras
food-detection-yolov5
🍔🍟🍗 Food analysis baseline with Theseus. Integrate object detection, image classification and multi-class semantic segmentation. 🍞🍖🍕
Stars: ✭ 68 (+172%)
Mutual labels:  efficientnet
C5
Reference code for the paper "Cross-Camera Convolutional Color Constancy" (ICCV 2021)
Stars: ✭ 75 (+200%)
Mutual labels:  color-constancy
COVID19Tweet
WNUT-2020 Task 2: Identification of informative COVID-19 English Tweets
Stars: ✭ 26 (+4%)
Mutual labels:  binary-classification
python cv AI ML
Computer vision, artificial intelligence, machine learning, and deep learning with Python.
Stars: ✭ 73 (+192%)
Mutual labels:  densenet
multi-label-classification
Multi-label, multi-class classification models based on tf.keras.
Stars: ✭ 72 (+188%)
Mutual labels:  tensorflow-keras
2018-Tencent-Lookalike
2018 Tencent Advertising Algorithm Competition - Lookalike Audience Extension (preliminary round): 10th/1563 (Top 0.64%)
Stars: ✭ 46 (+84%)
Mutual labels:  binary-classification
yolov3-ios
YOLOv3 object detection on the iOS platform.
Stars: ✭ 55 (+120%)
Mutual labels:  densenet
efficientnet-jax
EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc in JAX w/ Flax Linen and Objax
Stars: ✭ 114 (+356%)
Mutual labels:  efficientnet
survtmle
Targeted Learning for Survival Analysis
Stars: ✭ 18 (-28%)
Mutual labels:  ensemble-learning
stackgbm
🌳 Stacked Gradient Boosting Machines
Stars: ✭ 24 (-4%)
Mutual labels:  ensemble-learning
cnn-text-classification
Text classification with Convolution Neural Networks on Yelp, IMDB & sentence polarity dataset v1.0
Stars: ✭ 108 (+332%)
Mutual labels:  binary-classification
efficientnetv2.pytorch
PyTorch implementation of EfficientNetV2 family
Stars: ✭ 366 (+1364%)
Mutual labels:  efficientnet
DenseNet-Tensorflow
Reimplementation of DenseNet
Stars: ✭ 16 (-36%)
Mutual labels:  densenet
efficientdet
PyTorch Implementation of the state-of-the-art model for object detection EfficientDet [pre-trained weights provided]
Stars: ✭ 21 (-16%)
Mutual labels:  efficientnet

Ensemble of Convolutional Neural Networks for Disease Classification of Skin Lesions

Problem Statement: Fully supervised binary classification of skin lesions from dermatoscopic images.

Note: The following approach won 1st place in the 2019 Computer-Aided Diagnosis: Deep Learning in Dermascopy Challenge at Universitat de Girona, scoring 92.2% accuracy (kappa: 0.819) at test-time, during the 2018-20 Joint Master of Science in Medical Imaging and Applications (MaIA) program.

Acknowledgments: Pavel Yakubovskiy for the TensorFlow.Keras implementation of EfficientNet, SEResNeXt-101 and SENet-154, and Mina Sami for the Python implementation of Shades of Gray Color Constancy.

Data: Class A: Nevus; Class B: Other (Melanoma, Dermatofibroma, Pigmented Bowen's, Basal Cell Carcinoma, Vascular, Pigmented Benign Keratoses) [4800/1200/1000 : Train/Val/Test split]
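
For illustration, a minimal sketch of how the diagnoses above could be grouped into the two classes; the label strings and the helper name are assumptions, not the repository's actual data-loading code.

```python
# Hypothetical grouping of diagnosis labels into the two classes described
# above; the exact label strings are assumptions for illustration only.
NEVUS = {"nevus"}
OTHER = {"melanoma", "dermatofibroma", "pigmented_bowens",
         "basal_cell_carcinoma", "vascular", "pigmented_benign_keratosis"}

def to_binary_label(diagnosis: str) -> int:
    """Map a diagnosis string to Class A (Nevus, 0) or Class B (Other, 1)."""
    if diagnosis in NEVUS:
        return 0
    if diagnosis in OTHER:
        return 1
    raise ValueError(f"Unknown diagnosis: {diagnosis}")
```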

Directories
● Preprocessing Pipeline for Color Space/Constancy: scripts/color-io.ipynb (see the Shades of Gray sketch after this list)
● Individual Model Training-Validation Pipeline: scripts/train-val.ipynb
● Ensemble Validation Pipeline: scripts/ensemble-val.ipynb
● Ensemble Inference Pipeline: scripts/ensemble-test.ipynb
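
A minimal NumPy sketch of the Shades of Gray color constancy correction credited above (the repository uses Mina Sami's implementation); the Minkowski norm p = 6 is a common default and an assumption here, not necessarily the value used in scripts/color-io.ipynb.

```python
import numpy as np

def shades_of_gray(image: np.ndarray, p: int = 6) -> np.ndarray:
    """Shades of Gray color constancy via the Minkowski p-norm.

    `image` is an H x W x 3 uint8 array in [0, 255]; p = 6 is a common default.
    """
    img = image.astype(np.float64)
    # Per-channel illuminant estimate: the Minkowski p-norm mean of each channel.
    illuminant = np.power(np.mean(np.power(img, p), axis=(0, 1)), 1.0 / p)
    # Normalize the illuminant to unit length, then rescale each channel.
    illuminant = illuminant / np.linalg.norm(illuminant)
    img = img / (illuminant * np.sqrt(3.0))
    return np.clip(img, 0, 255).astype(np.uint8)
```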

Train/Test-Time Data Augmentation

Figure 1. All five types of data augmentation [vertical (b)/horizontal (c) flips, brightness shift (d), saturation (e)/contrast (f) boosts] used at train-time to broaden the data representation beyond the limited pre-existing samples, and at test-time to ensure a prediction from the classifier that is unaffected by the orientation or lighting conditions of the scan. Predictions from all six variations [including the original (a)] are averaged to obtain the final prediction per sample.
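
As a rough sketch of the test-time augmentation described in Figure 1, the six variants can be generated with tf.image and their predictions averaged; the brightness/saturation/contrast magnitudes below are illustrative assumptions, not the values used in the notebooks.

```python
import tensorflow as tf

def tta_variants(image: tf.Tensor) -> list:
    """Six variants as in Figure 1: original, vertical/horizontal flips,
    brightness shift, saturation and contrast boosts (magnitudes assumed)."""
    return [
        image,
        tf.image.flip_up_down(image),
        tf.image.flip_left_right(image),
        tf.image.adjust_brightness(image, delta=0.1),
        tf.image.adjust_saturation(image, saturation_factor=1.2),
        tf.image.adjust_contrast(image, contrast_factor=1.2),
    ]

def tta_predict(model: tf.keras.Model, image: tf.Tensor) -> tf.Tensor:
    """Average the classifier's predictions over all six variants."""
    batch = tf.stack(tta_variants(image), axis=0)
    return tf.reduce_mean(model(batch, training=False), axis=0)
```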

Multi-Scale Input

Figure 2. Original RGB image (left), the center-cropped 448 x 448 x 3 image used to train three CNN member models, and the further center-cropped 224 x 224 x 3 image used to train the remaining two CNN member models. Each model learns to classify at a different scale, with the hypothesis that the collective ensemble benefits from a multi-scale input.
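
A minimal sketch of the two center crops in Figure 2, assuming the original image is at least 448 pixels on each side; the helper names are hypothetical.

```python
import tensorflow as tf

def center_crop(image: tf.Tensor, size: int) -> tf.Tensor:
    """Center-crop an H x W x 3 image to size x size."""
    h, w = tf.shape(image)[0], tf.shape(image)[1]
    return tf.image.crop_to_bounding_box(
        image, (h - size) // 2, (w - size) // 2, size, size)

# Two input scales, as in Figure 2: a 448 x 448 crop feeds three member
# models; a further 224 x 224 crop of that crop feeds the remaining two.
crop_448 = lambda img: center_crop(img, 448)
crop_224 = lambda img: center_crop(center_crop(img, 448), 224)
```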

Feature Maps

Figure 3. Feature maps derived from the output of the second block of expanded convolutional layers in a pre-trained EfficientNet-B6 with ImageNet weights, after passing an input skin lesion image through the network.

Figure 4. Feature maps derived from the output of the second block of expanded convolutional layers in a finetuned EfficientNet-B6 initialized with ImageNet weights, after passing an input skin lesion image through the network.
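
The feature maps in Figures 3 and 4 can be reproduced by exposing an intermediate activation of EfficientNet-B6 as a sub-model; this sketch uses tf.keras.applications rather than the Yakubovskiy implementation credited above, and the layer name is an assumption (inspect base.summary() to pick the block you want).

```python
import tensorflow as tf

# Expose an intermediate EfficientNet-B6 activation so its feature maps can
# be visualized. "block2a_expand_activation" is an assumed layer name.
base = tf.keras.applications.EfficientNetB6(
    include_top=False, weights="imagenet", input_shape=(448, 448, 3))
feature_extractor = tf.keras.Model(
    inputs=base.input,
    outputs=base.get_layer("block2a_expand_activation").output)

feature_maps = feature_extractor(tf.random.uniform((1, 448, 448, 3)))
print(feature_maps.shape)  # (1, H', W', channels) for that block
```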

Experimental Results

Figure 5. Validation performance for the collective ensemble and each member model. Accuracy, sensitivity and specificity scores are calculated at the default threshold of 0.50.
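
A minimal sketch of how the Figure 5 scores can be computed from member-model probabilities; the unweighted mean across members is an assumption about how the ensemble is combined.

```python
import numpy as np

def ensemble_metrics(member_probs: np.ndarray, y_true: np.ndarray,
                     threshold: float = 0.50):
    """Average member probabilities (shape: n_members x n_samples) and score
    them at the 0.50 threshold used in Figure 5."""
    y_pred = (member_probs.mean(axis=0) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```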

Gradient Class Activation Maps

Figure 6. Gradient-weighted Class Activation Maps (Grad-CAM) from the finetuned EfficientNet-B6, using the gradients of the nevus class flowing into the final convolutional layer to produce a coarse localization map highlighting the regions of the image most important for predicting nevus.
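
A short Grad-CAM sketch in the spirit of Figure 6, following the standard Keras recipe; `conv_layer_name` and `class_index` are model-specific assumptions, and the model is assumed to be a functional tf.keras graph that outputs class probabilities.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model: tf.keras.Model, image: tf.Tensor,
             conv_layer_name: str, class_index: int) -> np.ndarray:
    """Weight the chosen convolutional feature maps by the pooled gradients
    of the target class to obtain a coarse localization map (Grad-CAM)."""
    grad_model = tf.keras.Model(
        inputs=model.input,
        outputs=[model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[tf.newaxis, ...])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # pooled gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()      # normalize to [0, 1]
```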
