EfficientNet-GradCam Visualization

Projects that are alternatives to or similar to EfficientNet-GradCam Visualization

Ipybind
IPython / Jupyter integration for pybind11
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Pizzafire
Run your own DeepStyle factory on the cloud.
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Icpr2020dfdc
Video Face Manipulation Detection Through Ensemble of CNNs
Stars: ✭ 64 (+0%)
Mutual labels:  jupyter-notebook
How to make a tensorflow image classifier live
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Deep3dpose
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Recsyspuc 2020
Course material for the Recommender Systems course (IIC3633) at PUC Chile
Stars: ✭ 64 (+0%)
Mutual labels:  jupyter-notebook
Openmomo
Sounding Rocket "MOMO"
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Kaggle Competitions
Stars: ✭ 64 (+0%)
Mutual labels:  jupyter-notebook
Pysparkgeoanalysis
🌐 Interactive Workshop on GeoAnalysis using PySpark
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Decisiveml
Machine learning end-to-end research and trade execution
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Tutorials 2017
Geophysical Tutorials column for 2017
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Codingworkshops
Programming challenges for python, webdev, data science Python Project Night
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Sudo rm rf
Code for SuDoRm-Rf networks for efficient audio source separation. SuDoRm-Rf stands for SUccessive DOwnsampling and Resampling of Multi-Resolution Features which enables a more efficient way of separating sources from mixtures.
Stars: ✭ 64 (+0%)
Mutual labels:  jupyter-notebook
Constrained decoding
Lexically constrained decoding for sequence generation using Grid Beam Search
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Xcos
Stars: ✭ 64 (+0%)
Mutual labels:  jupyter-notebook
Anomaly detection for cern
This is code for my CERN presentation
Stars: ✭ 63 (-1.56%)
Mutual labels:  jupyter-notebook
Deeplearning Nlp Models
A small, interpretable codebase containing the re-implementation of a few "deep" NLP models in PyTorch. Colab notebooks to run with GPUs. Models: word2vec, CNNs, transformer, gpt.
Stars: ✭ 64 (+0%)
Mutual labels:  jupyter-notebook
Processamento Digital De Sinais Financeiros
Develop competencies in quantitative techniques applied to the equity market through the application of digital signal processing methods to time series.
Stars: ✭ 64 (+0%)
Mutual labels:  jupyter-notebook
Iba Paper Code
Code for the Paper "Restricting the Flow: Information Bottlenecks for Attribution"
Stars: ✭ 64 (+0%)
Mutual labels:  jupyter-notebook
Learners Space
This repository contains all the content for these courses to be covered in Learner's Space -
Stars: ✭ 64 (+0%)
Mutual labels:  jupyter-notebook

GradCam Viz

Intro

Recently, Google AI Research published a paper titled "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks". In this paper the authors propose a new architecture that achieves state-of-the-art classification accuracy on ImageNet while being 8.4x smaller and 6.1x faster at inference than the best existing CNN. It also achieves a high level of accuracy on many other datasets such as CIFAR-100, Flowers and Cars. Good results on multiple datasets suggest that the architecture is well suited for transfer learning.

In this notebook, I try to compare the proposed EfficientNet models with other popular architectures such as DenseNet and ResNet. I use GradCam to highlight what the different models are looking at.

You can find a very nice implementation of GradCam here. I use the pretrained model weights provided here for visualization.
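
For reference, the sketch below shows roughly how such a GradCam pass can be wired up in PyTorch with forward and backward hooks. It is only a minimal illustrative version (the ResNet-50 backbone, the choice of `layer4` as the target layer, and the helper name `grad_cam` are my own assumptions), not the implementation referenced above.

```python
# Minimal Grad-CAM sketch: hook a convolutional block of a pretrained model,
# weight its activations by the pooled gradients of the target class score,
# and return a coarse heatmap that can be overlaid on the input image.
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_layer, class_idx=None):
    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["value"] = out.detach()

    def bwd_hook(module, grad_in, grad_out):
        gradients["value"] = grad_out[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    logits = model(image)                        # image: (1, 3, H, W), normalized
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()  # explain the predicted class
    model.zero_grad()
    logits[0, class_idx].backward()

    h1.remove(); h2.remove()

    acts, grads = activations["value"], gradients["value"]
    weights = grads.mean(dim=(2, 3), keepdim=True)          # pooled gradients per channel
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # weighted activation map
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0], class_idx

# Example with a pretrained ResNet-50; for an EfficientNet model, point
# target_layer at its final convolutional block instead.
model = models.resnet50(pretrained=True).eval()
heatmap, cls = grad_cam(model, torch.randn(1, 3, 224, 224), model.layer4)
```
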

About EfficientNet

A naive way to increase the performance of a neural network is to make the CNN deeper. A great example would be ResNet, which has several variants ranging from ResNet-18 to ResNet-200. Making the CNN deeper or wider may increase performance, but it comes at great computational cost. So we need some way to balance our ever-increasing quest for performance with computational cost. In the paper, the authors propose a new model scaling method that uses a simple compound coefficient to scale up CNNs in a more structured manner. This method helps them decide how much to increase the depth, width and input resolution of the network.
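
As a concrete illustration of compound scaling, the tiny sketch below uses the α, β, γ multipliers reported in the paper (1.2, 1.1, 1.15, chosen so that α·β²·γ² ≈ 2) to show how a single coefficient φ scales depth, width and resolution together. The snippet is only an illustration, not code from this repository.

```python
# Compound scaling: one coefficient phi scales depth, width and input
# resolution together. alpha, beta, gamma are the multipliers reported in
# the EfficientNet paper, found by a small grid search under the constraint
# alpha * beta**2 * gamma**2 ~= 2.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for scaling coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(5):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```
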

The authors wanted to optimize for both accuracy and efficiency, so they performed a neural architecture search. This search yielded the EfficientNet-B0 architecture, which looks pretty simple and straightforward to implement.
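
For example, a pretrained B0 model can be loaded in a couple of lines. The snippet below assumes the `efficientnet_pytorch` package, which is one common source of pretrained EfficientNet weights; the exact weights used in this notebook may come from elsewhere.

```python
# Load a pretrained EfficientNet-B0 and run a forward pass
# (assumes the `efficientnet_pytorch` package: `pip install efficientnet_pytorch`).
import torch
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained("efficientnet-b0")
model.eval()

x = torch.randn(1, 3, 224, 224)   # B0 expects 224x224 ImageNet-style inputs
with torch.no_grad():
    logits = model(x)
print(logits.shape)               # torch.Size([1, 1000]) -> ImageNet classes
```
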

EfficientNet-B0 Architecture

As you can see from the performance graph, EfficientNet achieves very high accuracy while using far fewer parameters. For more details, please refer to the paper.

EfficientNet Performance
