
nguyenhoa93 / cnn-visualization-keras-tf2

License: MIT
Filter visualization, Feature map visualization, Guided Backprop, GradCAM, Guided-GradCAM, Deep Dream

Programming Languages

Jupyter Notebook, Python

Projects that are alternatives to or similar to cnn-visualization-keras-tf2

GradCAM and GuidedGradCAM tf2
Implementation of GradCAM & Guided GradCAM with Tensorflow 2.x
Stars: ✭ 16 (-23.81%)
Mutual labels:  keras-tensorflow, guided-backpropagation, guided-grad-cam, gradcam
Pytorch Cnn Visualizations
Pytorch implementation of convolutional neural network visualization techniques
Stars: ✭ 6,167 (+29266.67%)
Mutual labels:  guided-backpropagation, guided-grad-cam, cnn-visualization
Kervolution
Kervolution implementation using TF2.0
Stars: ✭ 20 (-4.76%)
Mutual labels:  tf2, keras-tensorflow
3D-GuidedGradCAM-for-Medical-Imaging
This repo contains an implementation for generating Guided-GradCAM for 3D medical imaging from NIfTI files in TensorFlow 2.0. Different input files can be used; in that case, edit the input to the Guided-GradCAM model.
Stars: ✭ 60 (+185.71%)
Mutual labels:  guided-grad-cam, cnn-visualization
tf-faster-rcnn
TensorFlow 2 Faster R-CNN implementation from scratch, supporting batch processing with MobileNetV2 and VGG16 backbones
Stars: ✭ 88 (+319.05%)
Mutual labels:  tf2, keras-tensorflow
WGAN GP
Keras model and TensorFlow optimization of "Improved Training of Wasserstein GANs"
Stars: ✭ 16 (-23.81%)
Mutual labels:  keras-tensorflow
uncertainty-wizard
Uncertainty Wizard is a plugin on top of tensorflow.keras that lets you easily and efficiently create uncertainty-aware deep neural networks. Also useful if you want to train multiple small models in parallel.
Stars: ✭ 39 (+85.71%)
Mutual labels:  keras-tensorflow
Xtreme-Vision
A high-level Python library to empower students and developers to build applications and systems with computer vision capabilities.
Stars: ✭ 77 (+266.67%)
Mutual labels:  keras-tensorflow
manning tf2 in action
The official code repository for "TensorFlow in Action" by Manning.
Stars: ✭ 61 (+190.48%)
Mutual labels:  tf2
GTAV-Self-driving-car
Self driving car in GTAV with Deep Learning
Stars: ✭ 15 (-28.57%)
Mutual labels:  keras-tensorflow
deep-blueberry
If you've always wanted to learn about deep-learning but don't know where to start, then you might have stumbled upon the right place!
Stars: ✭ 17 (-19.05%)
Mutual labels:  keras-tensorflow
NeuralNetworks
Implementation of a Neural Network that can detect whether a video is in-game or not
Stars: ✭ 64 (+204.76%)
Mutual labels:  keras-tensorflow
machine learning course
Artificial intelligence/machine learning course at UCF in Spring 2020 (Fall 2019 and Spring 2019)
Stars: ✭ 47 (+123.81%)
Mutual labels:  keras-tensorflow
a-frame-demos
VR demos built with A-Frame
Stars: ✭ 19 (-9.52%)
Mutual labels:  demos
TF2HUD.Fixes
Collection of bug fixes and QOL changes to the default Team Fortress 2 HUD.
Stars: ✭ 83 (+295.24%)
Mutual labels:  tf2
100DaysOfMLCode
I am taking up the #100DaysOfMLCode Challenge 😎
Stars: ✭ 12 (-42.86%)
Mutual labels:  keras-tensorflow
TF2-Item-Plugins
Manage your cosmetic and weapons freely! Set Unusual Effects, Australiums, Festives, War Paints (w/ Wear), Spells and Paints at will!
Stars: ✭ 22 (+4.76%)
Mutual labels:  tf2
KerasMNIST
Keras MNIST for Handwriting Detection
Stars: ✭ 25 (+19.05%)
Mutual labels:  keras-tensorflow
fiddler-core-demos
Sample applications demonstrating usages of Progress® Telerik® FiddlerCore Embedded Engine.
Stars: ✭ 64 (+204.76%)
Mutual labels:  demos
CRNN-OCR-lite
Lightweight CRNN for OCR (including handwritten text) with depthwise separable convolutions and spatial transformer module [keras+tf]
Stars: ✭ 130 (+519.05%)
Mutual labels:  keras-tensorflow

CNN Visualization and Explanation

This project aims to visualize filters, feature maps, and guided backpropagation from any convolutional layer of the ImageNet pre-trained models available in tf.keras.applications (TF 2.3). This will help you observe how filters and feature maps change through each convolutional layer from input to output.
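
As a minimal sketch (not the repository's exact code), the snippet below shows how one of these pre-trained models can be loaded from tf.keras.applications and how its convolutional layers can be enumerated; VGG16 is just an illustrative choice.

# Illustrative sketch: load a pre-trained ImageNet model and list its conv layers.
import tensorflow as tf

model = tf.keras.applications.VGG16(weights="imagenet")
conv_layers = [layer.name for layer in model.layers
               if isinstance(layer, tf.keras.layers.Conv2D)]
print(conv_layers)  # ['block1_conv1', 'block1_conv2', ..., 'block5_conv3']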

With any uploaded image, you can also run classification with any of the above models and generate GradCAM and Guided-GradCAM visualizations to see the important features on which the model bases its decision.

If "art" is in your blood, you can use any model to generate hallucination-like visuals from your original images. For this feature, personally, I highly recommend trying with "InceptionV3" model as the deep-dream images generated from this model are appealing.

The current version provides 26 pre-trained models.

How to use

Run with your resource

  • Clone this repo:
git clone https://github.com/nguyenhoa93/cnn-visualization-keras-tf2
cd cnn-visualization-keras-tf2
  • Create a virtual environment and install the dependencies:
conda create -n cnn-vis python=3.6
conda activate cnn-vis
pip install -r requirements.txt
  • Run the demo with the notebook visualization.ipynb

Run on Google Colab

Voila! You got it.

Briefs

Filter visualization
Simply plot the learned filters.
* Step 1: Find a convolutional layer.
* Step 2: Get the weights of that convolutional layer; these weights are the filters at this layer.
* Step 3: Plot the filters with the values from Step 2.
This method does not require an input image.

Example: VGG-16, block1_conv1
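
A minimal sketch of this idea, assuming VGG-16 and the block1_conv1 layer as above (not the project's exact implementation):

# Sketch: plot the 64 learned 3x3 filters of VGG-16's block1_conv1 layer.
import matplotlib.pyplot as plt
import tensorflow as tf

model = tf.keras.applications.VGG16(weights="imagenet")
filters, biases = model.get_layer("block1_conv1").get_weights()  # filters: (3, 3, 3, 64)
# Normalize filter values to [0, 1] so each 3x3x3 filter can be shown as an RGB patch.
filters = (filters - filters.min()) / (filters.max() - filters.min())

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(filters[:, :, :, i])
    ax.axis("off")
plt.show()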

Feature map visualization
Plot the feature maps obtained when feeding an image through the network.
* Step 1: Find a convolutional layer.
* Step 2: Build a feature model from the input up to that convolutional layer.
* Step 3: Feed the image to the feature model to get the feature maps.
* Step 4: Plot the feature maps.

Examples: VGG-16, block1_conv1 and block5_conv3
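
A minimal sketch of these steps, assuming VGG-16, the block1_conv1 layer, and a placeholder image path ("your_image.jpg") that you would replace with your own file:

# Sketch: build a feature model up to block1_conv1 and plot a few feature maps.
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet")
feature_model = tf.keras.Model(inputs=base.input,
                               outputs=base.get_layer("block1_conv1").output)

img = tf.keras.preprocessing.image.load_img("your_image.jpg", target_size=(224, 224))
x = tf.keras.applications.vgg16.preprocess_input(
    tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...])

feature_maps = feature_model.predict(x)  # shape: (1, 224, 224, 64)
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(feature_maps[0, :, :, i], cmap="viridis")
    ax.axis("off")
plt.show()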

Guided Backpropagation
Backpropagate from a particular convolutional layer to the input image with a modified ReLU gradient: only positive gradients flowing through positive activations are propagated.

Example: VGG-16, block1_conv1 & block5_conv3
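
A minimal sketch of guided backpropagation under these assumptions (VGG-16, block5_conv3, and a random placeholder tensor standing in for a preprocessed input image); the repository's own implementation may differ:

# Sketch: guided backpropagation by swapping ReLU's gradient for a "guided" version.
import tensorflow as tf

@tf.custom_gradient
def guided_relu(x):
    def grad(dy):
        # Pass gradients only where both the gradient and the activation are positive.
        return tf.cast(dy > 0, dy.dtype) * tf.cast(x > 0, dy.dtype) * dy
    return tf.nn.relu(x), grad

base = tf.keras.applications.VGG16(weights="imagenet")
gb_model = tf.keras.Model(inputs=base.input,
                          outputs=base.get_layer("block5_conv3").output)
# Replace every built-in ReLU activation with the guided version.
for layer in gb_model.layers:
    if hasattr(layer, "activation") and layer.activation == tf.keras.activations.relu:
        layer.activation = guided_relu

img = tf.random.uniform((1, 224, 224, 3))  # placeholder for a preprocessed input image
with tf.GradientTape() as tape:
    tape.watch(img)
    conv_output = gb_model(img)
guided_grads = tape.gradient(conv_output, img)  # guided gradients w.r.t. the input image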

GradCAM
* Step 1: Determine the last convolutional layer.
* Step 2: Compute the gradient of the `pre-softmax` score for the target class with respect to the output of the last convolutional layer, then apply global average pooling to the gradients to obtain an importance weight for each channel.
* Step 3: Linearly combine the feature maps of the last convolutional layer with these weights, then apply ReLU to the linear combination.

Example: InceptionV3, explanation for the "lakeside" class
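
A minimal sketch of these steps with tf.GradientTape, assuming InceptionV3, its last mixed block (mixed10) as the last convolutional layer, a random placeholder input, and an illustrative target class index; for a strictly pre-softmax score you would rebuild the top layer without its softmax activation:

# Sketch: GradCAM heatmap for one target class.
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights="imagenet")
grad_model = tf.keras.Model(model.input,
                            [model.get_layer("mixed10").output, model.output])

img = tf.random.uniform((1, 299, 299, 3))  # placeholder for a preprocessed input image
class_idx = 975                            # illustrative ImageNet class index

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    score = preds[:, class_idx]            # simplification: post-softmax prediction
grads = tape.gradient(score, conv_out)                               # Step 2: gradients
weights = tf.reduce_mean(grads, axis=(1, 2))                         # global average pooling
cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)   # Step 3: weighted sum
cam = tf.nn.relu(cam)                                                # keep positive evidence only
cam = cam / (tf.reduce_max(cam) + 1e-8)                              # normalize to [0, 1]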

Guided-GradCAM
* Step 1: Calculate guided backpropagation from the last convolutional layer to the input.
* Step 2: Upsample the GradCAM heatmap to the size of the input.
* Step 3: Apply element-wise multiplication of the guided backpropagation map and the upsampled GradCAM.

Example: InceptionV3, explanation for the "boathouse" class
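
A minimal sketch of the combination step; guided_grads and cam stand in for the outputs of the two sketches above, computed for the same model and the same input image (here faked with random tensors so the snippet is self-contained):

# Sketch: combine guided backpropagation with an upsampled GradCAM heatmap.
import tensorflow as tf

guided_grads = tf.random.uniform((1, 299, 299, 3))  # placeholder guided backprop map
cam = tf.random.uniform((1, 8, 8))                  # placeholder GradCAM heatmap

cam_up = tf.image.resize(cam[..., tf.newaxis], (299, 299))  # Step 2: upsample to input size
guided_gradcam = guided_grads * cam_up                      # Step 3: element-wise product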

Deep Dream
See more in this excellent tutorial from François Chollet: https://keras.io/examples/generative/deep_dream/

Example: InceptionV3, original image and Deep Dream result
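
For orientation, a very condensed sketch of the gradient-ascent idea behind Deep Dream (the linked tutorial covers the full recipe, including octaves and image post-processing); the chosen layers and step size are illustrative:

# Sketch: maximize activations of selected InceptionV3 layers by gradient ascent.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False)
dream_model = tf.keras.Model(base.input,
                             [base.get_layer(name).output for name in ("mixed3", "mixed5")])

img = tf.Variable(tf.random.uniform((1, 299, 299, 3)))  # placeholder for a preprocessed image
for _ in range(20):
    with tf.GradientTape() as tape:
        activations = dream_model(img)
        loss = tf.add_n([tf.reduce_mean(a) for a in activations])
    grads = tape.gradient(loss, img)
    grads /= tf.math.reduce_std(grads) + 1e-8  # normalize the gradient step
    img.assign_add(0.01 * grads)               # ascend: amplify what the chosen layers respond to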

References

  1. How to Visualize Filters and Feature Maps in Convolutional Neural Networks by Machine Learning Mastery
  2. PyTorch CNN visualization by utkuozbulak: https://github.com/utkuozbulak
  3. CNN visualization with TF 1.3 by conan7882: https://github.com/conan7882/CNN-Visualization
  4. Deep Dream Tutorial from François Chollet: https://keras.io/examples/generative/deep_dream/