
raghakot / Keras Vis

License: MIT
Neural network visualization toolkit for keras

Projects that are alternatives of or similar to Keras Vis

Yann
This toolbox is support material for the book on CNN (http://www.convolution.network).
Stars: ✭ 41 (-98.59%)
Mutual labels:  neural-networks, theano
Lasagne
Lightweight library to build and train neural networks in Theano
Stars: ✭ 3,800 (+31.03%)
Mutual labels:  neural-networks, theano
Keras Rl
Deep Reinforcement Learning for Keras.
Stars: ✭ 5,166 (+78.14%)
Mutual labels:  neural-networks, theano
Keras Contrib
Keras community contributions
Stars: ✭ 1,532 (-47.17%)
Mutual labels:  neural-networks, theano
Hyperdensenet
This repository contains the code of HyperDenseNet, a hyper-densely connected CNN to segment medical images in multi-modal image scenarios.
Stars: ✭ 124 (-95.72%)
Mutual labels:  neural-networks, theano
Keras Gp
Keras + Gaussian Processes: Learning scalable deep and recurrent kernels.
Stars: ✭ 218 (-92.48%)
Mutual labels:  neural-networks, theano
Livianet
This repository contains the code of LiviaNET, a 3D fully convolutional neural network that was employed in our work: "3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study"
Stars: ✭ 143 (-95.07%)
Mutual labels:  neural-networks, theano
Deepjazz
Deep learning driven jazz generation using Keras & Theano!
Stars: ✭ 2,766 (-4.62%)
Mutual labels:  neural-networks, theano
STORN-keras
This is a STORN (Stochastic Recurrent Neural Network) implementation for keras!
Stars: ✭ 23 (-99.21%)
Mutual labels:  theano
Netket
Machine learning algorithms for many-body quantum systems
Stars: ✭ 256 (-91.17%)
Mutual labels:  neural-networks
theano-recurrence
Recurrent Neural Networks (RNN, GRU, LSTM) and their Bidirectional versions (BiRNN, BiGRU, BiLSTM) for word & character level language modelling in Theano
Stars: ✭ 40 (-98.62%)
Mutual labels:  theano
DockerKeras
We provide GPU-enabled docker images including Keras, TensorFlow, CNTK, MXNET and Theano.
Stars: ✭ 49 (-98.31%)
Mutual labels:  theano
Place Recognition Using Autoencoders And Nn
Place recognition with WiFi fingerprints using Autoencoders and Neural Networks
Stars: ✭ 256 (-91.17%)
Mutual labels:  neural-networks
Reuters-21578-Classification
Text classification with Reuters-21578 datasets using Gensim Word2Vec and Keras LSTM
Stars: ✭ 44 (-98.48%)
Mutual labels:  theano
Deeplearning.ai Notes
These are my notes which I prepared during deep learning specialization taught by AI guru Andrew NG. I have used diagrams and code snippets from the code whenever needed but following The Honor Code.
Stars: ✭ 262 (-90.97%)
Mutual labels:  neural-networks
DeepLearningCode
Deep learning related code
Stars: ✭ 21 (-99.28%)
Mutual labels:  theano
sequence-rnn-py
Sequence analyzing using Recurrent Neural Networks (RNN) based on Keras
Stars: ✭ 28 (-99.03%)
Mutual labels:  theano
Ergo
🧠 A tool that makes AI easier.
Stars: ✭ 264 (-90.9%)
Mutual labels:  neural-networks
Carrot
🥕 Evolutionary Neural Networks in JavaScript
Stars: ✭ 261 (-91%)
Mutual labels:  neural-networks
conx
The On-Ramp to Deep Learning
Stars: ✭ 93 (-96.79%)
Mutual labels:  theano

Keras Visualization Toolkit

keras-vis is a high-level toolkit for visualizing and debugging your trained keras neural net models. Currently supported visualizations include:

  • Activation maximization
  • Saliency maps
  • Class activation maps

All visualizations support N-dimensional image inputs by default, i.e., they generalize to models whose inputs are N-dimensional images.

The toolkit frames all of the above as energy minimization problems and exposes a clean, easy-to-use, and extensible interface. It is compatible with both theano and tensorflow backends, and with the 'channels_first' and 'channels_last' data formats.

Getting Started

In image backprop problems, the goal is to generate an input image that minimizes some loss function. Setting up an image backprop problem is easy.

Define weighted loss function

Various useful loss functions are defined in losses. A custom loss function can be defined by implementing Loss.build_loss.

from vis.losses import ActivationMaximization
from vis.regularizers import TotalVariation, LPNorm

# `model` is your trained Keras model; `keras_layer` is the layer whose
# activations you want to maximize, e.g. keras_layer = model.layers[layer_idx].
filter_indices = [1, 2, 3]

# Each tuple consists of (loss_function, weight).
# Add regularizers as needed.
losses = [
    (ActivationMaximization(keras_layer, filter_indices), 1),
    (LPNorm(model.input), 10),
    (TotalVariation(model.input), 10)
]
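
For illustration, a custom loss might look like the hypothetical sketch below, which subclasses Loss and implements build_loss using the Keras backend. The class name and the commented usage line are assumptions for this example, not part of the library.

from keras import backend as K
from vis.losses import Loss

class MeanActivationLoss(Loss):
    """Hypothetical loss that rewards a high mean activation of a layer's output."""
    def __init__(self, layer_output):
        super(MeanActivationLoss, self).__init__()
        self.name = 'MeanActivation Loss'
        self.layer_output = layer_output

    def build_loss(self):
        # Minimizing the negative mean maximizes the layer's average activation.
        return -K.mean(self.layer_output)

# It could then be weighted alongside the built-in losses, for example:
# losses.append((MeanActivationLoss(keras_layer.output), 1))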

Configure optimizer to minimize weighted loss

In order to generate natural looking images, the image search space is constrained using regularization penalties. Some common regularizers are defined in regularizers. Like loss functions, a custom regularizer can be defined by implementing Loss.build_loss.

from vis.optimizer import Optimizer

optimizer = Optimizer(model.input, losses)
# minimize() returns the optimized input image and the gradients with respect
# to the input; the third return value is unused here.
opt_img, grads, _ = optimizer.minimize()
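
To inspect the result, one might display the optimized image with matplotlib, assuming opt_img comes back in a displayable image space. This display step is an illustration, not part of the library's example.

import matplotlib.pyplot as plt

# opt_img is expected to be a (rows, cols, channels) array in image space;
# cast or rescale it as needed if it is not already in a displayable range.
plt.imshow(opt_img)
plt.axis('off')
plt.title('Optimized input image')
plt.show()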

Concrete examples of various supported visualizations can be found in the examples folder.

Installation

  1. Install keras with a theano or tensorflow backend. Note that this library requires Keras > 2.0

  2. Install keras-vis

From sources

sudo python setup.py install

PyPI package

sudo pip install keras-vis

Visualizations

NOTE: The links are currently broken and the entire documentation is being reworked. Please see examples/ for samples.

Neural nets are black boxes. In recent years, several approaches for understanding and visualizing convolutional networks have been developed in the literature. They give us a way to peer into the black box, diagnose misclassifications, and assess whether the network is overfitting or underfitting.

Guided backprop can also be used to create trippy art and for neural/texture style transfer, among a growing list of other applications.

Various visualizations, documented in their own pages, are summarized here.


Conv filter visualization

Convolutional layers learn 'template matching' filters that maximize the output when a similar template pattern is found in the input image. These templates can be visualized via Activation Maximization.
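
A minimal sketch of this, assuming the visualize_activation helper from vis.visualization; the VGG16 layer name and filter index here are illustrative choices.

from keras.applications import VGG16
from vis.utils import utils
from vis.visualization import visualize_activation

model = VGG16(weights='imagenet', include_top=True)

# Pick a convolutional layer by name; utils.find_layer_idx is assumed here.
layer_idx = utils.find_layer_idx(model, 'block3_conv1')

# Synthesize an input image that maximally activates filter 0 of that layer.
img = visualize_activation(model, layer_idx, filter_indices=0)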


Dense layer visualization

How can we assess whether a network is over/under fitting or generalizing well?
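
One common check is to visualize the input that maximally activates a class output and see whether it resembles that class. A rough sketch, again assuming the visualize_activation helper:

from keras.applications import VGG16
from vis.utils import utils
from vis.visualization import visualize_activation

model = VGG16(weights='imagenet', include_top=True)
layer_idx = utils.find_layer_idx(model, 'predictions')  # assumed helper

# Generate the input that maximizes the 'ouzel' class (index 20).
img = visualize_activation(model, layer_idx, filter_indices=20)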


Attention Maps

How can we assess whether a network is attending to correct parts of the image in order to generate a decision?
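
A rough sketch of how one might generate such maps with the high-level helpers in vis.visualization; the random seed image and the seed_input keyword are assumptions standing in for a real preprocessed image.

import numpy as np
from keras.applications import VGG16
from vis.utils import utils
from vis.visualization import visualize_saliency, visualize_cam

model = VGG16(weights='imagenet', include_top=True)
layer_idx = utils.find_layer_idx(model, 'predictions')  # assumed helper

# Stand-in for a real 224x224 RGB image loaded and resized for VGG16.
seed_img = np.random.randint(0, 256, (224, 224, 3)).astype('float32')

# Saliency: gradient of the class-20 ('ouzel') score w.r.t. the input pixels.
saliency = visualize_saliency(model, layer_idx, filter_indices=20, seed_input=seed_img)

# Class activation map (grad-CAM style), typically overlaid on the seed image.
cam = visualize_cam(model, layer_idx, filter_indices=20, seed_input=seed_img)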


Generating animated gif of optimization progress

It is possible to generate an animated gif of the optimization progress by leveraging callbacks. The following example shows how to visualize activation maximization for the 'ouzel' class (output_index: 20).

from keras.applications import VGG16

from vis.losses import ActivationMaximization
from vis.regularizers import TotalVariation, LPNorm
from vis.input_modifiers import Jitter
from vis.optimizer import Optimizer
from vis.callbacks import GifGenerator

# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)
print('Model loaded.')

# The name of the layer we want to visualize
# (see model definition in vggnet.py)
layer_name = 'predictions'
layer_dict = dict([(layer.name, layer) for layer in model.layers[1:]])
output_class = [20]

losses = [
    (ActivationMaximization(layer_dict[layer_name], output_class), 2),
    (LPNorm(model.input), 10),
    (TotalVariation(model.input), 10)
]
opt = Optimizer(model.input, losses)
opt.minimize(max_iter=500, verbose=True, input_modifiers=[Jitter()], callbacks=[GifGenerator('opt_progress')])

Notice how the output jitters around? This is because we used Jitter, a kind of InputModifier that is known to produce crisper activation maximization images. As an exercise, try:

  • Without Jitter (a minimal variation is sketched after this list)
  • Varying the loss weights
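
A minimal sketch combining both variations, reusing the names defined in the example above; the output filename and the new weighting are illustrative choices only.

# Re-run the optimization without the Jitter input modifier and with a weaker
# total-variation penalty; expect a noisier, less crisp result.
losses_alt = [
    (ActivationMaximization(layer_dict[layer_name], output_class), 2),
    (LPNorm(model.input), 10),
    (TotalVariation(model.input), 1)
]
opt = Optimizer(model.input, losses_alt)
opt.minimize(max_iter=500, verbose=True, callbacks=[GifGenerator('opt_progress_no_jitter')])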

[opt_progress.gif: animation of the optimization progress]


Citation

Please cite keras-vis in your publications if it helped your research. Here is an example BibTeX entry:

@misc{raghakotkerasvis,
  title={keras-vis},
  author={Kotikalapudi, Raghavendra and contributors},
  year={2017},
  publisher={GitHub},
  howpublished={\url{https://github.com/raghakot/keras-vis}},
}