
atulshanbhag / Layerwise-Relevance-Propagation

Licence: other
Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers


Projects that are alternatives of or similar to Layerwise-Relevance-Propagation

ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (-39.74%)
Mutual labels:  interpretable-deep-learning, interpretable-machine-learning
ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
Stars: ✭ 62 (-20.51%)
Mutual labels:  interpretable-deep-learning, interpretable-machine-learning
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+2982.05%)
Mutual labels:  interpretable-deep-learning, interpretable-machine-learning
XAIatERUM2020
Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (-33.33%)
Mutual labels:  interpretable-machine-learning
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human in Loop and Visual Analytics.
Stars: ✭ 51 (-34.62%)
Mutual labels:  interpretable-machine-learning
ISeeU
ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU
Stars: ✭ 20 (-74.36%)
Mutual labels:  interpretable-deep-learning
ml-fairness-framework
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Stars: ✭ 59 (-24.36%)
Mutual labels:  interpretable-machine-learning
self critical vqa
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (-50%)
Mutual labels:  interpretable-deep-learning
path explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
Stars: ✭ 151 (+93.59%)
Mutual labels:  interpretable-deep-learning
ShapML.jl
A Julia package for interpretable machine learning with stochastic Shapley values
Stars: ✭ 63 (-19.23%)
Mutual labels:  interpretable-machine-learning
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (+16.67%)
Mutual labels:  interpretable-machine-learning
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-78.21%)
Mutual labels:  interpretable-machine-learning
redunet paper
Official NumPy Implementation of Deep Networks from the Principle of Rate Reduction (2021)
Stars: ✭ 49 (-37.18%)
Mutual labels:  interpretable-deep-learning
rl singing voice
Unsupervised Representation Learning for Singing Voice Separation
Stars: ✭ 18 (-76.92%)
Mutual labels:  interpretable-deep-learning
Awesome-XAI-Evaluation
Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
Stars: ✭ 57 (-26.92%)
Mutual labels:  interpretable-machine-learning
InterpretableCNN
No description or website provided.
Stars: ✭ 36 (-53.85%)
Mutual labels:  interpretable-deep-learning
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+41.03%)
Mutual labels:  interpretable-deep-learning
CNN-Units-in-NLP
✂️ Repository for our ICLR 2019 paper: Discovery of Natural Language Concepts in Individual Units of CNNs
Stars: ✭ 26 (-66.67%)
Mutual labels:  interpretable-deep-learning
fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (+1.28%)
Mutual labels:  interpretable-machine-learning
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-80.77%)
Mutual labels:  interpretable-machine-learning

Layerwise-Relevance-Propagation

Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers, using TensorFlow and Keras.
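For intuition, the core idea of LRP is to take the relevance assigned to a layer's output and redistribute it onto the layer's inputs in proportion to each input's contribution. A minimal NumPy sketch of the LRP-ε rule for a single dense layer is below; the function name and shapes are illustrative, not this repository's actual API:

```python
import numpy as np

np.random.seed(0)

def lrp_epsilon_dense(a, w, b, r_out, eps=1e-9):
    """LRP-epsilon rule for one dense layer: redistribute the relevance
    r_out at the layer's output back onto its inputs, in proportion to
    each input's contribution to the pre-activations."""
    z = a @ w + b                    # forward pre-activations, shape (out,)
    z = z + eps * np.sign(z)         # epsilon stabilizer avoids division by zero
    s = r_out / z                    # element-wise relevance "messages"
    return a * (w @ s)               # back-distribute onto inputs, shape (in,)

# Toy data (hypothetical, for illustration only)
a = np.array([1.0, 2.0, 0.5])        # input activations
w = np.random.randn(3, 4)            # weights
r_out = np.abs(np.random.randn(4))   # relevance at the layer's output
r_in = lrp_epsilon_dense(a, w, np.zeros(4), r_out)
```

With zero bias and a small ε, the rule is (approximately) conservative: the total relevance entering the layer equals the total relevance leaving it, which is what makes the resulting heatmaps interpretable as a decomposition of the network's output.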

Results

MNIST

VGG

Instructions

MNIST

  • Run train.py to train the model.
  • Trained weights will be saved in logs/.
  • Run lrp.py to compute Layerwise Relevance Propagation heatmaps.

NOTE: If you are using a TensorFlow version older than 1.5.0, change tf.nn.softmax_cross_entropy_with_logits_v2 to tf.nn.softmax_cross_entropy_with_logits.

VGG

  • Run lrp.py <image_1> <image_2> ... <image_n> to run Layerwise Relevance Propagation on a list of images.
  • All results will be saved in results/.
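Before the saved results can be rendered as heatmaps, the per-pixel relevance scores are typically min-max normalized to a displayable range. A small sketch of that step, under the assumption that the relevance map is a 2-D array (the function name is hypothetical, not part of this repository):

```python
import numpy as np

def relevance_to_heatmap(relevance):
    """Min-max normalize a 2-D relevance map to [0, 1] for display
    as a heatmap (e.g. via a matplotlib colormap)."""
    shifted = relevance - relevance.min()
    span = shifted.max()
    return shifted / span if span > 0 else shifted

# Toy 2x2 relevance map for illustration
heatmap = relevance_to_heatmap(np.array([[0.0, 2.0], [4.0, 8.0]]))
```

The normalized array can then be passed directly to an image-plotting routine with a diverging or sequential colormap to produce figures like those in the Results section.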

Reference
