
Zennit


Zennit (Zennit explains neural networks in torch) is a high-level framework in Python, using PyTorch, for explaining and exploring neural networks. Its design philosophy is to provide high customizability and to integrate as a standardized solution for applying rule-based attribution methods in research, with a strong focus on Layer-wise Relevance Propagation (LRP). Zennit strictly requires models to use PyTorch's torch.nn.Module structure (including activation functions).

Zennit is currently under active development, but should be mostly stable.

If you find Zennit useful for your research, please consider citing our related paper:

@article{anders2021software,
      author  = {Anders, Christopher J. and
                 Neumann, David and
                 Samek, Wojciech and
                 Müller, Klaus-Robert and
                 Lapuschkin, Sebastian},
      title   = {Software for Dataset-wide XAI: From Local Explanations to Global Insights with {Zennit}, {CoRelAy}, and {ViRelAy}},
      journal = {CoRR},
      volume  = {abs/2106.13200},
      year    = {2021},
}

Documentation

The latest documentation is hosted at zennit.readthedocs.io.

Install

To install directly from PyPI using pip, use:

$ pip install zennit

Alternatively, install from a manually cloned repository to try out the examples:

$ git clone https://github.com/chr5tphr/zennit.git
$ pip install ./zennit

Usage

At its heart, Zennit registers hooks at PyTorch's Module level to modify the backward pass so that it produces rule-based attributions such as LRP, instead of the usual gradient. All rules are implemented as hooks (zennit/rules.py), and most build on the basic LRP hook BasicHook (zennit/core.py).
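The redistribution such a rule performs can be pictured without any of Zennit's machinery. The following is a minimal, self-contained sketch of the LRP-epsilon rule for a single linear layer in plain Python (illustrative only; Zennit implements this as a backward hook, and the function name is hypothetical):

```python
# Minimal sketch of the LRP-epsilon rule for one linear layer.
# Illustrative only -- Zennit implements this as a backward hook instead.

def lrp_epsilon(weights, inputs, relevance_out, eps=1e-6):
    """Redistribute output relevance to the inputs of a linear layer.

    weights:       list of output rows, each a list of input weights
    inputs:        activations entering the layer
    relevance_out: relevance assigned to each output neuron
    """
    # pre-activations z_j = sum_i a_i * w_ji
    z = [sum(w_ji * a_i for w_ji, a_i in zip(row, inputs)) for row in weights]
    # input i receives the share a_i * w_ji / (z_j + eps) of each R_j
    return [
        sum(row[i] * inputs[i] * r_j / (z_j + eps)
            for row, z_j, r_j in zip(weights, z, relevance_out))
        for i in range(len(inputs))
    ]

weights = [[1.0, 2.0], [0.5, 1.5]]
inputs = [2.0, 1.0]
relevance = lrp_epsilon(weights, inputs, [1.0, 1.0])
```

Up to the stabilizing eps, the input relevances sum to the output relevance, which is the conservation property LRP rules are built around.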

Composites (zennit/composites.py) are a way of choosing the right hook for the right layer. In addition to the abstract NameMapComposite, which assigns hooks to layers by name, and LayerMapComposite, which assigns hooks to layers based on their type, there are explicit composites such as EpsilonGammaBox (ZBox for the input, Epsilon for dense layers, Gamma for convolutions) or EpsilonPlus (Epsilon for dense layers, ZPlus for convolutions). All composites may be used by importing them directly from zennit.composites, or by using their snake-case name as a key into zennit.composites.COMPOSITES.
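The snake-case lookup can be pictured as a plain name-to-class registry. The sketch below is a simplified stand-in, not Zennit's implementation; the decorator and the toy EpsilonGammaBox class here are hypothetical:

```python
# Simplified stand-in for a composite registry keyed by snake-case names.
# Illustrative only -- the real registry is zennit.composites.COMPOSITES.

import re

COMPOSITES = {}

def register_composite(cls):
    """Register a composite class under its snake-case name."""
    # CamelCase -> snake_case, e.g. 'EpsilonGammaBox' -> 'epsilon_gamma_box'
    name = re.sub(r'(?<!^)(?=[A-Z])', '_', cls.__name__).lower()
    COMPOSITES[name] = cls
    return cls

@register_composite
class EpsilonGammaBox:
    def __init__(self, low=-3.0, high=3.0):
        self.low, self.high = low, high

# look a composite up by its snake-case name and instantiate it
composite = COMPOSITES['epsilon_gamma_box'](low=-1.0, high=1.0)
```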

Canonizers (zennit/canonizers.py) temporarily transform models into a canonical form, if required, like SequentialMergeBatchNorm, which automatically detects and merges BatchNorm layers followed by linear layers in sequential networks, or AttributeCanonizer, which temporarily overwrites attributes of applicable modules, e.g. to handle the residual connection in ResNet-Bottleneck modules.
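The effect of merging a batch-norm layer into the preceding linear layer comes down to a small piece of algebra: normalization with fixed running statistics is itself an affine map, so it can be folded into rescaled weights and a shifted bias. A minimal scalar sketch under that assumption (not Zennit's implementation, and the function name is hypothetical):

```python
# Fold a batch norm (running mean/var, affine gamma/beta) into the
# preceding scalar linear layer y = w*x + b.
# Scalar sketch for illustration -- not Zennit's implementation.

import math

def merge_batchnorm(w, b, mean, var, gamma, beta, eps=1e-5):
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

w, b = 2.0, 0.5
mean, var, gamma, beta = 1.0, 4.0, 1.5, -0.25
w_m, b_m = merge_batchnorm(w, b, mean, var, gamma, beta)

# the merged layer computes the same function as linear followed by batch norm
x = 3.0
linear_then_bn = (w * x + b - mean) / math.sqrt(var + 1e-5) * gamma + beta
merged = w_m * x + b_m
```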

Attributors (zennit/attribution.py) directly execute the necessary steps to apply certain attribution methods, like the simple Gradient, SmoothGrad or Occlusion. An optional Composite may be passed, which will be applied during the Attributor's execution to compute a modified gradient, enabling hybrid methods.
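The idea behind SmoothGrad, for instance, is simply to average gradients over noisy copies of the input. A toy sketch using an analytic gradient in place of a neural network (illustrative only, not Zennit's implementation):

```python
# Toy sketch of the SmoothGrad idea: average the gradient over noisy
# copies of the input. Not Zennit's implementation.

import random

def smoothgrad(grad_fn, x, n_samples=100, noise_scale=0.1, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # perturb the input with Gaussian noise and accumulate the gradient
        total += grad_fn(x + rng.gauss(0.0, noise_scale))
    return total / n_samples

# toy model f(x) = x**2 with analytic gradient f'(x) = 2*x
grad = smoothgrad(lambda x: 2.0 * x, x=1.5)
```

For this linear gradient the noisy average stays close to the plain gradient 2 * 1.5 = 3; for real networks, the averaging smooths out noisy saliency maps.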

Using all of these components, an LRP-type attribution for VGG16 with batch-norm layers with respect to label 0 may be computed using:

import torch
from torchvision.models import vgg16_bn

from zennit.composites import EpsilonGammaBox
from zennit.canonizers import SequentialMergeBatchNorm
from zennit.attribution import Gradient


# random input data and VGG16 with batch-norm layers
data = torch.randn(1, 3, 224, 224)
model = vgg16_bn()

# merge batch-norm layers into their adjacent linear layers
canonizers = [SequentialMergeBatchNorm()]
composite = EpsilonGammaBox(low=-3., high=3., canonizers=canonizers)

# compute the attribution, with a one-hot output relevance for label 0
with Gradient(model=model, composite=composite) as attributor:
    out, relevance = attributor(data, torch.eye(1000)[[0]])

A similar setup using the example script produces the following attribution heatmaps: beacon heatmaps

For more details and examples, have a look at our documentation.

More Example Heatmaps

More heatmaps of various attribution methods for VGG16 and ResNet50, all generated using share/example/feed_forward.py, can be found below.

Heatmaps for VGG16

vgg16 heatmaps

Heatmaps for ResNet50

resnet50 heatmaps

Contributing

See CONTRIBUTING.md for detailed instructions on how to contribute.

License

Zennit is licensed under the GNU LESSER GENERAL PUBLIC LICENSE VERSION 3 OR LATER -- see the LICENSE, COPYING and COPYING.LESSER files for details.
