
PaddlePaddle / InterpretDL

License: Apache-2.0
InterpretDL: Interpretation of Deep Learning Models, a model interpretability algorithm library based on PaddlePaddle (飞桨).

Programming Languages

Jupyter Notebook — 11,667 projects
Python — 139,335 projects (#7 most used programming language)

Projects that are alternatives to or similar to InterpretDL

Pytorch Grad Cam
Many Class Activation Map methods implemented in Pytorch for CNNs and Vision Transformers. Including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM
Stars: ✭ 3,814 (+3052.07%)
Mutual labels:  grad-cam, visualizations
WhiteBox-Part1
In this part, I've introduced and experimented with ways to interpret and evaluate models in the image domain. (PyTorch)
Stars: ✭ 34 (-71.9%)
Mutual labels:  grad-cam, smoothgrad
Transformers-Tutorials
This repository contains demos I made with the Transformers library by HuggingFace.
Stars: ✭ 2,828 (+2237.19%)
Mutual labels:  vision-transformer
towhee
Towhee is a framework that is dedicated to making neural data processing pipelines simple and fast.
Stars: ✭ 821 (+578.51%)
Mutual labels:  vision-transformer
lime
A library for drawing graphics on the console screen
Stars: ✭ 32 (-73.55%)
Mutual labels:  lime
ViT-V-Net for 3D Image Registration Pytorch
Vision Transformer for 3D medical image registration (Pytorch).
Stars: ✭ 169 (+39.67%)
Mutual labels:  vision-transformer
BookSource
Source code for the book Deep Learning Applications in Practice with PaddlePaddle (《深度学习应用实战之PaddlePaddle》).
Stars: ✭ 17 (-85.95%)
Mutual labels:  paddlepaddle
multicycles
Multicycles.org aggregates more than 200 shared-vehicle services (bikes, scooters, mopeds and cars) on one map. Demo app for the Data Flow API, see https://flow.fluctuo.com
Stars: ✭ 84 (-30.58%)
Mutual labels:  lime
Paddle-SEQ
A low-code framework for sequence data processing; a training task can be completed in as few as two lines of code!
Stars: ✭ 13 (-89.26%)
Mutual labels:  paddlepaddle
YOLOS
You Only Look at One Sequence (NeurIPS 2021)
Stars: ✭ 612 (+405.79%)
Mutual labels:  vision-transformer
FSL-Mate
FSL-Mate: A collection of resources for few-shot learning (FSL).
Stars: ✭ 1,346 (+1012.4%)
Mutual labels:  paddlepaddle
insight-face-paddle
End-to-end face detection and recognition system using PaddlePaddle.
Stars: ✭ 52 (-57.02%)
Mutual labels:  paddlepaddle
spotfire-mods
Spotfire Mods by TIBCO Spotfire®
Stars: ✭ 39 (-67.77%)
Mutual labels:  visualizations
VisualML
Interactive Visual Machine Learning Demos.
Stars: ✭ 104 (-14.05%)
Mutual labels:  visualizations
LIME
Implementation of the paper "LIME: Low-Light Image Enhancement via Illumination Map Estimation", written for my graduation thesis.
Stars: ✭ 29 (-76.03%)
Mutual labels:  lime
image-classification
A collection of SOTA Image Classification Models in PyTorch
Stars: ✭ 70 (-42.15%)
Mutual labels:  vision-transformer
python-data-visualization
Curated Python Notebooks for Data Visualization
Stars: ✭ 22 (-81.82%)
Mutual labels:  visualizations
Active-Explainable-Classification
A set of tools for leveraging pre-trained embeddings, active learning and model explainability for efficient document classification
Stars: ✭ 28 (-76.86%)
Mutual labels:  model-interpretation
Paddle-Custom-Operators
Paddle Custom Operators.
Stars: ✭ 24 (-80.17%)
Mutual labels:  paddlepaddle
pytorch-vit
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Stars: ✭ 250 (+106.61%)
Mutual labels:  vision-transformer

中文 | English

[Badges: Release | PyPI | CircleCI | Documentation Status | Downloads]

InterpretDL: Interpretation of Deep Learning Models based on PaddlePaddle

InterpretDL, short for interpretations of deep learning models, is a model interpretation toolkit for PaddlePaddle models. It contains implementations of many interpretation algorithms, including LIME, Grad-CAM, and Integrated Gradients, as well as several SOTA and recently proposed algorithms.

InterpretDL is under active development, and all contributions are welcome!

Why InterpretDL

Increasingly complicated deep learning models make it difficult for people to understand their internal workings. Interpretability of black-box models has become a research focus for many talented researchers. InterpretDL provides a collection of both classical and new algorithms for interpreting models.

By utilizing these helpful methods, people can better understand why models work and why they don't, thus contributing to the model development process.

For researchers working on designing new interpretation algorithms, InterpretDL gives easy access to existing methods against which they can compare their own work.

🔥 🔥 🔥 News 🔥 🔥 🔥

  • (2022/04/27) A getting-started tutorial is provided; check it on GitHub or NBViewer. Usage examples are provided for each algorithm (both Interpreter and Evaluator). We are currently preparing more tutorials to make InterpretDL easy to use. Both tutorials and examples can be accessed under the tutorials folder.

  • (2022/01/06) Implemented the Cross-Model Consensus Explanation method. In brief, this method averages the explanation results from several models. Instead of interpreting individual models, this method is able to identify the discriminative features in the input data with accurate localization. See the paper for details.

    • Consensus: Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou. Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study. arXiv:2109.00707.

We show a demo with six models (the last column shows the consensus explanation); using more models (around 15) would give an even better result. See the example for more details.

Consensus Result
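For a concrete sense of the method, here is a minimal sketch that averages SmoothGrad explanations over a few ImageNet models from paddle.vision. It assumes interpret returns a NumPy array, as in the Getting Started example below; the per-model min-max normalization is our assumption, and the official example may use a dedicated Consensus interpreter instead.

import numpy as np
import interpretdl as it
from paddle.vision.models import mobilenet_v2, resnet50, resnet101

# explanations from several models are averaged into a consensus
models = [resnet50(pretrained=True), resnet101(pretrained=True), mobilenet_v2(pretrained=True)]
explanations = []
for model in models:
    sg = it.SmoothGradInterpreter(model, use_cuda=True)
    exp = sg.interpret("test.jpg", visual=False, save_path=None)
    # normalize per model so that no single model dominates the average (assumption)
    exp = (exp - exp.min()) / (exp.max() - exp.min() + 1e-9)
    explanations.append(exp)
consensus = np.mean(explanations, axis=0)  # the consensus explanation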

Demo

Interpretation algorithms give a hint of why a black-box model makes its decision.

The following table gives visualizations of several interpretation algorithms applied to the original image to tell us why the model predicts "bull_mastiff."

Original Image | IntGrad (demo) | SG (demo) | LIME (demo) | Grad-CAM (demo)

For the sentiment analysis task, the reasons why a model gives a positive or negative prediction can be visualized as follows. A quick demo can be found here. Samples in Chinese are also available here.


Installation

InterpretDL requires the deep learning framework paddlepaddle; versions with CUDA support are recommended.

Pip installation

pip install interpretdl

# or via the Tsinghua mirror
pip install interpretdl -i https://pypi.tuna.tsinghua.edu.cn/simple
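
A quick sanity check after installation (a minimal snippet; SmoothGradInterpreter is the interpreter used in Getting Started below):

# verify that the package imports and an interpreter class is available
import interpretdl as it
print(it.SmoothGradInterpreter)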

Developer installation

git clone https://github.com/PaddlePaddle/InterpretDL.git
# ... fix bugs or add new features
cd InterpretDL && pip install -e .
# welcome to propose pull request and contribute
yapf -i <python_file_path>  # code style: column_limit=120

Unit Tests

# run gradcam unit tests
python -m unittest -v tests.interpreter.test_gradcam
# run all unit tests
python -m unittest -v

Documentation

Online link: interpretdl.readthedocs.io.

Or generate the docs locally:

git clone https://github.com/PaddlePaddle/InterpretDL.git
cd InterpretDL/docs
make html
open _build/html/index.html

Getting Started

All interpreters inherit from the abstract class Interpreter, whose interpret(**kwargs) method is the one to call.

# an example of the SmoothGrad Interpreter

import interpretdl as it
from paddle.vision.models import resnet50

# load a pretrained ResNet-50 as the model to interpret
paddle_model = resnet50(pretrained=True)

# create the interpreter and explain the prediction for one image
sg = it.SmoothGradInterpreter(paddle_model, use_cuda=True)
gradients = sg.interpret("test.jpg", visual=True, save_path=None)

A quick Getting-Started tutorial (also on NBViewer) is provided; it takes only a few minutes to become familiar with InterpretDL.

Examples and Tutorials

We have provided at least one example for each interpretation algorithm and each trustworthiness evaluation algorithm, hopefully covering applications for both CV and NLP.

We are currently preparing more tutorials to make InterpretDL easy to use.

Both examples and tutorials can be accessed under the tutorials folder.

Roadmap

We are planning to create a useful toolkit that offers both model interpretations and evaluations. The interpretation algorithms implemented so far are listed below, and we plan to add more. You are welcome to contribute, or simply tell us which algorithms you would like to see.

Implemented Algorithms with Taxonomy

Two dimensions (the representation of the explanation results and the type of the target model) are used to categorize the interpretation algorithms. This taxonomy can serve as a guide for finding the algorithm best suited to the target task and model.

Methods | Representation | Model Type
LIME | Input Features | Model-Agnostic
LIME with Prior | Input Features | Model-Agnostic
GLIME | Input Features | Model-Agnostic
NormLIME/FastNormLIME | Input Features | Model-Agnostic
LRP | Input Features | Differentiable*
SmoothGrad | Input Features | Differentiable
IntGrad | Input Features | Differentiable
GradSHAP | Input Features | Differentiable
Occlusion | Input Features | Model-Agnostic
GradCAM/CAM | Intermediate Features | Specific: CNNs
ScoreCAM | Intermediate Features | Specific: CNNs
Rollout | Intermediate Features | Specific: Transformers
TAM | Intermediate Features | Specific: Transformers
ForgettingEvents | Dataset-Level | Differentiable
TIDY (Training Data Analyzer) | Dataset-Level | Differentiable
Consensus | Features | Cross-Model
Generic Attention | Input Features | Specific: Bi-Modal Transformers

* LRP requires that the model is of specific implementations for relevance back-propagation.
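
As an example of reading the taxonomy, Grad-CAM explains via the intermediate features of CNNs, so it needs a target layer in addition to the model. A minimal sketch follows; the target_layer_name parameter and the "layer4" path are assumptions for a paddle.vision ResNet-50, so check the documentation for the exact interface.

import interpretdl as it
from paddle.vision.models import resnet50

paddle_model = resnet50(pretrained=True)
gradcam = it.GradCAMInterpreter(paddle_model, use_cuda=True)
heatmap = gradcam.interpret(
        "test.jpg",
        target_layer_name="layer4",  # assumed: the last conv block of ResNet-50
        visual=True,
        save_path=None)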

Implemented Trustworthiness Evaluation Algorithms

Planned Algorithms

  • Intermediate Features Interpretation Algorithm

    • More Transformers Specific Interpreters
  • Dataset-Level Interpretation Algorithms

    • Influence Function
  • Evaluations

    • Local Fidelity
    • Sensitivity

Presentations

Linux Foundation Project AI & Data -- Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond. Video Link (00:20:30 -- 00:45:00).

Baidu Create 2021 (in Chinese): Video Link (01:18:40 -- 01:36:30).

ICML 2021 Expo -- Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond. Video Link.


Copyright and License

InterpretDL is provided under the Apache-2.0 license.

Recent News

  • (2021/10/20) Implemented the Transition Attention Maps (TAM) explanation method for PaddlePaddle Vision Transformers. As always, a few lines of code are enough to call this interpreter. See details in the example, and the paper:

    • TAM: Tingyi Yuan, Xuhong Li, Haoyi Xiong, Hui Cao, Dejing Dou. Explaining Information Flow Inside Vision Transformers Using Markov Chain. In NeurIPS 2021 XAI4Debugging Workshop.
import paddle
import interpretdl as it

# load vit model and weights
# !wget -c https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ViT_base_patch16_224_pretrained.pdparams -P assets/
from assets.vision_transformer import ViT_base_patch16_224
paddle_model = ViT_base_patch16_224()
MODEL_PATH = 'assets/ViT_base_patch16_224_pretrained.pdparams'
paddle_model.set_dict(paddle.load(MODEL_PATH))

# Call the interpreter.
tam = it.TAMInterpreter(paddle_model, use_cuda=True)
img_path = 'samples/el1.png'
heatmap = tam.interpret(
        img_path,
        start_layer=4,
        label=None,  # use the predicted label (elephant)
        visual=True,
        save_path=None)
heatmap = tam.interpret(
        img_path,
        start_layer=4,
        label=340,  # zebra
        visual=True,
        save_path=None)
[Result images: the original input with TAM heatmaps for the elephant and zebra labels]
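The Rollout interpreter for Vision Transformers is called in the same way: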
import paddle
import interpretdl as it

# wget -c https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ViT_small_patch16_224_pretrained.pdparams -P assets/
from assets.vision_transformer import ViT_small_patch16_224
paddle_model = ViT_small_patch16_224()
MODEL_PATH = 'assets/ViT_small_patch16_224_pretrained.pdparams'
paddle_model.set_dict(paddle.load(MODEL_PATH))

img_path = 'assets/catdog.png'
rollout = it.RolloutInterpreter(paddle_model, use_cuda=True)
heatmap = rollout.interpret(img_path, start_layer=0, visual=True)