
M-Nauta / ProtoTree

License: MIT
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to ProtoTree

mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-68.09%)
Mutual labels:  interpretability, explainable-ai, explainable-ml, interpretable-machine-learning, explainability
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+9159.57%)
Mutual labels:  interpretability, explainable-ai, explainable-ml, interpretable-machine-learning, explainability
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+5014.89%)
Mutual labels:  interpretability, interpretable-deep-learning, explainable-ml, interpretable-machine-learning
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+134.04%)
Mutual labels:  interpretability, interpretable-deep-learning, explainable-ai, explainability
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-12.77%)
Mutual labels:  interpretability, explainable-ai, explainability
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human in Loop and Visual Analytics.
Stars: ✭ 51 (+8.51%)
Mutual labels:  interpretability, explainable-ml, interpretable-machine-learning
ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
Stars: ✭ 62 (+31.91%)
Mutual labels:  interpretable-deep-learning, explainable-ai, interpretable-machine-learning
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (+929.79%)
Mutual labels:  interpretability, explainable-ai, explainability
fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (+68.09%)
Mutual labels:  explainable-ai, explainable-ml, interpretable-machine-learning
ml-fairness-framework
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Stars: ✭ 59 (+25.53%)
Mutual labels:  explainable-ai, explainable-ml, interpretable-machine-learning
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-53.19%)
Mutual labels:  interpretability, explainable-ml, interpretable-machine-learning
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (+253.19%)
Mutual labels:  explainable-ai, explainable-ml, explainability
zennit
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Stars: ✭ 57 (+21.28%)
Mutual labels:  interpretability, explainable-ai, explainability
responsible-ai-toolbox
This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+1208.51%)
Mutual labels:  explainable-ai, explainable-ml, explainability
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-63.83%)
Mutual labels:  decision-trees, interpretability, interpretable-machine-learning
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+134.04%)
Mutual labels:  interpretability, explainable-ai, explainability
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+23.4%)
Mutual labels:  interpretability, explainability
ALPS 2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (+17.02%)
Mutual labels:  interpretability, explainability
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+168.09%)
Mutual labels:  interpretability, explainability
DataScience ArtificialIntelligence Utils
Examples of Data Science projects and Artificial Intelligence use cases
Stars: ✭ 302 (+542.55%)
Mutual labels:  explainable-ai, explainable-ml

ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition

This repository presents the PyTorch code for Neural Prototype Trees (ProtoTrees), published at CVPR 2021: "Neural Prototype Trees for Interpretable Fine-grained Image Recognition".

A ProtoTree is an intrinsically interpretable deep learning method for fine-grained image recognition. It incorporates prototypes into a decision tree, so the entire model can be faithfully visualized. Each node in the binary tree contains a trainable prototypical part; the presence or absence of this prototype in an image determines the routing through that node. Decision making therefore resembles human reasoning: Does the bird have a red throat? And an elongated beak? Then it's a hummingbird!

Figure: an example of a ProtoTree. A ProtoTree is a globally interpretable model that faithfully explains its entire behaviour (left, partially shown). Additionally, the reasoning process for a single prediction can be followed (right): the presence of a red chest and a black wing, and the absence of a black stripe near the eye, identify the bird as a Scarlet Tanager.

Prerequisites

General

  • Python 3
  • PyTorch >= 1.5 and <= 1.7!
  • Optional: CUDA

Required Python Packages:

  • numpy
  • pandas
  • opencv
  • tqdm
  • scipy
  • matplotlib
  • requests (to download the CARS dataset, or download it manually)
  • gdown (to download the CUB dataset, or download it manually)
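
The dependencies can be installed with pip. A minimal sketch (package names are assumptions based on standard PyPI naming; opencv is distributed on PyPI as opencv-python):

    pip install "torch>=1.5,<=1.7"
    pip install numpy pandas opencv-python tqdm scipy matplotlib requests gdown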

Data

The code can be applied to the CUB-200-2011 dataset with 200 bird species, or the Stanford Cars dataset with 196 car types.

The folder preprocess_data contains Python code to download, extract and preprocess these datasets.

Preprocessing CUB

  1. create a folder ./data/CUB_200_2011
  2. download ResNet50 pretrained on iNaturalist2017 (Filename on Google Drive: BBN.iNaturalist2017.res50.180epoch.best_model.pth) and place it in the folder features/state_dicts.
  3. from the main ProtoTree folder, run python preprocess_data/download_birds.py
  4. from the main ProtoTree folder, run python preprocess_data/cub.py to create training and test sets (see the command sketch below)
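
Assuming the folder layout above, steps 1, 3 and 4 might look like this from the main ProtoTree folder (a sketch; the checkpoint in step 2 still has to be fetched from Google Drive and placed by hand):

    mkdir -p data/CUB_200_2011 features/state_dicts
    # step 2: place BBN.iNaturalist2017.res50.180epoch.best_model.pth in features/state_dicts
    python preprocess_data/download_birds.py
    python preprocess_data/cub.py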

Preprocessing CARS

  1. create a folder ./data/cars
  2. from the main ProtoTree folder, run python preprocess_data/download_cars.py
  3. from the main ProtoTree folder, run python preprocess_data/cars.py to create training and test sets (see the sketch below)
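
As with CUB, these steps can be run from the main ProtoTree folder (a sketch):

    mkdir -p data/cars
    python preprocess_data/download_cars.py
    python preprocess_data/cars.py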

Training a ProtoTree

  1. create a folder ./runs

A ProtoTree can be trained by running main_tree.py with arguments. An example for CUB:

    python main_tree.py --epochs 100 --log_dir ./runs/protoree_cub --dataset CUB-200-2011 --lr 0.001 --lr_block 0.001 --lr_net 1e-5 --num_features 256 --depth 9 --net resnet50_inat --freeze_epochs 30 --milestones 60,70,80,90,100

To speed up training, the number of workers of the DataLoaders can be increased by setting num_workers to a positive integer (a suitable value depends on your available memory).

Check your --log_dir to keep track of the training progress. This directory contains log_epoch_overview.csv, which records per epoch the test accuracy, mean training accuracy and mean loss; log_train_epochs_losses.csv, which records the loss value and training accuracy per batch iteration; and log.txt, which logs additional info.
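
To follow progress from a terminal, the plain-text log can be tailed (a sketch, assuming the log directory from the CUB training example above):

    tail -f ./runs/protoree_cub/log.txt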

The resulting visualized ProtoTree (i.e. the global explanation) is saved as a PDF at pruned_and_projected/treevis.pdf inside your --log_dir. NOTE: this PDF can become too large for Adobe Acrobat Reader to handle; open it with e.g. Google Chrome or Apple Preview instead.

To train and evaluate an ensemble of ProtoTrees, run main_ensemble.py with the same arguments as for main_tree.py, but add --nr_trees_ensemble to set the number of trees in the ensemble; see the sketch below.
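
A sketch reusing the CUB training arguments from the example above (the log directory name and the ensemble size of 5 are illustrative):

    python main_ensemble.py --epochs 100 --log_dir ./runs/protoree_cub_ensemble --dataset CUB-200-2011 --lr 0.001 --lr_block 0.001 --lr_net 1e-5 --num_features 256 --depth 9 --net resnet50_inat --freeze_epochs 30 --milestones 60,70,80,90,100 --nr_trees_ensemble 5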

Local explanations

A trained ProtoTree is intrinsically interpretable and globally explainable. It can also locally explain a single prediction. For example, run the following command to explain a test image:

    python main_explain_local.py --log_dir ./runs/protoree_cars --dataset CARS --sample_dir ./data/cars/dataset/test/Dodge_Sprinter_Cargo_Van_2009/04003.jpg --prototree ./runs/protoree_cars/checkpoints/pruned_and_projected

The visualized local explanation is saved as predvis.pdf in the local_explanations folder inside your --log_dir.
