MarcelRobeer / Contrastiveexplanation

License: BSD-3-Clause
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to Contrastiveexplanation

Facet
Human-explainable AI.
Stars: ✭ 269 (+647.22%)
Mutual labels:  interpretability
Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+1372.22%)
Mutual labels:  interpretability
Tf Explain
Interpretability Methods for tf.keras models with Tensorflow 2.x
Stars: ✭ 780 (+2066.67%)
Mutual labels:  interpretability
Awesome deep learning interpretability
Highly cited and top-conference papers from recent years on interpretability of deep neural network models (with code)
Stars: ✭ 401 (+1013.89%)
Mutual labels:  interpretability
Tcav
Code for the TCAV ML interpretability project
Stars: ✭ 442 (+1127.78%)
Mutual labels:  interpretability
Flashtorch
Visualization toolkit for neural networks in PyTorch! Demo -->
Stars: ✭ 561 (+1458.33%)
Mutual labels:  interpretability
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (+27.78%)
Mutual labels:  interpretability
Alibi
Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+2466.67%)
Mutual labels:  interpretability
Deeplift
Public facing deeplift repo
Stars: ✭ 512 (+1322.22%)
Mutual labels:  interpretability
Ad examples
A collection of anomaly detection methods (iid/point-based, graph and time series) including active learning for anomaly detection/discovery, bayesian rule-mining, description for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Network.
Stars: ✭ 641 (+1680.56%)
Mutual labels:  interpretability
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Stars: ✭ 411 (+1041.67%)
Mutual labels:  interpretability
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+11966.67%)
Mutual labels:  interpretability
Xai
XAI - An eXplainability toolbox for machine learning
Stars: ✭ 596 (+1555.56%)
Mutual labels:  interpretability
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+11988.89%)
Mutual labels:  interpretability
Dalex
moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+2108.33%)
Mutual labels:  interpretability
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-38.89%)
Mutual labels:  interpretability
Xai resources
Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (+1436.11%)
Mutual labels:  interpretability
Symbolic Metamodeling
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-19.44%)
Mutual labels:  interpretability
Grad Cam
[ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+2375%)
Mutual labels:  interpretability
Awesome Federated Learning
Federated Learning Library: https://fedml.ai
Stars: ✭ 624 (+1633.33%)
Mutual labels:  interpretability

Contrastive Explanation (Foil Trees)

Contrastive and counterfactual explanations for machine learning (ML)

Marcel Robeer (2018-2020), TNO/Utrecht University


Contents

  1. Introduction
  2. Publications: citing this package
  3. Example usage
  4. Documentation: choices for problem explanation
  5. License

Introduction

Contrastive Explanation provides an explanation for why an instance had the current outcome (fact) rather than a targeted outcome of interest (foil). These counterfactual explanations limit the explanation to the features relevant in distinguishing fact from foil, thereby disregarding irrelevant features. The idea of contrastive explanations is captured in this Python package ContrastiveExplanation. Example facts and foils are:

| Machine Learning (ML) type | Problem | Explainable AI (XAI) question | Fact | Foil |
|---|---|---|---|---|
| Classification | Determine type of animal | Why is this instance a cat rather than a dog? | Cat | Dog |
| Regression analysis | Predict students' grades | Why is the predicted grade for this student 6.5 rather than higher? | 6.5 | More than 6.5 |
| Clustering | Find similar flowers | Why is this flower in cluster 1 rather than cluster 4? | Cluster 1 | Cluster 4 |

Publications

One scientific paper was published on Contrastive Explanation / Foil Trees:

  • J. van der Waa, M. Robeer, J. van Diggelen, M. Brinkhuis, and M. Neerincx, "Contrastive Explanations with Local Foil Trees", in 2018 Workshop on Human Interpretability in Machine Learning (WHI 2018), 2018, pp. 41-47. [Online]. Available: http://arxiv.org/abs/1806.07470

It was developed as part of a Master's thesis at Utrecht University / TNO.

Citing this package

@inproceedings{vanderwaa2018,
  title={{Contrastive Explanations with Local Foil Trees}},
  author={van der Waa, Jasper and Robeer, Marcel and van Diggelen, Jurriaan and Brinkhuis, Matthieu and Neerincx, Mark},
  booktitle={2018 Workshop on Human Interpretability in Machine Learning (WHI)},
  year={2018}
}

Example usage

As a simple example, let us explain a Random Forest classifier that determines the type of flower in the well-known Iris flower classification problem. The data set comprises 150 instances, each belonging to one of three species of flower (setosa, versicolor, and virginica). For each instance, the data set includes four features (sepal length, sepal width, petal length, petal width), and the goal is to determine which type of flower (class) each instance is.

Steps

First, train the 'black-box' model to explain

from sklearn import datasets, model_selection, ensemble
seed = 1

# Train black-box model on Iris data
data = datasets.load_iris()
train, test, y_train, y_test = model_selection.train_test_split(data.data, 
                                                                data.target, 
                                                                train_size=0.80, 
                                                                random_state=seed)
model = ensemble.RandomForestClassifier(random_state=seed)
model.fit(train, y_train)

Next, perform contrastive explanation on the first test instance (test[0]) by wrapping the tabular data in a DomainMapper and then using the ContrastiveExplanation.explain_instance_domain() method

# Contrastive explanation
import contrastive_explanation as ce

dm = ce.domain_mappers.DomainMapperTabular(train,
                                           feature_names=data.feature_names,
                                           contrast_names=data.target_names)
exp = ce.ContrastiveExplanation(dm, verbose=True)

sample = test[0]
exp.explain_instance_domain(model.predict_proba, sample)

[OUT] "The model predicted 'setosa' instead of 'versicolor' because 'sepal length (cm) <= 6.517 and petal width (cm) <= 0.868'"

The predicted class using the RandomForestClassifier was 'setosa', while the second most probable class, 'versicolor', may have been expected instead. The instance was classified as 'setosa' rather than 'versicolor' because its sepal length is less than or equal to 6.517 centimeters and its petal width is less than or equal to 0.868 centimeters. In other words, if the instance kept all other feature values the same but changed its sepal length to more than 6.517 centimeters and its petal width to more than 0.868 centimeters, the black-box classifier would have changed the outcome to 'versicolor'.
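As a rough sanity check of this reading, one could perturb those two features just past their thresholds and re-query the black box (a sketch, assuming the standard Iris feature order: sepal length, sepal width, petal length, petal width):

import numpy as np

counterfactual = np.array(sample, copy=True)
counterfactual[0] = 6.6  # sepal length (cm), just above 6.517
counterfactual[3] = 0.9  # petal width (cm), just above 0.868

# According to the explanation above, the prediction should now flip to 'versicolor'
print(data.target_names[model.predict([counterfactual])[0]])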

More examples

For more examples, check out the attached Notebook.

Documentation

Several choices can be made to tailor the explanation to your type of explanation problem.

Choices for problem explanation

FactFoil

Used for determining the current outcome (fact) and the outcome of interest (foil), based on a foil_method (e.g. second most probable class, random class, greater than the current outcome). Foils can also be selected manually using the foil=... optional argument of the ContrastiveExplanation.explain_instance_domain() method; a minimal sketch follows the table below.

| FactFoil | Description | foil_method |
|---|---|---|
| FactFoilClassification (default) | Determine fact and foil for classification/unsupervised learning | second, random |
| FactFoilRegression | Determine fact and foil for regression analysis | greater, smaller |
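Continuing the Iris example, a specific foil can be requested directly. This is a minimal sketch; whether foil accepts a class name, as assumed here, or an encoded class index may depend on the package version:

# Hypothetical: ask why the model predicted 'setosa' rather than 'virginica';
# passing the foil by name is an assumption here, it may expect a class index
exp.explain_instance_domain(model.predict_proba, sample, foil='virginica')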
Explanators

Method for forming the explanation, either using a Foil Tree (TreeExplanator), as described in the paper, or using a prototype (PointExplanator, not fully implemented). Since multiple explanations can hold, one can choose the foil_strategy as either 'closest' (shortest explanation), 'size' (move the current outcome to the area containing the most samples of the foil outcome), 'impurity' (most informative foil area), or 'random' (a random foil area). A usage sketch follows the table below.

| Explanator | Description | foil_strategy |
|---|---|---|
| TreeExplanator (default) | Foil Tree: explain using a decision tree | closest, size, impurity, random |
| PointExplanator | Explain with a representative point (prototype) of the foil class | closest, medoid, random |
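A rough sketch of selecting a foil_strategy, assuming TreeExplanator is exposed under ce.explanators and that ContrastiveExplanation accepts it via an explanator argument (check the package source for the exact names):

# Assumed names: ce.explanators.TreeExplanator and the 'explanator' argument
explanator = ce.explanators.TreeExplanator(foil_strategy='size')
exp_size = ce.ContrastiveExplanation(dm, explanator=explanator, verbose=True)
exp_size.explain_instance_domain(model.predict_proba, sample)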
Domain Mappers

For handling the different types of data:

  • Tabular (rows and columns)
  • Images (rudimentary support)

A domain mapper maps the data to a general format in which the explanator can form the explanation, and then maps the explanation back to the original domain. It also ensures meaningful feature names. A construction sketch follows the table below.

| DomainMapper | Description |
|---|---|
| DomainMapperTabular | Tabular data (rows and columns with feature names) |
| DomainMapperPandas | Creates a DomainMapperTabular from a pandas DataFrame, automatically inferring feature names |
| DomainMapperImage | Image data |
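Continuing the Iris example, a DomainMapperPandas could be constructed roughly as follows (a sketch; it assumes the constructor takes a DataFrame and, like DomainMapperTabular, an optional contrast_names argument):

import pandas as pd

# Wrap the training data in a DataFrame so feature names are inferred from the columns
df_train = pd.DataFrame(train, columns=data.feature_names)
dm_pd = ce.domain_mappers.DomainMapperPandas(df_train,
                                             contrast_names=data.target_names)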

License

ContrastiveExplanation is BSD-3 Licensed.
