
d909b / Cxplain

License: MIT
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Cxplain

Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+530.95%)
Mutual labels:  interpretability
Grad Cam
[ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+960.71%)
Mutual labels:  interpretability
Adversarial Explainable Ai
💡 A curated list of adversarial attacks on model explanations
Stars: ✭ 56 (-33.33%)
Mutual labels:  interpretability
Flashtorch
Visualization toolkit for neural networks in PyTorch!
Stars: ✭ 561 (+567.86%)
Mutual labels:  interpretability
Tf Explain
Interpretability Methods for tf.keras models with Tensorflow 2.x
Stars: ✭ 780 (+828.57%)
Mutual labels:  interpretability
Symbolic Metamodeling
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-65.48%)
Mutual labels:  interpretability
Tcav
Code for the TCAV ML interpretability project
Stars: ✭ 442 (+426.19%)
Mutual labels:  interpretability
Cnn Interpretability
🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Stars: ✭ 68 (-19.05%)
Mutual labels:  interpretability
Dalex
moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+846.43%)
Mutual labels:  interpretability
Text nn
Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (-34.52%)
Mutual labels:  interpretability
Xai
XAI - An eXplainability toolbox for machine learning
Stars: ✭ 596 (+609.52%)
Mutual labels:  interpretability
Ad examples
A collection of anomaly detection methods (iid/point-based, graph and time series) including active learning for anomaly detection/discovery, bayesian rule-mining, description for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Network.
Stars: ✭ 641 (+663.1%)
Mutual labels:  interpretability
Contrastiveexplanation
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-57.14%)
Mutual labels:  interpretability
Xai resources
Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (+558.33%)
Mutual labels:  interpretability
Athena
Automatic equation building and curve fitting. Runs on Tensorflow. Built for academia and research.
Stars: ✭ 57 (-32.14%)
Mutual labels:  interpretability
Deeplift
Public facing deeplift repo
Stars: ✭ 512 (+509.52%)
Mutual labels:  interpretability
Alibi
Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+1000%)
Mutual labels:  interpretability
Reverse Engineering Neural Networks
A collection of tools for reverse engineering neural networks.
Stars: ✭ 78 (-7.14%)
Mutual labels:  interpretability
Awesome Production Machine Learning
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
Stars: ✭ 10,504 (+12404.76%)
Mutual labels:  interpretability
Trelawney
General Interpretability Package
Stars: ✭ 55 (-34.52%)
Mutual labels:  interpretability

CXPlain


Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. CXPlain uses explanation models trained with a causal objective to learn to explain machine-learning models, and to quantify the uncertainty of its explanations. This repository contains a reference implementation of neural explanation models and several practical examples for different data modalities. Please see the manuscript at https://arxiv.org/abs/1910.12336 (NeurIPS 2019) for a description and experimental evaluation of CXPlain.

Install

To install the latest release:

$ pip install cxplain
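
If you prefer the latest development version over the PyPI release, installing directly from the GitHub repository should also work (the repository path is inferred from the project name above, so treat it as an assumption):

$ pip install git+https://github.com/d909b/cxplain.git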

Use

A CXPlain model consists of four main components:

  • The model to be explained which can be any type of machine-learning model, including black-box models, such as neural networks and ensemble models.
  • The model builder that defines the architecture of the explanation model that will be trained to explain the explained model.
  • The masking operation that defines how CXPlain will internally simulate the removal of input features from the set of available features.
  • The loss function that defines how the change in prediction accuracy incurred by removing an input feature will be measured by CXPlain.

After configuring these four components, you can fit a CXPlain instance to the same training data that was used to train your original model. The CXPlain instance can then explain any prediction of your explained model - even when no labels are available for that sample.

from tensorflow.python.keras.losses import categorical_crossentropy
from cxplain import MLPModelBuilder, ZeroMasking, CXPlain

x_train, y_train, x_test = ...  # Your dataset
explained_model = ...    # The model you wish to explain.

# Define the model you want to use to explain your explained_model.
# Here, we use a neural explanation model with a
# multilayer perceptron (MLP) architecture.
model_builder = MLPModelBuilder(num_layers=2, num_units=64, batch_size=256, learning_rate=0.001)

# Define your masking operation - the method CXPlain uses internally to
# simulate the removal of input features. ZeroMasking is typically a
# sensible default choice for tabular and image data.
masking_operation = ZeroMasking()

# Define the loss used to measure the change in prediction error
# incurred by removing each input feature.
loss = categorical_crossentropy

# Build and fit a CXPlain instance.
explainer = CXPlain(explained_model, model_builder, masking_operation, loss)
explainer.fit(x_train, y_train)

# Use the explainer to obtain explanations for the predictions of your explained_model.
attributions = explainer.explain(x_test)
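
CXPlain can also attach uncertainty estimates to its attributions by training a bootstrap ensemble of explanation models, as described in the paper. The following is a minimal sketch that assumes the package exposes this via a num_models constructor argument and a confidence_level keyword on explain(); please verify the exact signature against the notebooks in examples/ for your installed version.

# Sketch of uncertainty-aware explanations (assumed API - verify against the
# examples/ notebooks): train an ensemble of explanation models and request
# confidence bounds alongside the attributions. Reuses the objects defined above.
ensemble_explainer = CXPlain(explained_model, model_builder, masking_operation, loss,
                             num_models=10)
ensemble_explainer.fit(x_train, y_train)

# Returns per-feature attributions together with lower/upper confidence bounds
# at the requested confidence level.
attributions, confidence = ensemble_explainer.explain(x_test, confidence_level=0.80)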

Examples

More practical examples for various input data modalities, including image, textual, and tabular data, and for both regression and classification tasks, are provided as Jupyter notebooks in the examples/ directory:

  • MNIST
  • ImageNet
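
As a quick starting point before opening the notebooks: explainer.explain(...) returns one importance score per input feature, so for image data the attributions can be reshaped and plotted directly. Below is a minimal sketch, assuming 28x28 grayscale inputs such as MNIST and reusing the explainer fitted in the snippet above.

# Minimal attribution visualization for image inputs (assumes 28x28 grayscale
# samples, e.g. MNIST, and an already-fitted explainer from the snippet above).
import numpy as np
import matplotlib.pyplot as plt

attributions = explainer.explain(x_test)

idx = 0  # index of the test sample to inspect
fig, (ax_img, ax_attr) = plt.subplots(1, 2, figsize=(6, 3))
ax_img.imshow(np.squeeze(x_test[idx]), cmap="gray")
ax_img.set_title("Input")
ax_attr.imshow(np.squeeze(attributions[idx]).reshape(28, 28), cmap="viridis")
ax_attr.set_title("CXPlain attribution")
for ax in (ax_img, ax_attr):
    ax.axis("off")
plt.show()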

Cite

Please consider citing if you reference or use our methodology, code, or results in your work:

@inproceedings{schwab2019cxplain,
  title={{CXPlain: Causal Explanations for Model Interpretation under Uncertainty}},
  author={Schwab, Patrick and Karlen, Walter},
  booktitle={{Advances in Neural Information Processing Systems (NeurIPS)}},
  year={2019}
}

License

MIT License

Acknowledgements

This work was partially funded by the Swiss National Science Foundation (SNSF) project No. 167302 within the National Research Program (NRP) 75 "Big Data". We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPUs used for this research. Patrick Schwab is an affiliated PhD fellow at the Max Planck ETH Center for Learning Systems.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].