
SeldonIO / Alibi

Licence: Apache-2.0
Algorithms for monitoring and explaining machine learning models

Programming Languages

Python (139,335 projects; #7 most used programming language)

Projects that are alternatives to or similar to Alibi

removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-95.02%)
Mutual labels:  interpretability
Tcav
Code for the TCAV ML interpretability project
Stars: ✭ 442 (-52.16%)
Mutual labels:  interpretability
Awesome Federated Learning
Federated Learning Library: https://fedml.ai
Stars: ✭ 624 (-32.47%)
Mutual labels:  interpretability
Facet
Human-explainable AI.
Stars: ✭ 269 (-70.89%)
Mutual labels:  interpretability
Mli Resources
H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (-53.68%)
Mutual labels:  interpretability
Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (-42.64%)
Mutual labels:  interpretability
shapeshop
Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-98.27%)
Mutual labels:  interpretability
Dalex
moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (-13.96%)
Mutual labels:  interpretability
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+370.13%)
Mutual labels:  interpretability
Xai
XAI - An eXplainability toolbox for machine learning
Stars: ✭ 596 (-35.5%)
Mutual labels:  interpretability
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+371%)
Mutual labels:  interpretability
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Stars: ✭ 411 (-55.52%)
Mutual labels:  interpretability
Xai resources
Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (-40.15%)
Mutual labels:  interpretability
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-97.62%)
Mutual labels:  interpretability
Ad examples
A collection of anomaly detection methods (iid/point-based, graph and time series) including active learning for anomaly detection/discovery, bayesian rule-mining, description for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Network.
Stars: ✭ 641 (-30.63%)
Mutual labels:  interpretability
SPINE
Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Stars: ✭ 44 (-95.24%)
Mutual labels:  interpretability
Deeplift
Public facing deeplift repo
Stars: ✭ 512 (-44.59%)
Mutual labels:  interpretability
Grad Cam
[ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (-3.57%)
Mutual labels:  interpretability
Tf Explain
Interpretability Methods for tf.keras models with Tensorflow 2.x
Stars: ✭ 780 (-15.58%)
Mutual labels:  interpretability
Flashtorch
Visualization toolkit for neural networks in PyTorch!
Stars: ✭ 561 (-39.29%)
Mutual labels:  interpretability

Alibi Logo


Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.

If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.


Example explanations:

  • Anchor explanations for images
  • Integrated Gradients for text
  • Counterfactual examples
  • Accumulated Local Effects

Installation and Usage

Alibi can be installed from PyPI:

pip install alibi

Alternatively, the development version can be installed:

pip install git+https://github.com/SeldonIO/alibi.git 

To take advantage of distributed computation of explanations, install alibi with ray:

pip install alibi[ray]
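
Distributed computation is then available for the explainers that support it, such as Kernel SHAP. Below is a minimal sketch (the general explainer API is described next), assuming the distributed_opts argument with an n_cpus entry; predict_fn, X_background and X_test are placeholders for a prediction function, a background dataset and the instances to explain:

from alibi.explainers import KernelShap

# distributed_opts (assumed here) spreads the SHAP value estimation
# across ray worker processes
explainer = KernelShap(predict_fn, distributed_opts={'n_cpus': 2})
explainer.fit(X_background)
explanation = explainer.explain(X_test)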

The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize, fit and explain steps. We will use the AnchorTabular explainer to illustrate the API:

from alibi.explainers import AnchorTabular

# initialize and fit explainer by passing a prediction function and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)

# explain an instance
explanation = explainer.explain(x)

The explanation returned is an Explanation object with attributes meta and data. meta is a dictionary containing the explainer metadata and any hyperparameters, while data is a dictionary containing everything related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed via explanation.data['anchor'] (or equivalently explanation.anchor). The exact fields available vary from method to method, so we encourage the reader to become familiar with the types of methods supported.
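
These fields can be inspected directly once an explanation has been computed. A minimal sketch continuing the snippet above; anchor, precision and coverage are the fields produced by the Anchor methods:

# explainer metadata and hyperparameters
print(explanation.meta)

# the anchor itself, via the data dictionary or the equivalent attribute
print(explanation.data['anchor'])
print(explanation.anchor)

# precision: how often the prediction holds when the anchor conditions are met
print(explanation.precision)

# coverage: how much of the perturbation space the anchor applies to
print(explanation.coverage)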

Supported Methods

The following summarizes the scope of each method: the kind of model access it needs, whether it produces local or global explanations, and whether a training set is required (abbreviations such as BB and WB are defined in the key below). Per-method support for classification, regression, tabular, text, image and categorical data, as well as for distributed computation, is detailed in the documentation.

Model Explanations

  • ALE: BB; global explanations
  • Anchors: BB; local explanations; training set required for tabular data
  • CEM: BB* (TF/Keras); local explanations; training set optional
  • Counterfactuals: BB* (TF/Keras); local explanations; no training set required
  • Prototype Counterfactuals: BB* (TF/Keras); local explanations; training set optional
  • Integrated Gradients: TF/Keras; local explanations; training set optional
  • Kernel SHAP: BB; local and global explanations
  • Tree SHAP: WB; local and global explanations; training set optional
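
As an example of a global black-box method, ALE follows the same initialize and explain pattern. A minimal sketch, reusing the predict_fn, feature_names and X_train placeholders from the snippet above:

from alibi.explainers import ALE

# initialize with a prediction function returning class probabilities
# (or regression outputs) and the feature names
ale = ALE(predict_fn, feature_names=feature_names)

# compute ALE curves over a reference dataset
explanation = ale.explain(X_train)
print(explanation.meta)
print(explanation.data.keys())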

Model Confidence

These algorithms provide instance-specific scores that measure the model's confidence in a particular prediction.

  • Trust Scores: BB; regression support depends on the model (1) and some data types may require dimensionality reduction (2); training set required
  • Linearity Measure: BB; training set optional

Key:

  • BB - black-box (only require a prediction function)
  • BB* - black-box but assume model is differentiable
  • WB - requires white-box model access. There may be limitations on models supported
  • TF/Keras - TensorFlow models via the Keras API
  • Local - instance-specific explanation: why was this prediction made?
  • Global - explains the model with respect to a set of instances
  • (1) - depending on model
  • (2) - may require dimensionality reduction
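
As a usage sketch for the confidence methods, Trust Scores can be fitted on training data and then used to score a classifier's predictions. A minimal example on the iris dataset, assuming the fit(X, y, classes=...) and score(X, preds) signatures of alibi.confidence.TrustScore:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from alibi.confidence import TrustScore

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# fit the trust score estimator on the training data (iris has 3 classes)
ts = TrustScore()
ts.fit(X, y, classes=3)

# score the classifier's predictions: higher scores indicate predictions
# more consistent with the class structure of the training data
preds = clf.predict(X)
score, closest_class = ts.score(X, preds)
print(score[:5], closest_class[:5])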

References and Examples

Citations

If you use alibi in your research, please consider citing it.

BibTeX entry:

@software{alibi,
  title = {Alibi: Algorithms for monitoring and explaining machine learning models},
  author = {Klaise, Janis and Van Looveren, Arnaud and Vacanti, Giovanni and Coca, Alexandru},
  url = {https://github.com/SeldonIO/alibi},
  version = {0.5.6},
  date = {2021-02-18},
  year = {2019}
}