
nredell / ShapML.jl

License: MIT
A Julia package for interpretable machine learning with stochastic Shapley values

Programming Languages

julia
2034 projects

Projects that are alternatives of or similar to ShapML.jl

Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+23577.78%)
Mutual labels:  shapley, shap
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-73.02%)
Mutual labels:  interpretable-machine-learning, iml
fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (+25.4%)
Mutual labels:  shapley, interpretable-machine-learning
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-65.08%)
Mutual labels:  interpretable-machine-learning, iml
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human in Loop and Visual Analytics.
Stars: ✭ 51 (-19.05%)
Mutual labels:  interpretable-machine-learning, iml
dominance-analysis
This package can be used for dominance analysis or Shapley Value Regression for finding relative importance of predictors on given dataset. This library can be used for key driver analysis or marginal resource allocation models.
Stars: ✭ 111 (+76.19%)
Mutual labels:  feature-importance, shapley-value
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+3715.87%)
Mutual labels:  interpretable-machine-learning, iml
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+6807.94%)
Mutual labels:  interpretable-machine-learning, iml
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-76.19%)
Mutual labels:  interpretable-machine-learning, iml
shapr
Explaining the output of machine learning models with more accurately estimated Shapley values
Stars: ✭ 95 (+50.79%)
Mutual labels:  shapley
isarn-sketches-spark
Routines and data structures for using isarn-sketches idiomatically in Apache Spark
Stars: ✭ 28 (-55.56%)
Mutual labels:  feature-importance
mta
Multi-Touch Attribution
Stars: ✭ 60 (-4.76%)
Mutual labels:  shapley
XAIatERUM2020
Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (-17.46%)
Mutual labels:  interpretable-machine-learning
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+104.76%)
Mutual labels:  shapley
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+74.6%)
Mutual labels:  feature-importance
Captum
Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+4392.06%)
Mutual labels:  feature-importance
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-55.56%)
Mutual labels:  iml
Predictive-Maintenance-of-Aircraft-Engine
In this project I aim to apply Various Predictive Maintenance Techniques to accurately predict the impending failure of an aircraft turbofan engine.
Stars: ✭ 48 (-23.81%)
Mutual labels:  feature-importance
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+74.6%)
Mutual labels:  feature-importance
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (+44.44%)
Mutual labels:  interpretable-machine-learning


ShapML

The purpose of ShapML is to compute stochastic, feature-level Shapley values, which can be used to (a) interpret and/or (b) assess the fairness of any machine learning model. Shapley values are an intuitive and theoretically sound model-agnostic diagnostic tool for understanding both global feature importance across all instances in a data set and instance/row-level local feature importance in black-box machine learning models.
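For reference, the standard game-theoretic definition underlying these estimates: the Shapley value of feature $j$ for a model $f$ and instance $x$ with $p$ features averages feature $j$'s marginal contribution over all feature subsets $S$ that exclude $j$:

```latex
\phi_j(f, x) = \sum_{S \subseteq \{1,\dots,p\} \setminus \{j\}}
  \frac{|S|!\,(p - |S| - 1)!}{p!}
  \left[ f\!\left(x_{S \cup \{j\}}\right) - f\!\left(x_S\right) \right]
```

Because the sum ranges over all $2^{p-1}$ subsets, exact computation is intractable for more than a handful of features, which motivates the sampling-based approximation below.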

This package implements Štrumbelj and Kononenko's (2014) sampling-based approximation algorithm, which computes stochastic Shapley values for a given instance and model feature.
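To illustrate the idea (not ShapML's internal implementation), here is a minimal, self-contained Julia sketch of the Štrumbelj–Kononenko Monte Carlo approximation: for each of `m` samples, draw a reference instance and a random feature ordering, build two hybrid instances that differ only in whether feature `j` comes from `x` or from the reference, and average the prediction differences. The function name and signature are illustrative.

```julia
using Random

# Monte Carlo approximation of the Shapley value for feature `j`
# (after Štrumbelj & Kononenko, 2014). `f` is any prediction function,
# `x` the instance to explain, `refs` a matrix of reference rows.
function stochastic_shapley(f, x::Vector{Float64}, refs::Matrix{Float64}, j::Int; m::Int = 1000)
    p = length(x)
    total = 0.0
    for _ in 1:m
        z = refs[rand(1:size(refs, 1)), :]  # random reference instance
        perm = randperm(p)                  # random feature ordering
        pos = findfirst(==(j), perm)
        b1 = copy(z)
        b2 = copy(z)
        for k in 1:pos                      # features up to and including j come from x
            b1[perm[k]] = x[perm[k]]
        end
        for k in 1:(pos - 1)                # same, but feature j stays at the reference value
            b2[perm[k]] = x[perm[k]]
        end
        total += f(b1) - f(b2)              # marginal contribution of feature j
    end
    return total / m
end

# Example: for a linear model every marginal contribution is identical,
# so the estimate is exact even with few samples.
f(v) = 2v[1] + v[2]
x = [1.0, 3.0]
refs = zeros(1, 2)                  # single all-zero reference instance
stochastic_shapley(f, x, refs, 1)   # ≈ 2.0
```

For nonlinear models the estimate is stochastic, and its variance shrinks as `m` grows.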

  • Flexibility:

    • Shapley values can be estimated for any machine learning model using a simple user-defined predict() wrapper function.
  • Speed:

    • ShapML's speed advantage comes from letting the user select one or more target features of interest rather than computing Shapley values for all model features; a subset of target features from a trained model returns the same feature-level Shapley values as explaining every feature at once. This is especially useful in high-dimensional models, as exact Shapley value computation is exponential in the number of features.
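A sketch of the typical workflow, adapted from the package README: the user supplies any trained model plus a `predict_function(model, data)` wrapper that returns a single-column `DataFrame` of predictions. The toy closure model, the column names, and the `y_pred` output name below are placeholders; the keyword arguments follow the README, but treat the exact signature as an assumption rather than a guarantee.

```julia
using ShapML, DataFrames

# Placeholder "trained model": a closure standing in for any fitted
# model object (e.g., from MLJ or DecisionTree.jl).
model = row -> 2 * row.x1 + row.x2

# User-defined wrapper: takes the model and a DataFrame of features,
# returns a single-column DataFrame of predictions.
function predict_function(model, data)
    return DataFrame(y_pred = [model(r) for r in eachrow(data)])
end

explain   = DataFrame(x1 = [1.0, 2.0], x2 = [3.0, 4.0])  # instances to explain
reference = DataFrame(x1 = [0.0], x2 = [0.0])            # baseline/reference data

data_shap = ShapML.shap(explain = explain,
                        reference = reference,
                        model = model,
                        predict_function = predict_function,
                        target_features = ["x1"],  # only compute Shapley values for x1
                        sample_size = 60,          # Monte Carlo samples per instance
                        seed = 1)
```

Restricting `target_features` to `["x1"]` skips the Monte Carlo work for `x2` entirely while returning the same values for `x1` as a full run would.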

Install

using Pkg
Pkg.add("ShapML")
  • Development
using Pkg
Pkg.add(PackageSpec(url = "https://github.com/nredell/ShapML.jl"))

Documentation and Vignettes
