Shap: A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+23577.78%)
Mutual labels: shapley, shap
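To illustrate the game-theoretic idea behind SHAP, here is a minimal self-contained sketch of exact Shapley values computed by enumerating coalitions (this is not the shap library's API; the toy payoff function is a made-up example):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions.

    players: list of player (feature) ids
    value: function mapping a frozenset of players to a payoff
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Shapley weight for a coalition of size |S|
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy "model": prediction is 10 if feature "a" is present, plus 5 for "b"
v = lambda S: 10 * ("a" in S) + 5 * ("b" in S)
print(shapley_values(["a", "b"], v))  # attributions sum to v({a, b}) = 15
```

Enumeration is exponential in the number of features, which is why practical tools like shap and fastshap rely on model-specific or sampling-based approximations.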
interpretable-ml: Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-73.02%)
Mutual labels: interpretable-machine-learning, iml
fastshap: Fast approximate Shapley values in R
Stars: ✭ 79 (+25.4%)
Mutual labels: shapley, interpretable-machine-learning
diabetes use case: Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-65.08%)
Mutual labels: interpretable-machine-learning, iml
xai-iml-sota: Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-19.05%)
Mutual labels: interpretable-machine-learning, iml
dominance-analysis: A package for dominance analysis or Shapley Value Regression, used to find the relative importance of predictors in a given dataset. It can be used for key driver analysis or marginal resource allocation models.
Stars: ✭ 111 (+76.19%)
Mutual labels: feature-importance, shapley-value
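Shapley Value Regression as used in dominance analysis can be sketched in a few lines: each predictor's importance is its incremental R² averaged over all orders in which predictors enter the model. The subset R² values below are hypothetical, standing in for the refits a real analysis would perform:

```python
from itertools import permutations

# Hypothetical R^2 for each predictor subset (in practice, obtained by
# refitting the regression on every subset of predictors).
r2 = {
    frozenset(): 0.00,
    frozenset({"x1"}): 0.50,
    frozenset({"x2"}): 0.40,
    frozenset({"x1", "x2"}): 0.65,
}

def shapley_regression(predictors, r2):
    """Average each predictor's incremental R^2 over all entry orders."""
    phi = {p: 0.0 for p in predictors}
    orders = list(permutations(predictors))
    for order in orders:
        included = frozenset()
        for p in order:
            phi[p] += (r2[included | {p}] - r2[included]) / len(orders)
            included = included | {p}
    return phi

print(shapley_regression(["x1", "x2"], r2))  # contributions sum to 0.65
```

The per-predictor contributions always sum to the full model's R², which is what makes this decomposition attractive for key driver analysis.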
Interpret: Fit interpretable models. Explain black-box machine learning.
Stars: ✭ 4,352 (+6807.94%)
Mutual labels: interpretable-machine-learning, iml
mllp: Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-76.19%)
Mutual labels: interpretable-machine-learning, iml
shapr: Explaining the output of machine learning models with more accurately estimated Shapley values
Stars: ✭ 95 (+50.79%)
Mutual labels: shapley
isarn-sketches-spark: Routines and data structures for using isarn-sketches idiomatically in Apache Spark
Stars: ✭ 28 (-55.56%)
Mutual labels: feature-importance
mta: Multi-Touch Attribution
Stars: ✭ 60 (-4.76%)
Mutual labels: shapley
XAIatERUM2020: Workshop on explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (-17.46%)
Mutual labels: interpretable-machine-learning
sage: Calculates global feature importance using Shapley values.
Stars: ✭ 129 (+104.76%)
Mutual labels: shapley
hierarchical-dnn-interpretations: Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+74.6%)
Mutual labels: feature-importance
Captum: Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+4392.06%)
Mutual labels: feature-importance
ArenaR: Data generator for Arena, an interactive XAI dashboard
Stars: ✭ 28 (-55.56%)
Mutual labels: iml
Predictive-Maintenance-of-Aircraft-Engine: A project applying various predictive maintenance techniques to accurately predict the impending failure of an aircraft turbofan engine.
Stars: ✭ 48 (-23.81%)
Mutual labels: feature-importance
deep-explanation-penalization: Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584)
Stars: ✭ 110 (+74.6%)
Mutual labels: feature-importance
ConceptBottleneck: Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (+44.44%)
Mutual labels: interpretable-machine-learning