interpretable-ml: Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-73.02%)
mllp: The code of the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-76.19%)
xai-iml-sota: Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-19.05%)
dominance-analysis: This package can be used for dominance analysis or Shapley Value Regression to find the relative importance of predictors on a given dataset. It can be used for key driver analysis or marginal resource allocation models.
Stars: ✭ 111 (+76.19%)
Shap: A game-theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+23577.78%)
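The game-theoretic idea behind Shap can be illustrated with a tiny exact Shapley value computation in pure Python. This is a conceptual sketch, not the shap library's API; the toy `coalition_value` function and the feature names are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's marginal contribution,
    averaged over all orderings in which coalitions can form."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                # Probability that exactly this coalition precedes p
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = value(set(coalition) | {p}) - value(set(coalition))
                phi[p] += weight * marginal
    return phi

# Hypothetical "model output" over feature coalitions: feature a is worth 3,
# feature b is worth 1, and using both adds a 2-point interaction.
def coalition_value(s):
    v = 0.0
    if "a" in s:
        v += 3.0
    if "b" in s:
        v += 1.0
    if "a" in s and "b" in s:
        v += 2.0
    return v

phi = shapley_values(["a", "b"], coalition_value)
# The interaction is split evenly: phi["a"] = 4.0, phi["b"] = 2.0,
# and the values sum to the full coalition's worth (6.0).
```

The shap library applies this same averaging idea to real models, using fast approximations instead of the exponential enumeration above.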
fastshap: Fast approximate Shapley values in R.
Stars: ✭ 79 (+25.4%)
Interpret: Fit interpretable models. Explain black-box machine learning.
Stars: ✭ 4,352 (+6807.94%)
diabetes use case: Sample use case for the Xavier AI in Healthcare conference (https://www.xavierhealth.org/ai-summit-day2/).
Stars: ✭ 22 (-65.08%)
hierarchical-dnn-interpretations: Using/reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019).
Stars: ✭ 110 (+74.6%)
ArenaR: Data generator for Arena, an interactive XAI dashboard.
Stars: ✭ 28 (-55.56%)
Awesome-XAI-Evaluation: Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems.
Stars: ✭ 57 (-9.52%)
XAIatERUM2020: Workshop on explanation and exploration of machine learning models with R and DALEX, held at eRum 2020.
Stars: ✭ 52 (-17.46%)
isarn-sketches-spark: Routines and data structures for using isarn-sketches idiomatically in Apache Spark.
Stars: ✭ 28 (-55.56%)
Captum: Model interpretability and understanding for PyTorch.
Stars: ✭ 2,830 (+4392.06%)
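One attribution method Captum provides is integrated gradients. The idea can be sketched in pure Python for a toy differentiable function, without Captum or PyTorch; the function `f`, its gradient, and the inputs below are hypothetical examples, not Captum's API:

```python
def integrated_gradients(f, grad_f, x, baseline, steps=100):
    """Approximate integrated gradients: (x - baseline) scaled by the
    average gradient along the straight path from baseline to x
    (midpoint rule for the path integral)."""
    n = len(x)
    avg_grad = [0.0] * n
    for step in range(1, steps + 1):
        alpha = (step - 0.5) / steps  # midpoint of each sub-interval
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        g = grad_f(point)
        for i in range(n):
            avg_grad[i] += g[i] / steps
    return [avg_grad[i] * (x[i] - baseline[i]) for i in range(n)]

# Toy model f(x) = x0^2 + 3*x1 with its analytic gradient (hypothetical):
f = lambda x: x[0] ** 2 + 3 * x[1]
grad_f = lambda x: [2 * x[0], 3.0]

attrs = integrated_gradients(f, grad_f, x=[2.0, 1.0], baseline=[0.0, 0.0])
# attrs is approximately [4.0, 3.0]; the attributions sum to
# f(x) - f(baseline) = 7, illustrating the completeness axiom.
```

Captum computes the same quantity for real PyTorch models via autograd rather than a hand-written gradient.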
Predictive-Maintenance-of-Aircraft-Engine: This project applies various predictive-maintenance techniques to accurately predict the impending failure of an aircraft turbofan engine.
Stars: ✭ 48 (-23.81%)
deep-explanation-penalization: Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584).
Stars: ✭ 110 (+74.6%)
removal-explanations: A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-26.98%)
shapr: Explaining the output of machine learning models with more accurately estimated Shapley values.
Stars: ✭ 95 (+50.79%)
sage: Calculates global feature importance using Shapley values.
Stars: ✭ 129 (+104.76%)
mta: Multi-Touch Attribution.
Stars: ✭ 60 (-4.76%)
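Multi-touch attribution distributes conversion credit across the marketing channels a user touched before converting. A minimal sketch of the simplest rule, linear attribution, in pure Python (the channel names and paths are hypothetical; this is not the mta package's API):

```python
from collections import defaultdict

def linear_attribution(paths):
    """Linear multi-touch attribution: each converting path carries one
    unit of credit, split equally across its touchpoints."""
    credit = defaultdict(float)
    for path in paths:
        share = 1.0 / len(path)
        for channel in path:
            credit[channel] += share
    return dict(credit)

# Hypothetical conversion paths (ordered channel touches before conversion)
paths = [
    ["search", "email", "display"],
    ["email", "display"],
    ["search"],
]
credit = linear_attribution(paths)
# search earns 1/3 + 1 = 4/3 credits; email and display earn 5/6 each,
# so total credit equals the number of conversions (3).
```

Libraries like mta implement more sophisticated rules (e.g. Markov-chain or Shapley-based attribution) on top of this same path-of-touchpoints data model.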
ml-fairness-framework: FairPut, a machine learning fairness framework with LightGBM covering explainability, robustness, and fairness (by @firmai).
Stars: ✭ 59 (-6.35%)
ProtoTree: Neural Prototype Trees for Interpretable Fine-Grained Image Recognition, published at CVPR 2021.
Stars: ✭ 47 (-25.4%)