Mindsdb - Predictive AI layer for existing databases.
Interpret - Fit interpretable models. Explain blackbox machine learning.
Tensorwatch - Debugging, monitoring and visualization for Python Machine Learning and Data Science.
diabetes use case - Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
ml-fairness-framework - FairPut, a Machine Learning Fairness Framework with LightGBM covering explainability, robustness, and fairness (by @firmai).
global-attribution-mapping - GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations.
SHAP-FOLD - (Explainable AI) Learning Non-Monotonic Logic Programs from Statistical Models Using High-Utility Itemset Mining.
fastshap - Fast approximate Shapley values in R.
DIG - A library for graph deep learning research.
shapr - Explaining the output of machine learning models with more accurately estimated Shapley values.
CARLA - A Python library to benchmark algorithmic recourse and counterfactual explanation algorithms.
cnn-raccoon - Create interactive dashboards for your Convolutional Neural Networks with a single line of code!
ProtoTree - Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021.
trulens - Library containing attribution and interpretation methods for deep nets.
mllp - Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
responsible-ai-toolbox - Responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, plus the foundational building blocks they rely on.
Deep XF - Package for building explainable forecasting and nowcasting models on time-series data with state-of-the-art deep neural networks and a Dynamic Factor Model, in a single line of code. Also provides utilities for time-series signal similarity matching and for removing noise from time-series signals.
dlime experiments - Proposes a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
xai-iml-sota - Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
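Several entries above (fastshap, shapr) estimate Shapley values for model explanations. As background, here is a minimal sketch of the underlying idea, Monte Carlo Shapley estimation, in Python with NumPy; the toy model and sampling loop are illustrative assumptions, not the API of either package.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy additive model standing in for any black box:
    # f(x) = 2*x0 + 1*x1 + 0*x2
    return 2.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def shapley_monte_carlo(f, x, baseline, n_samples=2000):
    """Estimate Shapley values for one instance x against a baseline
    by averaging marginal contributions over random feature orderings."""
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)
        z = baseline.copy()
        prev = f(z)
        for j in order:
            z[j] = x[j]           # add feature j to the coalition
            cur = f(z)
            phi[j] += cur - prev  # marginal contribution of feature j
            prev = cur
    return phi / n_samples

x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
phi = shapley_monte_carlo(model, x, baseline)
# For this additive model the exact Shapley values are the
# coefficients times (x - baseline): [2, 1, 0].
print(phi)
```

Packages like shapr improve on this naive sampler by modeling feature dependence when forming the coalitions, which is where the "more accurately estimated" claim comes from.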