Mindsdb - Predictive AI layer for existing databases.
Interpret - Fit interpretable models. Explain black-box machine learning (see the EBM sketch after this list).
Tensorwatch - Debugging, monitoring and visualization for Python Machine Learning and Data Science.
ml-fairness-framework - FairPut, a Machine Learning Fairness Framework with LightGBM: Explainability, Robustness, Fairness (by @firmai).
WhiteBox-Part1 - Introduces and experiments with ways to interpret and evaluate models in the image domain (PyTorch).
global-attribution-mapping - GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations.
SHAP FOLD - (Explainable AI) Learning Non-Monotonic Logic Programs From Statistical Models Using High-Utility Itemset Mining.
fastshap - Fast approximate Shapley values in R.
shapr - Explaining the output of machine learning models with more accurately estimated Shapley values (the Shapley value formula is given after this list).
CARLA - A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms.
path explain - A repository for explaining feature attributions and feature interactions in deep neural networks.
ddsm-visual-primitives - Using deep learning to discover interpretable representations for mammogram classification and explanation.
GraphLIME - A PyTorch implementation of GraphLIME.
zennit - A high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP.
ProtoTree - ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021.
awesome-agi-cocosci - An awesome & curated list for Artificial General Intelligence, an emerging interdisciplinary field that combines artificial intelligence and computational cognitive science.
javaAnchorExplainer - Explains machine learning models fast using the Anchor algorithm originally proposed by marcotcr in 2018.
deep-explanation-penalization - Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584).
transformers-interpret - Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code (see the usage sketch after this list).
mllp - The code of the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
responsible-ai-toolbox - This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
meg - Molecular Explanation Generator.
dlime experiments - Proposes a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experiments on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
Transformer-MM-Explainability - [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network; includes examples for DETR and VQA.
concept-based-xai - Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI.
bert attn viz - Visualize BERT's self-attention layers on text classification tasks.
expmrc - ExpMRC: Explainability Evaluation for Machine Reading Comprehension.
3D-GuidedGradCAM-for-Medical-Imaging - Implementation of Guided Grad-CAM for 3D medical imaging using NIfTI files in TensorFlow 2.0. Different input files can be used; in that case, edit the input to the Guided Grad-CAM model.
grasp - Essential NLP & ML, short & fast pure Python code.
XAIatERUM2020 - Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020.
datafsm - Machine Learning Finite State Machine Models from Data with Genetic Algorithms.
mindsdb server - MindsDB server allows you to consume and expose MindsDB workflows through HTTP.
self critical vqa - Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering".
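For the Interpret entry above, a minimal sketch of fitting a glass-box model and viewing its explanations. It assumes the `interpret` package's `ExplainableBoostingClassifier` and `show()` as commonly documented, and uses a scikit-learn toy dataset purely for illustration.

```python
# Hedged sketch for the Interpret entry: fit a glass-box EBM and inspect it.
# Assumes interpret's ExplainableBoostingClassifier / show() API.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()  # interpretable by design
ebm.fit(X_train, y_train)

show(ebm.explain_global())                        # per-feature shape functions
show(ebm.explain_local(X_test[:5], y_test[:5]))   # explanations for 5 rows
```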
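For the fastshap and shapr entries, the quantity both packages estimate is the Shapley value of a feature for a single prediction, with the model's expected output playing the role of the value function:

```latex
% Shapley value of feature i in the feature set N, where v(S) is the model's
% expected prediction when only the features in S are known:
\[
  \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
    \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
    \bigl( v(S \cup \{i\}) - v(S) \bigr)
\]
```

Exact computation is exponential in the number of features, which is why both packages focus on fast or more accurate approximations of this sum.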
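And for the transformers-interpret entry, a sketch of the advertised two-line usage. It assumes the `SequenceClassificationExplainer` API and a standard Hugging Face checkpoint name; treat the exact identifiers as illustrative rather than definitive.

```python
# Hedged sketch for the transformers-interpret entry above.
# Assumes SequenceClassificationExplainer from transformers_interpret and a
# standard Hugging Face sentiment checkpoint (names are illustrative).
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# The "2 lines": build the explainer, then call it on a sentence.
explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("The plot was thin but the acting was superb.")

print(word_attributions)  # list of (token, attribution score) pairs
```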