Shap - A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+32328.26%)
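A minimal sketch of the usual shap workflow (the scikit-learn model and dataset here are illustrative, not part of the project):

```python
# Minimal shap sketch; assumes shap and scikit-learn are installed.
# The dataset and model are illustrative only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: per-feature importance summarized over the whole dataset.
shap.summary_plot(shap_values, X)
```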
sage - For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+180.43%)
deep-explanation-penalization - Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584).
Stars: ✭ 110 (+139.13%)
zennit - A high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (+23.91%)
ArenaR - Data generator for Arena, an interactive XAI dashboard.
Stars: ✭ 28 (-39.13%)
thermostat - Collection of NLP model explanations and accompanying analysis tools.
Stars: ✭ 126 (+173.91%)
hierarchical-dnn-interpretations - Using/reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019).
Stars: ✭ 110 (+139.13%)
mllp - Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-67.39%)
ALPS 2021 - XAI tutorial for the Explainable AI track at the ALPS 2021 winter school.
Stars: ✭ 55 (+19.57%)
adaptive-wavelets - Adaptive, interpretable wavelets across domains (NeurIPS 2021).
Stars: ✭ 58 (+26.09%)
concept-based-xai - Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI.
Stars: ✭ 41 (-10.87%)
ProtoTree - ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021.
Stars: ✭ 47 (+2.17%)
Interpret - Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+9360.87%)
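For flavor, a minimal sketch of fitting one of InterpretML's glassbox models, an Explainable Boosting Machine (the dataset choice is illustrative):

```python
# Minimal InterpretML sketch; assumes the `interpret` package is installed.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable Boosting Machine: an interpretable GAM-style classifier.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Inspect the learned per-feature contributions in the interactive dashboard.
show(ebm.explain_global())
```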
Transformer-MM-Explainability - [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network, with examples for DETR and VQA.
Stars: ✭ 484 (+952.17%)
adversarial-robustness-public - Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients".
Stars: ✭ 49 (+6.52%)
contextual-ai - Contextual AI adds explainability to different stages of machine learning pipelines (data, training, and inference), thereby addressing the trust gap between such ML systems and their users. It does not refer to a specific algorithm or ML method; instead, it takes a human-centric view and approach to AI.
Stars: ✭ 81 (+76.09%)
EgoCNN - Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019).
Stars: ✭ 16 (-65.22%)
spatio-temporal-brain - A deep graph neural network architecture for modelling spatio-temporal dynamics in rs-fMRI data.
Stars: ✭ 22 (-52.17%)
fastshap - Fast approximate Shapley values in R.
Stars: ✭ 79 (+71.74%)
partial dependence - Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-50%)
GNNLens2 - Visualization tool for Graph Neural Networks.
Stars: ✭ 155 (+236.96%)
kernel-mod - Linear-time model comparison tests (NeurIPS 2018).
Stars: ✭ 17 (-63.04%)
ShapML.jl - A Julia package for interpretable machine learning with stochastic Shapley values.
Stars: ✭ 63 (+36.96%)
Torch Cam - Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM).
Stars: ✭ 249 (+441.3%)
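A minimal Grad-CAM sketch (assuming a recent torchcam release where extractors live under `torchcam.methods`; the model and random input are illustrative):

```python
# Minimal torchcam sketch; assumes a recent release where CAM extractors
# live under torchcam.methods. Model and input are illustrative only.
import torch
from torchvision.models import resnet18
from torchcam.methods import GradCAM

model = resnet18(weights="IMAGENET1K_V1").eval()

# Hook the last convolutional block of the ResNet.
cam_extractor = GradCAM(model, target_layer="layer4")

input_tensor = torch.rand(1, 3, 224, 224)
scores = model(input_tensor)

# Retrieve the CAM for the top-predicted class.
cams = cam_extractor(scores.squeeze(0).argmax().item(), scores)
print(cams[0].shape)  # spatial activation map for the hooked layer
```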
Captum - Model interpretability and understanding for PyTorch.
Stars: ✭ 2,830 (+6052.17%)
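A minimal Integrated Gradients sketch with Captum (the toy model and inputs are illustrative; any `nn.Module` works the same way):

```python
# Minimal Captum sketch: Integrated Gradients on a toy classifier.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy two-class model; stands in for any nn.Module.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

inputs = torch.randn(3, 4, requires_grad=True)
baselines = torch.zeros_like(inputs)  # reference point for the path integral

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs, baselines=baselines, target=0, return_convergence_delta=True
)
print(attributions.shape)  # per-feature attributions, same shape as inputs
```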
neuron-importance-zsl - Code for "Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance" (ECCV 2018).
Stars: ✭ 56 (+21.74%)
mta - Multi-Touch Attribution.
Stars: ✭ 60 (+30.43%)
summit - 🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations.
Stars: ✭ 95 (+106.52%)
xai-iml-sota - Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (+10.87%)
Pyss3 - A Python package implementing a new machine learning model for text classification, with visualization tools for Explainable AI.
Stars: ✭ 191 (+315.22%)
free-lunch-saliency - Code for "Free-Lunch Saliency via Attention in Atari Agents".
Stars: ✭ 15 (-67.39%)
yggdrasil-decision-forests - A collection of state-of-the-art algorithms for training, serving, and interpreting decision forest models.
Stars: ✭ 156 (+239.13%)
transformers-interpret - Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+1771.74%)
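The advertised two lines are the explainer construction and the call on a string (a minimal sketch; the checkpoint is a common public sentiment model used here only as an example):

```python
# Minimal transformers-interpret sketch; the checkpoint is illustrative.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# The advertised two lines: build the explainer, then call it on text.
explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("Transformers interpret makes explanations easy.")

print(word_attributions)  # (token, attribution score) pairs
```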
Explainx - Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
Stars: ✭ 196 (+326.09%)
shapeshop - Towards Understanding Deep Learning Representations via Interactive Experimentation.
Stars: ✭ 16 (-65.22%)
Imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (+321.74%)
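Since the estimators are sklearn-compatible, usage follows the familiar fit/predict pattern. A minimal sketch, where the specific estimator name `RuleFitClassifier` is assumed from the package's lineup of rule-based models and the dataset is illustrative:

```python
# Minimal imodels sketch; the estimator name RuleFitClassifier is assumed
# from the package's family of rule-based models. Dataset is illustrative.
from imodels import RuleFitClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable rule-ensemble model with the standard sklearn interface.
model = RuleFitClassifier()
model.fit(X_train, y_train)
preds = model.predict(X_test)

# The fitted model exposes its learned rules for inspection.
print(model)
```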
interpretable-ml - Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-63.04%)
Lrp for lstm - Layer-wise Relevance Propagation (LRP) for LSTMs.
Stars: ✭ 152 (+230.43%)
StellarGraph - Machine Learning on Graphs.
Stars: ✭ 2,235 (+4758.7%)
responsible-ai-toolbox - This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+1236.96%)
Modelstudio - 📍 Interactive Studio for Explanatory Model Analysis.
Stars: ✭ 163 (+254.35%)
mmn - Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks.
Stars: ✭ 39 (-15.22%)
shapr - Explaining the output of machine learning models with more accurately estimated Shapley values.
Stars: ✭ 95 (+106.52%)
Visual Attribution - PyTorch implementation of recent visual attribution methods for model interpretability.
Stars: ✭ 127 (+176.09%)
Pycebox - ⬛ Python Individual Conditional Expectation Plot Toolbox.
Stars: ✭ 101 (+119.57%)
Breakdown - Model-agnostic breakDown plots.
Stars: ✭ 93 (+102.17%)
meg - Molecular Explanation Generator.
Stars: ✭ 14 (-69.57%)
SPINE - Code for SPINE: Sparse Interpretable Neural Embeddings (Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E., AAAI 2018).
Stars: ✭ 44 (-4.35%)
knowledge-neurons - A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (+56.52%)
CARLA - A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms.
Stars: ✭ 166 (+260.87%)
stagin - STAGIN: Spatio-Temporal Attention Graph Isomorphism Network.
Stars: ✭ 34 (-26.09%)