
Top 80 interpretability open source projects

Torch Cam
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
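A minimal usage sketch based on the project's documented pattern; the module path (torchcam.methods) and extractor class vary between versions, and input_tensor is a placeholder for a preprocessed image batch:

    import torch
    from torchvision.models import resnet18
    from torchcam.methods import SmoothGradCAMpp  # module path differs in older torchcam releases

    model = resnet18(pretrained=True).eval()
    cam_extractor = SmoothGradCAMpp(model)        # hooks the model's activations/gradients
    out = model(input_tensor)                     # input_tensor: (1, 3, H, W) preprocessed image
    # CAM heatmap for the top-predicted class
    activation_map = cam_extractor(out.squeeze(0).argmax().item(), out)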
Aspect Based Sentiment Analysis
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
Explainx
Explainable AI framework for data scientists. Explain & debug any black-box machine learning model with a single line of code.
Awesome Explainable Ai
A collection of research materials on explainable AI/ML
Imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
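A rough illustration of the sklearn-compatible workflow, assuming the RuleFitClassifier estimator from the package (other imodels estimators follow the same fit/predict pattern); X_train, y_train, X_test are placeholders:

    from imodels import RuleFitClassifier  # one of several rule-based estimators in imodels

    model = RuleFitClassifier()
    model.fit(X_train, y_train)            # standard sklearn-style fit
    preds = model.predict(X_test)
    print(model)                           # rule-based models print their learned rules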
Pyss3
A Python package implementing a new machine learning model for text classification with visualization tools for Explainable AI
Modelstudio
📍 Interactive Studio for Explanatory Model Analysis
Lrp for lstm
Layer-wise Relevance Propagation (LRP) for LSTMs
Awesome Fairness In Ai
A curated list of awesome Fairness in AI resources
Visual Attribution
PyTorch implementation of recent visual attribution methods for model interpretability
Pycebox
⬛ Python Individual Conditional Expectation Plot Toolbox
Breakdown
Model-agnostic breakDown plots
Interpretability By Parts
Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Cxplain
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Reverse Engineering Neural Networks
A collection of tools for reverse engineering neural networks.
Cnn Interpretability
🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Athena
Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Adversarial Explainable Ai
💡 A curated list of adversarial attacks on model explanations
Text nn
Text classification models. Used as a submodule for other projects.
Trelawney
General Interpretability Package
Contrastiveexplanation
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Symbolic Metamodeling
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Alibi
Algorithms for monitoring and explaining machine learning models
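A hedged sketch of one of its explainers (AnchorTabular); predict_fn, feature_names, and the data arrays are placeholders, and attribute names may differ by release:

    from alibi.explainers import AnchorTabular

    # predict_fn: any callable returning class probabilities or labels for a batch
    explainer = AnchorTabular(predict_fn, feature_names=feature_names)
    explainer.fit(X_train)                  # fit discretizer / sampling statistics
    explanation = explainer.explain(X_test[0])
    print(explanation.anchor)               # human-readable IF-THEN conditions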
Dalex
moDel Agnostic Language for Exploration and eXplanation
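A minimal sketch of the Python API as documented by the project; model, X, y, and new_obs are placeholders:

    import dalex as dx

    explainer = dx.Explainer(model, X, y, label="my model")
    explainer.model_parts().plot()          # dataset-level: permutation importance
    explainer.predict_parts(new_obs).plot() # instance-level: breakDown attribution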
Tf Explain
Interpretability methods for tf.keras models with TensorFlow 2.x
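A rough usage sketch of its Grad-CAM explainer; the layer name and class index are illustrative, and the exact signature may vary by version:

    from tf_explain.core.grad_cam import GradCAM

    explainer = GradCAM()
    # images: a batch of preprocessed inputs; labels are not needed for Grad-CAM here
    grid = explainer.explain((images, None), model,
                             class_index=281, layer_name="conv5_block3_out")
    explainer.save(grid, ".", "grad_cam.png")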
Ad examples
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
Xai
XAI - An eXplainability toolbox for machine learning
Flashtorch
Visualization toolkit for neural networks in PyTorch
Xai resources
Interesting resources related to XAI (Explainable Artificial Intelligence)
Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Deeplift
Public-facing DeepLIFT repository
Tcav
Code for the TCAV ML interpretability project
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImageNet200, and ImageNet
Pytorch Grad Cam
Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM
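A minimal sketch following the project's documented pattern; the target layer, class index, and image variables are examples, and argument names may shift between releases:

    from pytorch_grad_cam import GradCAM
    from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
    from pytorch_grad_cam.utils.image import show_cam_on_image

    target_layers = [model.layer4[-1]]      # e.g. last block of a ResNet
    cam = GradCAM(model=model, target_layers=target_layers)
    grayscale_cam = cam(input_tensor=input_tensor,
                        targets=[ClassifierOutputTarget(281)])[0]
    overlay = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True)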
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
SPINE
Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
shapeshop
Towards Understanding Deep Learning Representations via Interactive Experimentation
knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
neuron-importance-zsl
[ECCV 2018] code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance
summit
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
sage
For calculating global feature importance using Shapley values.
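A sketch of the workflow described in the project README; the background sample size and "mse" loss are illustrative choices, and the data arrays are placeholders:

    import sage

    # Marginalize held-out features over a background sample of training data
    imputer = sage.MarginalImputer(model, X_train[:512])
    estimator = sage.PermutationEstimator(imputer, "mse")
    sage_values = estimator(X_test, y_test)  # global Shapley-based feature importance
    sage_values.plot(feature_names)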
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
zennit
Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.