
Top 60 interpretability open source projects

TorchCAM
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
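All of the CAM variants listed above share one core recipe: weight a convolutional layer's feature maps by how much they influence the target class score, sum, and rectify. A minimal NumPy sketch of the plain Grad-CAM weighting (not TorchCAM's actual API), assuming you have already extracted the layer activations and the gradients of the class score with respect to them:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Plain Grad-CAM: weight each feature map by its pooled gradient,
    sum the weighted maps, and clip negatives with ReLU.

    activations: (K, H, W) feature maps from the chosen conv layer
    gradients:   (K, H, W) d(class score)/d(activations), from autograd
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: global-average-pooled grads
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU keeps positively contributing regions
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for heatmap overlay
    return cam
```

The variants differ mainly in how the weights are computed (e.g. Score-CAM replaces gradients with forward-pass confidence scores), while the weighted-sum-plus-ReLU step stays the same.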
Aspect Based Sentiment Analysis
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
Explainable AI framework for data scientists. Explain and debug any black-box machine learning model with a single line of code.
Awesome Explainable AI
A collection of research materials on explainable AI/ML
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
A Python package implementing a new machine learning model for text classification with visualization tools for Explainable AI
📍 Interactive Studio for Explanatory Model Analysis
LRP for LSTM
Layer-wise Relevance Propagation (LRP) for LSTMs
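Layer-wise Relevance Propagation redistributes a network's output score backwards, layer by layer, so that each input receives relevance proportional to its contribution. A minimal NumPy sketch of the epsilon rule for a single linear layer (an illustration of the principle, not this repo's LSTM-specific code):

```python
import numpy as np

def lrp_epsilon(x, W, b, relevance_out, eps=1e-6):
    """Epsilon-rule LRP through one linear layer z = W @ x + b.

    Redistributes the relevance of each output neuron back to the
    inputs in proportion to each input's contribution w_ij * x_j.
    """
    z = W @ x + b                  # forward pre-activations
    z = z + eps * np.sign(z)       # stabilizer avoids division by ~0
    s = relevance_out / z          # per-output relevance "density"
    return x * (W.T @ s)           # R_j = x_j * sum_i w_ij * s_i
```

With a zero bias and small eps, the rule is conservative: the input relevances sum (approximately) to the relevance that entered the layer.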
Awesome Fairness in AI
A curated list of awesome Fairness in AI resources
Visual Attribution
PyTorch implementation of recent visual attribution methods for model interpretability
⬛ Python Individual Conditional Expectation Plot Toolbox
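An Individual Conditional Expectation (ICE) plot sweeps one feature over a grid for every individual sample, holding the other features fixed, and traces how the model's prediction changes. A minimal model-agnostic sketch (hypothetical helper, not this toolbox's API):

```python
import numpy as np

def ice_curves(predict, X, feature, grid):
    """Compute ICE curves for one feature.

    predict: callable mapping an (n, d) array to n predictions
    X:       (n, d) data matrix
    feature: column index to sweep
    grid:    values to force the feature to

    Returns an (n_samples, n_grid) array of predictions; averaging
    over axis 0 gives the classic partial-dependence curve.
    """
    curves = np.empty((X.shape[0], len(grid)))
    for j, v in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature] = v            # counterfactual: force feature to v
        curves[:, j] = predict(X_mod)
    return curves
```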
Model-agnostic breakDown plots
Interpretability By Parts
Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Reverse Engineering Neural Networks
A collection of tools for reverse engineering neural networks.
CNN Interpretability
🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Adversarial Explainable AI
💡 A curated list of adversarial attacks on model explanations
Text nn
Text classification models. Used as a submodule for other projects.
General Interpretability Package
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Symbolic Metamodeling
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Algorithms for monitoring and explaining machine learning models
moDel Agnostic Language for Exploration and eXplanation
tf-explain
Interpretability methods for tf.keras models with TensorFlow 2.x
AD Examples
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
XAI - An eXplainability toolbox for machine learning
Visualization toolkit for neural networks in PyTorch
XAI Resources
Interesting resources related to XAI (Explainable Artificial Intelligence)
Interpretable Machine Learning with Python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Public-facing DeepLIFT repo
Code for the TCAV ML interpretability project
A collection of infrastructure and tools for research in neural network interpretability.
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR-10, CIFAR-100, TinyImageNet200, and ImageNet
PyTorch Grad-CAM
Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM
A lightweight implementation of removal-based explanations for ML models.
Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Towards Understanding Deep Learning Representations via Interactive Experimentation
A library for finding knowledge neurons in pretrained transformer models.
[ECCV 2018] code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
For calculating global feature importance using Shapley values.
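A Shapley value assigns each feature its average marginal contribution to a value function (here, e.g. model performance with only a subset of features available) over all possible feature coalitions. A brute-force sketch that enumerates every coalition, feasible only for toy problems (the project above uses efficient approximations instead):

```python
import itertools
import math

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all feature coalitions.

    value_fn(S) returns the value of the frozenset S of feature
    indices. Exponential in n_features, so only for small examples.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [f for f in range(n_features) if f != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                S = frozenset(S)
                # Shapley kernel: |S|! * (n - |S| - 1)! / n!
                weight = (math.factorial(len(S)) *
                          math.factorial(n_features - len(S) - 1) /
                          math.factorial(n_features))
                phi[i] += weight * (value_fn(S | {i}) - value_fn(S))
    return phi
```

For an additive value function the Shapley values recover each feature's own contribution exactly, and they always satisfy efficiency: the values sum to value(all features) minus value(none).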
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.