88 Open source projects that are alternatives of or similar to Captum

zennit
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Stars: ✭ 57 (-97.99%)
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-99.4%)
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+53.78%)
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (-15.05%)
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-99.47%)
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (-95.55%)
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (-96.11%)
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-99.22%)
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (-96.11%)
Pytorch Grad Cam
Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM.
Stars: ✭ 3,814 (+34.77%)
Flashtorch
Visualization toolkit for neural networks in PyTorch.
Stars: ✭ 561 (-80.18%)
Mutual labels:  interpretability
Awesome Federated Learning
Federated Learning Library: https://fedml.ai
Stars: ✭ 624 (-77.95%)
Mutual labels:  interpretability
Interpretability By Parts
Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Stars: ✭ 88 (-96.89%)
Mutual labels:  interpretability
Lrp for lstm
Layer-wise Relevance Propagation (LRP) for LSTMs
Stars: ✭ 152 (-94.63%)
Mutual labels:  interpretability
Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (-81.27%)
Mutual labels:  interpretability
Reverse Engineering Neural Networks
A collection of tools for reverse engineering neural networks.
Stars: ✭ 78 (-97.24%)
Mutual labels:  interpretability
Tcav
Code for the TCAV ML interpretability project
Stars: ✭ 442 (-84.38%)
Mutual labels:  interpretability
Mli Resources
H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (-84.88%)
Mutual labels:  interpretability
Awesome Production Machine Learning
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning models.
Stars: ✭ 10,504 (+271.17%)
Mutual labels:  interpretability
Awesome deep learning interpretability
Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with code).
Stars: ✭ 401 (-85.83%)
Mutual labels:  interpretability
Facet
Human-explainable AI.
Stars: ✭ 269 (-90.49%)
Mutual labels:  interpretability
Awesome Explainable Ai
A collection of research materials on explainable AI/ML
Stars: ✭ 186 (-93.43%)
Mutual labels:  interpretability
Awesome Fairness In Ai
A curated list of awesome Fairness in AI resources
Stars: ✭ 144 (-94.91%)
Mutual labels:  interpretability
Adversarial Explainable Ai
💡 A curated list of adversarial attacks on model explanations
Stars: ✭ 56 (-98.02%)
Mutual labels:  interpretability
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-98.37%)
Mutual labels:  interpretability
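Removal-based explanations like the ones this package implements share one core idea: score each feature by how much the model's output changes when that feature is withheld. A minimal stdlib-only sketch of that idea (the `removal_importance` helper and the toy linear model are illustrative, not this package's API):

```python
def removal_importance(predict, x, baseline):
    """Score each feature by the prediction drop when it is
    replaced with a baseline value (a simple occlusion test)."""
    full = predict(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline[i]  # "remove" feature i
        scores.append(full - predict(occluded))
    return scores

# Toy linear model: prediction is a weighted sum of inputs.
weights = [0.5, -1.0, 2.0]
predict = lambda x: sum(w * v for w, v in zip(weights, x))
scores = removal_importance(predict, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
# For a linear model with a zero baseline, each score equals w_i * x_i.
```

Real implementations differ mainly in how they choose the baseline (marginal or conditional sampling rather than a fixed value) and in which subsets of features they remove.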
WhiteBox-Part1
Introduces and experiments with ways to interpret and evaluate models in the image domain. (PyTorch)
Stars: ✭ 34 (-98.8%)
Mutual labels:  interpretable-ai
Ad examples
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with a Graph Convolutional Network.
Stars: ✭ 641 (-77.35%)
Mutual labels:  interpretability
Awesome Computer Vision
Awesome Resources for Advanced Computer Vision Topics
Stars: ✭ 92 (-96.75%)
Mutual labels:  interpretability
Xai
XAI - An eXplainability toolbox for machine learning
Stars: ✭ 596 (-78.94%)
Mutual labels:  interpretability
Modelstudio
📍 Interactive Studio for Explanatory Model Analysis
Stars: ✭ 163 (-94.24%)
Mutual labels:  interpretability
Xai resources
Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (-80.46%)
Mutual labels:  interpretability
Cxplain
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (-97.03%)
Mutual labels:  interpretability
Deeplift
Public-facing DeepLIFT repo
Stars: ✭ 512 (-81.91%)
Mutual labels:  interpretability
Pyss3
A Python package implementing a new machine learning model for text classification with visualization tools for Explainable AI
Stars: ✭ 191 (-93.25%)
Mutual labels:  interpretability
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+53.5%)
Mutual labels:  interpretability
Cnn Interpretability
🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Stars: ✭ 68 (-97.6%)
Mutual labels:  interpretability
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Stars: ✭ 411 (-85.48%)
Mutual labels:  interpretability
Stellargraph
StellarGraph - Machine Learning on Graphs
Stars: ✭ 2,235 (-21.02%)
Mutual labels:  interpretability
Trelawney
General Interpretability Package
Stars: ✭ 55 (-98.06%)
Mutual labels:  interpretability
knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (-97.46%)
Mutual labels:  interpretability
Athena
Automatic equation building and curve fitting. Runs on Tensorflow. Built for academia and research.
Stars: ✭ 57 (-97.99%)
Mutual labels:  interpretability
caltech birds
A set of notebooks as a guide to the process of fine-grained image classification of birds species, using PyTorch based deep neural networks.
Stars: ✭ 29 (-98.98%)
Mutual labels:  feature-attribution
SPINE
Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Stars: ✭ 44 (-98.45%)
Mutual labels:  interpretability
Text nn
Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (-98.06%)
Mutual labels:  interpretability
shapeshop
Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-99.43%)
Mutual labels:  interpretability
Gandissect
PyTorch-based tools for visualizing and understanding the neurons of a GAN. https://gandissect.csail.mit.edu/
Stars: ✭ 1,700 (-39.93%)
Mutual labels:  interpretable-ml
Predictive-Maintenance-of-Aircraft-Engine
Applies various predictive maintenance techniques to accurately predict the impending failure of an aircraft turbofan engine.
Stars: ✭ 48 (-98.3%)
Mutual labels:  feature-importance
Contrastiveexplanation
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-98.73%)
Mutual labels:  interpretability
Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+427.1%)
Mutual labels:  interpretability
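For tiny feature sets, the game-theoretic attributions SHAP approximates reduce to the exact Shapley value formula, which can be computed by brute force. A pure-Python sketch of that computation (exponential in the number of features; `shapley_values` is an illustrative helper, not SHAP's API):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values for a small feature set.
    value_fn maps a set of features to the model's payoff."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Classic Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy additive "model": payoff is the sum of per-feature contributions.
contrib = {"a": 2.0, "b": 1.0, "c": -0.5}
value = lambda s: sum(contrib[f] for f in s)
phi = shapley_values(list(contrib), value)
# For an additive game, each Shapley value equals the feature's own contribution,
# and the values sum to the full coalition's payoff (efficiency).
```

SHAP's contribution is making this tractable for real models via sampling and model-specific approximations (e.g. TreeSHAP, KernelSHAP).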
neuron-importance-zsl
[ECCV 2018] code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance
Stars: ✭ 56 (-98.02%)
Mutual labels:  interpretability
Visual Attribution
PyTorch implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (-95.51%)
Mutual labels:  interpretability
Symbolic Metamodeling
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-98.98%)
Mutual labels:  interpretability
summit
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (-96.64%)
Mutual labels:  interpretability
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (-95.44%)
Mutual labels:  interpretability
Alibi
Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (-67.35%)
Mutual labels:  interpretability
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (-94.49%)
Mutual labels:  interpretability
Pycebox
⬛ Python Individual Conditional Expectation Plot Toolbox
Stars: ✭ 101 (-96.43%)
Mutual labels:  interpretability
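Individual Conditional Expectation plots like the ones pycebox draws trace a single row's prediction as one feature sweeps over a grid while the rest of the row stays fixed. A minimal sketch of the underlying computation (the `ice_curve` helper and toy model are illustrative, not pycebox's API):

```python
def ice_curve(predict, row, feature_idx, grid):
    """Individual Conditional Expectation: vary one feature over a
    grid while holding the other features of the row fixed."""
    curve = []
    for v in grid:
        x = list(row)
        x[feature_idx] = v
        curve.append(predict(x))
    return curve

# Toy model with an interaction between features 0 and 1.
predict = lambda x: x[0] * x[1] + x[2]
curve = ice_curve(predict, [1.0, 2.0, 3.0], 0, [0.0, 1.0, 2.0])
# With feature 1 fixed at 2.0, the curve over feature 0 has slope 2 —
# interactions show up as per-row slopes that differ across rows.
```

Averaging ICE curves over all rows yields the familiar partial dependence plot; plotting them individually is what reveals heterogeneous effects.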
Grad Cam
[ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (-68.52%)
Mutual labels:  interpretability
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (-98.34%)
Mutual labels:  interpretability
Explainx
Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
Stars: ✭ 196 (-93.07%)
Mutual labels:  interpretability
1-60 of 88 similar projects