
88 open-source projects that are alternatives to or similar to Captum

summit
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (-96.64%)
Mutual labels:  interpretability
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (-95.44%)
Mutual labels:  interpretability
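The Shapley-value machinery that sage builds on can be illustrated from scratch. Below is a minimal sketch of exact Shapley attribution for a toy two-feature model (this is not sage's API — sage estimates global importance from a loss function, and all names here are illustrative):

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley values for a payoff function defined on subsets of n players."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy model f(x1, x2) = x1 + 2*x2 + x1*x2, with absent features zeroed out.
x = (1.0, 1.0)
def v(S):
    x1 = x[0] if 0 in S else 0.0
    x2 = x[1] if 1 in S else 0.0
    return x1 + 2 * x2 + x1 * x2

print(shapley_values(v, 2))  # [1.5, 2.5]; attributions sum to f(x) - f(0) = 4
```

The exact computation is exponential in the number of features, which is why packages like sage use stochastic sampling instead.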
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (-94.49%)
Mutual labels:  interpretability
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (-98.34%)
Mutual labels:  interpretability
Visualizing-CNNs-for-monocular-depth-estimation
Official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"
Stars: ✭ 120 (-95.76%)
Mutual labels:  interpretability
dominance-analysis
This package can be used for dominance analysis or Shapley Value Regression to find the relative importance of predictors in a given dataset. It can be used for key driver analysis or marginal resource allocation models.
Stars: ✭ 111 (-96.08%)
Mutual labels:  feature-importance
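Shapley Value Regression, one of the two methods this package offers, decomposes a model's R² into per-predictor shares by averaging each predictor's marginal R² contribution over all subsets. A minimal from-scratch sketch (illustrative only, not the package's API):

```python
import numpy as np
from itertools import combinations
from math import factorial

def r2(X, y, cols):
    """R^2 of an OLS fit of y on the selected columns (plus intercept)."""
    if not cols:
        return 0.0
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def shapley_r2(X, y):
    """Shapley decomposition of R^2 across predictors (Shapley Value Regression)."""
    n = X.shape[1]
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (r2(X, y, list(S) + [i]) - r2(X, y, list(S)))
    return phi

# Synthetic data: predictor 0 matters most, predictor 2 is pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200)
phi = shapley_r2(X, y)  # shares sum to the full-model R^2
```

The shares are non-negative in expectation and sum exactly to the full-model R², which is what makes them usable as "relative importance" figures.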
partial dependence
Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-99.19%)
Mutual labels:  interpretability
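The quantity this package visualizes is simple to compute: the partial dependence of a model on feature j at value v is the model's average prediction after forcing column j to v across the whole dataset. A minimal sketch with a toy model (names are illustrative, not the package's API):

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """PD_j(v) = mean over the data of model(X with column `feature` set to v)."""
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v        # force the feature to v for every row
        pd.append(model(Xv).mean())
    return np.array(pd)

# Toy model with an interaction term: f(x) = x0^2 + x0 * x1
model = lambda X: X[:, 0] ** 2 + X[:, 0] * X[:, 1]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
grid = np.linspace(-2, 2, 5)
print(partial_dependence(model, X, 0, grid))
# ~ v^2 + v * E[x1], i.e. close to v^2 since E[x1] is near 0
```

Plotting the returned curve over the grid gives the familiar partial-dependence plot; clustering per-instance versions of these curves (ICE curves) is the package's added twist.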
free-lunch-saliency
Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-99.47%)
Mutual labels:  interpretability
ShapML.jl
A Julia package for interpretable machine learning with stochastic Shapley values
Stars: ✭ 63 (-97.77%)
Mutual labels:  feature-importance
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (-69.58%)
Mutual labels:  interpretability
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (-96.78%)
Mutual labels:  interpretability
mmn
Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-98.62%)
Mutual labels:  interpretability
meg
Molecular Explanation Generator
Stars: ✭ 14 (-99.51%)
Mutual labels:  interpretability
Deep XF
Package for building explainable forecasting and nowcasting models with state-of-the-art deep neural networks and a dynamic factor model on time-series datasets, with a single line of code. Also provides utilities for time-series signal similarity matching and for removing noise from time-series signals.
Stars: ✭ 83 (-97.07%)
Mutual labels:  interpretable-ai
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture that provides context-based eye-fixation prediction for several autonomous-driving scenarios while offering interpretability both globally and locally.
Stars: ✭ 33 (-98.83%)
Mutual labels:  interpretability
Transformer-MM-Explainability
[ICCV 2021 - Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method for visualizing any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (-82.9%)
Mutual labels:  interpretability
adversarial-robustness-public
Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (-98.27%)
Mutual labels:  interpretability
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-99.01%)
Mutual labels:  interpretability
EgoCNN
Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-99.43%)
Mutual labels:  interpretability
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-98.55%)
Mutual labels:  interpretability
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop learning, and Visual Analytics.
Stars: ✭ 51 (-98.2%)
Mutual labels:  interpretability
ALPS 2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (-98.06%)
Mutual labels:  interpretability
kernel-mod
NeurIPS 2018. Linear-time model comparison tests.
Stars: ✭ 17 (-99.4%)
Mutual labels:  interpretability
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (-97.95%)
Mutual labels:  interpretability
isarn-sketches-spark
Routines and data structures for using isarn-sketches idiomatically in Apache Spark
Stars: ✭ 28 (-99.01%)
Mutual labels:  feature-importance
self critical vqa
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (-98.62%)
Mutual labels:  interpretable-ai
Torch Cam
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
Stars: ✭ 249 (-91.2%)
Mutual labels:  interpretability
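Torch Cam wraps the CAM family behind extractor classes hooked into a PyTorch model; the core Grad-CAM arithmetic itself is short. A from-scratch NumPy sketch of that step, assuming the activation maps and gradients have already been captured (illustrative only, not the library's API):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM core step: weight each activation map by the global-average-pooled
    gradient of the target class score, sum the maps, and pass through ReLU."""
    # activations, gradients: arrays of shape (channels, H, W)
    weights = gradients.mean(axis=(1, 2))        # alpha_k: GAP over spatial dims
    cam = np.tensordot(weights, activations, 1)  # sum_k alpha_k * A_k -> (H, W)
    cam = np.maximum(cam, 0)                     # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                    # normalize to [0, 1] for display
    return cam

rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))        # stand-in activation maps from a conv layer
G = rng.normal(size=(8, 7, 7))   # stand-in gradients of the class score w.r.t. A
heatmap = grad_cam(A, G)         # (7, 7) saliency map, upsampled onto the image in practice
```

The other variants in the library (Grad-CAM++, Score-CAM, XGrad-CAM, ...) differ mainly in how the per-channel weights are computed, not in this combination step.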
Aspect Based Sentiment Analysis
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
Stars: ✭ 206 (-92.72%)
Mutual labels:  interpretability
61-88 of 88 similar projects