
79 open source projects that are alternatives to or similar to Deeplift

adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (-88.67%)
Mutual labels:  interpretability
Pyss3
A Python package implementing a new machine learning model for text classification with visualization tools for Explainable AI
Stars: ✭ 191 (-62.7%)
Mutual labels:  interpretability
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+68.16%)
Mutual labels:  interpretability
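A minimal sketch of the two-line usage this tagline advertises, assuming the library's SequenceClassificationExplainer entry point together with a standard 🤗 sequence-classification checkpoint; the model name below is just an illustrative choice, not part of the original listing.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

# Any sequence-classification checkpoint works; this SST-2 model is an example.
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The advertised "2 lines": wrap the model, then call the explainer on raw text.
cls_explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = cls_explainer("The movie was a pleasant surprise.")

print(word_attributions)  # list of (token, attribution score) pairs
```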
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-91.99%)
Mutual labels:  interpretability
Pycebox
⬛ Python Individual Conditional Expectation Plot Toolbox
Stars: ✭ 101 (-80.27%)
Mutual labels:  interpretability
Visualizing-CNNs-for-monocular-depth-estimation
official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"
Stars: ✭ 120 (-76.56%)
Mutual labels:  interpretability
Captum
Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+452.73%)
Mutual labels:  interpretability
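Since this page is about DeepLIFT, it is worth noting that Captum ships a DeepLift attribution class. A minimal sketch with a toy model; the network, inputs, and baseline below are placeholders, not anything prescribed by Captum.

```python
import torch
import torch.nn as nn
from captum.attr import DeepLift

# Toy classifier standing in for any torch.nn.Module.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

inputs = torch.randn(4, 10)           # batch of 4 examples, 10 features each
baselines = torch.zeros_like(inputs)  # all-zeros reference input

dl = DeepLift(model)
# Attribute the class-0 score back to the input features.
attributions = dl.attribute(inputs, baselines=baselines, target=0)
print(attributions.shape)  # torch.Size([4, 10])
```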
shapeshop
Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-96.87%)
Mutual labels:  interpretability
Lrp for lstm
Layer-wise Relevance Propagation (LRP) for LSTMs
Stars: ✭ 152 (-70.31%)
Mutual labels:  interpretability
mmn
Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-92.38%)
Mutual labels:  interpretability
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-94.53%)
Mutual labels:  interpretability
Cxplain
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (-83.59%)
Mutual labels:  interpretability
zennit
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Stars: ✭ 57 (-88.87%)
Mutual labels:  interpretability
ALPS 2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (-89.26%)
Mutual labels:  interpretability
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-91.02%)
Mutual labels:  interpretability
Torch Cam
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
Stars: ✭ 249 (-51.37%)
Mutual labels:  interpretability
partial dependence
Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-95.51%)
Mutual labels:  interpretability
Awesome Explainable Ai
A collection of research materials on explainable AI/ML
Stars: ✭ 186 (-63.67%)
Mutual labels:  interpretability
Awesome deep learning interpretability
Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with code)
Stars: ✭ 401 (-21.68%)
Mutual labels:  interpretability
Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+2813.48%)
Mutual labels:  interpretability
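A minimal sketch of SHAP on a tree ensemble, where Shapley values can be computed efficiently; the synthetic data and random forest below are placeholders for a real dataset and model.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic tabular data standing in for a real dataset.
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# shap.summary_plot(shap_values, X)  # global view of per-feature contributions
```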
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-96.68%)
Mutual labels:  interpretability
Awesome Fairness In Ai
A curated list of awesome Fairness in AI resources
Stars: ✭ 144 (-71.87%)
Mutual labels:  interpretability
neuron-importance-zsl
[ECCV 2018] code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance
Stars: ✭ 56 (-89.06%)
Mutual labels:  interpretability
Awesome Computer Vision
Awesome Resources for Advanced Computer Vision Topics
Stars: ✭ 92 (-82.03%)
Mutual labels:  interpretability
meg
Molecular Explanation Generator
Stars: ✭ 14 (-97.27%)
Mutual labels:  interpretability
adversarial-robustness-public
Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (-90.43%)
Mutual labels:  interpretability
Reverse Engineering Neural Networks
A collection of tools for reverse engineering neural networks.
Stars: ✭ 78 (-84.77%)
Mutual labels:  interpretability
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (-69.53%)
Mutual labels:  interpretability
EgoCNN
Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-96.87%)
Mutual labels:  interpretability
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-95.7%)
Mutual labels:  interpretability
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop ML, and Visual Analytics.
Stars: ✭ 51 (-90.04%)
Mutual labels:  interpretability
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (-90.82%)
Mutual labels:  interpretability
kernel-mod
NeurIPS 2018. Linear-time model comparison tests.
Stars: ✭ 17 (-96.68%)
Mutual labels:  interpretability
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Stars: ✭ 411 (-19.73%)
Mutual labels:  interpretability
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (-75.39%)
Mutual labels:  interpretability
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (-78.52%)
Mutual labels:  interpretability
Aspect Based Sentiment Analysis
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
Stars: ✭ 206 (-59.77%)
Mutual labels:  interpretability
SPINE
Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Stars: ✭ 44 (-91.41%)
Mutual labels:  interpretability
Explainx
Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
Stars: ✭ 196 (-61.72%)
Mutual labels:  interpretability
free-lunch-saliency
Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-97.07%)
Mutual labels:  interpretability
Imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (-62.11%)
Mutual labels:  interpretability
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+748.44%)
Mutual labels:  interpretability
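A minimal sketch of Lucid's feature-visualization workflow, following its introductory tutorial; note that Lucid targets TensorFlow 1.x graphs, and the layer/channel string below is just an example objective.

```python
# Requires TensorFlow 1.x, which Lucid is built on.
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image that maximally activates one channel of a layer
# ("feature visualization").
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```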
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+369.53%)
Mutual labels:  interpretability
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (-82.23%)
Mutual labels:  interpretability
Modelstudio
📍 Interactive Studio for Explanatory Model Analysis
Stars: ✭ 163 (-68.16%)
Mutual labels:  interpretability
knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (-85.94%)
Mutual labels:  interpretability
Stellargraph
StellarGraph - Machine Learning on Graphs
Stars: ✭ 2,235 (+336.52%)
Mutual labels:  interpretability
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-97.07%)
Mutual labels:  interpretability
Visual Attribution
PyTorch implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (-75.2%)
Mutual labels:  interpretability
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+750%)
Mutual labels:  interpretability
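A minimal sketch of fitting one of InterpretML's glassbox models, an Explainable Boosting Machine, on a standard scikit-learn dataset; the dataset choice here is arbitrary.

```python
from sklearn.datasets import load_breast_cancer
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Glassbox model: per-feature shape functions stay directly inspectable.
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Interactive view of global feature importances and shape functions.
show(ebm.explain_global())
```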
Breakdown
Model-agnostic breakDown plots
Stars: ✭ 93 (-81.84%)
Mutual labels:  interpretability
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (-78.52%)
Mutual labels:  interpretability
Interpretability By Parts
Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Stars: ✭ 88 (-82.81%)
Mutual labels:  interpretability
summit
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (-81.45%)
Mutual labels:  interpretability
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Stars: ✭ 33 (-93.55%)
Mutual labels:  interpretability
Tcav
Code for the TCAV ML interpretability project
Stars: ✭ 442 (-13.67%)
Mutual labels:  interpretability
Mli Resources
H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (-16.41%)
Mutual labels:  interpretability
Facet
Human-explainable AI.
Stars: ✭ 269 (-47.46%)
Mutual labels:  interpretability
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (-74.8%)
Mutual labels:  interpretability
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (-5.47%)
Mutual labels:  interpretability
1-60 of 79 similar projects