summit
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (-96.64%)
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (-95.44%)
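A minimal usage sketch in the spirit of the sage README (the pip package is sage-importance); the synthetic data, model, and loss string are illustrative assumptions, so verify call names against the current docs.

```python
# Hedged sketch of global SAGE feature importance; data and model are
# illustrative, and the API calls follow the project README as assumptions.
import sage
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=1000, n_features=8, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Marginalize held-out features with a background sample, then estimate
# SAGE values (global Shapley importances) by permutation sampling.
imputer = sage.MarginalImputer(model, X[:128])
estimator = sage.PermutationEstimator(imputer, "mse")
sage_values = estimator(X, y)
sage_values.plot([f"x{i}" for i in range(8)])  # bar plot of importances
```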
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (-94.49%)
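A hedged sketch of training and inspecting a model with YDF's Python API (pip install ydf); the file names and "label" column are hypothetical, and the call names are assumptions to check against the YDF documentation.

```python
# Hypothetical end-to-end flow with the ydf package; verify the learner
# and method names against the current YDF docs before relying on them.
import pandas as pd
import ydf

train = pd.read_csv("train.csv")  # hypothetical dataset with a "label" column
test = pd.read_csv("test.csv")

model = ydf.GradientBoostedTreesLearner(label="label").train(train)
print(model.predict(test))  # predictions on the test set
model.analyze(test)         # interpretation report: variable importances, PDPs
```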
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-98.34%)
dominance-analysis
This package can be used for dominance analysis or Shapley Value Regression to find the relative importance of predictors in a given dataset. It can also be used for key driver analysis or marginal resource allocation models.
Stars: ✭ 111 (-96.08%)
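Not this package's API: a from-scratch sketch of the Shapley Value Regression idea, which scores each predictor by its average marginal contribution to R² over all orderings of the predictors.

```python
# From-scratch illustration of Shapley Value Regression: average each
# predictor's marginal R^2 gain over every permutation of the predictors.
from itertools import permutations
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y, cols):
    if not cols:
        return 0.0
    cols = list(cols)
    return LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)

def shapley_r2(X, y):
    p = X.shape[1]
    contrib = np.zeros(p)
    orders = list(permutations(range(p)))
    for order in orders:
        used = []
        for j in order:
            before = r2(X, y, used)  # fit without predictor j
            used.append(j)
            contrib[j] += r2(X, y, used) - before  # marginal R^2 gain
    return contrib / len(orders)  # contributions sum to the full-model R^2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(size=200)
print(shapley_r2(X, y))  # relative importance of each predictor
```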
partial dependence
Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-99.19%)
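This package's own API isn't shown here; as a reference point for the underlying technique, here is how scikit-learn computes and plots partial dependence.

```python
# Partial dependence via scikit-learn (not this package's API): the
# average model response as the chosen features vary, marginalizing the rest.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_friedman1(n_samples=500, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# One-way partial dependence plots for features 0 and 1.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
```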
free-lunch-saliency
Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-99.47%)
ShapML.jl
A Julia package for interpretable machine learning with stochastic Shapley values
Stars: ✭ 63 (-97.77%)
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (-69.58%)
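The two lines in question, per the project's README; the sentiment checkpoint is an illustrative choice and downloads a pretrained model.

```python
# Token-level attributions for a text classifier with transformers-interpret.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# Line 1: wrap the model; line 2: attribute a prediction to input tokens.
explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("This movie was a delight to watch.")
print(word_attributions)  # list of (token, attribution score) pairs
```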
mmn
Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-98.62%)
meg
Molecular Explanation Generator
Stars: ✭ 14 (-99.51%)
Deep XF
Package for building explainable forecasting and nowcasting models with state-of-the-art deep neural networks and a dynamic factor model on time-series data sets with a single line of code. Also provides utilities for time-series signal similarity matching and for removing noise from time-series signals.
Stars: ✭ 83 (-97.07%)
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Stars: ✭ 33 (-98.83%)
Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (-82.9%)
adversarial-robustness-public
Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (-98.27%)
ArenaR
Data generator for Arena, an interactive XAI dashboard
Stars: ✭ 28 (-99.01%)
EgoCNN
Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-99.43%)
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-98.55%)
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-98.2%)
ALPS 2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (-98.06%)
kernel-mod
Linear-time model comparison tests (NeurIPS 2018).
Stars: ✭ 17 (-99.4%)
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (-97.95%)
isarn-sketches-spark
Routines and data structures for using isarn-sketches idiomatically in Apache Spark
Stars: ✭ 28 (-99.01%)
self critical vqa
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (-98.62%)
Torch Cam
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
Stars: ✭ 249 (-91.2%)
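A short sketch following the torchcam README (pip install torchcam); the random input tensor stands in for a preprocessed image, and the default hook placement is an assumption to check for custom architectures.

```python
# Class activation map for the top predicted class with torchcam.
import torch
from torchvision.models import resnet18
from torchcam.methods import SmoothGradCAMpp

model = resnet18(weights="IMAGENET1K_V1").eval()
cam_extractor = SmoothGradCAMpp(model)  # hooks the last conv layer by default

x = torch.rand(1, 3, 224, 224)  # stand-in for a normalized input image
out = model(x)                  # forward pass records activations
# Retrieve the CAM for the highest-scoring class.
cams = cam_extractor(out.squeeze(0).argmax().item(), out)
print(cams[0].shape)            # coarse spatial importance map
```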