- Visual Attribution (✭ 127): PyTorch implementation of recent visual attribution methods for model interpretability.
- Pycebox (✭ 101): ⬛ Python Individual Conditional Expectation (ICE) plot toolbox.
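Individual Conditional Expectation plots like Pycebox's work by varying one feature over a grid while holding each observation's other features fixed, and recording the model's prediction at each grid point. Below is a minimal pure-Python sketch of that idea, not the Pycebox API; the `model`, rows, and feature names are hypothetical.

```python
# ICE sketch: one prediction curve per observation, sweeping a single feature.
# `model`, `rows`, and the feature names are made up for illustration.

def ice_curves(model, rows, feature, grid):
    """curves[i][j] = model(row i with `feature` replaced by grid[j])."""
    curves = []
    for row in rows:
        curve = []
        for value in grid:
            modified = dict(row)        # copy so the original row is untouched
            modified[feature] = value   # sweep only the feature of interest
            curve.append(model(modified))
        curves.append(curve)
    return curves

# Toy model: linear in x, with a per-row offset coming from z.
model = lambda r: 2.0 * r["x"] + r["z"]
rows = [{"x": 0.0, "z": 1.0}, {"x": 5.0, "z": -1.0}]
grid = [0.0, 1.0, 2.0]
print(ice_curves(model, rows, "x", grid))
# → [[1.0, 3.0, 5.0], [-1.0, 1.0, 3.0]]
```

Each curve has the same slope in `x`; the vertical offsets expose how the held-fixed feature `z` shifts individual predictions, which is exactly what ICE plots visualize.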
- Breakdown (✭ 93): Model-agnostic breakDown plots.
- Interpretability By Parts (✭ 88): Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral).
- Cxplain (✭ 84): Causal Explanation (CXPlain), a method for explaining the predictions of any machine-learning model.
- Cnn Interpretability (✭ 68): 🏥 Visualizing convolutional networks for MRI-based diagnosis of Alzheimer's disease.
- Athena (✭ 57): Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
- Text nn (✭ 55): Text classification models. Used as a submodule in other projects.
- Trelawney (✭ 55): General interpretability package.
- Contrastiveexplanation (✭ 36): Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University.
- Symbolic Metamodeling (✭ 29): Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
- Alibi (✭ 924): Algorithms for monitoring and explaining machine learning models.
- Grad Cam (✭ 891): [ICCV 2017] Torch code for Grad-CAM.
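The core Grad-CAM computation behind the two Grad-CAM repos above is small: average the gradients of the target score over each feature map's spatial dimensions to get per-channel weights, take the weighted sum of the feature maps, and apply ReLU. The NumPy sketch below shows only that final step on made-up arrays; in real use the activations and gradients come from a CNN via autograd.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from conv activations and their gradients.

    feature_maps, gradients: arrays of shape (channels, H, W).
    """
    weights = gradients.mean(axis=(1, 2))              # alpha_k: spatial GAP of gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A^k -> (H, W)
    return np.maximum(cam, 0.0)                        # ReLU keeps positive evidence only

# Hypothetical stand-ins for conv activations A and gradients dScore/dA.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 7, 7))
G = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(A, G)
print(heatmap.shape)  # → (7, 7)
```

The resulting (H, W) map is then typically upsampled to the input resolution and overlaid on the image; the CAM variants listed below (Grad-CAM++, Score-CAM, etc.) differ mainly in how the channel weights are computed.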
- Dalex (✭ 795): moDel Agnostic Language for Exploration and eXplanation.
- Pytorch Grad Cam (✭ 3,814): Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM.