Shap: A game-theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+41336.11%)
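Shap's game-theoretic foundation is the Shapley value: a feature's attribution is its average marginal contribution over all coalitions of the other features. A minimal pure-stdlib sketch of the exact computation (illustrative only; the shap library uses far more efficient approximations such as Kernel SHAP and Tree SHAP):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a coalition game.

    players: list of hashable player (feature) names.
    value:   function mapping a set of players to a number.
    Returns a dict of each player's Shapley value.
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        # Sum p's marginal contribution over every coalition S that
        # excludes p, weighted by |S|! * (n - |S| - 1)! / n!.
        for r in range(n):
            for S in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value(set(S) | {p}) - value(set(S)))
        phi[p] = total
    return phi

# Toy additive "model": feature a contributes 3, feature b contributes 1.
game = lambda S: (3 if "a" in S else 0) + (1 if "b" in S else 0)
print(shapley_values(["a", "b"], game))  # → {'a': 3.0, 'b': 1.0}
```

For an additive game the Shapley value recovers each feature's own contribution exactly, and the values always sum to `value(all) - value(empty)` (the efficiency property SHAP relies on).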
Modelstudio: 📍 Interactive Studio for Explanatory Model Analysis
Stars: ✭ 163 (+352.78%)
Lrp for lstm: Layer-wise Relevance Propagation (LRP) for LSTMs
Stars: ✭ 152 (+322.22%)
Stellargraph: StellarGraph - Machine Learning on Graphs
Stars: ✭ 2,235 (+6108.33%)
Visual Attribution: Pytorch implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (+252.78%)
Pycebox: ⬛ Python Individual Conditional Expectation Plot Toolbox
Stars: ✭ 101 (+180.56%)
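An Individual Conditional Expectation (ICE) plot, the technique pycebox implements, traces one prediction curve per instance by sweeping a single feature over a grid while holding the others fixed. A minimal model-agnostic sketch (function and parameter names here are illustrative, not pycebox's actual API):

```python
def ice_curves(model, rows, feature, grid):
    """Compute ICE curves: one list of predictions per row.

    model:   callable taking a dict of feature values, returning a number.
    rows:    list of dicts, the observed instances.
    feature: name of the feature to sweep.
    grid:    values to substitute for that feature.
    """
    curves = []
    for row in rows:
        curve = []
        for v in grid:
            x = dict(row)      # copy so the original row is untouched
            x[feature] = v     # override only the swept feature
            curve.append(model(x))
        curves.append(curve)
    return curves

# Toy linear model: prediction = 2*a + b.
model = lambda x: 2 * x["a"] + x["b"]
rows = [{"a": 0, "b": 1}, {"a": 0, "b": 2}]
print(ice_curves(model, rows, "a", [0, 1, 2]))  # → [[1, 3, 5], [2, 4, 6]]
```

Averaging the curves pointwise yields the classical partial dependence plot; plotting them individually, as pycebox does, exposes heterogeneity that the average hides.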
Breakdown: Model agnostic breakDown plots
Stars: ✭ 93 (+158.33%)
Interpretability By Parts: Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Stars: ✭ 88 (+144.44%)
Cxplain: Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (+133.33%)
Cnn Interpretability: 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease
Stars: ✭ 68 (+88.89%)
Athena: Automatic equation building and curve fitting. Runs on Tensorflow. Built for academia and research.
Stars: ✭ 57 (+58.33%)
Text nn: Text classification models. Used as a submodule in other projects.
Stars: ✭ 55 (+52.78%)
Trelawney: General interpretability package
Stars: ✭ 55 (+52.78%)
Pytorch Grad Cam: Many Class Activation Map methods implemented in Pytorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM
Stars: ✭ 3,814 (+10494.44%)