Cxplain - Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (-84.81%)
Cnn Interpretability - 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease
Stars: ✭ 68 (-87.7%)
Athena - Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ✭ 57 (-89.69%)
Text nn - Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (-90.05%)
Trelawney - General interpretability package
Stars: ✭ 55 (-90.05%)
Contrastiveexplanation - Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-93.49%)
Symbolic Metamodeling - Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-94.76%)
Alibi - Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+67.09%)
Grad Cam - [ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+61.12%)
Dalex - moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+43.76%)
Tf Explain - Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (+41.05%)
Ad examples - A collection of anomaly detection methods (iid/point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
Stars: ✭ 641 (+15.91%)
Xai - XAI, an eXplainability toolbox for machine learning
Stars: ✭ 596 (+7.78%)
Flashtorch - Visualization toolkit for neural networks in PyTorch
Stars: ✭ 561 (+1.45%)
Pytorch Grad Cam - Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM
Stars: ✭ 3,814 (+589.69%)