diabetes use case - Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-56.86%)
interpretable-ml - Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-66.67%)
mllp - The code of the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-70.59%)
Interpret - Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+8433.33%)
fastshap - Fast approximate Shapley values in R
Stars: ✭ 79 (+54.9%)
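To illustrate the idea behind packages like fastshap and ShapML.jl (this is a concept sketch in pure Python, not either package's API): Shapley values can be approximated by sampling random feature orderings and averaging each feature's marginal contribution as it is switched from a baseline value to the instance's value.

```python
import random

def shapley_monte_carlo(model, x, baseline, n_samples=2000, seed=0):
    """Approximate Shapley values by sampling random feature orderings.

    model    -- callable taking a list of feature values, returning a number
    x        -- the instance to explain
    baseline -- reference values used for "absent" features
    """
    rng = random.Random(seed)
    n = len(x)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        z = list(baseline)            # start from the reference point
        prev = model(z)
        for j in order:               # switch features on one at a time
            z[j] = x[j]
            cur = model(z)
            phi[j] += cur - prev      # marginal contribution of feature j
            prev = cur
    return [p / n_samples for p in phi]

# Toy linear model: for f(z) = 2*z0 + 3*z1 + 1, the Shapley values relative
# to a zero baseline are exactly the weights times the feature values.
model = lambda z: 2.0 * z[0] + 3.0 * z[1] + 1.0
vals = shapley_monte_carlo(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
```

For a linear model every ordering yields the same marginal contributions, so the estimate is exact; real implementations add antithetic sampling and batching to make this fast.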
root painter - RootPainter: Deep Learning Segmentation of Biological Images with Corrective Annotation
Stars: ✭ 28 (-45.1%)
ml-fairness-framework - FairPut: Machine Learning Fairness Framework with LightGBM. Explainability, Robustness, Fairness (by @firmai)
Stars: ✭ 59 (+15.69%)
ArenaR - Data generator for Arena, an interactive XAI dashboard
Stars: ✭ 28 (-45.1%)
ProtoTree - ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-7.84%)
mmn - Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-23.53%)
ShapML.jl - A Julia package for interpretable machine learning with stochastic Shapley values
Stars: ✭ 63 (+23.53%)
dlime experiments - A deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experiments on three medical datasets show the superiority of Deterministic LIME (DLIME).
Stars: ✭ 21 (-58.82%)
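The core idea that DLIME makes deterministic can be sketched as follows (a minimal pure-Python illustration of LIME's mechanism, not the dlime codebase): perturb the instance, weight the samples by proximity, and fit a weighted linear surrogate whose slope serves as a local explanation.

```python
import math
import random

def local_linear_surrogate(f, x0, n=1000, width=0.1, kernel_width=0.2, seed=0):
    """Fit a locally weighted linear surrogate y = a + b*x around x0
    (LIME-style, one-dimensional for clarity)."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0.0, width) for _ in range(n)]   # perturbed samples
    ys = [f(x) for x in xs]                               # blackbox predictions
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2)  # proximity weights
          for x in xs]
    # Closed-form weighted least squares
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    b = cov / var
    a = my - b * mx
    return a, b

# Explain f(x) = x**2 at x0 = 1.0: the true local slope is 2*x0 = 2.0,
# so the surrogate slope should land close to 2.0.
a, b = local_linear_surrogate(lambda x: x * x, x0=1.0)
```

Because standard LIME draws these perturbations randomly, its explanations vary between runs; DLIME replaces the random sampling with a deterministic selection of neighbours (via hierarchical clustering and KNN), which is what makes its explanations reproducible.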
auditor - Model verification, validation, and error analysis
Stars: ✭ 56 (+9.8%)
zennit - A high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (+11.76%)
concept-based-xai - Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI
Stars: ✭ 41 (-19.61%)
summit - 🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (+86.27%)
adaptive-wavelets - Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+13.73%)
Visual Attribution - PyTorch implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (+149.02%)
Pycebox - ⬛ Python Individual Conditional Expectation plot toolbox
Stars: ✭ 101 (+98.04%)
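The Individual Conditional Expectation (ICE) plots that Pycebox draws are conceptually simple; as a sketch (pure Python, not Pycebox's API), each curve is obtained by varying one feature over a grid for a single row while holding all other features fixed:

```python
def ice_curves(model, rows, feature, grid):
    """Individual Conditional Expectation: for each row, vary `feature`
    over `grid` while holding the remaining features fixed, and record
    the model's prediction at each grid point."""
    curves = []
    for row in rows:
        curve = []
        for v in grid:
            z = dict(row)          # copy so the original row is untouched
            z[feature] = v
            curve.append(model(z))
        curves.append(curve)
    return curves

# Toy model with an interaction: the effect of "x" depends on "group".
model = lambda r: r["x"] * (2.0 if r["group"] else -1.0)
rows = [{"x": 0.5, "group": True}, {"x": 0.5, "group": False}]
curves = ice_curves(model, rows, "x", grid=[0.0, 1.0])
# One curve rises (slope +2) and the other falls (slope -1): unlike a
# partial dependence plot, which averages the curves, ICE exposes the
# interaction between "x" and "group".
```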
Breakdown - Model-agnostic breakDown plots
Stars: ✭ 93 (+82.35%)
Interpretability By Parts - Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Stars: ✭ 88 (+72.55%)
iris - Semi-automatic tool for manual segmentation of multi-spectral and geo-spatial imagery.
Stars: ✭ 87 (+70.59%)
Fruit-API - A universal deep reinforcement learning framework
Stars: ✭ 61 (+19.61%)
thermostat - Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+147.06%)
DataCLUE - A data-centric NLP benchmark and toolkit
Stars: ✭ 133 (+160.78%)
Cxplain - Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (+64.71%)
3D-GuidedGradCAM-for-Medical-Imaging - Implementation of Guided Grad-CAM for 3D medical imaging with NIfTI files in TensorFlow 2.0. Other input formats can be used by editing the input to the Guided Grad-CAM model.
Stars: ✭ 60 (+17.65%)
Torch Cam - Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
Stars: ✭ 249 (+388.24%)
Cnn Interpretability - 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease
Stars: ✭ 68 (+33.33%)
Athena - Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ✭ 57 (+11.76%)
patzilla - PatZilla is a modular patent information research platform and data integration toolkit with a modern user interface and access to multiple data sources.
Stars: ✭ 71 (+39.22%)
Text nn - Text classification models. Used as a submodule in other projects.
Stars: ✭ 55 (+7.84%)
Captum - Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+5449.02%)
Trelawney - General interpretability package
Stars: ✭ 55 (+7.84%)
Contrastiveexplanation - Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-29.41%)
expmrc - ExpMRC: Explainability Evaluation for Machine Reading Comprehension
Stars: ✭ 58 (+13.73%)
GNNLens2 - Visualization tool for graph neural networks
Stars: ✭ 155 (+203.92%)
XAIatERUM2020 - Workshop: explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (+1.96%)
Explainx - Explainable AI framework for data scientists. Explain and debug any blackbox machine learning model with a single line of code.
Stars: ✭ 196 (+284.31%)
Symbolic Metamodeling - Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-43.14%)
Alibi - Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+1711.76%)
Grad Cam - [ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+1647.06%)
Dalex - moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+1458.82%)
mindsdb native - Machine learning in one line of code
Stars: ✭ 34 (-33.33%)
Imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (+280.39%)
Tf Explain - Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (+1429.41%)
Ad examples - A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes the incorporation of label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
Stars: ✭ 641 (+1156.86%)
Pyss3 - A Python package implementing a new machine learning model for text classification, with visualization tools for explainable AI
Stars: ✭ 191 (+274.51%)