zennit: Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (-97.99%)
interpretable-ml: Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-99.4%)
Interpret: Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+53.78%)
mllp: Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-99.47%)
thermostat: Collection of NLP model explanations and accompanying analysis tools.
Stars: ✭ 126 (-95.55%)
hierarchical-dnn-interpretations: Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019).
Stars: ✭ 110 (-96.11%)
diabetes use case: Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-99.22%)
deep-explanation-penalization: Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584).
Stars: ✭ 110 (-96.11%)
Pytorch Grad Cam: Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM.
Stars: ✭ 3,814 (+34.77%)
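The Grad-CAM recipe these CAM libraries implement reduces to a few lines once a conv layer's activations and the gradients of the target score with respect to them are in hand: global-average-pool the gradients into per-channel weights, take the weighted sum of the activation maps, and apply ReLU. A minimal pure-Python sketch of that core step (the real libraries operate on live PyTorch tensors via hooks; the nested-list layout and names here are purely illustrative):

```python
def grad_cam(activations, gradients):
    """Core Grad-CAM step on K feature maps of size H x W (nested lists).

    activations: maps A_k from a convolutional layer
    gradients:   dScore/dA_k for the class being explained
    """
    K = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # alpha_k: global-average-pool each gradient map into a channel weight
    weights = [sum(sum(row) for row in gradients[k]) / (H * W) for k in range(K)]
    # weighted sum over channels, then ReLU to keep only positive evidence
    return [[max(0.0, sum(weights[k] * activations[k][h][w] for k in range(K)))
             for w in range(W)] for h in range(H)]

# Toy example: two 2x2 maps with opposite-sign pooled gradients
acts = [[[1.0, 0.0], [0.0, 1.0]],
        [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],     # alpha_0 = 1
         [[-1.0, -1.0], [-1.0, -1.0]]] # alpha_1 = -1
cam = grad_cam(acts, grads)
# → [[1.0, 0.0], [0.0, 1.0]]: negative evidence is clipped by the ReLU
```

The ReLU is what distinguishes Grad-CAM from a plain gradient-weighted sum: only locations that push the class score up survive into the heatmap.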
Flashtorch: Visualization toolkit for neural networks in PyTorch.
Stars: ✭ 561 (-80.18%)
Interpretability By Parts: Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral).
Stars: ✭ 88 (-96.89%)
Lrp for lstm: Layer-wise Relevance Propagation (LRP) for LSTMs.
Stars: ✭ 152 (-94.63%)
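The LRP idea behind repos like this one can be sketched for a single dense layer with the epsilon rule: each output's relevance is redistributed to the inputs in proportion to their contribution to the pre-activation, with a small stabilizer in the denominator. (LRP for LSTMs adds dedicated rules for gates and multiplicative interactions; this toy sketch and its names are illustrative, not the repo's code.)

```python
def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Redistribute output relevance R_out back to the inputs through
    one linear layer z_j = sum_i a[i] * W[i][j], using the epsilon rule:
        R_i = sum_j a[i] * W[i][j] / (z_j + eps * sign(z_j)) * R_out[j]
    """
    n_in, n_out = len(W), len(W[0])
    z = [sum(a[i] * W[i][j] for i in range(n_in)) for j in range(n_out)]
    R_in = [0.0] * n_in
    for j in range(n_out):
        denom = z[j] + eps * (1.0 if z[j] >= 0 else -1.0)
        for i in range(n_in):
            R_in[i] += a[i] * W[i][j] / denom * R_out[j]
    return R_in

# Two inputs feeding one output neuron: z = 1*2 + 3*1 = 5
a = [1.0, 3.0]
W = [[2.0], [1.0]]
R = lrp_epsilon(a, W, R_out=[5.0])
# contributions split as 2/5 and 3/5 of the relevance: ≈ [2.0, 3.0]
```

Note that (up to the epsilon stabilizer) the total relevance is conserved: the inputs' relevances sum back to the 5.0 assigned to the output, which is the defining property of LRP.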
Interpretable machine learning with python: Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (-81.27%)
Tcav: Code for the TCAV ML interpretability project.
Stars: ✭ 442 (-84.38%)
Mli Resources: H2O.ai Machine Learning Interpretability Resources.
Stars: ✭ 428 (-84.88%)
Facet: Human-explainable AI.
Stars: ✭ 269 (-90.49%)
removal-explanations: A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-98.37%)
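The core of a removal-based explanation is easy to sketch: score the model on the full input, then re-score it with each feature replaced by a baseline value and attribute the resulting drop to that feature. A minimal hypothetical version (the function name, baseline convention, and toy model are illustrative, not this library's API):

```python
def removal_explanation(predict, x, baseline):
    """Attribute a prediction by removing one feature at a time:
    attribution_i = f(x) - f(x with feature i set to its baseline value)."""
    full = predict(x)
    attrs = []
    for i in range(len(x)):
        x_removed = list(x)
        x_removed[i] = baseline[i]   # "remove" feature i by imputing the baseline
        attrs.append(full - predict(x_removed))
    return attrs

# Hypothetical linear model f(x) = 2*x0 + 3*x1, baseline of zeros
f = lambda x: 2 * x[0] + 3 * x[1]
attrs = removal_explanation(f, [1.0, 1.0], baseline=[0.0, 0.0])
# → [2.0, 3.0]: each coefficient recovered as that feature's attribution
```

Many named methods (occlusion, LIME, Shapley-value explanations) differ mainly in which subsets of features they remove and how the resulting score changes are aggregated.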
WhiteBox-Part1: In this part, I introduce and experiment with ways to interpret and evaluate models in the image domain. (PyTorch)
Stars: ✭ 34 (-98.8%)
Ad examples: A collection of anomaly detection methods (iid/point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors; includes adversarial attacks with a Graph Convolutional Network.
Stars: ✭ 641 (-77.35%)
Xai: XAI - An eXplainability toolbox for machine learning.
Stars: ✭ 596 (-78.94%)
Modelstudio: 📍 Interactive Studio for Explanatory Model Analysis.
Stars: ✭ 163 (-94.24%)
Xai resources: Interesting resources related to XAI (Explainable Artificial Intelligence).
Stars: ✭ 553 (-80.46%)
Cxplain: Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (-97.03%)
Deeplift: Public-facing DeepLIFT repo.
Stars: ✭ 512 (-81.91%)
Pyss3: A Python package implementing a new machine learning model for text classification, with visualization tools for Explainable AI.
Stars: ✭ 191 (-93.25%)
Lucid: A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+53.5%)
Cnn Interpretability: 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease.
Stars: ✭ 68 (-97.6%)
Neural Backed Decision Trees: Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, and Imagenet.
Stars: ✭ 411 (-85.48%)
Stellargraph: StellarGraph - Machine Learning on Graphs.
Stars: ✭ 2,235 (-21.02%)
Trelawney: General Interpretability Package.
Stars: ✭ 55 (-98.06%)
knowledge-neurons: A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (-97.46%)
Athena: Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ✭ 57 (-97.99%)
caltech birds: A set of notebooks guiding the process of fine-grained image classification of bird species, using PyTorch-based deep neural networks.
Stars: ✭ 29 (-98.98%)
SPINE: Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018.
Stars: ✭ 44 (-98.45%)
Text nn: Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (-98.06%)
shapeshop: Towards Understanding Deep Learning Representations via Interactive Experimentation.
Stars: ✭ 16 (-99.43%)
Gandissect: PyTorch-based tools for visualizing and understanding the neurons of a GAN. https://gandissect.csail.mit.edu/
Stars: ✭ 1,700 (-39.93%)
Predictive-Maintenance-of-Aircraft-Engine: In this project I apply various predictive maintenance techniques to accurately predict the impending failure of an aircraft turbofan engine.
Stars: ✭ 48 (-98.3%)
Contrastiveexplanation: Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University.
Stars: ✭ 36 (-98.73%)
Shap: A game-theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+427.1%)
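The quantity underlying SHAP, the Shapley value, can be computed exactly for a handful of features by brute-force enumeration of coalitions; SHAP's contribution is making fast approximations of it (KernelSHAP, TreeSHAP) tractable at scale. A stdlib-only sketch of the exact definition, not SHAP's API (the value function `v` here is an invented toy):

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for n players:
        phi_i = sum over subsets S not containing i of
                |S|! * (n - |S| - 1)! / n! * (value(S | {i}) - value(S))
    Exponential in n, so only sensible for tiny feature counts."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy value function: prediction of an additive model f = 2*x0 + 2*x1
# when only the features in S are "present" (others dropped entirely)
x = [1.0, 2.0]
v = lambda S: sum(2 * x[i] for i in S)
phi = shapley_values(v, 2)
# additive model → each phi_i is exactly that feature's own term: [2.0, 4.0]
```

For additive models the Shapley values simply recover each feature's own contribution; the interesting (and expensive) cases are models with interactions, where the coalition averaging splits shared credit fairly.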
neuron-importance-zsl: [ECCV 2018] Code for "Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance".
Stars: ✭ 56 (-98.02%)
Visual Attribution: PyTorch implementation of recent visual attribution methods for model interpretability.
Stars: ✭ 127 (-95.51%)
Symbolic Metamodeling: Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-98.98%)
summit: 🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations.
Stars: ✭ 95 (-96.64%)
sage: Calculates global feature importance using Shapley values.
Stars: ✭ 129 (-95.44%)
Alibi: Algorithms for monitoring and explaining machine learning models.
Stars: ✭ 924 (-67.35%)
yggdrasil-decision-forests: A collection of state-of-the-art algorithms for the training, serving, and interpretation of Decision Forest models.
Stars: ✭ 156 (-94.49%)
Pycebox: ⬛ Python Individual Conditional Expectation Plot Toolbox.
Stars: ✭ 101 (-96.43%)
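An Individual Conditional Expectation (ICE) plot evaluates the model once per instance per grid value of one feature, yielding one curve per row; the familiar partial dependence plot is just the average of those curves. A minimal sketch with an invented toy model, not Pycebox's API:

```python
def ice_curves(predict, X, feature, grid):
    """For each row of X, sweep `feature` over `grid` while holding the
    other features fixed, and record the prediction: one curve per row."""
    curves = []
    for row in X:
        curve = []
        for v in grid:
            x = list(row)
            x[feature] = v          # counterfactually vary only this feature
            curve.append(predict(x))
        curves.append(curve)
    return curves

# Hypothetical model with an interaction: f = x0 * x1
f = lambda x: x[0] * x[1]
curves = ice_curves(f, X=[[0.0, 1.0], [0.0, 2.0]], feature=0, grid=[0.0, 1.0])
# → [[0.0, 1.0], [0.0, 2.0]]: slopes of 1 and 2 for the two instances
```

The point of plotting individual curves is visible even in this toy case: the two instances respond to x0 with different slopes, heterogeneity that averaging into a single partial dependence curve would hide.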
Grad Cam: [ICCV 2017] Torch code for Grad-CAM.
Stars: ✭ 891 (-68.52%)
ProtoTree: ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021.
Stars: ✭ 47 (-98.34%)
Explainx: Explainable AI framework for data scientists. Explain & debug any black-box machine learning model with a single line of code.
Stars: ✭ 196 (-93.07%)