mmn: Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-53.57%)
Mli Resources: H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (+409.52%)
sage: For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+53.57%)
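For orientation, a minimal sketch of the SAGE workflow as outlined in the package README; the sklearn model, dataset, and the "mse" loss are illustrative assumptions, not part of this list:

```python
# Sketch of the SAGE workflow (assumed from the package README);
# the sklearn model, dataset, and "mse" loss are illustrative.
import sage
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor().fit(X, y)

# Marginalize held-out features over a background sample, then estimate
# SAGE values (global Shapley importance) by permutation sampling.
imputer = sage.MarginalImputer(model.predict, X[:256])
estimator = sage.PermutationEstimator(imputer, "mse")
sage_values = estimator(X, y)
sage_values.plot(load_diabetes().feature_names)
```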
concept-based-xai: Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-51.19%)
Interpretable machine learning with python: Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+530.95%)
transformers-interpret: Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+925%)
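The advertised two-line usage looks roughly like this, a sketch based on the project README; the checkpoint name and input sentence are placeholders:

```python
# Sketch of the two-line usage from the transformers-interpret README.
# The checkpoint name and input sentence below are illustrative placeholders.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# The advertised two lines: build an explainer, then attribute a prediction.
cls_explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = cls_explainer("I love explainable AI.")
print(word_attributions)  # list of (token, attribution score) pairs
```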
Grad Cam: [ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+960.71%)
Transformer-MM-Explainability: [ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (+476.19%)
Facet: Human-explainable AI.
Stars: ✭ 269 (+220.24%)
neuron-importance-zsl: [ECCV 2018] Code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance
Stars: ✭ 56 (-33.33%)
adaptive-wavelets: Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (-30.95%)
Flashtorch: Visualization toolkit for neural networks in PyTorch!
Stars: ✭ 561 (+567.86%)
zennit: A high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (-32.14%)
Symbolic Metamodeling: Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-65.48%)
partial dependence: Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-72.62%)
Tcav: Code for the TCAV ML interpretability project
Stars: ✭ 442 (+426.19%)
interpretable-ml: Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-79.76%)
meg: Molecular Explanation Generator
Stars: ✭ 14 (-83.33%)
ArenaR: Data generator for Arena, an interactive XAI dashboard
Stars: ✭ 28 (-66.67%)
Tf Explain: Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (+828.57%)
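A sketch of the tf-explain pattern, using its Grad-CAM explainer; the Keras model, random batch, and class index are illustrative placeholders:

```python
# Sketch of tf-explain's Grad-CAM interface on a tf.keras model.
# The model, random batch, and class index are illustrative placeholders.
import numpy as np
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM

model = tf.keras.applications.MobileNetV2()                # any tf.keras model
images = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in batch

explainer = GradCAM()
# validation_data is an (inputs, labels) tuple; labels are unused by Grad-CAM.
grid = explainer.explain((images, None), model, class_index=281)
explainer.save(grid, ".", "grad_cam.png")  # writes the heatmap grid to disk
```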
ALPS 2021: XAI tutorial for the Explainable AI track at the ALPS 2021 winter school
Stars: ✭ 55 (-34.52%)
removal-explanations: A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-45.24%)
knowledge-neurons: A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (-14.29%)
thermostat: Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+50%)
Xai: An eXplainability toolbox for machine learning
Stars: ✭ 596 (+609.52%)
summit: 🏔️ Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (+13.1%)
Contrastiveexplanation: Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-57.14%)
yggdrasil-decision-forests: A collection of state-of-the-art algorithms for the training, serving, and interpretation of Decision Forest models.
Stars: ✭ 156 (+85.71%)
Xai resources: Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (+558.33%)
ProtoTree: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-44.05%)
Athena: Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ✭ 57 (-32.14%)
deep-explanation-penalization: Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584)
Stars: ✭ 110 (+30.95%)
Deeplift: Public-facing DeepLIFT repo
Stars: ✭ 512 (+509.52%)
free-lunch-saliency: Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-82.14%)
Alibi: Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+1000%)
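As a taste of the Alibi API, a sketch of an anchor explanation for a tabular classifier; the iris data and random forest are illustrative assumptions:

```python
# Sketch of an Alibi anchor explanation for a tabular classifier.
# The iris data and random forest are illustrative assumptions.
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier().fit(data.data, data.target)

explainer = AnchorTabular(clf.predict_proba, feature_names=data.feature_names)
explainer.fit(data.data)                      # fit the sampler on training data
explanation = explainer.explain(data.data[0])  # anchor rule for one instance
print(explanation.anchor, explanation.precision)
```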
Lucid: A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+5071.43%)
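Lucid's README quickstart is a one-call feature visualization; a sketch, assuming TensorFlow 1.x and the bundled InceptionV1 model zoo entry:

```python
# Sketch of Lucid's README quickstart (requires TensorFlow 1.x).
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

# Load a bundled GoogLeNet (InceptionV1) graph definition.
model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image to maximally activate one channel of a hidden layer.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```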
mllp: Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-82.14%)
Cnn Interpretability: 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Stars: ✭ 68 (-19.05%)
hierarchical-dnn-interpretations: Using/reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+30.95%)
Neural Backed Decision Trees: Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, and ImageNet
Stars: ✭ 411 (+389.29%)
glcapsnet: Global-Local Capsule Network (GLCapsNet), a capsule-based architecture that provides context-based eye fixation prediction for several autonomous driving scenarios while offering interpretability both globally and locally.
Stars: ✭ 33 (-60.71%)
Dalex: moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+846.43%)
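A sketch of the Dalex workflow, which wraps any fitted model in a dx.Explainer; the sklearn regressor and dataset are illustrative:

```python
# Sketch of the Dalex workflow; the sklearn regressor and data are illustrative.
import dalex as dx
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor().fit(X, y)

# Wrap any fitted model; dalex then derives model- and prediction-level views.
explainer = dx.Explainer(model, X, y)
explainer.model_parts().plot()             # permutation feature importance
explainer.predict_parts(X.iloc[0]).plot()  # break-down for one observation
```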
adversarial-robustness-public: Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (-41.67%)
Interpret: Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+5080.95%)
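A sketch of InterpretML's glassbox route, fitting an Explainable Boosting Machine and viewing its global explanation; the dataset choice is illustrative:

```python
# Sketch of InterpretML's glassbox route; the dataset choice is illustrative.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()  # an inherently interpretable model
ebm.fit(X_train, y_train)
show(ebm.explain_global())             # per-feature shapes and importances
```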
EgoCNN: Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-80.95%)
Text nn: Text classification models. Used as a submodule in other projects.
Stars: ✭ 55 (-34.52%)
xai-iml-sota: Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-39.29%)
diabetes use case: Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-73.81%)
kernel-mod: Linear-time model comparison tests (NeurIPS 2018).
Stars: ✭ 17 (-79.76%)
Ad examples: A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with a Graph Convolutional Network.
Stars: ✭ 641 (+663.1%)
SPINE: Code for SPINE (Sparse Interpretable Neural Embeddings). Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E., AAAI 2018.
Stars: ✭ 44 (-47.62%)
Trelawney: General Interpretability Package
Stars: ✭ 55 (-34.52%)
shapeshop: Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-80.95%)