Interpret - Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+28913.33%)
Mutual labels: transparency, interpretability, interpretable-ai, interpretable-ml, explainable-ai, explainable-ml, xai, interpretable-machine-learning, iml, machine-learning-interpretability, explainability, interpretml
diabetes use case - Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (+46.67%)
interpretable-ml - Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (+13.33%)
zennit - Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Stars: ✭ 57 (+280%)
ProtoTree - ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (+213.33%)
xai-iml-sota - Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (+240%)
concept-based-xai - Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (+173.33%)
fastshap - Fast approximate Shapley values in R
Stars: ✭ 79 (+426.67%)
ArenaR - Data generator for Arena, an interactive XAI dashboard
Stars: ✭ 28 (+86.67%)
ml-fairness-framework - FairPut - Machine Learning Fairness Framework with LightGBM: Explainability, Robustness, Fairness (by @firmai)
Stars: ✭ 59 (+293.33%)
adaptive-wavelets - Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+286.67%)
hierarchical-dnn-interpretations - Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+633.33%)
Captum - Model interpretability and understanding for PyTorch (see the sketch below).
Stars: ✭ 2,830 (+18766.67%)
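Captum wraps a PyTorch model in an attribution method and exposes an `attribute` call. A minimal sketch with Integrated Gradients, assuming a toy classifier (not taken from the Captum docs verbatim):

```python
# Hedged sketch: Integrated Gradients attributions with Captum on a toy model.
import torch
from captum.attr import IntegratedGradients

model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 3)
)
model.eval()

inputs = torch.randn(2, 4, requires_grad=True)   # two samples, four features
ig = IntegratedGradients(model)
# Attribute the class-0 score back to the input features.
attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)
print(attributions.shape)  # same shape as the inputs: (2, 4)
```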
dlime experiments - In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
Stars: ✭ 21 (+40%)
Transformer-MM-Explainability - [ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (+3126.67%)
CARLA - A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (+1006.67%)
deep-explanation-penalization - Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+633.33%)
responsible-ai-toolbox - This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+4000%)
GNNLens2 - Visualization tool for Graph Neural Networks
Stars: ✭ 155 (+933.33%)
ShapML.jl - A Julia package for interpretable machine learning with stochastic Shapley values
Stars: ✭ 63 (+320%)
thermostat - Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+740%)
Deep XF - Package for building explainable forecasting and nowcasting models with state-of-the-art deep neural networks and dynamic factor models on time-series datasets with a single line of code. Also provides utilities for time-series signal similarity matching and for removing noise from time-series signals.
Stars: ✭ 83 (+453.33%)
XAIatERUM2020 - Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (+246.67%)
SHAP FOLD - (Explainable AI) Learning Non-Monotonic Logic Programs From Statistical Models Using High-Utility Itemset Mining
Stars: ✭ 35 (+133.33%)
global-attribution-mapping - GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations
Stars: ✭ 18 (+20%)
Mindsdb - Predictive AI layer for existing databases.
Stars: ✭ 4,199 (+27893.33%)
Tensorwatch - Debugging, monitoring and visualization for Python Machine Learning and Data Science
Stars: ✭ 3,191 (+21173.33%)
WhiteBox-Part1 - In this part, I've introduced and experimented with ways to interpret and evaluate models in the image domain. (PyTorch)
Stars: ✭ 34 (+126.67%)
Pytorch Grad Cam - Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM (see the sketch below).
Stars: ✭ 3,814 (+25326.67%)
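A rough usage sketch of a CAM-style workflow with this library, assuming a torchvision ResNet-50 and the package's `GradCAM` interface (argument names may differ across versions):

```python
# Hedged sketch: Grad-CAM heatmap for one ImageNet class with pytorch-grad-cam.
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet50(pretrained=True).eval()
target_layers = [model.layer4[-1]]          # last conv block is a common choice
input_tensor = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image

cam = GradCAM(model=model, target_layers=target_layers)
# One 224x224 heatmap per image, here for ImageNet class 281 ("tabby cat").
grayscale_cam = cam(input_tensor=input_tensor, targets=[ClassifierOutputTarget(281)])
print(grayscale_cam.shape)  # (1, 224, 224)
```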
mindsdb server - MindsDB server allows you to consume and expose MindsDB workflows over HTTP.
Stars: ✭ 3 (-80%)
shapr - Explaining the output of machine learning models with more accurately estimated Shapley values
Stars: ✭ 95 (+533.33%)
sage - For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+760%)
Explainx - Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
Stars: ✭ 196 (+1206.67%)
transformers-interpret - Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code (see the sketch below).
Stars: ✭ 861 (+5640%)
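The "2 lines" refer to wrapping a Hugging Face model/tokenizer pair in an explainer and calling it on text. A hedged sketch, assuming a sequence-classification checkpoint (the model name here is just an example):

```python
# Hedged sketch: word-level attributions for a sentiment classifier.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("Explainable AI makes model behaviour easier to audit.")
print(word_attributions)  # list of (token, attribution score) pairs
```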
removal-explanations - A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (+206.67%)
Mli Resources - H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (+2753.33%)
expmrc - ExpMRC: Explainability Evaluation for Machine Reading Comprehension
Stars: ✭ 58 (+286.67%)
ALPS 2021 - XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (+266.67%)
self critical vqa - Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (+160%)
Shap - A game theoretic approach to explain the output of any machine learning model (see the sketch below).
Stars: ✭ 14,917 (+99346.67%)
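SHAP assigns each feature a Shapley value for each individual prediction. A minimal sketch with a tree ensemble; the dataset and model choice are arbitrary here, not taken from the SHAP docs:

```python
# Hedged sketch: Shapley-value attributions for a tree ensemble with shap.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # efficient Shapley values for tree models
shap_values = explainer.shap_values(X)   # one value per feature per prediction
shap.summary_plot(shap_values, X)        # global view of feature importance
```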
Interpretable machine learning with python - Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+3433.33%)
meg - Molecular Explanation Generator
Stars: ✭ 14 (-6.67%)
3D-GuidedGradCAM-for-Medical-Imaging - This repo contains an implementation of generating Guided Grad-CAM for 3D medical imaging using NIfTI files in TensorFlow 2.0. Different input files can be used; in that case, edit the input to the Guided Grad-CAM model.
Stars: ✭ 60 (+300%)
trillian-examples - A place to store some examples which use Trillian APIs to build things.
Stars: ✭ 116 (+673.33%)
grasp - Essential NLP & ML, short & fast pure Python code
Stars: ✭ 58 (+286.67%)
iabtcf-es - Official compliant tool suite for implementing the Transparency and Consent Framework (TCF) v2.0. The essential toolkit for CMPs.
Stars: ✭ 102 (+580%)
neuro-symbolic-sudoku-solver - ⚙️ Solving Sudoku using deep reinforcement learning in combination with powerful symbolic representations.
Stars: ✭ 60 (+300%)
rita-dsl - A Domain Specific Language (DSL) for building language patterns. These can be later compiled into spaCy patterns, pure regex, or any other format
Stars: ✭ 60 (+300%)
kernel-mod - NeurIPS 2018. Linear-time model comparison tests.
Stars: ✭ 17 (+13.33%)
EgoCNN - Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (+6.67%)
event-extraction-paper - Papers from top conferences and journals for event extraction in recent years
Stars: ✭ 54 (+260%)
mindsdb native - Machine Learning in one line of code
Stars: ✭ 34 (+126.67%)