diabetes use case - Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-75.82%)
ProtoTree - ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-48.35%)
Interpret - Fit interpretable models. Explain black-box machine learning.
Stars: ✭ 4,352 (+4682.42%)
mllp - Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization"
Stars: ✭ 15 (-83.52%)
interpretable-ml - Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models
Stars: ✭ 17 (-81.32%)
xai-iml-sota - Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics
Stars: ✭ 51 (-43.96%)
Explainx - Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
Stars: ✭ 196 (+115.38%)
adaptive-wavelets - Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (-36.26%)
Pyss3 - A Python package implementing a new machine learning model for text classification, with visualization tools for Explainable AI
Stars: ✭ 191 (+109.89%)
StellarGraph - Machine Learning on Graphs
Stars: ✭ 2,235 (+2356.04%)
ALPS 2021 - XAI tutorial for the Explainable AI track at the ALPS 2021 winter school
Stars: ✭ 55 (-39.56%)
Imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible)
Stars: ✭ 194 (+113.19%)
Transformer-MM-Explainability - [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (+431.87%)
Modelstudio - 📍 Interactive Studio for Explanatory Model Analysis
Stars: ✭ 163 (+79.12%)
XAIatERUM2020 - Workshop: explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (-42.86%)
tf retrieval baseline - A TensorFlow retrieval (space embedding) baseline. Metric learning baseline on CUB and Stanford Online Products.
Stars: ✭ 39 (-57.14%)
Visual Attribution - PyTorch implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (+39.56%)
adversarial-robustness-public - Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (-46.15%)
Breakdown - Model-agnostic breakDown plots
Stars: ✭ 93 (+2.2%)
Interpretability By Parts - Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Stars: ✭ 88 (-3.3%)
Movie Trailers SwiftUI - A simple app that shows the latest movie trailers across different genres, developed using SwiftUI
Stars: ✭ 51 (-43.96%)
Torch Cam - Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
Stars: ✭ 249 (+173.63%)
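The libraries above implement many CAM variants; the core idea of the original CAM is simple enough to sketch without them. The following is a minimal NumPy illustration of plain CAM (not Torch Cam's API, and the feature maps and weights here are made-up toy values): weight each final-layer feature map by the target class's classifier weight, sum spatially, and keep the positive evidence.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Plain CAM: weight each final-layer feature map by the class's
    classifier weight and sum, giving a coarse spatial saliency map.
    feature_maps: (C, H, W); class_weights: (C,) for one target class."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)        # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalise to [0, 1]
    return cam

# Toy example: two 2x2 feature maps; the class weights favour the first.
fm = np.array([[[1.0, 0.0], [0.0, 0.0]],
               [[0.0, 0.0], [0.0, 1.0]]])
cam = class_activation_map(fm, np.array([1.0, 0.5]))
print(cam)  # [[1.  0. ] [0.  0.5]] -- saliency peaks at the top-left cell
```

Grad-CAM and the other variants listed differ mainly in how the per-channel weights are obtained (e.g. from gradients instead of classifier weights); the weighted-sum-then-ReLU step stays the same.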
Awesome-XAI-Evaluation - Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
Stars: ✭ 57 (-37.36%)
Captum - Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+3009.89%)
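One of the attribution methods Captum is best known for is Integrated Gradients. The sketch below is not Captum's API; it is a minimal NumPy approximation of the underlying idea, using a Riemann sum over the straight-line path from a baseline to the input (the linear toy model and `grad_f` are illustrative assumptions).

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Midpoint Riemann-sum approximation of integrated gradients:
    IG_i = (x_i - baseline_i) * average of dF/dx_i along the path
    baseline + alpha * (x - baseline), alpha in (0, 1)."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# For a linear model f(x) = w . x the gradient is constant, so
# integrated gradients recovers w * (x - baseline) exactly.
w = np.array([2.0, -1.0, 0.5])
grad_f = lambda z: w
ig = integrated_gradients(grad_f, np.array([1.0, 1.0, 2.0]), np.zeros(3))
```

For real networks the gradient is not constant, and Captum computes it with autograd; the path integral and the completeness property (attributions sum to `f(x) - f(baseline)`) are the same.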
glcapsnet - Global-Local Capsule Network (GLCapsNet): a capsule-based architecture that provides context-based eye-fixation prediction for several autonomous-driving scenarios, while offering both global and local interpretability
Stars: ✭ 33 (-63.74%)
kernel-mod - Linear-time model comparison tests (NeurIPS 2018)
Stars: ✭ 17 (-81.32%)
mmn - Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-57.14%)
EgoCNN - Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-82.42%)
GNOME-Concepts - Concepts and ideas for the GNOME desktop
Stars: ✭ 13 (-85.71%)
Shap - A game-theoretic approach to explain the output of any machine learning model
Stars: ✭ 14,917 (+16292.31%)
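SHAP's attributions are Shapley values: the average marginal contribution of a feature over all orderings in which features can be revealed. The SHAP library uses efficient approximations (e.g. KernelSHAP, TreeSHAP); the sketch below is a brute-force exact computation for a tiny model, purely to show the definition (the three-feature linear model is an illustrative assumption, not SHAP code).

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at point x against a baseline,
    averaging marginal contributions over all feature orderings.
    Exponential cost -- only feasible for a handful of features."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)            # start from the baseline point
        prev = f(z)
        for i in order:
            z[i] = x[i]               # reveal feature i
            cur = f(z)
            phi[i] += cur - prev      # marginal contribution of i
            prev = cur
    return [p / len(perms) for p in phi]

# For a linear model, feature i's Shapley value is w_i * (x_i - b_i).
f = lambda z: 2.0 * z[0] + 3.0 * z[1] - 1.0 * z[2]
phi = shapley_values(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # [2.0, 3.0, -1.0]
```

Note the efficiency property that makes Shapley values attractive for explanation: the attributions sum exactly to `f(x) - f(baseline)`.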
thermostat - Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+38.46%)
Lrp for lstm - Layer-wise Relevance Propagation (LRP) for LSTMs
Stars: ✭ 152 (+67.03%)
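LRP redistributes a prediction's relevance backwards through the network, layer by layer, in proportion to each input's contribution. This is not the repository's LSTM code; it is a minimal pure-Python sketch of the epsilon-rule step for a single linear layer (the toy activations and weights are made-up values), which is the building block such implementations apply repeatedly.

```python
def lrp_linear(a, w, relevance, eps=1e-9):
    """LRP epsilon-rule for one linear layer: redistribute each output
    unit k's relevance back to the inputs j in proportion to their
    contributions a[j] * w[j][k] to the pre-activation z[k]."""
    n_in, n_out = len(a), len(relevance)
    z = [sum(a[j] * w[j][k] for j in range(n_in)) for k in range(n_out)]
    r_in = [0.0] * n_in
    for k in range(n_out):
        denom = z[k] + (eps if z[k] >= 0 else -eps)  # stabiliser
        for j in range(n_in):
            r_in[j] += a[j] * w[j][k] / denom * relevance[k]
    return r_in

a = [1.0, 2.0]                   # toy layer inputs (activations)
w = [[1.0, 0.0], [0.5, 1.0]]     # w[j][k]: weight from input j to output k
r = lrp_linear(a, w, relevance=[2.0, 2.0])
print(r)  # input 1 contributes more, so it receives more relevance
```

Up to the stabiliser `eps`, relevance is conserved: the input relevances sum to the output relevance, which is what makes the resulting heatmaps interpretable as a decomposition of the prediction.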
copycat - Modern port of Melanie Mitchell's and Douglas Hofstadter's Copycat
Stars: ✭ 84 (-7.69%)
33 Js Concepts - 📜 33 JavaScript concepts every developer should know
Stars: ✭ 45,558 (+49963.74%)
Pycebox - ⬛ Python Individual Conditional Expectation Plot Toolbox
Stars: ✭ 101 (+10.99%)
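An Individual Conditional Expectation (ICE) plot draws one curve per observation: sweep a single feature over a grid while holding that row's other features fixed, and record the model's prediction. The sketch below is not Pycebox's API, just a pure-Python illustration of the computation with a made-up toy model containing an interaction, since interactions are exactly what ICE curves reveal and an averaged partial-dependence curve hides.

```python
def ice_curves(model, rows, feature, grid):
    """For each row, sweep `feature` over `grid` with all other
    features held fixed, recording the model's prediction."""
    curves = []
    for row in rows:
        curve = []
        for v in grid:
            z = dict(row)        # copy the row, then override one feature
            z[feature] = v
            curve.append(model(z))
        curves.append(curve)
    return curves

# Toy model with an interaction: the slope in "x" flips with group "g".
model = lambda r: r["x"] * (1.0 if r["g"] == 0 else -1.0)
rows = [{"x": 0.0, "g": 0}, {"x": 0.0, "g": 1}]
curves = ice_curves(model, rows, "x", [1.0, 2.0, 3.0])
print(curves)  # [[1.0, 2.0, 3.0], [-1.0, -2.0, -3.0]]
```

Averaging the two curves would give a flat line (the partial-dependence plot), wrongly suggesting "x" has no effect; the individual curves show the two opposing slopes.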
NamingThings - Tips, tricks, advice, and practices for naming things in software/technology
Stars: ✭ 31 (-65.93%)
concepticon-data - The curation repository for the data behind Concepticon
Stars: ✭ 25 (-72.53%)
Cxplain - Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model
Stars: ✭ 84 (-7.69%)
ArenaR - Data generator for Arena, an interactive XAI dashboard
Stars: ✭ 28 (-69.23%)
Cnn Interpretability - 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Stars: ✭ 68 (-25.27%)
cefal - (Concepts-enabled) Functional Abstraction Layer for C++
Stars: ✭ 52 (-42.86%)
Athena - Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ✭ 57 (-37.36%)
hierarchical-dnn-interpretations - Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+20.88%)
Text nn - Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (-39.56%)
thread pool - Thread pool using std::* primitives from C++17, with an optional priority queue/green threading for POSIX
Stars: ✭ 74 (-18.68%)
Trelawney - General interpretability package
Stars: ✭ 55 (-39.56%)
Contrastiveexplanation - Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-60.44%)
concept-based-xai - Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI
Stars: ✭ 41 (-54.95%)
data-science-learning - 📊 All the courses, assignments, exercises, mini-projects, and books I've worked through so far while teaching myself Machine Learning and Data Science
Stars: ✭ 32 (-64.84%)
Symbolic Metamodeling - Codebase for "Demystifying Black-box Models with Symbolic Metamodels" (NeurIPS 2019)
Stars: ✭ 29 (-68.13%)
meg - Molecular Explanation Generator
Stars: ✭ 14 (-84.62%)