mllp: The code of the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-81.01%)
Interpret: Fit interpretable models. Explain black-box machine learning.
Stars: ✭ 4,352 (+5408.86%)
ml-fairness-framework: FairPut, a machine learning fairness framework with LightGBM covering explainability, robustness, and fairness (by @firmai).
Stars: ✭ 59 (-25.32%)
shapr: Explaining the output of machine learning models with more accurately estimated Shapley values.
Stars: ✭ 95 (+20.25%)
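To make the Shapley idea behind entries like shapr (and ShapML.jl below) concrete, here is a minimal exact computation in pure Python. It uses the common simplification of replacing absent features with fixed baseline values; shapr's whole point is estimating these contributions more accurately under feature dependence, so treat this only as an illustration of the definition, not of any library's API:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance x. Absent features are
    replaced by baseline values (a common simplification)."""
    n = len(x)
    phi = [0.0] * n
    def v(S):
        # Value of a coalition: predict with features outside S set to baseline.
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return predict(z)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy linear model: Shapley values recover coefficient * feature value.
model = lambda z: 2.0 * z[0] + 1.0 * z[1] + 0.5 * z[2]
phi = shapley_values(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # ≈ [2.0, 2.0, 1.5]
```

Exact enumeration is exponential in the number of features, which is why practical libraries sample coalitions or exploit model structure.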
dlime experiments: Proposes a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic LIME (DLIME).
Stars: ✭ 21 (-73.42%)
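For context on what DLIME makes deterministic, here is a bare-bones sketch of the original (random) LIME procedure, assuming NumPy is available: sample perturbations around the instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients are the explanation. All names are illustrative, not the dlime experiments API:

```python
import numpy as np

def lime_explain(predict, x, n_samples=500, width=1.0, seed=0):
    """Bare-bones LIME-style local surrogate (random variant).
    DLIME instead derives the neighbourhood deterministically
    via clustering rather than random sampling."""
    rng = np.random.default_rng(seed)
    # Randomly perturb the instance to build a local neighbourhood.
    Z = x + rng.normal(scale=0.5, size=(n_samples, len(x)))
    y = np.array([predict(z) for z in Z])
    # RBF proximity kernel: closer samples get larger weights.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add an intercept column
    sw = np.sqrt(w)[:, None]
    # Weighted least squares via rescaled ordinary least squares.
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # local feature weights (drop the intercept)

# Near x = (1, 0), f(z) = z0^2 + z1 has gradient (2, 1), which the
# surrogate's coefficients should approximate.
f = lambda z: z[0] ** 2 + z[1]
coef = lime_explain(f, np.array([1.0, 0.0]))
print(coef)  # ≈ [2.0, 1.0]
```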
xai-iml-sota: Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-35.44%)
diabetes use case: Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-72.15%)
ProtoTree: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021.
Stars: ✭ 47 (-40.51%)
responsible-ai-toolbox: This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+678.48%)
concept-based-xai: Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI.
Stars: ✭ 41 (-48.1%)
expmrc: ExpMRC, Explainability Evaluation for Machine Reading Comprehension.
Stars: ✭ 58 (-26.58%)
SHAP FOLD: (Explainable AI) Learning Non-Monotonic Logic Programs from Statistical Models Using High-Utility Itemset Mining.
Stars: ✭ 35 (-55.7%)
zennit: A high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (-27.85%)
Tensorwatch: Debugging, monitoring, and visualization for Python machine learning and data science.
Stars: ✭ 3,191 (+3939.24%)
ShapML.jl: A Julia package for interpretable machine learning with stochastic Shapley values.
Stars: ✭ 63 (-20.25%)
interpretable-ml: Techniques and resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-78.48%)
Mindsdb: Predictive AI layer for existing databases.
Stars: ✭ 4,199 (+5215.19%)
XAIatERUM2020: Workshop on explanation and exploration of machine learning models with R and DALEX at eRum 2020.
Stars: ✭ 52 (-34.18%)
CARLA: A Python library to benchmark algorithmic recourse and counterfactual explanation algorithms.
Stars: ✭ 166 (+110.13%)
mindsdb server: Allows you to consume and expose MindsDB workflows over HTTP.
Stars: ✭ 3 (-96.2%)
global-attribution-mapping: GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations.
Stars: ✭ 18 (-77.22%)
Deep XF: Package for building explainable forecasting and nowcasting models with state-of-the-art deep neural networks and dynamic factor models on time-series datasets in a single line of code. Also provides utilities for time-series signal similarity matching and for removing noise from time-series signals.
Stars: ✭ 83 (+5.06%)
meg: Molecular Explanation Generator.
Stars: ✭ 14 (-82.28%)
awesome-agi-cocosci: An awesome & curated list for Artificial General Intelligence, an emerging interdisciplinary field that combines artificial intelligence and computational cognitive science.
Stars: ✭ 81 (+2.53%)
javaAnchorExplainer: Explains machine learning models quickly using the Anchor algorithm, originally proposed by marcotcr in 2018.
Stars: ✭ 17 (-78.48%)
Transformer-MM-Explainability: [ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (+512.66%)
path explain: A repository for explaining feature attributions and feature interactions in deep neural networks.
Stars: ✭ 151 (+91.14%)
ArenaR: Data generator for Arena, an interactive XAI dashboard.
Stars: ✭ 28 (-64.56%)
cnn-raccoon: Create interactive dashboards for your convolutional neural networks with a single line of code!
Stars: ✭ 31 (-60.76%)
ddsm-visual-primitives: Using deep learning to discover interpretable representations for mammogram classification and explanation.
Stars: ✭ 25 (-68.35%)
deep-explanation-penalization: Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584).
Stars: ✭ 110 (+39.24%)
bert attn viz: Visualize BERT's self-attention layers on text classification tasks.
Stars: ✭ 41 (-48.1%)
Awesome-XAI-Evaluation: Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems.
Stars: ✭ 57 (-27.85%)
GraphLIME: A PyTorch implementation of GraphLIME.
Stars: ✭ 40 (-49.37%)
trulens: Library containing attribution and interpretation methods for deep nets.
Stars: ✭ 146 (+84.81%)
3D-GuidedGradCAM-for-Medical-Imaging: Implementation of Guided Grad-CAM for 3D medical imaging with NIfTI files in TensorFlow 2.0. Different input files can be used; in that case, edit the input to the Guided Grad-CAM model.
Stars: ✭ 60 (-24.05%)
grasp: Essential NLP & ML in short, fast, pure Python code.
Stars: ✭ 58 (-26.58%)
flPapers: A paper collection for federated learning from conferences and journals, 2019 to 2021: accepted papers, hot topics, good research groups, and paper summaries.
Stars: ✭ 76 (-3.8%)
neuro-symbolic-sudoku-solver: ⚙️ Solving Sudoku using deep reinforcement learning in combination with powerful symbolic representations.
Stars: ✭ 60 (-24.05%)
sage: For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+63.29%)
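sage assigns Shapley values to features in a game defined on expected loss; as a much simpler point of comparison, permutation importance measures how much the loss grows when one feature's column is shuffled. The sketch below implements plain permutation importance, not SAGE's algorithm, with made-up toy data:

```python
import random

def permutation_importance(predict, X, y, loss, feature, n_repeats=10, seed=0):
    """Mean increase in average loss when one feature's column is
    randomly shuffled across the dataset (plain permutation importance,
    a cruder global measure than SAGE's Shapley-based one)."""
    rng = random.Random(seed)
    base = sum(loss(predict(x), t) for x, t in zip(X, y)) / len(X)
    total = 0.0
    for _ in range(n_repeats):
        col = [x[feature] for x in X]
        rng.shuffle(col)  # break the feature-target association
        Xp = [list(x) for x in X]
        for row, v in zip(Xp, col):
            row[feature] = v
        total += sum(loss(predict(x), t) for x, t in zip(Xp, y)) / len(X) - base
    return total / n_repeats

# Toy check: the model ignores feature 1, so shuffling it costs nothing.
X = [[i, i % 3] for i in range(30)]
y = [2 * x[0] for x in X]
squared = lambda p, t: (p - t) ** 2
model = lambda x: 2 * x[0]
imp0 = permutation_importance(model, X, y, squared, feature=0)
imp1 = permutation_importance(model, X, y, squared, feature=1)
print(imp0 > 0, imp1 == 0)  # True True
```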
GNNLens2: Visualization tool for graph neural networks.
Stars: ✭ 155 (+96.2%)
adaptive-wavelets: Adaptive, interpretable wavelets across domains (NeurIPS 2021).
Stars: ✭ 58 (-26.58%)
isarn-sketches-spark: Routines and data structures for using isarn-sketches idiomatically in Apache Spark.
Stars: ✭ 28 (-64.56%)
mindsdb native: Machine learning in one line of code.
Stars: ✭ 34 (-56.96%)
Relevance-CAM: The official code of Relevance-CAM.
Stars: ✭ 21 (-73.42%)
transformers-interpret: Model explainability that works seamlessly with 🤗 Transformers. Explain your transformer model in just 2 lines of code.
Stars: ✭ 861 (+989.87%)
datafsm: Machine learning finite state machine models from data with genetic algorithms.
Stars: ✭ 14 (-82.28%)
mta: Multi-Touch Attribution.
Stars: ✭ 60 (-24.05%)
pyCeterisParibus: Python library for ceteris paribus plots (what-if plots).
Stars: ✭ 19 (-75.95%)
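A ceteris paribus (what-if) profile is simple to state: vary one feature over a grid while holding every other value of the instance fixed, and record the model's prediction at each point. A minimal sketch, with a made-up pricing model rather than the pyCeterisParibus API:

```python
def ceteris_paribus(predict, x, feature, grid):
    """What-if profile: vary a single feature over `grid` while keeping
    the rest of instance `x` fixed, recording each prediction."""
    profile = []
    for value in grid:
        z = list(x)
        z[feature] = value  # only this feature changes
        profile.append((value, predict(z)))
    return profile

# Hypothetical pricing model: price grows with area (feature 0)
# and shrinks with age (feature 1).
price = lambda z: 50 * z[0] - 2 * z[1]
profile = ceteris_paribus(price, x=[100, 10], feature=1, grid=[0, 10, 20])
print(profile)  # [(0, 5000), (10, 4980), (20, 4960)]
```

Plotting such profiles for several instances side by side is exactly what the library's what-if plots visualize.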
self critical vqa: Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering".
Stars: ✭ 39 (-50.63%)
DIG: A library for graph deep learning research.
Stars: ✭ 1,078 (+1264.56%)
auditor: Model verification, validation, and error analysis.
Stars: ✭ 56 (-29.11%)