
26 open source projects that are alternatives to or similar to ShapML.jl

interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-73.02%)
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-76.19%)
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-19.05%)
dominance-analysis
This package can be used for dominance analysis or Shapley value regression to find the relative importance of predictors in a given dataset. It can also be used for key driver analysis or marginal resource allocation models.
Stars: ✭ 111 (+76.19%)
Shap
A game-theoretic approach to explain the output of any machine learning model (see the usage sketch at the end of this list).
Stars: ✭ 14,917 (+23577.78%)
Mutual labels:  shapley, shap
fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (+25.4%)
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+3715.87%)
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+6807.94%)
diabetes use case
Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-65.08%)
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (+44.44%)
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+74.6%)
Mutual labels:  feature-importance
ArenaR
Data generator for Arena, an interactive XAI dashboard
Stars: ✭ 28 (-55.56%)
Mutual labels:  iml
Awesome-XAI-Evaluation
Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
Stars: ✭ 57 (-9.52%)
XAIatERUM2020
Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (-17.46%)
isarn-sketches-spark
Routines and data structures for using isarn-sketches idiomatically in Apache Spark
Stars: ✭ 28 (-55.56%)
Mutual labels:  feature-importance
Captum
Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+4392.06%)
Mutual labels:  feature-importance
Predictive-Maintenance-of-Aircraft-Engine
In this project, I apply various predictive maintenance techniques to accurately predict the impending failure of an aircraft turbofan engine.
Stars: ✭ 48 (-23.81%)
Mutual labels:  feature-importance
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+74.6%)
Mutual labels:  feature-importance
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-26.98%)
Mutual labels:  shapley
shapr
Explaining the output of machine learning models with more accurately estimated Shapley values
Stars: ✭ 95 (+50.79%)
Mutual labels:  shapley
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+104.76%)
Mutual labels:  shapley
mta
Multi-Touch Attribution
Stars: ✭ 60 (-4.76%)
Mutual labels:  shapley
Layerwise-Relevance-Propagation
Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers
Stars: ✭ 78 (+23.81%)
ml-fairness-framework
FairPut, a machine learning fairness framework with LightGBM: explainability, robustness, and fairness (by @firmai)
Stars: ✭ 59 (-6.35%)
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-25.4%)
ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
Stars: ✭ 62 (-1.59%)
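Most of the projects above (ShapML.jl itself, Shap, fastshap, shapr, sage, removal-explanations) center on Shapley-value feature attributions. The snippet below is a minimal sketch of that shared workflow using the Python shap package listed above; the scikit-learn model and dataset are illustrative assumptions, not taken from any project on this page.

```python
# Minimal sketch of the common Shapley-value workflow these packages share,
# shown with the Python `shap` library. The model and dataset below are
# illustrative assumptions, not part of any project on this page.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit any model; shap treats it as a black box through its predict function.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Build a model-agnostic explainer with X as the background data, then
# compute per-instance Shapley values for a handful of rows.
explainer = shap.Explainer(model.predict, X)
shap_values = explainer(X.iloc[:50])

# Each row of shap_values.values holds that instance's feature attributions;
# together with the base value they sum to the model's prediction for the row.
print(shap_values.values.shape)  # (50, number of features)
```

ShapML.jl follows a similar pattern in Julia: given a trained model, a user-supplied prediction wrapper, and a reference dataset, it returns approximate per-row Shapley values estimated by Monte Carlo sampling.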