
259 open-source projects that are alternatives to or similar to ProtoTree

Interpret
Fit interpretable models. Explain black-box machine learning models.
Stars: ✭ 4,352 (+9159.57%)
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-68.09%)
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+134.04%)
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+5014.89%)
zennit
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Stars: ✭ 57 (+21.28%)
responsible-ai-toolbox
This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+1208.51%)
ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
Stars: ✭ 62 (+31.91%)
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop learning, and Visual Analytics.
Stars: ✭ 51 (+8.51%)
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (+253.19%)
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-63.83%)
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-53.19%)
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+134.04%)
ml-fairness-framework
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Stars: ✭ 59 (+25.53%)
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-12.77%)
fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (+68.09%)
Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network, including examples for DETR and VQA.
Stars: ✭ 484 (+929.79%)
sage
Calculates global feature importance using Shapley values.
Stars: ✭ 129 (+174.47%)
Mutual labels:  interpretability, explainability
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (+231.91%)
Mutual labels:  decision-trees, interpretability
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+23.4%)
Mutual labels:  interpretability, explainability
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+1731.91%)
Mutual labels:  interpretability, explainable-ai
SHAP FOLD
(Explainable AI) - Learning Non-Monotonic Logic Programs From Statistical Models Using High-Utility Itemset Mining
Stars: ✭ 35 (-25.53%)
Mutual labels:  explainable-ai, explainable-ml
global-attribution-mapping
GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations
Stars: ✭ 18 (-61.7%)
Mutual labels:  explainable-ai, explainable-ml
Mindsdb
Predictive AI layer for existing databases.
Stars: ✭ 4,199 (+8834.04%)
Mutual labels:  explainable-ai, explainable-ml
awesome-graph-explainability-papers
Papers about explainability of GNNs
Stars: ✭ 153 (+225.53%)
Mutual labels:  explainable-ai, explainability
path explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
Stars: ✭ 151 (+221.28%)
Layerwise-Relevance-Propagation
Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers
Stars: ✭ 78 (+65.96%)
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-2.13%)
Mutual labels:  interpretability, explainability
Pytorch Grad Cam
Many Class Activation Map methods implemented in PyTorch for CNNs and Vision Transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM.
Stars: ✭ 3,814 (+8014.89%)
Awesome Production Machine Learning
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning systems.
Stars: ✭ 10,504 (+22248.94%)
Mutual labels:  interpretability, explainability
Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+31638.3%)
Mutual labels:  interpretability, explainability
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Stars: ✭ 411 (+774.47%)
Mutual labels:  decision-trees, interpretability
self critical vqa
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (-17.02%)
meg
Molecular Explanation Generator
Stars: ✭ 14 (-70.21%)
Mutual labels:  interpretability, explainable-ai
ALPS 2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (+17.02%)
Mutual labels:  interpretability, explainability
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+168.09%)
Mutual labels:  interpretability, explainability
DataScience ArtificialIntelligence Utils
Examples of Data Science projects and Artificial Intelligence use cases
Stars: ✭ 302 (+542.55%)
Mutual labels:  explainable-ai, explainable-ml
shapr
Explaining the output of machine learning models with more accurately estimated Shapley values
Stars: ✭ 95 (+102.13%)
Mutual labels:  explainable-ai, explainable-ml
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (+93.62%)
Tensorwatch
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Stars: ✭ 3,191 (+6689.36%)
Mutual labels:  explainable-ai, explainable-ml
Fine-Grained-or-Not
Code release for Your “Flamingo” is My “Bird”: Fine-Grained, or Not (CVPR 2021 Oral)
Stars: ✭ 32 (-31.91%)
XAIatERUM2020
Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (+10.64%)
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-40.43%)
Mutual labels:  interpretability, explainability
dlime experiments
Proposes a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experiments on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
Stars: ✭ 21 (-55.32%)
Mutual labels:  explainable-ai, explainable-ml
single-positive-multi-label
Multi-Label Learning from Single Positive Labels - CVPR 2021
Stars: ✭ 63 (+34.04%)
Mutual labels:  cvpr2021
RfDNet
Implementation of CVPR'21: RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction
Stars: ✭ 150 (+219.15%)
Mutual labels:  cvpr2021
rfvis
A tool for visualizing the structure and performance of Random Forests 🌳
Stars: ✭ 20 (-57.45%)
Mutual labels:  decision-trees
partial dependence
Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-51.06%)
Mutual labels:  interpretability
click-through-rate-prediction
📈 Click-Through Rate Prediction using Logistic Regression and Tree Algorithms
Stars: ✭ 60 (+27.66%)
Mutual labels:  decision-trees
BLIP
Official Implementation of CVPR2021 paper: Continual Learning via Bit-Level Information Preserving
Stars: ✭ 33 (-29.79%)
Mutual labels:  cvpr2021
RobustTrees
[ICML 2019, 20-minute talk] Robust Decision Trees Against Adversarial Examples
Stars: ✭ 62 (+31.91%)
Mutual labels:  decision-trees
free-lunch-saliency
Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-68.09%)
Mutual labels:  interpretability
MetaBIN
[CVPR2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Stars: ✭ 58 (+23.4%)
Mutual labels:  cvpr2021
multi-imbalance
Python package for tackling multi-class imbalance problems. http://www.cs.put.poznan.pl/mlango/publications/multiimbalance/
Stars: ✭ 66 (+40.43%)
Mutual labels:  decision-trees
fast-tsetlin-machine-with-mnist-demo
A fast Tsetlin Machine implementation employing bit-wise operators, with an MNIST demo.
Stars: ✭ 58 (+23.4%)
Mutual labels:  explainable-ai
Bike-Sharing-Demand-Kaggle
Top 5th percentile solution to the Kaggle knowledge problem - Bike Sharing Demand
Stars: ✭ 33 (-29.79%)
Mutual labels:  decision-trees
MachineLearning
Implementations of machine learning algorithms in Python 3
Stars: ✭ 16 (-65.96%)
Mutual labels:  decision-trees
ShapML.jl
A Julia package for interpretable machine learning with stochastic Shapley values
Stars: ✭ 63 (+34.04%)
Awesome-Vision-Transformer-Collection
Variants of the Vision Transformer and their downstream tasks
Stars: ✭ 124 (+163.83%)
Mutual labels:  explainable-ai
Im2Vec
[CVPR 2021 Oral] Im2Vec: Synthesizing Vector Graphics without Vector Supervision
Stars: ✭ 229 (+387.23%)
Mutual labels:  cvpr2021
Involution
PyTorch reimplementation of the paper "Involution: Inverting the Inherence of Convolution for Visual Recognition" (2D and 3D Involution) [CVPR 2021].
Stars: ✭ 98 (+108.51%)
Mutual labels:  cvpr2021
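Several of the projects above (Shap, fastshap, sage, shapr, ShapML.jl, ShapleyExplanationNetworks) are built around the same game-theoretic quantity: the Shapley value. As a minimal, illustrative sketch — not taken from any of these libraries, and using a hypothetical toy game rather than a real model — exact Shapley values can be computed directly from the definition:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game given by value(coalition).

    Exponential in len(players); the libraries above exist precisely to
    approximate this efficiently for ML feature attribution.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):                      # all coalition sizes excluding i
            for S in combinations(others, r):   # all coalitions not containing i
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Weighted marginal contribution of i to coalition S
                total += weight * (value(frozenset(S) | {i}) - value(frozenset(S)))
        phi[i] = total
    return phi

# Toy additive game (hypothetical): each player contributes a fixed payoff,
# so the Shapley value of each player equals its own payoff.
payoffs = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda coalition: sum(payoffs[p] for p in coalition)
print(shapley_values(list(payoffs), v))
```

For an additive game like this one, each player's Shapley value collapses to its own marginal payoff, which makes the toy example easy to sanity-check; the interesting (and expensive) cases are non-additive value functions such as a trained model's prediction restricted to feature subsets.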