
capitalone / global-attribution-mapping

License: Apache-2.0
GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations

Programming Languages

Python: 139,335 projects (#7 most used programming language)
Makefile: 30,231 projects

Projects that are alternatives of or similar to global-attribution-mapping

shapr
Explaining the output of machine learning models with more accurately estimated Shapley values
Stars: ✭ 95 (+427.78%)
Mutual labels:  explainable-ai, explainable-ml
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (+822.22%)
Mutual labels:  explainable-ai, explainable-ml
Tensorwatch
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Stars: ✭ 3,191 (+17627.78%)
Mutual labels:  explainable-ai, explainable-ml
ml-fairness-framework
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Stars: ✭ 59 (+227.78%)
Mutual labels:  explainable-ai, explainable-ml
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (+161.11%)
Mutual labels:  explainable-ai, explainable-ml
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+24077.78%)
Mutual labels:  explainable-ai, explainable-ml
Mindsdb
Predictive AI layer for existing databases.
Stars: ✭ 4,199 (+23227.78%)
Mutual labels:  explainable-ai, explainable-ml
DataScience ArtificialIntelligence Utils
Examples of Data Science projects and Artificial Intelligence use cases
Stars: ✭ 302 (+1577.78%)
Mutual labels:  explainable-ai, explainable-ml
responsible-ai-toolbox
This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+3316.67%)
Mutual labels:  explainable-ai, explainable-ml
dlime experiments
In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
Stars: ✭ 21 (+16.67%)
Mutual labels:  explainable-ai, explainable-ml
fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (+338.89%)
Mutual labels:  explainable-ai, explainable-ml
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-16.67%)
Mutual labels:  explainable-ai, explainable-ml
SHAP FOLD
(Explainable AI) - Learning Non-Monotonic Logic Programs From Statistical Models Using High-Utility Itemset Mining
Stars: ✭ 35 (+94.44%)
Mutual labels:  explainable-ai, explainable-ml
javaAnchorExplainer
Explains machine learning models fast using the Anchor algorithm originally proposed by marcotcr in 2018
Stars: ✭ 17 (-5.56%)
Mutual labels:  explainable-ai
path explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
Stars: ✭ 151 (+738.89%)
Mutual labels:  explainable-ai
Awesome-Vision-Transformer-Collection
Variants of Vision Transformer and its downstream tasks
Stars: ✭ 124 (+588.89%)
Mutual labels:  explainable-ai
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+511.11%)
Mutual labels:  explainable-ai
ddsm-visual-primitives
Using deep learning to discover interpretable representations for mammogram classification and explanation
Stars: ✭ 25 (+38.89%)
Mutual labels:  explainable-ai
Relational Deep Reinforcement Learning
No description or website provided.
Stars: ✭ 44 (+144.44%)
Mutual labels:  explainable-ai
trulens
Library containing attribution and interpretation methods for deep nets.
Stars: ✭ 146 (+711.11%)
Mutual labels:  explainable-ml

GAM (Global Attribution Mapping)

Global Explanations for Deep Neural Networks

GAM explains the landscape of neural network predictions across subpopulations.

This implementation is based on "Global Explanations for Neural Networks: Mapping the Landscape of Predictions" (AAAI/ACM AIES 2019).
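At a high level, GAM treats each local attribution as a ranking of features, compares those rankings with a rank-based distance (the distance="spearman" argument in the example below selects a Spearman-style metric), and clusters them so that each cluster's medoid serves as the global explanation for one subpopulation. The following is a minimal sketch of the rank-distance idea only; GAM's actual metric is a weighted variant described in the paper, and spearman_distance here is a hypothetical helper, not part of the library:

import numpy as np
from scipy.stats import rankdata

def spearman_distance(a, b):
    # Toy rank distance between two local attribution vectors:
    # Spearman's rho from rank differences, mapped to a distance in [0, 1].
    ra, rb = rankdata(a), rankdata(b)
    n = len(a)
    rho = 1 - 6 * np.sum((ra - rb) ** 2) / (n * (n ** 2 - 1))
    return 1 - rho

# Two local attributions over (height, weight, hair color):
print(spearman_distance([0.6, 0.3, 0.1], [0.7, 0.2, 0.1]))    # 0.0 -- same ranking
print(spearman_distance([0.6, 0.3, 0.1], [0.1, 0.85, 0.05]))  # 0.5 -- rankings diverge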

Installation

python3 -m pip install gam

Get Started

First generate local attributions using your favorite technique (a hedged SHAP example follows the session below), then:

>>> from gam.gam import GAM
>>> # for a quick example use `attributions_path="tests/test_attributes.csv"`
>>> # Input/Output: csv (columns: features, rows: local/global attributions)
>>> gam = GAM(attributions_path="<path_to_your_attributes>.csv", distance="spearman", k=2)
>>> gam.generate()
>>> gam.explanations
[[('height', .6), ('weight', .3), ('hair color', .1)], 
 [('weight', .9), ('height', .05), ('hair color', .05)]]
 
>>> gam.subpopulation_sizes
[90, 10]

>>> gam.subpopulations
# global explanation assignment
[0, 1, 0, 0,...]

>>> gam.plot()
# bar chart of feature importance with subpopulation size
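The session above assumes the attributions CSV already exists. As one hedged way to produce it, the sketch below uses the shap package with a scikit-learn model (the model, dataset, and local_attributions.csv file name are illustrative placeholders; any technique that yields one attribution row per sample and one column per feature will work):

import pandas as pd
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Stand-in model and data; substitute your own network and inputs.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# One SHAP attribution vector per sample: shape (n_samples, n_features).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# GAM compares rank orderings, so depending on your use case you may
# prefer magnitudes, e.g. np.abs(shap_values), before writing the CSV.
pd.DataFrame(shap_values, columns=X.columns).to_csv("local_attributions.csv", index=False)

The resulting file can then be passed as attributions_path="local_attributions.csv" in the session above.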

Tests

To run tests:

$ python -m pytest tests/

Contributors

We welcome Your interest in Capital One’s Open Source Projects (the “Project”). Any Contributor to the Project must accept and sign an Agreement indicating agreement to the license terms below. Except for the license granted in this Agreement to Capital One and to recipients of software distributed by Capital One, You reserve all right, title, and interest in and to Your Contributions; this Agreement does not impact Your rights to use Your own Contributions for any other purpose.

Sign the Individual Agreement

Sign the Corporate Agreement

Code of Conduct

This project adheres to the Open Code of Conduct. By participating, you are expected to honor this code.
