firmai / ml-fairness-framework

Licence: other
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)

Programming Languages

Jupyter Notebook

Projects that are alternatives to or similar to ml-fairness-framework

fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (+33.9%)
Mutual labels:  explainable-ai, explainable-ml, xai, interpretable-machine-learning
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+7276.27%)
Mutual labels:  explainable-ai, explainable-ml, xai, interpretable-machine-learning
responsible-ai-toolbox
This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+942.37%)
Mutual labels:  explainable-ai, explainable-ml, fairness-ai, fairness-ml
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-74.58%)
Mutual labels:  explainable-ai, explainable-ml, xai, interpretable-machine-learning
dlime experiments
In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
Stars: ✭ 21 (-64.41%)
Mutual labels:  explainable-ai, explainable-ml, xai
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-13.56%)
Mutual labels:  explainable-ml, xai, interpretable-machine-learning
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021.
Stars: ✭ 47 (-20.34%)
Mutual labels:  explainable-ai, explainable-ml, interpretable-machine-learning
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-62.71%)
Mutual labels:  explainable-ml, xai, interpretable-machine-learning
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+3974.58%)
Mutual labels:  explainable-ml, xai, interpretable-machine-learning
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (+181.36%)
Mutual labels:  counterfactual, explainable-ai, explainable-ml
wefe
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project!
Stars: ✭ 164 (+177.97%)
Mutual labels:  fairness-ai, fairness-ml
ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
Stars: ✭ 62 (+5.08%)
Mutual labels:  explainable-ai, interpretable-machine-learning
DataScience ArtificialIntelligence Utils
Examples of Data Science projects and Artificial Intelligence use cases
Stars: ✭ 302 (+411.86%)
Mutual labels:  explainable-ai, explainable-ml
expmrc
ExpMRC: Explainability Evaluation for Machine Reading Comprehension
Stars: ✭ 58 (-1.69%)
Mutual labels:  explainable-ai, xai
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-30.51%)
Mutual labels:  explainable-ai, xai
global-attribution-mapping
GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations
Stars: ✭ 18 (-69.49%)
Mutual labels:  explainable-ai, explainable-ml
awesome-graph-explainability-papers
Papers about explainability of GNNs
Stars: ✭ 153 (+159.32%)
Mutual labels:  explainable-ai, xai
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+86.44%)
Mutual labels:  explainable-ai, fairness-ml
shapr
Explaining the output of machine learning models with more accurately estimated Shapley values
Stars: ✭ 95 (+61.02%)
Mutual labels:  explainable-ai, explainable-ml
mindsdb server
MindsDB Server allows you to consume and expose MindsDB workflows through HTTP.
Stars: ✭ 3 (-94.92%)
Mutual labels:  explainable-ai, xai

FairPut — Fair Machine Learning Framework

Colab Notebook: Mortgage Case Study

Paper (SSRN): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3619715


This is a holistic approach to implementing fair outputs at the individual and group level. The methods developed or used include quantitative monotonic measures, residual explanations, benchmark competition, adversarial attacks, disparate error analysis, model-agnostic pre- and post-processing, reasoning codes, counterfactuals, contrastive explanations, and prototypical examples.

FairPut is a lightweight, open framework that describes a preferred process at the end of the machine learning pipeline to enhance model fairness. The aim is to simultaneously improve model interpretability, robustness, and fairness while maintaining a reasonable level of accuracy. FairPut unifies various recent machine learning constructs in a practical manner. The method is model-agnostic, but this particular development instance uses LightGBM.
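
Below is a minimal sketch, on synthetic data, of the kind of constrained LightGBM instance the framework builds on; the feature layout and constraint signs are illustrative assumptions, not the mortgage case study itself.

import lightgbm as lgb
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: three numeric features, one binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# monotone_constraints takes one entry per feature: +1 forces the prediction
# to be non-decreasing in that feature, -1 non-increasing, 0 unconstrained.
model = lgb.LGBMClassifier(monotone_constraints=[1, 1, 0], random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))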

1. Model Explainability (Colab)


  • Model Respecification
    • Protected Values Prediction
    • Model Constraints
    • Hyperparameter Modelling
    • Interpretable Model
    • Global Explanations (see the SHAP sketch after this list)
    • Monotonicity Feature Explanations
  • Quantitative Validation
    • Level Two Monotonicity
    • Relationship Analysis
    • Partial Dependence (LV1) Monotonicity
    • Feature Interactions
    • Metrics and Cut-off
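
For the global and local explanation steps, here is a sketch using SHAP's TreeExplainer on the constrained model above; this illustrates the general workflow rather than reproducing the notebooks verbatim.

import shap

# Exact tree-ensemble SHAP attributions (in log-odds units for LightGBM).
explainer = shap.TreeExplainer(model)
explanation = explainer(X_test)

# Global explanation: mean absolute attribution per feature.
shap.plots.bar(explanation)

# Local explanation for a single individual (row 0 of the test set).
shap.plots.waterfall(explanation[0])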

2. Model Robustness (Colab)


  • Residual Deviation
  • Residual Explanations (see the sketch after this list)
  • Benchmark Competition
  • Adversarial Attack
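
One way to realise the residual-explanation step (a sketch of the general idea, not necessarily the exact FairPut procedure) is to fit an auxiliary model to the residuals and attribute each individual's residual to features with SHAP:

import numpy as np
import shap
from lightgbm import LGBMRegressor

# Residual deviation: distance between predicted probability and label.
proba = model.predict_proba(X_test)[:, 1]
residuals = y_test - proba

# Fit an auxiliary regressor to the residuals, then explain it, so that
# large individual residuals can be traced back to specific features.
residual_model = LGBMRegressor(random_state=0).fit(X_test, residuals)
residual_explanation = shap.TreeExplainer(residual_model)(X_test)

# Feature attributions for the worst-predicted individual.
worst = int(np.argmax(np.abs(residuals)))
print(worst, residual_explanation[worst].values)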

3. Regulatory Fairness (Colab)


  • Group
    • Disparate Error Analysis
      • Parity Indicators
      • Fair Lending Measures
    • Model Agnostic Processing
      • Reweighing Preprocessing (see the sketch after this list)
      • Disparate Impact Preprocessing
      • Calibrated Equalized Odds
    • Feature Decomposition
  • Individual
    • Reasoning
      • Individual Disparity
      • Reasoning Codes
    • Example Base
      • Prototypical
      • Counterfactual
      • Contrastive
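
As a sketch of the reweighing preprocessing step with AIF360 (the column names and toy data below are hypothetical):

import pandas as pd
from aif360.algorithms.preprocessing import Reweighing
from aif360.datasets import BinaryLabelDataset

# Toy data with a binary protected attribute "sex" (1 = privileged)
# and a binary label "approved".
df = pd.DataFrame({
    "sex":      [1, 1, 0, 0, 1, 0, 1, 0],
    "income":   [50, 60, 55, 40, 70, 45, 65, 52],
    "approved": [1, 1, 0, 0, 1, 1, 1, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["approved"], protected_attribute_names=["sex"])

rw = Reweighing(unprivileged_groups=[{"sex": 0}],
                privileged_groups=[{"sex": 1}])
reweighed = rw.fit_transform(dataset)

# Instance weights that equalise group/label frequencies; these can be
# passed to a downstream LightGBM fit via sample_weight.
print(reweighed.instance_weights)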

If you use any of the novel techniques, or the framework as a whole, please cite the following.

BibTeX entry:

@software{fairput,
  title = {{FairPut}: Fair Machine Learning Framework},
  author = {Snow, Derek},
  url = {https://github.com/firmai/fairput/},
  version = {1.15},
  date = {2020-03-31},
}

Stack: Alibi, AIF360, AIX360, SHAP, PDPbox

What Questions Do We Attempt to Answer?

  1. Can the model predict the outcome using just protected values? (Protected Value Prediction)
  2. Is the model monotonic and are variables randomly selected? (Model Constraints, LV1 & LV2 Monotonicity)
  3. Is the model explainable? (Model Selection, Feature Interactions)
  4. Can you explain the predictions globally and locally? (SHAP)
  5. Does the model perform well? (Metrics)
  6. Which individuals have received the most and least accurate predictions? (Residual Deviation)
  7. Can you point to the feature responsible for large individual residuals? (Residual Explanations)
  8. What feature values could potentially be outliers due to their misprediction? (Residual Explanations)
  9. Do some models perform better at predicting the outcomes for a certain type of individual? (Benchmark Competition)
  10. Can the model outcome be changed by artificially perturbing certain values of interest? (Adversarial Attack)
  11. Do certain groups suffer relative to others as measured through group statistics? (Parity Indicators, Fair Lending Measures; see the sketch after this list)
  12. Can various data and prediction processing techniques improve these group statistics? (Model Agnostic Processing)
  13. What features are driving the structural differences between groups controlling for demographic factors? (Feature Decomposition)
  14. Which individuals have received the most unfair prediction or treatment from the model? (Individual Disparity)
  15. Why did the model decide to predict a specific outcome for a particular individual or sub-group of individuals? (Reasoning Codes)
  16. Which individuals are most similar to those receiving unfair treatment, and were those individuals treated similarly? (Prototypical)
  17. Which individual is the closest instance to a sample individual while having a different predicted outcome? (Counterfactual)
  18. What is the minimal feature perturbation necessary to switch an individual's prediction to another category? (Contrastive)
  19. What is the maximum perturbation possible while the model prediction remains the same? (Contrastive)
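
For questions 11 and 12, the parity indicators can be computed with AIF360's metric classes; here is a sketch on the toy dataset from the reweighing example above:

from aif360.metrics import BinaryLabelDatasetMetric

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}])

# Disparate impact: ratio of favourable-outcome rates between groups
# (1.0 is parity; the four-fifths rule flags values below 0.8).
print(metric.disparate_impact())

# Statistical parity difference: difference of favourable-outcome rates.
print(metric.statistical_parity_difference())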

Noteworthy Screen Captures

