
68 open-source projects that are alternatives to, or similar to, ml-fairness-framework

fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (+33.9%)
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-74.58%)
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+7276.27%)
responsible-ai-toolbox
This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+942.37%)
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+3974.58%)
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-20.34%)
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (+181.36%)
dlime experiments
In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
Stars: ✭ 21 (-64.41%)
Mutual labels:  explainable-ai, explainable-ml, xai
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-13.56%)
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-62.71%)
Mindsdb
Predictive AI layer for existing databases.
Stars: ✭ 4,199 (+7016.95%)
Mutual labels:  explainable-ai, explainable-ml
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-71.19%)
awesome-graph-explainability-papers
Papers about explainability of GNNs
Stars: ✭ 153 (+159.32%)
Mutual labels:  explainable-ai, xai
ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
Stars: ✭ 62 (+5.08%)
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-30.51%)
Mutual labels:  explainable-ai, xai
DataScience ArtificialIntelligence Utils
Examples of Data Science projects and Artificial Intelligence use cases
Stars: ✭ 302 (+411.86%)
Mutual labels:  explainable-ai, explainable-ml
expmrc
ExpMRC: Explainability Evaluation for Machine Reading Comprehension
Stars: ✭ 58 (-1.69%)
Mutual labels:  explainable-ai, xai
wefe
WEFE: The Word Embeddings Fairness Evaluation Framework, which standardizes bias measurement and mitigation in word embedding models.
Stars: ✭ 164 (+177.97%)
Mutual labels:  fairness-ai, fairness-ml
Tensorwatch
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Stars: ✭ 3,191 (+5308.47%)
Mutual labels:  explainable-ai, explainable-ml
XAIatERUM2020
Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (-11.86%)
shapr
Explaining the output of machine learning models with more accurately estimated Shapley values
Stars: ✭ 95 (+61.02%)
Mutual labels:  explainable-ai, explainable-ml
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+86.44%)
Mutual labels:  explainable-ai, fairness-ml
zennit
Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (-3.39%)
Mutual labels:  explainable-ai, xai
global-attribution-mapping
GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations
Stars: ✭ 18 (-69.49%)
Mutual labels:  explainable-ai, explainable-ml
SHAP FOLD
(Explainable AI) - Learning Non-Monotonic Logic Programs From Statistical Models Using High-Utility Itemset Mining
Stars: ✭ 35 (-40.68%)
Mutual labels:  explainable-ai, explainable-ml
mindsdb server
MindsDB server allows you to consume and expose MindsDB workflows over HTTP.
Stars: ✭ 3 (-94.92%)
Mutual labels:  explainable-ai, xai
auditor
Model verification, validation, and error analysis
Stars: ✭ 56 (-5.08%)
Mutual labels:  xai
Remembering-for-the-Right-Reasons
Official Implementation of Remembering for the Right Reasons (ICLR 2021)
Stars: ✭ 27 (-54.24%)
Mutual labels:  xai
bias-in-credit-models
Examples of unfairness detection for a classification-based credit model
Stars: ✭ 18 (-69.49%)
Mutual labels:  fairness-ai
meg
Molecular Explanation Generator
Stars: ✭ 14 (-76.27%)
Mutual labels:  explainable-ai
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+86.44%)
Mutual labels:  explainable-ai
cnn-raccoon
Create interactive dashboards for your Convolutional Neural Networks with a single line of code!
Stars: ✭ 31 (-47.46%)
Mutual labels:  explainable-ml
awesome-agi-cocosci
An awesome & curated list for Artificial General Intelligence, an emerging inter-discipline field that combines artificial intelligence and computational cognitive sciences.
Stars: ✭ 81 (+37.29%)
Mutual labels:  explainable-ai
Deep XF
Package for building explainable forecasting and nowcasting models with state-of-the-art deep neural networks and dynamic factor models on time-series datasets with a single line of code. Also provides utilities for time-series signal-similarity matching and for removing noise from time-series signals.
Stars: ✭ 83 (+40.68%)
Mutual labels:  explainable-ml
causal-ml
Must-read papers and resources related to causal inference and machine (deep) learning
Stars: ✭ 387 (+555.93%)
Mutual labels:  counterfactual
javaAnchorExplainer
Explains machine learning models quickly using the Anchor algorithm, originally proposed by marcotcr in 2018
Stars: ✭ 17 (-71.19%)
Mutual labels:  explainable-ai
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (+720.34%)
Mutual labels:  explainable-ai
gdc
Code for the ICLR 2021 paper "A Distributional Approach to Controlled Text Generation"
Stars: ✭ 94 (+59.32%)
Mutual labels:  fairness-ml
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-52.54%)
Mutual labels:  xai
DIG
A library for graph deep learning research
Stars: ✭ 1,078 (+1727.12%)
Mutual labels:  explainable-ml
path explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
Stars: ✭ 151 (+155.93%)
Mutual labels:  explainable-ai
Awesome-Vision-Transformer-Collection
Variants of Vision Transformer and its downstream tasks
Stars: ✭ 124 (+110.17%)
Mutual labels:  explainable-ai
bert attn viz
Visualize BERT's self-attention layers on text classification tasks
Stars: ✭ 41 (-30.51%)
Mutual labels:  explainable-ai
Relational Deep Reinforcement Learning
No description or website provided.
Stars: ✭ 44 (-25.42%)
Mutual labels:  explainable-ai
Awesome-XAI-Evaluation
Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
Stars: ✭ 57 (-3.39%)
3D-GuidedGradCAM-for-Medical-Imaging
This repo contains an implementation of Guided Grad-CAM for 3D medical imaging using NIfTI files in TensorFlow 2.0. Different input files can be used; in that case, edit the input to the Guided Grad-CAM model.
Stars: ✭ 60 (+1.69%)
Mutual labels:  explainable-ai
ddsm-visual-primitives
Using deep learning to discover interpretable representations for mammogram classification and explanation
Stars: ✭ 25 (-57.63%)
Mutual labels:  explainable-ai
SyntheticControlMethods
A Python package for causal inference using Synthetic Controls
Stars: ✭ 90 (+52.54%)
Mutual labels:  counterfactual
grasp
Essential NLP & ML, short & fast pure Python code
Stars: ✭ 58 (-1.69%)
Mutual labels:  explainable-ai
neuro-symbolic-sudoku-solver
⚙️ Solving sudoku using Deep Reinforcement learning in combination with powerful symbolic representations.
Stars: ✭ 60 (+1.69%)
Mutual labels:  explainable-ai
trulens
Library containing attribution and interpretation methods for deep nets.
Stars: ✭ 146 (+147.46%)
Mutual labels:  explainable-ml
cfvqa
[CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
Stars: ✭ 96 (+62.71%)
Mutual labels:  counterfactual
GNNLens2
Visualization tool for Graph Neural Networks
Stars: ✭ 155 (+162.71%)
Mutual labels:  xai
GraphLIME
This is a PyTorch implementation of GraphLIME
Stars: ✭ 40 (-32.2%)
Mutual labels:  explainable-ai
TERM
Tilted Empirical Risk Minimization (ICLR '21)
Stars: ✭ 37 (-37.29%)
Mutual labels:  fairness-ml
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (-1.69%)
Mutual labels:  xai
fast-tsetlin-machine-with-mnist-demo
A fast Tsetlin Machine implementation employing bit-wise operators, with MNIST demo.
Stars: ✭ 58 (-1.69%)
Mutual labels:  explainable-ai
mindsdb native
Machine Learning in one line of code
Stars: ✭ 34 (-42.37%)
Mutual labels:  xai
datafsm
Machine Learning Finite State Machine Models from Data with Genetic Algorithms
Stars: ✭ 14 (-76.27%)
Mutual labels:  explainable-ai
ShapML.jl
A Julia package for interpretable machine learning with stochastic Shapley values
Stars: ✭ 63 (+6.78%)
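Several of the projects above (fastshap, shapr, ShapleyExplanationNetworks, ShapML.jl) center on Shapley-value attribution. As a rough illustration of the underlying idea, and not the API of any listed library, here is a minimal Monte Carlo Shapley estimator for a black-box model; all function and parameter names are hypothetical:

```python
import random

def shapley_value(predict, instance, background, feature, n_samples=200, seed=0):
    """Monte Carlo estimate of one feature's Shapley value.

    predict    -- black-box model: list of feature values -> float
    instance   -- the example being explained
    background -- a reference example supplying "absent" feature values
    feature    -- index of the feature to attribute
    """
    rng = random.Random(seed)
    idx = list(range(len(instance)))
    total = 0.0
    for _ in range(n_samples):
        order = idx[:]
        rng.shuffle(order)
        pos = order.index(feature)
        # Features preceding (and including) the target in the random order
        # take values from the instance; the rest fall back to the background.
        with_f = [instance[i] if i in order[:pos + 1] else background[i] for i in idx]
        without_f = [instance[i] if i in order[:pos] else background[i] for i in idx]
        # Average marginal contribution of `feature` over random orderings.
        total += predict(with_f) - predict(without_f)
    return total / n_samples
```

For a linear model such as `lambda x: 2 * x[0] + x[1]`, explaining `[1, 1]` against background `[0, 0]` attributes 2.0 to the first feature and 1.0 to the second, matching the model's coefficients; the listed libraries implement far more efficient and statistically careful versions of this estimator.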