EthicalML / Xai

Licence: MIT
XAI - An eXplainability toolbox for machine learning

Programming Languages

python

Projects that are alternatives of or similar to Xai

Imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (-67.45%)
Mutual labels:  artificial-intelligence, ai, ml, interpretability
Pycm
Multi-class confusion matrix library in Python
Stars: ✭ 1,076 (+80.54%)
Mutual labels:  artificial-intelligence, ai, ml, evaluation
Modelchimp
Experiment tracking for machine and deep learning projects
Stars: ✭ 121 (-79.7%)
Mutual labels:  artificial-intelligence, ai, ml
Image classifier
CNN image classifier implemented in Keras Notebook 🖼️.
Stars: ✭ 139 (-76.68%)
Mutual labels:  artificial-intelligence, ai, ml
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+630.2%)
Mutual labels:  artificial-intelligence, ai, interpretability
Kglib
Grakn Knowledge Graph Library (ML R&D)
Stars: ✭ 405 (-32.05%)
Mutual labels:  artificial-intelligence, ai, ml
Evalai
☁️ 🚀 📊 📈 Evaluating state of the art in AI
Stars: ✭ 1,087 (+82.38%)
Mutual labels:  artificial-intelligence, ai, evaluation
Atari
AI research environment for the Atari 2600 games 🤖.
Stars: ✭ 174 (-70.81%)
Mutual labels:  artificial-intelligence, ai, ml
Ffdl
Fabric for Deep Learning (FfDL, pronounced fiddle) is a Deep Learning Platform offering TensorFlow, Caffe, PyTorch etc. as a Service on Kubernetes
Stars: ✭ 640 (+7.38%)
Mutual labels:  artificial-intelligence, ai, ml
Atlas
An Open Source, Self-Hosted Platform For Applied Deep Learning Development
Stars: ✭ 259 (-56.54%)
Mutual labels:  artificial-intelligence, ai, ml
Polyaxon
Machine Learning Platform for Kubernetes (MLOps tools for experimentation and automation)
Stars: ✭ 2,966 (+397.65%)
Mutual labels:  artificial-intelligence, ai, ml
Clai
Command Line Artificial Intelligence or CLAI is an open-sourced project from IBM Research aimed to bring the power of AI to the command line interface.
Stars: ✭ 320 (-46.31%)
Mutual labels:  artificial-intelligence, ai, ml
Awesome Ai Ml Dl
Awesome Artificial Intelligence, Machine Learning and Deep Learning as we learn it. Study notes and a curated list of awesome resources of such topics.
Stars: ✭ 831 (+39.43%)
Mutual labels:  artificial-intelligence, ai, ml
Caffe2
Caffe2 is a lightweight, modular, and scalable deep learning framework.
Stars: ✭ 8,409 (+1310.91%)
Mutual labels:  artificial-intelligence, ai, ml
Hyperparameter hunter
Easy hyperparameter optimization and automatic result saving across machine learning algorithms and libraries
Stars: ✭ 648 (+8.72%)
Mutual labels:  artificial-intelligence, ai, ml
Nlpaug
Data augmentation for NLP
Stars: ✭ 2,761 (+363.26%)
Mutual labels:  artificial-intelligence, ai, ml
Classifai
Enhance your WordPress content with Artificial Intelligence and Machine Learning services.
Stars: ✭ 188 (-68.46%)
Mutual labels:  artificial-intelligence, ai, ml
0xdeca10b
Sharing Updatable Models (SUM) on Blockchain
Stars: ✭ 285 (-52.18%)
Mutual labels:  artificial-intelligence, ai, ml
Csinva.github.io
Slides, paper notes, class notes, blog posts, and research on ML 📉, statistics 📊, and AI 🤖.
Stars: ✭ 342 (-42.62%)
Mutual labels:  artificial-intelligence, ai, ml
Differentiable Plasticity
Implementations of the algorithms described in Differentiable plasticity: training plastic networks with gradient descent, a research paper from Uber AI Labs.
Stars: ✭ 371 (-37.75%)
Mutual labels:  ai, ml


XAI - An eXplainability toolbox for machine learning

XAI is a Machine Learning library designed with AI explainability at its core. XAI contains various tools that enable analysis and evaluation of data and models. The XAI library is maintained by The Institute for Ethical AI & ML, and it was developed based on the 8 principles for Responsible Machine Learning.

You can find the documentation at https://ethicalml.github.io/xai/index.html. You can also check out our talk at TensorFlow London where the idea was first conceived - the talk also provides insight into the definitions and principles behind this library.

YouTube video showing how to use XAI to mitigate undesired biases

This video of the talk presented at the PyData London 2019 Conference provides an overview of the motivations for machine learning explainability, as well as techniques to introduce explainability and mitigate undesired biases using the XAI library.
Do you want to learn about more awesome machine learning explainability tools? Check out our community-built "Awesome Machine Learning Production & Operations" list which contains an extensive list of tools for explainability, privacy, orchestration and beyond.

0.0.5 - ALPHA Version

This library is currently in early-stage development and hence it will be quite unstable due to the fast updates. It is important to bear this in mind if using it in production.

If you want to see a fully functional demo in action clone this repo and run the Example Jupyter Notebook in the Examples folder.

What do we mean by eXplainable AI?

We see the challenge of explainability as more than just an algorithmic challenge: it requires a combination of data science best practices and domain-specific knowledge. The XAI library is designed to empower machine learning engineers and relevant domain experts to analyse the end-to-end solution and identify discrepancies that may result in sub-optimal performance relative to the objectives required. More broadly, the XAI library is designed around the three steps of explainable machine learning: 1) data analysis, 2) model evaluation, and 3) production monitoring.

We provide a visual overview of these three steps in the diagram below:

XAI Quickstart

Installation

The XAI package is on PyPI. To install you can run:

pip install xai

Alternatively you can install from source by cloning the repo and running:

python setup.py install 

Usage

You can find example usage in the examples folder.

1) Data Analysis

With XAI you can identify imbalances in the data. For this, we will load the census dataset from the XAI library.

import xai.data
df = xai.data.load_census()
df.head()

View class imbalances for all categories of one column

ims = xai.imbalance_plot(df, "gender")

View imbalances for all categories across multiple columns

im = xai.imbalance_plot(df, "gender", "loan")

Balance classes using upsampling and/or downsampling

bal_df = xai.balance(df, "gender", "loan", upsample=0.8)
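For intuition, the cross-column balancing above can be sketched in plain pandas. The helper below is illustrative only — it is not xai's implementation — and upsamples each (gender, loan) group towards a fraction of the largest group's size:

```python
import pandas as pd

def balance_sketch(df, cols, upsample=0.8, random_state=0):
    # Illustrative only: upsample every group defined by `cols` to at
    # least `upsample` times the size of the largest group.
    sizes = df.groupby(cols).size()
    target = int(sizes.max() * upsample)
    parts = []
    for _, group in df.groupby(cols):
        if len(group) < target:
            group = group.sample(target, replace=True, random_state=random_state)
        parts.append(group)
    return pd.concat(parts, ignore_index=True)
```

Downsampling works the same way in reverse: cap each group at a fraction of the largest group's size instead of resampling the small ones up.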

Perform custom operations on groups

groups = xai.group_by_columns(df, ["gender", "loan"])
for group, group_df in groups:    
    print(group) 
    print(group_df["loan"].head(), "\n")

Visualise correlations as a matrix

_ = xai.correlations(df, include_categorical=True, plot_type="matrix")

Visualise correlations as a hierarchical dendrogram

_ = xai.correlations(df, include_categorical=True)
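The balanced split example below also references `x`, `y`, and `categorical_cols`, which are defined in the example notebook rather than shown here. A minimal setup might look like this — the toy dataframe is purely hypothetical, and in practice `df` comes from `xai.data.load_census()`:

```python
import pandas as pd

# Toy stand-in for the census dataframe; in practice use
# df = xai.data.load_census().
df = pd.DataFrame({
    "gender": ["male", "female", "female", "male"],
    "ethnicity": ["a", "b", "a", "b"],
    "age": [23, 41, 35, 52],
    "loan": [0, 1, 1, 0],
})

# Treat the string-typed columns as categorical.
categorical_cols = df.select_dtypes(include="object").columns.tolist()

# Features and target for the balanced split.
x = df.drop("loan", axis=1)
y = df["loan"]
```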

Create a balanced train-test split dataset

# Balanced train-test split with minimum 300 examples of 
#     the cross of the target y and the column gender
x_train, y_train, x_test, y_test, train_idx, test_idx = \
    xai.balanced_train_test_split(
            x, y, "gender", 
            min_per_group=300,
            max_per_group=300,
            categorical_cols=categorical_cols)

x_train_display = bal_df[train_idx]
x_test_display = bal_df[test_idx]

print("Total number of examples: ", x_test.shape[0])

df_test = x_test_display.copy()
df_test["loan"] = y_test

_= xai.imbalance_plot(df_test, "gender", "loan", categorical_cols=categorical_cols)

2) Model Evaluation

We can also analyse the interaction between inference results and input features. For this, we will train a single-layer deep learning model.

model = build_model(proc_df.drop("loan", axis=1))

model.fit(f_in(x_train), y_train, epochs=50, batch_size=512)

probabilities = model.predict(f_in(x_test))
predictions = list((probabilities >= 0.5).astype(int).T[0])
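The snippets above rely on `build_model` and `f_in`, which are helpers defined in the example notebook and not shown here. The sketch below is a self-contained stand-in — not the notebook's code — that mimics the Keras-style interface the snippets assume: `fit`, `predict` returning a column of probabilities, and `evaluate` returning `[loss, accuracy]`.

```python
import numpy as np

class TinyModel:
    """Hypothetical logistic-regression stand-in mimicking the Keras-style
    interface (fit / predict / evaluate) used in the snippets above."""

    def __init__(self, n_features):
        self.w = np.zeros(n_features)
        self.b = 0.0

    def _proba(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))

    def fit(self, x, y, epochs=50, batch_size=512, lr=0.1):
        # Full-batch gradient descent; batch_size is kept only for
        # interface compatibility and is unused here.
        y = np.asarray(y, dtype=float)
        for _ in range(epochs):
            p = self._proba(x)
            self.w -= lr * (x.T @ (p - y)) / len(y)
            self.b -= lr * float(np.mean(p - y))
        return self

    def predict(self, x):
        # Column vector of probabilities, matching model.predict above.
        return self._proba(x).reshape(-1, 1)

    def evaluate(self, x, y, verbose=0):
        # Return [loss, accuracy] like Keras; only accuracy is used above.
        p = self._proba(x)
        loss = float(np.mean(-(y * np.log(p + 1e-9)
                               + (1 - y) * np.log(1 - p + 1e-9))))
        acc = float(np.mean((p >= 0.5) == np.asarray(y, dtype=bool)))
        return [loss, acc]

def build_model(df):
    # Hypothetical stand-in for the notebook's build_model helper.
    return TinyModel(df.shape[1])

def f_in(x):
    # Hypothetical stand-in for the notebook's input-preprocessing helper.
    return np.asarray(x, dtype="float64")
```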

Visualise permutation feature importance

def get_avg(x, y):
    return model.evaluate(f_in(x), y, verbose=0)[1]

imp = xai.feature_importance(x_test, y_test, get_avg)

imp.head()
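Permutation feature importance works by shuffling one column at a time and measuring how much the metric drops relative to the baseline score. An illustrative sketch of that idea (not xai's implementation):

```python
import numpy as np
import pandas as pd

def permutation_importance_sketch(x, y, metric, random_state=0):
    # Score the untouched data, then re-score with each column shuffled
    # independently; the drop from baseline is that column's importance.
    rng = np.random.default_rng(random_state)
    baseline = metric(x, y)
    drops = {}
    for col in x.columns:
        shuffled = x.copy()
        shuffled[col] = rng.permutation(shuffled[col].to_numpy())
        drops[col] = baseline - metric(shuffled, y)
    return pd.Series(drops).sort_values(ascending=False)
```

A column whose shuffling barely changes the metric contributes little to the model's predictions; a large drop marks a feature the model depends on.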

Identify metric imbalances against all test data

_= xai.metrics_plot(
        y_test, 
        probabilities)

Identify metric imbalances across a specific column

_ = xai.metrics_plot(
    y_test, 
    probabilities, 
    df=x_test_display, 
    cross_cols=["gender"],
    categorical_cols=categorical_cols)

Identify metric imbalances across multiple columns

_ = xai.metrics_plot(
    y_test, 
    probabilities, 
    df=x_test_display, 
    cross_cols=["gender", "ethnicity"],
    categorical_cols=categorical_cols)

Draw confusion matrix

xai.confusion_matrix_plot(y_test, predictions)

Visualise the ROC curve against all test data

_ = xai.roc_plot(y_test, probabilities)

Visualise the ROC curves grouped by a protected column

protected = ["gender", "ethnicity", "age"]
_ = [xai.roc_plot(
    y_test, 
    probabilities, 
    df=x_test_display, 
    cross_cols=[p],
    categorical_cols=categorical_cols) for p in protected]

Visualise accuracy grouped by probability buckets

d = xai.smile_imbalance(
    y_test, 
    probabilities)

Visualise statistical metrics grouped by probability buckets

d = xai.smile_imbalance(
    y_test, 
    probabilities,
    display_breakdown=True)

Visualise benefits of adding manual review on probability thresholds

d = xai.smile_imbalance(
    y_test, 
    probabilities,
    bins=9,
    threshold=0.75,
    manual_review=0.375,
    display_breakdown=False)