
aredier / Trelawney

Licence: MIT
General Interpretability Package

Programming Language: Python

Projects that are alternatives of or similar to Trelawney

Ashengine
A cross-platform 3D engine based on Qt 5.9.7, OpenGL 3.3 and Assimp 4.1.
Stars: ✭ 35 (-36.36%)
Mutual labels:  graphics
Perfect Freehand
Draw perfect pressure-sensitive freehand strokes.
Stars: ✭ 999 (+1716.36%)
Mutual labels:  graphics
Sophus
C++ implementation of Lie Groups using Eigen.
Stars: ✭ 1,048 (+1805.45%)
Mutual labels:  graphics
Gaugekit
Kit for building custom gauges + easy reproducible Apple's style ring gauges.
Stars: ✭ 997 (+1712.73%)
Mutual labels:  graphics
Shadertoy Rs
A desktop client for Shadertoy written in Rust
Stars: ✭ 41 (-25.45%)
Mutual labels:  graphics
Shaderworkshop
Interactive GLSL fragment shaders editor made with Qt
Stars: ✭ 43 (-21.82%)
Mutual labels:  graphics
Cml1
The Configurable Math Library, v1
Stars: ✭ 35 (-36.36%)
Mutual labels:  graphics
Generative Designs
A collection of generative design React components
Stars: ✭ 51 (-7.27%)
Mutual labels:  graphics
Picasso
Homebrew PICA200 shader assembler
Stars: ✭ 41 (-25.45%)
Mutual labels:  graphics
Atrament.js
A small JS library for beautiful drawing and handwriting on the HTML Canvas.
Stars: ✭ 1,045 (+1800%)
Mutual labels:  graphics
Xna.js
WebGL framework strongly inspired by the XNA library
Stars: ✭ 40 (-27.27%)
Mutual labels:  graphics
Grubo
Audio visual experience with Roland Groovebox MC-101 and the Unity game engine
Stars: ✭ 41 (-25.45%)
Mutual labels:  graphics
Jglm
Java OpenGL Mathematics Library
Stars: ✭ 44 (-20%)
Mutual labels:  graphics
Packedrgbmshader
32-bit packed color format with RGBM encoding for shader use
Stars: ✭ 39 (-29.09%)
Mutual labels:  graphics
Helix Toolkit
Helix Toolkit is a collection of 3D components for .NET.
Stars: ✭ 1,050 (+1809.09%)
Mutual labels:  graphics
Contrastiveexplanation
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-34.55%)
Mutual labels:  interpretability
Shaderc
A collection of tools, libraries, and tests for Vulkan shader compilation.
Stars: ✭ 1,016 (+1747.27%)
Mutual labels:  graphics
Ronin
Experimental Graphics Terminal
Stars: ✭ 1,065 (+1836.36%)
Mutual labels:  graphics
Kino
A collection of custom post processing effects for Unity
Stars: ✭ 1,054 (+1816.36%)
Mutual labels:  graphics
Sketch
A Common Lisp framework for the creation of electronic art, visual design, game prototyping, game making, computer graphics, exploration of human-computer interaction, and more.
Stars: ✭ 1,026 (+1765.45%)
Mutual labels:  graphics

=========
trelawney
=========

.. image:: https://img.shields.io/pypi/v/trelawney.svg
    :target: https://pypi.python.org/pypi/trelawney

.. image:: https://img.shields.io/travis/aredier/trelawney.svg
    :target: https://travis-ci.org/aredier/trelawney

.. image:: https://readthedocs.org/projects/trelawney/badge/?version=latest
    :target: https://trelawney.readthedocs.io/en/latest/?badge=latest
    :alt: Documentation Status

.. image:: https://img.shields.io/github/license/skanderkam/trelawney
    :alt: MIT License

Trelawney is a general interpretability package that aims to provide a common API for most modern interpretability methods, in order to shed light on sklearn-compatible models (support for Keras and XGBoost is tested).

Trelawney will try to provide you with two kinds of explanations when possible:

  • a global explanation of the model that highlights the most important features the model uses to make its predictions overall
  • a local explanation of the model that tries to shed light on why a specific model made a specific prediction
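To make the two kinds of output concrete, here is a minimal sketch (plain Python with made-up feature names and numbers, not actual Trelawney output) of their typical shapes:

```python
# A global explanation maps each feature to its overall importance
# across the whole dataset (illustrative values only).
global_explanation = {"age": 0.5, "income": 0.2, "tenure": 0.1}

# A local explanation is one mapping per explained row: the signed
# contribution of each feature to that single prediction.
local_explanations = [
    {"age": 0.10, "income": -0.07, "tenure": 0.02},
    {"age": 0.23, "income": -0.15, "tenure": 0.01},
]

# The most important feature globally:
top_feature = max(global_explanation, key=global_explanation.get)
print(top_feature)  # age
```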

The Trelawney package is built around:

  • some model-specific explainers that use the inner workings of certain types of models to explain them:

    • LogRegExplainer, which uses the weights of your logistic regression to produce global and local explanations of your model
    • TreeExplainer, which uses the paths of your tree (single-tree models only) to produce explanations of the model
  • some model-agnostic explainers that should work with all models:

    • LimeExplainer, which uses the Lime_ package to create local explanations only (the local nature of Lime prohibits it from generating global explanations of a model)
    • ShapExplainer, which uses the SHAP_ package to create local and global explanations of your model
    • SurrogateExplainer, which creates a general surrogate of your model (fitted on the output of your model) using an explainable model (DecisionTreeClassifier and LogisticRegression for now). The explainer then uses the internals of the surrogate model to explain your black-box model, and also reports how well the surrogate model approximates the black-box one
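The surrogate idea in particular can be illustrated with a small self-contained sketch (plain Python, not the real SurrogateExplainer): fit a simple, explainable rule on the black-box model's own predictions, then measure how faithfully it reproduces them, a score often called fidelity:

```python
def black_box(x: float) -> int:
    """Pretend this is an opaque model we cannot inspect."""
    return 1 if x ** 2 > 4 else 0

xs = [-3, -2, -1, 0, 1, 2, 3]
bb_preds = [black_box(x) for x in xs]

def fit_threshold(xs, targets):
    """Toy 'explainable surrogate': a single-threshold rule on |x|,
    fitted on the black box's predictions, not on real labels."""
    best_t, best_fidelity = None, -1.0
    for t in sorted({abs(x) for x in xs}):
        preds = [1 if abs(x) > t else 0 for x in xs]
        fidelity = sum(p == y for p, y in zip(preds, targets)) / len(targets)
        if fidelity > best_fidelity:
            best_t, best_fidelity = t, fidelity
    return best_t, best_fidelity

threshold, fidelity = fit_threshold(xs, bb_preds)
print(threshold, fidelity)  # 2 1.0 -- the rule |x| > 2 mimics the box exactly
```

A fidelity of 1.0 means the surrogate reproduces the black box perfectly on this sample; lower values would warn you that the surrogate's explanation only partially reflects the black-box model.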

Quick Tutorial (30s to Trelawney)
---------------------------------

Here is an example of how to use a Trelawney explainer::

    >>> model = LogisticRegression().fit(X, y)

    >>> # creating and fitting the explainer
    >>> explainer = ShapExplainer()
    >>> explainer.fit(model, X, y)

    >>> # explaining observations
    >>> explainer.explain_local(X_explain)
    [
        {'var_1': 0.1, 'var_2': -0.07, ...},
        ...
        {'var_1': 0.23, 'var_2': -0.15, ...},
    ]
    >>> explanation = explainer.graph_local_explanation(X_explain.iloc[:1, :])

.. image:: http://drive.google.com/uc?export=view&id=1a1kdH8mjGdKiiF_JHR56L2-JeaRStwr2
    :width: 400
    :alt: Local Explanation Graph

::

    >>> explainer.feature_importance(X_explain)
    {'var_1': 0.5, 'var_2': 0.2, ...}
    >>> explanation = explainer.graph_feature_importance(X_explain)

.. image:: http://drive.google.com/uc?export=view&id=1R2NFEU0bcZYpeiFsLZDKYfPkjHz-cHJ_
    :width: 400
    :alt: Feature Importance Graph

FAQ
---

Why should I use Trelawney rather than Lime_ and SHAP_?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


While you can definitely use the Lime and SHAP packages directly (doing so will give you more control over how each package is used), they are very specialized packages with different APIs, graphs and vocabularies. Trelawney offers a unified API, representation and vocabulary for all state-of-the-art explanation methods, so you don't lose time adapting to each new method: just change a class and Trelawney adapts to you.

How do I implement my own interpretation method in the Trelawney framework?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


To implement your own explainer you will need to inherit from the BaseExplainer class and override its three abstract methods, as follows:

.. code-block:: python

    class MyOwnInterpreter(BaseExplainer):

        def fit(self, model: sklearn.base.BaseEstimator,
                x_train: Union[pd.Series, pd.DataFrame, np.ndarray],
                y_train: pd.DataFrame):
            # fit your interpreter with some training data if needed
            pass

        def explain_local(self, x_explain: Union[pd.Series, pd.DataFrame, np.ndarray],
                          n_cols: Optional[int] = None) -> List[Dict[str, float]]:
            # interpret a single prediction of the model
            pass

        def feature_importance(self, x_explain: Union[pd.Series, pd.DataFrame, np.ndarray],
                               n_cols: Optional[int] = None) -> Dict[str, float]:
            # interpret the global importance of at most n_cols features
            # on the predictions over x_explain
            pass
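As a self-contained illustration of that pattern (using plain-Python stand-ins for BaseExplainer and for the sklearn/pandas types, so nothing below is real trelawney code), a toy explainer that ranks features by their variance might look like:

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Optional

class BaseExplainerSketch(ABC):
    """Minimal stand-in for trelawney's BaseExplainer (illustrative only)."""

    @abstractmethod
    def fit(self, model, x_train, y_train): ...

    @abstractmethod
    def explain_local(self, x_explain, n_cols: Optional[int] = None) -> List[Dict[str, float]]: ...

    @abstractmethod
    def feature_importance(self, x_explain, n_cols: Optional[int] = None) -> Dict[str, float]: ...

class VarianceExplainer(BaseExplainerSketch):
    """Toy method: a feature's 'importance' is its variance over x_explain."""

    def fit(self, model, x_train, y_train):
        self.model = model  # nothing to precompute for this toy method
        return self

    def explain_local(self, x_explain, n_cols=None):
        # trivially report each row's raw values as its 'contributions'
        return [dict(row) for row in x_explain]

    def feature_importance(self, x_explain, n_cols=None):
        importance = {}
        for col in x_explain[0]:
            vals = [row[col] for row in x_explain]
            mean = sum(vals) / len(vals)
            importance[col] = sum((v - mean) ** 2 for v in vals) / len(vals)
        ranked = dict(sorted(importance.items(), key=lambda kv: -kv[1]))
        if n_cols is not None:
            ranked = dict(list(ranked.items())[:n_cols])
        return ranked

x = [{"a": 0.0, "b": 1.0}, {"a": 2.0, "b": 1.0}]
imp = VarianceExplainer().fit(model=None, x_train=None, y_train=None).feature_importance(x)
print(imp)  # {'a': 1.0, 'b': 0.0}
```

A real explainer would of course derive its scores from the fitted model rather than from the data alone; the point here is only the three-method shape of the interface.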

You can find more information by reading the documentation of the BaseExplainer class. If possible, don't hesitate to contribute to Trelawney and open a PR.

Coming Soon
-----------

  • Regressor support (PR welcome)
  • Image and text support (PR welcome)

Credits
-------

This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.

.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _audreyr/cookiecutter-pypackage: https://github.com/audreyr/cookiecutter-pypackage
.. _SHAP: https://github.com/slundberg/shap
.. _Lime: https://github.com/marcotcr/lime
