Anchorj

This project provides an efficient Java implementation of the Anchors explanation algorithm for machine learning models.

The initial proposal, "Anchors: High-Precision Model-Agnostic Explanations" by Marco Tulio Ribeiro et al. (2018), can be found here.

The Algorithm

A short description of how the algorithm works is provided in the author's GitHub repository:

An anchor explanation is a rule that sufficiently “anchors” the prediction locally – such that changes to the rest of the feature values of the instance do not matter. In other words, for instances on which the anchor holds, the prediction is (almost) always the same.

The anchor method is able to explain any black box classifier, with two or more classes. All we require is that the classifier implements a function that takes [a data instance] and outputs [an integer] prediction.
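The quoted idea can be made concrete with a small, self-contained toy (this is an illustration of the anchor concept only, not this library's API): a rule-based stand-in classifier, a candidate anchor that fixes one feature, and an empirical estimate of how often the prediction survives perturbation of the remaining features.

```java
import java.util.Random;

// Toy illustration of the anchor idea: the classifier below depends only on
// feature 0, so the rule "feature0 = 7" should anchor the prediction perfectly.
public class AnchorIdeaDemo {
    // Stand-in black-box model: predicts 1 iff feature 0 is at least 5.
    static int classify(int[] instance) {
        return instance[0] >= 5 ? 1 : 0;
    }

    public static void main(String[] args) {
        int[] instance = {7, 3};          // instance to explain
        int label = classify(instance);   // model predicts 1

        // Candidate anchor: "feature0 = 7". Perturb all other features and
        // measure how often the prediction stays the same (the precision).
        Random rng = new Random(42);
        int samples = 1000, matches = 0;
        for (int i = 0; i < samples; i++) {
            int[] perturbed = {instance[0], rng.nextInt(10)}; // feature 0 fixed
            if (classify(perturbed) == label) matches++;
        }
        double precision = matches / (double) samples;
        System.out.println("precision = " + precision); // 1.0: the anchor fully determines the prediction
    }
}
```

Because the stand-in model ignores feature 1, the candidate anchor reaches precision 1.0; for a real model, the algorithm searches for the rule with the best precision/coverage trade-off instead of checking a single hand-picked candidate.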

Why Java?

Java has been chosen as the platform's foundation, since it provides multiple advantages: it integrates well into a large ecosystem and can be used in conjunction with advanced technologies like H2O and Apache Spark.

This implementation furthermore serves as a library on which further approaches can be built. Among others, adapters, interfaces, and APIs are in development to provide platform-independent access.

It is thus expected to see wide adoption among ML projects.

Related Projects

  • This Anchors implementation features several add-ons and optional extensions, which can be found in a dedicated project called AnchorAdapters. Depending on the use case, these can significantly reduce implementation and customization effort. The project aims to include methodological, i.e. default, approaches to common Anchors applications, so that Anchors' drawback of not being application-agnostic is addressed for common domains.
  • Examples of Anchors' usage can be found in the XAI Examples project. It features a ready-to-compile Maven project that can be used to skip the necessary configuration steps.
  • This implementation has been released as an R package and will soon be available on CRAN.

Getting Started

Prerequisites and Installation

No prerequisites or installation steps are required to use the core project. It has no dependencies, and the algorithm may be used by providing the required interfaces.

Using the Algorithm

In order to explain a prediction, one has to use the base Anchors algorithm provided by the AnchorConstruction class. This class may be instantiated by using the AnchorConstructionBuilder.

Mainly, the builder requires an implementation of the ClassificationFunction and the PerturbationFunction, as well as an instance and its label to be explained. These components must all have the same type parameter T. The result may be built as follows:

new AnchorConstructionBuilder<>(classificationFunction, perturbationFunction, labeledInstance, instanceLabel)
        .build()
        .constructAnchor();

The builder offers many more options for constructing the anchor. Among others, the multi-armed bandit algorithm and the coverage calculation function may be customized. Additionally, the algorithm may be configured to use multiple threads.
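To make the two required components more tangible, the sketch below implements them for a tabular int[] instance. The interface shapes here (ClassificationFn, PerturbationFn and their method signatures) are simplified stand-ins invented for illustration, not the library's actual ClassificationFunction and PerturbationFunction signatures; they only mirror the contract described above: classify an instance, and perturb the unanchored features of an instance.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.Set;

// Hypothetical stand-ins for the two components the builder expects.
// The real library interfaces carry more structure than these.
interface ClassificationFn<T> { int classify(T instance); }
interface PerturbationFn<T> { T perturb(T instance, Set<Integer> anchoredFeatures, Random rng); }

public class ComponentSketch {
    public static void main(String[] args) {
        int[][] trainingData = {{1, 8}, {6, 2}, {9, 4}, {3, 7}};

        // Wrap the model: here a trivial threshold rule stands in for it.
        ClassificationFn<int[]> classify = inst -> inst[0] >= 5 ? 1 : 0;

        // Tabular perturbation: keep anchored features fixed and resample the
        // rest from values observed in the training data.
        PerturbationFn<int[]> perturb = (inst, anchored, rng) -> {
            int[] copy = Arrays.copyOf(inst, inst.length);
            for (int f = 0; f < copy.length; f++) {
                if (!anchored.contains(f)) {
                    copy[f] = trainingData[rng.nextInt(trainingData.length)][f];
                }
            }
            return copy;
        };

        int[] instance = {7, 3};
        // Perturb with feature 0 anchored: feature 0 stays 7, so the
        // prediction remains 1 no matter what feature 1 is resampled to.
        int[] perturbed = perturb.perturb(instance, Set.of(0), new Random(1));
        System.out.println("label=" + classify.classify(perturbed)
                + " perturbed=" + Arrays.toString(perturbed));
    }
}
```

In the real library, implementations playing these roles are passed to the AnchorConstructionBuilder as shown above; the resampling-from-training-data strategy is one common choice for tabular perturbation, not the only one.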

Tutorials and Examples

As mentioned above, please refer to the XAI Examples project for ready-to-use application scenarios.

Collaboration

The project is operated and further developed by the viadee Consulting AG in Münster, Westphalia. Results from theses at the WWU Münster and the FH Münster have been incorporated.

  • Further theses are planned; the contact person is Dr. Frank Köhne from viadee. Community contributions to the project are welcome: please open GitHub issues with suggestions (or pull requests), which we can then review as a team.
  • We are looking for further partners who have interesting process data to refine our tooling as well as partners that are simply interested in a discussion about AI in the context of business process automation and explainability.

Authors

License

BSD 3-Clause License

Acknowledgments
