
dmitrykazhdan / concept-based-xai

License: MIT
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI

Programming Languages

Python, Shell

Projects that are alternatives of or similar to concept-based-xai

zennit
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Stars: ✭ 57 (+39.02%)
Mutual labels:  interpretability, explainable-ai, xai, explainability
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+10514.63%)
Mutual labels:  interpretability, explainable-ai, xai, explainability
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-63.41%)
Mutual labels:  interpretability, explainable-ai, xai, explainability
disent
🧶 Modular VAE disentanglement framework for python built with PyTorch Lightning ▸ Including metrics and datasets ▸ With strongly supervised, weakly supervised and unsupervised methods ▸ Easily configured and run with Hydra config ▸ Inspired by disentanglement_lib
Stars: ✭ 41 (+0%)
Mutual labels:  vae, disentanglement, disentangled-representations
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (+1080.49%)
Mutual labels:  interpretability, explainable-ai, explainability
awesome-graph-explainability-papers
Papers about explainability of GNNs
Stars: ✭ 153 (+273.17%)
Mutual labels:  explainable-ai, xai, explainability
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-31.71%)
Mutual labels:  interpretability, xai, explainability
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+168.29%)
Mutual labels:  interpretability, explainable-ai, explainability
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+168.29%)
Mutual labels:  interpretability, explainable-ai, explainability
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (+14.63%)
Mutual labels:  interpretability, explainable-ai, explainability
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+41.46%)
Mutual labels:  interpretability, xai, explainability
Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+36282.93%)
Mutual labels:  interpretability, explainability
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+5763.41%)
Mutual labels:  interpretability, xai
GNNLens2
Visualization tool for Graph Neural Networks
Stars: ✭ 155 (+278.05%)
Mutual labels:  xai, explainability
Awesome Production Machine Learning
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
Stars: ✭ 10,504 (+25519.51%)
Mutual labels:  interpretability, explainability
Learning-From-Rules
Implementation of experiments in paper "Learning from Rules Generalizing Labeled Exemplars" to appear in ICLR2020 (https://openreview.net/forum?id=SkeuexBtDr)
Stars: ✭ 46 (+12.2%)
Mutual labels:  weak-supervision, weakly-supervised-learning
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human in Loop and Visual Analytics.
Stars: ✭ 51 (+24.39%)
Mutual labels:  interpretability, xai
trove
Weakly supervised medical named entity classification
Stars: ✭ 55 (+34.15%)
Mutual labels:  weak-supervision, weakly-supervised-learning
knodle
A PyTorch-based open-source framework that provides methods for improving the weakly annotated data and allows researchers to efficiently develop and compare their own methods.
Stars: ✭ 76 (+85.37%)
Mutual labels:  weak-supervision, weakly-supervised-learning
weasel
Weakly Supervised End-to-End Learning (NeurIPS 2021)
Stars: ✭ 117 (+185.37%)
Mutual labels:  weak-supervision, weakly-supervised-learning

Concept-based XAI Library

CXAI is an open-source library for research on concept-based Explainable AI (XAI).

CXAI supports a variety of models, datasets, and evaluation metrics associated with concept-based approaches:

High-level Specs:

Methods:

Datasets:

To download the datasets, run the script datasets/download_datasets.sh.

Requirements

  • Python 3.7 - 3.8
  • See requirements.txt for the remaining required packages

Installation

To install from source, run:

python setup.py install

This will install the concepts-xai package together with all its dependencies.

To test that the package has been successfully installed, you may run:

import concepts_xai
help("concepts_xai")

to display all the subpackages included from this installation.
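If you prefer a programmatic listing over help(), the standard-library pkgutil module can enumerate a package's submodules. The sketch below demonstrates the pattern on the stdlib json package, since concepts_xai may not be installed in your environment; substitute concepts_xai once installation succeeds:

```python
import pkgutil

# Enumerate the submodules of an installed package via its __path__.
# 'json' stands in for 'concepts_xai' here purely for illustration.
import json

names = [mod.name for mod in pkgutil.iter_modules(json.__path__)]
print(names)  # submodule names, e.g. 'decoder', 'encoder', ...
```

For concepts_xai, the same call should surface the subpackages listed below (datasets, evaluation, methods, utils, and so on).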

Subpackages

  • datasets: datasets to use, including task functions.
  • evaluation: different evaluation metrics to use for evaluating our methods.
  • experiments: experimental setups (to be added soon)
  • methods: defines the concept-based methods. Note: SSCC defines wrappers around these methods that turn them into semi-supervised concept-labelling methods.
  • utils: contains utility functions for model creation as well as data management.

Citing

If you find this code useful in your research, please consider citing:

@article{kazhdan2021disentanglement,
  title={Is Disentanglement all you need? Comparing Concept-based \& Disentanglement Approaches},
  author={Kazhdan, Dmitry and Dimanov, Botty and Terre, Helena Andres and Jamnik, Mateja and Li{\`o}, Pietro and Weller, Adrian},
  journal={arXiv preprint arXiv:2104.06917},
  year={2021}
}

This work has been presented at the RAI, WeaSuL, and RobustML workshops, at The Ninth International Conference on Learning Representations (ICLR 2021).
