yewsiang / ConceptBottleneck

License: MIT
Concept Bottleneck Models, ICML 2020

Programming Languages

  • python: 139,335 projects (#7 most used programming language)
  • shell: 77,523 projects

Projects that are alternatives of or similar to ConceptBottleneck

Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+4682.42%)
Mutual labels:  interpretability, interpretable-machine-learning
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (-48.35%)
Mutual labels:  interpretability, interpretable-machine-learning
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-83.52%)
Mutual labels:  interpretability, interpretable-machine-learning
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-75.82%)
Mutual labels:  interpretability, interpretable-machine-learning
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+2541.76%)
Mutual labels:  interpretability, interpretable-machine-learning
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-81.32%)
Mutual labels:  interpretability, interpretable-machine-learning
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-43.96%)
Mutual labels:  interpretability, interpretable-machine-learning
ALPS 2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (-39.56%)
Mutual labels:  interpretability
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (+431.87%)
Mutual labels:  interpretability
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (-36.26%)
Mutual labels:  interpretability
XAIatERUM2020
Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (-42.86%)
Mutual labels:  interpretable-machine-learning
Awesome-XAI-Evaluation
Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
Stars: ✭ 57 (-37.36%)
Mutual labels:  interpretable-machine-learning
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Stars: ✭ 33 (-63.74%)
Mutual labels:  interpretability
kernel-mod
NeurIPS 2018. Linear-time model comparison tests.
Stars: ✭ 17 (-81.32%)
Mutual labels:  interpretability
mmn
Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-57.14%)
Mutual labels:  interpretability
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+38.46%)
Mutual labels:  interpretability
NamingThings
Content on tips, tricks, advice, and practices for naming things in software/technology
Stars: ✭ 31 (-65.93%)
Mutual labels:  concepts
tf retrieval baseline
A Tensorflow retrieval (space embedding) baseline. Metric learning baseline on CUB and Stanford Online Products.
Stars: ✭ 39 (-57.14%)
Mutual labels:  cub-dataset
adversarial-robustness-public
Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (-46.15%)
Mutual labels:  interpretability
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-69.23%)
Mutual labels:  interpretability

Concept Bottleneck Models

[Teaser figure]

This repository contains code and scripts for the following paper:

Concept Bottleneck Models

Pang Wei Koh*, Thao Nguyen*, Yew Siang Tang*, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang

ICML 2020

The experiments use the following datasets:

  • CUB (Caltech-UCSD Birds-200-2011), used for the bird identification task
  • TravelingBirds (CUB_fixed), a variant of CUB with altered image backgrounds
  • The NIH Osteoarthritis Initiative (OAI) x-ray dataset, used for the arthritis grading task

To download the TravelingBirds dataset, which we use to test robustness to background shifts, please download the CUB_fixed folder from this CodaLab bundle by clicking on the download button. If you use this dataset, please also cite the original CUB and Places datasets.

The NIH Osteoarthritis Initiative (OAI) dataset requires an application for data access, so we are unable to provide the raw data here. To access that data, please first obtain data access permission from the Osteoarthritis Initiative, and then refer to this GitHub repository for data processing code. If you use it, please cite the Pierson et al. paper corresponding to that repository as well.

Here, we focus on scripts replicating our results on CUB, which is public. We provide an executable, Dockerized version of those experiments on CodaLab.

Abstract

We seek to learn models that we can interact with using high-level concepts: would the model predict severe arthritis if it thinks there is a bone spur in the x-ray? State-of-the-art models today do not typically support the manipulation of concepts like "the existence of bone spurs", as they are trained end-to-end to go directly from raw input (e.g., pixels) to output (e.g., arthritis severity). We revisit the classic idea of first predicting concepts that are provided at training time, and then using these concepts to predict the label. By construction, we can intervene on these concept bottleneck models by editing their predicted concept values and propagating these changes to the final prediction. On x-ray grading and bird identification, concept bottleneck models achieve competitive accuracy with standard end-to-end models, while enabling interpretation in terms of high-level clinical concepts ("bone spurs") or bird attributes ("wing color"). These models also allow for richer human-model interaction: accuracy improves significantly if we can correct model mistakes on concepts at test time.

[Teaser figure]
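
As a rough illustration of this setup, the following toy PyTorch sketch (not the code in this repository, which trains CNN-based models on images) composes a concept predictor with a label predictor, and implements test-time intervention by overwriting selected predicted concepts before the label is computed:

import torch
import torch.nn as nn

class ToyConceptBottleneckModel(nn.Module):
    """x -> concepts -> label, with optional test-time concept intervention."""

    def __init__(self, input_dim, n_concepts, n_classes):
        super().__init__()
        # g: maps raw inputs to concept logits (the "bottleneck")
        self.concept_predictor = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_concepts),
        )
        # f: maps (predicted or corrected) concepts to the label
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x, intervention=None):
        concept_logits = self.concept_predictor(x)
        concepts = torch.sigmoid(concept_logits)
        if intervention is not None:
            # Test-time intervention: overwrite selected predicted concepts
            # with known values; `mask` is 1 where we intervene.
            mask, true_values = intervention
            concepts = concepts * (1 - mask) + true_values * mask
        return concept_logits, self.label_predictor(concepts)

# Example: correct concept 0 for every example at test time.
model = ToyConceptBottleneckModel(input_dim=32, n_concepts=10, n_classes=5)
x = torch.randn(4, 32)
mask = torch.zeros(4, 10)
mask[:, 0] = 1.0
true_values = torch.ones(4, 10)  # pretend the true value of concept 0 is 1
_, label_logits = model(x, intervention=(mask, true_values))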

Prerequisites

We used the same environment as CodaLab's default GPU setting; to install the dependencies, please run pip install -r requirements.txt. The main packages are:

  • matplotlib 3.1.1
  • numpy 1.17.1
  • pandas 0.25.1
  • Pillow 6.2.2
  • scipy 1.3.1
  • scikit-learn 0.21.3
  • torch 1.1.0
  • torchvision 0.4.0

Note that we updated Pillow and removed tensorflow-gpu and tensorboard from requirements.txt.

Docker

You can pull the Docker image directly from Docker Hub.

docker pull codalab/default-gpu

Usage

Standard task training for CUB can be run using scripts/experiments.sh, and the CodaLab scripts can be run using scripts/codalab_experiments.sh. More information about data processing and other evaluations can be found in the README in CUB/.
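
Conceptually, a training step for a (jointly trained) bottleneck model optimizes a label loss plus a weighted concept loss. The sketch below is illustrative only; the helper name and the lambda value are placeholders, and the actual scripts additionally handle the image backbones, data loading, and the different training schemes:

import torch.nn as nn

# Illustrative joint training step: `model` returns (concept_logits, label_logits)
# as in the toy sketch above; `lambda_concept` weights the concept supervision.
def joint_training_step(model, optimizer, x, concept_targets, labels,
                        lambda_concept=0.01):
    concept_loss_fn = nn.BCEWithLogitsLoss()
    label_loss_fn = nn.CrossEntropyLoss()

    optimizer.zero_grad()
    concept_logits, label_logits = model(x)
    loss = (label_loss_fn(label_logits, labels)
            + lambda_concept * concept_loss_fn(concept_logits, concept_targets))
    loss.backward()
    optimizer.step()
    return loss.item()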
