
copenlu / ALPS_2021

Licence: other
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021

Programming Languages

Jupyter Notebook: 11667 projects
Python: 139335 projects (#7 most used programming language)

Projects that are alternatives of or similar to ALPS 2021

thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+129.09%)
Mutual labels:  interpretability, explainability, captum
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+7812.73%)
Mutual labels:  interpretability, explainability
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-16.36%)
Mutual labels:  interpretability, explainability
Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+27021.82%)
Mutual labels:  interpretability, explainability
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+7798.18%)
Mutual labels:  colab, interpretability
zennit
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Stars: ✭ 57 (+3.64%)
Mutual labels:  interpretability, explainability
video coloriser
Pytorch Convolutional Neural Net and GAN based video coloriser that converts black and white video to colorised video.
Stars: ✭ 29 (-47.27%)
Mutual labels:  colab, colab-notebook
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+1465.45%)
Mutual labels:  interpretability, captum
Torrent-To-Google-Drive-Downloader
Simple notebook to stream torrent files to Google Drive using Google Colab and python3.
Stars: ✭ 256 (+365.45%)
Mutual labels:  colab, colab-notebook
TFLite-ModelMaker-EfficientDet-Colab-Hands-On
Hands-on materials for object detection with TensorFlow Lite Model Maker.
Stars: ✭ 15 (-72.73%)
Mutual labels:  colab, colab-notebook
Tensorflow2-ObjectDetectionAPI-Colab-Hands-On
Hands-on materials for the Tensorflow2 Object Detection API.
Stars: ✭ 33 (-40%)
Mutual labels:  colab, colab-notebook
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (-14.55%)
Mutual labels:  interpretability, explainability
steam-stylegan2
Train a StyleGAN2 model on Colaboratory to generate Steam banners.
Stars: ✭ 30 (-45.45%)
Mutual labels:  colab, colab-notebook
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+134.55%)
Mutual labels:  interpretability, explainability
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+100%)
Mutual labels:  interpretability, explainability
Awesome Production Machine Learning
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
Stars: ✭ 10,504 (+18998.18%)
Mutual labels:  interpretability, explainability
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+100%)
Mutual labels:  interpretability, explainability
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-72.73%)
Mutual labels:  interpretability, explainability
MineColab
Run Minecraft Server on Google Colab.
Stars: ✭ 135 (+145.45%)
Mutual labels:  colab, colab-notebook
keras-buoy
Keras wrapper that autosaves what ModelCheckpoint cannot.
Stars: ✭ 22 (-60%)
Mutual labels:  colab, colab-notebook

ALPS_2021 - LAB 2: XAI in NLP - January 22 2021

Repository for the Explainable AI track in the ALPS winter school 2021 - schedule.

This lab consists of two parts: one on explainability and one on explorative interpretability. Try to split your time between the two parts.

Lab 2.1

The first part of the lab focuses on explainability for Natural Language Processing Models. In this part, we will lay the foundations of post-hoc explainability techniques and ways of evaluating them.

Lab 2.1 code

CoLAB <- copy this Colab notebook and add code to it for the exercises. CoLAB Solutions

For this notebook of the lab, we encourage you to work in groups, so that you can split the work and discuss the outcomes.

Goals of LAB 2.1:

  • learn how to implement two basic and commonly used types of gradient-based explainability techniques (see the sketch after this list)
  • learn how to implement an approximation-based explainability technique
  • exercise how to apply explainability techniques to discover flaws of machine learning models and construct adversarial examples with them
  • learn how to evaluate explainability techniques with common diagnostic properties (based on this paper)
  • exercise using the diagnostic properties to find which architecture parameters of a model make it harder to explain
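
For a flavour of the gradient-based techniques, below is a minimal, self-contained sketch of Gradient x Input saliency on a toy PyTorch text classifier. The model, sizes, and names are illustrative assumptions, not the notebook's actual setup; Captum offers equivalent attributors (e.g. InputXGradient) if you prefer a library.

import torch
import torch.nn as nn

# Toy text classifier: embedding -> mean pooling -> linear head (illustrative only).
class ToyClassifier(nn.Module):
    def __init__(self, vocab_size=100, emb_dim=16, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.head = nn.Linear(emb_dim, n_classes)

    def forward(self, embedded):                 # takes embeddings, not token ids
        return self.head(embedded.mean(dim=1))

def gradient_x_input(model, input_ids, target_class):
    """Gradient x Input: per-token saliency = sum over the embedding dimension
    of embedding * (d logit_target / d embedding)."""
    embedded = model.embedding(input_ids)        # (batch, seq_len, emb_dim)
    embedded.retain_grad()                       # keep the gradient of a non-leaf tensor
    logits = model(embedded)
    logits[0, target_class].backward()
    return (embedded * embedded.grad).sum(dim=-1).detach()  # (batch, seq_len) saliency

model = ToyClassifier()
ids = torch.randint(0, 100, (1, 7))              # one random sequence of 7 token ids
print(gradient_x_input(model, ids, target_class=1))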

If you find this code useful for your research, please consider citing:

@inproceedings{atanasova2020diagnostic,
  title = {A Diagnostic Study of Explainability Techniques for Text Classification},
  author = {Pepa Atanasova and Jakob Grue Simonsen and Christina Lioma and Isabelle Augenstein},
  booktitle = {Proceedings of EMNLP},
  publisher = {Association for Computational Linguistics},
  year = {2020}
}

Lab 2.2

The second lab focuses on explorative interpretability via activation maximization, i.e. TX-Ray https://arxiv.org/abs/1912.00982. Activation maximization works for supervised and self-/un-supervised settings alike, but the lab focuses on analyzing CNN filters in a simple supervised setting.

Lab 2.2 code

CoLAB2 <- copy this Colab notebook and add code to it for the exercises. There are two types of exercises:

Familiarization exercises: to 'play with and understand' the technique. These allow quickly changing data collection and visualization parameters, and are intended for explorative analysis.

Advanced Exercises: these are optional and concern applications of the technique. They have no solutions, but provide solution outlines (starter code): Opt-Ex1: XAI-based pruning with hooks (see the sketch below), Opt-Ex2: influence of overparameterization (a wider CNN with more filters), Opt-Ex3: filter redundancy. Opt-Ex2 and Opt-Ex3 belong together.
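
As a starting point for Opt-Ex1, the following sketch shows only the basic mechanism: a forward hook that zeroes selected filter activations so downstream layers never see them. The layer sizes and the hard-coded filters_to_prune are placeholder assumptions; in the exercise the pruned filters should be chosen by an XAI-based importance criterion.

import torch
import torch.nn as nn

# Hypothetical choice of filters to silence; in Opt-Ex1 this would come from
# an explainability-based importance score, not be hard-coded.
filters_to_prune = [2, 5]

def prune_hook(module, inputs, output):
    output = output.clone()
    output[:, filters_to_prune, :] = 0.0      # zero out the selected filter maps
    return output                             # a returned tensor replaces the module output

conv = nn.Conv1d(in_channels=16, out_channels=8, kernel_size=3, padding=1)
handle = conv.register_forward_hook(prune_hook)

x = torch.randn(1, 16, 12)                    # (batch, emb_dim, seq_len)
out = conv(x)                                 # pruned filters are now all zeros
print(out[0, filters_to_prune].abs().sum())   # tensor(0.)
handle.remove()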

Goals of LAB 2.2:

  • learn how to explore and interpret activations in NNs using 'activation maximization' principles
  • learn how to extract activations via forward hooks (see the sketch after this list)
  • exercise how to usefully interpret and visualize activation behaviour
  • exercise how to prune activations -- advanced
  • analyze neuron/filter redundancy, specialization, and generalization -- advanced
  • Overall: explore/develop ideas towards 'model understanding' -- see https://arxiv.org/abs/1907.10739 for a great introduction to 'decision understanding' vs. 'model understanding'
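
To make the forward-hook goal concrete, here is a minimal sketch (under illustrative assumptions: a toy 1D-CNN and random input) of caching filter activations with a hook and reading off, per filter, the token position that activates it most strongly -- the basic activation-maximization view. The notebook's model, data, and visualizations are richer.

import torch
import torch.nn as nn

# Toy 1D-CNN text encoder standing in for the lab's supervised model.
class ToyCNN(nn.Module):
    def __init__(self, vocab_size=100, emb_dim=16, n_filters=8, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.head = nn.Linear(n_filters, n_classes)

    def forward(self, input_ids):
        x = self.embedding(input_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        feats = torch.relu(self.conv(x))               # (batch, n_filters, seq_len)
        return self.head(feats.max(dim=-1).values)

activations = {}

def save_activations(module, inputs, output):
    activations["conv"] = output.detach()              # cache the filter activations

model = ToyCNN()
handle = model.conv.register_forward_hook(save_activations)
model(torch.randint(0, 100, (1, 12)))                  # one random 12-token sequence
handle.remove()

# For each filter, find the token position with the strongest activation.
acts = torch.relu(activations["conv"][0])              # (n_filters, seq_len)
max_vals, max_pos = acts.max(dim=-1)
for f, (v, p) in enumerate(zip(max_vals.tolist(), max_pos.tolist())):
    print(f"filter {f}: max activation {v:.3f} at token position {p}")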

If you find this code useful for your research, please consider citing:

@inproceedings{Rethmeier19TX-Ray,
  title = {TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP},
  author = {Rethmeier, Nils and Kumar Saxena, Vageesh and Augenstein, Isabelle},
  booktitle = {Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI)},
  year = {2020}
}