
fredhohman / summit

License: MIT
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations

Programming Languages

JavaScript
HTML
CSS
Shell

Projects that are alternatives of or similar to summit

xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-46.32%)
Mutual labels:  interpretability, deep-learning-visualization
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-50.53%)
Mutual labels:  interpretability
adversarial-robustness-public
Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (-48.42%)
Mutual labels:  interpretability
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (-4.21%)
Mutual labels:  interpretability
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Stars: ✭ 33 (-65.26%)
Mutual labels:  interpretability
free-lunch-saliency
Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-84.21%)
Mutual labels:  interpretability
EgoCNN
Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-83.16%)
Mutual labels:  interpretability
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (+64.21%)
Mutual labels:  interpretability
Visualizing-CNNs-for-monocular-depth-estimation
Official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"
Stars: ✭ 120 (+26.32%)
Mutual labels:  interpretability
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-82.11%)
Mutual labels:  interpretability
mllp
Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-84.21%)
Mutual labels:  interpretability
meg
Molecular Explanation Generator
Stars: ✭ 14 (-85.26%)
Mutual labels:  interpretability
partial dependence
Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-75.79%)
Mutual labels:  interpretability
Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network, with examples for DETR and VQA.
Stars: ✭ 484 (+409.47%)
Mutual labels:  interpretability
zennit
Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (-40%)
Mutual labels:  interpretability
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-70.53%)
Mutual labels:  interpretability
mmn
Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-58.95%)
Mutual labels:  interpretability
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+806.32%)
Mutual labels:  interpretability
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+35.79%)
Mutual labels:  interpretability
twic
Topic Words in Context (TWiC) is a highly interactive, browser-based visualization for MALLET topic models
Stars: ✭ 51 (-46.32%)
Mutual labels:  interactive-visualization

Summit

Summit is an interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions.
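
Conceptually, the activation summary for a class can be approximated by recording how strongly each channel of a convolutional layer fires across that class's images, then ranking the most characteristic channels. The minimal PyTorch sketch below illustrates that idea only; it is not Summit's actual pipeline (the real computation lives in the summit-notebooks repository), and the model (torchvision's GoogLeNet standing in for InceptionV1), the layer, and the pooling choices are all illustrative assumptions.

    import torch
    import torchvision.models as models

    # GoogLeNet as an illustrative stand-in for the InceptionV1 network.
    model = models.googlenet(weights="DEFAULT").eval()

    captured = {}

    def hook(_module, _inputs, output):
        # Max-pool each channel over space: one "how strongly did this
        # feature fire anywhere in the image?" scalar per channel.
        captured["channel_max"] = output.detach().amax(dim=(2, 3))

    # Illustrative layer choice (an assumption, not taken from the paper).
    handle = model.inception4e.register_forward_hook(hook)

    def summarize_class(images):
        # images: (N, 3, 224, 224) batch of preprocessed same-class images.
        with torch.no_grad():
            model(images)
        # Average peak activation per channel over the class's images.
        return captured["channel_max"].mean(dim=0)

    scores = summarize_class(torch.randn(8, 3, 224, 224))  # stand-in data
    top_channels = scores.topk(5).indices  # most characteristic channels
    handle.remove()

The attribution half of the summarization would additionally estimate how such channels influence one another and the final prediction; see the paper for how Summit computes and visualizes both summaries.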

🏔️ Live demo: fredhohman.com/summit
📘 Paper: https://fredhohman.com/papers/19-summit-vast.pdf
🎥 Video: https://youtu.be/J4GMLvoH1ZU
💻 Code: https://github.com/fredhohman/summit
📺 Slides: https://fredhohman.com/slides/19-summit-vast-slides.pdf
🎤 Recording: https://vimeo.com/368704428

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng (Polo) Chau
IEEE Transactions on Visualization and Computer Graphics (TVCG, Proc. VAST'19). 2020.

Summit overview YouTube video

Live Demo

For a live demo, visit: fredhohman.com/summit.

Other Repositories

For the Summit notebook code, visit: summit-notebooks.
For the Summit data, visit: summit-data.

Running Locally

Download or clone this repository:

git clone https://github.com/fredhohman/summit.git

Download the data from summit-data:

git clone https://github.com/fredhohman/summit-data.git

Place summit-data's data folder in the top level of the summit repo, so the layout looks roughly like the sketch below (file names other than the data folder are illustrative).
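
    summit/
    ├── data/           ← copied from summit-data
    ├── package.json
    └── ...

Then, within summit, run: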

npm install     # install dependencies
npm run build   # build the application
npm run start   # serve Summit locally

Requirements

Summit requires npm (which ships with Node.js) to run.

License

MIT License. See LICENSE.md.

Citation

@article{hohman2020summit,
  title={Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations},
  author={Hohman, Fred and Park, Haekyu and Robinson, Caleb and Chau, Duen Horng},
  journal={IEEE Transactions on Visualization and Computer Graphics (TVCG)},
  year={2020},
  publisher={IEEE},
  url={https://fredhohman.com/summit/}
}

Contact

For questions or support, open an issue or contact Fred Hohman.
