seilna / CNN-Units-in-NLP

License: MIT License
✂️ Repository for our ICLR 2019 paper: Discovery of Natural Language Concepts in Individual Units of CNNs

Programming Languages

  • Python
  • Shell
  • JavaScript

Projects that are alternatives of or similar to CNN-Units-in-NLP

InterpretableCNN
No description or website provided.
Stars: ✭ 36 (+38.46%)
Mutual labels:  interpretable-deep-learning
path explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
Stars: ✭ 151 (+480.77%)
Mutual labels:  interpretable-deep-learning
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (+80.77%)
Mutual labels:  interpretable-deep-learning
ShapleyExplanationNetworks
Implementation of the paper "Shapley Explanation Networks"
Stars: ✭ 62 (+138.46%)
Mutual labels:  interpretable-deep-learning
rl singing voice
Unsupervised Representation Learning for Singing Voice Separation
Stars: ✭ 18 (-30.77%)
Mutual labels:  interpretable-deep-learning
ISeeU
ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU
Stars: ✭ 20 (-23.08%)
Mutual labels:  interpretable-deep-learning
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+323.08%)
Mutual labels:  interpretable-deep-learning
m-phate
Multislice PHATE for tensor embeddings
Stars: ✭ 54 (+107.69%)
Mutual labels:  interpretable-deep-learning
redunet paper
Official NumPy Implementation of Deep Networks from the Principle of Rate Reduction (2021)
Stars: ✭ 49 (+88.46%)
Mutual labels:  interpretable-deep-learning
self critical vqa
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (+50%)
Mutual labels:  interpretable-deep-learning
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+9146.15%)
Mutual labels:  interpretable-deep-learning
Pytorch Grad Cam
Many Class Activation Map methods implemented in Pytorch for CNNs and Vision Transformers. Including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM and XGrad-CAM
Stars: ✭ 3,814 (+14569.23%)
Mutual labels:  interpretable-deep-learning
Layerwise-Relevance-Propagation
Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers
Stars: ✭ 78 (+200%)
Mutual labels:  interpretable-deep-learning

Overview

This repository contains the implementation of our ICLR 2019 paper, Discovery of Natural Language Concepts in Individual Units of CNNs.

TL;DR: Individual units of deep CNNs trained on NLP tasks (e.g. translation, classification) can act as natural language concept detectors.

This work concerns the interpretability of deep neural networks. We expect it to shed useful light on how the representations that deep CNNs learn on language tasks encode the given text.

We show that the information in the given text is not simply distributed across all units of the representation. We observe and quantify that even a single unit can act as a detector for a natural language concept (e.g. a morpheme, word, or phrase).
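
To illustrate what it means for a single unit to act as a concept detector, here is a minimal Python sketch. It is not part of this repository, and the function name and toy data are hypothetical; it simply compares one unit's mean activation on sentences that contain a candidate concept against sentences that do not:

# Minimal sketch, not the paper's code: measure how selectively one
# convolutional unit responds to sentences containing a candidate concept.
import numpy as np

def unit_selectivity(unit_activations, sentences, concept):
    # Gap between the unit's mean activation on sentences with vs.
    # without `concept`; a large gap suggests a concept detector.
    acts = np.asarray(unit_activations, dtype=float)
    has = np.array([concept in s.split() for s in sentences])
    if has.all() or not has.any():
        return 0.0  # concept appears in all or no sentences: uninformative
    return acts[has].mean() - acts[~has].mean()

# Toy usage: a unit that fires strongly on sentences containing "three".
sentences = ["three dogs ran", "the dog ran", "i saw three cats", "hello world"]
activations = [0.9, 0.1, 0.8, 0.05]
print(unit_selectivity(activations, sentences, "three"))  # ~0.78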


Visualizing Individual Units

In this work, we align three natural language concepts with each unit, and most units are selectively responsive to the concepts aligned with them; a sketch of the alignment idea follows below. For the full results, see Optional-Full Visualization Results.
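
To convey the alignment idea, here is a minimal Python sketch. It is not the paper's exact procedure (which also considers morphemes and phrases, not only words), and all names and data are hypothetical: rank sentences by the unit's activation, then look at which candidate concepts recur among the top-activated sentences.

# Minimal sketch of concept alignment: find words that recur in the
# sentences that most strongly activate a given unit.
from collections import Counter

def top_concepts(activations, sentences, k=3, n_concepts=3):
    # Take the k sentences with the highest unit activation, then
    # return the n_concepts most frequent words among them.
    ranked = sorted(zip(activations, sentences), reverse=True)
    top_sentences = [s for _, s in ranked[:k]]
    counts = Counter(w for s in top_sentences for w in s.lower().split())
    return [w for w, _ in counts.most_common(n_concepts)]

# Toy usage: a unit that responds to wh-question sentences.
sentences = ["when is the meeting", "when did it start", "tell me when", "a red car"]
activations = [0.95, 0.90, 0.85, 0.10]
print(top_concepts(activations, sentences))  # 'when' ranks first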

Natural Language Concepts

Concepts that go beyond natural language form

We also discovered that several units tend to capture concepts that go beyond natural language form. Although such concepts are relatively hard to quantify, we believe that investigating them further is an interesting future direction. Some units that capture these abstract concepts are visualized below:

  • Number
  • Number + Time
  • Number + Question
  • Quantity
  • Wh-questions
  • A Demonstrative Pronoun
  • Similar Meaning or Role
  • Polarity (Positive)
  • Polarity (Negative)


Run

If you want to see the results without running the code, skip this part and see Optional-Full Visualization Results.

Prerequisites

  • Python 2.7
  • Anaconda (Python 2.7 version; latest release recommended)

Download

  • Clone the code from GitHub.
git clone https://github.com/seilna/CNN-Units-in-NLP.git
  • Create the environment via conda & download the spaCy English model
conda env create -f environment.yml
conda activate iclr_19_na
python -m spacy download en
  • Download training data & pretrained models (requires ~160GB of disk space)
cd script
bash setup.sh 

Running Visualization Code

cd script
bash run.sh 

This will save the visualization results at visualization/.

Alternatively, skip to Optional-Full Visualization Results.

Optional-Full Visualization Results

cd script
bash download_visualization.sh

or download the results directly via the Google Drive link.


Reference

If you find the code useful, please cite the following paper.

@inproceedings{Na:ICLR:2019,
  title = {{Discovery of Natural Language Concepts in Individual Units of CNNs}},
  author = {Seil Na and Yo Joong Choe and Dong-Hyun Lee and Gunhee Kim},
  booktitle = {International Conference on Learning Representations},
  year = {2019},
  url = {https://openreview.net/forum?id=S1EERs09YQ},
}

Acknowledgements

Each model used in our experiments is implemented based on this and this repository. We thank the authors.

We also appreciate Insu Jeon, Jaemin Cho, Sewon Min, Yunseok Jang and the anonymous reviewers for their helpful comments and discussions. This work was supported by Kakao and Kakao Brain corporations, IITP grant funded by the Korea government (MSIT) (No. 2017-0-01772) and Creative Pioneering Researchers Program through Seoul National University.


Contact

Have any questions? Please contact:

[email protected]
