jessevig / Bertviz

License: Apache-2.0
Tool for visualizing attention in the Transformer model (BERT, GPT-2, Albert, XLNet, RoBERTa, CTRL, etc.)

Programming Languages

Python
139,335 projects - #7 most used programming language
JavaScript
184,084 projects - #8 most used programming language
Jupyter Notebook
11,667 projects

Projects that are alternatives of or similar to Bertviz

Nlp Tutorial
Natural Language Processing Tutorial for Deep Learning Researchers
Stars: ✭ 9,895 (+187.39%)
Mutual labels:  jupyter-notebook, natural-language-processing, transformer, bert
Transformer-QG-on-SQuAD
Implement Question Generator with SOTA pre-trained Language Models (RoBERTa, BERT, GPT, BART, T5, etc.)
Stars: ✭ 28 (-99.19%)
Mutual labels:  bert, roberta, gpt2
les-military-mrc-rank7
LES Cup: 2nd National "Military Intelligence Machine Reading" Challenge - Rank 7 solution
Stars: ✭ 37 (-98.93%)
Mutual labels:  transformer, bert, roberta
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (-99.3%)
Mutual labels:  transformer, bert, roberta
vietnamese-roberta
A Robustly Optimized BERT Pretraining Approach for Vietnamese
Stars: ✭ 22 (-99.36%)
Mutual labels:  transformer, bert, roberta
tensorflow-ml-nlp-tf2
Hands-on materials for "Natural Language Processing with TensorFlow 2 and Machine Learning (from logistic regression to BERT and GPT-3)"
Stars: ✭ 245 (-92.88%)
Mutual labels:  transformer, bert, gpt2
Text-Summarization
Abstractive and Extractive Text summarization using Transformers.
Stars: ✭ 38 (-98.9%)
Mutual labels:  bert, roberta, gpt2
transformer-models
Deep Learning Transformer models in MATLAB
Stars: ✭ 90 (-97.39%)
Mutual labels:  transformer, bert, gpt2
Question generation
Neural question generation using transformers
Stars: ✭ 356 (-89.66%)
Mutual labels:  jupyter-notebook, natural-language-processing, transformer
Vietnamese Electra
Electra pre-trained model using Vietnamese corpus
Stars: ✭ 55 (-98.4%)
Mutual labels:  jupyter-notebook, natural-language-processing, transformer
Roberta zh
Pre-trained Chinese RoBERTa models: RoBERTa for Chinese
Stars: ✭ 1,953 (-43.28%)
Mutual labels:  bert, roberta, gpt2
Transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Stars: ✭ 55,742 (+1519%)
Mutual labels:  natural-language-processing, transformer, bert
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (-6.8%)
Mutual labels:  jupyter-notebook, natural-language-processing, bert
Germanwordembeddings
Toolkit to obtain and preprocess German corpora, train models using word2vec (gensim), and evaluate them with generated test sets
Stars: ✭ 189 (-94.51%)
Mutual labels:  jupyter-notebook, natural-language-processing
Notebooks
Jupyter Notebooks with Deep Learning Tutorials
Stars: ✭ 188 (-94.54%)
Mutual labels:  jupyter-notebook, natural-language-processing
Pytorch graph Rel
A PyTorch implementation of GraphRel
Stars: ✭ 204 (-94.07%)
Mutual labels:  jupyter-notebook, natural-language-processing
Hardware Aware Transformers
[ACL 2020] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing
Stars: ✭ 206 (-94.02%)
Mutual labels:  natural-language-processing, transformer
Texar
Toolkit for Machine Learning, Natural Language Processing, and Text Generation, in TensorFlow. This is part of the CASL project: http://casl-project.ai/
Stars: ✭ 2,236 (-35.06%)
Mutual labels:  natural-language-processing, bert
Aind Nlp
Coding exercises for the Natural Language Processing concentration, part of Udacity's AIND program.
Stars: ✭ 202 (-94.13%)
Mutual labels:  jupyter-notebook, natural-language-processing
Graph Convolution Nlp
Graph Convolution Network for NLP
Stars: ✭ 208 (-93.96%)
Mutual labels:  jupyter-notebook, natural-language-processing

BertViz

BertViz is a tool for visualizing attention in the Transformer model, supporting most models from the transformers library (BERT, GPT-2, XLNet, RoBERTa, XLM, CTRL, BART, etc.). It extends the Tensor2Tensor visualization tool by Llion Jones and the transformers library from HuggingFace.

⚡️ Quickstart | 🕹️ Colab tutorial | 📖 Documentation | ✍️ Blog post | 🔬 Paper

Quick Tour

Head View

The head view visualizes the attention patterns produced by one or more attention heads in a given transformer layer. It is based on the excellent Tensor2Tensor visualization tool by Llion Jones.

🕹 Try out this interactive Colab Notebook with the head view pre-loaded.

[Head view visualization]

The head view supports most models from the Transformers library. Example notebooks:
BERT: [Notebook] [Colab]
GPT-2: [Notebook] [Colab]
XLNet: [Notebook]
RoBERTa: [Notebook]
XLM: [Notebook]
ALBERT: [Notebook]
DistilBERT: [Notebook]
BART (encoder-decoder): [Notebook]

Model View

The model view provides a bird's-eye view of attention across all of the model's layers and heads.

🕹 Try out this interactive Colab Notebook with the model view pre-loaded.

[Model view visualization]

The model view supports most models from the Transformers library. Examples:
BERT: [Notebook] [Colab]
GPT-2: [Notebook] [Colab]
XLNet: [Notebook]
RoBERTa: [Notebook]
XLM: [Notebook]
ALBERT: [Notebook]
DistilBERT: [Notebook]
BART (encoder-decoder): [Notebook]

Neuron View

The neuron view visualizes the individual neurons in the query and key vectors and shows how they are used to compute attention.

🕹 Try out this interactive Colab Notebook with the neuron view pre-loaded.

[Neuron view visualization]

The neuron view supports the following three models:
BERT: [Notebook] [Colab]
GPT-2: [Notebook] [Colab]
RoBERTa: [Notebook]

⚡️ Getting Started

Installation

pip install bertviz

You must also have Jupyter and ipywidgets installed in order to run BertViz in a notebook:

pip install jupyterlab
pip install ipywidgets

For more details on installing Jupyter or ipywidgets, consult their respective documentation.

Quickstart

Start Jupyter Notebook:

jupyter notebook

Click New to create a new notebook, and select Python 3 (ipykernel) if prompted.

Add the following cell:

from transformers import AutoTokenizer, AutoModel, utils
from bertviz import model_view

utils.logging.set_verbosity_error()  # Remove line to see warnings
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased", output_attentions=True)
inputs = tokenizer.encode("The cat sat on the mat", return_tensors='pt')
outputs = model(inputs)
attention = outputs[-1]  # Output includes attention weights when output_attentions=True
tokens = tokenizer.convert_ids_to_tokens(inputs[0]) 
model_view(attention, tokens)

And run it (Shift + Enter)! The visualization may take a few seconds to load.

Running example notebooks

You may also run any of the sample notebooks:

git clone --depth 1 https://github.com/jessevig/bertviz.git
cd bertviz/notebooks
jupyter notebook

📖 Documentation

Self-Attention Models (BERT, GPT-2, etc.)

Head and Model Views

First, load a Huggingface model, either a pre-trained model as shown below or your own fine-tuned model. Be sure to set output_attentions=True.

from transformers import AutoTokenizer, AutoModel, utils
utils.logging.set_verbosity_error()  # Remove this line to see warnings
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

Then prepare inputs and compute attention:

inputs = tokenizer.encode("The cat sat on the mat", return_tensors='pt')
outputs = model(inputs)
attention = outputs[-1]  # Output includes attention weights when output_attentions=True
tokens = tokenizer.convert_ids_to_tokens(inputs[0]) 

Finally, display the attention weights using the head_view or model_view function:

from bertviz import head_view
head_view(attention, tokens)
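
The model view takes the same inputs. For example:

from bertviz import model_view
model_view(attention, tokens)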

For more advanced use cases, e.g., specifying a two-sentence input to the model, please refer to the sample notebooks.
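
As a rough sketch of what the two-sentence case might look like (the sentence_b_start argument and exact call pattern follow the sample notebooks; treat the details here as a sketch rather than the canonical API):

# Continuing from the tokenizer/model setup above
inputs = tokenizer.encode_plus("The cat sat on the mat",
                               "The cat lay on the rug",
                               return_tensors='pt')
input_ids = inputs['input_ids']
token_type_ids = inputs['token_type_ids']
attention = model(input_ids, token_type_ids=token_type_ids)[-1]
sentence_b_start = token_type_ids[0].tolist().index(1)  # index of the first sentence-B token
tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
head_view(attention, tokens, sentence_b_start)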

Neuron View

The neuron view is invoked differently than the head view or model view because it requires access to the model's query/key vectors, which are not returned through the Huggingface API. It is currently limited to the custom versions of BERT, GPT-2, and RoBERTa included with BertViz.

# Import specialized versions of models (that return query/key vectors)
from bertviz.transformers_neuron_view import BertModel, BertTokenizer
from bertviz.neuron_view import show

model_type = 'bert'
model_version = 'bert-base-uncased'
do_lower_case = True
sentence_a = "The cat sat on the mat"
sentence_b = "The cat lay on the rug"
model = BertModel.from_pretrained(model_version, output_attentions=True)
tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
show(model, model_type, tokenizer, sentence_a, sentence_b, layer=2, head=0)
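
The GPT-2 and RoBERTa variants follow the same pattern. A sketch for GPT-2, with class names taken from the GPT-2 neuron view notebook (treat these as assumptions if your version differs); note that GPT-2 takes a single input text:

# Sketch based on the GPT-2 neuron view notebook (class names assumed from that notebook)
from bertviz.transformers_neuron_view import GPT2Model, GPT2Tokenizer
from bertviz.neuron_view import show

model_type = 'gpt2'
model_version = 'gpt2'
model = GPT2Model.from_pretrained(model_version)
tokenizer = GPT2Tokenizer.from_pretrained(model_version)
text = "The cat sat on the mat"
show(model, model_type, tokenizer, text, layer=2, head=0)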

Encoder-Decoder Models (BART, MarianMT, etc.)

The head view and model view both support encoder-decoder models.

First, load an encoder-decoder model:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModel.from_pretrained("Helsinki-NLP/opus-mt-en-de", output_attentions=True)

Then prepare the inputs and compute attention:

encoder_input_ids = tokenizer("She sees the small elephant.", return_tensors="pt", add_special_tokens=True).input_ids
decoder_input_ids = tokenizer("Sie sieht den kleinen Elefanten.", return_tensors="pt", add_special_tokens=True).input_ids

outputs = model(input_ids=encoder_input_ids, decoder_input_ids=decoder_input_ids)

encoder_text = tokenizer.convert_ids_to_tokens(encoder_input_ids[0])
decoder_text = tokenizer.convert_ids_to_tokens(decoder_input_ids[0])

Finally, display the visualization using either head_view or model_view.

from bertviz import model_view
model_view(
    encoder_attention=outputs.encoder_attentions,
    decoder_attention=outputs.decoder_attentions,
    cross_attention=outputs.cross_attentions,
    encoder_tokens=encoder_text,
    decoder_tokens=decoder_text
)
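
The head view accepts the same keyword arguments for encoder-decoder models (an assumption based on the shared interface; see the encoder-decoder notebooks for the authoritative version):

from bertviz import head_view
head_view(
    encoder_attention=outputs.encoder_attentions,
    decoder_attention=outputs.decoder_attentions,
    cross_attention=outputs.cross_attentions,
    encoder_tokens=encoder_text,
    decoder_tokens=decoder_text
)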

You may select Encoder, Decoder, or Cross attention from the drop-down in the upper left corner of the visualization.

Installing from source

git clone https://github.com/jessevig/bertviz.git
cd bertviz
python setup.py develop

Additional options

Dark / light mode

The model view and neuron view support dark (default) and light modes. You may set the mode using the display_mode parameter:

model_view(attention, tokens, display_mode="light")

Filtering layers

To improve the responsiveness of the tool when visualizing larger models or inputs, you may set the include_layers parameter to restrict the visualization to a subset of layers (zero-indexed). This option is available in the head view and model view.

Example: Render model view with only layers 5 and 6 displayed

model_view(attention, tokens, include_layers=[5, 6])

For the model view, you may also restrict the visualization to a subset of attention heads (zero-indexed) by setting the include_heads parameter.
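
For example, to show only layers 5 and 6 and heads 0 and 4 in the model view (the specific indices are just for illustration):

model_view(attention, tokens, include_layers=[5, 6], include_heads=[0, 4])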

Setting default layer/head(s)

In the head view, you may choose a specific layer and collection of heads as the default selection when the visualization first renders. Note: this is different from the include_heads/include_layers parameters (above), which remove layers and heads from the visualization completely.

Example: Render head view with layer 2 and heads 3 and 5 pre-selected

head_view(attention, tokens, layer=2, heads=[3,5])

You may also pre-select a specific layer and single head for the neuron view.

Non-Huggingface models

The head_view and model_view functions may technically be used to visualize self-attention for any Transformer model, as long as the attention weights are available and follow the format specified in model_view and head_view (which is the format returned from Huggingface models). In some cases, TensorFlow checkpoints may be loaded as Huggingface models as described in the Huggingface docs.
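
Concretely, the expected format is one attention tensor per layer, each of shape (batch_size, num_heads, sequence_length, sequence_length), with each row forming a probability distribution over the input positions. A minimal sketch with synthetic attention weights (purely illustrative; substitute your own model's weights and tokens):

import torch
from bertviz import head_view

tokens = ["The", "cat", "sat", "on", "the", "mat"]
num_layers, num_heads, seq_len = 2, 4, len(tokens)

# One tensor per layer, shaped (batch, heads, seq_len, seq_len); softmax makes each row sum to 1
attention = [torch.softmax(torch.randn(1, num_heads, seq_len, seq_len), dim=-1)
             for _ in range(num_layers)]

head_view(attention, tokens)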

⚠️ Limitations

Tool

  • This tool is designed for shorter inputs and may run slowly if the input text is very long and/or the model is very large. To mitigate this, you may wish to filter the layers displayed by setting the include_layers parameter, as described above.
  • When running on Colab, some of the visualizations will fail (runtime disconnection) when the input text is long. The same include_layers filtering can help here as well.
  • The neuron view only supports the custom BERT, GPT-2, and RoBERTa models included with the tool. This view needs access to the query and key vectors, which required modifying the model code (see the transformers_neuron_view directory); this has only been done for these three models. Also, only one neuron view may be included per notebook.

Attention as "explanation"

Visualizing attention weights illuminates a particular mechanism within the model architecture but does not necessarily provide a direct explanation for model predictions. See [1, 2, 3].

👋 Authors

Jesse Vig (homepage)

🔬 Paper

A Multiscale Visualization of Attention in the Transformer Model (ACL 2019 System Demonstrations).

Citation

@inproceedings{vig-2019-multiscale,
    title = "A Multiscale Visualization of Attention in the Transformer Model",
    author = "Vig, Jesse",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P19-3007",
    doi = "10.18653/v1/P19-3007",
    pages = "37--42",
}

License

This project is licensed under the Apache 2.0 License; see the LICENSE file for details.

🙏 Acknowledgments

We are grateful to the authors of the following projects, which are incorporated into this repo:

Tensor2Tensor visualization tool, by Llion Jones
Huggingface Transformers library
