
PAIR-code / Lit

License: Apache-2.0
The Language Interpretability Tool: Interactively analyze NLP models for model understanding in an extensible and framework agnostic interface.


Projects that are alternatives to or similar to Lit

Claf
CLaF: Open-Source Clova Language Framework
Stars: ✭ 196 (-92.8%)
Mutual labels:  natural-language-processing
Pytorch Beam Search Decoding
PyTorch implementation of beam search decoding for seq2seq models
Stars: ✭ 204 (-92.5%)
Mutual labels:  natural-language-processing
Shifterator
Interpretable data visualizations for understanding how texts differ at the word level
Stars: ✭ 209 (-92.32%)
Mutual labels:  natural-language-processing
Gluon Nlp
NLP made easy
Stars: ✭ 2,344 (-13.86%)
Mutual labels:  natural-language-processing
Stringi
THE String Processing Package for R (with ICU)
Stars: ✭ 204 (-92.5%)
Mutual labels:  natural-language-processing
Hardware Aware Transformers
[ACL 2020] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing
Stars: ✭ 206 (-92.43%)
Mutual labels:  natural-language-processing
Polyai Models
Neural Models for Conversational AI
Stars: ✭ 195 (-92.83%)
Mutual labels:  natural-language-processing
Spacy Lookup
Named Entity Recognition based on dictionaries
Stars: ✭ 212 (-92.21%)
Mutual labels:  natural-language-processing
Character Based Cnn
Implementation of character based convolutional neural network
Stars: ✭ 205 (-92.47%)
Mutual labels:  natural-language-processing
Graph Convolution Nlp
Graph Convolution Network for NLP
Stars: ✭ 208 (-92.36%)
Mutual labels:  natural-language-processing
Attention Mechanisms
Implementations for a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
Stars: ✭ 203 (-92.54%)
Mutual labels:  natural-language-processing
Aind Nlp
Coding exercises for the Natural Language Processing concentration, part of Udacity's AIND program.
Stars: ✭ 202 (-92.58%)
Mutual labels:  natural-language-processing
Conllu
A CoNLL-U parser that takes a CoNLL-U formatted string and turns it into a nested python dictionary.
Stars: ✭ 207 (-92.39%)
Mutual labels:  natural-language-processing
Thinc
🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
Stars: ✭ 2,422 (-10.99%)
Mutual labels:  natural-language-processing
Nlp Roadmap
A roadmap (mind map) and keywords for students interested in learning NLP
Stars: ✭ 2,653 (-2.5%)
Mutual labels:  natural-language-processing
Pyhanlp
Chinese NLP toolkit: word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, new-word discovery, keyphrase extraction, automatic summarization, text classification and clustering, and pinyin/simplified-traditional conversion
Stars: ✭ 2,564 (-5.77%)
Mutual labels:  natural-language-processing
Minerva
Meandering In Networks of Entities to Reach Verisimilar Answers
Stars: ✭ 205 (-92.47%)
Mutual labels:  natural-language-processing
Visdial
[CVPR 2017] Torch code for Visual Dialog
Stars: ✭ 215 (-92.1%)
Mutual labels:  natural-language-processing
Neat Vision
Neat (Neural Attention) Vision, is a visualization tool for the attention mechanisms of deep-learning models for Natural Language Processing (NLP) tasks. (framework-agnostic)
Stars: ✭ 213 (-92.17%)
Mutual labels:  natural-language-processing
Kagnet
Knowledge-Aware Graph Networks for Commonsense Reasoning (EMNLP-IJCNLP 19)
Stars: ✭ 205 (-92.47%)
Mutual labels:  natural-language-processing

🔥 Language Interpretability Tool (LIT)

The Language Interpretability Tool (LIT) is a visual, interactive model-understanding tool for ML models, focusing on NLP use-cases. It can be run as a standalone server, or inside of notebook environments such as Colab, Jupyter, and Google Cloud Vertex AI notebooks.

LIT is built to answer questions such as:

  • What kind of examples does my model perform poorly on?
  • Why did my model make this prediction? Can this prediction be attributed to adversarial behavior, or to undesirable priors in the training set?
  • Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?

Example of LIT UI

LIT supports a variety of debugging workflows through a browser-based UI. Features include:

  • Local explanations via salience maps, attention, and rich visualization of model predictions.
  • Aggregate analysis including custom metrics, slicing and binning, and visualization of embedding spaces.
  • Counterfactual generation via manual edits or generator plug-ins to dynamically create and evaluate new examples.
  • Side-by-side mode to compare two or more models, or one model on a pair of examples.
  • Highly extensible to new model types, including classification, regression, span labeling, seq2seq, and language modeling. Supports multi-head models and multiple input features out of the box.
  • Framework-agnostic and compatible with TensorFlow, PyTorch, and more.

LIT has a website with live demos, tutorials, a setup guide and more.

Stay up to date on LIT by joining the lit-announcements mailing list.

For a broader overview, check out our paper and the user guide.

Documentation

Download and Installation

LIT can be installed via pip or built from source. Building from source is necessary if you wish to update any of the front-end or core back-end code.

Install from source

Download the repo and set up a Python environment:

git clone https://github.com/PAIR-code/lit.git ~/lit

# Set up Python environment
cd ~/lit
conda env create -f environment.yml
conda activate lit-nlp
conda install cudnn cupti  # optional, for GPU support
conda install -c pytorch pytorch  # optional, for PyTorch

# Build the frontend
pushd lit_nlp; yarn && yarn build; popd

Note: if you see an error running yarn on Ubuntu/Debian, be sure you have the correct version installed.

pip installation

pip install lit-nlp

The pip installation installs all prerequisite packages for the core LIT package, along with the code for our demo examples. It does not install the prerequisites for those demos, so if you wish to run them you will need to install those yourself. See environment.yml for the full list of packages the demos need.

Running LIT

Explore a collection of hosted demos on the LIT website demos page.

Colab notebooks showing the use of LIT inside of notebooks can be found at lit_nlp/examples/notebooks. A simple example can be viewed here.

Quick-start: classification and regression

To explore classification and regression models on tasks from the popular GLUE benchmark:

python -m lit_nlp.examples.glue_demo --port=5432 --quickstart

Navigate to http://localhost:5432 to access the LIT UI.

Your default view will be a small BERT-based model fine-tuned on the Stanford Sentiment Treebank, but you can switch to STS-B or MultiNLI using the toolbar or the gear icon in the upper right.

Quick start: language modeling

To explore predictions from a pretrained language model (BERT or GPT-2), run:

python -m lit_nlp.examples.lm_demo --models=bert-base-uncased \
  --port=5432

And navigate to http://localhost:5432 for the UI.

Notebook usage

A simple Colab demo can be found here. Just run all the cells to see LIT on an example classification model right in the notebook.

Run LIT in a Docker container

See docker.md for instructions on running LIT as a containerized web app. This is the approach we take for our website demos.

More Examples

See lit_nlp/examples. Run similarly to the above:

python -m lit_nlp.examples.<example_name> --port=5432 [optional --args]

User Guide

To learn about LIT's features, check out the user guide, or watch this video.

Adding your own models or data

You can easily run LIT with your own model by creating a custom demo.py launcher, similar to those in lit_nlp/examples. The basic steps are:

  • Write a data loader which follows the Dataset API
  • Write a model wrapper which follows the Model API
  • Pass models, datasets, and any additional components to the LIT server class

For a full walkthrough, see adding models and data.
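As a rough sketch of what those steps involve, the stand-in classes below show the shape of a dataset and model wrapper. Note that these are hypothetical, duck-typed stand-ins for illustration only: a real demo.py subclasses lit_nlp.api.dataset.Dataset and lit_nlp.api.model.Model, and the spec entries are typed objects from lit_nlp.api.types (e.g. TextSegment, CategoryLabel) rather than the placeholder strings used here.

```python
class ToyDataset:
    """Stand-in for a LIT Dataset: a spec plus a list of examples."""

    def __init__(self):
        # Each example is a dict whose keys match spec().
        self._examples = [
            {"sentence": "a delightful film", "label": "positive"},
            {"sentence": "dull and overlong", "label": "negative"},
        ]

    def spec(self):
        # In real LIT these values are lit_nlp.api.types objects.
        return {"sentence": "TextSegment", "label": "CategoryLabel"}

    @property
    def examples(self):
        return self._examples


class ToyModel:
    """Stand-in for a LIT Model: input/output specs plus batched prediction."""

    LABELS = ["negative", "positive"]

    def input_spec(self):
        return {"sentence": "TextSegment"}

    def output_spec(self):
        return {"probas": "MulticlassPreds"}

    def predict(self, inputs):
        # A real wrapper would run the underlying model here; this toy
        # scores by the presence of a hard-coded positive word.
        for ex in inputs:
            p = 0.9 if "delightful" in ex["sentence"] else 0.1
            yield {"probas": [1.0 - p, p]}


# In a real demo.py you would then pass these to the LIT server, e.g.:
#   from lit_nlp import dev_server, server_flags
#   dev_server.Server({"toy": ToyModel()}, {"toy": ToyDataset()},
#                     **server_flags.get_flags()).serve()
```

Because the server only sees the spec and the prediction dicts, any framework (TensorFlow, PyTorch, or plain Python) can sit behind the wrapper.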

Extending LIT with new components

LIT is easy to extend with new interpretability components, generators, and more, on both the frontend and the backend. See our documentation to get started.
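For a flavor of what a backend component looks like, here is a toy, stand-in sketch (hypothetical names; a real interpreter subclasses lit_nlp.api.components.Interpreter, whose run() method receives the selected examples, the model wrapper, and the dataset, and returns one result per example):

```python
class TokenCountInterpreter:
    """Toy component: reports a whitespace token count per example."""

    def run(self, inputs, model, dataset, **unused_kwargs):
        del model, dataset  # unused in this toy component
        return [{"num_tokens": len(ex["sentence"].split())} for ex in inputs]
```

A registered component like this appears as an extra module in the UI, with its per-example results rendered alongside the model's predictions.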

Pull Request Process

To make code changes to LIT, please work off of the dev branch and create pull requests against that branch. The main branch is for stable releases, and it is expected that the dev branch will always be ahead of main in terms of commits.

Citing LIT

If you use LIT as part of your work, please cite our EMNLP paper:

@inproceedings{tenney2020language,
    title = "The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for {NLP} Models",
    author = "Ian Tenney and James Wexler and Jasmijn Bastings and Tolga Bolukbasi and Andy Coenen and Sebastian Gehrmann and Ellen Jiang and Mahima Pushkarna and Carey Radebaugh and Emily Reif and Ann Yuan",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    year = "2020",
    publisher = "Association for Computational Linguistics",
    pages = "107--118",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.15",
}

Disclaimer

This is not an official Google product.

LIT is a research project, and under active development by a small team. There will be some bugs and rough edges, but we're releasing at an early stage because we think it's pretty useful already. We want LIT to be an open platform, not a walled garden, and we'd love your suggestions and feedback - drop us a line in the issues.
