
salesforce / Glad

License: BSD-3-Clause
Global-Locally Self-Attentive Dialogue State Tracker

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Glad

Paper Reading
Paper reading list in natural language processing, including dialogue systems and text generation related topics.
Stars: ✭ 508 (+174.59%)
Mutual labels:  natural-language-processing, dialogue-systems
Convai Baseline
ConvAI baseline solution
Stars: ✭ 49 (-73.51%)
Mutual labels:  natural-language-processing, dialogue-systems
Conv Emotion
This repo contains implementations of different architectures for emotion recognition in conversations.
Stars: ✭ 646 (+249.19%)
Mutual labels:  natural-language-processing, dialogue-systems
Nndial
NNDial is an open source toolkit for building end-to-end trainable task-oriented dialogue models. It is released by Tsung-Hsien (Shawn) Wen from Cambridge Dialogue Systems Group under Apache License 2.0.
Stars: ✭ 332 (+79.46%)
Mutual labels:  natural-language-processing, dialogue-systems
Neuraldialog Larl
PyTorch implementation of latent space reinforcement learning for E2E dialog published at NAACL 2019. It is released by Tiancheng Zhao (Tony) from Dialog Research Center, LTI, CMU
Stars: ✭ 127 (-31.35%)
Mutual labels:  natural-language-processing, dialogue-systems
Rnnlg
RNNLG is an open source benchmark toolkit for Natural Language Generation (NLG) in spoken dialogue system application domains. It is released by Tsung-Hsien (Shawn) Wen from Cambridge Dialogue Systems Group under Apache License 2.0.
Stars: ✭ 487 (+163.24%)
Mutual labels:  natural-language-processing, dialogue-systems
Conversational Ai
Conversational AI Reading Materials
Stars: ✭ 34 (-81.62%)
Mutual labels:  natural-language-processing, dialogue-systems
Multiwoz
Source code for end-to-end dialogue model from the MultiWOZ paper (Budzianowski et al. 2018, EMNLP)
Stars: ✭ 384 (+107.57%)
Mutual labels:  natural-language-processing, dialogue-systems
Awesome Emotion Recognition In Conversations
A comprehensive reading list for Emotion Recognition in Conversations
Stars: ✭ 111 (-40%)
Mutual labels:  natural-language-processing, dialogue-systems
Dialogue Understanding
This repository contains PyTorch implementation for the baseline models from the paper Utterance-level Dialogue Understanding: An Empirical Study
Stars: ✭ 77 (-58.38%)
Mutual labels:  natural-language-processing, dialogue-systems
Arxivnotes
Summaries of NLP (natural language processing) papers I have read are written up in the Issues. They are rough. The 🚧 mark indicates papers still being edited (many are effectively abandoned). The 🍡 mark indicates that only an overview is written (so it can be read quickly).
Stars: ✭ 190 (+2.7%)
Mutual labels:  natural-language-processing, dialogue-systems
Multimodal Sentiment Analysis
Attention-based multimodal fusion for sentiment analysis
Stars: ✭ 172 (-7.03%)
Mutual labels:  natural-language-processing, dialogue-systems
Knowledge Graphs
A collection of research on knowledge graphs
Stars: ✭ 845 (+356.76%)
Mutual labels:  natural-language-processing, dialogue-systems
Convai Bot 1337
NIPS Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager
Stars: ✭ 65 (-64.86%)
Mutual labels:  natural-language-processing, dialogue-systems
Nlp4rec Papers
Paper list of NLP for recommender systems
Stars: ✭ 162 (-12.43%)
Mutual labels:  natural-language-processing, dialogue-systems
Kb Infobot
A dialogue bot for information access
Stars: ✭ 181 (-2.16%)
Mutual labels:  natural-language-processing, dialogue-systems
Fastnlp
fastNLP: A Modularized and Extensible NLP Framework. Currently still in incubation.
Stars: ✭ 2,441 (+1219.46%)
Mutual labels:  natural-language-processing
Deeptoxic
Top 1% solution to the Toxic Comment Classification Challenge on Kaggle.
Stars: ✭ 180 (-2.7%)
Mutual labels:  natural-language-processing
Cleannlp
R package providing annotators and a normalized data model for natural language processing
Stars: ✭ 174 (-5.95%)
Mutual labels:  natural-language-processing
Web Database Analytics
Web scraping and related analytics using Python tools
Stars: ✭ 175 (-5.41%)
Mutual labels:  natural-language-processing

Global-Locally Self-Attentive Dialogue State Tracker

This repository contains an implementation of the Global-Locally Self-Attentive Dialogue State Tracker (GLAD). If you use this in your work, please cite the following:

@inproceedings{zhong2018global,
  title={Global-Locally Self-Attentive Encoder for Dialogue State Tracking},
  author={Zhong, Victor and Xiong, Caiming and Socher, Richard},
  booktitle={ACL},
  year={2018}
}

Install dependencies

Using Docker

docker build -t glad:0.4 .
docker run --name embeddings -d vzhong/embeddings:0.0.5  # get the embeddings
env NV_GPU=0 nvidia-docker run --name glad -d -t --net host --volumes-from embeddings glad:0.4

If you do not want to build the Docker image, then run the following (you still need a running CoreNLP server; see the next section).

pip install -r requirements.txt

Download and annotate data

This project uses Stanford CoreNLP to annotate the dataset, via the Stanford NLP Stanza Python interface. To run the server, do

docker run --name corenlp -d -p 9000:9000 vzhong/corenlp-server

The first time you preprocess the data, the script will download word embeddings and character embeddings and put them into a SQLite database, which is slow. Subsequent runs will be much faster.

docker exec glad python preprocess_data.py

The raw data will be stored in data/woz/raw of the container. The annotation results will be stored in data/woz/ann of the container.
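The slow first run comes from caching embeddings in SQLite so that later runs can skip the download. A minimal sketch of that caching pattern, where `toy_embed` and the table schema are illustrative stand-ins, not the project's actual code:

```python
import json
import sqlite3

def toy_embed(word):
    # stand-in for an expensive embedding lookup (e.g. a download)
    return [float(len(word)), float(ord(word[0]))]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emb (word TEXT PRIMARY KEY, vec TEXT)")

def cached_embed(word):
    row = conn.execute("SELECT vec FROM emb WHERE word = ?", (word,)).fetchone()
    if row is not None:
        return json.loads(row[0])  # fast path: cache hit
    vec = toy_embed(word)          # slow path: compute once, then store
    conn.execute("INSERT INTO emb VALUES (?, ?)", (word, json.dumps(vec)))
    return vec

first = cached_embed("hello")   # computed and cached
second = cached_embed("hello")  # served from the cache
```

The real preprocessing script persists the database to disk, which is why only the first run is slow.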

If you do not want to build the Docker image, then run

python preprocess_data.py

Train model

You can check out the training options via python train.py -h. By default, train.py saves checkpoints to exp/glad/default.

docker exec glad python train.py --gpu 0

You can attach to the container via docker exec -it glad /bin/bash to look at what's inside, or copy out the experiment results via docker cp glad:/opt/glad/exp exp.

If you do not want to build the Docker image, then run

python train.py --gpu 0

Evaluation

You can evaluate the model using

docker exec glad python evaluate.py --gpu 0 --split test exp/glad/default

You can also dump a predictions file by specifying the --fout flag. In this case, the output is a list of lists: the i-th sublist is the set of predicted slot-value pairs for the i-th turn. Please see evaluate.py to see how to match up the turn predictions with the dialogues.
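As a sketch of consuming that list-of-lists format, assuming the --fout file is JSON and that each slot-value pair is a two-element list (the sample data here is made up for illustration):

```python
import json

# toy predictions in the format described above: the i-th sublist holds
# the predicted slot-value pairs for the i-th turn of the dialogue
preds_json = '[[["food", "italian"]], [["food", "italian"], ["area", "north"]]]'
predictions = json.loads(preds_json)

# pair each turn index with its predicted dialogue state
states = []
for turn_idx, slot_values in enumerate(predictions):
    state = {slot: value for slot, value in slot_values}
    states.append((turn_idx, state))
```

For the real mapping between predictions and dialogue turns, defer to the logic in evaluate.py.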

If you do not want to build the Docker image, then run

python evaluate.py --gpu 0 --split test exp/glad/default

Contribution

Pull requests are welcome! If you have any questions, please create an issue or contact the corresponding author at victor <at> victorzhong <dot> com.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].