snakeztc / Neuraldialog Laed

Licence: apache-2.0
PyTorch implementation for Interpretable Dialog Generation (ACL 2018), released by Tiancheng Zhao (Tony) from the Dialog Research Center, LTI, CMU.

Projects that are alternatives to or similar to Neuraldialog Laed

Convai Bot 1337
NIPS Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager
Stars: ✭ 65 (-66.15%)
Mutual labels:  dialogue-systems
Matilda
LIDA: Lightweight Interactive Dialogue Annotator (in EMNLP 2019)
Stars: ✭ 125 (-34.9%)
Mutual labels:  dialogue-systems
Alex
Alex Dialogue Systems Framework
Stars: ✭ 177 (-7.81%)
Mutual labels:  dialogue-systems
Dialogue Understanding
This repository contains PyTorch implementation for the baseline models from the paper Utterance-level Dialogue Understanding: An Empirical Study
Stars: ✭ 77 (-59.9%)
Mutual labels:  dialogue-systems
Lic2019 Competition
2019 Language and Intelligence Technology Competition: knowledge-graph-based proactive conversation
Stars: ✭ 109 (-43.23%)
Mutual labels:  dialogue-systems
Dstc7 End To End Conversation Modeling
Grounded conversational dataset for end-to-end conversational AI (official DSTC7 data)
Stars: ✭ 141 (-26.56%)
Mutual labels:  dialogue-systems
Letsgodataset
This repository makes the integral Let's Go dataset publicly available.
Stars: ✭ 41 (-78.65%)
Mutual labels:  dialogue-systems
Glad
Global-Locally Self-Attentive Dialogue State Tracker
Stars: ✭ 185 (-3.65%)
Mutual labels:  dialogue-systems
Awesome Emotion Recognition In Conversations
A comprehensive reading list for Emotion Recognition in Conversations
Stars: ✭ 111 (-42.19%)
Mutual labels:  dialogue-systems
Multimodal Sentiment Analysis
Attention-based multimodal fusion for sentiment analysis
Stars: ✭ 172 (-10.42%)
Mutual labels:  dialogue-systems
Atis dataset
The ATIS (Airline Travel Information System) Dataset
Stars: ✭ 81 (-57.81%)
Mutual labels:  dialogue-systems
Cakechat
CakeChat: Emotional Generative Dialog System
Stars: ✭ 1,361 (+608.85%)
Mutual labels:  dialogue-systems
Nlp4rec Papers
Paper list of NLP for recommender systems
Stars: ✭ 162 (-15.62%)
Mutual labels:  dialogue-systems
Korean restaurant reservation
Implement korean restaurant reservation dialogue system based on hybrid code network.
Stars: ✭ 73 (-61.98%)
Mutual labels:  dialogue-systems
Tgen
Statistical NLG for spoken dialogue systems
Stars: ✭ 179 (-6.77%)
Mutual labels:  dialogue-systems
Convai Baseline
ConvAI baseline solution
Stars: ✭ 49 (-74.48%)
Mutual labels:  dialogue-systems
Neuraldialog Larl
PyTorch implementation of latent space reinforcement learning for E2E dialog published at NAACL 2019. It is released by Tiancheng Zhao (Tony) from Dialog Research Center, LTI, CMU
Stars: ✭ 127 (-33.85%)
Mutual labels:  dialogue-systems
Arxivnotes
Summaries of NLP (natural language processing) papers I have read, written up in GitHub Issues. They are rough notes. The 🚧 mark means a write-up is still being edited (many are effectively abandoned); the 🍡 mark means only a brief overview is written (i.e., quick to skim).
Stars: ✭ 190 (-1.04%)
Mutual labels:  dialogue-systems
Kb Infobot
A dialogue bot for information access
Stars: ✭ 181 (-5.73%)
Mutual labels:  dialogue-systems
Metalearning4nlp Papers
A list of recent papers about Meta / few-shot learning methods applied in NLP areas.
Stars: ✭ 163 (-15.1%)
Mutual labels:  dialogue-systems

Interpretable Neural Dialog Generation via Discrete Sentence Representation Learning

Codebase for Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation, published as a long paper in ACL 2018. You can find my presentation slides here.

If you use any source code or datasets included in this toolkit in your work, please cite the following paper. The BibTeX entry is listed below:

@article{zhao2018unsupervised,
  title={Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation},
  author={Zhao, Tiancheng and Lee, Kyusong and Eskenazi, Maxine},
  journal={arXiv preprint arXiv:1804.08069},
  year={2018}
}

Requirements

python 2.7
pytorch >= 0.3.0.post4
numpy
nltk

Datasets

The data folder contains three datasets:

  • PTB (used by ptb-utt.py to train DI-VAE)
  • Daily Dialog (used by dailydialog-utt-skip.py to train DI-VST)
  • Stanford Multi-Domain Dialog (used by stanford-ae.py and stanford-skip.py)

Run Models

The first two scripts train sentence models (DI-VAE/DI-VST) that learn discrete sentence representations via either auto-encoding or context prediction.

Discrete Info Variational Autoencoder (DI-VAE)

The following command will train a DI-VAE on the PTB dataset. To run on a different dataset, follow the pattern in the PTB dataloader and corpus reader and implement your own data interface.

python ptb-utt.py
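
To make the discrete representation concrete, here is a minimal sketch of the kind of bottleneck DI-VAE learns: an utterance encoding is mapped to y_size categorical latent variables with k classes each, sampled differentiably with the Gumbel-Softmax trick, and a decoder reconstructs the utterance from the resulting code. This is our own illustration written against current PyTorch, not the repository's 0.3-era code; all class and variable names below are ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteBottleneck(nn.Module):
    """Map an utterance encoding to y_size categorical codes with k classes each."""

    def __init__(self, enc_dim, y_size, k):
        super().__init__()
        self.y_size, self.k = y_size, k
        self.to_logits = nn.Linear(enc_dim, y_size * k)

    def forward(self, enc, tau=1.0):
        # Logits for each of the y_size discrete variables.
        logits = self.to_logits(enc).view(-1, self.y_size, self.k)
        log_qy = F.log_softmax(logits, dim=-1)          # q(z|x), used by the KL / BPR term
        # Differentiable (approximately one-hot) samples via Gumbel-Softmax.
        sample = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)
        return sample.reshape(-1, self.y_size * self.k), log_qy

# Example: a batch of 8 utterance encodings of size 256, with y_size=10 and k=20.
bottleneck = DiscreteBottleneck(enc_dim=256, y_size=10, k=20)
codes, log_qy = bottleneck(torch.randn(8, 256))
print(codes.shape)  # torch.Size([8, 200]); a sentence decoder reconstructs the utterance from this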

Discrete info Variational Skip-thought (DI-VST)

The following command will train a DI-VST on the Daily Dialog corpus.

python dailydialog-utt-skip.py
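
DI-VST keeps the same kind of discrete bottleneck but swaps the auto-encoding objective for context prediction: the code of the current utterance is trained to reconstruct the previous and next utterances in the dialog. The snippet below is a deliberately simplified sketch of that objective (our own code, not the repository's): the real model uses full RNN decoders over the surrounding utterances, while this stand-in predicts a single token of each to keep the example short.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextPredictionHeads(nn.Module):
    """Stand-in decoders: predict a token of the previous and next utterance from the code."""

    def __init__(self, code_dim, vocab_size):
        super().__init__()
        self.prev_head = nn.Linear(code_dim, vocab_size)
        self.next_head = nn.Linear(code_dim, vocab_size)

    def loss(self, codes, prev_tok, next_tok):
        # Skip-thought idea: the discrete code of x_t must be predictive of its
        # dialog context x_{t-1} and x_{t+1} rather than of x_t itself.
        return (F.cross_entropy(self.prev_head(codes), prev_tok)
                + F.cross_entropy(self.next_head(codes), next_tok))

# codes would come from a DiscreteBottleneck like the one sketched above, applied to
# the current utterance; random tensors stand in for a real batch here.
heads = ContextPredictionHeads(code_dim=10 * 20, vocab_size=1000)
codes = torch.randn(8, 10 * 20)
prev_tok = torch.randint(0, 1000, (8,))
next_tok = torch.randint(0, 1000, (8,))
print(heads.loss(codes, prev_tok, next_tok))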

The next two scripts train a latent-action encoder-decoder with either DI-VAE or DI-VST as the sentence model.

DI-VAE + Encoder Decoder (AE-ED)

The following command will first train a DI-VAE on the Stanford Multi-Domain Dialog dataset and then train a hierarchical encoder-decoder (HRED) model conditioned on the latent code from the DI-VAE.

python stanford-ae.py

DI-VST + Encoder Decoder (ST-ED)

The following command will first train a DI-VST on the Stanford Multi-Domain Dialog dataset and then train a hierarchical encoder-decoder (HRED) model conditioned on the latent code from the DI-VST.

python stanford-skip.py
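
Both stanford-ae.py and stanford-skip.py follow the same two-stage recipe described above: pretrain the sentence model, freeze its discrete latent actions after freeze_step batches, and then train the hierarchical encoder-decoder conditioned on those latent actions. The outline below is a hypothetical schematic of that schedule, not the scripts' actual structure; sentence_model and dialog_model are placeholder objects assumed to expose loss and encode methods.

import torch

def laed_training_step(batch_idx, freeze_step, sentence_model, dialog_model, batch):
    """Schematic of the two-stage schedule behind the stanford-* scripts (hypothetical API)."""
    if batch_idx < freeze_step:
        # Stage 1: train only the sentence model (DI-VAE or DI-VST) so the
        # discrete latent actions stabilize.
        return sentence_model.loss(batch)
    # Stage 2: the latent actions are frozen; the hierarchical encoder-decoder (HRED)
    # learns to predict them from the dialog context and to decode responses from them
    # (plus the attribute forcing loss, if use_attribute is set).
    with torch.no_grad():
        codes = sentence_model.encode(batch)
    return dialog_model.loss(batch, codes)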

Change Configurations

Change model parameters

Generally, all parameters are defined at the top of each script. You can either pass a different value on the command line or change the default value of each parameter (an example invocation is shown after the parameter lists below). Some key parameters are explained below:

  • y_size: the number of discrete latent variables
  • k: the number of classes for each discrete latent variable
  • use_reg_kl: whether or not to use KL regularization on the latent space. If False, the model becomes a plain autoencoder or skip-thought model.
  • use_mutual: whether to use the Batch Prior Regularization (BPR) proposed in our work instead of the standard ELBO setup (a short illustration follows this list).
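
As a reference for the use_reg_kl and use_mutual options, the contrast between the standard ELBO KL term and Batch Prior Regularization (BPR) can be written down directly. The snippet below is our own single-latent-variable illustration with a uniform prior over k classes (the full model applies this per discrete variable): the standard term averages per-example KLs, while BPR first averages the posteriors over the batch and then takes a single KL.

import torch

def standard_kl(log_qy, k):
    """Mean over the batch of KL(q(z|x) || uniform prior): the usual ELBO regularizer."""
    log_prior = torch.log(torch.tensor(1.0 / k))
    return (log_qy.exp() * (log_qy - log_prior)).sum(-1).mean()

def batch_prior_regularization(log_qy, k):
    """KL(q(z) || uniform prior), where q(z) is the posterior averaged over the batch (BPR)."""
    n = log_qy.size(0)
    log_qy_batch = torch.logsumexp(log_qy, dim=0) - torch.log(torch.tensor(float(n)))
    log_prior = torch.log(torch.tensor(1.0 / k))
    return (log_qy_batch.exp() * (log_qy_batch - log_prior)).sum()

# log_qy: per-example log posterior over k classes, e.g. from a recognition network.
log_qy = torch.log_softmax(torch.randn(8, 10), dim=-1)  # batch of 8, k = 10
print(standard_kl(log_qy, 10), batch_prior_regularization(log_qy, 10))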

Extra essential parameters for AE-ED or ST-ED:

  • use_attribute: whether or not to use the attribute forcing loss in Eq. 10.
  • freeze_step: the number of batches used to train the DI-VAE/DI-VST before freezing the latent actions and training the encoder-decoder.
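
As an example of overriding these defaults, an invocation along the following lines should work if the scripts expose the parameters as command-line flags; the flag syntax and the values are illustrative assumptions rather than something taken from the repository, so check the top of each script for the exact argument names and defaults.

python stanford-ae.py --y_size 10 --k 20 --use_attribute True --freeze_step 6000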

Test an existing model

All trained models and log files are saved to the log folder. To run an existing model, you can:

  • Set the forward_only argument to True
  • Set the load_sess argument to the path of the model folder in log
  • Run the script (an example invocation is sketched below)
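
Assuming the arguments can be overridden from the command line in the same way as the training parameters (an assumption, not verified against the scripts), a test run would look roughly like the following, where the load_sess value is a placeholder for an actual model folder name under log:

python stanford-ae.py --forward_only True --load_sess <model-folder-under-log>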