hengyicai / ContrastiveLearning4Dialogue

License: MIT License
The codebase for "Group-wise Contrastive Learning for Neural Dialogue Generation" (Cai et al., Findings of EMNLP 2020)

Projects that are alternatives of or similar to ContrastiveLearning4Dialogue

few shot dialogue generation
Dialogue Knowledge Transfer Networks (DiKTNet)
Stars: ✭ 24 (-55.56%)
Mutual labels:  dialog, conversational-ai
Medi-CoQA
Conversational Question Answering on Clinical Text
Stars: ✭ 22 (-59.26%)
Mutual labels:  conversational-ai
DSTC6-End-to-End-Conversation-Modeling
DSTC6: End-to-End Conversation Modeling Track
Stars: ✭ 56 (+3.7%)
Mutual labels:  dialog
MOON
Model-Contrastive Federated Learning (CVPR 2021)
Stars: ✭ 93 (+72.22%)
Mutual labels:  contrastive-learning
newbot-framework
Framework to create chatbots on all platforms and on the browser - https://newbot.io
Stars: ✭ 35 (-35.19%)
Mutual labels:  conversational-ai
AdCo
AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries
Stars: ✭ 148 (+174.07%)
Mutual labels:  contrastive-learning
HINT3
This repository contains datasets and code for the paper "HINT3: Raising the bar for Intent Detection in the Wild" accepted at EMNLP-2020's Insights Workshop https://insights-workshop.github.io/ Preprint for the paper is available here https://arxiv.org/abs/2009.13833
Stars: ✭ 27 (-50%)
Mutual labels:  conversational-ai
TaskDialog
.NET implementation of the Windows Task Dialog.
Stars: ✭ 48 (-11.11%)
Mutual labels:  dialog
GCA
[WWW 2021] Source code for "Graph Contrastive Learning with Adaptive Augmentation"
Stars: ✭ 69 (+27.78%)
Mutual labels:  contrastive-learning
cl-ica
Code for the paper "Contrastive Learning Inverts the Data Generating Process".
Stars: ✭ 65 (+20.37%)
Mutual labels:  contrastive-learning
AudioVisualSceneAwareDialog
No description or website provided.
Stars: ✭ 22 (-59.26%)
Mutual labels:  dialog
ngx-modal
Dynamic modal dialog for Angular
Stars: ✭ 54 (+0%)
Mutual labels:  dialog
TheBashMenu
A useful bash script allowing you to easily create your own menu, which uses the directional keys! Quickly add your title, options and commands and you're good to go!
Stars: ✭ 52 (-3.7%)
Mutual labels:  dialog
react-native-wxui
A UI package for React Native
Stars: ✭ 21 (-61.11%)
Mutual labels:  dialog
android-versioninfo
A version info widget for Android. Material style.
Stars: ✭ 21 (-61.11%)
Mutual labels:  dialog
contrastive loss
Experiments with supervised contrastive learning methods with different loss functions
Stars: ✭ 143 (+164.81%)
Mutual labels:  contrastive-learning
Licenser
An android library to display the licenses of your application libraries in a easy way.
Stars: ✭ 75 (+38.89%)
Mutual labels:  dialog
permuted-bAbI-dialog-tasks
Dataset for 'Learning End-to-End Goal-Oriented Dialog with Multiple Answers' EMNLP 2018
Stars: ✭ 17 (-68.52%)
Mutual labels:  dialog
ak-vue3
The component library includes: AutoForm (automatic forms), BackTop (back to top), Breadcrumb, Button, Cascader (cascading selector), Checkbox, Collapse (collapsible panels), ColorPicker, DataPicker (time picker), Dialog (modal dialog), Alert, Echarts (charts), Form, Input, Lazy (lazy image loading), Loading, Menu, Pagination, Progress (progress bar), Radio, Select, Steps, Swiper (image carousel), Switch, Table, Tabs, Textarea, Tooltip, Tr…
Stars: ✭ 24 (-55.56%)
Mutual labels:  dialog
GRACE
[GRL+ @ ICML 2020] PyTorch implementation for "Deep Graph Contrastive Representation Learning" (https://arxiv.org/abs/2006.04131v2)
Stars: ✭ 144 (+166.67%)
Mutual labels:  contrastive-learning

Group-wise Contrastive Learning for Neural Dialogue Generation

This repo contains the preliminary code for the paper "Group-wise Contrastive Learning for Neural Dialogue Generation" (Findings of EMNLP 2020).

This codebase is built upon the ParlAI project (thanks for their pioneering contributions to developing such a great conversational platform!). See parlai/agents/contrastive_learning for the framework implementation; the running scripts can be found in projects/contrastive_learning.

Framework Overview

method_overview

Requirements

  • Python 3
  • PyTorch 1.2 or newer

Dependencies of the core modules are listed in requirement.txt.

Installing

git clone [email protected]:hengyicai/ContrastiveLearning4Dialogue.git ~/ContrastiveLearning4Dialogue
cd ~/ContrastiveLearning4Dialogue; python setup.py develop
echo "export PARLAI_HOME=~/ContrastiveLearning4Dialogue" >> ~/.bashrc; source ~/.bashrc

Dataset

Download PersonaChat/OpenSubtitles/Douban and untar them into ${PARLAI_HOME}/data/ so that the directory layout looks like:

data
├── DoubanConversaionCorpus
│   ├── douban.embed.vec
│   ├── test.txt
│   ├── train.txt
│   ├── train.txt.lengths
│   └── valid.txt
├── OpenSubExtend
│   ├── opensub_extend.embed.vec
│   ├── test.txt
│   ├── train.txt
│   ├── train.txt.lengths
│   └── valid.txt
└── PersonaChatExtend
    ├── personachat_extend.embed.vec
    ├── test.txt
    ├── train.txt
    ├── train.txt.lengths
    └── valid.txt
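Before launching training, it may help to confirm the data files are in place. Below is a minimal sketch (not part of the repo): the directory and file names are taken from the tree above, and PARLAI_HOME defaults to the clone location used in the install step.

```shell
# Check that each expected data split exists under ${PARLAI_HOME}/data/.
PARLAI_HOME="${PARLAI_HOME:-$HOME/ContrastiveLearning4Dialogue}"
missing=0
for task in DoubanConversaionCorpus OpenSubExtend PersonaChatExtend; do
  for split in train.txt valid.txt test.txt; do
    f="$PARLAI_HOME/data/$task/$split"
    if [ ! -f "$f" ]; then
      echo "missing: $f"
      missing=$((missing + 1))
    fi
  done
done
echo "checked 9 files, $missing missing"
```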

Running

cd ~/ContrastiveLearning4Dialogue
bash projects/contrastive_learning/shell/run.sh

The last line of projects/contrastive_learning/shell/run.sh specifies the preliminary training arguments:


# MODEL_NAME TO_MINIMIZE TASK PRETRAIN_STEPS SAMPLE_K CONTRAST_BY NAIVE_NEG_SAMPLING CL_THRESHOLD CL_ANNEAL ANNEAL_SPEED
export CUDA_VISIBLE_DEVICES=0; train_model cl_seq2seq to_minimize personachat_extend 5000 6 both False 0.5 True 1.0

See projects/contrastive_learning/shell/run.sh for details.
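For reference, the positional arguments line up with the comment header as follows. This is only an illustrative sketch of the mapping, not the actual train_model function, which is defined in projects/contrastive_learning/shell/run.sh.

```shell
# Sketch: how train_model's positional arguments correspond to the
# commented header (MODEL_NAME TO_MINIMIZE TASK ... ANNEAL_SPEED).
train_model() {
  local model_name=$1 to_minimize=$2 task=$3 pretrain_steps=$4
  local sample_k=$5 contrast_by=$6 naive_neg_sampling=$7
  local cl_threshold=$8 cl_anneal=$9 anneal_speed=${10}
  echo "training $model_name on $task (sample_k=$sample_k, threshold=$cl_threshold)"
}

train_model cl_seq2seq to_minimize personachat_extend 5000 6 both False 0.5 True 1.0
```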

Running Details

1. Preparing the reference model

Since the contrastive learning framework involves an auxiliary model during training, i.e., the reference model $p_n(\cdot; \phi)$, we need to prepare one before running the contrastive learning procedure. The same script can be used to train a reference model, for example, a vanilla seq2seq model:

# MODEL_NAME TO_MINIMIZE TASK PRETRAIN_STEPS SAMPLE_K CONTRAST_BY NAIVE_NEG_SAMPLING CL_THRESHOLD CL_ANNEAL ANNEAL_SPEED
export CUDA_VISIBLE_DEVICES=0; train_model seq2seq ppl personachat_extend 5000 6 both False 0.5 True 1.0

2. Specifying mandatory arguments

Several arguments must be declared explicitly in projects/contrastive_learning/shell/run.sh.

Set the reference model path here:

declare -A ref_model_files=(
  ["none"]=None
  ["REF_MODEL_KEY"]="PATH/TO/THE/REFERENCE/MODEL"
)

and use it by setting the variable ref_model:

ref_model=REF_MODEL_KEY
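Presumably, run.sh then resolves the key through the bash associative array shown above. A minimal sketch of that lookup (the key my_seq2seq and the checkpoint path are hypothetical placeholders, not files shipped with the repo):

```shell
# Map a reference-model key to its checkpoint path via a bash associative array.
declare -A ref_model_files=(
  ["none"]=None
  ["my_seq2seq"]="/path/to/seq2seq_model.checkpoint"  # hypothetical path
)
ref_model=my_seq2seq
ref_model_file=${ref_model_files[$ref_model]}
echo "using reference model file: $ref_model_file"
```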

3. Running the framework

Apply the contrastive learning framework to seq2seq (or to transformer by replacing cl_seq2seq with cl_transformer):

# MODEL_NAME TO_MINIMIZE TASK PRETRAIN_STEPS SAMPLE_K CONTRAST_BY NAIVE_NEG_SAMPLING CL_THRESHOLD CL_ANNEAL ANNEAL_SPEED
export CUDA_VISIBLE_DEVICES=0; train_model cl_seq2seq to_minimize personachat_extend 5000 6 both False 0.5 True 1.0

Start training with bash projects/contrastive_learning/shell/run.sh.

Contact

Please reach me via email (caihengyi at ict dot ac dot cn) if anything is unclear.