
JasonForJoy / SA-BERT

Licence: other
CIKM 2020: Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots

Programming Languages

python: 139,335 projects (#7 most used programming language)
shell: 77,523 projects

Projects that are alternatives of or similar to SA-BERT

CheXbert
Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT
Stars: ✭ 51 (-28.17%)
Mutual labels:  bert
Sohu2019
2019 Sohu Campus Algorithm Competition
Stars: ✭ 26 (-63.38%)
Mutual labels:  bert
tensorflow-ml-nlp-tf2
Hands-on materials for the book Natural Language Processing with TensorFlow 2 and Machine Learning (from logistic regression to BERT and GPT-3)
Stars: ✭ 245 (+245.07%)
Mutual labels:  bert
neural-ranking-kd
Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation
Stars: ✭ 74 (+4.23%)
Mutual labels:  bert
oreilly-bert-nlp
This repository contains code for the O'Reilly Live Online Training for BERT
Stars: ✭ 19 (-73.24%)
Mutual labels:  bert
backprop
Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
Stars: ✭ 229 (+222.54%)
Mutual labels:  bert
TriB-QA
We are serious about bragging.
Stars: ✭ 45 (-36.62%)
Mutual labels:  bert
Romanian-Transformers
This repo is the home of Romanian Transformers.
Stars: ✭ 60 (-15.49%)
Mutual labels:  bert
DiscEval
Discourse Based Evaluation of Language Understanding
Stars: ✭ 18 (-74.65%)
Mutual labels:  bert
BertSimilarity
Computing the similarity of two sentences with Google's BERT: semantic similarity and text similarity computation.
Stars: ✭ 348 (+390.14%)
Mutual labels:  bert
Transformer-QG-on-SQuAD
Implement Question Generator with SOTA pre-trained Language Models (RoBERTa, BERT, GPT, BART, T5, etc.)
Stars: ✭ 28 (-60.56%)
Mutual labels:  bert
bert for corrector
Chinese text error correction based on BERT
Stars: ✭ 199 (+180.28%)
Mutual labels:  bert
FasterTransformer
Transformer related optimization, including BERT, GPT
Stars: ✭ 1,571 (+2112.68%)
Mutual labels:  bert
BiaffineDependencyParsing
BERT+Self-attention Encoder ; Biaffine Decoder ; Pytorch Implement
Stars: ✭ 67 (-5.63%)
Mutual labels:  bert
banglabert
This repository contains the official release of the model "BanglaBERT" and associated downstream finetuning code and datasets introduced in the paper titled "BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla" accepted in Findings of the Annual Conference of the North American Chap…
Stars: ✭ 186 (+161.97%)
Mutual labels:  bert
BERT-QE
Code and resources for the paper "BERT-QE: Contextualized Query Expansion for Document Re-ranking".
Stars: ✭ 43 (-39.44%)
Mutual labels:  bert
ganbert
Enhancing the BERT training with Semi-supervised Generative Adversarial Networks
Stars: ✭ 205 (+188.73%)
Mutual labels:  bert
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (-18.31%)
Mutual labels:  bert
JointIDSF
BERT-based joint intent detection and slot filling with intent-slot attention mechanism (INTERSPEECH 2021)
Stars: ✭ 55 (-22.54%)
Mutual labels:  bert
CAIL
Entry model for the CAIL 2019 (Challenge of AI in Law) reading comprehension task
Stars: ✭ 34 (-52.11%)
Mutual labels:  bert

Speaker-Aware BERT for Multi-Turn Response Selection

This repository contains the source code and pre-trained models for the CIKM 2020 paper Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots by Gu et al.
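
As the title suggests, the model makes BERT aware of which speaker produced each utterance in the multi-turn context by adding a speaker embedding on top of BERT's token, position, and segment embeddings. Below is a minimal sketch of that input representation; the tiny table sizes and IDs are illustrative only, and this is not the repository's actual code:

import numpy as np

VOCAB, MAXLEN, HIDDEN = 100, 32, 16  # tiny illustrative sizes, not BERT-base's

# Illustrative embedding tables; in the model these are learned parameters.
token_emb    = np.random.randn(VOCAB, HIDDEN)   # WordPiece vocabulary
position_emb = np.random.randn(MAXLEN, HIDDEN)  # token positions
segment_emb  = np.random.randn(2, HIDDEN)       # 0 = context, 1 = response
speaker_emb  = np.random.randn(2, HIDDEN)       # speaker-aware addition: one vector per speaker

def input_representation(token_ids, segment_ids, speaker_ids):
    # Standard BERT sums token + position + segment embeddings;
    # the speaker-aware model additionally adds a speaker embedding per token.
    positions = np.arange(len(token_ids))
    return (token_emb[token_ids]
            + position_emb[positions]
            + segment_emb[segment_ids]
            + speaker_emb[speaker_ids])

# A two-turn context (speaker 0, then speaker 1) and a response candidate.
token_ids   = [1, 12, 7, 2, 33, 9, 2, 41, 2]
segment_ids = [0, 0,  0, 0, 0,  0, 0, 1,  1]
speaker_ids = [0, 0,  0, 1, 1,  1, 1, 0,  0]
print(input_representation(token_ids, segment_ids, speaker_ids).shape)  # (9, 16)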

Dependencies

Python 3.6
Tensorflow 1.13.1

Download

Download the uncased BERT-base model (uncased_L-12_H-768_A-12) released by Google and move it to the path ./uncased_L-12_H-768_A-12. Also download the Ubuntu Corpus V1 dataset and move it to the path ./data/Ubuntu_V1_Xu/.

Adaptation

Create the adaptation data.

cd data/Ubuntu_V1_Xu/
python create_adaptation_data.py 

Run the adaptation process.

cd scripts/
bash adaptation.sh

The adapted model will be saved to the path ./uncased_L-12_H-768_A-12_adapted.
Rename the files in this folder so that they match the filenames in Google's original BERT release, as sketched below.
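
A minimal renaming sketch in Python, assuming adaptation saved a single checkpoint; the step number in the filenames (e.g. model.ckpt-100000) is hypothetical and depends on how long adaptation ran:

import glob
import os

adapted_dir = "./uncased_L-12_H-768_A-12_adapted"

# Adaptation writes files such as model.ckpt-100000.index, while Google's
# release names them bert_model.ckpt.index, bert_model.ckpt.meta, and
# bert_model.ckpt.data-00000-of-00001.
for path in glob.glob(os.path.join(adapted_dir, "model.ckpt-*")):
    # "model.ckpt-100000.data-00000-of-00001" -> "data-00000-of-00001"
    parts = os.path.basename(path).split(".", 2)
    if len(parts) == 3:
        os.rename(path, os.path.join(adapted_dir, "bert_model.ckpt." + parts[2]))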

Training

Create the fine-tuning data.

cd data/Ubuntu_V1_Xu/
python create_finetuning_data.py 

Run the fine-tuning process.

cd scripts/
bash ubuntu_train.sh

Testing

Modify the variable restore_model_dir in ubuntu_test.sh.

cd scripts/
bash ubuntu_test.sh

A "output_test.txt" file which records scores for each context-response pair will be saved to the path of restore_model_dir.
Modify the variable test_out_filename in compute_metrics.py and then run the following command, various metrics will be shown.

python compute_metrics.py
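
For reference, here is a minimal sketch of the Rn@k recall metric that response-selection papers report. It assumes 10 scored candidates per context with the ground-truth response listed first (the usual convention for the Ubuntu test set) and is an illustration, not the repository's actual compute_metrics.py:

def recall_at_k(scores, k, group_size=10):
    """R_group_size@k: fraction of contexts whose ground-truth response
    (candidate 0 in each group) is ranked within the top k by score."""
    hits = 0
    groups = [scores[i:i + group_size] for i in range(0, len(scores), group_size)]
    for group in groups:
        rank = sorted(group, reverse=True).index(group[0]) + 1  # rank of the positive
        hits += int(rank <= k)
    return hits / len(groups)

# Toy scores for two contexts with 10 candidates each, e.g. parsed
# from output_test.txt (one score per candidate is assumed here).
scores = [0.9, 0.1, 0.3, 0.2, 0.1, 0.4, 0.6, 0.2, 0.1, 0.3,
          0.2, 0.8, 0.1, 0.4, 0.3, 0.2, 0.5, 0.1, 0.6, 0.7]
for k in (1, 2, 5):
    print("R10@%d = %.2f" % (k, recall_at_k(scores, k)))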

Cite

If you use the source code or pre-trained models, please cite the following paper: "Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots" by Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu. CIKM 2020.

@inproceedings{Gu:2020:SABERT:3340531.3412330,
 author = {Gu, Jia-Chen and 
           Li, Tianda and
           Liu, Quan and
           Ling, Zhen-Hua and
           Su, Zhiming and 
           Wei, Si and
           Zhu, Xiaodan
           },
 title = {Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots},
 booktitle = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management},
 series = {CIKM '20},
 year = {2020},
 isbn = {978-1-4503-6859-9},
 location = {Virtual Event, Ireland},
 pages = {2041--2044},
 url = {http://doi.acm.org/10.1145/3340531.3412330},
 doi = {10.1145/3340531.3412330},
 acmid = {3412330},
 publisher = {ACM},
}

Update

Please feel free to open an issue if you run into any problems.
