VinAIResearch / JointIDSF

License: AGPL-3.0
BERT-based joint intent detection and slot filling with intent-slot attention mechanism (INTERSPEECH 2021)

Programming Languages

  • Python
  • Shell

Projects that are alternatives to or similar to JointIDSF

ChineseNER
All about Chinese NER
Stars: ✭ 241 (+338.18%)
Mutual labels:  bert, multitask-learning
Deeppavlov
An open source library for deep learning end-to-end dialog systems and chatbots.
Stars: ✭ 5,525 (+9945.45%)
Mutual labels:  slot-filling, intent-detection
OpenUE
OpenUE is a lightweight toolkit for knowledge graph extraction (An Open Toolkit for Universal Extraction from Text, published at EMNLP 2020: https://aclanthology.org/2020.emnlp-demos.1.pdf)
Stars: ✭ 274 (+398.18%)
Mutual labels:  bert, slot-filling
vietnamese-roberta
A Robustly Optimized BERT Pretraining Approach for Vietnamese
Stars: ✭ 22 (-60%)
Mutual labels:  vietnamese, bert
GEANet-BioMed-Event-Extraction
Code for the paper Biomedical Event Extraction with Hierarchical Knowledge Graphs
Stars: ✭ 52 (-5.45%)
Mutual labels:  bert, multitask-learning
dnn.cool
A framework for multi-task learning, where you may precondition tasks and compose them into bigger tasks. Conditional objectives and per-task evaluations and interpretations.
Stars: ✭ 44 (-20%)
Mutual labels:  multitask-learning
Sohu2019
2019 Sohu Campus Algorithm Competition
Stars: ✭ 26 (-52.73%)
Mutual labels:  bert
neural-ranking-kd
Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation
Stars: ✭ 74 (+34.55%)
Mutual labels:  bert
CheXbert
Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT
Stars: ✭ 51 (-7.27%)
Mutual labels:  bert
tensorflow-ml-nlp-tf2
Hands-on materials for "Natural Language Processing with TensorFlow 2 and Machine Learning (from logistic regression to BERT and GPT-3)"
Stars: ✭ 245 (+345.45%)
Mutual labels:  bert
FasterTransformer
Transformer related optimization, including BERT, GPT
Stars: ✭ 1,571 (+2756.36%)
Mutual labels:  bert
vietTTS
Vietnamese Text to Speech library
Stars: ✭ 78 (+41.82%)
Mutual labels:  vietnamese
question generator
An NLP system for generating reading comprehension questions
Stars: ✭ 188 (+241.82%)
Mutual labels:  bert
MTL-AQA
What and How Well You Performed? A Multitask Learning Approach to Action Quality Assessment [CVPR 2019]
Stars: ✭ 38 (-30.91%)
Mutual labels:  multitask-learning
Transformer-QG-on-SQuAD
Implement Question Generator with SOTA pre-trained Language Models (RoBERTa, BERT, GPT, BART, T5, etc.)
Stars: ✭ 28 (-49.09%)
Mutual labels:  bert
CAIL
Model submitted to the CAIL 2019 (Challenge of AI in Law) reading comprehension task
Stars: ✭ 34 (-38.18%)
Mutual labels:  bert
BiaffineDependencyParsing
BERT+Self-attention Encoder ; Biaffine Decoder ; Pytorch Implement
Stars: ✭ 67 (+21.82%)
Mutual labels:  bert
Soft-Module
Code for "Multi-task Reinforcement Learning with Soft Modularization"
Stars: ✭ 71 (+29.09%)
Mutual labels:  multitask-learning
backprop
Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
Stars: ✭ 229 (+316.36%)
Mutual labels:  bert
AuxiLearn
Official implementation of Auxiliary Learning by Implicit Differentiation [ICLR 2021]
Stars: ✭ 71 (+29.09%)
Mutual labels:  multitask-learning

JointIDSF: Joint intent detection and slot filling

  • We propose a joint model (namely, JointIDSF) for intent detection and slot filling, which extends the recent state-of-the-art JointBERT+CRF model with an intent-slot attention layer that explicitly incorporates intent context information into slot filling via "soft" intent label embedding (a minimal sketch of this idea follows the model figure below).
  • We also introduce the first public intent detection and slot filling dataset for Vietnamese.
  • Experimental results on our Vietnamese dataset show that our proposed model significantly outperforms JointBERT+CRF.

(Figure: Overview of the JointIDSF model architecture)
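
To make the "soft" intent label embedding idea concrete, below is a minimal PyTorch sketch of one way such an intent-slot attention layer can be wired. This is our own illustration rather than the official implementation: the class and variable names are hypothetical, and the exact attention formulation in the paper differs in detail.

    import torch
    import torch.nn as nn

    class SoftIntentSlotAttention(nn.Module):
        """Sketch: fuse a soft intent-label embedding into token states."""

        def __init__(self, hidden_size, num_intents, attn_size=200):
            super().__init__()
            self.intent_classifier = nn.Linear(hidden_size, num_intents)
            # One trainable embedding per intent label ("soft" label embedding).
            self.intent_label_embedding = nn.Embedding(num_intents, attn_size)
            self.token_projection = nn.Linear(hidden_size, attn_size)

        def forward(self, pooled_output, token_states):
            # pooled_output: (batch, hidden)          sentence-level encoder output
            # token_states:  (batch, seq_len, hidden) token-level encoder outputs
            intent_logits = self.intent_classifier(pooled_output)
            soft_intent = torch.softmax(intent_logits, dim=-1)  # (batch, num_intents)
            # Expected intent embedding under the predicted label distribution.
            intent_context = soft_intent @ self.intent_label_embedding.weight
            # Inject the intent context into every token representation; a slot
            # classifier + CRF would consume `fused` downstream.
            fused = self.token_projection(token_states) + intent_context.unsqueeze(1)
            return intent_logits, fused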

Details of our JointIDSF model architecture, dataset construction and experimental results can be found in our paper:

@inproceedings{JointIDSF,
    title     = {{Intent Detection and Slot Filling for Vietnamese}},
    author    = {Mai Hoang Dao and Thinh Hung Truong and Dat Quoc Nguyen},
    booktitle = {Proceedings of the 22nd Annual Conference of the International Speech Communication Association (INTERSPEECH)},
    year      = {2021}
}

Please CITE our paper whenever our dataset or model implementation is used to help produce published results or incorporated into other software.

Dataset

(Figure: Dataset statistics)

By downloading our dataset, USER agrees:

  • to use the dataset for research or educational purposes only.
  • to not distribute the dataset or part of the dataset in any original or modified form.
  • and to cite our paper above whenever the dataset is employed to help produce published results.

Model installation, training and evaluation

Installation

  • Python version >= 3.6
  • PyTorch version >= 1.4.0

    git clone https://github.com/VinAIResearch/JointIDSF.git
    cd JointIDSF/
    pip3 install -r requirements.txt

Training and Evaluation

Run the following two bash files to reproduce results presented in our paper:

    ./run_jointIDSF_PhoBERTencoder.sh
    ./run_jointIDSF_XLM-Rencoder.sh
  • These bash files include running scripts to train both our JointIDSF and the baseline JointBERT+CRF.
  • Although we conduct experiments on our Vietnamese dataset, the running scripts in run_jointIDSF_XLM-Rencoder.sh can be adapted to other languages that have gold-annotated corpora for intent detection and slot filling. Please prepare your data in the same format as in the data directory (an example layout is sketched after this list).
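
For reference, JointBERT-style repositories typically organize each data split as three aligned plain-text files. The layout below is an assumption based on that convention; check the data directory in this repository for the authoritative structure:

    data/
    └── <dataset_name>/
        ├── train/
        │   ├── seq.in    # one utterance per line, tokens separated by spaces
        │   ├── seq.out   # per-token BIO slot tags, aligned with seq.in
        │   └── label     # one intent label per line
        ├── dev/          # same three files
        └── test/         # same three files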

Inference

We also provide model checkpoints of JointBERT+CRF and JointIDSF. Please download these checkpoints if you want to run inference on a new text file without training the models from scratch (example download commands follow the links below).

  • JointIDSF

http://public.vinai.io/JointIDSF_PhoBERTencoder.tar.gz

http://public.vinai.io/JointIDSF_XLM-Rencoder.tar.gz

  • JointBERT+CRF

http://public.vinai.io/JointBERT-CRF_PhoBERTencoder.tar.gz

http://public.vinai.io/JointBERT-CRF_XLM-Rencoder.tar.gz
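
For example, one way to fetch and unpack a checkpoint with standard wget and tar (assuming the archive extracts to a folder matching the --model_dir argument used below; swap in the URL of the checkpoint you need):

    wget http://public.vinai.io/JointIDSF_XLM-Rencoder.tar.gz
    tar -xzf JointIDSF_XLM-Rencoder.tar.gz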

Example of tagging a new text file using the JointIDSF model:

python3 predict.py  --input_file <path_to_input_file> \
                    --output_file <output_file_name> \
                    --model_dir JointIDSF_XLM-Rencoder

where the input file is a raw text file (one utterance per line).
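
For instance, with a hypothetical input file sample_input.txt (the utterances below are made-up illustrations, not taken from the dataset):

    # sample_input.txt contains raw utterances, one per line, e.g.:
    #   cho tôi biết các chuyến bay từ Hà Nội đến Đà Nẵng ngày mai
    #   đặt cho tôi một vé hạng phổ thông
    python3 predict.py  --input_file sample_input.txt \
                        --output_file sample_output.txt \
                        --model_dir JointIDSF_XLM-Rencoder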

Acknowledgement

Our code is based on the unofficial implementation of the JointBERT+CRF paper from https://github.com/monologg/JointBERT.
