
ChenRocks / Distill-BERT-Textgen

License: MIT
Research code for ACL 2020 paper: "Distilling Knowledge Learned in BERT for Text Generation".

Programming Languages

python
shell
Dockerfile

Projects that are alternatives of or similar to Distill-BERT-Textgen

kdtf
Knowledge Distillation using Tensorflow
Stars: ✭ 139 (+14.88%)
Mutual labels:  knowledge-distillation
osdg-tool
OSDG is an open-source tool that maps and connects activities to the UN Sustainable Development Goals (SDGs) by identifying SDG-relevant content in any text. The tool is available online at www.osdg.ai. API access available for research purposes.
Stars: ✭ 22 (-81.82%)
Mutual labels:  machine-translation
tvsub
TVsub: DCU-Tencent Chinese-English Dialogue Corpus
Stars: ✭ 40 (-66.94%)
Mutual labels:  machine-translation
transformer
Build English-Vietnamese machine translation with ProtonX Transformer. :D
Stars: ✭ 41 (-66.12%)
Mutual labels:  machine-translation
head-network-distillation
[IEEE Access] "Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-constrained Edge Computing Systems" and [ACM MobiCom HotEdgeVideo 2019] "Distilled Split Deep Neural Networks for Edge-assisted Real-time Systems"
Stars: ✭ 27 (-77.69%)
Mutual labels:  knowledge-distillation
BAKE
Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification
Stars: ✭ 79 (-34.71%)
Mutual labels:  knowledge-distillation
Opennmt
Open Source Neural Machine Translation in Torch (deprecated)
Stars: ✭ 2,339 (+1833.06%)
Mutual labels:  machine-translation
OPUS-MT-train
Training open neural machine translation models
Stars: ✭ 166 (+37.19%)
Mutual labels:  machine-translation
distill-and-select
Authors' official PyTorch implementation of the "DnS: Distill-and-Select for Efficient and Accurate Video Indexing and Retrieval" [IJCV 2022]
Stars: ✭ 43 (-64.46%)
Mutual labels:  knowledge-distillation
Transformer Temporal Tagger
Code and data from the paper BERT Got a Date: Introducing Transformers to Temporal Tagging
Stars: ✭ 55 (-54.55%)
Mutual labels:  bert-model
FinBERT-QA
Financial Domain Question Answering with pre-trained BERT Language Model
Stars: ✭ 70 (-42.15%)
Mutual labels:  bert-model
sb-nmt
Code for Synchronous Bidirectional Neural Machine Translation (SB-NMT)
Stars: ✭ 66 (-45.45%)
Mutual labels:  machine-translation
awesome-efficient-gnn
Code and resources on scalable and efficient Graph Neural Networks
Stars: ✭ 498 (+311.57%)
Mutual labels:  knowledge-distillation
ibleu
A visual and interactive scoring environment for machine translation systems.
Stars: ✭ 27 (-77.69%)
Mutual labels:  machine-translation
R-BERT
Pytorch re-implementation of R-BERT model
Stars: ✭ 59 (-51.24%)
Mutual labels:  bert-model
Modernmt
Neural Adaptive Machine Translation that adapts to context and learns from corrections.
Stars: ✭ 231 (+90.91%)
Mutual labels:  machine-translation
Object-Detection-Knowledge-Distillation
An Object Detection Knowledge Distillation framework powered by PyTorch, now supporting SSD and YOLOv5.
Stars: ✭ 189 (+56.2%)
Mutual labels:  knowledge-distillation
extreme-adaptation-for-personalized-translation
Code for the paper "Extreme Adaptation for Personalized Neural Machine Translation"
Stars: ✭ 42 (-65.29%)
Mutual labels:  machine-translation
LabelRelaxation-CVPR21
Official PyTorch Implementation of Embedding Transfer with Label Relaxation for Improved Metric Learning, CVPR 2021
Stars: ✭ 37 (-69.42%)
Mutual labels:  knowledge-distillation
bergamot-translator
Cross-platform C++ library focused on optimized machine translation on consumer-grade devices.
Stars: ✭ 181 (+49.59%)
Mutual labels:  machine-translation

Distill-BERT-Textgen

Research code for ACL 2020 paper: "Distilling Knowledge Learned in BERT for Text Generation".

Overview

This repository contains the code needed to reproduce our IWSLT De-En experiments.

Setting Up

This repo is tested on an Ubuntu 18.04 machine with an Nvidia GPU. We do not plan to support other OSes or CPU-only machines.

  1. Prerequisite

    • Docker

      You also need to follow this to run Docker without sudo.

    • nvidia-driver (we tested on version 418)

      # reference installation command on Ubuntu
      sudo add-apt-repository ppa:graphics-drivers/ppa
      sudo apt update
      sudo apt install nvidia-driver-418
    • nvidia-docker

    • clone this repo and its submodule (we use a modified version of OpenNMT-py)

      git clone --recursive git@github.com:ChenRocks/Distill-BERT-Textgen.git

    Users can potentially set up a non-Docker environment by following the Dockerfile to install the Python packages and other dependencies. However, to guarantee reproducibility, it is safest to use our official Docker image, and we will not provide official support/troubleshooting if you do not use the dockerized setup. (If you absolutely need a non-Docker install, feel free to discuss it with other users in a GitHub issue; contributions are welcome.)

  2. Downloading Data and Preprocessing

    • Run the following command to download the raw data and preprocess it:
      source scripts/setup.sh <data_folder>
      You should then see <data_folder> populated with files in the following structure.
      ├── download
      │   ├── de-en
      │   └── de-en.tgz
      ├── dump
      │   └── de-en
      │       ├── DEEN.db.bak
      │       ├── DEEN.db.dat
      │       ├── DEEN.db.dir
      │       ├── DEEN.train.0.pt
      │       ├── DEEN.valid.0.pt
      │       ├── DEEN.vocab.pt
      │       ├── dev.de.bert
      │       ├── dev.en.bert
      │       ├── ref
      │       ├── test.de.bert
      │       └── test.en.bert
      ├── raw
      │   └── de-en
      └── tmp
          └── de-en
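      The DEEN.db.{bak,dat,dir} triple and the *.pt files can be sanity-checked from Python. The snippet below is a minimal sketch that assumes the .db files are a standard-library shelve database and that the .pt files were written with torch.save; both are assumptions inferred from the file extensions, not documented guarantees of the preprocessing script.

      # Sketch: peek at the preprocessed dump (format assumptions noted below).
      import shelve
      import torch

      DUMP = "dump/de-en"  # relative to <data_folder>

      # Assumption: DEEN.db.{bak,dat,dir} is a shelve (dbm.dumb) database of examples.
      with shelve.open(f"{DUMP}/DEEN.db", flag="r") as db:
          keys = list(db.keys())
          print(f"{len(keys)} records, first key: {keys[0]}")

      # Assumption: DEEN.vocab.pt loads with torch.load; unpickling may require the
      # repo's OpenNMT-py fork (and torchtext) to be importable.
      vocab = torch.load(f"{DUMP}/DEEN.vocab.pt")
      print(type(vocab))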
      

Usage

First, launch the docker container

source launch_container.sh <data_folder> <output_folder>

This will mount <data_folder>/dump (which contains the preprocessed data), <output_folder> (which stores experiment outputs), and the repo itself (so that any code you change is reflected inside the container). All of the following commands in this section should be run inside the Docker container. To exit the container, type exit or press Ctrl+D.
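A quick way to confirm that the GPU is actually visible inside the container is the PyTorch check below (a small sketch; it only assumes PyTorch is installed in the image, which the training scripts require anyway).

# Quick GPU visibility check from inside the container.
import torch

print("CUDA available:", torch.cuda.is_available())
print("Visible GPUs:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("Device 0:", torch.cuda.get_device_name(0))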

  1. Training

    1. C-MLM finetuning
      python run_cmlm_finetuning.py --train_file /data/de-en/DEEN.db \
                                  --vocab_file /data/de-en/DEEN.vocab.pt \
                                  --valid_src /data/de-en/dev.de.bert \
                                  --valid_tgt /data/de-en/dev.en.bert \
                                  --bert_model bert-base-multilingual-cased \
                                  --output_dir /output/<exp_name> \
                                  --train_batch_size 16384 \
                                  --learning_rate 5e-5 \
                                  --valid_steps 5000 \
                                  --num_train_steps 100000 \
                                  --warmup_proportion 0.05 \
                                  --gradient_accumulation_steps 1 \
                                  --fp16
    2. Extract teacher soft label
      # extract hidden states of teacher
      python dump_teacher_hiddens.py \
          --bert bert-base-multilingual-cased \
          --ckpt /output/<exp_name>/ckpt/model_step_100000.pt \
          --db /data/de-en/DEEN.db --output /data/de-en/targets/<teacher_name>
      
      # extract top-k logits
      python dump_teacher_topk.py --bert_hidden /data/de-en/targets/<teacher_name>
    3. Seq2Seq training with KD
      python opennmt/train.py \
          --bert_kd \
          --bert_dump /data/de-en/targets/<teacher_name> \
          --data_db /data/de-en/DEEN.db \
          -data /data/de-en/DEEN \
          -config opennmt/config/config-transformer-base-mt-deen.yml \
          -learning_rate 2.0 \
          -warmup_steps 8000 \
          --kd_alpha 0.5 \
          --kd_temperature 10.0 \
          --kd_topk 8 \
          --train_steps 100000 \
          -save_model /output/<kd_exp_name>
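    Taken together, steps 2 and 3 implement soft-label knowledge distillation from the C-MLM teacher: the student mixes the usual cross-entropy against the reference with a temperature-softened cross-entropy against the teacher's top-k distribution. The sketch below illustrates the objective implied by --kd_alpha, --kd_temperature, and --kd_topk; it is a hedged illustration with made-up tensor names, not the repo's actual implementation (whose exact mixing convention may differ).

      # Hedged sketch of the KD objective implied by --kd_alpha/--kd_temperature/--kd_topk.
      # Variable names and shapes are illustrative only.
      import torch
      import torch.nn.functional as F

      def kd_loss(student_logits, gold_ids, teacher_topk_logits, teacher_topk_ids,
                  alpha=0.5, temperature=10.0):
          # student_logits: (N, vocab); gold_ids: (N,)
          # teacher_topk_logits / teacher_topk_ids: (N, k) dumped from the C-MLM teacher
          ce = F.cross_entropy(student_logits, gold_ids)                  # hard-label term

          t_probs = F.softmax(teacher_topk_logits / temperature, dim=-1)  # teacher soft labels
          s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
          s_topk = s_log_probs.gather(-1, teacher_topk_ids)               # student log-probs at teacher's top-k
          soft = -(t_probs * s_topk).sum(dim=-1).mean()                   # soft-label term

          return alpha * soft + (1.0 - alpha) * ce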
  2. Inference and Evaluation

    The following command will translate the dev split using the 100k step checkpoint, with beam size 5 and length penalty 0.6.

    ./run_mt.sh /output/<kd_exp_name> 100000 dev 5 0.6

    Usually, the BLEU score correlates well with the validation accuracy. The results will be stored at /output/<kd_exp_name>/output/.
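    The last two arguments of run_mt.sh are the beam size and the length-penalty alpha. A common convention for applying such an alpha, used by OpenNMT-py's Wu et al. (2016) style penalty, rescales each finished hypothesis as in the sketch below; this is an assumption about how the 0.6 is consumed, not a statement about run_mt.sh's internals.

      # Hedged sketch of length-penalized beam rescoring (Wu et al., 2016 convention).
      def length_penalty(length: int, alpha: float = 0.6) -> float:
          return ((5.0 + length) / 6.0) ** alpha

      def rescore(sum_log_prob: float, length: int, alpha: float = 0.6) -> float:
          # Dividing a (negative) log-probability by a larger penalty raises the score,
          # so larger alpha favors longer hypotheses; alpha = 0 disables normalization.
          return sum_log_prob / length_penalty(length, alpha)

      # Example: the longer hypothesis wins after rescoring despite a lower raw log-probability.
      print(rescore(-8.0, length=10), rescore(-8.6, length=14))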

Misc

  • We tested on a single Nvidia Titan RTX GPU, which has 24GB of memory. If you encounter OOM errors, try decreasing the batch size and increasing the gradient accumulation steps (see the sketch after this list).
  • If you have a multi-GPU machine, use CUDA_VISIBLE_DEVICES to specify the GPU you want to use before launching the Docker container. Otherwise, only GPU 0 will be used.
  • Feel free to ask questions and discuss in the GitHub issues.
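  The OOM tip above boils down to trading batch size for gradient-accumulation steps while keeping the effective batch size constant. A generic PyTorch sketch of that loop (not the repo's trainer):

  # Generic gradient-accumulation loop: effective batch = micro_batch * accum_steps.
  import torch

  model = torch.nn.Linear(10, 2)                       # stand-in model
  optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
  loss_fn = torch.nn.CrossEntropyLoss()
  accum_steps = 4

  optimizer.zero_grad()
  for step in range(100):
      x = torch.randn(8, 10)                           # small micro-batch that fits in memory
      y = torch.randint(0, 2, (8,))
      loss = loss_fn(model(x), y) / accum_steps        # scale so accumulated grads average
      loss.backward()
      if (step + 1) % accum_steps == 0:
          optimizer.step()
          optimizer.zero_grad()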

Citation

If you find this work helpful to your research, please consider citing:

@inproceedings{chen2020distilling,
  title={Distilling Knowledge Learned in BERT for Text Generation},
  author={Chen, Yen-Chun and Gan, Zhe and Cheng, Yu and Liu, Jingzhou and Liu, Jingjing},
  booktitle={ACL},
  year={2020}
}