KLUE-benchmark / KLUE

License: CC-BY-SA-4.0
📖 Korean NLU Benchmark

Projects that are alternatives to or similar to KLUE

BERT-embedding
A simple wrapper class for extracting features (embeddings) and comparing them using BERT in TensorFlow
Stars: ✭ 24 (-94.29%)
Mutual labels:  korean, bert, korean-nlp
Clue
Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus and leaderboard
Stars: ✭ 2,425 (+477.38%)
Mutual labels:  benchmark, bert, roberta
tensorflow-ml-nlp-tf2
Hands-on materials for "NLP with TensorFlow 2 and Machine Learning (from logistic regression to BERT and GPT-3)"
Stars: ✭ 245 (-41.67%)
Mutual labels:  bert, korean-nlp
roberta-wwm-base-distill
A RoBERTa-wwm-base model distilled from RoBERTa-wwm-large
Stars: ✭ 61 (-85.48%)
Mutual labels:  bert, roberta
Text-Summarization
Abstractive and Extractive Text summarization using Transformers.
Stars: ✭ 38 (-90.95%)
Mutual labels:  bert, roberta
les-military-mrc-rank7
LES Cup: Rank 7 solution for the 2nd National "Military Intelligent Machine Reading" Challenge
Stars: ✭ 37 (-91.19%)
Mutual labels:  bert, roberta
Transformer-QG-on-SQuAD
Implement Question Generator with SOTA pre-trained Language Models (RoBERTa, BERT, GPT, BART, T5, etc.)
Stars: ✭ 28 (-93.33%)
Mutual labels:  bert, roberta
PyKOMORAN
(Beta) PyKOMORAN is wrapped KOMORAN in Python using Py4J.
Stars: ✭ 38 (-90.95%)
Mutual labels:  korean, korean-nlp
kss
Kss: A Toolkit for Korean sentence segmentation
Stars: ✭ 198 (-52.86%)
Mutual labels:  korean, korean-nlp
Tianchi2020ChineseMedicineQuestionGeneration
2020 Alibaba Cloud Tianchi Big Data Competition: Traditional Chinese Medicine Literature Question Generation Challenge
Stars: ✭ 20 (-95.24%)
Mutual labels:  bert, roberta
KoEDA
Korean Easy Data Augmentation
Stars: ✭ 62 (-85.24%)
Mutual labels:  korean, korean-nlp
hangul-search-js
🇰🇷 Simple Korean text search module
Stars: ✭ 22 (-94.76%)
Mutual labels:  korean, korean-nlp
vietnamese-roberta
A Robustly Optimized BERT Pretraining Approach for Vietnamese
Stars: ✭ 22 (-94.76%)
Mutual labels:  bert, roberta
DiscEval
Discourse Based Evaluation of Language Understanding
Stars: ✭ 18 (-95.71%)
Mutual labels:  benchmark, bert
KoSpacing
Automatic Korean word spacing with R
Stars: ✭ 76 (-81.9%)
Mutual labels:  korean, korean-nlp
g2pK
g2pK: g2p module for Korean
Stars: ✭ 137 (-67.38%)
Mutual labels:  korean, korean-nlp
Filipino-Text-Benchmarks
Open-source benchmark datasets and pretrained transformer models in the Filipino language.
Stars: ✭ 22 (-94.76%)
Mutual labels:  benchmark, bert
FewCLUE
FewCLUE: a few-shot learning evaluation benchmark for Chinese
Stars: ✭ 251 (-40.24%)
Mutual labels:  benchmark, bert
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (-94.29%)
Mutual labels:  bert, roberta
KAREN
KAREN: Unifying Hatespeech Detection and Benchmarking
Stars: ✭ 18 (-95.71%)
Mutual labels:  benchmark, bert

KLUE: Korean Language Understanding Evaluation

KLUE is introduced to advance Korean NLP. Korean pre-trained language models (PLMs) have emerged to tackle Korean NLP problems, since PLMs have brought significant performance gains on NLP problems in other languages. Despite the proliferation of Korean language models, however, no proper evaluation dataset has been released. The lack of such a benchmark limits fair comparison between models and further progress on model architectures.

Along with the benchmark tasks and data, we provide suitable evaluation metrics and fine-tuning recipes for pretrained language models for each task. We furthermore release the PLMs, KLUE-BERT and KLUE-RoBERTa, to help reproduce baseline models on KLUE and thereby facilitate future research.

See our paper for more details.

Design Principles

In designing the Korean Language Understanding Evaluation (KLUE) benchmark, we aim for KLUE to:

  1. cover diverse tasks and corpora,
  2. be accessible to everyone without any restriction,
  3. include accurate and unambiguous annotations, and
  4. mitigate AI ethical issues.

Benchmark Datasets

The KLUE benchmark is composed of 8 tasks:

  • Topic Classification (TC)
  • Semantic Textual Similarity (STS)
  • Natural Language Inference (NLI)
  • Named Entity Recognition (NER)
  • Relation Extraction (RE)
  • Dependency Parsing (DP)
  • Machine Reading Comprehension (MRC)
  • Dialogue State Tracking (DST)

See the wiki for dataset descriptions.

NOTE: The paper describes in more detail how our four principles guided the creation of KLUE, from task selection, corpus selection, annotation protocols, and evaluation metrics to baseline construction.
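To make the task format concrete, here is a minimal sketch of what a single NLI instance looks like. The field names and label order below are illustrative assumptions for this example, not the exact dataset schema:

```python
# Illustrative sketch of one KLUE-NLI instance. Field names and label order
# are assumptions for this example, not the official dataset schema.
NLI_LABELS = ["entailment", "neutral", "contradiction"]

example = {
    "premise": "흡연은 건강에 해롭다.",      # a Korean premise sentence
    "hypothesis": "흡연은 몸에 좋지 않다.",  # a hypothesis judged against the premise
    "label": 0,                              # index into NLI_LABELS
}

def label_name(instance: dict) -> str:
    """Return the human-readable label for an NLI instance."""
    return NLI_LABELS[instance["label"]]
```

The other sentence-level tasks (TC, STS) follow the same pattern: one or two input sentences plus a class index or similarity score.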

KLUE-PLMs

We have trained two families of models, KLUE-BERT and KLUE-RoBERTa:

Model                Embedding size   Hidden size   # Layers   # Heads
KLUE-BERT-base       768              768           12         12
KLUE-RoBERTa-small   768              768           6          12
KLUE-RoBERTa-base    768              768           12         12
KLUE-RoBERTa-large   1024             1024          24         16

NOTE: All pretrained models are uploaded to the Hugging Face Model Hub: https://huggingface.co/klue.
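With the `transformers` library, any of these checkpoints can be pulled from the Hub by its identifier. A minimal sketch, where the short-name mapping is just a convenience introduced for this example:

```python
# Hub identifiers for the released KLUE PLMs under the "klue" organization.
KLUE_PLMS = {
    "bert-base": "klue/bert-base",
    "roberta-small": "klue/roberta-small",
    "roberta-base": "klue/roberta-base",
    "roberta-large": "klue/roberta-large",
}

def hub_id(short_name: str) -> str:
    """Map a short model name to its Hugging Face Hub identifier."""
    return KLUE_PLMS[short_name]

# With `transformers` installed (network access or cached weights required):
#   from transformers import AutoTokenizer, AutoModel
#   tokenizer = AutoTokenizer.from_pretrained(hub_id("roberta-base"))
#   model = AutoModel.from_pretrained(hub_id("roberta-base"))
```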

Baseline Scores

Evaluation results of our PLMs and other baselines on the KLUE benchmark. Bold marks the best performance across all models, and italic marks the best among BASE-size models.

Model                TC      STS             NLI     NER             RE              DP              MRC             DST
                     F1      r       F1      ACC     ent F1  chr F1  F1      AUPRC   UAS     LAS     EM      ROUGE   JGA     Slot F1
mBERT-base           81.55   84.66   76.00   73.20   76.50   89.23   57.88   53.82   90.30   86.66   44.66   55.92   35.46   88.63
XLM-R-base           83.52   89.16   82.01   77.33   80.37   92.12   57.46   54.98   89.20   87.69   27.48   53.93   39.82   89.61
XLM-R-large          86.06   92.97   85.86   85.93   82.27   93.22   58.39   61.15   92.71   88.70   35.99   66.77   41.20   89.80
KR-BERT-base         84.58   88.61   81.07   77.17   74.58   90.13   62.74   60.94   89.92   87.48   48.28   58.54   45.33   90.70
koELECTRA-base       84.59   92.46   84.84   85.63   86.11   92.56   62.85   58.94   92.90   87.77   59.82   66.05   41.58   89.60
KLUE-BERT-base       85.73   90.85   82.84   81.63   83.97   91.39   66.44   66.17   89.96   88.05   62.32   68.51   46.64   91.61
KLUE-RoBERTa-small   84.98   91.54   85.16   79.33   83.65   91.14   60.89   58.96   90.04   88.14   57.32   62.70   46.62   91.44
KLUE-RoBERTa-base    85.07   92.50   85.40   84.83   84.60   91.44   67.65   68.55   93.04   88.32   68.67   73.98   47.49   91.64
KLUE-RoBERTa-large   85.69   93.35   86.63   89.17   85.00   91.86   71.13   72.98   93.48   88.36   75.58   80.59   50.22   92.23

(r = Pearson's r; ent F1 / chr F1 = entity-level / character-level F1)
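Two of the metrics above are easy to sketch in plain Python: Pearson's r for STS and joint goal accuracy (JGA) for DST. These are minimal reference implementations for illustration, not the official evaluation scripts:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists (STS metric)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def joint_goal_accuracy(predicted, gold):
    """Fraction of dialogue turns whose predicted state matches the gold state
    exactly (DST metric); each state is a collection of slot=value strings."""
    hits = sum(1 for p, g in zip(predicted, gold) if set(p) == set(g))
    return hits / len(gold)
```

For example, `pearson_r([1, 2, 3], [2, 4, 6])` is 1.0 (perfectly correlated), and a dialogue where one of two turns has its full state predicted correctly yields a JGA of 0.5. JGA is deliberately all-or-nothing per turn, which is why it is much lower than the per-slot F1 in the table above.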

Leaderboard

https://klue-benchmark.com

Submission Guideline

See https://aistages-prod-server-public.s3.amazonaws.com/app/Competitions/000065/data/klue_code.tar.gz

Members

Researchers

Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Ryu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park

Advisors

Alice Oh, Jung-Woo Ha, Kyunghyun Cho

Sponsors

  • Platinum: Upstage, NAVER Clova, Google
  • Gold: Kakao Enterprise
  • Silver: Scatter Lab, Selectstar
  • Bronze: Riiid, DeepNatural, KAIST
  • Data Providers: Acrofan, KED

Organizers

  • Host: Upstage
  • Co-organizers: Naver AI Labs, NYU, KAIST
  • Research Collaborators: Kakao Enterprise, Scatter Lab, Riiid, Seoul National Univ., Yonsei Univ., Sogang Univ., Kyunghee Univ., Hanbat National Univ.

Reference

@misc{park2021klue,
      title={KLUE: Korean Language Understanding Evaluation},
      author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
      year={2021},
      eprint={2105.09680},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

