cmrc2019: A Sentence Cloze Dataset for Chinese Machine Reading Comprehension (CMRC 2019)
Stars: ✭ 118 (+78.79%)
text2text: Cross-lingual natural language processing and generation toolkit
Stars: ✭ 188 (+184.85%)
AiSpace: Better practices for deep learning model development and deployment, for TensorFlow 2.0
Stars: ✭ 28 (-57.58%)
cdQA-ui: ⛔ [NOT MAINTAINED] A web interface for cdQA and other question answering systems.
Stars: ✭ 19 (-71.21%)
exams-qa: A Multi-subject High School Examinations Dataset for Cross-lingual and Multilingual Question Answering
Stars: ✭ 25 (-62.12%)
ExpBERT: Code for our ACL '20 paper "Representation Engineering with Natural Language Explanations"
Stars: ✭ 28 (-57.58%)
Sohu2019: 2019 Sohu Campus Algorithm Competition
Stars: ✭ 26 (-60.61%)
bert attn viz: Visualize BERT's self-attention layers on text classification tasks
Stars: ✭ 41 (-37.88%)
backprop: Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
Stars: ✭ 229 (+246.97%)
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (-12.12%)
cross-lingual-open-ie: MT/IE: Cross-lingual Open Information Extraction with Neural Sequence-to-Sequence Models
Stars: ✭ 22 (-66.67%)
oreilly-bert-nlp: Code for the O'Reilly Live Online Training for BERT
Stars: ✭ 19 (-71.21%)
KMRC-Papers: A list of recent papers on knowledge-based machine reading comprehension.
Stars: ✭ 40 (-39.39%)
JointIDSF: BERT-based joint intent detection and slot filling with an intent-slot attention mechanism (INTERSPEECH 2021)
Stars: ✭ 55 (-16.67%)
question generator: An NLP system for generating reading comprehension questions
Stars: ✭ 188 (+184.85%)
AliceMind: ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab
Stars: ✭ 1,479 (+2140.91%)
neural-ranking-kd: Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation
Stars: ✭ 74 (+12.12%)
sister: SImple SenTence EmbeddeR
Stars: ✭ 66 (+0%)
Fill-the-GAP: [ACL-WS] 4th-place solution to the gendered pronoun resolution challenge on Kaggle
Stars: ✭ 13 (-80.3%)
korpatbert: KorPatBERT, a Korean AI language model specialized for the patent domain
Stars: ✭ 48 (-27.27%)
FasterTransformer: Transformer-related optimizations, including BERT and GPT
Stars: ✭ 1,571 (+2280.3%)
TabFormer: Code & data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP 2021)
Stars: ✭ 209 (+216.67%)
SA-BERT: Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots (CIKM 2020)
Stars: ✭ 71 (+7.58%)
R-AT: Regularized Adversarial Training
Stars: ✭ 19 (-71.21%)
ganbert: Enhancing BERT training with semi-supervised Generative Adversarial Networks
Stars: ✭ 205 (+210.61%)
PromptPapers: Must-read papers on prompt-based tuning for pre-trained language models.
Stars: ✭ 2,317 (+3410.61%)
DiscEval: Discourse-Based Evaluation of Language Understanding
Stars: ✭ 18 (-72.73%)
beir: A Heterogeneous Benchmark for Information Retrieval. Easy to use: evaluate your models across 15+ diverse IR datasets.
Stars: ✭ 738 (+1018.18%)
BERTOverflow: A BERT model pre-trained on a StackOverflow corpus
Stars: ✭ 40 (-39.39%)
LAMB Optimizer TF: LAMB Optimizer for Large Batch Training (TensorFlow version)
Stars: ✭ 119 (+80.3%)
mixed-language-training: Attention-Informed Mixed-Language Training for Zero-shot Cross-lingual Task-oriented Dialogue Systems (AAAI 2020)
Stars: ✭ 29 (-56.06%)
Transformer-QG-on-SQuAD: A question generator built on SOTA pre-trained language models (RoBERTa, BERT, GPT, BART, T5, etc.)
Stars: ✭ 28 (-57.58%)
CheXbert: Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT
Stars: ✭ 51 (-22.73%)
TwinBert: PyTorch implementation of the TwinBERT paper
Stars: ✭ 36 (-45.45%)
banglabert: Official release of the "BanglaBERT" model and the associated downstream finetuning code and datasets introduced in the paper "BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla", accepted in Findings of the Annual Conference of the North American Chap…
Stars: ✭ 186 (+181.82%)
NLPDataAugmentation: Chinese NLP Data Augmentation, BERT Contextual Augmentation
Stars: ✭ 94 (+42.42%)
neuro-comma: 🇷🇺 Production-ready punctuation restoration model for the Russian language 🇷🇺
Stars: ✭ 46 (-30.3%)
sticker2: Further developed as SyntaxDot: https://github.com/tensordot/syntaxdot
Stars: ✭ 14 (-78.79%)
BERT-QE: Code and resources for the paper "BERT-QE: Contextualized Query Expansion for Document Re-ranking".
Stars: ✭ 43 (-34.85%)
wisdomify: A BERT-based reverse dictionary of Korean proverbs
Stars: ✭ 95 (+43.94%)
OpenUE: A lightweight knowledge-graph extraction tool (An Open Toolkit for Universal Extraction from Text, published at EMNLP 2020: https://aclanthology.org/2020.emnlp-demos.1.pdf)
Stars: ✭ 274 (+315.15%)
BertSimilarity: Computing the similarity of two sentences with Google's BERT algorithm; semantic similarity and text similarity computation.
Stars: ✭ 348 (+427.27%)
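The BertSimilarity entry above scores sentence similarity from BERT embeddings. Conceptually this reduces to pooling per-token vectors into one sentence vector and comparing with cosine similarity; a minimal plain-Python sketch (the 4-dimensional "token embeddings" below are made-up stand-ins for real BERT outputs, not values the project produces):

```python
import math

def mean_pool(token_embeddings):
    """Average per-token vectors into a single sentence vector."""
    dim = len(token_embeddings[0])
    n = len(token_embeddings)
    return [sum(tok[i] for tok in token_embeddings) / n for i in range(dim)]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "token embeddings" for two short sentences
sent_a = [[1.0, 0.0, 0.5, 0.2], [0.8, 0.1, 0.4, 0.3]]
sent_b = [[0.9, 0.1, 0.5, 0.2], [1.0, 0.0, 0.6, 0.1]]

sim = cosine_similarity(mean_pool(sent_a), mean_pool(sent_b))
print(round(sim, 4))
```

In practice the token vectors come from a BERT encoder (e.g. the last hidden states), and pooling strategy (mean pooling vs. the [CLS] vector) varies between implementations.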
CLSP: Code and data for the EMNLP 2018 paper "Cross-lingual Lexical Sememe Prediction"
Stars: ✭ 19 (-71.21%)
Kaleido-BERT: (CVPR 2021) Vision-Language Pre-training on the Fashion Domain.
Stars: ✭ 252 (+281.82%)
NAG-BERT: [EACL'21] Non-autoregressive text generation with a pre-trained language model
Stars: ✭ 47 (-28.79%)