vietnamese-roberta - A Robustly Optimized BERT Pretraining Approach for Vietnamese
Stars: ✭ 22 (-60%)
Deeppavlov - An open source library for deep learning end-to-end dialog systems and chatbots.
Stars: ✭ 5,525 (+9945.45%)
OpenUE - OpenUE is a lightweight knowledge graph extraction toolkit (An Open Toolkit for Universal Extraction from Text, published at EMNLP 2020: https://aclanthology.org/2020.emnlp-demos.1.pdf)
Stars: ✭ 274 (+398.18%)
bert attn viz - Visualize BERT's self-attention layers on text classification tasks
Stars: ✭ 41 (-25.45%)
neural-ranking-kd - Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation
Stars: ✭ 74 (+34.55%)
CSSR - Crack Segmentation for Low-Resolution Images using Joint Learning with Super-Resolution (CSSR), accepted at the MVA 2021 international conference (oral) and selected for the Best Practical Paper Award.
Stars: ✭ 50 (-9.09%)
vietTTS - Vietnamese text-to-speech library
Stars: ✭ 78 (+41.82%)
TriB-QA - We are serious about bragging.
Stars: ✭ 45 (-18.18%)
neuro-comma - 🇷🇺 A production-ready punctuation restoration model for Russian 🇷🇺
Stars: ✭ 46 (-16.36%)
dnn.cool - A framework for multi-task learning in which you can precondition tasks and compose them into larger tasks, with conditional objectives and per-task evaluations and interpretations.
Stars: ✭ 44 (-20%)
LAMB Optimizer TF - LAMB Optimizer for Large Batch Training (TensorFlow version)
Stars: ✭ 119 (+116.36%)
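The LAMB optimizer listed above scales each layer's Adam-style update by a layer-wise trust ratio so that very large batch sizes train stably. A minimal single-tensor sketch of one update step, following the published LAMB formula (not the repo's TensorFlow implementation; hyperparameter defaults here are illustrative):

```python
import math

def lamb_step(w, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-6, wd=0.01):
    """One LAMB update for a parameter vector w with gradient g.

    m, v are the Adam first/second moment estimates; t is the 1-based step.
    Returns the updated (w, m, v).
    """
    # Adam moment updates with bias correction.
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
    m_hat = [mi / (1 - b1 ** t) for mi in m]
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    # Update direction: Adam step plus decoupled weight decay.
    u = [mh / (math.sqrt(vh) + eps) + wd * wi
         for mh, vh, wi in zip(m_hat, v_hat, w)]
    # Layer-wise trust ratio: ||w|| / ||update||.
    w_norm = math.sqrt(sum(x * x for x in w))
    u_norm = math.sqrt(sum(x * x for x in u))
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    w = [wi - lr * trust * ui for wi, ui in zip(w, u)]
    return w, m, v
```

In a real training loop this runs per layer, which is the point: each layer's step size adapts to the ratio of its weight norm to its update norm.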
Sohu2019 - 2019 Sohu Campus Algorithm Competition
Stars: ✭ 26 (-52.73%)
CheXbert - Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT
Stars: ✭ 51 (-7.27%)
Fill-the-GAP - [ACL-WS] 4th-place solution to the gendered pronoun resolution challenge on Kaggle
Stars: ✭ 13 (-76.36%)
FasterTransformer - Transformer-related optimizations, including BERT and GPT
Stars: ✭ 1,571 (+2756.36%)
spert - PyTorch code for SpERT: Span-based Entity and Relation Transformer
Stars: ✭ 572 (+940%)
cmrc2019 - A Sentence Cloze Dataset for Chinese Machine Reading Comprehension (CMRC 2019)
Stars: ✭ 118 (+114.55%)
Kaleido-BERT - (CVPR 2021) Kaleido-BERT: Vision-Language Pre-training on the Fashion Domain
Stars: ✭ 252 (+358.18%)
AuxiLearn - Official implementation of Auxiliary Learning by Implicit Differentiation [ICLR 2021]
Stars: ✭ 71 (+29.09%)
ExpBERT - Code for our ACL '20 paper "Representation Engineering with Natural Language Explanations"
Stars: ✭ 28 (-49.09%)
question generator - An NLP system for generating reading comprehension questions
Stars: ✭ 188 (+241.82%)
beir - A Heterogeneous Benchmark for Information Retrieval. Easy to use; evaluate your models across 15+ diverse IR datasets.
Stars: ✭ 738 (+1241.82%)
MTL-AQA - What and How Well You Performed? A Multitask Learning Approach to Action Quality Assessment [CVPR 2019]
Stars: ✭ 38 (-30.91%)
BERTOverflow - A BERT model pre-trained on a StackOverflow corpus
Stars: ✭ 40 (-27.27%)
Transformer-QG-on-SQuAD - A question generator implemented with SOTA pre-trained language models (RoBERTa, BERT, GPT, BART, T5, etc.)
Stars: ✭ 28 (-49.09%)
AliceMind - ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab
Stars: ✭ 1,479 (+2589.09%)
CAIL - Entry model for the CAIL 2019 ("Challenge of AI in Law") reading comprehension competition
Stars: ✭ 34 (-38.18%)
sister - SImple SenTence EmbeddeR
Stars: ✭ 66 (+20%)
DiscEval - Discourse-Based Evaluation of Language Understanding
Stars: ✭ 18 (-67.27%)
NLPDataAugmentation - Chinese NLP data augmentation, BERT contextual augmentation
Stars: ✭ 94 (+70.91%)
BERT-QE - Code and resources for the paper "BERT-QE: Contextualized Query Expansion for Document Re-ranking"
Stars: ✭ 43 (-21.82%)
TwinBert - PyTorch implementation of the TwinBERT paper
Stars: ✭ 36 (-34.55%)
korpatbert - KorPatBERT, a Korean AI language model specialized for the patent domain
Stars: ✭ 48 (-12.73%)
wisdomify - A BERT-based reverse dictionary of Korean proverbs
Stars: ✭ 95 (+72.73%)
Soft-Module - Code for "Multi-task Reinforcement Learning with Soft Modularization"
Stars: ✭ 71 (+29.09%)
TabFormer - Code & Data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP 2021)
Stars: ✭ 209 (+280%)
gpl - A powerful unsupervised domain adaptation method for dense retrieval. Requires only an unlabeled corpus and yields massive improvements: "GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval" https://arxiv.org/abs/2112.07577
Stars: ✭ 216 (+292.73%)
backprop - Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
Stars: ✭ 229 (+316.36%)
Slotfilling - Using TensorFlow to train a joint slot-filling and intent model
Stars: ✭ 14 (-74.55%)
R-AT - Regularized Adversarial Training
Stars: ✭ 19 (-65.45%)
oreilly-bert-nlp - Code for the O'Reilly Live Online Training on BERT
Stars: ✭ 19 (-65.45%)
py-lingualytics - A text analytics library with support for code-mixed data
Stars: ✭ 36 (-34.55%)
bert-sentiment - Fine-grained Sentiment Classification Using BERT
Stars: ✭ 49 (-10.91%)
banglabert - This repository contains the official release of the model "BanglaBERT" and associated downstream finetuning code and datasets introduced in the paper "BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla", accepted in Findings of the Annual Conference of the North American Chap…
Stars: ✭ 186 (+238.18%)
BertSimilarity - Computing the similarity of two sentences with Google's BERT algorithm; semantic and text similarity computation.
Stars: ✭ 348 (+532.73%)
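The BertSimilarity entry above scores sentence pairs by comparing their BERT embeddings. The core step, cosine similarity between pooled vectors, can be sketched in plain Python (the short vectors below are placeholders standing in for real BERT sentence embeddings, which would come from mean-pooling a model's last hidden states):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors:
    # dot(u, v) / (||u|| * ||v||), in [-1, 1].
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Placeholder embeddings for two sentences (real ones are 768-dim for BERT-base).
emb_a = [0.2, 0.7, 0.1]
emb_b = [0.25, 0.6, 0.15]
print(cosine_similarity(emb_a, emb_b))
```

Scores near 1.0 indicate semantically similar sentences; in practice the quality of the score depends far more on how the embeddings are produced than on this comparison step.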
ganbert - Enhancing BERT training with Semi-supervised Generative Adversarial Networks
Stars: ✭ 205 (+272.73%)