task-transferability: Data and code for our paper "Exploring and Predicting Transferability across NLP Tasks", to appear at EMNLP 2020.
Stars: ✭ 35 (+75%)
rasa milktea chatbot: Chatbot with a Chinese BERT model, based on the Rasa framework (a Chinese chatbot combining BERT intent analysis with the Rasa framework).
Stars: ✭ 97 (+385%)
berserker: Berserker - BERt chineSE woRd toKenizER
Stars: ✭ 17 (-15%)
bert-tensorflow-pytorch-spacy-conversion: Instructions for converting a BERT TensorFlow model to work with HuggingFace's pytorch-transformers and spaCy. This walk-through uses DeepPavlov's RuBERT as an example.
Stars: ✭ 26 (+30%)
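As a quick illustration of the kind of conversion this walk-through covers, here is a minimal sketch using the modern `transformers` library's built-in TF-to-PyTorch loading; the paths are hypothetical and the repo itself targets the older pytorch-transformers tooling, so its exact steps may differ:

```python
from transformers import BertModel, BertTokenizer

# Load a TensorFlow BERT checkpoint directly into the PyTorch model class.
# from_tf=True triggers the built-in TF->PyTorch weight conversion
# (TensorFlow must be installed for this step).
model = BertModel.from_pretrained("path/to/rubert_tf_checkpoint", from_tf=True)
tokenizer = BertTokenizer.from_pretrained("path/to/rubert_tf_checkpoint")

# Save in PyTorch format so the model reloads without TensorFlow installed.
model.save_pretrained("rubert-pytorch")
tokenizer.save_pretrained("rubert-pytorch")
```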
classifier multi label: Multi-label text classification with BERT and ALBERT (keywords: multi-label, classifier, text classification, multi-label text classification, BERT, ALBERT, multi-label-classification).
Stars: ✭ 127 (+535%)
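A minimal sketch of multi-label classification using `transformers`' built-in `problem_type` switch, which swaps the softmax objective for an independent sigmoid per label; the checkpoint and label count below are placeholder choices, not this repository's configuration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# problem_type="multi_label_classification" makes the model treat each
# label independently (sigmoid/BCE) instead of picking one class (softmax).
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=5,  # hypothetical label count
    problem_type="multi_label_classification",
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")

inputs = tokenizer("Some text to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)                            # one probability per label
labels = (probs[0] > 0.5).nonzero().flatten().tolist()   # every label above threshold
```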
label-studio-transformers: Label data using HuggingFace's transformers and automatically get a prediction service.
Stars: ✭ 117 (+485%)
bert nli: A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT).
Stars: ✭ 97 (+385%)
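For context, NLI inference with an off-the-shelf checkpoint looks like the sketch below; the public roberta-large-mnli model stands in for this repository's own BERT/ALBERT weights:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
# NLI models take the premise/hypothesis as a sentence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
for i, p in enumerate(probs):
    print(model.config.id2label[i], f"{p:.3f}")  # contradiction / neutral / entailment
```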
Text-Summarization: Abstractive and extractive text summarization using Transformers.
Stars: ✭ 38 (+90%)
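As a taste of the abstractive side, a `transformers` summarization pipeline takes a few lines; the BART-CNN checkpoint is our placeholder choice, not necessarily this repository's model:

```python
from transformers import pipeline

# Abstractive summarization with a pretrained seq2seq checkpoint.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Transformer models pretrained on large corpora can be fine-tuned for "
    "summarization. Abstractive systems generate new sentences, while "
    "extractive systems select salient sentences from the source text."
)
result = summarizer(article, max_length=60, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```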
consistency: Implementation of models in our EMNLP 2019 paper "A Logic-Driven Framework for Consistency of Neural Models".
Stars: ✭ 26 (+30%)
Transformers-Tutorials: This repository contains demos I made with the Transformers library by HuggingFace.
Stars: ✭ 2,828 (+14040%)
roberta-wwm-base-distill: A distilled RoBERTa-wwm-base model, distilled from RoBERTa-wwm-large.
Stars: ✭ 61 (+205%)
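The objective behind such distilled models is typically the Hinton-style soft-target loss: match the student's temperature-scaled output distribution to the teacher's. A generic sketch, not this repository's exact recipe:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target knowledge distillation: KL divergence between the
    temperature-scaled teacher and student distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t
```

In practice this term is usually mixed with the ordinary hard-label cross-entropy on the student.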
Kevinpro-NLP-demo: All the NLP you need, here. Personal implementations of some fun NLP demos, currently including PyTorch implementations of 13 NLP applications.
Stars: ✭ 117 (+485%)
AnnA Anki neuronal Appendix: Using machine learning on your Anki collection to enhance scheduling via semantic clustering and semantic similarity.
Stars: ✭ 39 (+95%)
hard-label-attack: Natural Language Attacks in a Hard-Label Black-Box Setting.
Stars: ✭ 26 (+30%)
NLP-paper: 🎨🎨 NLP (natural language processing) tutorials 🎨🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (+15%)
Cross-Lingual-MRC: Cross-Lingual Machine Reading Comprehension (EMNLP 2019).
Stars: ✭ 66 (+230%)
NAG-BERT: [EACL'21] Non-Autoregressive Text Generation with Pre-trained Language Models.
Stars: ✭ 47 (+135%)
wechsel: Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
Stars: ✭ 39 (+95%)
PromptPapers: Must-read papers on prompt-based tuning for pre-trained language models.
Stars: ✭ 2,317 (+11485%)
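The core idea in much of this literature is cloze-style prompting: read a label off a masked language model's prediction for a [MASK] slot, with a hand-written verbalizer mapping words to classes. A minimal sketch; the prompt and the great/terrible verbalizer are our illustrative choices:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The MLM fills the [MASK] slot; restricting scoring to the verbalizer
# words turns the cloze prediction into a sentiment decision.
prompt = "The movie was absolutely wonderful. Overall it was [MASK]."
for cand in fill(prompt, targets=["great", "terrible"]):
    print(cand["token_str"], round(cand["score"], 4))
```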
sticker2: Further developed as SyntaxDot: https://github.com/tensordot/syntaxdot
Stars: ✭ 14 (-30%)
DE-LIMIT: DeEpLearning models for MultIlingual haTespeech (DELIMIT): Benchmarking multilingual models across 9 languages and 16 datasets.
Stars: ✭ 90 (+350%)
SA-BERT: CIKM 2020: Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots.
Stars: ✭ 71 (+255%)
KitanaQA: Adversarial training and data augmentation for neural question-answering models.
Stars: ✭ 58 (+190%)
JointIDSF: BERT-based joint intent detection and slot filling with an intent-slot attention mechanism (INTERSPEECH 2021).
Stars: ✭ 55 (+175%)
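The intent-slot attention mechanism is the paper's own contribution; the bare-bones joint architecture underneath (one shared encoder, an utterance-level intent head and a token-level slot head) looks like this sketch, with hypothetical label counts:

```python
import torch.nn as nn
from transformers import AutoModel

class JointIntentSlot(nn.Module):
    """Minimal joint model: shared BERT encoder, an intent head on the
    [CLS] state and a per-token slot-tagging head. The paper's
    intent-slot attention mechanism is omitted here."""
    def __init__(self, name="bert-base-uncased", n_intents=7, n_slots=20):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.intent_head = nn.Linear(hidden, n_intents)
        self.slot_head = nn.Linear(hidden, n_slots)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.last_hidden_state[:, 0])  # [CLS]
        slot_logits = self.slot_head(out.last_hidden_state)            # per token
        return intent_logits, slot_logits
```

Training sums a cross-entropy loss over intents with a token-level cross-entropy over slot tags.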
banglabert: This repository contains the official release of the model "BanglaBERT" and associated downstream fine-tuning code and datasets introduced in the paper "BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla", accepted in Findings of the Annual Conference of the North American Chap…
Stars: ✭ 186 (+830%)
BertSimilarity: Computing the similarity of two sentences with Google's BERT algorithm (uses BERT to compute sentence similarity; semantic similarity and text similarity computation).
Stars: ✭ 348 (+1640%)
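One common recipe for BERT-based sentence similarity, sketched below, is mean-pooled token embeddings plus cosine similarity; this is a generic approach and not necessarily the exact one this repository implements:

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-chinese"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(sentence):
    """Mean-pool the last hidden states over non-padding tokens."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state          # (1, seq, dim)
    mask = inputs["attention_mask"].unsqueeze(-1).float()   # (1, seq, 1)
    return (hidden * mask).sum(1) / mask.sum(1)             # (1, dim)

a, b = embed("今天天气很好"), embed("今天是个好天气")
print(torch.cosine_similarity(a, b).item())  # closer to 1.0 = more similar
```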
CAIL: Contest-entry model for the reading comprehension task of CAIL 2019 (the "Challenge of AI in Law" competition).
Stars: ✭ 34 (+70%)
FasterTransformer: Transformer-related optimization, including BERT and GPT.
Stars: ✭ 1,571 (+7755%)
backprop: Backprop makes it simple to use, fine-tune, and deploy state-of-the-art ML models.
Stars: ✭ 229 (+1045%)
ganbert: Enhancing BERT training with Semi-supervised Generative Adversarial Networks.
Stars: ✭ 205 (+925%)
DiscEval: Discourse-Based Evaluation of Language Understanding.
Stars: ✭ 18 (-10%)
oreilly-bert-nlp: This repository contains code for the O'Reilly Live Online Training for BERT.
Stars: ✭ 19 (-5%)
question generator: An NLP system for generating reading comprehension questions.
Stars: ✭ 188 (+840%)
Transformer-QG-on-SQuAD: Implementation of a question generator using SOTA pre-trained language models (RoBERTa, BERT, GPT, BART, T5, etc.).
Stars: ✭ 28 (+40%)
neural-ranking-kd: Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation.
Stars: ✭ 74 (+270%)
CheXbert: Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT.
Stars: ✭ 51 (+155%)
BERT-QE: Code and resources for the paper "BERT-QE: Contextualized Query Expansion for Document Re-ranking".
Stars: ✭ 43 (+115%)
TriB-QA: We are serious about bragging.
Stars: ✭ 45 (+125%)
korpatbert: KorPatBERT, a Korean AI language model specialized for the patent domain.
Stars: ✭ 48 (+140%)
TabFormer: Code & data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP 2021).
Stars: ✭ 209 (+945%)
ExpBERT: Code for our ACL '20 paper "Representation Engineering with Natural Language Explanations".
Stars: ✭ 28 (+40%)
R-AT: Regularized Adversarial Training.
Stars: ✭ 19 (-5%)
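A widely used baseline for adversarial training of NLP models is the Fast Gradient Method on the embedding layer: perturb the embeddings along the gradient direction, take a second backward pass, then restore. The sketch below is that generic recipe, not the specific R-AT regularizer:

```python
import torch

class FGM:
    """Fast Gradient Method on the embedding parameters."""
    def __init__(self, model, epsilon=1.0, target="word_embeddings"):
        self.model, self.epsilon, self.target = model, epsilon, target
        self.backup = {}

    def attack(self):
        # Add an epsilon-scaled, gradient-direction perturbation.
        for name, p in self.model.named_parameters():
            if p.requires_grad and self.target in name and p.grad is not None:
                self.backup[name] = p.data.clone()
                norm = torch.norm(p.grad)
                if norm != 0:
                    p.data.add_(self.epsilon * p.grad / norm)

    def restore(self):
        # Put the original embedding weights back.
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}
```

A typical training step runs backward on the clean loss, calls `attack()`, runs backward on the adversarial loss, then `restore()` before `optimizer.step()`.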
SentEncoding: Sentence encoder and training code for Mean-Max AAE.
Stars: ✭ 16 (-20%)
dodrio: Exploring attention weights in transformer-based models with linguistic knowledge.
Stars: ✭ 233 (+1065%)
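The raw material such visualization tools work with is easy to extract with `transformers`: passing `output_attentions=True` returns one attention tensor per layer. A minimal sketch:

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # tuple: one tensor per layer

# Each tensor has shape (batch, heads, seq, seq).
print(len(attentions), attentions[0].shape)  # e.g. 12 layers, (1, 12, seq, seq)
```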