NLP-paper: 🎨 NLP natural language processing tutorial 🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-28.12%)
Bert Pytorch: Google AI 2018 BERT PyTorch implementation
Stars: ✭ 4,642 (+14406.25%)
keras-bert-ner: Keras solution for the Chinese NER task using BiLSTM-CRF/BiGRU-CRF/IDCNN-CRF models with a pretrained language model (BERT/RoBERTa/ALBERT supported)
Stars: ✭ 7 (-78.12%)
Transformers: 🤗 State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Stars: ✭ 55,742 (+174093.75%)
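Many of the projects in this list build on this library, so a minimal usage sketch may help. The pipeline call below is standard transformers API; the default model it downloads and the exact score shown are illustrative, not guaranteed:

```python
# Minimal sketch: a ready-made inference pipeline from Hugging Face transformers.
from transformers import pipeline

# Downloads a default pretrained sentiment model on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("BERT-based models make transfer learning straightforward."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```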
FasterTransformer: Transformer-related optimizations, including BERT and GPT
Stars: ✭ 1,571 (+4809.38%)
SIGIR2021 Conure: One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-28.12%)
Albert zh: A Lite BERT for Self-Supervised Learning of Language Representations; large-scale Chinese pretrained ALBERT models
Stars: ✭ 3,500 (+10837.5%)
Kevinpro-NLP-demo: All the NLP you need, here. Personal implementations of some fun NLP demos; currently includes PyTorch implementations of 13 NLP applications
Stars: ✭ 117 (+265.63%)
CLUE pytorch: PyTorch baselines for the CLUE benchmark
Stars: ✭ 72 (+125%)
Nlp Tutorial: Natural Language Processing Tutorial for Deep Learning Researchers
Stars: ✭ 9,895 (+30821.88%)
TabFormer: Code & data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP 2021)
Stars: ✭ 209 (+553.13%)
sister: SImple SenTence EmbeddeR
Stars: ✭ 66 (+106.25%)
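A sketch of how this embedder is used, following my reading of its README; the MeanEmbedding class name and the 300-dimensional fastText-based output are assumptions to verify against the project:

```python
# Hedged sketch, assuming sister's MeanEmbedding API (per its README).
import sister

embedder = sister.MeanEmbedding(lang="en")  # fastText-based, assumed 300-dim
vector = embedder("Simple sentence embeddings in one call.")
print(vector.shape)  # expected: (300,)
```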
tfbert: Pretrained-model loading and invocation built on TensorFlow 1.x; supports single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision, with flexible training, validation, and prediction.
Stars: ✭ 54 (+68.75%)
Spark Nlp: State-of-the-Art Natural Language Processing
Stars: ✭ 2,518 (+7768.75%)
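A hedged sketch of Spark NLP's pretrained-pipeline entry point; sparknlp.start() and PretrainedPipeline come from the project's documentation, while the "explain_document_dl" pipeline name is an assumption to check against the current model hub:

```python
# Hedged sketch of Spark NLP usage; the pipeline name is an assumption.
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()  # starts a Spark session with Spark NLP loaded
pipeline = PretrainedPipeline("explain_document_dl", lang="en")
result = pipeline.annotate("Spark NLP ships many pretrained pipelines.")
print(result.keys())  # annotation types produced by the pipeline
```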
golgotha: Contextualised embeddings and language modelling with BERT and friends, in R
Stars: ✭ 39 (+21.88%)
ClusterTransformer: Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT-based transformers from Hugging Face.
Stars: ✭ 36 (+12.5%)
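The sketch below shows the general technique the description names, transformer embeddings clustered by cosine similarity, not ClusterTransformer's own API; the model choice and cluster count are arbitrary:

```python
# Generic topic-clustering sketch on transformer embeddings; NOT ClusterTransformer's API.
import torch
from sklearn.cluster import AgglomerativeClustering
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["BERT embeds text.", "Transformers embed sentences.", "Cats sleep all day."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state          # (batch, seq, dim)
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)      # mean pooling over real tokens

# Cluster on cosine distance (use affinity= on scikit-learn < 1.2).
labels = AgglomerativeClustering(n_clusters=2, metric="cosine",
                                 linkage="average").fit_predict(embeddings.numpy())
print(labels)  # e.g. [0 0 1]
```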
Transformer-QG-on-SQuAD: Question generation with SOTA pre-trained language models (RoBERTa, BERT, GPT, BART, T5, etc.)
Stars: ✭ 28 (-12.5%)
sticker2: Further developed as SyntaxDot: https://github.com/tensordot/syntaxdot
Stars: ✭ 14 (-56.25%)
Xpersona: Evaluating Multilingual Personalized Chatbot
Stars: ✭ 54 (+68.75%)
Filipino-Text-Benchmarks: Open-source benchmark datasets and pretrained transformer models for the Filipino language.
Stars: ✭ 22 (-31.25%)
Chineseglue: Language Understanding Evaluation benchmark for Chinese: datasets, baselines, pre-trained models, corpus, and leaderboard
Stars: ✭ 1,548 (+4737.5%)
ALBERT-Pytorch: PyTorch implementation of ALBERT (A Lite BERT for Self-Supervised Learning of Language Representations)
Stars: ✭ 214 (+568.75%)
Clue: Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
Stars: ✭ 2,425 (+7478.13%)
MobileQA: Offline on-device reading comprehension; QA for mobile, Android & iPhone
Stars: ✭ 49 (+53.13%)
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (+81.25%)
semantic-document-relations: Implementation, trained models, and result data for the paper "Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles"
Stars: ✭ 21 (-34.37%)
bert nli: A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT)
Stars: ✭ 97 (+203.13%)
PDN: The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21)
Stars: ✭ 44 (+37.5%)
vietnamese-roberta: A Robustly Optimized BERT Pretraining Approach for Vietnamese
Stars: ✭ 22 (-31.25%)
Bertviz: Tool for visualizing attention in Transformer models (BERT, GPT-2, ALBERT, XLNet, RoBERTa, CTRL, etc.)
Stars: ✭ 3,443 (+10659.38%)
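A sketch of the notebook workflow following bertviz's README: request attention weights from a transformers model, then hand them to head_view. Names follow that README; rendering requires a Jupyter notebook:

```python
# Hedged sketch of bertviz head_view usage in a Jupyter notebook.
from bertviz import head_view
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
attentions = model(**inputs).attentions               # one tensor per layer
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(attentions, tokens)                         # interactive attention view
```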
classifier multi label: Multi-label text classification with BERT and ALBERT
Stars: ✭ 127 (+296.88%)
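A generic sketch of the multi-label setup such repos implement (not this repo's code): a linear head over the encoder's pooled output, trained with an independent sigmoid per label via BCEWithLogitsLoss:

```python
# Generic multi-label classification head; hidden size 768 matches BERT-base.
import torch
import torch.nn as nn

head = nn.Linear(768, 10)                        # one logit per label
pooled = torch.randn(4, 768)                     # stand-in for BERT pooled output
logits = head(pooled)

targets = torch.randint(0, 2, (4, 10)).float()   # multi-hot label vectors
loss = nn.BCEWithLogitsLoss()(logits, targets)   # independent sigmoid per label
preds = torch.sigmoid(logits) > 0.5              # threshold to pick labels
```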
Medi-CoQA: Conversational Question Answering on Clinical Text
Stars: ✭ 22 (-31.25%)
bert-as-a-service TFX: End-to-end pipeline with TFX to train and deploy a BERT model for sentiment analysis.
Stars: ✭ 32 (+0%)
syntaxdot: Neural syntax annotator supporting sequence labeling, lemmatization, and dependency parsing.
Stars: ✭ 32 (+0%)
charformer-pytorch: Implementation of the GBST block from the Charformer paper, in PyTorch
Stars: ✭ 74 (+131.25%)
attention-is-all-you-need-paper: Implementation of Vaswani et al., "Attention Is All You Need," Advances in Neural Information Processing Systems, 2017.
Stars: ✭ 97 (+203.13%)
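The paper's core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A self-contained PyTorch rendering of that equation (independent of this repo's code):

```python
# Scaled dot-product attention exactly as defined in Vaswani et al. (2017).
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # (..., seq_q, seq_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 8, 16, 64)  # (batch, heads, seq, d_k)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 8, 16, 64])
```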
trapper: State-of-the-art NLP through transformer models, with a modular design and consistent APIs.
Stars: ✭ 28 (-12.5%)
galerkin-transformer: [NeurIPS 2021] Galerkin Transformer: linear attention without softmax
Stars: ✭ 111 (+246.88%)
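A minimal sketch of the softmax-free linear attention the paper describes, as I read it: layer-normalize K and V, then compute Q(K^T V)/n, which costs O(n·d^2) instead of the O(n^2·d) of softmax attention. This illustrates the idea, not the repo's exact code:

```python
# Galerkin-style attention sketch: no softmax, linear in sequence length n.
import torch
import torch.nn.functional as F

def galerkin_style_attention(q, k, v):
    n = q.size(-2)
    k = F.layer_norm(k, k.shape[-1:])           # normalize keys
    v = F.layer_norm(v, v.shape[-1:])           # normalize values
    return q @ (k.transpose(-2, -1) @ v) / n    # Q (K^T V) / n

q = k = v = torch.randn(2, 1024, 64)
print(galerkin_style_attention(q, k, v).shape)  # torch.Size([2, 1024, 64])
```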
saint: The official PyTorch implementation of the paper "SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training"
Stars: ✭ 209 (+553.13%)
Mengzi: Mengzi Pretrained Models
Stars: ✭ 238 (+643.75%)
T3: [EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, and Bo Li
Stars: ✭ 25 (-21.87%)
pynmt: A simple and complete PyTorch implementation of a neural machine translation system
Stars: ✭ 13 (-59.37%)
bangla-bert: Bangla-Bert is a pretrained BERT model for the Bengali language
Stars: ✭ 41 (+28.13%)
kwx: BERT-, LDA-, and TF-IDF-based keyword extraction in Python
Stars: ✭ 33 (+3.13%)
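To make the TF-IDF leg of this concrete, here is the general technique in scikit-learn; this illustrates the method itself, not kwx's own API:

```python
# TF-IDF keyword extraction with scikit-learn (technique demo, not kwx's API).
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["BERT learns contextual embeddings.",
        "LDA models topics from word counts.",
        "TF-IDF weighs words by corpus rarity."]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# Top-3 keywords of the first document by TF-IDF weight.
weights = tfidf[0].toarray().ravel()
print([terms[i] for i in weights.argsort()[::-1][:3]])
```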
FinBERT: A Pretrained BERT Model for Financial Communications. https://arxiv.org/abs/2006.08097
Stars: ✭ 193 (+503.13%)
TextPruner: A PyTorch-based model pruning toolkit for pre-trained language models
Stars: ✭ 94 (+193.75%)
linformer: Implementation of Linformer for PyTorch
Stars: ✭ 119 (+271.88%)
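A hedged construction sketch based on my recollection of this project's README; every argument name here (dim, seq_len, depth, heads, k) is an assumption to verify against the repo:

```python
# Hedged sketch of the linformer package; argument names are assumptions.
import torch
from linformer import Linformer

model = Linformer(
    dim=512,       # token embedding dimension
    seq_len=4096,  # fixed sequence length for the low-rank projection
    depth=6,
    heads=8,
    k=256,         # projected key/value length (the "low rank")
)
x = torch.randn(1, 4096, 512)
print(model(x).shape)  # torch.Size([1, 4096, 512])
```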
mcQA: 🔮 Answering multiple-choice questions with language models.
Stars: ✭ 23 (-28.12%)
cdQA-ui: ⛔ [NOT MAINTAINED] A web interface for cdQA and other question-answering systems.
Stars: ✭ 19 (-40.62%)
policy-data-analyzer: Building a model to recognize incentives for landscape restoration in environmental policies from Latin America, the US, and India. Bringing NLP to the world of policy analysis through an extensible framework that includes scraping, preprocessing, active learning, and text analysis pipelines.
Stars: ✭ 22 (-31.25%)
KLUE: 📖 Korean NLU Benchmark
Stars: ✭ 420 (+1212.5%)