Robbert: A Dutch RoBERTa-based language model
Stars: ✭ 120 (-94.7%)
banglabert: This repository contains the official release of the model "BanglaBERT" and the associated downstream finetuning code and datasets introduced in the paper "BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla", accepted in Findings of the Annual Conference of the North American Chap…
Stars: ✭ 186 (-91.78%)
syntaxdot: Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
Stars: ✭ 32 (-98.59%)
BertSimilarity: Computing the similarity of two sentences with Google's BERT algorithm (sentence similarity, semantic similarity, and text similarity computation with BERT).
Stars: ✭ 348 (-84.63%)
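Tools like this typically reduce each sentence to a fixed-size vector (e.g. by mean-pooling BERT's token embeddings) and compare the vectors with cosine similarity. A minimal sketch of that comparison step, using toy vectors in place of real BERT output (the inputs below are invented for illustration; this is not BertSimilarity's API):

```python
import math

def mean_pool(token_vectors):
    """Average per-token vectors into one fixed-size sentence vector."""
    dim = len(token_vectors[0])
    return [sum(v[i] for v in token_vectors) / len(token_vectors) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-d "token embeddings" standing in for BERT's final hidden states.
sent_a = mean_pool([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
sent_b = mean_pool([[1.0, 0.1, 0.9], [0.1, 1.0, 1.0]])
print(round(cosine(sent_a, sent_b), 3))  # close to 1.0: nearly parallel vectors
```

With real BERT hidden states the arithmetic is identical, only the vectors are 768-dimensional.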
Pytorch Cpp: C++ Implementation of PyTorch Tutorials for Everyone
Stars: ✭ 1,014 (-55.21%)
Mengzi: Mengzi Pretrained Models
Stars: ✭ 238 (-89.49%)
CAIL: Entry model for the reading comprehension track of the CAIL 2019 (Chinese AI and Law) challenge
Stars: ✭ 34 (-98.5%)
Sohu2019: 2019 Sohu Campus Algorithm Competition
Stars: ✭ 26 (-98.85%)
bangla-bert: Bangla-Bert is a pretrained BERT model for the Bengali language
Stars: ✭ 41 (-98.19%)
DiscEval: Discourse Based Evaluation of Language Understanding
Stars: ✭ 18 (-99.2%)
Spago: Self-contained Machine Learning and Natural Language Processing library in Go
Stars: ✭ 854 (-62.28%)
question generator: An NLP system for generating reading comprehension questions
Stars: ✭ 188 (-91.7%)
Lingo: Package lingo provides the data structures and algorithms required for natural language processing
Stars: ✭ 113 (-95.01%)
dasher-web: Dasher text entry in HTML, CSS, JavaScript, and SVG
Stars: ✭ 34 (-98.5%)
mcQA: 🔮 Answering multiple-choice questions with language models.
Stars: ✭ 23 (-98.98%)
Spacy Transformers: 🛸 Use pretrained transformers like BERT, XLNet, and GPT-2 in spaCy
Stars: ✭ 919 (-59.41%)
CheXbert: Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT
Stars: ✭ 51 (-97.75%)
KLUE: 📖 Korean NLU Benchmark
Stars: ✭ 420 (-81.45%)
TriB-QA: "We are serious about bragging"
Stars: ✭ 45 (-98.01%)
classy: A simple-to-use library for building high-performance machine learning models in NLP.
Stars: ✭ 61 (-97.31%)
ExpBERT: Code for our ACL '20 paper "Representation Engineering with Natural Language Explanations"
Stars: ✭ 28 (-98.76%)
Chinese Electra: Pre-trained Chinese ELECTRA models
Stars: ✭ 830 (-63.34%)
text2text: Cross-lingual natural language processing and generation toolkit
Stars: ✭ 188 (-91.7%)
lm-scorer: 📃 Language-model-based sentence scoring library
Stars: ✭ 264 (-88.34%)
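Sentence-scoring libraries of this kind assign each sentence the (often length-normalized) sum of its token log-probabilities under a language model. A toy sketch of that scoring arithmetic, with a hand-built unigram table standing in for a real neural LM (the table and function names are invented for illustration; nothing below is lm-scorer's API):

```python
import math

# Toy unigram "language model": token -> probability.
UNIGRAM = {"the": 0.3, "cat": 0.1, "sat": 0.05}

def sentence_log_prob(tokens, model, unk=1e-6):
    """Sum of per-token log-probabilities; unknown tokens get a small floor."""
    return sum(math.log(model.get(t, unk)) for t in tokens)

def score(tokens, model):
    """Length-normalized log-probability: higher means more plausible."""
    return sentence_log_prob(tokens, model) / len(tokens)

fluent = score(["the", "cat", "sat"], UNIGRAM)
odd = score(["qqq", "qqq", "qqq"], UNIGRAM)
print(fluent > odd)  # the in-vocabulary sentence scores higher
```

A real scorer replaces the unigram lookup with conditional probabilities from a model such as GPT-2, but the aggregation step is the same.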
Getlang: Natural language detection package in pure Go
Stars: ✭ 110 (-95.14%)
text2class: Multi-class text categorization using state-of-the-art pre-trained contextualized language models, e.g. BERT
Stars: ✭ 15 (-99.34%)
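Under the hood, multi-class categorization over a pretrained encoder usually comes down to a linear layer plus softmax over the sentence embedding. A schematic of that final step with hand-written weights (the embedding, weight rows, and class labels are invented for illustration; this is not text2class's API):

```python
import math

def softmax(logits):
    """Convert raw class scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(embedding, weights, labels):
    """Linear layer (one dot product per class), then softmax and argmax."""
    logits = [sum(w * x for w, x in zip(row, embedding)) for row in weights]
    probs = softmax(logits)
    return labels[probs.index(max(probs))], probs

# Toy 3-d "sentence embedding" and a 2-class weight matrix.
label, probs = classify(
    [0.9, 0.1, 0.4],
    [[1.0, 0.0, 0.0],   # weight row for "sports"
     [0.0, 1.0, 0.0]],  # weight row for "politics"
    ["sports", "politics"],
)
print(label)  # "sports": the first weight row aligns with the embedding
```

Training such a head amounts to learning the weight rows; the pretrained encoder supplies the embedding.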
Pykaldi: A Python wrapper for Kaldi
Stars: ✭ 756 (-66.61%)
bert attn viz: Visualize BERT's self-attention layers on text classification tasks
Stars: ✭ 41 (-98.19%)
Xlnet Gen: XLNet for generating language
Stars: ✭ 164 (-92.76%)
FasterTransformer: Transformer-related optimizations, including BERT and GPT
Stars: ✭ 1,571 (-30.61%)
AliceMind: Alibaba's collection of encoder-decoders from the MinD (Machine IntelligeNce of Damo) Lab
Stars: ✭ 1,479 (-34.67%)
MobileQA: Offline reading comprehension app; QA for mobile, Android & iPhone
Stars: ✭ 49 (-97.84%)
ml: machine learning
Stars: ✭ 29 (-98.72%)
minicons: Utility for analyzing Transformer-based representations of language.
Stars: ✭ 28 (-98.76%)
sister: SImple SenTence EmbeddeR
Stars: ✭ 66 (-97.08%)
Chineseglue: Language Understanding Evaluation benchmark for Chinese: datasets, baselines, pre-trained models, corpora, and a leaderboard
Stars: ✭ 1,548 (-31.63%)
Fill-the-GAP: [ACL-WS] 4th-place solution to the gendered pronoun resolution challenge on Kaggle
Stars: ✭ 13 (-99.43%)
bert experimental: Code and supplementary materials for a series of Medium articles about the BERT model
Stars: ✭ 72 (-96.82%)
TwinBert: PyTorch implementation of the TwinBERT paper
Stars: ✭ 36 (-98.41%)
Dl Nlp Readings: My Reading Lists of Deep Learning and Natural Language Processing
Stars: ✭ 656 (-71.02%)
Optimus: The first large-scale pre-trained VAE language model
Stars: ✭ 180 (-92.05%)
Kashgari: A production-level NLP transfer-learning framework built on top of tf.keras for text labeling and text classification; includes Word2Vec, BERT, and GPT-2 language embeddings.
Stars: ✭ 2,235 (-1.28%)
Indic Bert: BERT-based Multilingual Model for Indian Languages
Stars: ✭ 160 (-92.93%)
Electra pytorch: Pretrain and finetune ELECTRA with fastai and huggingface (results of the paper replicated!)
Stars: ✭ 149 (-93.42%)
Boilerplate Dynet Rnn Lm: Boilerplate code for quickly getting set up to run language modeling experiments
Stars: ✭ 37 (-98.37%)
semantic-document-relations: Implementation, trained models, and result data for the paper "Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles"
Stars: ✭ 21 (-99.07%)
ganbert: Enhancing BERT training with semi-supervised Generative Adversarial Networks
Stars: ✭ 205 (-90.95%)