Easy Bert: A Dead Simple BERT API for Python and Java (https://github.com/google-research/bert)
Stars: ✭ 106 (-48.04%)
Bio embeddings: Get protein embeddings from protein sequences
Stars: ✭ 86 (-57.84%)
Lingo: Package lingo provides the data structures and algorithms required for natural language processing
Stars: ✭ 113 (-44.61%)
PhoNLP: A BERT-based multi-task learning toolkit for part-of-speech tagging, named entity recognition and dependency parsing (NAACL 2021)
Stars: ✭ 56 (-72.55%)
F Lm: Language Modeling
Stars: ✭ 156 (-23.53%)
Tongrams: A C++ library providing fast language model queries in compressed space.
Stars: ✭ 88 (-56.86%)
Optimus: The first large-scale pre-trained VAE language model
Stars: ✭ 180 (-11.76%)
Cross Domain ner: Cross-domain NER using cross-domain language modeling; code for the ACL 2019 paper
Stars: ✭ 67 (-67.16%)
Tupe: Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training"; improves existing models like BERT.
Stars: ✭ 143 (-29.9%)
Robbert: A Dutch RoBERTa-based language model
Stars: ✭ 120 (-41.18%)
Lmchallenge: A library & tools to evaluate predictive language models.
Stars: ✭ 47 (-76.96%)
Lotclass: [EMNLP 2020] Text Classification Using Label Names Only: A Language Model Self-Training Approach
Stars: ✭ 160 (-21.57%)
Getlang: Natural language detection package in pure Go
Stars: ✭ 110 (-46.08%)
Bert Sklearn: A scikit-learn wrapper for Google's BERT model
Stars: ✭ 182 (-10.78%)
Pytorch gbw lm: PyTorch language model for the 1-Billion Word (LM1B / GBW) dataset
Stars: ✭ 101 (-50.49%)
Speecht: Open-source speech-to-text software written in TensorFlow
Stars: ✭ 152 (-25.49%)
Bit Rnn: Quantize weights and activations in recurrent neural networks.
Stars: ✭ 86 (-57.84%)
Char Rnn Chinese: Multi-layer recurrent neural networks (LSTM, GRU, RNN) for character-level language models in Torch. Based on the code of https://github.com/karpathy/char-rnn; supports Chinese and more.
Stars: ✭ 192 (-5.88%)
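The char-rnn projects above train recurrent character-level language models. As a minimal, dependency-free illustration of the underlying idea (a character bigram count model, not the repositories' RNN code), one could sketch:

```python
from collections import Counter, defaultdict

def train_char_bigram(text):
    """Count, for each character, which characters follow it in the text."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, ch):
    """Return the most frequent character observed after `ch`, or None."""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

# Toy corpus; a real char-level LM would train on far more text.
model = train_char_bigram("banana bandana")
print(most_likely_next(model, "a"))  # 'n' follows 'a' most often here
```

An RNN replaces the fixed one-character history with a learned hidden state, but the prediction target (the next character) is the same.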
Full stack transformer: PyTorch library for end-to-end transformer model training, inference and serving
Stars: ✭ 71 (-65.2%)
Gpt2: PyTorch implementation of OpenAI GPT-2
Stars: ✭ 64 (-68.63%)
Gpt Neo: An implementation of model-parallel GPT-2- and GPT-3-like models, with the ability to scale up to full GPT-3 sizes (and possibly more!), using the mesh-tensorflow library.
Stars: ✭ 1,252 (+513.73%)
Tner: Language model fine-tuning on NER with an easy interface, plus cross-domain evaluation. NER models fine-tuned on various domains are released via the Hugging Face model hub.
Stars: ✭ 54 (-73.53%)
Electra: Pre-trained Chinese ELECTRA model, based on adversarial learning
Stars: ✭ 132 (-35.29%)
Gpt2 French: GPT-2 French demo
Stars: ✭ 47 (-76.96%)
Lazynlp: Library to scrape and clean web pages to create massive datasets.
Stars: ✭ 1,985 (+873.04%)
Haystack: 🔍 An open-source NLP framework that leverages Transformer models, enabling developers to implement production-ready neural search, question answering, semantic document search and summarization for a wide range of applications.
Stars: ✭ 3,409 (+1571.08%)
Bert As Language Model: BERT as a language model; fork of https://github.com/google-research/bert
Stars: ✭ 185 (-9.31%)
Keras Gpt 2: Load GPT-2 checkpoints and generate text
Stars: ✭ 113 (-44.61%)
Keras Xlnet: Implementation of XLNet that can load pretrained checkpoints
Stars: ✭ 159 (-22.06%)
Transformers: 🤗 State-of-the-art machine learning for PyTorch, TensorFlow, and JAX.
Stars: ✭ 55,742 (+27224.51%)
Gpt Scrolls: A collaborative collection of open-source, safe GPT-3 prompts that work well
Stars: ✭ 195 (-4.41%)
Openseq2seq: Toolkit for efficient experimentation with speech recognition, text-to-speech and NLP
Stars: ✭ 1,378 (+575.49%)
Transformer Lm: Transformer language model (GPT-2) with a SentencePiece tokenizer
Stars: ✭ 154 (-24.51%)
Pyclue: Python toolkit for the Chinese Language Understanding Evaluation (CLUE) benchmark
Stars: ✭ 91 (-55.39%)
Keras Bert: Implementation of BERT that can load official pre-trained models for feature extraction and prediction
Stars: ✭ 2,264 (+1009.8%)
Electra pytorch: Pretrain and fine-tune ELECTRA with fastai and Hugging Face (results of the paper replicated!)
Stars: ✭ 149 (-26.96%)
Pytorch Openai Transformer Lm: 🐥 A PyTorch implementation of OpenAI's finetuned transformer language model, with a script to import the weights pre-trained by OpenAI
Stars: ✭ 1,268 (+521.57%)
Lingvo
Stars: ✭ 2,361 (+1057.35%)
Greek Bert: A Greek edition of the BERT pre-trained language model
Stars: ✭ 84 (-58.82%)
Awd Lstm Lm: LSTM and QRNN language model toolkit for PyTorch
Stars: ✭ 1,834 (+799.02%)
Nezha chinese pytorch: NEZHA: Neural Contextualized Representation for Chinese Language Understanding
Stars: ✭ 65 (-68.14%)
Macbert: Revisiting Pre-trained Models for Chinese Natural Language Processing (Findings of EMNLP)
Stars: ✭ 167 (-18.14%)
Ld Net: Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling
Stars: ✭ 148 (-27.45%)
Char rnn lm zh: Chinese language model, implemented following the official PyTorch documentation
Stars: ✭ 57 (-72.06%)
Clue: Chinese Language Understanding Evaluation benchmark: datasets, baselines, pre-trained models, corpus and leaderboard
Stars: ✭ 2,425 (+1088.73%)
Suggest: Top-k approximate string matching.
Stars: ✭ 50 (-75.49%)
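A minimal sketch of the top-k approximate string matching idea that Suggest implements: rank a vocabulary by edit distance to the query and keep the k closest. This is a plain dynamic-programming Levenshtein scan for illustration, not Suggest's indexed implementation, and the example vocabulary is made up.

```python
import heapq

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def top_k_matches(query, vocab, k=3):
    """Return the k vocabulary strings closest to `query` by edit distance."""
    return heapq.nsmallest(k, vocab, key=lambda w: levenshtein(query, w))

vocab = ["language", "lineage", "luggage", "garbage", "langur"]
print(top_k_matches("langauge", vocab, k=2))
```

A production system avoids the full linear scan by pruning candidates with an index (n-grams, tries, or compressed structures as in Suggest); the ranking step is the same.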
Indic Bert: BERT-based multilingual model for Indian languages
Stars: ✭ 160 (-21.57%)
Chars2vec: Character-based word embedding model based on an RNN, for handling real-world texts
Stars: ✭ 130 (-36.27%)
Attention Mechanisms: Implementations of a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
Stars: ✭ 203 (-0.49%)
Nlp learning: Learning natural language processing (NLP) with Python: language models, HMM, PCFG, Word2vec, cloze-style reading comprehension, Naive Bayes classifier, TF-IDF, PCA, SVD
Stars: ✭ 188 (-7.84%)
Xlnet Gen: XLNet for generating language.
Stars: ✭ 164 (-19.61%)
Kogpt2 Finetuning: 🔥 Korean GPT-2 (KoGPT2) fine-tuning, trained on Korean lyrics data 🔥
Stars: ✭ 124 (-39.22%)