Gpt2 French - GPT-2 French demo
Stars: ✭ 47 (-96.25%)
Keras Gpt 2 - Load GPT-2 checkpoints and generate text
Stars: ✭ 113 (-90.97%)
Greek Bert - A Greek edition of the BERT pre-trained language model
Stars: ✭ 84 (-93.29%)
Chinese Electra - Pre-trained Chinese ELECTRA models
Stars: ✭ 830 (-33.71%)
Awd Lstm Lm - LSTM and QRNN Language Model Toolkit for PyTorch
Stars: ✭ 1,834 (+46.49%)
Spago - Self-contained Machine Learning and Natural Language Processing library in Go
Stars: ✭ 854 (-31.79%)
Openseq2seq - Toolkit for efficient experimentation with Speech Recognition, Text2Speech, and NLP
Stars: ✭ 1,378 (+10.06%)
Pytorch Openai Transformer Lm - 🐥 A PyTorch implementation of OpenAI's finetuned transformer language model, with a script to import the weights pre-trained by OpenAI
Stars: ✭ 1,268 (+1.28%)
Nlp chinese corpus - Large-scale Chinese corpus for NLP
Stars: ✭ 6,656 (+431.63%)
Chars2vec - Character-based word embeddings model based on RNNs for handling real-world texts
Stars: ✭ 130 (-89.62%)
Nezha chinese pytorch - NEZHA: Neural Contextualized Representation for Chinese Language Understanding
Stars: ✭ 65 (-94.81%)
Electra pytorch - Pretrain and fine-tune ELECTRA with fastai and Hugging Face (results of the paper replicated!)
Stars: ✭ 149 (-88.1%)
Char rnn lm zh - Chinese language model, implemented following the official PyTorch documentation
Stars: ✭ 57 (-95.45%)
Haystack - 🔍 Haystack is an open-source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search, and summarization for a wide range of applications.
Stars: ✭ 3,409 (+172.28%)
Suggest - Top-k approximate string matching
Stars: ✭ 50 (-96.01%)
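Suggest's own index structures aside, the idea of top-k approximate string matching can be sketched with Python's standard library; `difflib` is a stand-in here for illustration, not Suggest's actual API:

```python
from difflib import get_close_matches

def top_k_matches(query, candidates, k=3, cutoff=0.6):
    """Return the k candidates most similar to the query,
    ranked by difflib's similarity ratio (0.0-1.0)."""
    return get_close_matches(query, candidates, n=k, cutoff=cutoff)

# A misspelled query still recovers the closest vocabulary entries.
matches = top_k_matches("appel", ["apple", "ape", "peach"], k=2)
```

Real top-k matchers (including Suggest) use specialized indexes such as tries or n-gram inverted lists to avoid scanning every candidate.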
Keras Xlnet - Implementation of XLNet that can load pretrained checkpoints
Stars: ✭ 159 (-87.3%)
Pytorch Cpp - C++ implementation of PyTorch tutorials for everyone
Stars: ✭ 1,014 (-19.01%)
Transformers - 🤗 Transformers: state-of-the-art machine learning for PyTorch, TensorFlow, and JAX
Stars: ✭ 55,742 (+4352.24%)
Spacy Transformers - 🛸 Use pretrained transformers like BERT, XLNet, and GPT-2 in spaCy
Stars: ✭ 919 (-26.6%)
Ld Net - Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling
Stars: ✭ 148 (-88.18%)
Pykaldi - A Python wrapper for Kaldi
Stars: ✭ 756 (-39.62%)
Pyclue - Python toolkit for the Chinese Language Understanding Evaluation (CLUE) benchmark
Stars: ✭ 91 (-92.73%)
Bit Rnn - Quantize weights and activations in recurrent neural networks
Stars: ✭ 86 (-93.13%)
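As a rough illustration of what quantizing a weight means, here is a generic uniform quantizer for values in [-1, 1]; this is a textbook sketch, not Bit Rnn's actual scheme:

```python
def quantize(x, bits):
    """Uniformly quantize x in [-1, 1] onto 2**bits discrete levels."""
    levels = 2 ** bits - 1
    # Map [-1, 1] -> [0, levels], round to the nearest level, map back.
    q = round((x + 1.0) / 2.0 * levels)
    return q / levels * 2.0 - 1.0
```

With `bits=1` this collapses to a sign function, which is the extreme case binary-weight RNNs exploit; more bits trade memory for a smaller rounding error.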
Electra - Pre-trained Chinese ELECTRA model based on adversarial learning
Stars: ✭ 132 (-89.46%)
Bio embeddings - Get protein embeddings from protein sequences
Stars: ✭ 86 (-93.13%)
Speecht - An open-source speech-to-text tool written in TensorFlow
Stars: ✭ 152 (-87.86%)
Full stack transformer - PyTorch library for end-to-end transformer model training, inference, and serving
Stars: ✭ 71 (-94.33%)
Kogpt2 Finetuning - 🔥 Fine-tuning Korean GPT-2 (KoGPT2), trained on Korean song-lyric data 🔥
Stars: ✭ 124 (-90.1%)
Cross Domain ner - Cross-domain NER using cross-domain language modeling; code for the ACL 2019 paper
Stars: ✭ 67 (-94.65%)
Lotclass - [EMNLP 2020] Text Classification Using Label Names Only: A Language Model Self-Training Approach
Stars: ✭ 160 (-87.22%)
Gpt2 - PyTorch implementation of OpenAI GPT-2
Stars: ✭ 64 (-94.89%)
Robbert - A Dutch RoBERTa-based language model
Stars: ✭ 120 (-90.42%)
Phonlp - PhoNLP: a BERT-based multi-task learning toolkit for part-of-speech tagging, named entity recognition, and dependency parsing (NAACL 2021)
Stars: ✭ 56 (-95.53%)
Tner - Language model fine-tuning for NER with an easy interface and cross-domain evaluation. NER models fine-tuned on various domains are released via the Hugging Face model hub.
Stars: ✭ 54 (-95.69%)
Lingo - Package lingo provides the data structures and algorithms required for natural language processing
Stars: ✭ 113 (-90.97%)
Lmchallenge - A library and tools to evaluate predictive language models
Stars: ✭ 47 (-96.25%)
Xlnet Gen - XLNet for language generation
Stars: ✭ 164 (-86.9%)
Nlp Library - A curated collection of papers for the NLP practitioner 📖👩‍🔬
Stars: ✭ 1,025 (-18.13%)
Getlang - Natural language detection package in pure Go
Stars: ✭ 110 (-91.21%)
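Getlang's detection is more sophisticated than this, but the core idea of stopword-based language identification can be sketched in a few lines; the languages and word lists below are illustrative assumptions, not Getlang's:

```python
# Tiny illustrative stopword lists; real detectors use much larger
# profiles (and usually character n-gram statistics as well).
STOPWORDS = {
    "en": {"the", "and", "of", "to", "in"},
    "fr": {"le", "la", "et", "de", "un"},
    "de": {"der", "die", "und", "das", "ein"},
}

def detect_language(text):
    """Return the language whose stopwords appear most often in text."""
    words = text.lower().split()
    scores = {lang: sum(w in sw for w in words)
              for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)
```

Stopword counting works well on longer texts but degrades on short strings, which is why production detectors blend several signals.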
Boilerplate Dynet Rnn Lm - Boilerplate code for quickly getting set up to run language modeling experiments
Stars: ✭ 37 (-97.04%)
Bert language understanding - Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN
Stars: ✭ 933 (-25.48%)
Easy Bert - A dead-simple BERT API for Python and Java (https://github.com/google-research/bert)
Stars: ✭ 106 (-91.53%)
F Lm - Language modeling
Stars: ✭ 156 (-87.54%)
Lm Lstm Crf - Empower Sequence Labeling with Task-Aware Language Model
Stars: ✭ 778 (-37.86%)
Pytorch gbw lm - PyTorch language model for the 1-Billion-Word (LM1B / GBW) dataset
Stars: ✭ 101 (-91.93%)
Lightnlp - A deep learning framework for natural language processing based on PyTorch and torchtext
Stars: ✭ 739 (-40.97%)
Tupe - Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training"; improves existing models such as BERT.
Stars: ✭ 143 (-88.58%)
Tongrams - A C++ library providing fast language model queries in compressed space
Stars: ✭ 88 (-92.97%)
Indic Bert - BERT-based multilingual model for Indian languages
Stars: ✭ 160 (-87.22%)
Lazynlp - Library to scrape and clean web pages to create massive datasets
Stars: ✭ 1,985 (+58.55%)
Transformer Lm - Transformer language model (GPT-2) with a SentencePiece tokenizer
Stars: ✭ 154 (-87.7%)
Clue - Chinese Language Understanding Evaluation benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
Stars: ✭ 2,425 (+93.69%)