huggingface / Tokenizers

Licence: apache-2.0
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production

Programming Languages

Rust
11053 projects
Python
139335 projects - #7 most used programming language
TypeScript
32286 projects
Jupyter Notebook
11667 projects
JavaScript
184084 projects - #8 most used programming language
CSS
56736 projects

Projects that are alternatives of or similar to Tokenizers

Transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Stars: ✭ 55,742 (+997.93%)
Mutual labels:  natural-language-processing, language-model, natural-language-understanding, bert
Spark Nlp
State of the Art Natural Language Processing
Stars: ✭ 2,518 (-50.4%)
Mutual labels:  natural-language-processing, transformers, bert
Easy Bert
A Dead Simple BERT API for Python and Java (https://github.com/google-research/bert)
Stars: ✭ 106 (-97.91%)
Mutual labels:  natural-language-processing, language-model, natural-language-understanding
Attention Mechanisms
Implementations for a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
Stars: ✭ 203 (-96%)
Mutual labels:  natural-language-processing, language-model, natural-language-understanding
Clue
δΈ­ζ–‡θ―­θ¨€η†θ§£ζ΅‹θ―„εŸΊε‡† Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus and leaderboard
Stars: ✭ 2,425 (-52.24%)
Mutual labels:  language-model, transformers, bert
Bert As Service
Mapping a variable-length sentence to a fixed-length vector using BERT model
Stars: ✭ 9,779 (+92.61%)
Mutual labels:  natural-language-processing, natural-language-understanding, bert
Spacy Transformers
🛸 Use pretrained transformers like BERT, XLNet and GPT-2 in spaCy
Stars: ✭ 919 (-81.9%)
Mutual labels:  natural-language-processing, language-model, natural-language-understanding
wechsel
Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
Stars: ✭ 39 (-99.23%)
Mutual labels:  transformers, language-model, bert
backprop
Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
Stars: ✭ 229 (-95.49%)
Mutual labels:  transformers, language-model, bert
COCO-LM
[NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Stars: ✭ 109 (-97.85%)
Mutual labels:  transformers, language-model, natural-language-understanding
Haystack
πŸ” Haystack is an open source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search and summarization for a wide range of applications.
Stars: ✭ 3,409 (-32.85%)
Mutual labels:  language-model, transformers, bert
text2class
Multi-class text categorization using state-of-the-art pre-trained contextualized language models, e.g. BERT
Stars: ✭ 15 (-99.7%)
Mutual labels:  transformers, bert, natural-language-understanding
Chars2vec
Character-based word embeddings model based on RNN for handling real-world texts
Stars: ✭ 130 (-97.44%)
Mutual labels:  natural-language-processing, language-model, natural-language-understanding
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (-36.79%)
Mutual labels:  natural-language-processing, transformers, bert
label-studio-transformers
Label data using HuggingFace's transformers and automatically get a prediction service
Stars: ✭ 117 (-97.7%)
Mutual labels:  transformers, bert, natural-language-understanding
classy
classy is a simple-to-use library for building high-performance Machine Learning models in NLP.
Stars: ✭ 61 (-98.8%)
Mutual labels:  transformers, bert, natural-language-understanding
Bert Pytorch
Google AI 2018 BERT pytorch implementation
Stars: ✭ 4,642 (-8.57%)
Mutual labels:  language-model, bert
policy-data-analyzer
Building a model to recognize incentives for landscape restoration in environmental policies from Latin America, the US and India. Bringing NLP to the world of policy analysis through an extensible framework that includes scraping, preprocessing, active learning and text analysis pipelines.
Stars: ✭ 22 (-99.57%)
Mutual labels:  transformers, bert
few-shot-lm
The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021)
Stars: ✭ 32 (-99.37%)
Mutual labels:  gpt, language-model
Practical Nlp
Official Repository for 'Practical Natural Language Processing' by O'Reilly Media
Stars: ✭ 452 (-91.1%)
Mutual labels:  natural-language-processing, natural-language-understanding


Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.

Main features:

  • Train new vocabularies and tokenize, using today's most used tokenizers.
  • Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
  • Easy to use, but also extremely versatile.
  • Designed for research and production.
  • Normalization comes with alignment tracking. It's always possible to get the part of the original sentence that corresponds to a given token (see the sketch after this list).
  • Does all the pre-processing: truncate, pad, and add the special tokens your model needs.
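
As a minimal sketch of the last two points, assuming tokenizer is an already-trained Tokenizer instance (the values shown are illustrative):

output = tokenizer.encode("Hello, y'all!")
# Alignment tracking: each token keeps its (start, end) character span in the original text
print(output.offsets)  # e.g. [(0, 5), (5, 6), (7, 8), (8, 9), (9, 12), (12, 13)]

# Built-in pre-processing: truncation and padding are configured once on the tokenizer
tokenizer.enable_truncation(max_length=512)
tokenizer.enable_padding(pad_id=3, pad_token="[PAD]")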

Bindings

We provide bindings to the following languages (more to come!):

  • Rust (original implementation)
  • Python
  • Node.js

Quick example using Python:

Choose your model from Byte-Pair Encoding (BPE), WordPiece, or Unigram and instantiate a tokenizer:

from tokenizers import Tokenizer
from tokenizers.models import BPE

# Map pieces the model has never seen to the "[UNK]" token
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
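
The same pattern applies to the other models. A minimal sketch, assuming you want a WordPiece or Unigram tokenizer instead (variable names here are just for illustration):

from tokenizers import Tokenizer
from tokenizers.models import WordPiece, Unigram

# WordPiece, as used by BERT-style models
wordpiece_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))

# Unigram, as used by SentencePiece-style models
unigram_tokenizer = Tokenizer(Unigram())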

You can customize how pre-tokenization (e.g., splitting into words) is done:

from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
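
Pre-tokenizers can also be composed. As an illustrative sketch (not part of the quick example), splitting on whitespace and then isolating digits could look like this:

from tokenizers import pre_tokenizers
from tokenizers.pre_tokenizers import Whitespace, Digits

# Split on whitespace first, then give every digit its own token
tokenizer.pre_tokenizer = pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True)])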

Then, training your tokenizer on a set of files takes just two lines of code:

from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
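
Once trained, the whole pipeline can be serialized to a single JSON file and reloaded later; the file name below is just an example:

tokenizer.save("tokenizer.json")

# Later, or in another process:
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_file("tokenizer.json")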

Once your tokenizer is trained, encode any text with just one line:

output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]

Check the Python documentation or the Python quicktour to learn more!

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].