avidale / compress-fasttext

License: MIT
Tools for shrinking fastText models (in gensim format)

Programming Languages

  • Jupyter Notebook (11,667 projects)
  • Python (139,335 projects; #7 most used programming language)

Projects that are alternatives to or similar to compress-fasttext

word embedding
Sample code for training Word2Vec and FastText on a wiki corpus, together with their pretrained word embeddings.
Stars: ✭ 21 (-83.06%)
Mutual labels:  word-embeddings, fasttext
Embeddingsviz
Visualize word embeddings of a vocabulary in TensorBoard, including the neighbors
Stars: ✭ 40 (-67.74%)
Mutual labels:  word-embeddings, fasttext
Persian-Sentiment-Analyzer
Persian sentiment analysis (sentiment analysis for the Persian language)
Stars: ✭ 30 (-75.81%)
Mutual labels:  fasttext, fasttext-embeddings
Fastrtext
R wrapper for fastText
Stars: ✭ 103 (-16.94%)
Mutual labels:  word-embeddings, fasttext
Gensim
Topic Modelling for Humans
Stars: ✭ 12,763 (+10192.74%)
Mutual labels:  word-embeddings, fasttext
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+2487.9%)
Mutual labels:  word-embeddings, fasttext
Biosentvec
BioWordVec & BioSentVec: pre-trained embeddings for biomedical words and sentences
Stars: ✭ 308 (+148.39%)
Mutual labels:  word-embeddings, fasttext
Magnitude
A fast, efficient universal vector embedding utility package.
Stars: ✭ 1,394 (+1024.19%)
Mutual labels:  word-embeddings, fasttext
Fasttext.js
FastText for Node.js
Stars: ✭ 127 (+2.42%)
Mutual labels:  word-embeddings, fasttext
Shallowlearn
An experiment about re-implementing supervised learning models based on shallow neural network approaches (e.g. fastText) with some additional exclusive features and a nice API. Written in Python and fully compatible with Scikit-learn.
Stars: ✭ 196 (+58.06%)
Mutual labels:  word-embeddings, fasttext
Simple-Sentence-Similarity
Exploring simple sentence similarity measurements using word embeddings
Stars: ✭ 99 (-20.16%)
Mutual labels:  word-embeddings, fasttext
dasem
Danish semantic analysis
Stars: ✭ 17 (-86.29%)
Mutual labels:  word-embeddings
Word-Embeddings-and-Document-Vectors
An evaluation of word-embeddings for classification
Stars: ✭ 32 (-74.19%)
Mutual labels:  fasttext-embeddings
wefe
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes the bias measurement and mitigation in Word Embeddings models. Please feel welcome to open an issue in case you have any questions or a pull request if you want to contribute to the project!
Stars: ✭ 164 (+32.26%)
Mutual labels:  word-embeddings
fasttext-serving
Serve your fastText models for text classification and word vectors
Stars: ✭ 21 (-83.06%)
Mutual labels:  fasttext
NLP-paper
🎨🎨 NLP (natural language processing) tutorials 🎨🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-81.45%)
Mutual labels:  fasttext
word2vec-on-wikipedia
A pipeline for training word embeddings using word2vec on a Wikipedia corpus.
Stars: ✭ 68 (-45.16%)
Mutual labels:  word-embeddings
actions-suggest-related-links
A GitHub Action to suggest related or similar issues, documents, and links. Based on the power of NLP and fastText.
Stars: ✭ 23 (-81.45%)
Mutual labels:  fasttext
sister
SImple SenTence EmbeddeR
Stars: ✭ 66 (-46.77%)
Mutual labels:  word-embeddings
Word2VecfJava
Word2VecfJava: Java implementation of Dependency-Based Word Embeddings and extensions
Stars: ✭ 14 (-88.71%)
Mutual labels:  word-embeddings

Compress-fastText

This Python 3 package compresses fastText word embedding models (from the gensim package) by orders of magnitude without significantly affecting their quality.

Here are some links to the models that have already been compressed.

This blogpost in Russian and this one in English give more details about the motivation and methods for compressing fastText models.

Note: gensim==4.0.0 has introduced some backward-incompatible changes:

  • With gensim<4.0.0, please use compress-fasttext<=0.0.7 (and optionally Russian models from our first release).
  • With gensim>=4.0.0, please use compress-fasttext>=0.1.0 (and optionally Russian or English models from our 0.1.0 release).
  • Some models are no longer supported in the new version of gensim+compress-fasttext (for example, multiple models from RusVectores that use compatible_hash=False).
  • For any particular model, compatibility should be determined experimentally. If you notice any strange behaviour, please report it in the GitHub issues.
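
If you are unsure which case applies in your environment, a minimal version check could look like the sketch below (the packaging library used here is an assumption, not a dependency of compress-fasttext):

import gensim
from packaging import version

# gensim below 4.0.0 pairs with compress-fasttext<=0.0.7;
# gensim 4.0.0 and above pairs with compress-fasttext>=0.1.0
if version.parse(gensim.__version__) < version.parse('4.0.0'):
    print('install compress-fasttext<=0.0.7')
else:
    print('install compress-fasttext>=0.1.0')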

The package can be installed with pip:

pip install compress-fasttext[full]

If you are not going to perform matrix decomposition or quantization, you can install a lighter version with fewer dependencies:

pip install compress-fasttext

Model compression

You can use this package to compress your own fastText model (or one downloaded e.g. from RusVectores):

Compress a model in Gensim format:

import gensim
import compress_fasttext

# Load the original full-size model saved in gensim format
big_model = gensim.models.fasttext.FastTextKeyedVectors.load('path-to-original-model')
# Keep only the most frequent features and quantize the remaining weights
small_model = compress_fasttext.prune_ft_freq(big_model, pq=True)
small_model.save('path-to-new-model')

Import a model in the original Facebook format and compress it:

from gensim.models.fasttext import load_facebook_model
import compress_fasttext

# Load a model in the original Facebook .bin format and take its word vectors
big_model = load_facebook_model('path-to-original-model').wv
# Keep only the most frequent features and quantize the remaining weights
small_model = compress_fasttext.prune_ft_freq(big_model, pq=True)
small_model.save('path-to-new-model')

To perform this compression, you will need to pip install gensim==3.8.3 pqkmeans beforehand.

Different compression methods include:

  • matrix decomposition (svd_ft)
  • product quantization (quantize_ft)
  • optimization of feature hashing (prune_ft)
  • feature selection (prune_ft_freq)

The recommended approach is a combination of feature selection and quantization (prune_ft_freq with pq=True).
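
As a rough sketch of how these methods are invoked (each function also accepts tuning parameters, omitted here; consult the package documentation for details):

import compress_fasttext
# big_model is a FastTextKeyedVectors instance, loaded as shown above
small_model = compress_fasttext.svd_ft(big_model)        # matrix decomposition
small_model = compress_fasttext.quantize_ft(big_model)   # product quantization
small_model = compress_fasttext.prune_ft(big_model)      # optimized feature hashing
small_model = compress_fasttext.prune_ft_freq(big_model, pq=True)  # recommended combination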

Model usage

If you just need a tiny fastText model for Russian, you can download this 21-megabyte model. It's a compressed version of the geowac_tokens_none_fasttextskipgram_300_5_2020 model from RusVectores.

If compress-fasttext is already installed, you can download and use this tiny model:

import compress_fasttext
small_model = compress_fasttext.models.CompressedFastTextKeyedVectors.load(
    'https://github.com/avidale/compress-fasttext/releases/download/gensim-4-draft/geowac_tokens_sg_300_5_2020-100K-20K-100.bin'
)
print(small_model['спасибо'])
# [ 0.26762889  0.35489027 ...  -0.06149674] # a 300-dimensional vector
print(small_model.most_similar('котенок'))
# [('кот', 0.7391024827957153), ('пес', 0.7388300895690918), ('малыш', 0.7280327081680298), ... ]

The class CompressedFastTextKeyedVectors inherits from gensim.models.fasttext.FastTextKeyedVectors, but makes a few additional optimizations.
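
Because it keeps the gensim interface, the other KeyedVectors methods work as well, and the character n-grams still provide vectors for out-of-vocabulary words. A minimal sketch (the misspelled word below is a hypothetical example):

# 'спасибооо' is not in the vocabulary, but fastText composes its vector from n-grams
print(small_model.similarity('спасибо', 'спасибооо'))
# a float; typically high, because the two words share most of their character n-grams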

For English, you can use this tiny model, obtained by compressing the cc.en.300 model released by Facebook:

import compress_fasttext
small_model = compress_fasttext.models.CompressedFastTextKeyedVectors.load(
    'https://github.com/avidale/compress-fasttext/releases/download/v0.0.4/cc.en.300.compressed.bin'
)
print(small_model['hello'])
# [ 1.84736611e-01  6.32683930e-03  4.43901886e-03 ... -2.88431027e-02]  # a 300-dimensional vector
print(small_model.most_similar('Python'))
# [('PHP', 0.5252903699874878), ('.NET', 0.5027452707290649), ('Java', 0.4897131323814392),  ... ]

More compressed models for 101 various languages can be found at https://zenodo.org/record/4905385.

Example of application

In practical applications, you usually feed fastText embeddings to some other model. The class FastTextTransformer uses the scikit-learn interface and represents a text as the average of the embeddings of its words. With it you can, for example, train a classifier on top of fastText to tell edible things from inedible ones:

import compress_fasttext
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from compress_fasttext.feature_extraction import FastTextTransformer

small_model = compress_fasttext.models.CompressedFastTextKeyedVectors.load(
    'https://github.com/avidale/compress-fasttext/releases/download/v0.0.4/cc.en.300.compressed.bin'
)

classifier = make_pipeline(
    FastTextTransformer(model=small_model),  # each text -> average of its word embeddings
    LogisticRegression()
).fit(
    ['banana', 'soup', 'burger', 'car', 'tree', 'city'],  # training texts
    [1, 1, 1, 0, 0, 0]  # labels: 1 = edible, 0 = inedible
)
classifier.predict(['jet', 'train', 'cake', 'apple'])
# array([0, 0, 1, 1])
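
The result is an ordinary scikit-learn pipeline, so the rest of the scikit-learn API applies as well; for example, you can ask for probability estimates (the output below is illustrative only):

print(classifier.predict_proba(['pizza']))
# e.g. array([[0.3, 0.7]]): estimated probabilities of the inedible and edible classes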

Notes

This code is heavily based on the navec package by Alexander Kukushkin and the blogpost by Andrey Vasnetsov about shrinking fastText embeddings.
