
pdrm83 / sent2vec

License: MIT
How to encode sentences in a high-dimensional vector space, a.k.a. sentence embedding.

Programming Languages

python
shell

Projects that are alternatives to or similar to sent2vec

Russian news corpus
Russian mass media stemmed texts corpus / A corpus of lemmatized (morphologically normalized) texts from Russian mass media
Stars: ✭ 76 (-23.23%)
Mutual labels:  word2vec, nlp-machine-learning
word2vec-tsne
Google News and Leo Tolstoy: Visualizing Word2Vec Word Embeddings using t-SNE.
Stars: ✭ 59 (-40.4%)
Mutual labels:  word2vec, nlp-machine-learning
Repo 2016
R, Python and Mathematica Codes in Machine Learning, Deep Learning, Artificial Intelligence, NLP and Geolocation
Stars: ✭ 103 (+4.04%)
Mutual labels:  word2vec, nlp-machine-learning
sentiment-analysis-of-tweets-in-russian
Sentiment analysis of tweets in Russian using Convolutional Neural Networks (CNN) with Word2Vec embeddings.
Stars: ✭ 51 (-48.48%)
Mutual labels:  word2vec, nlp-machine-learning
Simple-Sentence-Similarity
Exploring the simple sentence similarity measurements using word embeddings
Stars: ✭ 99 (+0%)
Mutual labels:  word2vec, sentence-embeddings
NTUA-slp-nlp
💻Speech and Natural Language Processing (SLP & NLP) Lab Assignments for ECE NTUA
Stars: ✭ 19 (-80.81%)
Mutual labels:  word2vec, nlp-machine-learning
watchman
Watchman: An open-source social-media event-detection system
Stars: ✭ 18 (-81.82%)
Mutual labels:  word2vec
CompareModels TRECQA
Compare six baseline deep learning models on TrecQA
Stars: ✭ 61 (-38.38%)
Mutual labels:  nlp-machine-learning
Embedding
A summary of embedding-model code and study notes
Stars: ✭ 25 (-74.75%)
Mutual labels:  word2vec
vector space modelling
Vector space modelling and document classification in Python (NLP)
Stars: ✭ 16 (-83.84%)
Mutual labels:  word2vec
Natural-Language-Processing
Contains various architectures and novel paper implementations for Natural Language Processing tasks like Sequence Modelling and Neural Machine Translation.
Stars: ✭ 48 (-51.52%)
Mutual labels:  nlp-machine-learning
sensim
Sentence Similarity Estimator (SenSim)
Stars: ✭ 15 (-84.85%)
Mutual labels:  nlp-machine-learning
use-cases-of-bert
Use-cases of Hugging Face's BERT (e.g. paraphrase generation, unsupervised extractive summarization).
Stars: ✭ 18 (-81.82%)
Mutual labels:  nlp-machine-learning
phd-resources
Internet Delivered Treatment using Adaptive Technology
Stars: ✭ 37 (-62.63%)
Mutual labels:  nlp-machine-learning
nlp newsletter
Natural language processing (NLP) newsletter right on GitHub
Stars: ✭ 57 (-42.42%)
Mutual labels:  nlp-machine-learning
codenames
Codenames AI using Word Vectors
Stars: ✭ 41 (-58.59%)
Mutual labels:  word2vec
NMFADMM
A sparsity aware implementation of "Alternating Direction Method of Multipliers for Non-Negative Matrix Factorization with the Beta-Divergence" (ICASSP 2014).
Stars: ✭ 39 (-60.61%)
Mutual labels:  word2vec
Product-Categorization-NLP
Multi-Class Text Classification for products based on their description with Machine Learning algorithms and Neural Networks (MLP, CNN, Distilbert).
Stars: ✭ 30 (-69.7%)
Mutual labels:  word2vec
SWDM
SIGIR 2017: Embedding-based query expansion for weighted sequential dependence retrieval model
Stars: ✭ 35 (-64.65%)
Mutual labels:  word2vec
awesome-semantic-search
A curated list of awesome resources related to Semantic Search🔎 and Semantic Similarity tasks.
Stars: ✭ 161 (+62.63%)
Mutual labels:  sentence-embeddings


Sent2Vec - How to Compute Sentence Embedding Fast and Flexible

In the past, we mostly encoded text data using techniques such as one-hot encoding, term frequency, or TF-IDF (normalized term frequency), all of which come with significant limitations. In recent years, new advances have made it possible to encode words and sentences in far more meaningful formats; the word2vec technique and the BERT language model are two important examples.
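For contrast, the classic sparse encodings look like this (a scikit-learn illustration only; scikit-learn is not a sent2vec dependency):

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["I love NLP.", "NLP loves me back."]
tfidf = TfidfVectorizer().fit_transform(corpus)
print(tfidf.shape)  # one sparse row per sentence, one column per vocabulary term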

Sentence embedding is an important step in many NLP tasks such as sentiment analysis and summarization, so a flexible library for fast, contextualized prototyping is needed. The open-source sent2vec Python package gives you exactly that. You currently have access to the standard encoders; more advanced techniques will be added in later releases. We hope you can use this library in your exciting NLP projects.

🔓 Install

sent2vec is designed to help you prototype faster, which is why it builds on several other libraries. The module requires the following packages:

  • gensim
  • numpy
  • spacy
  • transformers
  • torch
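
If any of them are missing, they can typically be installed together in one step (these are their standard PyPI names):

pip3 install gensim numpy spacy transformers torch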

Then, sent2vec itself can be installed using pip:

pip3 install sent2vec

📚 Documentation

class sent2vec.vectorizer.Vectorizer(pretrained_weights='distilbert-base-uncased', ensemble_method='average')

Parameters

  • pretrained_weights: str, default='distilbert-base-uncased' - How word embeddings are computed. You can pass other BERT models to this parameter, such as the base multilingual model, i.e., distilbert-base-multilingual-cased. The vectorizer uses a BERT model with the specified weights unless you pass a file path ending in .txt, .gz, or .bin, in which case the Gensim library loads the provided word2vec model (pretrained weights). For example, you can pass glove-wiki-gigaword-300.gz to load the Wiki vectors (when the file is saved in the folder from which you run the code).
  • ensemble_method: str, default='average' - How word vectors are aggregated into sentence vectors.
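
Both construction styles therefore look like this (a sketch; it assumes glove-wiki-gigaword-300.gz has already been downloaded into the working directory):

from sent2vec.vectorizer import Vectorizer

# BERT-style weights, referenced by model name
bert_vectorizer = Vectorizer(pretrained_weights='distilbert-base-multilingual-cased')

# word2vec/GloVe-style weights, recognized by the file extension and loaded via Gensim
word2vec_vectorizer = Vectorizer(pretrained_weights='glove-wiki-gigaword-300.gz')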

Methods

run(sentences, remove_stop_words=['not'], add_stop_words=[])
  • sentences: list - List of sentences to encode.
  • remove_stop_words: list, default=['not'] - Words to remove from the stop-word list when splitting sentences (used in word2vec mode).
  • add_stop_words: list, default=[] - Words to add to the stop-word list when splitting sentences (used in word2vec mode).
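
A minimal call then looks like this (a sketch; vectorizer is constructed as above, and the stop-word arguments only take effect in word2vec mode):

vectorizer.run(["Alice is not in the Wonderland."], remove_stop_words=['not'], add_stop_words=[])
vectors = vectorizer.vectors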

🧰 Usage

1. How to use a BERT model?

If you want to use the BERT language model (more specifically, distilbert-base-uncased) to encode sentences for downstream applications, you can use the code below.

from sent2vec.vectorizer import Vectorizer

sentences = [
    "This is an awesome book to learn NLP.",
    "DistilBERT is an amazing NLP model.",
    "We can interchangeably use embedding, encoding, or vectorizing.",
]
vectorizer = Vectorizer()
vectorizer.run(sentences)
vectors = vectorizer.vectors

Now it is possible to compute distances between sentences using their vectors. In this example, as expected, the distance between vectors[0] and vectors[1] is smaller than the distance between vectors[0] and vectors[2].

from scipy import spatial

dist_1 = spatial.distance.cosine(vectors[0], vectors[1])
dist_2 = spatial.distance.cosine(vectors[0], vectors[2])
print('dist_1: {0}, dist_2: {1}'.format(dist_1, dist_2))
assert dist_1 < dist_2
# dist_1: 0.043, dist_2: 0.192

Note: The default vectorizer for the BERT model is distilbert-base-uncased, but it's possible to pass the pretrained_weights argument to choose another BERT model. For example, you can use the code below to load the base multilingual model.

vectorizer = Vectorizer(pretrained_weights='distilbert-base-multilingual-cased')
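
The same vectorizer can then embed non-English text (a quick sketch; the French sentence is only an illustration):

vectorizer.run(["Ceci est une phrase en français."])
vectors = vectorizer.vectors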

2. How to use a Word2Vec model?

If you want to use a Word2Vec approach instead, you must pass a valid path to the model weights. Under the hood, the sentences are split into lists of words using the sent2words method of the Splitter class. You can customize the list of stop-words by adding words to or removing words from the default list via two optional arguments of the vectorizer's run method: remove_stop_words and add_stop_words.

Note: After the word vectors are extracted, the default method for computing the sentence embedding is to average the vectors of the remaining words.
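
Conceptually, with ensemble_method='average' the aggregation step amounts to a simple mean over the word vectors (a numpy sketch, not the library's actual implementation):

import numpy as np

# toy stand-in: 3 words, each with a 4-dimensional embedding
word_vectors = np.random.rand(3, 4)
sentence_vector = word_vectors.mean(axis=0)  # shape: (4,)

Putting it all together: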

from sent2vec.vectorizer import Vectorizer

sentences = [
    "Alice is in the Wonderland.",
    "Alice is not in the Wonderland.",
]

vectorizer = Vectorizer(pretrained_weights=PRETRAINED_VECTORS_PATH)
vectorizer.run(sentences, remove_stop_words=['not'], add_stop_words=[])
vectors = vectorizer.vectors
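
As in the BERT example, the resulting vectors can be compared with a cosine distance. Because 'not' was removed from the stop-word list (and therefore kept in the sentences), the two sentences remain clearly distinguishable:

from scipy import spatial

dist = spatial.distance.cosine(vectors[0], vectors[1])
print('dist: {0}'.format(dist))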

And that's pretty much it!
