amansrivastava17 / Embedding As Service

License: MIT
One-stop solution to encode sentences into fixed-length vectors using various embedding techniques

Programming Languages

python

Projects that are alternatives of or similar to Embedding As Service

Lmdb Embeddings
Fast word vectors with little memory usage in Python
Stars: ✭ 404 (+167.55%)
Mutual labels:  word2vec, embeddings, fasttext, glove
NLP-paper
🎨 NLP (Natural Language Processing) tutorial 🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-84.77%)
Mutual labels:  word2vec, transformer, glove, fasttext
Finalfusion Rust
finalfusion embeddings in Rust
Stars: ✭ 35 (-76.82%)
Mutual labels:  word2vec, embeddings, fasttext, glove
Magnitude
A fast, efficient universal vector embedding utility package.
Stars: ✭ 1,394 (+823.18%)
Mutual labels:  word2vec, embeddings, fasttext, glove
Cw2vec
cw2vec: Learning Chinese Word Embeddings with Stroke n-gram Information
Stars: ✭ 224 (+48.34%)
Mutual labels:  word2vec, embeddings, fasttext
navec
Compact high quality word embeddings for Russian language
Stars: ✭ 118 (-21.85%)
Mutual labels:  word2vec, embeddings, glove
Simple-Sentence-Similarity
Exploring the simple sentence similarity measurements using word embeddings
Stars: ✭ 99 (-34.44%)
Mutual labels:  word2vec, glove, fasttext
Persian-Sentiment-Analyzer
Persian sentiment analysis
Stars: ✭ 30 (-80.13%)
Mutual labels:  word2vec, embeddings, fasttext
Embedding
A summary of embedding model code and study notes
Stars: ✭ 25 (-83.44%)
Mutual labels:  word2vec, transformer, fasttext
Nlp
By 兜哥: an open-source introductory book on NLP
Stars: ✭ 1,677 (+1010.6%)
Mutual labels:  ai, word2vec, fasttext
Keras Textclassification
Chinese long-text and short-sentence classification, multi-label classification, and two-sentence similarity with Keras NLP; base classes for building word/sentence embedding layers and network graphs; includes FastText, TextCNN, CharCNN, TextRNN, RCNN, DCNN, DPCNN, VDCNN, CRNN, Bert, Xlnet, Albert, Attention, DeepMoji, HAN, CapsuleNet, Transformer-encode, Seq2seq, SWEM, LEAM, TextGCN
Stars: ✭ 914 (+505.3%)
Mutual labels:  embeddings, fasttext, transformer
Embeddingsviz
Visualize word embeddings of a vocabulary in TensorBoard, including the neighbors
Stars: ✭ 40 (-73.51%)
Mutual labels:  embeddings, fasttext, glove
Nlp research
NLP research: TensorFlow-based NLP deep learning projects supporting four major tasks: text classification, sentence matching, sequence labeling, and text generation
Stars: ✭ 141 (-6.62%)
Mutual labels:  word2vec, fasttext, transformer
Deeplearning Nlp Models
A small, interpretable codebase containing the re-implementation of a few "deep" NLP models in PyTorch. Colab notebooks to run with GPUs. Models: word2vec, CNNs, transformer, gpt.
Stars: ✭ 64 (-57.62%)
Mutual labels:  word2vec, embeddings, transformer
Wordembeddings Elmo Fasttext Word2vec
Using pre-trained word embeddings (FastText, Word2Vec)
Stars: ✭ 146 (-3.31%)
Mutual labels:  word2vec, fasttext, glove
Glove As A Tensorflow Embedding Layer
Taking a pretrained GloVe model, and using it as a TensorFlow embedding weight layer **inside the GPU**. Therefore, you only need to send the index of the words through the GPU data transfer bus, reducing data transfer overhead.
Stars: ✭ 85 (-43.71%)
Mutual labels:  word2vec, glove
Vectorsinsearch
Dice.com repo to accompany the dice.com 'Vectors in Search' talk by Simon Hughes, from the Activate 2018 search conference, and the 'Searching with Vectors' talk from Haystack 2019 (US). Builds upon my conceptual search and semantic search work from 2015
Stars: ✭ 71 (-52.98%)
Mutual labels:  word2vec, glove
Nlp Journey
Documents, papers and code related to Natural Language Processing, including Topic Model, Word Embedding, Named Entity Recognition, Text Classification, Text Generation, Text Similarity, Machine Translation, etc. All code is implemented in TensorFlow 2.0.
Stars: ✭ 1,290 (+754.3%)
Mutual labels:  word2vec, fasttext
Dict2vec
Dict2vec is a framework to learn word embeddings using lexical dictionaries.
Stars: ✭ 91 (-39.74%)
Mutual labels:  word2vec, embeddings
Convai Bot 1337
NIPS Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager
Stars: ✭ 65 (-56.95%)
Mutual labels:  ai, fasttext

embedding-as-service

One-stop solution to encode sentences into fixed-length vectors using various embedding techniques
• Inspired by bert-as-service


What is it • Installation • Getting Started • Supported Embeddings • API

What is it

Encoding/embedding is an upstream task of encoding any input (text, image, audio, video, or transactional data) into a fixed-length vector. Embeddings are quite popular in NLP: various embedding models have been proposed by researchers in recent years, some of the most famous being bert, xlnet, and word2vec. The goal of this repo is to build a one-stop solution for all available embedding techniques. We are starting with popular text embeddings for now, and later aim to add techniques for image, audio, and video inputs as well.

embedding-as-service helps you encode any given text into a fixed-length vector using the supported embeddings and models.

💾 Installation

▴ Back to top

embedding-as-service can be used as a Python module, or you can run it as a server and handle queries by installing the client package embedding-as-service-client.

Using embedding-as-service as a module

Install embedding-as-service via pip.

$ pip install embedding-as-service

Note that the code MUST be run on Python >= 3.6. The module does not support Python 2!

Using embedding-as-service as a server

Here you also need to install the client module embedding-as-service-client

$ pip install embedding-as-service # server
$ pip install embedding-as-service-client # client

The client module does not require Python 3.6; it supports both Python 2 and Python 3.

⚡️ Getting Started

▴ Back to top

1. Initialise the encoder using a supported embedding and model from here

If using embedding-as-service as a module

>>> from embedding_as_service.text.encode import Encoder  
>>> en = Encoder(embedding='bert', model='bert_base_cased', max_seq_length=256)  

If using embedding-as-service as a server

# start the server by providing embedding, model, port, max_seq_length[default=256], num_workers[default=4]
$ embedding-as-service-start --embedding bert --model bert_base_cased --port 8080 --max_seq_length 256
>>> from embedding_as_service_client import EmbeddingClient
>>> en = EmbeddingClient(host=<host_server_ip>, port=<host_port>)

2. Get sentence token embeddings

>>> vecs = en.encode(texts=['hello aman', 'how are you?'])  
>>> vecs  
array([[[ 1.7049843 ,  0.        ,  1.3486509 , ..., -1.3647075 ,
          0.6958289 ,  1.8013777 ],
        ...
        [ 0.4913215 ,  0.60877025,  0.73050433, ..., -0.64490885,
          0.8525057 ,  0.3080206 ]]], dtype=float32)
>>> vecs.shape  
(2, 256, 768) # batch x max_sequence_length x embedding_size
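
Since the result is a single NumPy array of shape (batch, max_sequence_length, embedding_size), per-sentence and per-token vectors can be sliced out directly. A minimal sketch using the vecs array above (the shapes shown assume the max_seq_length=256 set in step 1):

>>> sentence_vecs = vecs[0]        # all token vectors for 'hello aman'
>>> sentence_vecs.shape
(256, 768)                         # max_sequence_length x embedding_size
>>> token_vec = vecs[0][0]         # vector of the first token of the first sentence
>>> token_vec.shape
(768,)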

3. Using a pooling strategy, click here for more.

Supported Pooling Methods

| Strategy | Description |
|---|---|
| None | no pooling at all, useful when you want word embeddings instead of a sentence embedding. This results in a [max_seq_len, embedding_size] encoding matrix for a sequence. |
| reduce_mean | take the average of all token embeddings |
| reduce_min | take the minimum of all token embeddings |
| reduce_max | take the maximum of all token embeddings |
| reduce_mean_max | do reduce_mean and reduce_max separately, then concatenate them together |
| first_token | get the token embedding of the first token of a sentence |
| last_token | get the token embedding of the last token of a sentence |
>>> vecs = en.encode(texts=['hello aman', 'how are you?'], pooling='reduce_mean')  
>>> vecs  
array([[-0.33547154,  0.34566957,  1.1954105 , ...,  0.33702594,
         1.0317835 , -0.785943  ],
       [-0.3439088 ,  0.36881036,  1.0612687 , ...,  0.28851607,
         1.1107115 , -0.6253736 ]], dtype=float32)
  
>>> vecs.shape  
(2, 768) # batch x embedding_size  
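
As a further sketch (not from the original docs), reduce_mean_max should double the embedding axis, since the reduce_mean and reduce_max vectors are concatenated per the table above:

>>> vecs = en.encode(texts=['hello aman', 'how are you?'], pooling='reduce_mean_max')
>>> vecs.shape
(2, 1536) # batch x (2 * embedding_size): mean-pooled and max-pooled vectors concatenated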

4. Show embedding Tokens

>>> en.tokenize(texts=['hello aman', 'how are you?'])  
[['_hello', '_aman'], ['_how', '_are', '_you', '?']]  

5. Using your own tokenizer

>>> texts = ['hello aman!', 'how are you']  
  
# a naive whitespace tokenizer  
>>> tokens = [s.split() for s in texts]  
>>> vecs = en.encode(tokens, is_tokenized=True)  

📋 API

▴ Back to top

  1. class embedding_as_service.text.encode.Encoder

| Argument | Type | Default | Description |
|---|---|---|---|
| embedding | str | Required | embedding method to be used, check the Embedding column here |
| model | str | Required | model to be used for the chosen embedding, check the Model column here |
| max_seq_length | int | 128 | maximum sequence length, default is 128 |

  2. def embedding_as_service.text.encode.Encoder.encode

| Argument | Type | Default | Description |
|---|---|---|---|
| texts | List[str] or List[List[str]] | Required | list of sentences, or list of lists of sentence tokens when is_tokenized=True |
| pooling | str | (Optional) | pooling method to apply, see the available methods here |
| is_tokenized | bool | False | set to True when tokens are passed for encoding |
| batch_size | int | 128 | maximum number of sequences handled by the encoder at once; larger batches are partitioned into smaller ones |

  3. def embedding_as_service.text.encode.Encoder.tokenize

| Argument | Type | Default | Description |
|---|---|---|---|
| texts | List[str] | Required | list of sentences |
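
As a hedged illustration (a sketch, not additional official documentation), the three entries above combine as follows; the argument values echo the Getting Started example and the defaults listed in the tables, and the printed shape assumes bert_base_cased's 768-dimensional output with reduce_mean pooling:

>>> from embedding_as_service.text.encode import Encoder
>>> en = Encoder(embedding='bert', model='bert_base_cased', max_seq_length=128)
>>> en.tokenize(texts=['hello aman'])            # inspect the model's own tokenization
[['_hello', '_aman']]
>>> vecs = en.encode(texts=['hello aman', 'how are you?'],
...                  pooling='reduce_mean',      # optional, see pooling methods above
...                  is_tokenized=False,         # raw strings, not pre-tokenized lists
...                  batch_size=128)             # default batch size
>>> vecs.shape
(2, 768)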

✅ Supported Embeddings and Models

▴ Back to top

Here is the list of supported embeddings and their respective models.

| Embedding | Model | Embedding dimensions | Paper |
|---|---|---|---|
| 1️⃣ albert | albert_base | 768 | Read Paper 🔖 |
| | albert_large | 1024 | |
| | albert_xlarge | 2048 | |
| | albert_xxlarge | 4096 | |
| 2️⃣ xlnet | xlnet_large_cased | 1024 | Read Paper 🔖 |
| | xlnet_base_cased | 768 | |
| 3️⃣ bert | bert_base_uncased | 768 | Read Paper 🔖 |
| | bert_base_cased | 768 | |
| | bert_multi_cased | 768 | |
| | bert_large_uncased | 1024 | |
| | bert_large_cased | 1024 | |
| 4️⃣ elmo | elmo_bi_lm | 512 | Read Paper 🔖 |
| 5️⃣ ulmfit | ulmfit_forward | 300 | Read Paper 🔖 |
| | ulmfit_backward | 300 | |
| 6️⃣ use | use_dan | 512 | Read Paper 🔖 |
| | use_transformer_large | 512 | |
| | use_transformer_lite | 512 | |
| 7️⃣ word2vec | google_news_300 | 300 | Read Paper 🔖 |
| 8️⃣ fasttext | wiki_news_300 | 300 | Read Paper 🔖 |
| | wiki_news_300_sub | 300 | |
| | common_crawl_300 | 300 | |
| | common_crawl_300_sub | 300 | |
| 9️⃣ glove | twitter_200 | 200 | Read Paper 🔖 |
| | twitter_100 | 100 | |
| | twitter_50 | 50 | |
| | twitter_25 | 25 | |
| | wiki_300 | 300 | |
| | wiki_200 | 200 | |
| | wiki_100 | 100 | |
| | wiki_50 | 50 | |
| | crawl_42B_300 | 300 | |
| | crawl_840B_300 | 300 | |
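
Switching techniques is just a matter of changing the embedding and model arguments. A minimal sketch using two rows from the table above (the output dimensions follow the table; the exact values and download/caching behaviour are handled by the library):

>>> from embedding_as_service.text.encode import Encoder
>>> # word2vec vectors trained on Google News, 300 dimensions per the table
>>> en_w2v = Encoder(embedding='word2vec', model='google_news_300', max_seq_length=64)
>>> en_w2v.encode(texts=['hello aman'], pooling='reduce_mean').shape
(1, 300)
>>> # GloVe vectors trained on Common Crawl (42B tokens), also 300 dimensions
>>> en_glove = Encoder(embedding='glove', model='crawl_42B_300', max_seq_length=64)
>>> en_glove.encode(texts=['hello aman'], pooling='reduce_mean').shape
(1, 300)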

Credits

This software uses the following open source packages:

Contributors ✨

Thanks goes to these wonderful people (emoji key):


MrPranav101

💻 📖 🚇

Aman Srivastava

💻 📖 🚇

Chirag Jain

💻 📖 🚇

Ashutosh Singh

💻 📖 🚇

Dhaval Taunk

💻 📖 🚇

Alec Koumjian

🐛

Pradeesh

🐛

This project follows the all-contributors specification. Contributions of any kind welcome!

Please read the contribution guidelines first.

Citing

▴ Back to top

If you use embedding-as-service in a scientific publication, we would appreciate references using the following BibTeX entry:

@misc{aman2019embeddingservice,
  title={embedding-as-service},
  author={Srivastava, Aman},
  howpublished={\url{https://github.com/amansrivastava17/embedding-as-service}},
  year={2019}
}