ltgoslo / simple_elmo

License: GPL-3.0
Simple library to work with pre-trained ELMo models in TensorFlow

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to simple_elmo

embedding study
Learning to generate Chinese character embeddings from pre-trained models; testing the Chinese performance of BERT and ELMo
Stars: ✭ 94 (+91.84%)
Mutual labels:  embeddings, elmo
Kprn
Reasoning Over Knowledge Graph Paths for Recommendation
Stars: ✭ 220 (+348.98%)
Mutual labels:  embeddings
Pytorch Nlp
Basic Utilities for PyTorch Natural Language Processing (NLP)
Stars: ✭ 1,996 (+3973.47%)
Mutual labels:  embeddings
Research2vec
Representing research papers as vectors / latent representations.
Stars: ✭ 192 (+291.84%)
Mutual labels:  embeddings
Cofactor
CoFactor: Regularizing Matrix Factorization with Item Co-occurrence
Stars: ✭ 160 (+226.53%)
Mutual labels:  embeddings
Food2vec
🍔
Stars: ✭ 199 (+306.12%)
Mutual labels:  embeddings
Elastiknn
Elasticsearch plugin for nearest neighbor search. Store vectors and run similarity search using exact and approximate algorithms.
Stars: ✭ 139 (+183.67%)
Mutual labels:  embeddings
Catalyst
🚀 Catalyst is a C# Natural Language Processing library built for speed. Inspired by spaCy's design, it brings pre-trained models, out-of-the box support for training word and document embeddings, and flexible entity recognition models.
Stars: ✭ 224 (+357.14%)
Mutual labels:  embeddings
Magnetloss Pytorch
PyTorch implementation of a deep metric learning technique called "Magnet Loss" from Facebook AI Research (FAIR) in ICLR 2016.
Stars: ✭ 217 (+342.86%)
Mutual labels:  embeddings
Parallax
Tool for interactive embeddings visualization
Stars: ✭ 192 (+291.84%)
Mutual labels:  embeddings
Vec4ir
Word Embeddings for Information Retrieval
Stars: ✭ 188 (+283.67%)
Mutual labels:  embeddings
Cx db8
a contextual, biasable, word-or-sentence-or-paragraph extractive summarizer powered by the latest in text embeddings (Bert, Universal Sentence Encoder, Flair)
Stars: ✭ 164 (+234.69%)
Mutual labels:  embeddings
Sensegram
Making sense embedding out of word embeddings using graph-based word sense induction
Stars: ✭ 209 (+326.53%)
Mutual labels:  embeddings
Entity2rec
entity2rec generates item recommendation using property-specific knowledge graph embeddings
Stars: ✭ 159 (+224.49%)
Mutual labels:  embeddings
Deepehr
Chronic Disease Prediction Using Medical Notes
Stars: ✭ 220 (+348.98%)
Mutual labels:  embeddings
Embedding As Service
One-Stop Solution to encode sentence to fixed length vectors from various embedding techniques
Stars: ✭ 151 (+208.16%)
Mutual labels:  embeddings
Datastories Semeval2017 Task4
Deep-learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Stars: ✭ 184 (+275.51%)
Mutual labels:  embeddings
Vectorai
Vector AI — A platform for building vector based applications. Encode, query and analyse data using vectors.
Stars: ✭ 195 (+297.96%)
Mutual labels:  embeddings
Whatlies
Toolkit to help understand "what lies" in word embeddings. Also benchmarking!
Stars: ✭ 246 (+402.04%)
Mutual labels:  embeddings
Cw2vec
cw2vec: Learning Chinese Word Embeddings with Stroke n-gram Information
Stars: ✭ 224 (+357.14%)
Mutual labels:  embeddings

Simple_elmo is a Python library to work with pre-trained ELMo embeddings in TensorFlow.

This is a significantly updated wrapper around the original ELMo implementation. The main changes are:

  • more convenient and transparent data loading (including from compressed files);
  • code adapted to modern TensorFlow versions (including TensorFlow 2).

Installation

pip install --upgrade simple_elmo

Make sure to update the package regularly: we are actively developing it.

Usage

from simple_elmo import ElmoModel

model = ElmoModel()

Loading

First, let's load a pretrained model from disk:

model.load(PATH_TO_ELMO)

Required arguments

PATH_TO_ELMO is either a ZIP archive downloaded from the NLPL vector repository, OR a directory containing 2 files:

  • *.hdf5, pre-trained ELMo weights in HDF5 format (simple_elmo assumes the file is named model.hdf5; if it is not found, the first existing file with the .hdf5 extension will be used);
  • options.json, description of the model architecture in JSON;

One can also provide a vocab.txt/vocab.txt.gz file in the same directory: a one-word-per-line vocabulary of words to be cached (as character id representations) before inference. Even without this file, ELMo will process all words normally. However, providing it can slightly increase inference speed on very large corpora, by reducing the number of word-to-character-id conversions.

Optional arguments

  • max_batch_size: integer, default 32;

    the maximum number of sentences/documents in a batch during inference; your input will be automatically split into chunks of the respective size; if your computational resources allow, you might want to increase this value.

  • limit: integer, default 100;

    the number of words from the vocabulary file to actually cache (counted from the first line). Increase the default value if you are sure these words occur in your training data much more often than once or twice.

  • full: boolean, default False;

    if True, will try to load the full model from TensorFlow checkpoints, together with the vocabulary. Models loaded this way can be used for language modeling.
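
For illustration, here is a minimal loading sketch; PATH_TO_ELMO is the same placeholder as above, and the optional values are arbitrary examples rather than recommendations:

from simple_elmo import ElmoModel

model = ElmoModel()
# PATH_TO_ELMO points to a ZIP archive or to a directory with model.hdf5 and options.json
model.load(PATH_TO_ELMO, max_batch_size=64, limit=200)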

Working with models

Currently, we provide three methods for loaded models (this list will be expanded in the future):

  • model.get_elmo_vectors(SENTENCES)

  • model.get_elmo_vector_average(SENTENCES)

  • model.get_elmo_substitutes(RAW_SENTENCES)

SENTENCES is a list of input sentences (lists of words). RAW_SENTENCES is a list of input sentences as strings.
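
For instance (made-up data):

SENTENCES = [["Hello", "world", "!"], ["This", "is", "a", "longer", "sentence", "."]]  # lists of words
RAW_SENTENCES = ["Hello world !", "This is a longer sentence ."]  # plain strings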

The get_elmo_vectors() method produces a tensor of contextualized word embeddings. Its shape is (number of sentences, the length of the longest sentence, ELMo dimensionality).

The get_elmo_vector_average() method produces a tensor with one vector per each input sentence, constructed by averaging individual contextualized word embeddings. Its shape is (number of sentences, ELMo dimensionality).

Both these methods accept the layers argument, which takes one of three values:

  • average (default): return the average of all ELMo layers for each word;
  • top: return only the top (last) layer for each word;
  • all: return all ELMo layers for each word (an additional dimension appears in the produced tensor, with size equal to the number of layers in the model, usually 3).

Use these tensors for your downstream tasks.
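
Continuing the made-up SENTENCES example above, a minimal sketch (the model is assumed to be already loaded):

# Contextualized embeddings for every word token:
# shape (2, 6, ELMo dimensionality), padded to the longest sentence
token_vectors = model.get_elmo_vectors(SENTENCES, layers="average")

# One averaged vector per sentence: shape (2, ELMo dimensionality)
sentence_vectors = model.get_elmo_vector_average(SENTENCES, layers="top")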

Another argument for these methods is session. It defaults to None, which means a new TensorFlow session is created automatically when the method is called. This is convenient, since one does not have to worry about initializing the computational graph. However, in some cases you might want to re-use an existing session (for example, to call the method multiple times without the initialization overhead).

For this to work, one must do all the initialization manually before the method is called, for example:

import tensorflow as tf
import simple_elmo
from simple_elmo import ElmoModel

# Build the model in its own graph:
graph = tf.Graph()
with graph.as_default() as elmo_graph:
    elmo_model = ElmoModel()
    elmo_model.load(PATH_TO_ELMO)
...
# Later, create a session for this graph and initialize the variables once:
with elmo_graph.as_default() as current_graph:
    tf_session = tf.compat.v1.Session(graph=elmo_graph)
    with tf_session.as_default() as sess:
        elmo_model.elmo_sentence_input = simple_elmo.elmo.weight_layers("input", elmo_model.sentence_embeddings_op)
        sess.run(tf.compat.v1.global_variables_initializer())
...
# Re-use the same session for multiple inference calls:
elmo_model.get_elmo_vectors(SENTENCES, session=tf_session)
elmo_model.get_elmo_vectors(SENTENCES2, session=tf_session)
...

The get_elmo_substitutes() method currently works only with the models loaded with full=True. For each input sentence, it produces a list of lexical substitutes (LM predictions) for each word token in the sentence, produced by the forward and backward ELMo language models. The substitutes are yielded as dictionaries containing the vocabulary identifiers of the most probable LM predictions, their lexical forms and their logit scores. NB: this method is still experimental!
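
A rough usage sketch (this assumes a model loaded with full=True; the exact dictionary keys are omitted here, inspect the output to see them):

RAW_SENTENCES = ["This is a test sentence ."]

# Each element corresponds to one input sentence; for every word token it contains
# dictionaries with vocabulary ids, word forms and logit scores of the LM predictions.
substitutes = model.get_elmo_substitutes(RAW_SENTENCES)
for sentence in substitutes:
    print(sentence)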

Example scripts

We provide three example scripts to make it easier to start using simple_elmo right away:

Inferring token embeddings

python3 get_elmo_vectors.py -i test.txt -e ~/PATH_TO_ELMO/

This script simply returns contextualized ELMo embeddings for the words in your input sentences.

Text pairs classification

python3 text_classification.py -i paraphrases_lemm.tsv.gz -e ~/PATH_TO_ELMO/

This script can be used for document pair classification (as in textual entailment or paraphrase detection). A simple average of ELMo embeddings for all words in a document is used; then the cosine similarity between the two documents is calculated and used as a single classifier feature. Performance is evaluated with macro F1 score and 10-fold cross-validation.
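
A condensed sketch of this feature extraction (not the full script; numpy is used for the cosine similarity, and PATH_TO_ELMO is a placeholder):

import numpy as np
from simple_elmo import ElmoModel

model = ElmoModel()
model.load(PATH_TO_ELMO)

doc_pair = [["the", "cat", "sat"], ["a", "cat", "was", "sitting"]]
vectors = model.get_elmo_vector_average(doc_pair)  # shape: (2, ELMo dimensionality)

# Cosine similarity between the two averaged document vectors,
# used as the single feature for the classifier
similarity = np.dot(vectors[0], vectors[1]) / (
    np.linalg.norm(vectors[0]) * np.linalg.norm(vectors[1]))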

Example paraphrase dataset for English (adapted from MRPC):

Example paraphrase datasets for Russian (adapted from http://paraphraser.ru/):

Word sense disambiguation

python3 wsd_eval.py -i senseval3.tsv -e ~/PATH_TO_ELMO/

This script takes as an input a word sense disambiguation (WSD) dataset and a pre-trained ELMo model. It extracts token embeddings for ambiguous words and trains a simple Logistic Regression classifier to predict word senses. Averaged macro F1 score across all words in the test set is used as the evaluation measure (with 5-fold cross-validation).
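
A simplified sketch of the classification step, with random placeholder data standing in for the real ELMo token vectors and sense labels:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data: 50 usage examples of one ambiguous word,
# 1024-dimensional token vectors (in reality produced by get_elmo_vectors) and 3 senses
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1024))
y = rng.integers(0, 3, size=50)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")  # macro F1, 5-fold cross-validation
print(scores.mean())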

Example WSD datasets for English (adapted from Senseval 3):

Example WSD datasets for Russian (adapted from RUSSE'18):

Frequently Asked Questions

Where can I find pre-trained ELMo models?

Several repositories are available where one can download ELMo models compatible with simple_elmo:

Can I load ELMoForManyLangs models?

Unfortunately not. These models are trained using a slightly different architecture and are therefore compatible neither with AllenNLP nor with simple_elmo. You should use the original ELMoForManyLangs code to work with these models.

I see a lot of warnings about deprecated methods

This is normal. The simple_elmo library is based on the original ELMo implementation, which targeted TensorFlow versions that are very outdated today. We have significantly updated the code and fixed many warnings, but not all of them yet. The work continues (and will eventually lead to a complete switch to TensorFlow 2).

Meanwhile, these warnings can be ignored: they do not affect the resulting embeddings in any way.

Can I train my own ELMo with this library?

Currently, we provide ELMo training code (updated and improved in the same way as this inference code, compared to the original implementation) in a separate repository. It will be integrated into the simple_elmo package in the near future.
