GermanWordEmbeddings


There has been a lot of research on training word embeddings on English corpora. This toolkit applies deep learning via gensim's word2vec to German corpora in order to train and evaluate German language models. An overview of the project, evaluation results and download links can be found on the project's website or directly in this repository.

This project is released under the MIT license.

  1. Get started
  2. Obtaining corpora
  3. Preprocessing
  4. Training models
  5. Vocabulary
  6. Evaluation
  7. Download

Get started

Make sure you have Python 3 installed, as well as the following libraries:

pip install gensim nltk matplotlib numpy scipy scikit-learn

Now you can download word2vec_german.sh and execute it in your shell to automatically download this toolkit and the corresponding corpus files and run the model training and evaluation. Be aware that this can take a very long time!

You can also clone this repository and use my already trained model to play around with the evaluation and visualization.

If you just want to see how the different Python scripts work, have a look into the code directory to see Jupyter Notebook script output examples.

Obtaining corpora

There are multiple possibilities for obtaining huge German corpora that are publicly available and free to use:

German Wikipedia

wget https://dumps.wikimedia.org/dewiki/latest/dewiki-latest-pages-articles.xml.bz2

Statistical Machine Translation

Shuffled German news of the years 2007 to 2013:

for i in 2007 2008 2009 2010 2011 2012 2013; do
  wget http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.$i.de.shuffled.gz
done

Models trained with this toolkit are based on the German Wikipedia and German news of 2013.

Preprocessing

This tool preprocesses the raw Wikipedia XML corpus with the WikipediaExtractor (a Python script by Giuseppe Attardi to filter a Wikipedia XML dump, licensed under GPLv3) and some shell commands to strip all XML tags and quotation marks:

wget http://medialab.di.unipi.it/Project/SemaWiki/Tools/WikiExtractor.py
python WikiExtractor.py -c -b 25M -o extracted dewiki-latest-pages-articles.xml.bz2
find extracted -name '*bz2' \! -exec bzip2 -k -c -d {} \; > dewiki.xml
sed -i 's/<[^>]*>//g' dewiki.xml
sed -i 's|["'\''„“‚‘]||g' dewiki.xml
rm -rf extracted

The German news corpora already contain one sentence per line and don't have any XML syntax overhead. Only quotation marks need to be removed:

for i in 2007 2008 2009 2010 2011 2012 2013; do
  gzip -d news.$i.de.shuffled.gz
  sed -i 's|["'\''„“‚‘]||g' news.$i.de.shuffled
done

Afterwards, the preprocessing.py script can be called on these corpus files with the following options:

flag default description
-h, --help - show a help message and exit
-p, --punctuation False filter punctuation tokens
-s, --stopwords False filter stop word tokens
-u, --umlauts False replace German umlauts with their respective digraphs
-b, --bigram False detect and process common bigram phrases
-t [ ], --threads [ ] NUMBER_OF_PROCESSORS number of worker threads
--batch_size [ ] 32 batch size for sentence processing

Example usage:

python preprocessing.py dewiki.xml corpus/dewiki.corpus -psub
for file in *.shuffled; do python preprocessing.py $file corpus/$file.corpus -psub; done
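In essence, the per-sentence work behind these flags can be sketched in plain Python. This is a hypothetical simplification, not the actual preprocessing.py code; stop-word filtering (-s, typically via an NLTK word list) is omitted here:

```python
import string

def replace_umlauts(text):
    """Replace German umlauts and eszett with their digraphs (-u flag)."""
    return text.translate(str.maketrans({
        "ä": "ae", "ö": "oe", "ü": "ue",
        "Ä": "Ae", "Ö": "Oe", "Ü": "Ue", "ß": "ss",
    }))

def preprocess_sentence(sentence, filter_punctuation=True, umlauts=True):
    """Tokenize one sentence and apply punctuation/umlaut handling."""
    tokens = sentence.split()
    if filter_punctuation:  # -p flag: drop punctuation characters and empty tokens
        tokens = [t.strip(string.punctuation) for t in tokens]
        tokens = [t for t in tokens if t]
    if umlauts:             # -u flag: transliterate umlauts
        tokens = [replace_umlauts(t) for t in tokens]
    return [t.lower() for t in tokens]

print(preprocess_sentence("Die Bäume, die Häuser und die Straßen!"))
# ['die', 'baeume', 'die', 'haeuser', 'und', 'die', 'strassen']
```

Bigram detection (-b) would additionally merge frequent token pairs into single tokens, as gensim's Phrases model does.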

Training models

Models are trained with the help of the training.py script with the following options:

flag default description
-h, --help - show this help message and exit
-s [ ], --size [ ] 100 dimension of word vectors
-w [ ], --window [ ] 5 size of the sliding window
-m [ ], --mincount [ ] 5 minimum number of occurrences of a word to be considered
-t [ ], --threads [ ] NUMBER_OF_PROCESSORS number of worker threads to train the model
-g [ ], --sg [ ] 1 training algorithm: Skip-Gram (1), otherwise CBOW (0)
-i [ ], --hs [ ] 1 use of hierarchical softmax for training
-n [ ], --negative [ ] 0 use of negative sampling for training (usually between 5-20)
-o [ ], --cbowmean [ ] 0 for CBOW training algorithm: use sum (0) or mean (1) to merge context vectors

Example usage:

python training.py corpus/ my.model -s 200 -w 5

Note that the first parameter is a directory and that every file it contains will be used as a corpus file for training.

If the time needed to train the model should be measured and stored in the results file, this would be a possible command:

{ time python training.py corpus/ my.model -s 200 -w 5; } 2>> my.model.result

Vocabulary

To compute the vocabulary of a given corpus, the vocabulary.py script can be used:

python vocabulary.py my.model my.model.vocab
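What such a vocabulary computation might look like (a hypothetical sketch, not the actual vocabulary.py logic): count all token frequencies and list them ordered by frequency:

```python
from collections import Counter

def build_vocabulary(lines):
    """Count token frequencies over an iterable of whitespace-tokenized lines."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

# Three toy corpus lines standing in for a real corpus file
corpus = ["der hund bellt", "der hund schläft", "die katze miaut"]
vocab = build_vocabulary(corpus)
for word, count in vocab.most_common(3):
    print(word, count)
```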

Evaluation

To create test sets and evaluate trained models, the evaluation.py script can be used. It's possible to evaluate both syntactic and semantic features of a trained model. For a successful creation of testsets, the following source files should be created before starting the script (see the configuration part in the script for more information).

Syntactic test set

With the syntactic test, features like singular, plural, 3rd person, past tense, comparative or superlative can be evaluated. For this purpose there are three source files: adjectives, nouns and verbs. Each file contains one unique word with its inflections per line, separated by a dash. These combination patterns can be entered in the PATTERN_SYN constant in the script configuration. The script then combines each word with 5 random other words according to the given pattern to create appropriate analogy questions. Once the data file with the questions is created, it can be evaluated. Normally the evaluation could be done by gensim's word2vec accuracy function, but to get a more specific evaluation result (correct matches, top-n matches and coverage), this project uses its own accuracy functions (test_mostsimilar_groups() and test_mostsimilar() in evaluation.py).
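The question generation described above can be sketched like this (a hypothetical simplification; the pair format follows the source-file description, not the actual script):

```python
import random

def build_analogy_questions(pairs, samples=5, seed=42):
    """Combine each word pair with `samples` random other pairs to form
    analogy questions of the form: a a' b b' (a is to a' as b is to b')."""
    rng = random.Random(seed)
    questions = []
    for i, (a, a2) in enumerate(pairs):
        others = pairs[:i] + pairs[i + 1:]
        for b, b2 in rng.sample(others, min(samples, len(others))):
            questions.append((a, a2, b, b2))
    return questions

# Pairs following a singular-plural pattern, as in nouns.txt
pairs = [("Bild", "Bilder"), ("Name", "Namen"), ("Haus", "Häuser"), ("Kind", "Kinder")]
questions = build_analogy_questions(pairs, samples=2)
print(len(questions))  # 4 pairs × 2 samples = 8 questions
```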

The given source files of this project contain 100 unique nouns with 2 patterns, 100 unique adjectives with 6 patterns and 100 unique verbs with 12 patterns, resulting in 10k analogy questions. Here are some examples of possible source files:

adjectives.txt

Possible pattern: basic-comparative-superlative

Example content:

gut-besser-beste
laut-lauter-lauteste

See src/adjectives.txt

nouns.txt

Possible pattern: singular-plural

Example content:

Bild-Bilder
Name-Namen

See src/nouns.txt

verbs.txt

Possible pattern: basic-1stPersonSingularPresent-2ndPersonPluralPresent-3rdPersonSingularPast-3rdPersonPluralPast

Example content:

finden-finde-findet-fand-fanden
suchen-suche-sucht-suchte-suchten

See src/verbs.txt

Semantic test set

With the semantic test, features concerning word meaning can be evaluated. For this purpose there are three source files: opposite, best match and doesn't match. The given source files result in a total of 950 semantic questions.

opposite.txt

This file contains pairs of opposites, following the pattern oneword-oppositeword per line, to evaluate the model's ability to find opposites. The script combines each pair with 10 random other pairs to build analogy questions. The given opposite source file of this project includes 30 unique pairs, resulting in 300 analogy questions.

Example content:

Sommer-Winter
Tag-Nacht

See src/opposite.txt

bestmatch.txt

This file contains groups of thematically related word pairs, to evaluate the model's ability to find thematically relevant analogies. The script combines each pair with all other pairs of the same group to build analogy questions. The given bestmatch source file of this project includes 7 groups with a total of 77 unique pairs, resulting in 540 analogy questions.

Example content:

: Politik
Elisabeth-Königin
Charles-Prinz
: Technik
Android-Google
iOS-Apple
Windows-Microsoft

See src/bestmatch.txt

doesntfit.txt

This file contains 3 thematically similar words per line, divided by spaces, followed by a set of words that do not fit, divided by dashes, like fittingword1 fittingword2 fittingword3 notfittingword1-notfittingword2-...-notfittingwordn. This tests the model's ability to find the least fitting word in a set of 4 words. The script combines each matching triple with every non-matching word of the dash-separated list to build doesntfit questions. The given doesntfit source file of this project includes 11 triples, each with 10 words that do not fit, resulting in 110 questions.

Example content:

Hase Hund Katze Baum-Besitzer-Elefant-Essen-Haus-Mensch-Tier-Tierheim-Wiese-Zoo
August April September Jahr-Monat-Tag-Stunde-Minute-Zeit-Kalender-Woche-Quartal-Uhr

See src/doesntfit.txt
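The odd-one-out evaluation itself can be sketched with plain cosine similarity (toy 2-dimensional vectors for illustration; this mirrors the spirit of gensim's doesnt_match, not the project's actual accuracy functions):

```python
import numpy as np

def doesnt_match(words, vectors):
    """Return the word whose vector is least similar to the mean of all vectors."""
    mat = np.array([vectors[w] for w in words], dtype=float)
    mat /= np.linalg.norm(mat, axis=1, keepdims=True)  # normalize to unit length
    mean = mat.mean(axis=0)
    mean /= np.linalg.norm(mean)
    similarities = mat @ mean                          # cosine similarity to the mean
    return words[int(np.argmin(similarities))]

# Toy vectors: three animals clustered together and one outlier
vectors = {
    "Hase":  [1.0, 0.1],
    "Hund":  [0.9, 0.2],
    "Katze": [1.0, 0.0],
    "Baum":  [0.0, 1.0],
}
print(doesnt_match(["Hase", "Hund", "Katze", "Baum"], vectors))  # Baum
```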

Those options for the script execution are possible:

flag description
-h, --help show a help message and exit
-c, --create if set, create testsets before evaluating
-u, --umlauts if set, create additional testsets with transformed umlauts and/or use them instead

Example usage:

python evaluation.py my.model -u

Note: Only files with the filetypes .bin, .model or without any suffix are treated as binary files.

Download

The optimized German language model, trained with this toolkit on the German Wikipedia (15th May 2015) and German news articles from 2013 (15th May 2015), can be downloaded here:

german.model [704 MB]

If you want to use this project for your own work, you can use the following BibTeX entry for citation:

@thesis{mueller2015,
  author = {{Müller}, Andreas},
  title  = "{Analyse von Wort-Vektoren deutscher Textkorpora}",
  school = {Technische Universität Berlin},
  year   = 2015,
  month  = jun,
  type   = {Bachelor's Thesis},
  url    = {https://devmount.github.io/GermanWordEmbeddings}
}

The GermanWordEmbeddings tool and the pretrained language model are completely free to use. If you enjoy it, please consider donating via PayPal for further development. 💚
