
dccuchile / Spanish Word Embeddings

Licence: other
Spanish word embeddings computed with different methods and from different corpora

Projects that are alternatives of or similar to Spanish Word Embeddings

Fasttext.js
FastText for Node.js
Stars: ✭ 127 (-46.19%)
Mutual labels:  word-embeddings
Debiaswe
Remove problematic gender bias from word embeddings.
Stars: ✭ 175 (-25.85%)
Mutual labels:  word-embeddings
Shallowlearn
An experiment in re-implementing supervised learning models based on shallow neural network approaches (e.g. fastText) with some additional exclusive features and a nice API. Written in Python and fully compatible with scikit-learn.
Stars: ✭ 196 (-16.95%)
Mutual labels:  word-embeddings
Elmo Tutorial
A short tutorial on ELMo training (pre-trained models, training on new data, incremental training)
Stars: ✭ 145 (-38.56%)
Mutual labels:  word-embeddings
Lftm
Improving topic models LDA and DMM (one-topic-per-document model for short texts) with word embeddings (TACL 2015)
Stars: ✭ 168 (-28.81%)
Mutual labels:  word-embeddings
Datastories Semeval2017 Task4
Deep-learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Stars: ✭ 184 (-22.03%)
Mutual labels:  word-embeddings
Scattertext
Beautiful visualizations of how language differs among document types.
Stars: ✭ 1,722 (+629.66%)
Mutual labels:  word-embeddings
Wordgcn
ACL 2019: Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks
Stars: ✭ 230 (-2.54%)
Mutual labels:  word-embeddings
Sifrank zh
A Chinese keyphrase extraction method based on pre-trained language models (Chinese implementation of the paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model")
Stars: ✭ 175 (-25.85%)
Mutual labels:  word-embeddings
Jfasttext
Java interface for fastText
Stars: ✭ 193 (-18.22%)
Mutual labels:  word-embeddings
Awesome Sentence Embedding
A curated list of pretrained sentence and word embedding models
Stars: ✭ 1,973 (+736.02%)
Mutual labels:  word-embeddings
Gensim
Topic Modelling for Humans
Stars: ✭ 12,763 (+5308.05%)
Mutual labels:  word-embeddings
Vec4ir
Word Embeddings for Information Retrieval
Stars: ✭ 188 (-20.34%)
Mutual labels:  word-embeddings
Spherical Text Embedding
[NeurIPS 2019] Spherical Text Embedding
Stars: ✭ 143 (-39.41%)
Mutual labels:  word-embeddings
Chameleon recsys
Source code of CHAMELEON - A Deep Learning Meta-Architecture for News Recommender Systems
Stars: ✭ 202 (-14.41%)
Mutual labels:  word-embeddings
Hash Embeddings
PyTorch implementation of Hash Embeddings (NIPS 2017). Submission to the NIPS Implementation Challenge.
Stars: ✭ 126 (-46.61%)
Mutual labels:  word-embeddings
Texthero
Text preprocessing, representation and visualization from zero to hero.
Stars: ✭ 2,407 (+919.92%)
Mutual labels:  word-embeddings
Koan
A word2vec negative sampling implementation with correct CBOW update.
Stars: ✭ 232 (-1.69%)
Mutual labels:  word-embeddings
Question Generation
Generating multiple choice questions from text using Machine Learning.
Stars: ✭ 227 (-3.81%)
Mutual labels:  word-embeddings
Germanwordembeddings
Toolkit to obtain and preprocess German corpora, train models using word2vec (gensim), and evaluate them with generated test sets
Stars: ✭ 189 (-19.92%)
Mutual labels:  word-embeddings

Spanish Word Embeddings

Below you will find links to Spanish word embeddings computed with different methods and from different corpora. Whenever possible, a description of the parameters used to compute the embeddings is included, together with simple statistics of the vectors and vocabulary and a description of the corpus from which the embeddings were computed. Direct links to the embeddings are provided; please refer to the original sources for proper citation (also see References). An example of the use of some of these embeddings can be found here or in this tutorial (both in Spanish).

Summary (and links) for the embeddings on this page (a minimal loading sketch in Python follows the table):

#   Corpus                        Size   Algorithm   #vectors    vec-dim   Credits
1   Spanish Unannotated Corpora   2.6B   FastText    1,313,423   300       José Cañete
2   Spanish Billion Word Corpus   1.4B   FastText    855,380     300       Jorge Pérez
3   Spanish Billion Word Corpus   1.4B   GloVe       855,380     300       Jorge Pérez
4   Spanish Billion Word Corpus   1.4B   Word2Vec    1,000,653   300       Cristian Cardellino
5   Spanish Wikipedia             ???    FastText    985,667     300       FastText team
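
Assuming the downloaded file is in word2vec text format (a header line followed by one word and its vector per line), it can be explored quickly with gensim. A minimal sketch; the filename fasttext-sbwc.vec is a placeholder for whichever file you download:

```python
# Minimal sketch: load a word2vec-format text file and query it with gensim.
# "fasttext-sbwc.vec" is a placeholder for whichever file you downloaded.
from gensim.models import KeyedVectors

# limit=100000 keeps memory modest for a quick look; drop it to load every vector
vectors = KeyedVectors.load_word2vec_format("fasttext-sbwc.vec", limit=100000)

print(vectors.most_similar("reina", topn=5))      # nearest neighbours
print(vectors.similarity("perro", "gato"))        # cosine similarity
# classic analogy: rey - hombre + mujer ≈ reina
print(vectors.most_similar(positive=["rey", "mujer"], negative=["hombre"], topn=3))
```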

FastText embeddings from SUC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=1,313,423):

More vectors with different dimensions (10, 30, 100, and 300) can be found here

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters:
    • min subword-ngram = 3
    • max subword-ngram = 6
    • minCount = 5
    • epochs = 20
    • dim = 300
    • all other parameters set as default (a rough gensim equivalent is sketched below)
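
A rough gensim counterpart of these settings, as a sketch only (the released vectors were presumably trained with the original fastText tool; corpus.txt is a placeholder for a tokenized, one-sentence-per-line corpus):

```python
# Rough gensim counterpart of the listed fastText skipgram settings.
# "corpus.txt" is a placeholder for a tokenized, one-sentence-per-line corpus.
from gensim.models import FastText
from gensim.models.word2vec import LineSentence

sentences = LineSentence("corpus.txt")
model = FastText(
    sentences,
    sg=1,             # skipgram
    vector_size=300,  # dim = 300
    min_n=3,          # min subword-ngram = 3
    max_n=6,          # max subword-ngram = 6
    min_count=5,      # minCount = 5
    epochs=20,        # epochs = 20
)
model.wv.save_word2vec_format("suc-fasttext-300d.vec")  # placeholder output name
```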

Corpus

FastText embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=855,380):

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters:
    • min subword-ngram = 3
    • max subword-ngram = 6
    • minCount = 5
    • epochs = 20
    • dim = 300
    • all other parameters set as default

Corpus

  • Spanish Billion Word Corpus
  • Corpus Size: 1.4 billion words
  • Post processing: besides the post-processing of the raw corpus described on the SBWCE page, which included deletion of punctuation, numbers, etc., the following processing was applied (a small sketch follows this list):
    • Words were converted to lower case letters
    • Every sequence of the 'DIGITO' keyword was replaced by (a single) '0'
    • All words of more than 3 characters plus a '0' were omitted (example: 'padre0')
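
These three steps can be pictured as a small token filter. A minimal, illustrative sketch only; the exact original pipeline and ordering may differ:

```python
import re

def normalize_token(tok):
    """Illustrative token filter for the three post-processing steps above."""
    tok = tok.lower()                     # lower-case everything
    tok = re.sub(r"(digito)+", "0", tok)  # collapse runs of the DIGITO keyword to a single '0'
    # drop tokens like 'padre0': more than 3 characters followed by a '0'
    if re.fullmatch(r"\S{4,}0", tok):
        return None
    return tok

tokens = "El PADRE tiene DIGITODIGITO hijos padre0".split()
print([t for t in map(normalize_token, tokens) if t is not None])
# -> ['el', 'padre', 'tiene', '0', 'hijos']
```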

GloVe embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=855,380):

Algorithm

  • Implementation: GloVe
  • Parameters:
    • vector-size = 300
    • iter = 25
    • min-count = 5
    • all other parameters set as default
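
When loading these vectors, note that GloVe's native text format has no header line; gensim (>= 4.0) can read such a file with no_header=True, while a file that already carries a word2vec-style header needs no flag at all. A minimal sketch with the placeholder filename glove-sbwc.vec:

```python
# Minimal sketch for GloVe vectors in GloVe's native text format (no header line).
# "glove-sbwc.vec" is a placeholder filename; drop no_header=True if the file
# already starts with a word2vec-style "<count> <dim>" header.
from gensim.models import KeyedVectors

glove = KeyedVectors.load_word2vec_format("glove-sbwc.vec", no_header=True)
print(glove.most_similar("universidad", topn=5))
```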

Corpus

Word2Vec embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=1,000,653):

Algorithm

Corpus

FastText embeddings from Spanish Wikipedia

Embeddings

Links to the embeddings (#dimensions=300, #vectors=985,667):

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters: FastText default parameters
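
A minimal sketch for querying these vectors with the fasttext Python bindings, assuming the binary model has been downloaded (the official Spanish Wikipedia model is usually distributed as wiki.es.bin); the .bin file keeps the subword n-grams, so it can also build vectors for out-of-vocabulary words:

```python
# Minimal sketch using the fasttext Python bindings and the binary model.
# "wiki.es.bin" is the usual name of the official Spanish Wikipedia model.
import fasttext

model = fasttext.load_model("wiki.es.bin")
print(model.get_word_vector("electroencefalografista")[:5])  # OOV word, built from subword n-grams
print(model.get_nearest_neighbors("madrid", k=5))
```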

Corpus

References
