vyraun / Half Size

Code for "Effective Dimensionality Reduction for Word Embeddings".

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Half Size

Embedding As Service
One-Stop Solution to encode sentence to fixed length vectors from various embedding techniques
Stars: ✭ 151 (+69.66%)
Mutual labels:  fasttext, glove
Magnitude
A fast, efficient universal vector embedding utility package.
Stars: ✭ 1,394 (+1466.29%)
Mutual labels:  fasttext, glove
Wordembeddings Elmo Fasttext Word2vec
Using pre-trained word embeddings (Fasttext, Word2Vec)
Stars: ✭ 146 (+64.04%)
Mutual labels:  fasttext, glove
Simple-Sentence-Similarity
Exploring the simple sentence similarity measurements using word embeddings
Stars: ✭ 99 (+11.24%)
Mutual labels:  glove, fasttext
NLP-paper
🎨🎨 NLP (natural language processing) tutorial 🎨🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-74.16%)
Mutual labels:  glove, fasttext
Finalfusion Rust
finalfusion embeddings in Rust
Stars: ✭ 35 (-60.67%)
Mutual labels:  fasttext, glove
Lmdb Embeddings
Fast word vectors with little memory usage in Python
Stars: ✭ 404 (+353.93%)
Mutual labels:  fasttext, glove
Embeddingsviz
Visualize word embeddings of a vocabulary in TensorBoard, including the neighbors
Stars: ✭ 40 (-55.06%)
Mutual labels:  fasttext, glove
Img2imggan
Implementation of the paper : "Toward Multimodal Image-to-Image Translation"
Stars: ✭ 49 (-44.94%)
Mutual labels:  nips-2017
Alphacsc
Convolution dictionary learning for time-series
Stars: ✭ 66 (-25.84%)
Mutual labels:  nips-2017
Pytorchtext
1st Place Solution for the Zhihu Machine Learning Challenge. Implementation of various text-classification models. (First-place solution for the Zhihu "Kanshan Cup".)
Stars: ✭ 1,022 (+1048.31%)
Mutual labels:  fasttext
Learning2run
Our NIPS 2017: Learning to Run source code
Stars: ✭ 57 (-35.96%)
Mutual labels:  nips-2017
Vectorsinsearch
Dice.com repo to accompany the dice.com 'Vectors in Search' talk by Simon Hughes, from the Activate 2018 search conference, and the 'Searching with Vectors' talk from Haystack 2019 (US). Builds upon my conceptual search and semantic search work from 2015
Stars: ✭ 71 (-20.22%)
Mutual labels:  glove
Accurate Binary Convolution Network
Binary Convolution Network for faster real-time processing in ASICs
Stars: ✭ 49 (-44.94%)
Mutual labels:  nips-2017
Ml code
A repository for recording the machine learning code
Stars: ✭ 75 (-15.73%)
Mutual labels:  pca
Convai Bot 1337
NIPS Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager
Stars: ✭ 65 (-26.97%)
Mutual labels:  fasttext
Cvpr paper search tool
Automatic paper clustering and search tool using fastText from Facebook Research
Stars: ✭ 43 (-51.69%)
Mutual labels:  fasttext
Glove As A Tensorflow Embedding Layer
Taking a pretrained GloVe model, and using it as a TensorFlow embedding weight layer **inside the GPU**. Therefore, you only need to send the index of the words through the GPU data transfer bus, reducing data transfer overhead.
Stars: ✭ 85 (-4.49%)
Mutual labels:  glove
Ntp
End-to-End Differentiable Proving
Stars: ✭ 74 (-16.85%)
Mutual labels:  nips-2017
Mean Teacher
A state-of-the-art semi-supervised method for image recognition
Stars: ✭ 1,130 (+1169.66%)
Mutual labels:  nips-2017

Code for "Effective Dimensionality Reduction for Word Embeddings" and its earlier version.

The earlier version was accepted at the NIPS 2017 LLD Workshop; the paper was published at the 4th Workshop on Representation Learning for NLP (RepL4NLP), ACL.

Abstract: Word embeddings have become the basic building blocks for several natural language processing and information retrieval tasks. Pre-trained word embeddings are used in several downstream applications as well as for constructing representations for sentences, paragraphs and documents. Recently, there has been an emphasis on further improving the pre-trained word vectors through post-processing algorithms. One such area of improvement is the dimensionality reduction of the word embeddings. Reducing the size of word embeddings through dimensionality reduction can improve their utility in memory-constrained devices, benefiting several real-world applications. In this work, we present a novel algorithm that effectively combines PCA based dimensionality reduction with a recently proposed post-processing algorithm, to construct word embeddings of lower dimensions. Empirical evaluations on 12 standard word similarity benchmarks show that our algorithm reduces the embedding dimensionality by 50%, while achieving similar or (more often) better performance than the higher-dimension embeddings.
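
As described in the abstract, the reduction pipeline combines the post-processing algorithm (PPA, as referenced by the `ppa_*` baseline scripts) with PCA: post-process, reduce to the target dimension, then post-process again. Below is a minimal NumPy/scikit-learn sketch of that pipeline, not a verbatim copy of algo.py; the number of removed principal components (7) and the toy 300-dimensional input are illustrative assumptions.

```python
# Minimal sketch of the PPA -> PCA -> PPA reduction pipeline described above.
# Assumes embeddings are a (vocab_size, dim) NumPy array; not a copy of algo.py.
import numpy as np
from sklearn.decomposition import PCA


def ppa(X, n_components=7):
    """Post-processing: mean-center, then remove the projections onto the
    top `n_components` principal components (7 is an assumed default)."""
    X = X - X.mean(axis=0)
    pca = PCA(n_components=n_components)
    pca.fit(X)
    U = pca.components_            # (n_components, dim)
    return X - (X @ U.T) @ U       # subtract projections on dominant directions


def reduce_embeddings(X, target_dim):
    """PPA, then PCA down to `target_dim`, then PPA again. `target_dim` can be
    any size, not necessarily half of the original dimensionality."""
    X = ppa(X)
    X = PCA(n_components=target_dim).fit_transform(X - X.mean(axis=0))
    return ppa(X)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 300)).astype(np.float32)  # toy stand-in for 300-d vectors
    X_half = reduce_embeddings(X, target_dim=150)
    print(X_half.shape)  # (1000, 150)
```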

The word-vector evaluation code is used directly from https://github.com/mfaruqui/eval-word-vectors.

Run the script algo.py (the embedding file location is currently hardcoded) to reproduce the algorithm and its evaluation on the word-similarity benchmarks.

Similarly, the baseline results can be reproduced: PCA with pca_simple.py, PPA+PCA with ppa_pca.py, and PCA+PPA with pca_ppa.py.

To run the algorithm and the baselines (as in the paper), download the embedding files (GloVe, FastText) and set the file locations as required in the code.

The code generates a modified word embedding file that is half the size of the original embeddings and evaluates it on 12 word-similarity datasets.
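
For reference, here is a hedged sketch of the surrounding file handling, assuming GloVe-style text files (one word followed by its vector per line). The file names are placeholders and reduce_embeddings is the function from the sketch above; algo.py itself uses its own hardcoded paths.

```python
# Sketch of producing a half-size embedding file from a GloVe-format text file.
# File names are placeholders; adapt the paths to your local embedding files.
import numpy as np


def load_embeddings(path):
    """Read a whitespace-separated text embedding file into (words, matrix)."""
    words, vecs = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            words.append(parts[0])
            vecs.append(np.asarray(parts[1:], dtype=np.float32))
    return words, np.vstack(vecs)


def save_embeddings(path, words, X):
    """Write vectors back out in the same word-per-line text format."""
    with open(path, "w", encoding="utf-8") as f:
        for word, vec in zip(words, X):
            f.write(word + " " + " ".join(f"{v:.5f}" for v in vec) + "\n")


words, X = load_embeddings("glove.6B.300d.txt")             # original 300-d vectors
X_half = reduce_embeddings(X, target_dim=X.shape[1] // 2)   # from the sketch above
save_embeddings("glove.6B.150d.txt", words, X_half)
```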

The sentence-vector evaluation is based on SentEval (https://github.com/facebookresearch/SentEval). The generated embedding file can be used directly, following the SentEval bow.py example.
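
A condensed sketch, in the spirit of SentEval's bow.py, of how the generated half-size file might be plugged in. It assumes SentEval is installed with its evaluation data downloaded; the paths, task list, and the load_embeddings helper (from the sketch above) are illustrative only.

```python
# Sketch of evaluating the half-size vectors with SentEval (bow.py style).
# Paths and the task list are placeholders; see the SentEval repo for setup.
import numpy as np
import senteval

words, X = load_embeddings("glove.6B.150d.txt")   # generated half-size vectors
word_vec = dict(zip(words, X))
dim = X.shape[1]


def prepare(params, samples):
    return  # nothing to precompute for simple bag-of-words averaging


def batcher(params, batch):
    # Average word vectors per sentence; zeros for sentences with no known words.
    embeddings = []
    for sent in batch:
        vecs = [word_vec[w] for w in sent if w in word_vec]
        embeddings.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    return np.vstack(embeddings)


params = {"task_path": "SentEval/data", "usepytorch": False, "kfold": 5}
se = senteval.engine.SE(params, batcher, prepare)
print(se.eval(["STS12", "STS13", "SICKRelatedness"]))
```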

The algorithm can be used to generate embeddings of any size, not necessarily half.

Another paper that partially uses this code is "On Dimensional Linguistic Properties of the Word Embedding Space".
