
acbull / HiCE

License: MIT
Code for ACL'19 "Few-Shot Representation Learning for Out-Of-Vocabulary Words"

Programming Languages

Python

Projects that are alternatives of or similar to HiCE

Gensim
Topic Modelling for Humans
Stars: ✭ 12,763 (+22691.07%)
Mutual labels:  word-embeddings
Jfasttext
Java interface for fastText
Stars: ✭ 193 (+244.64%)
Mutual labels:  word-embeddings
Spanish Word Embeddings
Spanish word embeddings computed with different methods and from different corpora
Stars: ✭ 236 (+321.43%)
Mutual labels:  word-embeddings
Sifrank zh
Chinese keyphrase extraction based on pre-trained language models (the Chinese-language code for the paper SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model)
Stars: ✭ 175 (+212.5%)
Mutual labels:  word-embeddings
Vec4ir
Word Embeddings for Information Retrieval
Stars: ✭ 188 (+235.71%)
Mutual labels:  word-embeddings
Chameleon recsys
Source code of CHAMELEON - A Deep Learning Meta-Architecture for News Recommender Systems
Stars: ✭ 202 (+260.71%)
Mutual labels:  word-embeddings
Awesome Sentence Embedding
A curated list of pretrained sentence and word embedding models
Stars: ✭ 1,973 (+3423.21%)
Mutual labels:  word-embeddings
SCL
📄 Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).
Stars: ✭ 42 (-25%)
Mutual labels:  few-shot-learning
Germanwordembeddings
Toolkit to obtain and preprocess German corpora, train models using word2vec (gensim), and evaluate them with generated test sets
Stars: ✭ 189 (+237.5%)
Mutual labels:  word-embeddings
Koan
A word2vec negative sampling implementation with correct CBOW update.
Stars: ✭ 232 (+314.29%)
Mutual labels:  word-embeddings
Debiaswe
Remove problematic gender bias from word embeddings.
Stars: ✭ 175 (+212.5%)
Mutual labels:  word-embeddings
Datastories Semeval2017 Task4
Deep-learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Stars: ✭ 184 (+228.57%)
Mutual labels:  word-embeddings
Question Generation
Generating multiple choice questions from text using Machine Learning.
Stars: ✭ 227 (+305.36%)
Mutual labels:  word-embeddings
Lftm
Improving topic models LDA and DMM (one-topic-per-document model for short texts) with word embeddings (TACL 2015)
Stars: ✭ 168 (+200%)
Mutual labels:  word-embeddings
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+5630.36%)
Mutual labels:  word-embeddings
Mimick
Code for Mimicking Word Embeddings using Subword RNNs (EMNLP 2017)
Stars: ✭ 152 (+171.43%)
Mutual labels:  word-embeddings
Shallowlearn
An experiment about re-implementing supervised learning models based on shallow neural network approaches (e.g. fastText) with some additional exclusive features and nice API. Written in Python and fully compatible with Scikit-learn.
Stars: ✭ 196 (+250%)
Mutual labels:  word-embeddings
Simple-Sentence-Similarity
Exploring the simple sentence similarity measurements using word embeddings
Stars: ✭ 99 (+76.79%)
Mutual labels:  word-embeddings
protonet-bert-text-classification
Fine-tune BERT for small-dataset text classification in a few-shot learning manner using ProtoNet
Stars: ✭ 28 (-50%)
Mutual labels:  few-shot-learning
Wordgcn
ACL 2019: Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks
Stars: ✭ 230 (+310.71%)
Mutual labels:  word-embeddings

Overview

HiCE (Hierarchical Context Encoding) is a model for learning accurate embeddings of out-of-vocabulary (OOV) words from only a few occurrences. This repository is a PyTorch implementation of HiCE.

The basic idea is to train the model on a large-scale dataset by masking some words out and using only a limited number of contexts to estimate their ground-truth embeddings. The learned model can then be used to estimate embeddings for OOV words in a new corpus. The model can be further improved by adapting to the new corpus with first-order MAML (Model-Agnostic Meta-Learning).
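As a rough illustration of this objective (a toy sketch in PyTorch, not the actual HiCE architecture), the snippet below averages K context sentences into a single vector and trains it to match the masked word's pre-trained embedding via cosine similarity; the encoder, vocabulary size, and tensor shapes are all invented for illustration.

import torch
import torch.nn as nn

class ToyContextEncoder(nn.Module):
    """Toy stand-in for a context encoder: average word vectors, then project."""
    def __init__(self, vocab_size, emb_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.proj = nn.Linear(emb_dim, emb_dim)

    def forward(self, contexts):                        # contexts: (K, seq_len) token ids
        sent_vecs = self.embed(contexts).mean(dim=1)    # average the words in each sentence
        return self.proj(sent_vecs.mean(dim=0))         # average the K sentence vectors

model = ToyContextEncoder(vocab_size=10000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
cos = nn.CosineSimilarity(dim=0)

# Fake episode: K=4 context sentences of length 12, plus the word's ground-truth embedding.
contexts = torch.randint(0, 10000, (4, 12))
target_embedding = torch.randn(300)

pred = model(contexts)
loss = 1.0 - cos(pred, target_embedding)                # push the estimate toward the target
optimizer.zero_grad()
loss.backward()
optimizer.step()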

See our ACL 2019 paper, Few-Shot Representation Learning for Out-Of-Vocabulary Words, for more details.

Setup

This implementation is based on PyTorch. We assume that you're using Python 3 with pip installed. To run the code, you need the following dependencies:

For a fair comparison with earlier work, we use the same word embeddings provided by Herbelot & Baroni (2017): a word2vec embedding with a 259,376-word vocabulary, pre-trained on Wikipedia. After downloading it, unzip it and put it into the '/data/' directory.
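Once the file is in place, it can be loaded with gensim. The file name below and the text (non-binary) word2vec format are assumptions about the download, so adjust them to the actual archive contents.

from gensim.models import KeyedVectors

# Hypothetical path inside the data directory; replace with the actual file name.
w2v = KeyedVectors.load_word2vec_format("data/wiki_all.model.txt", binary=False)
print(w2v.vector_size)   # embedding dimensionality; the vocabulary should contain ~259,376 words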

To fit this word embedding, we use WikiText-103 as the source corpus to train our model. Download WikiText-103, unzip it, and put it into the '/data/' directory.
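A minimal way to read the tokenized training split into a list of sentences might look like the following; the exact path after unzipping ('data/wikitext-103/wiki.train.tokens') is an assumption, so adjust it to your layout.

# Read WikiText-103 line by line, keeping each non-empty line as a token list.
sentences = []
with open("data/wikitext-103/wiki.train.tokens", encoding="utf-8") as f:
    for line in f:
        tokens = line.strip().lower().split()
        if tokens:
            sentences.append(tokens)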

Usage

Execute the following scripts to train and evaluate the model:

python3 train.py --cuda 0 --use_morph --adapt  # Train HiCE with morphology features and MAML adaptation
python3 train.py --cuda 0 --use_morph          # Train HiCE with morphology features, no adaptation
python3 train.py --cuda 0 --adapt              # Train HiCE with contexts only (no morphology) and MAML adaptation
python3 train.py --cuda 0                      # Train HiCE with contexts only (no morphology), no adaptation

There are also other hyperparameters that can be tuned; see 'train.py' for details.

The model parses the training corpus and selects words whose frequency is neither too high nor too low as simulated OOV words, using the sentences containing these words as features and their ground-truth embeddings as labels. For each batch, the model randomly selects some of these words together with K context sentences each and estimates their ground-truth embeddings. The model is evaluated on the Chimera dataset (Lazaridou et al., 2017).
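The simplified helper below (hypothetical code, not taken from this repository) sketches how such episodes could be built: pick words in a medium-frequency band as simulated OOV words, mask them inside the sentences that contain them, and sample K context sentences per word.

import random
from collections import Counter, defaultdict

def build_episodes(sentences, w2v_vocab, k=4, min_freq=10, max_freq=1000):
    """sentences: list of token lists; w2v_vocab: words with known ground-truth embeddings."""
    freq = Counter(w for sent in sentences for w in sent)
    contexts = defaultdict(list)
    for sent in sentences:
        for w in set(sent):
            if min_freq <= freq[w] <= max_freq and w in w2v_vocab:
                # Mask the target word inside its own context sentence.
                contexts[w].append([t if t != w else "<oov>" for t in sent])
    episodes = []
    for w, sents in contexts.items():
        if len(sents) >= k:
            episodes.append((w, random.sample(sents, k)))   # (simulated OOV word, K contexts)
    return episodes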

After training finishes, the model can be further adapted to the target corpus with first-order MAML. We treat known words in the target corpus as simulated OOV words to construct a target dataset, then use the initialization obtained from the source dataset to compute gradients on the target dataset. Note that this is not equivalent to the original definition of MAML, which assumes multiple tasks; if you have access to multiple datasets from different domains, the model can also be trained in the style of the original paper.
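The sketch below illustrates this adaptation step under a few assumptions: 'model' behaves like the toy encoder above, 'embed_lookup' maps words to their pre-trained vectors, and 'target_episodes' yields (word, contexts) pairs whose contexts are already converted to the tensors the model expects. Because only first-order information is kept, the update reduces to plain gradient steps starting from the meta-learned initialization.

import copy
import torch
import torch.nn as nn

def adapt_to_target(model, target_episodes, embed_lookup, lr=1e-4, steps=100):
    adapted = copy.deepcopy(model)                 # keep the source-trained initialization intact
    optimizer = torch.optim.Adam(adapted.parameters(), lr=lr)
    cos = nn.CosineSimilarity(dim=0)
    for step, (word, contexts) in enumerate(target_episodes):
        if step >= steps:
            break
        pred = adapted(contexts)                   # estimate the embedding from K contexts
        loss = 1.0 - cos(pred, embed_lookup[word])
        optimizer.zero_grad()
        loss.backward()                            # first-order: no second-order MAML terms
        optimizer.step()
    return adapted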

The trained model is saved in a given directory (by default, the '/save' directory) and can then be used to handle OOV words in other downstream tasks.
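For downstream use, a saved checkpoint can be reloaded and queried with the contexts of a new OOV word. The checkpoint name below and the assumption that it stores a whole model object (rather than a state_dict) are guesses, so check how 'train.py' actually saves the model.

import torch

model = torch.load("save/hice_model.pt", map_location="cpu")   # hypothetical checkpoint name
model.eval()

# Placeholder input: K=4 tokenized context sentences for the new word, as token-id tensors.
contexts_for_new_word = torch.randint(0, 10000, (4, 12))
with torch.no_grad():
    oov_vector = model(contexts_for_new_word)
# oov_vector can now be plugged into a downstream model's embedding table.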

Citation

Please consider citing the following paper when using our code for your application.

@inproceedings{hice2019,
  title={Few-Shot Representation Learning for Out-Of-Vocabulary Words},
  author={Ziniu Hu and Ting Chen and Kai-Wei Chang and Yizhou Sun},
  booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL},
  year={2019}
}