
gaohuang / S-WMD

Licence: other
Code for Supervised Word Mover's Distance (SWMD)

Programming Languages

matlab
3953 projects
c
50402 projects - #5 most used programming language

Projects that are alternatives of or similar to S-WMD

Awesome-Few-shot
Awesome Few-shot learning
Stars: ✭ 50 (-44.44%)
Mutual labels:  metric-learning
proxy-synthesis
Official PyTorch implementation of "Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning" (AAAI 2021)
Stars: ✭ 30 (-66.67%)
Mutual labels:  metric-learning
GPQ
Generalized Product Quantization Network For Semi-supervised Image Retrieval - CVPR 2020
Stars: ✭ 60 (-33.33%)
Mutual labels:  metric-learning
SPL-ADVisE
PyTorch code for BMVC 2018 paper: "Self-Paced Learning with Adaptive Visual Embeddings"
Stars: ✭ 20 (-77.78%)
Mutual labels:  metric-learning
MinkLocMultimodal
MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition
Stars: ✭ 65 (-27.78%)
Mutual labels:  metric-learning
simple-cnaps
Source codes for "Improved Few-Shot Visual Classification" (CVPR 2020), "Enhancing Few-Shot Image Classification with Unlabelled Examples" (WACV 2022), and "Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning" (Neural Networks 2022 - in submission)
Stars: ✭ 88 (-2.22%)
Mutual labels:  metric-learning
HiCE
Code for ACL'19 "Few-Shot Representation Learning for Out-Of-Vocabulary Words"
Stars: ✭ 56 (-37.78%)
Mutual labels:  word-embeddings
CVPR2020 PADS
(CVPR 2020) This repo contains code for "PADS: Policy-Adapted Sampling for Visual Similarity Learning", which proposes learnable triplet mining with Reinforcement Learning.
Stars: ✭ 57 (-36.67%)
Mutual labels:  metric-learning
sister
SImple SenTence EmbeddeR
Stars: ✭ 66 (-26.67%)
Mutual labels:  word-embeddings
acl2017 document clustering
code for "Determining Gains Acquired from Word Embedding Quantitatively Using Discrete Distribution Clustering" ACL 2017
Stars: ✭ 21 (-76.67%)
Mutual labels:  wasserstein
awesome-few-shot-meta-learning
awesome few shot / meta learning papers
Stars: ✭ 44 (-51.11%)
Mutual labels:  metric-learning
Word2VecfJava
Word2VecfJava: Java implementation of Dependency-Based Word Embeddings and extensions
Stars: ✭ 14 (-84.44%)
Mutual labels:  word-embeddings
visual-compatibility
Context-Aware Visual Compatibility Prediction (https://arxiv.org/abs/1902.03646)
Stars: ✭ 92 (+2.22%)
Mutual labels:  metric-learning
Npair loss pytorch
Improved Deep Metric Learning with Multi-class N-pair Loss Objective
Stars: ✭ 75 (-16.67%)
Mutual labels:  metric-learning
triplet-loss-pytorch
Highly efficient PyTorch version of the Semi-hard Triplet loss ⚡️
Stars: ✭ 79 (-12.22%)
Mutual labels:  metric-learning
overview-and-benchmark-of-traditional-and-deep-learning-models-in-text-classification
NLP tutorial
Stars: ✭ 41 (-54.44%)
Mutual labels:  word-embeddings
wefe
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes the bias measurement and mitigation in Word Embeddings models. Please feel welcome to open an issue in case you have any questions or a pull request if you want to contribute to the project!
Stars: ✭ 164 (+82.22%)
Mutual labels:  word-embeddings
tf retrieval baseline
A Tensorflow retrieval (space embedding) baseline. Metric learning baseline on CUB and Stanford Online Products.
Stars: ✭ 39 (-56.67%)
Mutual labels:  metric-learning
PersianNER
Named-Entity Recognition in Persian Language
Stars: ✭ 48 (-46.67%)
Mutual labels:  word-embeddings
fuzzymax
Code for the paper: Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word Vectors, ICLR 2019.
Stars: ✭ 43 (-52.22%)
Mutual labels:  word-embeddings

S-WMD

Demo code in MATLAB for S-WMD (Supervised Word Mover's Distance, NIPS 2016). An oral presentation video recording by Matt Kusner is also available.

The demo code runs on the bbcsport dataset. Usage: run swmd.m in MATLAB. The dataset is preprocessed to contain the following fields (a short loading/inspection sketch follows the list):

  • X is a cell array of all documents; each document is represented by a d×m matrix, where d is the dimensionality of the word embedding and m is the number of unique words in the document
  • Y is an array of labels
  • BOW_X is a cell array of word counts for each document
  • indices is a cell array of global unique IDs for the words in each document
  • TR is a matrix whose ith row is the ith training split of document indices
  • TE is a matrix whose ith row is the ith testing split of document indices
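
For orientation, here is a minimal MATLAB sketch that loads the preprocessed demo data and inspects one document. The file name bbcsport.mat is an assumption (use whatever .mat file ships with the demo), and Y is assumed to hold numeric labels.

  % Minimal sketch: load the preprocessed demo data and inspect one document.
  % The file name 'bbcsport.mat' is an assumption; substitute the .mat file
  % that ships with the demo. Y is assumed to hold numeric labels.
  load('bbcsport.mat');                 % provides X, Y, BOW_X, indices, TR, TE

  i = 1;                                % look at the first document
  [d, m] = size(X{i});                  % d = embedding dimension, m = unique words
  fprintf('Document %d: %d-dim embeddings, %d unique words, label %d\n', i, d, m, Y(i));
  fprintf('Total word count in document %d: %d\n', i, sum(BOW_X{i}));
  fprintf('Training split 1 contains %d documents\n', size(TR, 2));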

Paper Datasets

Here is a Dropbox link to the datasets used in the paper: https://www.dropbox.com/sh/nf532hddgdt68ix/AABGLUiPRyXv6UL2YAcHmAFqa?dl=0

They are all MATLAB .mat files and contain the following variables (note the similarity to the demo dataset):

for bbcsport, twitter, recipe, classic, amazon

  • X [1,n+ne]: each cell corresponds to a document and is a [d,u] matrix where d is the dimensionality of the word embedding, u is the number of unique words in that document, n is the number of training points, and ne is the number of test points. Each column is the word2vec vector for a particular word.
  • Y [1,n+ne]: the label of each document
  • BOW_X [1,n+ne]: each cell in the cell array is a vector corresponding to a document. The size of the vector is the number of unique words in the document, and each entry is how often each unique word occurs.
  • words [1,n+ne]: each cell corresponds to a document and is itself a {1,u} cell array in which each entry is the actual word string for the corresponding unique word
  • TR [5,n]: each row corresponds to a random split of the training set, and each entry is an index into the full dataset. For example, to get the BOW vectors of the training set for the third split, do: BOW_xtr = BOW_X(TR(3,:)) (a fuller sketch follows this list)
  • TE [5,ne]: same as TR except for the test set
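
Building on the BOW_xtr example above, here is a minimal sketch of pulling out the third train/test split. The file name bbcsport.mat is an assumption; any of the five datasets listed above works the same way.

  % Sketch: select the third of the five random train/test splits.
  % Variable names follow the field list above; 'bbcsport.mat' is an assumption.
  load('bbcsport.mat');

  split = 3;
  xtr     = X(TR(split,:));             % embedding matrices, training documents
  ytr     = Y(TR(split,:));             % labels, training documents
  BOW_xtr = BOW_X(TR(split,:));         % word counts, training documents

  xte     = X(TE(split,:));             % and the same for the test documents
  yte     = Y(TE(split,:));
  BOW_xte = BOW_X(TE(split,:));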

for ohsumed, reuters (r8), 20news (20ng2_500)

The only difference from the datasets above is that these corpora have pre-defined train/test splits, so the variables BOW_xtr, BOW_xte, xtr, xte, ytr, and yte are already provided.
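
For these pre-split corpora the analogous sketch is just a load; the file name ohsumed.mat is an assumption.

  % Sketch: the pre-split datasets expose train/test variables directly,
  % so no TR/TE indexing is needed. 'ohsumed.mat' is an assumption.
  load('ohsumed.mat');                  % provides xtr, xte, ytr, yte, BOW_xtr, BOW_xte
  fprintf('%d training and %d test documents\n', numel(xtr), numel(xte));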

KNN

In the paper, we used cross-validation to set k for each dataset, trying the values [1,3,5,7,9,11,13,15,17,19]. We also implemented a KNN function that, given a k (or a list of k's), only classifies a point if a majority of the k nearest neighbors vote for the same class. If there is no majority, k is reduced by 2 and the vote is repeated with the smaller neighborhood. This continues until either a majority is reached or k=1, in which case the nearest neighbor's vote is used. This function is in the file knn_fall_back.m
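
A minimal sketch of that fallback voting rule is below. It illustrates the idea only and is not the code in knn_fall_back.m; numeric labels, an odd starting k, and neighbors already sorted by distance are assumed.

  % Sketch of the fallback voting rule described above (not the code in
  % knn_fall_back.m). neighbor_labels are the labels of a point's neighbors,
  % sorted nearest-first; k is the (odd) starting neighborhood size.
  function label = knn_fallback_vote(neighbor_labels, k)
      while true
          votes   = neighbor_labels(1:k);
          classes = unique(votes);
          counts  = arrayfun(@(c) sum(votes == c), classes);  % votes per class
          [best, idx] = max(counts);
          if best > k/2 || k == 1       % strict majority reached, or 1-NN fallback
              label = classes(idx);
              return;
          end
          k = k - 2;                    % no majority: shrink k and try again
      end
  end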
