# Semantically Consistent Regularizer (SCoRe)

By Pedro Morgado and Nuno Vasconcelos.
Statistical Visual Computing Lab (SVCL), University of California, San Diego
## Introduction
This repository contains the source code for "Semantically Consistent Regularization for Zero-Shot Recognition", CVPR 2017.
The implementation was written by Pedro Morgado. If you encounter any issues when using our code or models, let me know.
## Citation
Please cite our paper if it helps your research:
```
@inproceedings{MorgadoCVPR17,
  author={Pedro Morgado and Nuno Vasconcelos},
  title={Semantically Consistent Regularization for Zero-Shot Recognition},
  booktitle={Computer Vision and Pattern Recognition (CVPR), IEEE Conf.~on},
  year={2017},
  organization={IEEE}
}
```
Pedro Morgado and Nuno Vasconcelos.
Semantically consistent regularization for zero-shot recognition.
Computer Vision and Pattern Recognition (CVPR), IEEE Conf. on, 2017.
## Prerequisites
- Python 2.7 (with `scikit-image` and `lmdb`)
- Caffe (with `pycaffe`)
## Tour

### Code
- `score_model.py`: Defines the SCoRe model.
- `score_train.py`: Training script.
- `score_eval.py`: Evaluation script.
- `tools/prepare_LMDBs.py`: Script for preparing the LMDBs used in `score_train.py` and `score_eval.py`.

Type `python xxx.py --help` for usage options.
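For intuition, zero-shot recognition with semantic codewords boils down to scoring the compatibility between an image's visual embedding and each unseen class's codeword (e.g. an attribute vector). The sketch below illustrates this generic idea only; it is not the exact SCoRe objective, and the codewords and feature vector are made up for illustration:

```python
import numpy as np

def zero_shot_predict(features, class_codewords):
    """Predict the class whose semantic codeword best matches the image.

    features: (d,) visual embedding of one image.
    class_codewords: (num_classes, d) one semantic codeword per class;
    rows are L2-normalized before scoring so magnitudes don't dominate.
    """
    norms = np.linalg.norm(class_codewords, axis=1, keepdims=True)
    scores = (class_codewords / norms) @ features  # cosine-style compatibility
    return int(np.argmax(scores))

# Toy example: 3 unseen classes described by 4 binary attributes.
codewords = np.array([[1., 0., 0., 1.],   # class 0
                      [0., 1., 1., 0.],   # class 1
                      [1., 1., 0., 0.]])  # class 2
features = np.array([0.9, 0.1, 0.0, 0.8])  # embedding close to class 0
print(zero_shot_predict(features, codewords))  # → 0
```

No training data for the unseen classes is needed at test time; only their codewords.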
For a better understanding of the code's workflow, a script `train_eval_CUB.sh` is provided for training a SCoRe model on the CUB dataset. This script 1) downloads all required data (CUB and caffemodels for standard CNNs), 2) prepares LMDBs for both images and semantics, 3) trains the classifier, and 4) evaluates it on source and target (zero-shot) classes.
### Data
- Semantic codewords: pre-extracted for all classes in both the AwA and CUB datasets (see `data/`).
- Partition into source and target classes: see `classes.txt`, `train_classes.txt` and `test_classes.txt` at `data/${DB}`.
- Training sets: `data/${DB}/train_images.txt`.
- Test sets: `data/${DB}/testRecg_images.txt` (source classes) and `data/${DB}/testZS_images.txt` (target classes).
## Results
Mean class accuracy (MCA) for source and target classes using three architectures: AlexNet, GoogLeNet and VGG19.

Note: Lagrangian coefficients were tuned for zero-shot MCA on a set of validation classes. The resulting coefficients are shown below.
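Tuning of this kind is essentially a grid search: each coefficient pair is scored by validation MCA and the best pair is kept. The sketch below shows the pattern with a hypothetical `validate` callable standing in for the real train-and-evaluate run on validation classes:

```python
def tune_coefficients(candidates, validate):
    """Return the (semantic_coeff, codeword_coeff) pair with best validation MCA.

    candidates: iterable of (semantic_coeff, codeword_coeff) pairs.
    validate: callable mapping a pair to its validation MCA; in a real
    pipeline it would train a model and evaluate on held-out classes
    (hypothetical interface, for illustration only).
    """
    return max(candidates, key=validate)

# Toy stand-in for an expensive train-and-evaluate loop.
fake_mca = {(0.01, 10.0): 0.62, (0.05, 1.0): 0.55, (0.01, 0.5): 0.48}
best = tune_coefficients(fake_mca, lambda pair: fake_mca[pair])
print(best)  # → (0.01, 10.0)
```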
### Animals with Attributes
| Semantics | Source Classes | Target Classes | Semantic Coeff. | Codeword Coeff. |
|---|---|---|---|---|
| Attributes | 72.5 / 85.1 / 84.6 | 66.7 / 78.3 / 82.8 | 0.01 | 10.0 |
| Hierarchy | 74.4 / 84.2 / 84.5 | 52.3 / 61.2 / 60.7 | 0.05 | 1.0 |
| Word2Vec | 76.7 / 86.7 / 85.8 | 51.9 / 60.9 / 57.9 | 0.01 | 0.5 |

Key: AlexNet / GoogLeNet / VGG19
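Mean class accuracy averages per-class accuracies, so every class counts equally regardless of how many test images it has. A minimal sketch of the metric:

```python
from collections import defaultdict

def mean_class_accuracy(labels, predictions):
    """Average of per-class accuracies (MCA), not overall accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, p in zip(labels, predictions):
        total[y] += 1
        correct[y] += int(y == p)
    return sum(correct[c] / float(total[c]) for c in total) / len(total)

# Class 0 has 3 samples (2 correct), class 1 has 1 sample (1 correct):
# MCA = (2/3 + 1/1) / 2 ≈ 0.833, while overall accuracy would be 3/4.
print(mean_class_accuracy([0, 0, 0, 1], [0, 0, 1, 1]))
```

This distinction matters for imbalanced test sets, where a majority-class predictor can score high overall accuracy but low MCA.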
### Caltech-UCSD Birds-200-2011
| Semantics | Source Classes | Target Classes | Semantic Coeff. | Codeword Coeff. |
|---|---|---|---|---|
| Attributes | 61.7 / 71.6 / 70.9 | 48.5 / 58.4 / 59.5 | 0.01 | 1.0 |
| Hierarchy | 60.2 / 73.1 / 69.6 | 24.2 / 31.8 / 31.3 | 0.05 | 5.0 |
| Word2Vec | 61.4 / 73.6 / 71.9 | 26.0 / 31.5 / 30.1 | 0.01 | 1.0 |

Key: AlexNet / GoogLeNet / VGG19
## Trained models
These models are compatible with the provided code. Simply download and uncompress the `.tar.gz` files, and use `score_eval.py` to evaluate them.
| Semantics | AwA | CUB |
|---|---|---|
| Attributes | AlexNet / GoogLeNet / VGG19 | AlexNet / GoogLeNet / VGG19 |
| Hierarchy | AlexNet / GoogLeNet / VGG19 | AlexNet / GoogLeNet / VGG19 |
| Word2Vec | AlexNet / GoogLeNet / VGG19 | AlexNet / GoogLeNet / VGG19 |