jakesnell / Prototypical Networks
License: MIT
Code for the NeurIPS 2017 Paper "Prototypical Networks for Few-shot Learning"
Stars: ✭ 705
Programming Languages
python
139335 projects - #7 most used programming language
Projects that are alternatives of or similar to Prototypical Networks
Hardnet
Hardnet descriptor model - "Working hard to know your neighbor's margins: Local descriptor learning loss"
Stars: ✭ 350 (-50.35%)
Mutual labels: nips-2017, metric-learning
Survey of deep metric learning
A comprehensive survey of deep metric learning and related works
Stars: ✭ 406 (-42.41%)
Mutual labels: metric-learning
Rkd
Official pytorch Implementation of Relational Knowledge Distillation, CVPR 2019
Stars: ✭ 257 (-63.55%)
Mutual labels: metric-learning
finetuner
Finetuning any DNN for better embedding on neural search tasks
Stars: ✭ 442 (-37.3%)
Mutual labels: metric-learning
Pytorch Metric Learning
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
Stars: ✭ 3,936 (+458.3%)
Mutual labels: metric-learning
deep-steg
Global NIPS Paper Implementation Challenge of "Hiding Images in Plain Sight: Deep Steganography"
Stars: ✭ 43 (-93.9%)
Mutual labels: nips-2017
Additive Margin Softmax
This is the implementation of paper <Additive Margin Softmax for Face Verification>
Stars: ✭ 464 (-34.18%)
Mutual labels: metric-learning
symmetrical-synthesis
Official Tensorflow implementation of "Symmetrical Synthesis for Deep Metric Learning" (AAAI 2020)
Stars: ✭ 67 (-90.5%)
Mutual labels: metric-learning
disent
🧶 Modular VAE disentanglement framework for python built with PyTorch Lightning ▸ Including metrics and datasets ▸ With strongly supervised, weakly supervised and unsupervised methods ▸ Easily configured and run with Hydra config ▸ Inspired by disentanglement_lib
Stars: ✭ 41 (-94.18%)
Mutual labels: metric-learning
MetricLearning-mnist-pytorch
Playground of Metric Learning with MNIST @pytorch. We provide ArcFace, CosFace, SphereFace, CircleLoss and visualization.
Stars: ✭ 19 (-97.3%)
Mutual labels: metric-learning
advrank
Adversarial Ranking Attack and Defense, ECCV, 2020.
Stars: ✭ 19 (-97.3%)
Mutual labels: metric-learning
Batch Dropblock Network
Official source code of "Batch DropBlock Network for Person Re-identification and Beyond" (ICCV 2019)
Stars: ✭ 304 (-56.88%)
Mutual labels: metric-learning
Heated Up Softmax Embedding
Project page for Heated-up Softmax Embedding
Stars: ✭ 42 (-94.04%)
Mutual labels: metric-learning
Deep Metric Learning Baselines
PyTorch Implementation for Deep Metric Learning Pipelines
Stars: ✭ 442 (-37.3%)
Mutual labels: metric-learning
ePillID-benchmark
ePillID Dataset: A Low-Shot Fine-Grained Benchmark for Pill Identification (CVPR 2020 VL3)
Stars: ✭ 54 (-92.34%)
Mutual labels: metric-learning
Powerful Benchmarker
A PyTorch library for benchmarking deep metric learning. It's powerful.
Stars: ✭ 272 (-61.42%)
Mutual labels: metric-learning
Humpback Whale Identification 1st
https://www.kaggle.com/c/humpback-whale-identification
Stars: ✭ 591 (-16.17%)
Mutual labels: metric-learning
Amsoftmax
A simple yet effective loss function for face verification.
Stars: ✭ 443 (-37.16%)
Mutual labels: metric-learning
Voxceleb trainer
In defence of metric learning for speaker recognition
Stars: ✭ 316 (-55.18%)
Mutual labels: metric-learning
Prototypical Networks for Few-shot Learning
Code for the NIPS 2017 paper Prototypical Networks for Few-shot Learning.
If you use this code, please cite our paper:
@inproceedings{snell2017prototypical,
title={Prototypical Networks for Few-shot Learning},
author={Snell, Jake and Swersky, Kevin and Zemel, Richard},
booktitle={Advances in Neural Information Processing Systems},
year={2017}
}
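The core idea of the paper: embed support examples, average each class's embeddings into a "prototype", and classify queries by the nearest prototype. A minimal NumPy sketch of that classification rule (an illustration only, not the repo's actual PyTorch implementation, which learns the embedding network end-to-end):

```python
# Minimal sketch of the prototypical-networks classification rule.
# Assumption: inputs are already embedded vectors; the real code embeds
# images with a learned convolutional network first.
import numpy as np

def compute_prototypes(support, labels, n_classes):
    """Mean embedding of the support examples for each class."""
    # support: (n_support, dim); labels: ints in [0, n_classes)
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def classify(queries, prototypes):
    """Assign each query to the nearest prototype (squared Euclidean distance)."""
    # dists[i, c] = ||query_i - prototype_c||^2, via broadcasting
    dists = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way episode: class 0 near the origin, class 1 near (10, 10).
support = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0], [9.0, 9.0]])
labels = np.array([0, 0, 1, 1])
protos = compute_prototypes(support, labels, n_classes=2)
print(classify(np.array([[0.2, 0.3], [9.5, 10.5]]), protos))  # → [0 1]
```

During training, the same distances are turned into a softmax over classes and the embedding network is optimized with the resulting cross-entropy loss, episode by episode.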
Training a prototypical network
Install dependencies
- This code has been tested on Ubuntu 16.04 with Python 3.6 and PyTorch 0.4.
- Install PyTorch and torchvision.
- Install torchnet by running `pip install git+https://github.com/pytorch/[email protected]`.
- Install the protonets package by running `python setup.py install` or `python setup.py develop`.
Set up the Omniglot dataset
- Run `sh download_omniglot.sh`.
Train the model
- Run `python scripts/train/few_shot/run_train.py`. This will run training and place the results into `results`.
  - You can specify a different output directory by passing in the option `--log.exp_dir EXP_DIR`, where `EXP_DIR` is your desired output directory.
  - If you are running on a GPU you can pass in the option `--data.cuda`.
- Re-run in trainval mode with `python scripts/train/few_shot/run_trainval.py`. This will save your model into `results/trainval` by default.
Evaluate
- Run evaluation as `python scripts/predict/few_shot/run_eval.py --model.model_path results/trainval/best_model.pt`.