
imtiazziko / LaplacianShot

Licence: other
Laplacian Regularized Few Shot Learning

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives of or similar to LaplacianShot

Meta Learning Papers
Meta Learning / Learning to Learn / One Shot Learning / Few Shot Learning
Stars: ✭ 2,420 (+3261.11%)
Mutual labels:  few-shot-learning
multilingual kws
Few-shot Keyword Spotting in Any Language and Multilingual Spoken Word Corpus
Stars: ✭ 122 (+69.44%)
Mutual labels:  few-shot-learning
FUSION
PyTorch code for NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-75%)
Mutual labels:  few-shot-learning
SCL
📄 Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).
Stars: ✭ 42 (-41.67%)
Mutual labels:  few-shot-learning
few-shot-segmentation
PyTorch implementation of 'Squeeze and Excite' Guided Few Shot Segmentation of Volumetric Scans
Stars: ✭ 78 (+8.33%)
Mutual labels:  few-shot-learning
Learning-To-Compare-For-Text
Learning To Compare For Text , Few shot learning in text classification
Stars: ✭ 38 (-47.22%)
Mutual labels:  few-shot-learning
Awesome Domain Adaptation
A collection of AWESOME things about domain adaptation
Stars: ✭ 3,357 (+4562.5%)
Mutual labels:  few-shot-learning
sib meta learn
Code of Empirical Bayes Transductive Meta-Learning with Synthetic Gradients
Stars: ✭ 56 (-22.22%)
Mutual labels:  few-shot-learning
P-tuning
A novel method to tune language models. Codes and datasets for paper ``GPT understands, too''.
Stars: ✭ 593 (+723.61%)
Mutual labels:  few-shot-learning
simple-cnaps
Source codes for "Improved Few-Shot Visual Classification" (CVPR 2020), "Enhancing Few-Shot Image Classification with Unlabelled Examples" (WACV 2022), and "Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning" (Neural Networks 2022 - in submission)
Stars: ✭ 88 (+22.22%)
Mutual labels:  few-shot-learning
HiCE
Code for ACL'19 "Few-Shot Representation Learning for Out-Of-Vocabulary Words"
Stars: ✭ 56 (-22.22%)
Mutual labels:  few-shot-learning
Awesome-Few-shot
Awesome Few-shot learning
Stars: ✭ 50 (-30.56%)
Mutual labels:  few-shot-learning
attMPTI
[CVPR 2021] Few-shot 3D Point Cloud Semantic Segmentation
Stars: ✭ 118 (+63.89%)
Mutual labels:  few-shot-learning
protonet-bert-text-classification
finetune bert for small dataset text classification in a few-shot learning manner using ProtoNet
Stars: ✭ 28 (-61.11%)
Mutual labels:  few-shot-learning
sinkhorn-label-allocation
Sinkhorn Label Allocation is a label assignment method for semi-supervised self-training algorithms. The SLA algorithm is described in full in this ICML 2021 paper: https://arxiv.org/abs/2102.08622.
Stars: ✭ 49 (-31.94%)
Mutual labels:  few-shot-learning
Transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning etc. Papers, codes, datasets, applications, tutorials.
Stars: ✭ 8,481 (+11679.17%)
Mutual labels:  few-shot-learning
awesome-few-shot-meta-learning
awesome few shot / meta learning papers
Stars: ✭ 44 (-38.89%)
Mutual labels:  few-shot-learning
FewShotDetection
(ECCV 2020) PyTorch implementation of paper "Few-Shot Object Detection and Viewpoint Estimation for Objects in the Wild"
Stars: ✭ 188 (+161.11%)
Mutual labels:  few-shot-learning
one-shot-steel-surfaces
One-Shot Recognition of Manufacturing Defects in Steel Surfaces
Stars: ✭ 29 (-59.72%)
Mutual labels:  few-shot-learning
few shot dialogue generation
Dialogue Knowledge Transfer Networks (DiKTNet)
Stars: ✭ 24 (-66.67%)
Mutual labels:  few-shot-learning

LaplacianShot: Laplacian Regularized Few Shot Learning

This repository contains the code for LaplacianShot. The code is adapted from the SimpleShot GitHub repository.

More details in the following ICML 2020 paper:

Laplacian Regularized Few-shot Learning
Imtiaz Masud Ziko, Jose Dolz, Eric Granger and Ismail Ben Ayed
In ICML 2020.

Introduction

We propose LaplacianShot for few-shot learning tasks, which integrates two types of potentials: (1) assigning query samples to the nearest class prototype, and (2) pairwise Laplacian potentials encouraging nearby query samples to have consistent predictions.
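In rough notation (our own shorthand adapted from the paper, not symbols taken from the repository), the resulting transductive objective over the relaxed query assignments can be written as:

```latex
E(Y) \;=\; \sum_{q} \mathbf{y}_q^{\top}\mathbf{a}_q
\;+\; \frac{\lambda}{2}\sum_{q,p} w(\mathbf{x}_q,\mathbf{x}_p)\,
\lVert \mathbf{y}_q - \mathbf{y}_p \rVert^{2}
```

where $\mathbf{a}_q$ stores the distances from query $\mathbf{x}_q$ to the class prototypes, $\mathbf{y}_q$ is a relaxed one-hot assignment, and $w$ measures feature similarity between query pairs: the first term is the nearest-prototype potential and the second is the Laplacian smoothness potential.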

LaplacianShot is utilized during inference in few-shot scenarios, following the traditional training of a deep convolutional network on the base classes with the cross-entropy loss. In fact, LaplacianShot can be utilized during inference on top of any learned feature embeddings.
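As a concrete illustration, here is a minimal NumPy sketch of this style of transductive inference: a unary nearest-prototype term combined with a k-nearest-neighbour Laplacian term, relaxed to soft assignments and solved by iterating a softmax-style update. The function and parameter names (`laplacian_shot`, `knn`, `lam`) are our own, not the repository's API, and the affinity and update choices are simplified relative to the paper.

```python
import numpy as np

def _softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def laplacian_shot(query_feats, prototypes, knn=3, lam=1.0, n_iter=20):
    """Toy sketch of Laplacian-regularized transductive inference.

    query_feats: (N, D) query embeddings; prototypes: (K, D) class prototypes.
    Returns hard labels of shape (N,).
    """
    # Unary potential: squared Euclidean distance to each class prototype.
    a = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    # Pairwise affinities: binary k-nearest-neighbour graph over the queries.
    d = ((query_feats[:, None, :] - query_feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)  # no self-edges
    w = np.zeros_like(d)
    nn_idx = np.argsort(d, axis=1)[:, :knn]
    w[np.repeat(np.arange(len(d)), knn), nn_idx.ravel()] = 1.0
    # Initialize soft assignments from the unary term alone.
    y = _softmax(-a)
    for _ in range(n_iter):
        # Each query is pulled toward its nearest prototype and toward
        # its neighbours' current soft assignments.
        y = _softmax(-a + lam * (w @ y))
    return y.argmax(axis=1)
```

With `lam=0` this reduces to plain nearest-prototype (SimpleShot-style) classification; the Laplacian term then smooths predictions over neighbouring queries.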

Usage

1. Dependencies

The code was tested with:

  • Python 3.6
  • PyTorch 1.2

Install the dependencies by running:

pip install -r requirements.txt

2. Datasets

2.1 Mini-ImageNet

You can download the dataset from here. Unpack the dataset into the data/ directory.

2.2 Tiered-ImageNet

You can download Tiered-ImageNet from here. Unpack this dataset into the data/ directory, then run the following script to generate the split files:

python src/utils/tieredImagenet.py --data path-to-tiered --split split/tiered/

2.3 CUB

Download and unpack the CUB 200-2011 dataset from here into the data/ directory, then run the following script to generate the split files:

python src/utils/cub.py --data path-to-cub --split split/cub/

2.4 iNat2017

We follow the instructions from https://github.com/daviswer/fewshotlocal. Download and unpack the iNat2017 training and validation images, along with the training bounding-box annotations, into the data/iNat directory from here. Also download traincatlist.pth and testcatlist.pth into the same directory from here. Then run the following to set up the dataset:

cd ./data/iNat
python iNat_setup.py

Then run the following script to generate the split files:

python ./src/inatural_split.py --data path-to-inat/setup --split ./split/inatural/

3. Train and Test

You can download the pretrained network models from here.

Alternatively, to train the network on the base classes from scratch, remove the "--evaluate" option in the script below. To test LaplacianShot, run:

sh run.sh

You can change the commented options accordingly for each dataset. All of the options are described in the configuration.py file.

Results

We obtain the following results on different few-shot benchmarks:

On mini-ImageNet

With the WRN network:

| Methods | 1-shot | 5-shot |
| --- | --- | --- |
| ProtoNet (Snell et al., 2017) | 62.60 | 79.97 |
| CC+rot (Gidaris et al., 2019) | 62.93 | 79.87 |
| MatchingNet (Vinyals et al., 2016) | 64.03 | 76.32 |
| FEAT (Ye et al., 2020) | 65.10 | 81.11 |
| Transductive tuning (Dhillon et al., 2020) | 65.73 | 78.40 |
| SimpleShot (Wang et al., 2019) | 65.87 | 82.09 |
| SIB (Hu et al., 2020) | 70.0 | 79.2 |
| BD-CSPN (Liu et al., 2019) | 70.31 | 81.89 |
| LaplacianShot (ours) | 73.44 | 83.93 |

On tiered-ImageNet

With the WRN network:

| Methods | 1-shot | 5-shot |
| --- | --- | --- |
| CC+rot (Gidaris et al., 2019) | 70.53 | 84.98 |
| FEAT (Ye et al., 2020) | 70.41 | 84.38 |
| Transductive tuning (Dhillon et al., 2020) | 73.34 | 85.50 |
| SimpleShot (Wang et al., 2019) | 70.90 | 85.76 |
| BD-CSPN (Liu et al., 2019) | 78.74 | 86.92 |
| LaplacianShot (ours) | 78.80 | 87.48 |

On CUB

With the ResNet-18 network:

| Methods | 1-shot | 5-shot |
| --- | --- | --- |
| MatchingNet (Vinyals et al., 2016) | 73.49 | 84.45 |
| MAML (Finn et al., 2017) | 68.42 | 83.47 |
| ProtoNet (Snell et al., 2017) | 72.99 | 86.64 |
| RelationNet (Sung et al., 2018) | 68.58 | 84.05 |
| Chen (Chen et al., 2019) | 67.02 | 83.58 |
| SimpleShot (Wang et al., 2019) | 70.28 | 86.37 |
| LaplacianShot (ours) | 79.90 | 88.69 |

On iNat

With the WRN network (Top-1 accuracy, per class and mean):

| Methods | Per Class | Mean |
| --- | --- | --- |
| SimpleShot (Wang et al., 2019) | 62.44 | 65.08 |
| LaplacianShot (ours) | 71.55 | 74.97 |