
markdtw / matching-networks

Licence: other
Matching Networks for one-shot learning in tensorflow (NIPS'16)


Projects that are alternatives of or similar to matching-networks

attMPTI
[CVPR 2021] Few-shot 3D Point Cloud Semantic Segmentation
Stars: ✭ 118 (+118.52%)
Mutual labels:  few-shot-learning
LaplacianShot
Laplacian Regularized Few Shot Learning
Stars: ✭ 72 (+33.33%)
Mutual labels:  few-shot-learning
Black-Box-Tuning
ICML'2022: Black-Box Tuning for Language-Model-as-a-Service
Stars: ✭ 99 (+83.33%)
Mutual labels:  few-shot-learning
simple-cnaps
Source codes for "Improved Few-Shot Visual Classification" (CVPR 2020), "Enhancing Few-Shot Image Classification with Unlabelled Examples" (WACV 2022), and "Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning" (Neural Networks 2022 - in submission)
Stars: ✭ 88 (+62.96%)
Mutual labels:  few-shot-learning
sib meta learn
Code of Empirical Bayes Transductive Meta-Learning with Synthetic Gradients
Stars: ✭ 56 (+3.7%)
Mutual labels:  few-shot-learning
Few-NERD
Code and data of ACL 2021 paper "Few-NERD: A Few-shot Named Entity Recognition Dataset"
Stars: ✭ 317 (+487.04%)
Mutual labels:  few-shot-learning
awesome-few-shot-meta-learning
awesome few shot / meta learning papers
Stars: ✭ 44 (-18.52%)
Mutual labels:  few-shot-learning
FSL-Mate
FSL-Mate: A collection of resources for few-shot learning (FSL).
Stars: ✭ 1,346 (+2392.59%)
Mutual labels:  few-shot-learning
FewShotDetection
(ECCV 2020) PyTorch implementation of paper "Few-Shot Object Detection and Viewpoint Estimation for Objects in the Wild"
Stars: ✭ 188 (+248.15%)
Mutual labels:  few-shot-learning
bruno
a deep recurrent model for exchangeable data
Stars: ✭ 34 (-37.04%)
Mutual labels:  few-shot-learning
FUSION
PyTorch code for NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-66.67%)
Mutual labels:  few-shot-learning
one-shot-steel-surfaces
One-Shot Recognition of Manufacturing Defects in Steel Surfaces
Stars: ✭ 29 (-46.3%)
Mutual labels:  few-shot-learning
ganbert
Enhancing the BERT training with Semi-supervised Generative Adversarial Networks
Stars: ✭ 205 (+279.63%)
Mutual labels:  few-shot-learning
few shot dialogue generation
Dialogue Knowledge Transfer Networks (DiKTNet)
Stars: ✭ 24 (-55.56%)
Mutual labels:  few-shot-learning
LearningToCompare-Tensorflow
Tensorflow implementation for paper: Learning to Compare: Relation Network for Few-Shot Learning.
Stars: ✭ 17 (-68.52%)
Mutual labels:  few-shot-learning
Learning-To-Compare-For-Text
Learning To Compare For Text , Few shot learning in text classification
Stars: ✭ 38 (-29.63%)
Mutual labels:  few-shot-learning
MLMAN
ACL 2019 paper:Multi-Level Matching and Aggregation Network for Few-Shot Relation Classification
Stars: ✭ 59 (+9.26%)
Mutual labels:  few-shot-learning
renet
[ICCV'21] Official PyTorch implementation of Relational Embedding for Few-Shot Classification
Stars: ✭ 72 (+33.33%)
Mutual labels:  few-shot-learning
WARP
Code for ACL'2021 paper WARP 🌀 Word-level Adversarial ReProgramming. Outperforming `GPT-3` on SuperGLUE Few-Shot text classification. https://aclanthology.org/2021.acl-long.381/
Stars: ✭ 66 (+22.22%)
Mutual labels:  few-shot-learning
pytorch-meta-dataset
A non-official 100% PyTorch implementation of META-DATASET benchmark for few-shot classification
Stars: ✭ 39 (-27.78%)
Mutual labels:  few-shot-learning

Matching Networks for One Shot Learning

Tensorflow implementation of Matching Networks for One Shot Learning by Vinyals et al.
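At its core, the model classifies a query image by attending over the embedded support set. Below is a minimal NumPy sketch of that classification rule (the attention kernel of Vinyals et al.); the function name matching_net_predict is illustrative and not part of this repo:

```python
import numpy as np

def matching_net_predict(support_emb, support_labels, query_emb, n_classes):
    """Sketch of the Matching Networks classifier: softmax attention over
    cosine similarities to the support set, then an attention-weighted sum
    of one-hot support labels."""
    # Cosine similarity between the query and each support embedding
    s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    cos = s @ q                                  # shape: (n_support,)
    # Softmax attention weights over the support set
    a = np.exp(cos - cos.max())
    a /= a.sum()
    # Predicted class distribution: weighted sum of one-hot support labels
    one_hot = np.eye(n_classes)[support_labels]  # (n_support, n_classes)
    return a @ one_hot                           # (n_classes,)
```

With a query embedding identical to one support embedding, the predicted distribution peaks on that support example's label.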

Prerequisites

Data

Preparation

  1. Download and extract the Omniglot dataset, then point omniglot_train and omniglot_test in utils.py to your local paths.

  2. The first training run will generate omniglot.npy in the directory. Its shape should be (1623, 80, 28, 28, 1), meaning 1623 classes, 80 samples per class (20 drawers * 4 90-degree rotations: 0, 90, 180, 270), height, width, channel. 1200 classes are used for training and 423 for testing.
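To make the array layout concrete, here is a hypothetical sketch of how omniglot.npy could be split by class and an N-way, k-shot episode sampled from it. The real loading logic lives in utils.py; sample_episode is an illustrative name, not a function in this repo:

```python
import numpy as np

def sample_episode(data, rng, n_way=5, k_shot=1):
    """data: (n_classes, 80, 28, 28, 1) array.
    Returns support/query images and labels for an n_way, k_shot episode."""
    classes = rng.choice(data.shape[0], size=n_way, replace=False)
    support, query, s_lbl, q_lbl = [], [], [], []
    for lbl, c in enumerate(classes):
        # k_shot support samples plus one query sample per class
        idx = rng.choice(data.shape[1], size=k_shot + 1, replace=False)
        support.append(data[c, idx[:k_shot]]); s_lbl += [lbl] * k_shot
        query.append(data[c, idx[k_shot]]);    q_lbl.append(lbl)
    return (np.concatenate(support), np.array(s_lbl),
            np.stack(query), np.array(q_lbl))

rng = np.random.default_rng(0)
data = np.zeros((1623, 80, 28, 28, 1), dtype=np.float32)  # stand-in array
train, test = data[:1200], data[1200:]   # 1200 training / 423 test classes
S, sy, Q, qy = sample_episode(train, rng)
```

A 5-way 1-shot episode thus yields a (5, 28, 28, 1) support set and a (5, 28, 28, 1) query set with matching episode-local labels.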

Train

python main.py --train

Train from a previous checkpoint at epoch X:

python main.py --train --modelpath=ckpt/model-X

Check out tunable hyper-parameters:

python main.py

Test

python main.py --eval

Notes

  • The model evaluates test accuracy after every epoch.
  • As the paper indicates, FCE does not improve results on Omniglot, but I implemented it anyway (as far as I know, no other repo fully implements the FCEs so far).
  • The authors did not mention the value of the time steps K in FCE_f; in the cited paper, K is tested with 0, 1, 5, and 10, as shown in Table 1.
  • Using the data I generate myself (through utils.py), evaluation accuracy at epoch 100 is around 82.00% (training accuracy 83.14%) without data augmentation.
  • Using the data provided by zergylord in his repo, however, this implementation reaches up to 96.61% evaluation accuracy (training 97.22%) at epoch 100.
  • Issues are welcome!
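For intuition on the K time-steps note above, here is a greatly simplified sketch of FCE_f with the attLSTM cell replaced by a plain additive update. It only illustrates the "attend over the support set for K processing steps" loop, not the actual implementation in this repo:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fce_f_sketch(query_emb, support_emb, K=5):
    """Simplified FCE_f loop: for K processing steps, attend over the
    support embeddings and fold the read-out back into the query
    embedding via a skip connection. Illustrative only (no LSTM gates)."""
    h = query_emb.copy()
    for _ in range(K):
        a = softmax(support_emb @ h)   # attention over support memories
        r = a @ support_emb            # attention read-out vector
        h = query_emb + r              # skip connection back to the input
    return h
```

Each extra step lets the query embedding be refined in the context of the support set, which is why the cited paper sweeps K over 0, 1, 5, and 10.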
