P-tuning-v2: Source code and data for the ACL 2022 paper "P-tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks"
Stars: ✭ 373
few-shot-segmentation: PyTorch implementation of "'Squeeze and Excite' Guided Few Shot Segmentation of Volumetric Scans"
Stars: ✭ 78
LibFewShot: A comprehensive library for few-shot learning.
Stars: ✭ 629
HiCE: Code for the ACL '19 paper "Few-Shot Representation Learning for Out-Of-Vocabulary Words"
Stars: ✭ 56
SCL: Spatial Contrastive Learning for Few-Shot Classification (ECML/PKDD 2021).
Stars: ✭ 42
Meta Learning Papers: Meta learning / learning to learn / one-shot learning / few-shot learning.
Stars: ✭ 2,420
Transferlearning: Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, and tutorials.
Stars: ✭ 8,481
few-shot-gan-adaptation: [CVPR '21] Official repository for "Few-shot Image Generation via Cross-domain Correspondence"
Stars: ✭ 198
adapt: Awesome Domain Adaptation Python Toolbox.
Stars: ✭ 46
FewCLUE: A Chinese-language few-shot learning evaluation benchmark.
Stars: ✭ 251
few-shot-lm: Source code for "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021)
Stars: ✭ 32
MemoPainter-PyTorch: An unofficial PyTorch implementation of MemoPainter ("Coloring With Limited Data: Few-shot Colorization via Memory Augmented Networks").
Stars: ✭ 63
mmfewshot: OpenMMLab few-shot learning toolbox and benchmark.
Stars: ✭ 336
CDFSL-ATA: [IJCAI 2021] Cross-Domain Few-Shot Classification via Adversarial Task Augmentation.
Stars: ✭ 21
Meta-Fine-Tuning: [CVPR 2020 VL3] Repository for meta fine-tuning in cross-domain few-shot learning.
Stars: ✭ 29
MeTAL: Official PyTorch implementation of "Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning" (ICCV 2021 Oral).
Stars: ✭ 24
finetuner: Fine-tuning any DNN for better embeddings on neural search tasks.
Stars: ✭ 442
Meta-TTS: Official repository of https://arxiv.org/abs/2111.04040v1
Stars: ✭ 69
lowshot-shapebias: Learning low-shot object classification with an explicit shape bias learned from point clouds.
Stars: ✭ 37
FRN: (CVPR 2021) Few-Shot Classification with Feature Map Reconstruction Networks.
Stars: ✭ 43
few shot slot tagging and NER: PyTorch implementation of the paper "Vector Projection Network for Few-shot Slot Tagging in Natural Language Understanding" by Su Zhu, Ruisheng Cao, Lu Chen, and Kai Yu.
Stars: ✭ 17
Awesome-Weak-Shot-Learning: A curated list of papers, code, and resources on weak-shot classification, detection, and segmentation.
Stars: ✭ 142
deviation-network: Source code of the KDD '19 paper "Deep Anomaly Detection with Deviation Networks"; covers weakly/partially supervised and few-shot anomaly detection.
Stars: ✭ 94
Meta-GDN AnomalyDetection: Implementation of the TheWebConf 2021 paper "Few-shot Network Anomaly Detection via Cross-network Meta-learning".
Stars: ✭ 22
matching-networks: TensorFlow implementation of Matching Networks for one-shot learning (NIPS '16).
Stars: ✭ 54
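To make the matching-networks idea concrete: a query is classified by a softmax over its similarities to the labelled support examples. The sketch below is a minimal, library-free illustration of that pattern, not the repo's code; the toy embeddings stand in for a learned encoder.

```python
# Minimal sketch of matching-network-style classification: attention over
# support examples via softmax of cosine similarities. Toy embeddings only;
# a real system would embed inputs with a trained network.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matching_net_predict(query, support):
    """support: list of (embedding, label) pairs. Returns {label: probability}."""
    sims = [cosine(query, emb) for emb, _ in support]
    m = max(sims)
    weights = [math.exp(s - m) for s in sims]  # stable softmax over support
    total = sum(weights)
    probs = {}
    for w, (_, label) in zip(weights, support):
        probs[label] = probs.get(label, 0.0) + w / total
    return probs

support = [([1.0, 0.0], "cat"), ([0.9, 0.1], "cat"), ([0.0, 1.0], "dog")]
print(matching_net_predict([0.95, 0.05], support))
```

The query's probability mass concentrates on the class whose support embeddings it most resembles, with no per-task gradient updates.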
renet: [ICCV '21] Official PyTorch implementation of "Relational Embedding for Few-Shot Classification".
Stars: ✭ 72
FSL-Mate: A collection of resources for few-shot learning (FSL).
Stars: ✭ 1,346
WARP: Code for the ACL 2021 paper "WARP: Word-level Adversarial ReProgramming", which outperforms GPT-3 on SuperGLUE few-shot text classification. https://aclanthology.org/2021.acl-long.381/
Stars: ✭ 66
LearningToCompare-Tensorflow: TensorFlow implementation of the paper "Learning to Compare: Relation Network for Few-Shot Learning".
Stars: ✭ 17
Black-Box-Tuning: Code for the ICML 2022 paper "Black-Box Tuning for Language-Model-as-a-Service".
Stars: ✭ 99
bruno: A deep recurrent model for exchangeable data.
Stars: ✭ 34
pytorch-meta-dataset: An unofficial, 100% PyTorch implementation of the META-DATASET benchmark for few-shot classification.
Stars: ✭ 39
ganbert: Enhancing BERT training with semi-supervised generative adversarial networks.
Stars: ✭ 205
Few-NERD: Code and data for the ACL 2021 paper "Few-NERD: A Few-shot Named Entity Recognition Dataset"
Stars: ✭ 317
MLMAN: ACL 2019 paper "Multi-Level Matching and Aggregation Network for Few-Shot Relation Classification".
Stars: ✭ 59
LaplacianShot: Laplacian Regularized Few Shot Learning.
Stars: ✭ 72
FewShotDetection: (ECCV 2020) PyTorch implementation of the paper "Few-Shot Object Detection and Viewpoint Estimation for Objects in the Wild".
Stars: ✭ 188
sib meta learn: Code for "Empirical Bayes Transductive Meta-Learning with Synthetic Gradients".
Stars: ✭ 56
sinkhorn-label-allocation: Sinkhorn Label Allocation is a label-assignment method for semi-supervised self-training algorithms. The SLA algorithm is described in full in this ICML 2021 paper: https://arxiv.org/abs/2102.08622.
Stars: ✭ 49
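The core primitive behind Sinkhorn-based label assignment is the Sinkhorn iteration: alternately rescale the rows and columns of a positive score matrix until they match prescribed budgets. The sketch below is a rough illustration of that iteration only (not the repo's actual algorithm, which adds an allocation-specific formulation on top).

```python
# Illustrative Sinkhorn iteration: balance a positive matrix so its row and
# column sums approach the given targets. Pure-Python sketch for clarity.
def sinkhorn(scores, row_sums, col_sums, iters=200):
    """scores: positive matrix as a list of lists. Returns the balanced matrix."""
    m = [row[:] for row in scores]
    for _ in range(iters):
        for i, target in enumerate(row_sums):      # rescale each row to its budget
            s = sum(m[i])
            m[i] = [v * target / s for v in m[i]]
        for j, target in enumerate(col_sums):      # rescale each column to its budget
            s = sum(row[j] for row in m)
            for row in m:
                row[j] *= target / s
    return m

# Balance a 2x2 score matrix toward a doubly stochastic matrix.
balanced = sinkhorn([[1.0, 2.0], [3.0, 1.0]], row_sums=[1.0, 1.0], col_sums=[1.0, 1.0])
```

With all-ones budgets the result converges to a doubly stochastic matrix, which is what lets the method assign pseudo-labels while respecting class-balance constraints.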
FUSION: PyTorch code for the NeurIPSW 2020 (4th Workshop on Meta-Learning) paper "Few-Shot Unsupervised Continual Learning through Meta-Examples".
Stars: ✭ 18
simple-cnaps: Source code for "Improved Few-Shot Visual Classification" (CVPR 2020), "Enhancing Few-Shot Image Classification with Unlabelled Examples" (WACV 2022), and "Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning" (Neural Networks 2022, in submission).
Stars: ✭ 88
attMPTI: [CVPR 2021] Few-shot 3D Point Cloud Semantic Segmentation.
Stars: ✭ 118
multilingual kws: Few-shot keyword spotting in any language, plus a multilingual spoken-word corpus.
Stars: ✭ 122
Roberta zh: RoBERTa pre-trained models for Chinese.
Stars: ✭ 1,953
TextPruner: A PyTorch-based model-pruning toolkit for pre-trained language models.
Stars: ✭ 94
SIFRank: Code for the paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model".
Stars: ✭ 96
OpenPrompt: An open-source framework for prompt-learning.
Stars: ✭ 1,769
PromptPapers: Must-read papers on prompt-based tuning for pre-trained language models.
Stars: ✭ 2,317
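Several entries above (WARP, P-tuning-v2, OpenPrompt, PromptPapers) share one pattern: wrap the input in a cloze template containing a mask token, score candidate label words at the mask, and map the winning word back to a class via a verbalizer. The sketch below is a schematic, framework-free illustration of that pattern; `mask_word_scores` is a purely hypothetical stand-in for a real masked language model such as the ones these repos use.

```python
# Schematic cloze-style prompt classification: template + verbalizer.
# `mask_word_scores` is a hypothetical stand-in for a masked LM; a real
# system would score the [MASK] position with BERT/RoBERTa etc.
TEMPLATE = "{text} It was [MASK]."
VERBALIZER = {"positive": "great", "negative": "terrible"}

def mask_word_scores(prompt, candidates):
    # Toy scorer: favour label words whose crude sentiment cues appear in
    # the prompt. This replaces, and does not resemble, a real LM.
    cues = {"great": ["good", "love", "enjoyed"],
            "terrible": ["bad", "hate", "boring"]}
    return {w: sum(prompt.count(c) for c in cues[w]) + 1e-6 for w in candidates}

def classify(text):
    prompt = TEMPLATE.format(text=text)                    # fill the template
    scores = mask_word_scores(prompt, list(VERBALIZER.values()))
    word_to_label = {w: l for l, w in VERBALIZER.items()}  # invert verbalizer
    best = max(scores, key=scores.get)                     # best label word
    return word_to_label[best]

print(classify("I love this film, it was good."))  # → "positive"
```

Prompt-tuning methods differ mainly in which part of this pipeline is learned: WARP and P-tuning optimize continuous template embeddings, while OpenPrompt provides the template/verbalizer abstractions as library components.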