bcmi / Awesome-Few-Shot-Image-Generation

Licence: other
A curated list of papers, code and resources pertaining to few-shot image generation.

Awesome Few-Shot Image Generation

A curated list of resources including papers, datasets, and relevant links pertaining to few-shot image generation. Since few-shot image generation is a broad concept, it spans a variety of experimental settings and research lines.

From Base Categories to Novel Categories

The generative model is trained on base categories and applied to novel categories, either with finetuning (optimization-based methods) or without it (fusion-based and transformation-based methods).

Optimization-based methods:

  • Louis Clouâtre, Marc Demers: "FIGR: Few-shot Image Generation with Reptile." CoRR abs/1901.02199 (2019) [pdf] [code]
  • Weixin Liang, Zixuan Liu, Can Liu: "DAWSON: A Domain Adaptive Few Shot Generation Framework." CoRR abs/2001.00576 (2020) [pdf] [code]
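
The optimization-based methods above meta-train the generator so that it adapts to a novel category in a few gradient steps. The Reptile update used by FIGR can be sketched on a toy scalar parameter (the quadratic per-task loss, learning rates, and step counts here are illustrative assumptions, not taken from the paper's code):

```python
def inner_sgd(theta, task_target, lr=0.1, steps=5):
    """A few SGD steps on one task's toy loss L(theta) = (theta - target)^2."""
    for _ in range(steps):
        grad = 2.0 * (theta - task_target)
        theta -= lr * grad
    return theta

def reptile_step(theta, task_target, meta_lr=0.5):
    """Reptile meta-update: move the meta-parameters toward the task-adapted ones."""
    adapted = inner_sgd(theta, task_target)
    return theta + meta_lr * (adapted - theta)

theta = 0.0
for target in [1.0, -1.0] * 50:  # alternating toy "tasks"
    theta = reptile_step(theta, target)
# theta settles between the two task optima rather than at either one
```

In the real setting, `theta` is the full set of generator (and discriminator) weights and each "task" is a few-shot generation episode sampled from the base categories.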

Fusion-based methods:

  • Sergey Bartunov, Dmitry P. Vetrov: "Few-shot Generative Modelling with Generative Matching Networks." AISTATS (2018) [pdf] [code]
  • Davis Wertheimer, Omid Poursaeed, Bharath Hariharan: "Augmentation-interpolative Autoencoders for Unsupervised Few-shot Image Generation." arXiv (2020). [pdf]
  • Yan Hong, Li Niu, Jianfu Zhang, Liqing Zhang: "MatchingGAN: Matching-based Few-shot Image Generation." ICME (2020) [pdf] [code]
  • Yan Hong, Li Niu, Jianfu Zhang, Weijie Zhao, Chen Fu, Liqing Zhang: "F2GAN: Fusing-and-Filling GAN for Few-shot Image Generation." ACM MM (2020) [pdf] [code]
  • Zheng Gu, Wenbin Li, Jing Huo, Lei Wang, Yang Gao: "LoFGAN: Fusing Local Representations for Few-shot Image Generation." ICCV (2021) [pdf] [code]
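
The common thread in fusion-based methods is to combine the features of the K conditional (support) images, e.g. with random interpolation coefficients. A minimal sketch with toy 2-D feature vectors (the convex-combination scheme is illustrative; real methods fuse learned feature maps, often locally):

```python
import random

def fuse(features, rng):
    """Fuse K support features with random convex-combination weights."""
    k = len(features)
    ws = [rng.random() for _ in range(k)]
    total = sum(ws)
    ws = [w / total for w in ws]          # weights are non-negative and sum to 1
    dim = len(features[0])
    return [sum(ws[i] * features[i][d] for i in range(k)) for d in range(dim)]

support = [[0.0, 2.0], [1.0, 0.0], [2.0, 1.0]]  # toy support-image "features"
sample = fuse(support, random.Random(0))
```

Because the result is a convex combination, each fused coordinate stays inside the range spanned by the support features, which is also why pure fusion tends to produce limited diversity.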

Transformation-based methods:

  • Antreas Antoniou, Amos J. Storkey, Harrison Edwards: "Data Augmentation Generative Adversarial Networks." arXiv (2017) [pdf] [code]
  • Guanqi Ding, Xinzhe Han, Shuhui Wang, Shuzhe Wu, Xin Jin, Dandan Tu, Qingming Huang: "Attribute Group Editing for Reliable Few-shot Image Generation." CVPR (2022) [pdf] [code]
  • Yan Hong, Li Niu, Jianfu Zhang, Liqing Zhang: "Few-shot Image Generation Using Discrete Content Representation." ACM MM (2022)
  • Yan Hong, Li Niu, Jianfu Zhang, Liqing Zhang: "DeltaGAN: Towards Diverse Few-shot Image Generation with Sample-Specific Delta." ECCV (2022) [pdf]

Datasets:

  • Omniglot: 1623 handwritten characters from 50 different alphabets. Each of the 1623 characters was drawn online via Amazon's Mechanical Turk by 20 different people [link]
  • EMNIST: 47 balanced classes [link]
  • FIGR: 17,375 classes of 1,548,256 images representing pictograms, ideograms, icons, emoticons or object or conception depictions [link]
  • VGG-Faces: 2395 categories [link]
  • Flowers: 8189 images from 102 flower classes [link]
  • Animal Faces: 117,574 images from 149 animal classes [link]

From Large Dataset to Small Dataset

The generative model is trained on a large dataset (base domain/category) and transferred to a small dataset (novel domain/category).

Finetuning-based methods: Finetune only a subset of the model parameters, or train a small number of additional parameters.

  • Atsuhiro Noguchi, Tatsuya Harada: "Image generation from small datasets via batch statistics adaptation." ICCV (2019) [pdf] [code]
  • Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman: "Few-shot Image Generation with Elastic Weight Consolidation." NeurIPS (2020) [pdf]
  • Esther Robb, Wen-Sheng Chu, Abhishek Kumar, Jia-Bin Huang: "Few-Shot Adaptation of Generative Adversarial Networks." arXiv (2020) [pdf] [code]
  • Miaoyun Zhao, Yulai Cong, Lawrence Carin: "On Leveraging Pretrained GANs for Generation with Limited Data." ICML (2020) [pdf] [code]
  • Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, Joost van de Weijer: "MineGAN: effective knowledge transfer from GANs to target domains with few images." CVPR (2020) [pdf] [code]
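
The idea of finetuning only a subset of the parameters (as in batch statistics adaptation) amounts to freezing most of the pretrained generator and updating only, e.g., normalization scale/shift parameters. A small sketch of the parameter selection step (the parameter names are hypothetical, not from any specific codebase):

```python
def split_parameters(param_names):
    """Keep only normalization scale/shift parameters trainable; freeze the rest."""
    trainable = [n for n in param_names if n.endswith((".scale", ".shift"))]
    frozen = [n for n in param_names if n not in trainable]
    return trainable, frozen

params = ["conv1.weight", "bn1.scale", "bn1.shift",
          "conv2.weight", "bn2.scale", "bn2.shift"]
trainable, frozen = split_parameters(params)
```

Restricting adaptation to so few parameters is what keeps these methods from overfitting the handful of target images.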

Regularization-based methods: Regularize the generated target images using prior knowledge from the source domain.

  • Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang: "Few-shot Image Generation via Cross-domain Correspondence." CVPR (2021) [pdf] [code]
  • Jiayu Xiao, Liang Li, Chaofei Wang, Zheng-Jun Zha, Qingming Huang: "Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment." CVPR (2022) [pdf] [code]
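
A representative regularizer of this kind (in the spirit of cross-domain correspondence) encourages the pairwise-similarity structure among a batch of target generations to match that of the source generator. A toy version with 1-D features, softmax similarity distributions, and a KL penalty (all values illustrative):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sim_dist(feats, i):
    """Similarity of sample i to every other sample, as a distribution."""
    sims = [-abs(feats[i] - feats[j]) for j in range(len(feats)) if j != i]
    return softmax(sims)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

source = [0.0, 1.0, 2.0, 4.0]   # toy features from the source generator
target = [0.1, 1.1, 2.0, 3.9]   # toy features from the adapted generator
loss = sum(kl(sim_dist(source, i), sim_dist(target, i))
           for i in range(len(source)))
```

The penalty is zero only when the target batch preserves the source batch's relative similarities, which is how source-domain diversity is transferred without copying source content.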

Datasets: Sometimes a subset of a dataset is used as the target dataset.

  • ImageNet: Over 1.4M images of 1k categories. [link]
  • FFHQ (Flickr Faces HQ Dataset): 70k 1024×1024 face images proposed by NVIDIA in the StyleGAN papers. [link]
  • Danbooru: Anime image dataset series. The latest version (2021) contains 4.9M images annotated with 162M tags. [link]
  • AFHQ (Animal Faces HQ Dataset): 15k 512×512 animal images of three categories: cat, dog, and wildlife. [link]
  • Artistic-Faces Dataset: 160 artistic portraits of 16 artists. [link]
  • LSUN: 1M images for each of 10 scene categories and 20 object categories. [link]
  • CelebA: 203k face images of 10k identities. [link]

Only Small Dataset

The generative model is directly trained on a small dataset.

  • Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, Song Han: "Differentiable Augmentation for Data-Efficient GAN Training." NeurIPS (2020). [pdf] [code]
  • Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal: "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis." ICLR (2021). [pdf] [code]
  • Mengyu Dai, Haibin Hang, Xiaoyang Guo: "Implicit Data Augmentation Using Feature Interpolation for Diversified Low-Shot Image Generation." arXiv (2021). [pdf]
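
The core recipe of DiffAugment is to apply the same differentiable augmentation to both real and generated images before the discriminator, so the discriminator never sees un-augmented images and the augmentation does not leak into the generator's outputs. A toy sketch with list-valued "images" and a hinge loss (the brightness augmentation and stand-in discriminator are illustrative assumptions):

```python
import random

def rand_brightness(img, rng):
    """Toy differentiable augmentation: shared random brightness shift."""
    delta = rng.uniform(-0.5, 0.5)
    return [px + delta for px in img]

def d_loss_on_batch(d, reals, fakes, augment, rng):
    """Hinge discriminator loss with the SAME augmentation family applied
    to both real and generated images (the core DiffAugment recipe)."""
    loss = 0.0
    for img in reals:
        loss += max(0.0, 1.0 - d(augment(img, rng)))  # real side
    for img in fakes:
        loss += max(0.0, 1.0 + d(augment(img, rng)))  # fake side
    return loss / (len(reals) + len(fakes))

rng = random.Random(0)
d = lambda img: sum(img)  # stand-in "discriminator" score
loss = d_loss_on_batch(d, [[1.0, 1.0]], [[-1.0, -1.0]], rand_brightness, rng)
```

In practice the augmentations must be differentiable so the generator loss can backpropagate through them to update G.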

In the extreme case, the generative model is directly trained on a single image. However, the learnt model can generally only recombine the repeated patterns within that image.

  • Tamar Rott Shaham, Tali Dekel, Tomer Michaeli: "SinGAN: Learning a Generative Model from a Single Natural Image." ICCV (2019). [pdf] [code]
  • Vadim Sushko, Jurgen Gall, Anna Khoreva: "One-Shot GAN: Learning to Generate Samples from Single Images and Videos." CVPR workshop (2021). [pdf]