
artix41 / Awesome Transfer Learning

Best transfer learning and domain adaptation resources (papers, tutorials, datasets, etc.)

Projects that are alternatives to or similar to Awesome Transfer Learning

Transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, tutorials. (迁移学习: transfer learning)
Stars: ✭ 8,481 (+528.69%)
Mutual labels:  transfer-learning, domain-adaptation, paper, unsupervised-learning
KD3A
Here is the official implementation of the model KD3A in paper "KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation".
Stars: ✭ 63 (-95.33%)
Mutual labels:  transfer-learning, unsupervised-learning, domain-adaptation
Awesome Domain Adaptation
A collection of AWESOME things about domain adaptation
Stars: ✭ 3,357 (+148.85%)
Mutual labels:  paper, transfer-learning, domain-adaptation
TA3N
[ICCV 2019 Oral] TA3N: https://github.com/cmhungsteve/TA3N (Most updated repo)
Stars: ✭ 45 (-96.66%)
Mutual labels:  transfer-learning, unsupervised-learning, domain-adaptation
L2c
Learning to Cluster. A deep clustering strategy.
Stars: ✭ 262 (-80.58%)
Mutual labels:  unsupervised-learning, transfer-learning
adapt
Awesome Domain Adaptation Python Toolbox
Stars: ✭ 46 (-96.59%)
Mutual labels:  transfer-learning, domain-adaptation
Mmt
[ICLR-2020] Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification.
Stars: ✭ 345 (-74.43%)
Mutual labels:  unsupervised-learning, domain-adaptation
All About The Gan
All About the GANs(Generative Adversarial Networks) - Summarized lists for GAN
Stars: ✭ 630 (-53.3%)
Mutual labels:  paper, unsupervised-learning
cmd
Central Moment Discrepancy for Domain-Invariant Representation Learning (ICLR 2017, keras)
Stars: ✭ 53 (-96.07%)
Mutual labels:  transfer-learning, domain-adaptation
Multitask Learning
Awesome Multitask Learning Resources
Stars: ✭ 361 (-73.24%)
Mutual labels:  transfer-learning, domain-adaptation
Transfer Learning Library
Transfer-Learning-Library
Stars: ✭ 678 (-49.74%)
Mutual labels:  transfer-learning, domain-adaptation
SHOT-plus
code for our TPAMI 2021 paper "Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer"
Stars: ✭ 46 (-96.59%)
Mutual labels:  transfer-learning, domain-adaptation
Cross Domain ner
Cross-domain NER using cross-domain language modeling, code for ACL 2019 paper
Stars: ✭ 67 (-95.03%)
Mutual labels:  transfer-learning, domain-adaptation
He4o
He (和, "he" for Objective-C): an "information entropy reduction machine" system
Stars: ✭ 284 (-78.95%)
Mutual labels:  unsupervised-learning, transfer-learning
awesome-contrastive-self-supervised-learning
A comprehensive list of awesome contrastive self-supervised learning papers.
Stars: ✭ 748 (-44.55%)
Mutual labels:  transfer-learning, unsupervised-learning
Nlp Paper
NLP Paper
Stars: ✭ 484 (-64.12%)
Mutual labels:  paper, transfer-learning
Weakly Supervised 3d Object Detection
Weakly Supervised 3D Object Detection from Point Clouds (VS3D), ACM MM 2020
Stars: ✭ 61 (-95.48%)
Mutual labels:  unsupervised-learning, transfer-learning
Deep Transfer Learning
Deep Transfer Learning Papers
Stars: ✭ 68 (-94.96%)
Mutual labels:  transfer-learning, domain-adaptation
BIFI
[ICML 2021] Break-It-Fix-It: Unsupervised Learning for Program Repair
Stars: ✭ 74 (-94.51%)
Mutual labels:  unsupervised-learning, domain-adaptation
transfer-learning-algorithms
Implementation of many transfer learning algorithms in Python with Jupyter notebooks
Stars: ✭ 42 (-96.89%)
Mutual labels:  transfer-learning, domain-adaptation

Awesome Transfer Learning

A list of awesome papers and cool resources on transfer learning, domain adaptation and domain-to-domain translation in general! As you will notice, this list is currently mostly focused on domain adaptation (DA) and domain-to-domain translation, but don't hesitate to suggest resources in other subfields of transfer learning.

Note: this list is no longer actively maintained, but I still accept pull requests, so please don't hesitate to contribute newer resources.

Table of Contents

Tutorials and Blogs

Papers

Papers are ordered by theme and, within each theme, by publication date (submission date for arXiv papers). If a paper gives its network or algorithm a name, that name is written in bold before the paper's title.

Surveys

Deep Transfer Learning

Transfer of deep learning models.

Fine-tuning approach

Feature extraction (embedding) approach
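
To make the two approaches above concrete, here is a minimal PyTorch sketch (an illustration only, assuming torchvision's ImageNet-pretrained ResNet-18 and a hypothetical 10-class target task): fine-tuning updates every weight, while feature extraction freezes the pretrained backbone and trains only a new head.

```python
# Sketch only: contrasting fine-tuning with feature extraction,
# assuming torchvision's ImageNet-pretrained ResNet-18 and a
# hypothetical 10-class target task.
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 10  # assumption: the target task has 10 classes

# 1) Fine-tuning: replace the classifier head and update *all* weights.
finetune_model = models.resnet18(pretrained=True)
finetune_model.fc = nn.Linear(finetune_model.fc.in_features, NUM_TARGET_CLASSES)
# every parameter stays trainable -> pass finetune_model.parameters() to the optimizer

# 2) Feature extraction: freeze the pretrained backbone, train only the new head.
feature_model = models.resnet18(pretrained=True)
for param in feature_model.parameters():
    param.requires_grad = False          # backbone acts as a fixed embedding
feature_model.fc = nn.Linear(feature_model.fc.in_features, NUM_TARGET_CLASSES)
# only feature_model.fc.parameters() need to be optimized
```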

Multi-task learning

Policy transfer for RL

Few-shot transfer learning

Meta transfer learning

Applications

Medical imaging:

Robotics

Unsupervised Domain Adaptation

Transfer between a source and a target domain. In unsupervised domain adaptation, only the source domain is labelled.

Theory

General

Multi-source

Adversarial methods
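
As a rough illustration of the adversarial idea behind methods such as DANN (which appears in the results table below), here is a hedged PyTorch sketch of a gradient reversal layer: the feature extractor is trained to fool a domain classifier, so the learned features become domain-invariant. All module names and layer sizes are illustrative assumptions, not any paper's original implementation.

```python
# Sketch of DANN-style adversarial adaptation: a gradient reversal layer
# makes the feature extractor maximise the domain classifier's loss.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (and scale) the gradients flowing back into the features
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Illustrative sizes: 784-d inputs (e.g. flattened digits), 10 classes, 2 domains.
feature_extractor = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
label_classifier = nn.Linear(256, 10)     # trained on labelled source data only
domain_classifier = nn.Linear(256, 2)     # source vs. target discriminator

def forward(x, lambd=1.0):
    feats = feature_extractor(x)
    class_logits = label_classifier(feats)
    domain_logits = domain_classifier(grad_reverse(feats, lambd))
    return class_logits, domain_logits
```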

Learning a latent space

Image-to-Image translation
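
Image-to-image translation methods such as CycleGAN (whose datasets are listed below) rely on a cycle-consistency term in addition to the adversarial losses. The following toy PyTorch sketch shows that term on 1-D features; the linear "generators" are placeholders for real image-to-image networks.

```python
# Toy sketch of the cycle-consistency loss used by CycleGAN-style translation.
# The linear "generators" stand in for image-to-image networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Linear(16, 16)    # placeholder generator X -> Y
F_ = nn.Linear(16, 16)   # placeholder generator Y -> X

def cycle_consistency_loss(x, y):
    # each sample should survive the round trip X -> Y -> X (and Y -> X -> Y)
    return F.l1_loss(F_(G(x)), x) + F.l1_loss(G(F_(y)), y)

x = torch.randn(8, 16)   # batch from domain X
y = torch.randn(8, 16)   # batch from domain Y
loss = cycle_consistency_loss(x, y)
```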

Multi-source adaptation

Temporal models (videos)

Optimal Transport
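
The core idea of optimal-transport-based adaptation (e.g. behind DeepJDOT in the results below) is to compute a coupling between source and target samples and move the source onto the target. Here is a hedged pure-NumPy sketch of entropic Sinkhorn iterations followed by a barycentric mapping; the regularisation strength and toy data are assumptions.

```python
# Sketch: entropic-regularised optimal transport between source and target
# feature clouds via Sinkhorn iterations, then barycentric projection of the
# source samples into the target domain. Pure NumPy, for illustration only.
import numpy as np

def sinkhorn(Xs, Xt, reg=0.05, n_iter=200):
    ns, nt = len(Xs), len(Xt)
    a, b = np.full(ns, 1.0 / ns), np.full(nt, 1.0 / nt)      # uniform marginals
    C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)     # squared Euclidean cost
    C = C / C.max()                                           # normalise for numerical stability
    K = np.exp(-C / reg)
    u = np.ones(ns)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]                        # transport plan, shape (ns, nt)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (50, 2))    # toy source features
Xt = rng.normal(3.0, 1.0, (60, 2))    # toy target features (shifted domain)
T = sinkhorn(Xs, Xt)
Xs_mapped = (T @ Xt) / T.sum(axis=1, keepdims=True)           # source mapped into the target domain
```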

Embedding methods

Kernel methods
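
A quantity that many kernel-based adaptation methods minimise is the Maximum Mean Discrepancy (MMD) between source and target features. Below is a small NumPy sketch of a biased RBF-kernel MMD estimate; the fixed bandwidth is an assumption (in practice a median heuristic or multiple kernels are common).

```python
# Sketch: (biased) squared Maximum Mean Discrepancy with an RBF kernel,
# a distance between the source and target feature distributions that
# kernel-based adaptation methods typically minimise.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    """Biased estimate of squared MMD between source and target samples."""
    k_ss = rbf_kernel(Xs, Xs, gamma).mean()
    k_tt = rbf_kernel(Xt, Xt, gamma).mean()
    k_st = rbf_kernel(Xs, Xt, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
print(mmd2(rng.normal(0.0, 1.0, (100, 16)), rng.normal(0.5, 1.0, (100, 16))))
```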

Autoencoder approach

Subspace Learning

Self-Ensembling methods
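
Self-ensembling approaches typically keep a "teacher" whose weights are an exponential moving average of the student's, and enforce consistency between the two on unlabelled target data. The PyTorch sketch below is only illustrative; the architecture, EMA rate and consistency loss are assumptions.

```python
# Sketch of the self-ensembling idea: an EMA "teacher" of the student's
# weights, plus a consistency loss on unlabelled target batches.
import copy
import torch
import torch.nn.functional as F

student = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, 10))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)          # the teacher is only updated via EMA

@torch.no_grad()
def ema_update(student, teacher, alpha=0.99):
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(alpha).add_(ps, alpha=1.0 - alpha)

def consistency_loss(x_target):
    # in practice the two forward passes see differently augmented views
    p_student = F.softmax(student(x_target), dim=1)
    with torch.no_grad():
        p_teacher = F.softmax(teacher(x_target), dim=1)
    return F.mse_loss(p_student, p_teacher)
```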

Other

Semi-supervised Domain Adaptation

All the source points are labelled, but only a few target points are.

General methods

Subspace learning

Copulas methods

Few-shot Supervised Domain Adaptation

Only a few target examples are available, but they are labelled.

Adversarial methods

Embedding methods

Applied Domain Adaptation

Domain adaptation applied to other fields

Physics

Audio Processing

Datasets

Image-to-image

  • MNIST vs MNIST-M vs SVHN vs Synth vs USPS: digit images (see the loading sketch after this list)
  • GTSRB vs Syn Signs: traffic sign recognition datasets, transfer between real and synthetic signs.
  • NYU Depth Dataset V2: labeled paired images taken with two different cameras (normal and depth)
  • CelebA: faces of celebrities, offering the possibility to perform gender or hair color translation for instance
  • Office-Caltech dataset: images of office objects from 10 common categories shared by the Office-31 and Caltech-256 datasets. There are in total four domains: Amazon, Webcam, DSLR and Caltech.
  • Cityscapes dataset: street scene photos (source) and their annotated version (target)
  • UnityEyes vs MPIIGaze: simulated vs real gaze images (eyes)
  • CycleGAN datasets: horse2zebra, apple2orange, cezanne2photo, monet2photo, ukiyoe2photo, vangogh2photo, summer2winter
  • pix2pix dataset: edges2handbags, edges2shoes, facade, maps
  • RaFD: facial images with 8 different emotions (anger, disgust, fear, happiness, sadness, surprise, contempt, and neutral). You can transfer a face from one emotion to another.
  • VisDA 2017 classification dataset: 12 categories of object images in 2 domains: 3D-models and real images.
  • Office-Home dataset: images of objects in 4 domains: art, clipart, product and real-world.
  • DukeMTMC-reid and Market-1501: two pedestrian datasets collected at different places. The evaluation metric is based on open-set image retrieval.
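
For the digit-transfer experiments reported below, several of these datasets can be loaded directly with torchvision (MNIST, USPS and SVHN); MNIST-M and Synth Digits have to be obtained separately. A minimal loading sketch, assuming a local ./data directory:

```python
# Sketch: loading some of the digit datasets above with torchvision.
# MNIST-M and Synth Digits are not in torchvision and must be downloaded separately.
from torchvision import datasets, transforms

# resize to 32x32 and 3 channels so all digit domains share one input format
tf = transforms.Compose([
    transforms.Resize(32),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
])

mnist = datasets.MNIST("./data", train=True, download=True, transform=tf)
usps = datasets.USPS("./data", train=True, download=True, transform=tf)
svhn = datasets.SVHN("./data", split="train", download=True,
                     transform=transforms.Compose([transforms.Resize(32),
                                                   transforms.ToTensor()]))
```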

Text-to-text

Results

The results are reported as the prediction accuracy (in %) on the target domain after adapting from the source to the target. For the moment, they only correspond to the numbers given in the original papers, so the methodology may vary from paper to paper and these results must be taken with a grain of salt.

Digits transfer (unsupervised)

Method       MNIST→MNIST-M  Synth→SVHN  MNIST→SVHN  SVHN→MNIST  MNIST→USPS  USPS→MNIST
SA           56.90          86.44       ?           59.32       ?           ?
DANN         76.66          91.09       ?           73.85       ?           ?
iDANN        96.67          91.95       36.49       84.50       ?           ?
CoGAN        ?              ?           ?           ?           91.2        89.1
DRCN         ?              ?           40.05       81.97       91.80       73.67
DSN          83.2           91.2        ?           82.7        ?           ?
DTN          ?              ?           90.66       79.72       ?           ?
PixelDA      98.2           ?           ?           ?           95.9        ?
ADDA         ?              ?           ?           76.0        89.4        90.1
UNIT         ?              ?           ?           90.53       95.97       93.58
GenToAdapt   ?              ?           ?           92.4        95.3        90.8
SBADA-GAN    99.4           ?           61.1        76.1        97.6        95.0
DAassoc      89.47          91.86       ?           97.60       ?           ?
CyCADA       ?              ?           ?           90.4        95.6        96.5
I2I          ?              ?           ?           92.1        95.1        92.2
DIRT-T       98.7           ?           76.5        99.4        ?           ?
DeepJDOT     92.4           ?           ?           96.7        95.7        96.4
DTA          ?              ?           ?           99.4        99.5        99.1
LSTNet       ?              ?           ?           ?           97.61       97.01

Challenges

Libraries

  • Domain Adaptation: Salad (Semi-supervised Adaptive Learning Across Domains)

Books
