
hankyul2 / DomainAdaptation

License: Apache-2.0
Domain Adaptation

Programming Languages

python

Projects that are alternatives of or similar to DomainAdaptation

Transferable-E2E-ABSA
Transferable End-to-End Aspect-based Sentiment Analysis with Selective Adversarial Learning (EMNLP'19)
Stars: ✭ 62 (+210%)
Mutual labels:  domain-adaptation
DA-RetinaNet
Official Detectron2 implementation of DA-RetinaNet of our Image and Vision Computing 2021 work 'An unsupervised domain adaptation scheme for single-stage artwork recognition in cultural sites'
Stars: ✭ 31 (+55%)
Mutual labels:  domain-adaptation
transfer-learning-algorithms
Implementation of many transfer learning algorithms in Python with Jupyter notebooks
Stars: ✭ 42 (+110%)
Mutual labels:  domain-adaptation
meta-learning-progress
Repository to track the progress in Meta-Learning (MtL), including the datasets and the current state-of-the-art for the most common MtL problems.
Stars: ✭ 26 (+30%)
Mutual labels:  domain-adaptation
pytorch-domain-adaptation
Unofficial pytorch implementation of algorithms for domain adaptation
Stars: ✭ 24 (+20%)
Mutual labels:  domain-adaptation
pytorch-dann
A PyTorch implementation for Unsupervised Domain Adaptation by Backpropagation
Stars: ✭ 110 (+450%)
Mutual labels:  domain-adaptation
AdaptationSeg
Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes, ICCV 2017
Stars: ✭ 128 (+540%)
Mutual labels:  domain-adaptation
bert-AAD
Adversarial Adaptation with Distillation for BERT Unsupervised Domain Adaptation
Stars: ✭ 27 (+35%)
Mutual labels:  domain-adaptation
DualStudent
Code for Paper ''Dual Student: Breaking the Limits of the Teacher in Semi-Supervised Learning'' [ICCV 2019]
Stars: ✭ 106 (+430%)
Mutual labels:  domain-adaptation
BIFI
[ICML 2021] Break-It-Fix-It: Unsupervised Learning for Program Repair
Stars: ✭ 74 (+270%)
Mutual labels:  domain-adaptation
CAC-UNet-DigestPath2019
1st to MICCAI DigestPath2019 challenge (https://digestpath2019.grand-challenge.org/Home/) on colonoscopy tissue segmentation and classification task. (MICCAI 2019) https://teacher.bupt.edu.cn/zhuchuang/en/index.htm
Stars: ✭ 83 (+315%)
Mutual labels:  domain-adaptation
ACAN
Code for NAACL 2019 paper: Adversarial Category Alignment Network for Cross-domain Sentiment Classification
Stars: ✭ 23 (+15%)
Mutual labels:  domain-adaptation
Natural-language-understanding-papers
NLU: domain-intent-slot; text2SQL
Stars: ✭ 77 (+285%)
Mutual labels:  domain-adaptation
CrossNER
CrossNER: Evaluating Cross-Domain Named Entity Recognition (AAAI-2021)
Stars: ✭ 87 (+335%)
Mutual labels:  domain-adaptation
game-feature-learning
Code for paper "Cross-Domain Self-supervised Multi-task Feature Learning using Synthetic Imagery", Ren et al., CVPR'18
Stars: ✭ 68 (+240%)
Mutual labels:  domain-adaptation
DCAN
[AAAI 2020] Code release for "Domain Conditioned Adaptation Network" https://arxiv.org/abs/2005.06717
Stars: ✭ 27 (+35%)
Mutual labels:  domain-adaptation
VisDA2020
VisDA2020: 4th Visual Domain Adaptation Challenge in ECCV'20
Stars: ✭ 53 (+165%)
Mutual labels:  domain-adaptation
chainer-ADDA
Adversarial Discriminative Domain Adaptation in Chainer
Stars: ✭ 24 (+20%)
Mutual labels:  domain-adaptation
DRCN
Pytorch implementation of Deep Reconstruction Classification Networks
Stars: ✭ 31 (+55%)
Mutual labels:  domain-adaptation
ganslate
Simple and extensible GAN image-to-image translation framework. Supports natural and medical images.
Stars: ✭ 17 (-15%)
Mutual labels:  domain-adaptation

Domain Adaptation (PyTorch Lightning)

This repository contains unofficial PyTorch (PyTorch Lightning) implementations of the following domain adaptation papers:

  1. DANN (2015) [paper, repo]
  2. CDAN (2017) [paper, repo]
  3. MSTN (2018) [paper, repo]
  4. BSP (2019) [paper, repo]
  5. DSBN (2019) [paper, repo]
  6. RSDA-MSTN (2020) [paper, repo]
  7. SHOT (2020) [paper, repo]
  8. TransDA (2021) [paper, repo]
  9. FixBi (2021) [paper, repo]
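Several of the methods above (DANN, CDAN, MSTN, and the BSP variants) are built on adversarial training through a gradient reversal layer: features pass through unchanged on the forward pass, while the gradient coming back from the domain discriminator is flipped in sign. A minimal PyTorch sketch of that building block (illustrative only, not this repository's code; the name `grad_reverse` and the `alpha` scaling factor are assumptions):

```python
import torch
from torch.autograd import Function

class GradientReversal(Function):
    """Identity on the forward pass; multiplies the gradient by -alpha on backward."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the feature extractor,
        # so the features are trained to confuse the domain discriminator.
        return -ctx.alpha * grad_output, None

def grad_reverse(x, alpha=1.0):
    return GradientReversal.apply(x, alpha)
```

In DANN-style training, the feature extractor's output is passed through `grad_reverse` before the domain classifier, so one backward pass updates both adversaries at once.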

Tutorial (use standard dataset: Office-31)

  1. Clone the repository and install dependencies

    git clone https://github.com/hankyul2/DomainAdaptation.git
    cd DomainAdaptation
    pip3 install -r requirements.txt
  2. Train a model (if you don't use Neptune, just remove it from the config). Check the available configurations in configs

    python3 main.py fit --config=configs/cdan_e.yaml -d 'amazon_webcam' -g '0,'
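The `main.py fit --config=...` invocation follows pytorch-lightning's LightningCLI convention, where a single YAML file bundles trainer, model, and data settings. The layout below is only an illustrative sketch of that convention; all keys and values here are assumptions, and the actual schema is whatever configs/cdan_e.yaml defines:

```yaml
# Hypothetical LightningCLI-style layout; the real keys live in configs/cdan_e.yaml.
seed_everything: 42
trainer:
  max_epochs: 200
  logger: null        # drop or replace the Neptune logger here if you don't use it
model:
  backbone: resnet50
  lr: 0.01
data:
  dataset: amazon_webcam
  batch_size: 32
```

Command-line flags such as `-d` and `-g` override the corresponding config entries at launch time.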

Closed-world Benchmark Results

Office-31 benchmark scores as reported in the original papers

A>D A>W D>A D>W W>A W>D Avg
source only 80.8 76.9 60.3 95.3 63.6 98.7 79.3
DANN (2015) 79.7 82.0 68.2 96.9 67.4 99.1 82.2
CDAN (2017) 89.8 93.1 70.1 98.2 68.0 100.0 86.6
CDAN+E (2017) 92.9 94.1 71.0 98.6 69.3 100.0 87.7
MSTN (2018) 90.4 91.3 72.7 98.9 65.6 100.0 86.5
BSP+DANN (2019) 90.0 93.0 71.9 98.0 73.0 100.0 87.7
BSP+CDAN+E (2019) 93.0 93.3 73.6 98.2 72.6 100.0 88.5
DSBN+MSTN (2019) 90.8 93.3 72.7 99.1 73.9 100.0 88.3
RSDA+MSTN (2020) 95.8 96.1 77.4 99.3 78.9 100.0 91.1
SHOT (2020) 94.0 90.1 74.7 98.4 74.3 99.9 88.6
TransDA (2021) 97.2 95.0 73.7 99.3 79.3 99.6 90.7
FixBi (2021) 95.0 96.1 78.7 99.3 79.4 100.0 91.4

In this work

(In the original table, each transfer column linked per-run training logs via tf.dev, and most scores linked the trained model weights.)

A>D A>W D>A D>W W>A W>D Avg
source only [code, config] 82.3 77.9 63.0 94.5 64.7 98.3 80.1
source only ViT [code, config] 88.0 87.9 76.7 97.7 77.1 99.7 87.8
DANN (2015) [code, config] 87.2 90.4 70.6 97.8 73.7 99.7 86.6
CDAN (2017) [code, config] 92.4 95.1 75.8 98.6 74.4 99.9 89.4
CDAN+E (2017) [code, config] 93.2 95.6 75.1 98.7 75.0 100.0 89.6
MSTN (2018) [code, config] 89.0 92.7 71.4 97.9 74.1 99.9 87.5
BSP+DANN (2019) [code, config] 86.3 89.1 71.4 97.7 73.4 100.0 86.3
BSP+CDAN+E (2019) [code, config] 92.6 94.7 73.8 98.7 74.7 100.0 89.1
DSBN+MSTN Stage 1 (2019) [code, config] 87.8 92.3 72.2 98.0 73.2 99.9 87.2
DSBN+MSTN Stage 2 (2019) [code, config] 90.6 93.5 74.0 98.0 73.1 99.5 88.1
RSDA+MSTN (2020) [Not Implemented] - - - - - - -
SHOT (2020) [code, config] 93.2 92.5 74.3 98.2 75.9 100.0 89.0
SHOT (CDAN+E) (2020) [code, config] 93.2 95.7 77.7 98.9 76.0 100.0 90.2
MIXUP (CDAN+E) (2021) [code, config] 92.9 96.1 76.2 98.9 77.7 100.0 90.3
TransDA (2021) [code, config] 94.4 95.8 82.3 99.2 82.0 99.8 92.3
FixBi (2021) [code, config] 90.8 95.7 72.6 98.7 74.8 100.0 88.8

Note

  1. Reported scores are taken from the SHOT and FixBi papers.
  2. The evaluation protocol uses the target domain for both validation and test (valid = test = target). This looks odd to me, but there is no other way to reproduce the papers' results. The source-only models are evaluated differently: valid = source, test = target.
  3. Scores in this work are averaged over 3 runs.
  4. If you want to use the pretrained model weights, you have to add the weight-loading code yourself.
  5. The optimizer and learning-rate scheduler are the same for all models (SGD), except MSTN and DSBN+MSTN, which use Adam.
  6. SHOT can produce lower accuracy than the reported scores. To reproduce them, I recommend using the provided source-only model weights. I don't know why.
  7. BSP, DSBN+MSTN, and FixBi fail to reproduce the scores reported in their papers.
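For reference, the SGD learning-rate schedule these papers typically use is the "inverse" decay introduced with DANN: lr(p) = lr0 * (1 + gamma * p)^(-power), where p is training progress in [0, 1], with the common defaults gamma = 10 and power = 0.75. A small sketch of that formula (the repository's exact scheduler and hyperparameters may differ):

```python
def inv_lr(base_lr, progress, gamma=10.0, power=0.75):
    """DANN-style inverse decay.

    progress runs from 0.0 (start of training) to 1.0 (end of training);
    the rate starts at base_lr and decays smoothly toward base_lr * (1 + gamma)**(-power).
    """
    return base_lr * (1.0 + gamma * progress) ** (-power)
```

In practice this is applied per step by computing progress as current_step / total_steps and updating each parameter group's learning rate.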

Experiment

Combinations of several methods or models. (No weights or tf.dev logs are provided.)

A>D A>W D>A D>W W>A W>D Avg
DANN ViT (2015) [code, config] 90.0 91.0 79.0 98.9 78.8 99.9 89.6
CDAN ViT (2017) [code, config] 94.7 96.3 80.0 98.9 80.4 100.0 91.7
CDAN+E ViT (2017) [code, config] 97.1 96.7 80.1 99.2 79.8 99.9 92.1
SHOT (CDAN+E) (2020) [code, config] 93.2 95.7 77.7 98.9 76.0 100.0 90.2
MIXUP (CDAN+E) (2021) [code, config] 92.9 96.1 76.2 98.9 77.7 100.0 90.3

Future Updates

  • Add weight parameter
  • Add ViT results
  • Check FixBi code
  • Add office-home dataset results
  • Add digits results

Some Notes

  1. The code uses pytorch-lightning, so if you are unfamiliar with it, I recommend reading the pytorch-lightning quick start. (The quick start is enough to follow this code.)
  2. To avoid code duplication, we use class inheritance and add only the changes proposed in each paper. We try to keep the code simple and readable; if you find it difficult to read, please open an issue or PR.
  3. Only 8 papers are implemented so far. If you would like a certain paper added, we will try to implement it.
  4. There are some problems in the backbone code (I could not find where), so performance can be lower than the tables above. I recommend using a standard library model (timm, torchvision, etc.).