
Panda-Peter / visda2019-multisource

Licence: other
Source code of our submission (Rank 1) for Multi-Source Domain Adaptation task in VisDA-2019

Programming Languages

python: 139335 projects (#7 most used programming language)
shell: 77523 projects

Projects that are alternatives of or similar to visda2019-multisource

SHOT-plus
code for our TPAMI 2021 paper "Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer"
Stars: ✭ 46 (-6.12%)
Mutual labels:  domain-adaptation, multi-source-domain-adaptation
Meta-SelfLearning
Meta Self-learning for Multi-Source Domain Adaptation: A Benchmark
Stars: ✭ 157 (+220.41%)
Mutual labels:  domain-adaptation, multi-source-domain-adaptation
Learning Via Translation
Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification (https://arxiv.org/pdf/1711.07027.pdf). CVPR2018
Stars: ✭ 202 (+312.24%)
Mutual labels:  domain-adaptation
Unlearning for MRI harmonisation
Code for implementation of Unlearning Scanner Bias for MRI Harmonisation
Stars: ✭ 22 (-55.1%)
Mutual labels:  domain-adaptation
SSTDA
[CVPR 2020] Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation (PyTorch)
Stars: ✭ 150 (+206.12%)
Mutual labels:  domain-adaptation
Intrada
Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision (CVPR 2020 Oral)
Stars: ✭ 211 (+330.61%)
Mutual labels:  domain-adaptation
transfertools
Python toolbox for transfer learning.
Stars: ✭ 22 (-55.1%)
Mutual labels:  domain-adaptation
Crst
Code for <Confidence Regularized Self-Training> in ICCV19 (Oral)
Stars: ✭ 177 (+261.22%)
Mutual labels:  domain-adaptation
Deep-Unsupervised-Domain-Adaptation
Pytorch implementation of four neural network based domain adaptation techniques: DeepCORAL, DDC, CDAN and CDAN+E. Evaluated on benchmark dataset Office31.
Stars: ✭ 50 (+2.04%)
Mutual labels:  domain-adaptation
MGAN
Exploiting Coarse-to-Fine Task Transfer for Aspect-level Sentiment Classification (AAAI'19)
Stars: ✭ 44 (-10.2%)
Mutual labels:  domain-adaptation
gpl
Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled corpus and yields massive improvement: "GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval" https://arxiv.org/abs/2112.07577
Stars: ✭ 216 (+340.82%)
Mutual labels:  domain-adaptation
TA3N
[ICCV 2019 Oral] TA3N: https://github.com/cmhungsteve/TA3N (Most updated repo)
Stars: ✭ 45 (-8.16%)
Mutual labels:  domain-adaptation
Ta3n
[ICCV 2019 (Oral)] Temporal Attentive Alignment for Large-Scale Video Domain Adaptation (PyTorch)
Stars: ✭ 217 (+342.86%)
Mutual labels:  domain-adaptation
G-SFDA
code for our ICCV 2021 paper 'Generalized Source-free Domain Adaptation'
Stars: ✭ 88 (+79.59%)
Mutual labels:  domain-adaptation
Seg Uncertainty
IJCAI2020 & IJCV 2020 🌇 Unsupervised Scene Adaptation with Memory Regularization in vivo
Stars: ✭ 202 (+312.24%)
Mutual labels:  domain-adaptation
DAOSL
Implementation of Domain Adaption in One-Shot Learning
Stars: ✭ 14 (-71.43%)
Mutual labels:  domain-adaptation
Bnm
code of Towards Discriminability and Diversity: Batch Nuclear-norm Maximization under Label Insufficient Situations (CVPR2020 oral)
Stars: ✭ 192 (+291.84%)
Mutual labels:  domain-adaptation
Clan
( CVPR2019 Oral ) Taking A Closer Look at Domain Shift: Category-level Adversaries for Semantics Consistent Domain Adaptation
Stars: ✭ 248 (+406.12%)
Mutual labels:  domain-adaptation
pytorch-revgrad
A minimal pytorch package implementing a gradient reversal layer.
Stars: ✭ 142 (+189.8%)
Mutual labels:  domain-adaptation
Domain-Consensus-Clustering
[CVPR2021] Domain Consensus Clustering for Universal Domain Adaptation
Stars: ✭ 85 (+73.47%)
Mutual labels:  domain-adaptation

visda2019-multisource

We release the source code of our submission (Rank 1) to the Multi-Source Domain Adaptation task in VisDA-2019. Details can be found in the technical report.

All the pretrained models, synthetic data generated via CycleGAN, and submission files can be downloaded from the link.

Prerequisites

You may need a machine with 4 GPUs and PyTorch v1.1.0 for Python 3.
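As a minimal sketch of an environment matching these prerequisites (the environment name, CUDA build, and torchvision version below are assumptions, not pinned by this repository):

# hypothetical environment setup; adjust versions and CUDA build to your machine
conda create -n visda2019 python=3.6
conda activate visda2019
pip install torch==1.1.0 torchvision==0.3.0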

Training

Train source-only models

  1. Go to the Adapt folder

  2. Train the source-only models:

bash experiments/<DOMAIN>/<NET>/train.sh

where <DOMAIN> is clipart or painting, and <NET> is the network (e.g., senet154).
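For example, a source-only run for the clipart domain with senet154 follows the pattern above; the concrete values are shown only for illustration:

# illustrative invocation of the pattern above
cd Adapt
bash experiments/clipart/senet154/train.sh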

Then repeat the following procedures (end-to-end adaptation, feature extraction, and feature fusion) 4 times.

Train the end-to-end adaptation module

bash experiments/<DOMAIN>/<NET>_<phase_id>/train.sh
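As an illustration, assuming phases are simply numbered starting from 1 (the actual <phase_id> naming should be checked against the experiments folder), the first adaptation phase for clipart with senet154 might be launched as:

# run from the Adapt folder; the "_1" phase suffix is an assumption
bash experiments/clipart/senet154_1/train.sh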

Extract features

  1. Copy the adaptation models to the folder ExtractFeat/experiments/<phase_id>/<DOMAIN>/<NET>/snapshot

  2. Extract features by running the scripts

bash experiments/<phase_id>/<DOMAIN>/scripts/<NET>.sh

  3. Copy the features from experiments/<phase_id>/<DOMAIN>/<NET>/<NET>_<source_and_target_domains>/result to dataset/visda2019/pkl_test/<phase_id>/<DOMAIN>/<NET> (the full sequence is sketched below)
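Putting these three steps together, one extraction pass might look like the following sketch; the phase id, domain, network, the location of the adaptation snapshots inside Adapt, and the relative location of dataset/ are assumptions used only for illustration:

# 1. copy the adaptation snapshots (source path inside Adapt is an assumption)
cp Adapt/experiments/clipart/senet154_1/snapshot/* \
   ExtractFeat/experiments/1/clipart/senet154/snapshot/
# 2. extract features
cd ExtractFeat
bash experiments/1/clipart/scripts/senet154.sh
# 3. copy the features to the pkl_test folder; <source_and_target_domains> is a placeholder
cp experiments/1/clipart/senet154/senet154_<source_and_target_domains>/result/* \
   ../dataset/visda2019/pkl_test/1/clipart/senet154/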

Train the feature-fusion-based adaptation module

  1. Go to the FeatFusionTest folder

  2. Train the feature-fusion-based adaptation module:

bash experiments/<phase_id>/<DOMAIN>/train.sh

  3. Copy the pseudo-label file to Adapt/experiments/<DOMAIN>/<NET>_<next_phase_id> for the next adaptation phase (see the sketch below)
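A single fusion pass with illustrative values might then be (the name of the pseudo-label file is an assumption; use whatever the fusion step actually writes):

# run the fusion step for phase 1 of the clipart track (values are illustrative)
cd FeatFusionTest
bash experiments/1/clipart/train.sh
# hand the pseudo labels to the next adaptation phase; the file name is a placeholder
cp experiments/1/clipart/<pseudo_label_file> \
   ../Adapt/experiments/clipart/senet154_2/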

Citation

Please cite our technical report in your publications if it helps your research:

@inproceedings{pan2019visda,
  title={Multi-Source Domain Adaptation and Semi-Supervised Domain Adaptation with Focus on Visual Domain Adaptation Challenge 2019},
  author={Pan, Yingwei and Li, Yehao and Cai, Qi and Chen, Yang and Yao, Ting},
  booktitle={Visual Domain Adaptation Challenge},
  year={2019}
}

Acknowledgements

Thanks to the domain adaptation community and the contributors of the PyTorch ecosystem.

PyTorch pretrained models by Cadene, and EfficientNet
