
NaJaeMin92 / FixBi

Licence: other
FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation (CVPR 2021)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to FixBi

Domain-Consensus-Clustering
[CVPR2021] Domain Consensus Clustering for Universal Domain Adaptation
Stars: ✭ 85 (+77.08%)
Mutual labels:  domain-adaptation, cvpr2021
CrossNER
CrossNER: Evaluating Cross-Domain Named Entity Recognition (AAAI-2021)
Stars: ✭ 87 (+81.25%)
Mutual labels:  domain-adaptation
Transfer-learning-materials
resource collection for transfer learning!
Stars: ✭ 213 (+343.75%)
Mutual labels:  domain-adaptation
AdaptationSeg
Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes, ICCV 2017
Stars: ✭ 128 (+166.67%)
Mutual labels:  domain-adaptation
IAST-ECCV2020
IAST: Instance Adaptive Self-training for Unsupervised Domain Adaptation (ECCV 2020) https://teacher.bupt.edu.cn/zhuchuang/en/index.htm
Stars: ✭ 84 (+75%)
Mutual labels:  domain-adaptation
Transferable-E2E-ABSA
Transferable End-to-End Aspect-based Sentiment Analysis with Selective Adversarial Learning (EMNLP'19)
Stars: ✭ 62 (+29.17%)
Mutual labels:  domain-adaptation
DeFLOCNet
The official pytorch code of DeFLOCNet: Deep Image Editing via Flexible Low-level Controls (CVPR2021)
Stars: ✭ 38 (-20.83%)
Mutual labels:  cvpr2021
single-positive-multi-label
Multi-Label Learning from Single Positive Labels - CVPR 2021
Stars: ✭ 63 (+31.25%)
Mutual labels:  cvpr2021
CondenseNetV2
[CVPR 2021] CondenseNet V2: Sparse Feature Reactivation for Deep Networks
Stars: ✭ 73 (+52.08%)
Mutual labels:  cvpr2021
Transformers-Domain-Adaptation
Adapt Transformer-based language models to new text domains
Stars: ✭ 67 (+39.58%)
Mutual labels:  domain-adaptation
BA3US
code for our ECCV 2020 paper "A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation"
Stars: ✭ 31 (-35.42%)
Mutual labels:  domain-adaptation
BCNet
Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers [CVPR 2021]
Stars: ✭ 434 (+804.17%)
Mutual labels:  cvpr2021
RSCD
[CVPR2021] Towards Rolling Shutter Correction and Deblurring in Dynamic Scenes
Stars: ✭ 83 (+72.92%)
Mutual labels:  cvpr2021
multichannel-semseg-with-uda
Multichannel Semantic Segmentation with Unsupervised Domain Adaptation
Stars: ✭ 19 (-60.42%)
Mutual labels:  domain-adaptation
Im2Vec
[CVPR 2021 Oral] Im2Vec Synthesizing Vector Graphics without Vector Supervision
Stars: ✭ 229 (+377.08%)
Mutual labels:  cvpr2021
LabelRelaxation-CVPR21
Official PyTorch Implementation of Embedding Transfer with Label Relaxation for Improved Metric Learning, CVPR 2021
Stars: ✭ 37 (-22.92%)
Mutual labels:  cvpr2021
DAS
Code and datasets for EMNLP2018 paper ‘‘Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification’’.
Stars: ✭ 48 (+0%)
Mutual labels:  domain-adaptation
DCAN
[AAAI 2020] Code release for "Domain Conditioned Adaptation Network" https://arxiv.org/abs/2005.06717
Stars: ✭ 27 (-43.75%)
Mutual labels:  domain-adaptation
CAC-UNet-DigestPath2019
1st to MICCAI DigestPath2019 challenge (https://digestpath2019.grand-challenge.org/Home/) on colonoscopy tissue segmentation and classification task. (MICCAI 2019) https://teacher.bupt.edu.cn/zhuchuang/en/index.htm
Stars: ✭ 83 (+72.92%)
Mutual labels:  domain-adaptation
meta-learning-progress
Repository to track the progress in Meta-Learning (MtL), including the datasets and the current state-of-the-art for the most common MtL problems.
Stars: ✭ 26 (-45.83%)
Mutual labels:  domain-adaptation

FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation


FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation
Jaemin Na, Heechul Jung, Hyung Jin Chang, Wonjun Hwang
In CVPR 2021.

Abstract: Unsupervised domain adaptation (UDA) methods for learning domain-invariant representations have achieved remarkable progress. However, most studies have been based on direct adaptation from the source domain to the target domain and suffer from large domain discrepancies. In this paper, we propose a UDA method that effectively handles such large domain discrepancies. We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain. From the augmented domains, we train a source-dominant model and a target-dominant model that have complementary characteristics. Using our confidence-based learning methodologies, e.g., bidirectional matching with high-confidence predictions and self-penalization using low-confidence predictions, the models can learn from each other or from their own results. Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain. Extensive experiments demonstrate the superiority of our proposed method on three public benchmarks: Office-31, Office-Home, and VisDA-2017.
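
The two main ideas in the abstract can be summarized in a short sketch. The following is a minimal, illustrative PyTorch sketch of fixed ratio-based mixup and bidirectional matching, not the official implementation; the ratio values, confidence threshold, and function names are assumptions made for illustration.

# Minimal sketch (not the official code) of fixed ratio-based mixup and
# confidence-based bidirectional matching. Ratios and threshold are illustrative.
import torch
import torch.nn.functional as F

def fixed_ratio_mixup_loss(model, x_src, y_src, x_tgt, y_tgt_pseudo, lam):
    """Mix source/target batches with a fixed ratio lam and train on the mix."""
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    logits = model(x_mix)
    # The loss is weighted by the same fixed ratio; target labels are pseudo-labels.
    return lam * F.cross_entropy(logits, y_src) + \
           (1.0 - lam) * F.cross_entropy(logits, y_tgt_pseudo)

def bidirectional_matching_loss(logits_teacher, logits_student, threshold=0.95):
    """High-confidence predictions of one model supervise the other model."""
    probs = logits_teacher.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf.ge(threshold).float()
    loss = F.cross_entropy(logits_student, pseudo, reduction="none")
    return (mask * loss).mean()

# The source-dominant and target-dominant models would use complementary fixed
# ratios (e.g. lam = 0.7 vs. lam = 0.3, illustrative values) and exchange
# pseudo-labels on the unlabeled target batch via bidirectional matching.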

Table of Contents

  • Introduction
  • Requirements
  • Getting Started
  • Citation

Introduction

Video: Click the figure in the repository to watch the explanation video on YouTube.

Requirements

  • Linux
  • Python >= 3.7
  • PyTorch == 1.7.1
  • CUDA (a version supported by PyTorch 1.7.1)
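
A quick way to verify that an environment matches these requirements (a convenience snippet, not part of the repository):

import sys
import torch

# Expect Python >= 3.7 and a CUDA-enabled PyTorch 1.7.1 build.
assert sys.version_info >= (3, 7), "Python >= 3.7 is required"
print("PyTorch version:", torch.__version__)         # expected: 1.7.1
print("CUDA available:", torch.cuda.is_available())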

Getting Started

Training process.

Below we provide an example of training FixBi on the Office-31 dataset.

python main.py \
  -gpu 0,1 \
  -source amazon \
  -target dslr \
  -db_path $DATASET_PATH \
  -baseline_path $BASELINE_PATH \
  -save_path $SAVE_PATH
  • $DATASET_PATH denotes the location where the datasets are stored (the flags are sketched after this list).
  • $BASELINE_PATH is the path where pretrained baseline models (DANN, MSTN, etc.) are stored.
  • For DANN, the following code may be used: pytorch-DANN
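
For reference, the flags used above roughly correspond to an argument parser like the sketch below; this is only an illustration, and the actual definitions in main.py may differ.

import argparse

# Hypothetical sketch of the command-line arguments used above;
# see main.py in the repository for the authoritative definitions.
parser = argparse.ArgumentParser(description="FixBi training (sketch)")
parser.add_argument("-gpu", type=str, default="0,1")             # GPU ids
parser.add_argument("-source", type=str, default="amazon")       # source domain
parser.add_argument("-target", type=str, default="dslr")         # target domain
parser.add_argument("-db_path", type=str, required=True)         # dataset root ($DATASET_PATH)
parser.add_argument("-baseline_path", type=str, required=True)   # pretrained baseline, e.g. DANN
parser.add_argument("-save_path", type=str, required=True)       # output directory
args = parser.parse_args()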

Citation

If you use this code in your research, please cite:

@InProceedings{na2021fixbi,
  title     = {FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation},
  author    = {Jaemin Na and Heechul Jung and Hyung Jin Chang and Wonjun Hwang},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}