
Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation

Qiming Zhang*, Jing Zhang*, Wei Liu, Dacheng Tao

paper

Table of Contents

  • Introduction
  • Requirements
  • Usage
  • License
  • Notes

Introduction

This repository contains the CAG-UDA method described in the NeurIPS 2019 paper "Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation".

Requirements

The code is implemented with PyTorch 0.4.1, CUDA 9.0, and Python 3.6.7, and was trained on an NVIDIA Tesla V100 with 16 GB of memory. See the 'requirements.txt' file for the remaining dependencies.
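
Because the code targets an older toolchain, a quick version pre-flight check can save a failed run. A minimal sketch (the helper below is illustrative, not part of the repository; the pins come from the paragraph above):

```python
def version_at_least(installed, required):
    """Compare dotted version strings numerically, e.g. '0.4.1' >= '0.4.0'."""
    as_tuple = lambda s: tuple(int(part) for part in s.split("."))
    return as_tuple(installed) >= as_tuple(required)


# Pins reported by the authors: PyTorch 0.4.1, CUDA 9.0, Python 3.6.7.
PINNED_TORCH = "0.4.1"
```

For example, `version_at_least(torch.__version__.split("+")[0], PINNED_TORCH)` checks the installed PyTorch against the authors' pin (the `split` strips build suffixes such as `+cu90`).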

Usage

The commands below assume you are in the CAG-UDA root folder.

  1. Preparation:
  • Download the GTA5 dataset as the source domain and the Cityscapes dataset as the target domain.
  • Put them into a folder (dataset/GTA5, for example). Check carefully that the folder path contains no invalid characters.
  • Note that images in GTA5 have slightly different resolutions; this is handled in our code.
  • Download the pretrained models here and put them in the 'pretrained/' folder. There are four models, for warmup, stage 1, stage 2, and stage 3 respectively.
  2. Set up the config file 'config/adaptation_from_city_to_gta.yml':
  • Set the dataset paths in the config file (data:source:rootpath and data:target:rootpath).
  • Set the pretrained model paths in 'training:resume' and 'training:Pred_resume'; the 'Pred_resume' model is used to assign pseudo-labels.
  • See 'config/readme' for the meaning of each parameter in the config file.
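
The colon-separated names above (e.g. data:source:rootpath) denote nested keys in the YAML config. Once the file has been loaded into a dict (e.g. with yaml.safe_load), such a path can be resolved as sketched below; the helper and the config fragment are illustrative, not part of the repository:

```python
def config_lookup(config, key_path):
    """Resolve a colon-separated key path like 'data:source:rootpath' in a nested dict."""
    node = config
    for key in key_path.split(":"):
        node = node[key]
    return node


# Hypothetical fragment mirroring the keys named above; real values depend on your setup.
cfg = {"data": {"source": {"rootpath": "dataset/GTA5"},
                "target": {"rootpath": "dataset/Cityscapes"}}}
```

Here `config_lookup(cfg, "data:source:rootpath")` returns "dataset/GTA5".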
  3. Training
  • Run:
python train.py
  • During training, the generated files (including the log file) are written to the folder 'runs/..'.
  4. Evaluation
  • Set up the config file for testing (configs/test_from_city_to_gta.yml): (1) set the dataset paths as described above; (2) set the model path in 'test:path:'.
  • Run:
python test.py

to see the results.
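
The standard metric for Cityscapes semantic segmentation is mean intersection-over-union (mIoU). A minimal pure-Python sketch of the metric for reference; the repository's own evaluation code may differ in details such as the ignore label:

```python
def mean_iou(pred, target, num_classes, ignore_index=255):
    """Mean intersection-over-union over flat lists of per-pixel class ids.

    Pixels whose target is `ignore_index` are excluded; classes absent from
    both prediction and target do not contribute to the mean.
    """
    ious = []
    for c in range(num_classes):
        inter = union = 0
        for p, t in zip(pred, target):
            if t == ignore_index:
                continue
            inter += (p == c) and (t == c)
            union += (p == c) or (t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0
```

For example, `mean_iou([0, 1, 1], [0, 1, 0], num_classes=2)` gives 0.5 (both classes have IoU 1/2).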

  5. Constructing anchors
  • Set up the config file 'configs/CAC_from_gta_to_city.yml' as described above.
  • Run:
python cac.py
  • The anchor file will be written to 'run/cac_from_gta_to_city/..'
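
The anchors produced here drive pseudo-label assignment on the target domain: a target pixel receives the category of a sufficiently close anchor and is otherwise ignored. A simplified distance-based sketch (illustrative only; the paper's full criterion is more involved than a single threshold):

```python
def assign_pseudo_label(feature, anchors, max_dist=1.0):
    """Return the category of the nearest anchor, or -1 (ignore) if none is close.

    feature: feature vector for one target pixel (list of floats).
    anchors: dict mapping category id -> anchor vector of the same length.
    """
    best_cat, best_dist = -1, float("inf")
    for cat, anchor in anchors.items():
        dist = sum((f - a) ** 2 for f, a in zip(feature, anchor)) ** 0.5
        if dist < best_dist:
            best_cat, best_dist = cat, dist
    return best_cat if best_dist <= max_dist else -1


# Hypothetical two-category anchors for illustration.
demo_anchors = {0: [0.0, 0.0], 1: [10.0, 10.0]}
```

A feature near an anchor (e.g. [0.1, 0.0]) is labelled with that anchor's category; one far from every anchor (e.g. [5.0, 5.0]) is ignored.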

License

MIT

The code borrows heavily from the pytorch-semseg repository (https://github.com/meetshah1995/pytorch-semseg).

If you use this code and find it useful, please cite:

@inproceedings{zhang2019category,
  title={Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation},
  author={Zhang, Qiming and Zhang, Jing and Liu, Wei and Tao, Dacheng},
  booktitle={Advances in Neural Information Processing Systems},
  pages={433--443},
  year={2019}
}

Notes

The category anchors are stored in the file 'category_anchors'. Each anchor is computed as the mean of the source-domain features belonging to its category.
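
As a concrete illustration, the per-category mean described above can be sketched in a few lines (pure Python over per-pixel feature vectors; the repository computes the same statistic over CNN feature maps):

```python
from collections import defaultdict


def compute_category_anchors(features, labels):
    """Mean feature vector per category.

    features: per-pixel feature vectors (lists of floats) from the source domain.
    labels:   ground-truth category id for each feature vector.
    """
    sums, counts = {}, defaultdict(int)
    for vec, cat in zip(features, labels):
        if cat not in sums:
            sums[cat] = [0.0] * len(vec)
        sums[cat] = [s + v for s, v in zip(sums[cat], vec)]
        counts[cat] += 1
    return {cat: [s / counts[cat] for s in sums[cat]] for cat in sums}
```

For example, features [1, 2] and [3, 4] labelled with category 0 yield the anchor [2, 3] for that category.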

Contact: [email protected] / [email protected]
