Jia-Research-Lab / Pfenet

PFENet: Prior Guided Feature Enrichment Network for Few-shot Segmentation (TPAMI).

Programming Languages

python

Projects that are alternatives of or similar to Pfenet

Pointclouddatasets
3D point cloud datasets in HDF5 format, containing uniformly sampled 2048 points per shape.
Stars: ✭ 80 (-14.89%)
Mutual labels:  segmentation
Caffe Model
Caffe models (including classification, detection and segmentation) and deploy files for famous networks
Stars: ✭ 1,258 (+1238.3%)
Mutual labels:  segmentation
Kits19 Challege
KiTS19 (2019 Kidney Tumor Segmentation Challenge)
Stars: ✭ 91 (-3.19%)
Mutual labels:  segmentation
Prin
Pointwise Rotation-Invariant Network (AAAI 2020)
Stars: ✭ 81 (-13.83%)
Mutual labels:  segmentation
Dlcv for beginners
Companion code for the book 《深度学习与计算机视觉》 (Deep Learning and Computer Vision)
Stars: ✭ 1,244 (+1223.4%)
Mutual labels:  segmentation
Brats17
Patch-based 3D U-Net for brain tumor segmentation
Stars: ✭ 85 (-9.57%)
Mutual labels:  segmentation
Cnn Paper2
🎨🎨 Deep learning and convolutional neural network tutorials: image recognition, object detection, semantic segmentation, instance segmentation, face recognition, neural style transfer, GANs, and more 🎨🎨 https://dataxujing.github.io/CNN-paper2/
Stars: ✭ 77 (-18.09%)
Mutual labels:  segmentation
Changepoint
A place for the development version of the changepoint package on CRAN.
Stars: ✭ 90 (-4.26%)
Mutual labels:  segmentation
Vnet Tensorflow
Tensorflow implementation of the V-Net architecture for medical imaging segmentation.
Stars: ✭ 84 (-10.64%)
Mutual labels:  segmentation
Ai Challenger Retinal Edema Segmentation
DeepSeg: 4th place (top 3%) solution for the Retinal Edema Segmentation Challenge in the 2018 AI Challenger competition
Stars: ✭ 88 (-6.38%)
Mutual labels:  segmentation
Handy
Hand detection software built with OpenCV.
Stars: ✭ 81 (-13.83%)
Mutual labels:  segmentation
Litiv
C++ implementation pool for computer vision R&D projects.
Stars: ✭ 82 (-12.77%)
Mutual labels:  segmentation
3dunet Pytorch
3DUNet implemented with pytorch
Stars: ✭ 84 (-10.64%)
Mutual labels:  segmentation
Fcn.tensorflow
Tensorflow implementation of Fully Convolutional Networks for Semantic Segmentation (http://fcn.berkeleyvision.org)
Stars: ✭ 1,230 (+1208.51%)
Mutual labels:  segmentation
3dunet abdomen cascade
Stars: ✭ 91 (-3.19%)
Mutual labels:  segmentation
Seg By Interaction
Unsupervised instance segmentation via active robot interaction
Stars: ✭ 78 (-17.02%)
Mutual labels:  segmentation
Rgbd semantic segmentation pytorch
PyTorch Implementation of some RGBD Semantic Segmentation models.
Stars: ✭ 84 (-10.64%)
Mutual labels:  segmentation
Cag uda
(NeurIPS2019) Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation
Stars: ✭ 91 (-3.19%)
Mutual labels:  segmentation
Pixeltcn
Tensorflow Implementation of Pixel Transposed Convolutional Networks (PixelTCN and PixelTCL)
Stars: ✭ 91 (-3.19%)
Mutual labels:  segmentation
Niftynet
[unmaintained] An open-source convolutional neural networks platform for research in medical image analysis and image-guided therapy
Stars: ✭ 1,276 (+1257.45%)
Mutual labels:  segmentation

PFENet

This is the implementation of our paper PFENet: Prior Guided Feature Enrichment Network for Few-shot Segmentation, which has been accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).

Get Started

Environment

  • torch==1.4.0 (any torch version >= 1.0.1.post2 should work with this repo)
  • numpy==1.18.4
  • tensorboardX==1.8
  • cv2==4.2.0
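
A quick way to confirm the environment is a short version check (this is only a sanity-check sketch, not part of the repo):

import torch
import numpy
import tensorboardX
import cv2

# Print installed versions so they can be compared against the list above.
print("torch:", torch.__version__)
print("numpy:", numpy.__version__)
print("tensorboardX:", tensorboardX.__version__)
print("cv2:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())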

Datasets and Data Preparation

Please download the following datasets:

  • PASCAL-5i is built from PASCAL VOC 2012 and SBD; the VOC val images should be excluded from the list of training samples.

  • COCO 2014.

This code reads data from .txt list files in which each line contains the path of an image and the path of its corresponding label, separated by a space. An example is shown below:

image_path_1 label_path_1
image_path_2 label_path_2
image_path_3 label_path_3
...
image_path_n label_path_n
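
For reference, a list file in this format can be parsed with a few lines of Python (this is only an illustrative sketch; the repo's own dataset code handles the actual loading):

def read_data_list(list_path):
    # Each non-empty line holds "image_path label_path", separated by a space.
    pairs = []
    with open(list_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            image_path, label_path = line.split(" ", 1)
            pairs.append((image_path, label_path))
    return pairs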

Then update the train/val/test list paths in the config files.

[Update] We have uploaded the lists we use in our paper.

  • The train/val lists for COCO contain 82081 and 40137 images respectively. They are the default train/val splits of COCO.
  • The train/val lists for PASCAL-5i contain 5953 and 1449 images respectively. The train list should be voc_sbd_merge_noduplicate.txt and the val list is the original PASCAL VOC val list (val.txt).
To get voc_sbd_merge_noduplicate.txt:
  • We first merge the original VOC (voc_original_train.txt) and SBD (sbd_data.txt) training data.
  • [Important] sbd_data.txt does not overlap with the PASCAL VOC 2012 validation data.
  • The merged list (voc_sbd_merge.txt) is then processed by the script (duplicate_removal.py) to remove the duplicate images and labels.
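
Conceptually, the merge and duplicate removal can be sketched as follows (the function below is only illustrative and combines both steps; the actual voc_sbd_merge.txt / duplicate_removal.py workflow in this repo may differ in details):

def merge_without_duplicates(voc_list, sbd_list, out_list):
    # Keep the first occurrence of each image path; later duplicates are dropped.
    seen = set()
    merged = []
    for list_path in (voc_list, sbd_list):
        with open(list_path) as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                image_path = line.split(" ")[0]
                if image_path in seen:
                    continue
                seen.add(image_path)
                merged.append(line)
    with open(out_list, "w") as f:
        f.write("\n".join(merged) + "\n")

merge_without_duplicates("voc_original_train.txt", "sbd_data.txt", "voc_sbd_merge_noduplicate.txt")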

Run Demo / Test with Pretrained Models

  • Please download the pretrained models.

  • We provide 8 pre-trained models: 4 ResNet-50 based models for PASCAL-5i and 4 VGG-16 based models for COCO.

  • Update the config file by specifying the target split and the path (weights) for loading the checkpoint.

  • Execute mkdir initmodel at the root directory.

  • Download the ImageNet pretrained backbones and put them into the initmodel directory.

  • Then execute the command:

    sh test.sh {*dataset*} {*model_config*}

Example: Test PFENet with ResNet-50 on split 0 of PASCAL-5i:

sh test.sh pascal split0_resnet50
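
To evaluate all four PASCAL-5i folds of the ResNet-50 model in one go, a small driver script along these lines can be used (the config names follow the example above and are assumptions; adjust them to match your config files):

import subprocess

# Run test.sh once per PASCAL-5i fold with the ResNet-50 config for that split.
for split in range(4):
    subprocess.run(["sh", "test.sh", "pascal", f"split{split}_resnet50"], check=True)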

Train

Execute this command at the root directory:

sh train.sh {*dataset*} {*model_config*}

Related Repositories

This project is built upon a very early version of SemSeg: https://github.com/hszhao/semseg.

Other projects in few-shot segmentation:

Many thanks for their great work!

Citation

If you find this project useful, please consider citing:

@article{tian2020pfenet,
  title={Prior Guided Feature Enrichment Network for Few-Shot Segmentation},
  author={Tian, Zhuotao and Zhao, Hengshuang and Shu, Michelle and Yang, Zhicheng and Li, Ruiyu and Jia, Jiaya},
  journal={TPAMI},
  year={2020}
}