
mboudiaf / RePRI-for-Few-Shot-Segmentation

License: MIT
(CVPR 2021) Code for our method RePRI for Few-Shot Segmentation. Paper at http://arxiv.org/abs/2012.06166

Programming Languages

python, shell

Projects that are alternatives to or similar to RePRI-for-Few-Shot-Segmentation

Jumpserver
JumpServer is the world's first open-source bastion host, a professional operations security audit system compliant with the 4A standard.
Stars: ✭ 17,563 (+13410%)
Mutual labels:  coco
Graph Neural Net
Graph Convolutional Networks, Graph Attention Networks, Gated Graph Neural Net, Mixhop
Stars: ✭ 27 (-79.23%)
Mutual labels:  transductive-learning
CoCoC
C development system for (Nitr)OS9/6x09, with source
Stars: ✭ 22 (-83.08%)
Mutual labels:  coco
DiscoBox
The Official PyTorch Implementation of DiscoBox.
Stars: ✭ 95 (-26.92%)
Mutual labels:  coco
deep utils
An open-source toolkit full of handy functions, including the most-used models and utilities for deep-learning practitioners!
Stars: ✭ 73 (-43.85%)
Mutual labels:  coco
PyTorch-Spiking-YOLOv3
A PyTorch implementation of Spiking-YOLOv3. Two branches are provided, based on two common PyTorch implementations of YOLOv3 (ultralytics/yolov3 & eriklindernoren/PyTorch-YOLOv3), with support for Spiking-YOLOv3-Tiny at present.
Stars: ✭ 144 (+10.77%)
Mutual labels:  coco
Unidet
Object detection on multiple datasets with an automatically learned unified label space.
Stars: ✭ 217 (+66.92%)
Mutual labels:  coco
datumaro
Dataset Management Framework, a Python library and a CLI tool to build, analyze and manage Computer Vision datasets.
Stars: ✭ 274 (+110.77%)
Mutual labels:  coco
FCOS GluonCV
FCOS: Fully Convolutional One-Stage Object Detection.
Stars: ✭ 24 (-81.54%)
Mutual labels:  coco
OneTwoStep
A Cocos Creator (CC) game example: rapids racing.
Stars: ✭ 17 (-86.92%)
Mutual labels:  coco
Awesome-Few-shot
Awesome Few-shot learning
Stars: ✭ 50 (-61.54%)
Mutual labels:  few-shot-segmentation
cisip-FIRe
Fast Image Retrieval (FIRe) is an open source project to promote image retrieval research. It implements most of the major binary hashing methods to date, together with different popular backbone networks and public datasets.
Stars: ✭ 40 (-69.23%)
Mutual labels:  coco
datasetsome
Some APIs for dataset processing.
Stars: ✭ 37 (-71.54%)
Mutual labels:  coco
Yolo-to-COCO-format-converter
Yolo to COCO annotation format converter
Stars: ✭ 176 (+35.38%)
Mutual labels:  coco
coco-viewer
Simple COCO Viewer in Tkinter
Stars: ✭ 65 (-50%)
Mutual labels:  coco
Cocostuff10k
The official homepage of the (outdated) COCO-Stuff 10K dataset.
Stars: ✭ 248 (+90.77%)
Mutual labels:  coco
pylabel
Python library for computer vision labeling tasks. The core functionality is to translate bounding box annotations between different formats, for example from COCO to YOLO.
Stars: ✭ 171 (+31.54%)
Mutual labels:  coco
JSON2YOLO
Convert JSON annotations into YOLO format.
Stars: ✭ 222 (+70.77%)
Mutual labels:  coco
DetectionMetrics
Tool to evaluate deep-learning detection and segmentation models, and to create datasets
Stars: ✭ 66 (-49.23%)
Mutual labels:  coco
SegCaps
A clone of the original SegCaps source code, with enhancements for the MS COCO dataset.
Stars: ✭ 62 (-52.31%)
Mutual labels:  coco

Region Proportion Regularized Inference (RePRI) for Few-Shot Segmentation

Update 03/21: Paper accepted at CVPR 2021!

Code for the paper "Few-Shot Segmentation Without Meta-Learning: A Good Transductive Inference Is All You Need?", freely available at https://arxiv.org/abs/2012.06166.

Getting Started

Minimum requirements

  1. Software:
  • torch==1.7.0
  • numpy==1.18.4
  • cv2==4.2.0
  • pyyaml==5.3.1
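
To check that your environment matches these pinned versions, you can run a quick sanity check such as the one below (note: the cv2 module is provided by the opencv-python package):

# Quick sanity check for the pinned dependency versions.
import torch, numpy, cv2, yaml

print("torch :", torch.__version__)   # expect 1.7.0
print("numpy :", numpy.__version__)   # expect 1.18.4
print("cv2   :", cv2.__version__)     # expect 4.2.0
print("pyyaml:", yaml.__version__)    # expect 5.3.1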

For both training and testing, metrics monitoring is done through visdom_logger (https://github.com/luizgh/visdom_logger). To install this package with pip, use the following command:

pip install git+https://github.com/luizgh/visdom_logger.git
  2. Hardware: if you plan to train, you will need 24 GB of GPU memory (which can be split across GPUs). For testing only, a single 12 GB GPU is enough.

Download data

All pre-processed data from Google Drive

We provide the versions of Pascal-VOC 2012 and MS-COCO 2017 used in this work at https://drive.google.com/file/d/1Lj-oBzBNUsAqA9y65BDrSQxirV8S15Rk/view?usp=sharing. You can download the full .zip and directly extract it at the root of this repo.
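
If you prefer scripting this step, a minimal sketch using the third-party gdown package (an assumption, not a dependency of this repo) could look like this:

# Minimal sketch: download and extract the data archive at the repo root.
# gdown (pip install gdown) is an assumption, not a repo dependency.
import zipfile
import gdown

url = "https://drive.google.com/uc?id=1Lj-oBzBNUsAqA9y65BDrSQxirV8S15Rk"
gdown.download(url, "data.zip", quiet=False)  # fetch the .zip from Google Drive
with zipfile.ZipFile("data.zip") as zf:
    zf.extractall(".")  # produces the data/ folder described below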

If the previous download failed

Here is the structure of the data folder for you to reproduce:

data
├── coco
│   ├── annotations
│   ├── train
│   ├── train2014
│   ├── val
│   └── val2014
└── pascal
    ├── JPEGImages
    └── SegmentationClassAug

Pascal: The JPEG images are part of the PascalVOC 2012 toolkit, to be downloaded from the PascalVOC2012 website; SegmentationClassAug contains the pre-processed ground-truth masks.

Coco: Coco 2014 train and validation images and annotations can be downloaded from the Coco website. Once this is done, you will have to generate the subfolders coco/train and coco/val (ground-truth masks). Both folders can be generated by executing the python script data/coco/create_masks.py, which relies on the pycocotools package (available at https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools):


cd data/coco
python create_masks.py
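
For reference, the core of such a mask-generation script usually follows the pattern below. This is an illustrative sketch built on pycocotools, not the repo's actual create_masks.py; the paths and file naming are assumptions:

# Illustrative sketch: turn COCO instance annotations into per-image
# category-id masks (the repo's data/coco/create_masks.py is authoritative).
import os
import cv2
import numpy as np
from pycocotools.coco import COCO

def create_masks(ann_file, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    coco = COCO(ann_file)  # load and index the annotation JSON
    for img_id in coco.getImgIds():
        info = coco.loadImgs(img_id)[0]
        mask = np.zeros((info["height"], info["width"]), dtype=np.uint8)
        for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
            # annToMask returns a binary mask for one instance;
            # stamp its category id into the full-image mask
            mask[coco.annToMask(ann) == 1] = ann["category_id"]
        out_name = info["file_name"].replace(".jpg", ".png")
        cv2.imwrite(os.path.join(out_dir, out_name), mask)

create_masks("annotations/instances_train2014.json", "train")
create_masks("annotations/instances_val2014.json", "val")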

About the train/val splits

The train/val splits are directly provided in lists/. How they were obtained is explained at https://github.com/Jia-Research-Lab/PFENet.

Download pre-trained models

Pre-trained backbones

First, you will need to download the ImageNet pre-trained backbones at https://drive.google.com/drive/folders/1Hrz1wOxOZm4nIIS7UMJeL79AQrdvpj6v and put them under initmodel/. These will be used if you decide to train your models from scratch.

Pre-trained models

We directly provide the full pre-trained models at https://drive.google.com/file/d/1iuMAo5cJ27oBdyDkUI0JyGIEH60Ln2zm/view?usp=sharing. You can download them and directly extract them at the root of this repo. This includes Resnet50 and Resnet101 backbones on Pascal-5i, and Resnet50 on Coco-20i.

Overview of the repo

Data are located in data/. All the code is provided in src/. Default configuration files can be found in config_files/. Training and testing scripts are located in scripts/. The lists/ folder contains the train/validation splits for each dataset.

Training (optional)

If you want to use the pre-trained models, this step is optional. Otherwise, you can train your own models from scratch with the scripts/train.sh script, as follows.

bash scripts/train.sh {data} {fold} {[gpu_ids]} {layers}

For instance, if you want to train a Resnet50-based model on fold 0 of Pascal-5i on GPU 1, use:

bash scripts/train.sh pascal 0 [1] 50

Note that this code supports distributed training. If you want to train on multiple GPUs, simply replace [1] in the previous example with the list of gpu ids you want to use (e.g., [0,1]).

Testing

To test your models, use the scripts/test.sh script; the general syntax is:

bash scripts/test.sh {data} {shot} {[gpu_ids]} {layers}

This script will test successively on all folds of the current dataset. Specific commands for several experiments are given below.

Pascal-5i

Results (1-shot / 5-shot):

Method        Arch        Fold-0       Fold-1       Fold-2       Fold-3       Mean
RePRI         Resnet-50   60.2 / 64.5  67.0 / 70.8  61.7 / 71.7  47.5 / 60.3  59.1 / 66.8
Oracle-RePRI  Resnet-50   72.4 / 75.1  78.0 / 80.8  77.1 / 81.4  65.8 / 74.4  73.3 / 77.9
RePRI         Resnet-101  59.6 / 66.2  68.3 / 71.4  62.2 / 67.0  47.2 / 57.7  59.4 / 65.6
Oracle-RePRI  Resnet-101  73.9 / 76.8  79.7 / 81.7  76.1 / 79.5  65.1 / 74.5  73.7 / 78.1

Command:

bash scripts/test.sh pascal 1 [0] 50  # 1-shot
bash scripts/test.sh pascal 5 [0] 50  # 5-shot

Coco-20i

Results (1-shot / 5-shot):

Method        Arch       Fold-0       Fold-1       Fold-2       Fold-3       Mean
RePRI         Resnet-50  31.2 / 38.5  38.1 / 46.2  33.3 / 40.0  33.0 / 43.6  34.0 / 42.1
Oracle-RePRI  Resnet-50  49.3 / 51.5  51.4 / 60.8  38.2 / 54.7  41.6 / 55.2  45.1 / 55.5

Command:

bash scripts/test.sh coco 1 [0] 50  # 1-shot
bash scripts/test.sh coco 5 [0] 50  # 5-shot

Coco-20i -> Pascal-VOC

The folds used for cross-domain experiments are presented in the image below:

[Figure: fold composition for the Coco-20i -> Pascal-VOC cross-domain experiments]

Results (1-shot / 5-shot):

Method        Arch       Fold-0       Fold-1       Fold-2       Fold-3       Mean
RePRI         Resnet-50  52.2 / 56.5  64.3 / 68.2  64.8 / 70.0  71.6 / 76.2  63.2 / 67.7
Oracle-RePRI  Resnet-50  69.6 / 73.5  71.7 / 74.9  77.6 / 82.2  86.2 / 88.1  76.2 / 79.7

Command:

bash scripts/test.sh coco2pascal 1 [0] 50  # 1-shot
bash scripts/test.sh coco2pascal 5 [0] 50  # 5-shot

Monitoring metrics

This code offers two options to visualize and plot metrics during training:

Live monitoring with visdom

For both training and testing, you can monitor metrics using visdom_logger (https://github.com/luizgh/visdom_logger). To install this package, simply clone the repo and install it with pip:

git clone https://github.com/luizgh/visdom_logger.git
pip install -e visdom_logger

Then, you need to start a visdom server with:

python -m visdom.server -port 8098

Finally, add the option visdom_port 8098 in scripts/train.sh or scripts/test.sh, and metrics will be displayed at this port. You can then monitor them in your browser.
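
Under the hood, plotting to visdom boils down to calls like the ones below. This bare-visdom sketch is for illustration only; it bypasses the visdom_logger wrapper, and the metric values are dummies:

# Illustrative only: stream a dummy metric to the visdom server started above.
import numpy as np
import visdom

vis = visdom.Visdom(port=8098)  # connect to the running server
win = None
for step in range(100):
    loss = float(np.exp(-step / 30.0))  # dummy "training loss"
    win = vis.line(Y=np.array([loss]), X=np.array([step]), win=win,
                   update="append" if win else None,
                   opts={"title": "training loss"})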

Good old fashioned matplotlib

Alternatively, this code also saves important metrics (training loss and accuracy, validation loss and accuracy) as training progresses, in the form of numpy files (.npy). You can then plot these metrics with:

bash scripts/plot_training.sh model_ckpt
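
If you want to inspect the saved arrays yourself, a minimal matplotlib sketch follows; the .npy file names here are illustrative assumptions, not the repo's actual layout:

# Minimal sketch: plot saved .npy metrics (file names are assumptions).
import numpy as np
import matplotlib.pyplot as plt

train_loss = np.load("model_ckpt/train_loss.npy")  # assumed: one value per epoch
val_acc = np.load("model_ckpt/val_acc.npy")

plt.plot(train_loss, label="train loss")
plt.plot(val_acc, label="val accuracy")
plt.xlabel("epoch")
plt.legend()
plt.savefig("metrics.png")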

Contact

For further questions or details, please post an issue or directly reach out to Malik Boudiaf ([email protected]).

Acknowledgments

We gratefully thank the authors of https://github.com/Jia-Research-Lab/PFENet and https://github.com/hszhao/semseg, from which parts of our code are inspired.
