
arthurdouillard / CVPR2021_PLOP

License: MIT
Official code of CVPR 2021's PLOP: Learning without Forgetting for Continual Semantic Segmentation

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives to or similar to CVPR2021_PLOP

Context-Aware-Consistency
Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CVPR 2021)
Stars: ✭ 121 (+18.63%)
Mutual labels:  semantic-segmentation, cvpr2021
HoHoNet
"HoHoNet: 360 Indoor Holistic Understanding with Latent Horizontal Features" official pytorch implementation.
Stars: ✭ 65 (-36.27%)
Mutual labels:  semantic-segmentation, cvpr2021
BLIP
Official Implementation of CVPR2021 paper: Continual Learning via Bit-Level Information Preserving
Stars: ✭ 33 (-67.65%)
Mutual labels:  continual-learning, cvpr2021
cvpr-buzz
🐝 Explore Trending Papers at CVPR
Stars: ✭ 37 (-63.73%)
Mutual labels:  cvpr2021
SkeletonMerger
Code repository for paper `Skeleton Merger: an Unsupervised Aligned Keypoint Detector`.
Stars: ✭ 49 (-51.96%)
Mutual labels:  cvpr2021
Semantic-Segmentation-BiSeNet
Keras BiseNet architecture implementation
Stars: ✭ 55 (-46.08%)
Mutual labels:  semantic-segmentation
Involution
PyTorch reimplementation of the paper "Involution: Inverting the Inherence of Convolution for Visual Recognition" (2D and 3D Involution) [CVPR 2021].
Stars: ✭ 98 (-3.92%)
Mutual labels:  cvpr2021
pytorch-segmentation
🎨 Semantic segmentation models, datasets and losses implemented in PyTorch.
Stars: ✭ 1,184 (+1060.78%)
Mutual labels:  semantic-segmentation
RfDNet
Implementation of CVPR'21: RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction
Stars: ✭ 150 (+47.06%)
Mutual labels:  cvpr2021
MobileUNET
U-NET Semantic Segmentation model for Mobile
Stars: ✭ 39 (-61.76%)
Mutual labels:  semantic-segmentation
Automated-objects-removal-inpainter
Automated Object Removal Inpainter combines semantic segmentation and the EdgeConnect architecture, with minor changes, to remove specified objects (from a list of 20 object classes) from all input photos
Stars: ✭ 88 (-13.73%)
Mutual labels:  semantic-segmentation
CVPR2021-Papers-with-Code-Demo
Collects the latest CVPR results, including papers, code, and demo videos; recommendations are welcome!
Stars: ✭ 752 (+637.25%)
Mutual labels:  cvpr2021
temporal-depth-segmentation
Source code (train/test) accompanying the paper entitled "Veritatem Dies Aperit - Temporally Consistent Depth Prediction Enabled by a Multi-Task Geometric and Semantic Scene Understanding Approach" in CVPR 2019 (https://arxiv.org/abs/1903.10764).
Stars: ✭ 20 (-80.39%)
Mutual labels:  semantic-segmentation
HESIC
Official code of "Deep Homography for Efficient Stereo Image Compression" [CVPR 2021 oral]
Stars: ✭ 42 (-58.82%)
Mutual labels:  cvpr2021
flexinfer
A flexible Python front-end inference SDK based on TensorRT
Stars: ✭ 83 (-18.63%)
Mutual labels:  semantic-segmentation
FixBi
FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation (CVPR 2021)
Stars: ✭ 48 (-52.94%)
Mutual labels:  cvpr2021
deeplabv3plus-keras
deeplabv3plus (Google's new algorithm for semantic segmentation) in keras: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
Stars: ✭ 67 (-34.31%)
Mutual labels:  semantic-segmentation
PhotographicImageSynthesiswithCascadedRefinementNetworks-Pytorch
Photographic Image Synthesis with Cascaded Refinement Networks - Pytorch Implementation
Stars: ✭ 63 (-38.24%)
Mutual labels:  semantic-segmentation
MetaBIN
[CVPR2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Stars: ✭ 58 (-43.14%)
Mutual labels:  cvpr2021
DeepLab-V3
Google DeepLab V3 for Image Semantic Segmentation
Stars: ✭ 103 (+0.98%)
Mutual labels:  semantic-segmentation

PLOP: Learning without Forgetting for Continual Semantic Segmentation

Paper Conference Youtube

Visualization on VOC 15-1

This repository contains all of our code. It is a modified version of Cermelli et al.'s repository.

@inproceedings{douillard2021plop,
  title={PLOP: Learning without Forgetting for Continual Semantic Segmentation},
  author={Douillard, Arthur and Chen, Yifu and Dapogny, Arnaud and Cord, Matthieu},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}

Requirements

You need to install the following libraries:

  • Python (3.6)
  • Pytorch (1.8.1+cu102)
  • torchvision (0.9.1+cu102)
  • tensorboardX (1.8)
  • apex (0.1)
  • matplotlib (3.3.1)
  • numpy (1.17.2)
  • inplace-abn (1.0.7)
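
After installing, you can quickly check that the key versions match (a minimal sanity-check sketch, not part of this repository):

import torch
import torchvision

# For the setup above, expect 1.8.1+cu102, 0.9.1+cu102, and 10.2 respectively.
print(torch.__version__)
print(torchvision.__version__)
print(torch.version.cuda)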

Note also that apex seems to work only with certain CUDA versions, so try to install Pytorch (and torchvision) built against CUDA 10.2. You'll probably need anaconda instead of pip in that case, sorry! Do:

conda install -y pytorch torchvision cudatoolkit=10.2 -c pytorch
cd apex
pip3 install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

Note that while the code should run without mixed precision (apex), some users have reported lower performance without it, so we recommend trying with it!
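
To verify that apex was built correctly, a quick sanity check (a standalone sketch, not part of this repository) is to try importing its mixed-precision module:

# Confirm apex is importable and was installed with its extensions.
try:
    from apex import amp  # apex's mixed-precision API
    print("apex.amp imported successfully; mixed precision is available.")
except ImportError as err:
    print(f"apex is missing or was not built correctly: {err}")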

Dataset

Two scripts are available to download ADE20k and Pascal-VOC 2012; please see the data folder. For Cityscapes, you need to download it yourself, because you have to ask permission from the dataset holders; rest assured, it's only a formality, and you will receive the download link by email within a few days.

Performance on VOC

How to perform training

The most important file is run.py, which is in charge of starting the training or test procedure. To run it, simply use the following command:

python -m torch.distributed.launch --nproc_per_node=<num_GPUs> run.py --data_root <data_folder> --name <exp_name> .. other args ..

By default, training uses pretrained weights for the chosen backbone, which are searched for in the pretrained folder of the project. We used the pretrained model released by the authors of In-place ABN (as stated in the paper), which can be found here: link. I've also uploaded those weights there: link.

Since the pretrained weights were produced on multiple GPUs, every key of the network's state dict carries a "module." prefix. If you're working on a single GPU, be sure to remove this prefix so the weights are compatible with this code (simply rename each key with key = key[7:]). If you don't want to use pretrained weights, please use --no-pretrained.
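
For example, here is a minimal sketch of that renaming (the checkpoint filenames are placeholders; adjust them to the files you downloaded):

import torch

# Load the multi-GPU checkpoint; the filename here is a placeholder.
checkpoint = torch.load("pretrained/backbone.pth.tar", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)

# Drop the "module." prefix added by (Distributed)DataParallel.
state_dict = {
    (key[7:] if key.startswith("module.") else key): value
    for key, value in state_dict.items()
}

torch.save({"state_dict": state_dict}, "pretrained/backbone_single_gpu.pth.tar")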

There are many options (you can see them all by using the --help option), but we arranged the code to make it straightforward to test the reported methods. Leaving all other parameters at their defaults, you can replicate the experiments by setting the following options.

  • please specify the data folder using: --data_root <data_root>
  • dataset: --dataset voc (Pascal-VOC 2012) | ade (ADE20K)
  • task: --task <task>, where tasks are
    • 15-5, 15-5s, 19-1 (VOC), 100-50, 100-10, 50, 100-50b, 100-10b, 50b (ADE, b indicates the order)
  • step (each step is run separately): --step <N>, where N is the step number, starting from 0
  • (only for Pascal-VOC) disjoint is the default setup; to enable the overlapped setup, use --overlapped
  • learning rate: --lr 0.01 (for step 0) | 0.001 (for step > 0)
  • batch size: --batch_size <24/num_GPUs> (e.g., 12 with 2 GPUs, for a total batch size of 24)
  • epochs: --epochs 30 (Pascal-VOC 2012) | 60 (ADE20K)
  • method: --method <method name>, where names are
    • FT, LWF, LWF-MC, ILT, EWC, RW, PI, MIB

For all the details, please refer to the information provided by the --help option.

Example commands

LwF on the 100-50 setting of ADE20K, step 0:

python -m torch.distributed.launch --nproc_per_node=2 run.py --data_root data --batch_size 12 --dataset ade --name LWF --task 100-50 --step 0 --lr 0.01 --epochs 60 --method LWF

MIB on the 50b setting of ADE20K, step 2:

python -m torch.distributed.launch --nproc_per_node=2 run.py --data_root data --batch_size 12 --dataset ade --name MIB --task 50b --step 2 --lr 0.001 --epochs 60 --method MIB

LWF-MC on 15-5 disjoint setting of VOC, step 1:

python -m torch.distributed.launch --nproc_per_node=2 run.py --data_root data --batch_size 12 --dataset voc --name LWF-MC --task 15-5 --step 1 --lr 0.001 --epochs 30 --method LWF-MC

PLOP on 15-1 overlapped setting of VOC, step 1:

python -m torch.distributed.launch --nproc_per_node=2 run.py --data_root data --batch_size 12 --dataset voc --name PLOP --task 15-5s --overlapped --step 1 --lr 0.001 --epochs 30 --method FT --pod local --pod_factor 0.01 --pod_logits --pseudo entropy --threshold 0.001 --classif_adaptive_factor --init_balanced --pod_options '{"switch": {"after": {"extra_channels": "sum", "factor": 0.0005, "type": "local"}}}'
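
Note the single quotes around the --pod_options value: it is a JSON string, and single-quoting it in the shell preserves the inner double quotes. A quick way to check that the string parses (a standalone sketch, not part of the repository):

import json

pod_options = '{"switch": {"after": {"extra_channels": "sum", "factor": 0.0005, "type": "local"}}}'
print(json.loads(pod_options))  # prints the parsed dict if the quoting is intact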

Once you have trained the model, you can see the results on tensorboard (we run the test after the whole training), or you can evaluate it by using the same script and parameters but adding the flag

--test

which will skip the whole training procedure and evaluate the model on the test data.

Or, more simply, you can use one of the provided scripts, which will launch every step of a continual training run.

For example, do

bash scripts/voc/plop_15-1.sh

Note that you will need to modify those scripts to point to the folder where your data is stored.
