nianticlabs / panoptic-forecasting

License: other
[CVPR 2021] Forecasting the panoptic segmentation of future video frames

Programming Languages

python
shell

Projects that are alternatives to or similar to panoptic-forecasting

Seg Uncertainty
IJCAI2020 & IJCV 2020 🌇 Unsupervised Scene Adaptation with Memory Regularization in vivo
Stars: ✭ 202 (+359.09%)
Mutual labels:  semantic-segmentation, cityscapes
K-Net
[NeurIPS2021] Code Release of K-Net: Towards Unified Image Segmentation
Stars: ✭ 434 (+886.36%)
Mutual labels:  semantic-segmentation, panoptic-segmentation
Lightnetplusplus
LightNet++: Boosted Light-weighted Networks for Real-time Semantic Segmentation
Stars: ✭ 218 (+395.45%)
Mutual labels:  semantic-segmentation, cityscapes
Hrnet Semantic Segmentation
The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation for HRNet. https://arxiv.org/abs/1908.07919
Stars: ✭ 2,369 (+5284.09%)
Mutual labels:  semantic-segmentation, cityscapes
semantic-segmentation
SOTA Semantic Segmentation Models in PyTorch
Stars: ✭ 464 (+954.55%)
Mutual labels:  semantic-segmentation, cityscapes
Cgnet
CGNet: A Light-weight Context Guided Network for Semantic Segmentation [IEEE Transactions on Image Processing 2020]
Stars: ✭ 186 (+322.73%)
Mutual labels:  semantic-segmentation, cityscapes
multiclass-semantic-segmentation
Experiments with UNET/FPN models and cityscapes/kitti datasets [Pytorch]
Stars: ✭ 96 (+118.18%)
Mutual labels:  semantic-segmentation, cityscapes
Nas Segm Pytorch
Code for Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells, CVPR '19
Stars: ✭ 126 (+186.36%)
Mutual labels:  semantic-segmentation, cityscapes
FaPN
[ICCV 2021] FaPN: Feature-aligned Pyramid Network for Dense Image Prediction
Stars: ✭ 173 (+293.18%)
Mutual labels:  semantic-segmentation, panoptic-segmentation
EDANet
Implementation details for EDANet
Stars: ✭ 34 (-22.73%)
Mutual labels:  semantic-segmentation, cityscapes
Fchardnet
Fully Convolutional HarDNet for Segmentation in Pytorch
Stars: ✭ 150 (+240.91%)
Mutual labels:  semantic-segmentation, cityscapes
AttaNet
AttaNet for real-time semantic segmentation.
Stars: ✭ 37 (-15.91%)
Mutual labels:  semantic-segmentation, cityscapes
Bisenetv2 Tensorflow
Unofficial tensorflow implementation of real-time scene image segmentation model "BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation"
Stars: ✭ 139 (+215.91%)
Mutual labels:  semantic-segmentation, cityscapes
Fastseg
📸 PyTorch implementation of MobileNetV3 for real-time semantic segmentation, with pretrained weights & state-of-the-art performance
Stars: ✭ 202 (+359.09%)
Mutual labels:  semantic-segmentation, cityscapes
Contrastiveseg
Exploring Cross-Image Pixel Contrast for Semantic Segmentation
Stars: ✭ 135 (+206.82%)
Mutual labels:  semantic-segmentation, cityscapes
Decouplesegnets
Implementation of Our ECCV2020-work: Improving Semantic Segmentation via Decoupled Body and Edge Supervision
Stars: ✭ 232 (+427.27%)
Mutual labels:  semantic-segmentation, cityscapes
Chainer Pspnet
PSPNet in Chainer
Stars: ✭ 76 (+72.73%)
Mutual labels:  semantic-segmentation, cityscapes
Dabnet
Depth-wise Asymmetric Bottleneck for Real-time Semantic Segmentation (BMVC2019)
Stars: ✭ 109 (+147.73%)
Mutual labels:  semantic-segmentation, cityscapes
IAST-ECCV2020
IAST: Instance Adaptive Self-training for Unsupervised Domain Adaptation (ECCV 2020) https://teacher.bupt.edu.cn/zhuchuang/en/index.htm
Stars: ✭ 84 (+90.91%)
Mutual labels:  semantic-segmentation, cityscapes
SegFormer
Official PyTorch implementation of SegFormer
Stars: ✭ 1,264 (+2772.73%)
Mutual labels:  semantic-segmentation, cityscapes

Panoptic Segmentation Forecasting

Colin Graber, Grace Tsai, Michael Firman, Gabriel Brostow, Alexander Schwing - CVPR 2021

[Link to paper]

Animated gif showing visual comparison of our model's results compared against the hybrid baseline

We propose to study the novel task of ‘panoptic segmentation forecasting’: given a set of observed frames, the goal is to forecast the panoptic segmentation for a set of unobserved frames. We also propose a first approach to forecasting future panoptic segmentations. In contrast to typical semantic forecasting, we model the motion of individual object instances and the background separately. This makes instance information persistent during forecasting, and allows us to understand the motion of each moving object.

Image presenting the model diagram

⚙️ Setup

Dependencies

Install the code using the following command: pip install -e ./
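
For reference, a minimal setup sketch is shown below; the GitHub URL is inferred from the project name and is an assumption, not taken from this page:

# Minimal setup sketch; repository URL assumed from nianticlabs/panoptic-forecasting.
git clone https://github.com/nianticlabs/panoptic-forecasting.git
cd panoptic-forecasting
pip install -e ./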

Data

  • To run this code, the gtFine_trainvaltest dataset will need to be downloaded/decompressed from the Cityscapes website into the data/cityscapes/ directory. If you would like to visualize predictions, you will also need to download the leftImg8bit dataset.
  • A few additional Cityscapes ground-truth files will also need to be generated. This can be done by running the following commands:
    • python -m cityscapesscripts.preparation.createPanopticImgs --dataset-folder data/cityscapes/gtFine/
    • CITYSCAPES_DATASET=data/cityscapes/ python -m cityscapesscripts.preparation.createTrainIdLabelImgs
  • The remainder of the required data can be downloaded using the script download_data.sh. By default, everything is downloaded into the data/ directory.
  • Training the background model requires generating a version of the semantic segmentation annotations with foreground regions removed. This can be done by running the script scripts/preprocessing/remove_fg_from_gt.sh (the full preparation sequence is collected in the sketch after this list).
  • Training the foreground model additionally requires a pretrained Mask R-CNN model, which can be found at this link. It should be saved as pretrained_models/fg/mask_rcnn_pretrain.pkl.
  • Training the background model additionally requires a pretrained HarDNet model, which can be found at this link. It should be saved as pretrained_models/bg/hardnet70_cityscapes_model.pkl.
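
Collected together, the preparation steps above amount to the following sequence (a sketch, assuming gtFine_trainvaltest and, optionally, leftImg8bit have already been extracted into data/cityscapes/):

# Data preparation sketch; run from the project root.
python -m cityscapesscripts.preparation.createPanopticImgs --dataset-folder data/cityscapes/gtFine/
CITYSCAPES_DATASET=data/cityscapes/ python -m cityscapesscripts.preparation.createTrainIdLabelImgs
bash download_data.sh                              # remaining data (and pretrained models), into data/ by default
bash scripts/preprocessing/remove_fg_from_gt.sh    # foreground-removed labels for background training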

Running our code

The scripts directory contains scripts for training and evaluating the foreground, background, and egomotion models. Note that these scripts should be run from the root project directory; a sketch of the full pipeline ordering follows the list. Specifically:

  • scripts/odom/run_odom_train.sh trains the egomotion prediction model.
  • scripts/odom/export_odom.sh exports the odometry predictions, which can then be used during evaluation by the other models.
  • scripts/bg/run_bg_train.sh trains the background prediction model.
  • scripts/bg/run_export_bg_val.sh exports predictions made by the background model, using input point clouds reprojected with the predicted egomotion.
  • scripts/fg/run_fg_train.sh trains the foreground prediction model.
  • scripts/fg/run_fg_eval_panoptic.sh produces the final panoptic segmentation predictions from the trained foreground model and the exported background predictions, again using the predicted egomotion as input. Note that the background export script must be run before this one so that the full panoptic segmentation outputs can be generated. Also, if you re-run this script, first delete the predictions in experiments/pretrained_fg/exported_panoptics_*_val/; otherwise, the generated JSON file will not contain entries for the sequences where no foreground instances are present.
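
Putting the stages together, one end-to-end ordering looks like the sketch below (each stage consumes the exports of the previous ones):

# Full pipeline sketch; run from the project root.
bash scripts/odom/run_odom_train.sh          # 1. train the egomotion model
bash scripts/odom/export_odom.sh             # 2. export odometry predictions
bash scripts/bg/run_bg_train.sh              # 3. train the background model
bash scripts/bg/run_export_bg_val.sh         # 4. export background predictions
bash scripts/fg/run_fg_train.sh              # 5. train the foreground model
rm -rf experiments/pretrained_fg/exported_panoptics_*_val/   # clear stale outputs before re-running
bash scripts/fg/run_fg_eval_panoptic.sh      # 6. final panoptic segmentation predictions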

We provide our pretrained foreground, background, and egomotion prediction models. The data download script (download_data.sh) additionally downloads these models into the pretrained_models/ directory.

✏️ 📄 Citation

If you find our work relevant to yours, please consider citing our paper:

@inproceedings{graber-2021-panopticforecasting,
  title     = {Panoptic Segmentation Forecasting},
  author    = {Colin Graber and
               Grace Tsai and
               Michael Firman and
               Gabriel Brostow and
               Alexander Schwing},
  booktitle = {Computer Vision and Pattern Recognition ({CVPR})},
  year      = {2021}
}

👩‍⚖️ License

Copyright © Niantic, Inc. 2021. Patent Pending. All rights reserved. Please see the license file for terms.
