
yz93 / anchor-diff-VOS

Licence: other
Anchor Diffusion for Unsupervised Video Object Segmentation

Programming Languages

Python
139335 projects - #7 most used programming language
CUDA
1817 projects
C++
36643 projects - #6 most used programming language

Projects that are alternatives of or similar to anchor-diff-VOS

e-osvos
Implementation of "Make One-Shot Video Object Segmentation Efficient Again” and the semi-supervised fine-tuning "e-OSVOS" approach (NeurIPS 2020).
Stars: ✭ 31 (-72.57%)
Mutual labels:  video-object-segmentation
generative pose
Code for our ICCV 19 paper : Monocular 3D Human Pose Estimation by Generation and Ordinal Ranking
Stars: ✭ 63 (-44.25%)
Mutual labels:  iccv2019
DualStudent
Code for Paper ''Dual Student: Breaking the Limits of the Teacher in Semi-Supervised Learning'' [ICCV 2019]
Stars: ✭ 106 (-6.19%)
Mutual labels:  iccv2019
Siammask
[CVPR2019] Fast Online Object Tracking and Segmentation: A Unifying Approach
Stars: ✭ 3,205 (+2736.28%)
Mutual labels:  video-object-segmentation
Seg2Eye
Official implementation of "Content-Consistent Generation of Realistic Eyes with Style", ICCVW 2019
Stars: ✭ 26 (-76.99%)
Mutual labels:  iccv2019
RMNet
Implementation of "Efficient Regional Memory Network for Video Object Segmentation". (Xie et al., CVPR 2021)
Stars: ✭ 76 (-32.74%)
Mutual labels:  video-object-segmentation
Deep Text Recognition Benchmark
Text recognition (optical character recognition) with deep learning methods.
Stars: ✭ 2,665 (+2258.41%)
Mutual labels:  iccv2019
ICCV2021-Paper-Code-Interpretation
A collection of ICCV 2021/2019/2017 papers, code, interpretations, and livestreams, curated by the 极市 (CVMart) team
Stars: ✭ 2,022 (+1689.38%)
Mutual labels:  iccv2019
SiamMaskCpp
C++ Implementation of SiamMask
Stars: ✭ 92 (-18.58%)
Mutual labels:  video-object-segmentation
UniTrack
[NeurIPS'21] Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking and Segmentation (MOTS), Pose Tracking, Video Instance Segmentation (VIS), and class-agnostic MOT (e.g. TAO dataset).
Stars: ✭ 293 (+159.29%)
Mutual labels:  video-object-segmentation
TA3N
[ICCV 2019 Oral] TA3N: https://github.com/cmhungsteve/TA3N (Most updated repo)
Stars: ✭ 45 (-60.18%)
Mutual labels:  iccv2019
SMIT
PyTorch implementation of Stochastic Multi-Label Image-to-image Translation (SMIT), ICCV Workshops 2019.
Stars: ✭ 37 (-67.26%)
Mutual labels:  iccv2019
ViNet
ViNet: Pushing the Limits of Visual Modality for Audio-Visual Saliency Prediction
Stars: ✭ 36 (-68.14%)
Mutual labels:  video-saliency
Mask-Propagation
[CVPR 2021] MiVOS - Mask Propagation module. Reproduced STM (and better) with training code 🌟. Semi-supervised video object segmentation evaluation.
Stars: ✭ 71 (-37.17%)
Mutual labels:  video-object-segmentation
GIS-RAmap
PyTorch implementation of the CVPR 2021 oral paper (best paper candidate), "Guided Interactive Video Object Segmentation Using Reliability-Based Attention Maps"
Stars: ✭ 36 (-68.14%)
Mutual labels:  video-object-segmentation
Fcos
FCOS: Fully Convolutional One-Stage Object Detection (ICCV'19)
Stars: ✭ 2,839 (+2412.39%)
Mutual labels:  iccv2019
GraphMemVOS
Video Object Segmentation with Episodic Graph Memory Networks (ECCV2020 spotlight)
Stars: ✭ 92 (-18.58%)
Mutual labels:  video-object-segmentation
MiVOS
[CVPR 2021] Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion. Semi-supervised VOS as well!
Stars: ✭ 302 (+167.26%)
Mutual labels:  video-object-segmentation
Tensorflow-YOLACT
Implementation of the paper "YOLACT: Real-time Instance Segmentation" in TensorFlow 2
Stars: ✭ 97 (-14.16%)
Mutual labels:  iccv2019
platonicgan
Escaping Plato’s Cave: 3D Shape from Adversarial Rendering [ICCV 2019]
Stars: ✭ 40 (-64.6%)
Mutual labels:  iccv2019

Anchor diffusion VOS

This repository contains code for the paper

Anchor Diffusion for Unsupervised Video Object Segmentation
Zhao Yang*, Qiang Wang*, Luca Bertinetto, Weiming Hu, Song Bai, Philip H.S. Torr
ICCV 2019 | PDF | BibTeX

Setup

Code tested with Ubuntu 16.04, Python 3.7, PyTorch 0.4.1, and CUDA 9.2.

  • Clone the repository and change to the new directory.
git clone https://github.com/yz93/anchor-diff-VOS.git && cd anchor-diff-VOS
  • Save the working directory to an environment variable for reference.
export AnchorDiff=$PWD
  • Set up a new conda environment.
    • For installing PyTorch 0.4.1 with different versions of CUDA, see the official previous-versions page: https://pytorch.org/get-started/previous-versions/
conda create -n anchordiff python=3.7 pytorch=0.4.1 cuda92 -c pytorch
source activate anchordiff
pip install -r requirements.txt
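  • Optionally, verify the installation; with a working GPU setup the following should print 0.4.1 and True.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"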

Data preparation

  • Download the DAVIS 2017 data set.
cd $AnchorDiff
wget https://data.vision.ee.ethz.ch/csergi/share/davis/DAVIS-2017-trainval-480p.zip
unzip DAVIS-2017-trainval-480p.zip -d data
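  • After unzipping, the frames and annotations should sit under data/DAVIS, which contains the Annotations, ImageSets, and JPEGImages folders; a quick check:
ls data/DAVIS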
  • Download the pretrained snapshots and extract them.
cd $AnchorDiff
wget www.robots.ox.ac.uk/~yz/snapshots.zip
unzip snapshots.zip -d snapshots
  • (Skip this step if you do not intend to apply the instance pruning described in the paper.) Download the detection results we computed with ExtremeNet and generate the pruning masks; a sketch of the pruning-mask idea follows these commands.
cd $AnchorDiff
wget www.robots.ox.ac.uk/~yz/detection.zip
unzip detection.zip
python detection_filter.py
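For illustration only, a minimal sketch of turning detection boxes into a binary pruning mask. This is not the repository's detection_filter.py; the box format, helper name, and score threshold are assumptions.

import numpy as np

def boxes_to_prune_mask(boxes, height, width, score_thresh=0.5):
    # Union of confident detection boxes as a binary keep-mask.
    # boxes: iterable of (x1, y1, x2, y2, score) -- assumed format.
    mask = np.zeros((height, width), dtype=np.uint8)
    for x1, y1, x2, y2, score in boxes:
        if score < score_thresh:
            continue
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), width), min(int(y2), height)
        mask[y1:y2, x1:x2] = 1
    return mask

# Predicted foreground outside every confident box can then be suppressed:
# pruned = segmentation * boxes_to_prune_mask(detections, height, width)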

Evaluation on DAVIS 2016

  • Examples of evaluating mean IoU on the validation set, with the following options:
    • save-mask (default 'True') for saving the predicted masks,
    • ms-mirror (default 'False') for multi-scale and mirrored input (slow),
    • inst-prune (default 'False') for instance pruning,
    • model (default 'ad') for selecting one of the models in Table 1 of the paper,
    • eval-sal (default 'False') for computing the saliency measures MAE and F-score (sketched after the commands below).
cd $AnchorDiff
python eval.py
python eval.py --ms-mirror True --inst-prune True --eval-sal True
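For reference, a minimal sketch of the three reported measures on a single frame. This is not the repository's evaluation code; the beta^2 = 0.3 weighting for the F-score follows the common saliency convention and is an assumption here.

import numpy as np

def mean_iou(pred, gt):
    # pred, gt: boolean masks of identical shape
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def mae(sal, gt):
    # sal: saliency map with values in [0, 1]; gt: binary ground truth
    return np.abs(sal - gt.astype(np.float64)).mean()

def f_score(pred, gt, beta2=0.3):
    # Weighted harmonic mean of precision and recall with beta^2 = 0.3
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall
    return (1 + beta2) * precision * recall / denom if denom > 0 else 0.0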

License

The MIT License.
