
omkar13 / Masktrack

License: MIT
PyTorch implementation of the MaskTrack method, which is the baseline of several state-of-the-art video object segmentation methods

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Masktrack

Niftynet
[unmaintained] An open-source convolutional neural networks platform for research in medical image analysis and image-guided therapy
Stars: ✭ 1,276 (+1060%)
Mutual labels:  segmentation
Retina Features
Project for segmentation of blood vessels, microaneurysms and hard exudates in fundus images.
Stars: ✭ 95 (-13.64%)
Mutual labels:  segmentation
Gacnet
Pytorch implementation of 'Graph Attention Convolution for Point Cloud Segmentation'
Stars: ✭ 103 (-6.36%)
Mutual labels:  segmentation
Kits19 Challege
KiTS19——2019 Kidney Tumor Segmentation Challenge
Stars: ✭ 91 (-17.27%)
Mutual labels:  segmentation
Cag uda
(NeurIPS2019) Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation
Stars: ✭ 91 (-17.27%)
Mutual labels:  segmentation
Setr Pytorch
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
Stars: ✭ 96 (-12.73%)
Mutual labels:  segmentation
Brats17
Patch-based 3D U-Net for brain tumor segmentation
Stars: ✭ 85 (-22.73%)
Mutual labels:  segmentation
Pointnet Keras
Keras implementation for Pointnet
Stars: ✭ 110 (+0%)
Mutual labels:  segmentation
Pfenet
PFENet: Prior Guided Feature Enrichment Network for Few-shot Segmentation (TPAMI).
Stars: ✭ 94 (-14.55%)
Mutual labels:  segmentation
Crfasrnn pytorch
CRF-RNN PyTorch version http://crfasrnn.torr.vision
Stars: ✭ 102 (-7.27%)
Mutual labels:  segmentation
3dunet abdomen cascade
Stars: ✭ 91 (-17.27%)
Mutual labels:  segmentation
Changepoint
A place for the development version of the changepoint package on CRAN.
Stars: ✭ 90 (-18.18%)
Mutual labels:  segmentation
Dataset
Crop/Weed Field Image Dataset
Stars: ✭ 98 (-10.91%)
Mutual labels:  segmentation
Ai Challenger Retinal Edema Segmentation
DeepSeg : 4th place(top3%) solution for Retinal Edema Segmentation Challenge in 2018 AI challenger
Stars: ✭ 88 (-20%)
Mutual labels:  segmentation
Fcn train
The code includes all the file that you need in the training stage for FCN
Stars: ✭ 104 (-5.45%)
Mutual labels:  segmentation
3dunet Pytorch
3DUNet implemented with pytorch
Stars: ✭ 84 (-23.64%)
Mutual labels:  segmentation
Deepbrainseg
Fully automatic brain tumour segmentation using Deep 3-D convolutional neural networks
Stars: ✭ 96 (-12.73%)
Mutual labels:  segmentation
Deep Learning Based Ecg Annotator
Annotation of ECG signals using deep learning with TensorFlow's Keras
Stars: ✭ 110 (+0%)
Mutual labels:  segmentation
Edafa
Test Time Augmentation (TTA) wrapper for computer vision tasks: segmentation, classification, super-resolution, ... etc.
Stars: ✭ 107 (-2.73%)
Mutual labels:  segmentation
Segmentation
Tensorflow implementation : U-net and FCN with global convolution
Stars: ✭ 101 (-8.18%)
Mutual labels:  segmentation

MaskTrack

The MaskTrack method is the baseline for state-of-the-art methods in video object segmentation such as Video Object Segmentation with Re-identification and [Lucid Data Dreaming for Multiple Object Tracking](https://arxiv.org/abs/1703.09554). The top three methods in the DAVIS 2017 challenge were based on the MaskTrack method. However, no open source code is available for the MaskTrack method. Here I provide the MaskTrack method with the following specifications:

  1. The code achieves a global score of 0.466 on the DAVIS 2017 test-dev dataset, with a J-mean of 0.440 and an F-mean of 0.492.
  2. The code handles the multiple objects per sequence present in DAVIS 2017.
  3. Data generation code in MATLAB for offline training on DAVIS 17 train+val and online training on DAVIS 17 test is also included, so all of the code is packaged together.
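
To make the propagation idea concrete, below is a minimal, illustrative sketch of the MaskTrack-style input construction: the previous frame's predicted mask is stacked with the current RGB frame as a fourth input channel, and the network refines it into the current frame's mask. The network, tensor shapes, and function names here are placeholders for illustration only, not this repository's actual code (which uses a DeepLab ResNet-101 backbone).

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the DeepLab ResNet-101 used in this repo,
# modified to accept 4 input channels (RGB + previous-frame mask).
class SegmentationNet(nn.Module):
    def __init__(self, num_labels=2):
        super(SegmentationNet, self).__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_labels, 1),
        )

    def forward(self, x):
        return self.net(x)

def propagate(model, frames, init_mask):
    """Propagate a first-frame mask through a clip, one frame at a time.

    frames: (T, 3, H, W) float tensor; init_mask: (1, H, W) binary mask.
    """
    masks, prev_mask = [], init_mask
    for frame in frames:
        # The previous mask becomes the 4th input channel for the current frame.
        x = torch.cat([frame, prev_mask], dim=0).unsqueeze(0)   # (1, 4, H, W)
        logits = model(x)
        prev_mask = logits.max(dim=1)[1].float()                # (1, H, W) predicted mask
        masks.append(prev_mask)
    return torch.stack(masks)                                   # (T, 1, H, W)

# Toy usage: 5 random frames and an empty first-frame mask.
model = SegmentationNet(num_labels=2)
out = propagate(model, torch.randn(5, 3, 64, 64), torch.zeros(1, 64, 64))
```

For the multi-object case in DAVIS 2017, this binary propagation would be run once per object and the per-object masks merged afterwards.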

Getting Started

Machine configuration used for testing:

  1. Two GeForce GTX 1080 Ti cards with 11 GB of memory each.
  2. 32 GB of CPU RAM (though only about 11 GB is required).

Offline training is done on the DAVIS 2017 train data; online training and testing are done on the DAVIS 2017 test-dev dataset. I recommend using conda to download and manage the environment.

Download the DeepLab ResNet-101 COCO-pretrained model from here and place it in the 'training/pretrained' folder.

If you want to skip offline training and directly perform online training and testing, download the offline-trained model from here and place it in the 'training/offline_save_dir/lr_0.001_wd_0.001' folder.
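
As an illustration of how a downloaded checkpoint would be used, here is a hedged sketch of loading it with torch.load. The filename is hypothetical (use whatever name the downloaded file actually has), and the tiny model is a stand-in for the repo's DeepLab ResNet-101.

```python
import torch
import torch.nn as nn

# Hypothetical filename -- substitute the actual name of the downloaded file.
OFFLINE_MODEL = 'training/offline_save_dir/lr_0.001_wd_0.001/offline_model.pth'

# Stand-in model; the real one is DeepLab ResNet-101 with 4 input channels.
model = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 2, 1))

state_dict = torch.load(OFFLINE_MODEL, map_location='cpu')
model.load_state_dict(state_dict, strict=False)  # strict=False: placeholder layer names differ
```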

Prerequisites

What you need to install to run the software, and how to install it:

Software used:

  1. PyTorch 0.3.1
  2. MATLAB 2017b
  3. Python 2.7

Dependencies: Create a conda environment using the training/deeplab_resnet_env.yml file. Use: conda env create -f training/deeplab_resnet_env.yml

If you are not using conda as a package manager, refer to the yml file and install the libraries manually.

Running the code

Please refer to the following instructions:

A. Offline training data generation

B. Setting paths for Python files

  • Go to training/path.py and change the paths returned by the following methods, as sketched after this list:
  1. db_root_dir(): Point this to the DAVIS 17 test dataset (download from: https://data.vision.ee.ethz.ch/csergi/share/davis/DAVIS-2017-test-dev-480p.zip)
  2. db_offline_train_root_dir(): Point this to the DAVIS 17 train+val dataset
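
For reference, a hedged sketch of what training/path.py might look like after editing; the two method names come from the step above, while the directory locations are only examples of where the datasets might be unpacked.

```python
# training/path.py (sketch) -- example locations only; point these at your own copies.

def db_root_dir():
    # DAVIS 2017 test-dev dataset (unpacked from the zip linked above)
    return '/data/DAVIS-2017-test-dev-480p/DAVIS/'

def db_offline_train_root_dir():
    # DAVIS 2017 train+val dataset used for offline training
    return '/data/DAVIS-2017-trainval/DAVIS/'
```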

C. Offline training

  • Run training/train_offline.py with appropriate parameters. Recommended: --NoLabels 2 --lr 0.001 --wtDecay 0.001 --epochResume 0 --epochs 15 --batchSize 6
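
Purely as an illustration of how the recommended flags relate to a standard PyTorch training setup, here is a sketch with a placeholder model and random data; it is not the repo's train_offline.py, only the flag names are taken from above.

```python
import argparse

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Flag names mirror the recommended parameters above.
parser = argparse.ArgumentParser()
parser.add_argument('--NoLabels', type=int, default=2)
parser.add_argument('--lr', type=float, default=0.001)
parser.add_argument('--wtDecay', type=float, default=0.001)
parser.add_argument('--epochResume', type=int, default=0)
parser.add_argument('--epochs', type=int, default=15)
parser.add_argument('--batchSize', type=int, default=6)
args = parser.parse_args()

# Placeholder network and random data standing in for DeepLab ResNet-101 and
# the offline DAVIS 17 train+val samples (4-channel inputs, binary targets).
model = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, args.NoLabels, 1))
data = TensorDataset(torch.randn(12, 4, 64, 64),
                     torch.randint(0, args.NoLabels, (12, 64, 64)))
loader = DataLoader(data, batch_size=args.batchSize, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=0.9,
                      weight_decay=args.wtDecay)

for epoch in range(args.epochResume, args.epochs):
    for images, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
```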

D. Online Training data generation

  • Go to generating_masks/online_training/script_DAVIS17_test_dev.m and change required paths.
  • Run the script with the number of iterations (e.g. 12, can be varied) as the argument: run script_DAVIS17_test_dev(12)

E. Online Training and testing

  • Run training/train_online.py with appropriate parameters. Recommended: --NoLabels 2 --lr 0.0025 --wtDecay 0.0001 --seqName aerobatics --parentEpochResume 8 --epochsOnline 10 --noIterations 5 --batchSize 2
  • Results are stored in training/models17_DAVIS17_test/lr_xxx_wd_xxx/seq_name
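
As a small illustrative helper, the snippet below builds the results path for the recommended run. It assumes the lr_xxx_wd_xxx part follows the same naming convention as the offline directory 'lr_0.001_wd_0.001' above; that convention is an assumption, not something the repo guarantees.

```python
import os

# Assumed naming convention, modeled on 'lr_0.001_wd_0.001' used for the offline model.
lr, wt_decay, seq_name = 0.0025, 0.0001, 'aerobatics'
results_dir = os.path.join('training', 'models17_DAVIS17_test',
                           'lr_{}_wd_{}'.format(lr, wt_decay), seq_name)
print(results_dir)  # training/models17_DAVIS17_test/lr_0.0025_wd_0.0001/aerobatics
```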

License

This project is licensed under the MIT License - see the LICENSE file for details

Acknowledgments

This code was produced during my internship at Nanyang Technological University under Prof. Guosheng Lin. I would like to thank him for providing access to the GPUs.

  1. The code for generation of masks was based on: www.mpi-inf.mpg.de/masktrack
  2. The code for Deeplab Resnet was taken from: https://github.com/isht7/pytorch-deeplab-resnet
  3. Some of the dataloader code was based on: https://github.com/fperazzi/davis-2017
  4. Template of Readme.md file: https://gist.github.com/PurpleBooth/109311bb0361f32d87a2

I would like to thank K.K. Maninis for providing the code at https://github.com/kmaninis/OSVOS-PyTorch for reference.
