
tobiasfshr / Motsfusion

License: MIT
MOTSFusion: Track to Reconstruct and Reconstruct to Track

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Motsfusion

Maskfusion
MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects
Stars: ✭ 404 (+242.37%)
Mutual labels:  segmentation, reconstruction
Awesome Gan For Medical Imaging
Awesome GAN for Medical Imaging
Stars: ✭ 1,814 (+1437.29%)
Mutual labels:  segmentation, reconstruction
Cilantro
A lean C++ library for working with point cloud data
Stars: ✭ 577 (+388.98%)
Mutual labels:  segmentation, reconstruction
Co Fusion
Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects
Stars: ✭ 400 (+238.98%)
Mutual labels:  segmentation, reconstruction
All About The Gan
All About the GANs(Generative Adversarial Networks) - Summarized lists for GAN
Stars: ✭ 630 (+433.9%)
Mutual labels:  segmentation, reconstruction
Awesome Visual Slam
📚 The list of vision-based SLAM / Visual Odometry open source, blogs, and papers
Stars: ✭ 1,336 (+1032.2%)
Mutual labels:  reconstruction
Edafa
Test Time Augmentation (TTA) wrapper for computer vision tasks: segmentation, classification, super-resolution, ... etc.
Stars: ✭ 107 (-9.32%)
Mutual labels:  segmentation
Retina Features
Project for segmentation of blood vessels, microaneurysm and hardexudates in fundus images.
Stars: ✭ 95 (-19.49%)
Mutual labels:  segmentation
Cag uda
(NeurIPS2019) Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation
Stars: ✭ 91 (-22.88%)
Mutual labels:  segmentation
Dstl unet
Dstl Satellite Imagery Feature Detection
Stars: ✭ 117 (-0.85%)
Mutual labels:  segmentation
Masktrack
Implementation of MaskTrack method which is the baseline of several state-of-the-art video object segmentation methods in Pytorch
Stars: ✭ 110 (-6.78%)
Mutual labels:  segmentation
Fcn train
The code includes all the file that you need in the training stage for FCN
Stars: ✭ 104 (-11.86%)
Mutual labels:  segmentation
Setr Pytorch
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
Stars: ✭ 96 (-18.64%)
Mutual labels:  segmentation
Pointnet Keras
Keras implementation for Pointnet
Stars: ✭ 110 (-6.78%)
Mutual labels:  segmentation
Deepbrainseg
Fully automatic brain tumour segmentation using Deep 3-D convolutional neural networks
Stars: ✭ 96 (-18.64%)
Mutual labels:  segmentation
Nnunet
No description or website provided.
Stars: ✭ 2,231 (+1790.68%)
Mutual labels:  segmentation
Pfenet
PFENet: Prior Guided Feature Enrichment Network for Few-shot Segmentation (TPAMI).
Stars: ✭ 94 (-20.34%)
Mutual labels:  segmentation
Gacnet
Pytorch implementation of 'Graph Attention Convolution for Point Cloud Segmentation'
Stars: ✭ 103 (-12.71%)
Mutual labels:  segmentation
Singleviewreconstruction
Official Code: 3D Scene Reconstruction from a Single Viewport
Stars: ✭ 110 (-6.78%)
Mutual labels:  reconstruction
Crfasrnn pytorch
CRF-RNN PyTorch version http://crfasrnn.torr.vision
Stars: ✭ 102 (-13.56%)
Mutual labels:  segmentation

MOTSFusion: Track to Reconstruct and Reconstruct to Track

(Figure: Method Overview)

Introduction

This repository contains the source code for the paper "Track to Reconstruct and Reconstruct to Track" (arXiv preprint).

News

This paper has been accepted both as a conference paper at ICRA 2020 and as a journal paper in Robotics and Automation Letters (RA-L)!

Requirements

The code was tested on:

  • CUDA 9, cuDNN 7
  • Tensorflow 1.13
  • Python 3.6

Note: Some of the external code invoked by the precompute script has its own requirements (see References). Please refer to the corresponding repositories for the requirements of those components.
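
A quick way to confirm the tested versions above is a small check script. The following is only an illustrative sketch and is not part of this repository:

# Illustrative environment check (not included in the repository).
import sys
import tensorflow as tf  # the code was tested with TensorFlow 1.13

print("Python:", sys.version.split()[0])              # expected: 3.6.x
print("TensorFlow:", tf.__version__)                  # expected: 1.13.x
print("GPU available:", tf.test.is_gpu_available())   # requires a CUDA 9 / cuDNN 7 build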

Instructions

First, download the datasets listed in the References section (stereo image pairs) as well as the detections, and adapt the config files in ./configs to your desired setup. The file "config_default" runs both 2D and 3D tracking on the KITTI MOTS validation set. Next, download our pretrained segmentation network and extract it into './external/BB2SegNet'. Before running the main script, run:

python precompute.py -config ./configs/config_default
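
Optionally, the paths referenced above can be checked before the precompute step starts. This is a minimal sketch (not part of the repository), assuming it is run from the repository root:

# Illustrative pre-flight check (not included in the repository).
import os

required = [
    "./configs/config_default",  # config passed to precompute.py and main.py
    "./external/BB2SegNet",      # pretrained segmentation network goes here
]
for path in required:
    print(("ok     " if os.path.exists(path) else "MISSING"), path)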

Once the precompute script has finished, all necessary information, such as segmentations, optical flow, disparity, and the corresponding point clouds, should have been computed. Now you can run the tracker using:

python main.py -config ./configs/config_default

After the tracker has completed all sequences, results will be evaluated automatically.
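
For intuition about the data produced in the precompute step, disparity maps relate to 3D points via standard rectified-stereo geometry. The sketch below is illustrative only and is not the repository's implementation; the focal length f, stereo baseline b, and principal point (cx, cy) are assumed to come from the KITTI calibration files:

import numpy as np

def disparity_to_pointcloud(disparity, f, b, cx, cy):
    # Back-project a rectified disparity map (H x W) into camera-frame 3D points.
    # Illustrative only; not the code used by MOTSFusion.
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                    # disparity 0 marks invalid pixels
    z = np.zeros_like(disparity, dtype=np.float64)
    z[valid] = f * b / disparity[valid]      # depth from disparity
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    points = np.stack([x, y, z], axis=-1)    # H x W x 3
    return points[valid]                     # N x 3 point cloud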

Results

We release our results on the MOTS test set (cars, pedestrians), as well as the detections and segmentations (test-RRC, test-TRCNN, train/val-RRC, train/val-TRCNN).

References

Citation

@article{luiten2019track,
  title={Track to Reconstruct and Reconstruct to Track},
  author={Luiten, Jonathon and Fischer, Tobias and Leibe, Bastian},
  journal={arXiv:1910.00130},
  year={2019}
}

License

MIT License
