
vt-vl-lab / DF-Net

License: MIT
[ECCV 2018] DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency

Programming Languages

Python

Projects that are alternatives of or similar to DF-Net

Pwc Net pytorch
pytorch implementation of "PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume"
Stars: ✭ 111 (-41.58%)
Mutual labels:  optical-flow
Logosdistort
Uses matrix3d perspective distortions to create 3d scenes in the browser. Inspired by HelloMonday
Stars: ✭ 142 (-25.26%)
Mutual labels:  depth
Goleft
goleft is a collection of bioinformatics tools distributed under MIT license in a single static binary
Stars: ✭ 175 (-7.89%)
Mutual labels:  depth
Densematchingbenchmark
Dense Matching Benchmark
Stars: ✭ 120 (-36.84%)
Mutual labels:  optical-flow
Flownet2 Docker
Dockerfile and runscripts for FlowNet 2.0 (estimation of optical flow)
Stars: ✭ 137 (-27.89%)
Mutual labels:  optical-flow
Frvsr
Frame-Recurrent Video Super-Resolution (official repository)
Stars: ✭ 157 (-17.37%)
Mutual labels:  optical-flow
Back2future.pytorch
Unsupervised Learning of Multi-Frame Optical Flow with Occlusions
Stars: ✭ 104 (-45.26%)
Mutual labels:  optical-flow
Pdi
PDI: Panorama Depth Image
Stars: ✭ 180 (-5.26%)
Mutual labels:  depth
Deep Learning For Tracking And Detection
Collection of papers, datasets, code and other resources for object tracking and detection using deep learning
Stars: ✭ 1,920 (+910.53%)
Mutual labels:  optical-flow
Py Denseflow
Extract TVL1 optical flows in python (multi-process && multi-server)
Stars: ✭ 159 (-16.32%)
Mutual labels:  optical-flow
Netdef models
Repository for different network models related to flow/disparity (ECCV 18)
Stars: ✭ 130 (-31.58%)
Mutual labels:  optical-flow
Video2tfrecord
Easily convert RGB video data (e.g. .avi) to the TensorFlow tfrecords file format for training e.g. a NN in TensorFlow. This implementation allows to limit the number of frames per video to be stored in the tfrecords.
Stars: ✭ 137 (-27.89%)
Mutual labels:  optical-flow
Spynet
Spatial Pyramid Network for Optical Flow
Stars: ✭ 158 (-16.84%)
Mutual labels:  optical-flow
Vcn
Volumetric Correspondence Networks for Optical Flow, NeurIPS 2019.
Stars: ✭ 118 (-37.89%)
Mutual labels:  optical-flow
Hidden Two Stream
Caffe implementation for "Hidden Two-Stream Convolutional Networks for Action Recognition"
Stars: ✭ 179 (-5.79%)
Mutual labels:  optical-flow
Unsupervised Depth Completion Visual Inertial Odometry
Tensorflow implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
Stars: ✭ 109 (-42.63%)
Mutual labels:  depth
Tfvos
Semi-Supervised Video Object Segmentation (VOS) with Tensorflow. Includes implementation of *MaskRNN: Instance Level Video Object Segmentation (NIPS 2017)* as part of the NIPS Paper Implementation Challenge.
Stars: ✭ 151 (-20.53%)
Mutual labels:  optical-flow
Opticalflow visualization
Python optical flow visualization following Baker et al. (ICCV 2007) as used by the MPI-Sintel challenge
Stars: ✭ 183 (-3.68%)
Mutual labels:  optical-flow
Clover
ROS-based framework and RPi image to control PX4-powered drones 🍀
Stars: ✭ 177 (-6.84%)
Mutual labels:  optical-flow
Pysteps
Python framework for short-term ensemble prediction systems.
Stars: ✭ 159 (-16.32%)
Mutual labels:  optical-flow

DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency

A TensorFlow re-implementation of DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency. There are some minor differences from the model described in the paper:

  • The model in the paper uses 2 frames as input, while this code uses 5 frames (you may use any odd number of frames as input, though you would need to tune the hyper-parameters)
  • The FlowNet in the paper is pre-trained on SYNTHIA, while this one is pre-trained on Cityscapes
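An odd frame count is required so that a unique middle frame can serve as the target view. A minimal sketch of the index arithmetic (`target_index` is an illustrative name, not a function from this codebase):

```python
def target_index(seq_length: int) -> int:
    """Return the index of the center (target) frame in a sequence.

    An odd sequence length is required so a unique middle frame exists;
    the remaining frames act as source views.
    """
    if seq_length % 2 == 0:
        raise ValueError("sequence length must be odd")
    return seq_length // 2

# With the 5-frame setting used here, frame index 2 is the target view.
print(target_index(5))  # 2
```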

Please see the project page for more details.

Prerequisites

This codebase was developed and tested with the following settings:

Python 3.6
TensorFlow 1.2.0
g++ 4.x
CUDA 8.0
Ubuntu 14.04
4 Tesla K80 GPUs (12 GB memory each)

Some Python packages you might not have

pypng
opencv-python

Installation

  1. Clone this repository
git clone git@github.com:vt-vl-lab/DF-Net.git
cd DF-Net
  2. Prepare models and training data
chmod +x ./misc/prepare.sh
./misc/prepare.sh

NOTE: Frames belonging to the KITTI 2012/2015 train/test scenes have been excluded from the provided training set. Adding these frames back to the training set would improve the performance of DepthNet.

Data preparation (for evaluation)

After accepting their license conditions, download the KITTI raw, KITTI flow 2012, and KITTI flow 2015 datasets.

Then you can create soft-links for them:

cd dataset
mkdir KITTI
cd KITTI

ln -s /path/to/KITTI/raw raw
ln -s /path/to/KITTI/2012 flow2012
ln -s /path/to/KITTI/2015 flow2015

(Optional) You can add those KITTI 2012/2015 frames back to the training set by commenting out lines 81–85 in data/kitti/kitti_raw_loader.py and running

python data/prepare_train_data.py --dataset_name='kitti_raw_eigen' --dump_root=/path/to/save/ --num_threads=4

Training

export CUDA_VISIBLE_DEVICES=0,1,2,3
python train_df.py --dataset_dir=/path/to/your/data --checkpoint_dir=/path/to/save/your/model

The first time you run training, the custom CUDA operations for FlowNet will be compiled. If you run into any compilation issues, please check core/UnFlow/src/e2eflow/ops.py

  • Line 31: specify your CUDA path
  • Line 32: add -I $CUDA_HOME/include, where $CUDA_HOME is your CUDA directory
  • Line 38: specify your g++ version

Testing

Test DepthNet on KITTI raw (you can use the validation set to select the best model):

python test_kitti_depth.py --dataset_dir=/path/to/your/data --output_dir=/path/to/save/your/prediction --ckpt_file=/path/to/your/ckpt --split="val or test"
python kitti_eval/eval_depth.py --pred_file=/path/to/your/prediction --split="val or test"
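The depth evaluation reports the standard error metrics from Eigen et al. (abs rel, sq rel, rms, log rms, a1/a2/a3). A minimal NumPy sketch of how these are commonly computed (the function name and signature are illustrative, not this repo's API):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard monocular-depth error metrics (Eigen et al. style).

    gt, pred: 1-D arrays of valid, positive depth values in meters.
    """
    gt, pred = np.asarray(gt, float), np.asarray(pred, float)
    # Threshold accuracy: ratio between prediction and ground truth.
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": float(np.mean(np.abs(gt - pred) / gt)),
        "sq_rel": float(np.mean((gt - pred) ** 2 / gt)),
        "rms": float(np.sqrt(np.mean((gt - pred) ** 2))),
        "log_rms": float(np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))),
        "a1": float(np.mean(thresh < 1.25)),
        "a2": float(np.mean(thresh < 1.25 ** 2)),
        "a3": float(np.mean(thresh < 1.25 ** 3)),
    }
```

A perfect prediction yields abs_rel of 0 and a1/a2/a3 of 1.0; the pre-trained numbers below follow these definitions.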

Test FlowNet on KITTI 2012 (please use the training set):

python test_flownet_2012.py --dataset_dir=/path/to/your/data --ckpt_file=/path/to/your/ckpt

Test FlowNet on KITTI 2015 (please use the training set):

python test_flownet_2015.py --dataset_dir=/path/to/your/data --ckpt_file=/path/to/your/ckpt
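Both flow tests report average endpoint error (EPE), and the KITTI 2015 test additionally reports the F1 outlier rate. A rough NumPy sketch under the usual KITTI definitions (an outlier is a pixel whose EPE exceeds 3 px and 5% of the ground-truth magnitude; the function name is illustrative):

```python
import numpy as np

def flow_epe_f1(gt, pred):
    """gt, pred: arrays of shape (N, 2) holding (u, v) flow vectors."""
    gt, pred = np.asarray(gt, float), np.asarray(pred, float)
    epe = np.linalg.norm(gt - pred, axis=1)     # per-pixel endpoint error
    mag = np.linalg.norm(gt, axis=1)            # ground-truth flow magnitude
    # KITTI 2015 outlier (Fl) definition: large absolute AND relative error.
    outlier = (epe > 3.0) & (epe > 0.05 * mag)
    return float(epe.mean()), float(outlier.mean())
```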

NOTE: For KITTI 2012/2015:

  • To generate visualization colormaps for the training set, specify output_dir
  • To test on the test set and upload the results to the KITTI server, specify output_dir and run on the test set

Pre-trained model performance

You should get the following numbers if you use the pre-trained model pretrained/dfnet.

DepthNet (KITTI raw test set)

| abs rel | sq rel | rms    | log rms | a1     | a2     | a3     |
| ------- | ------ | ------ | ------- | ------ | ------ | ------ |
| 0.1452  | 1.2904 | 5.6115 | 0.2194  | 0.8114 | 0.9394 | 0.9767 |

FlowNet (KITTI 2012/2015 training set)

| KITTI 2012 EPE | KITTI 2015 EPE | KITTI 2015 F1 |
| -------------- | -------------- | ------------- |
| 3.1052         | 7.4482         | 0.2695        |

Citation

If you find this code useful for your research, please consider citing the following paper:

@inproceedings{zou2018dfnet,
author    = {Zou, Yuliang and Luo, Zelun and Huang, Jia-Bin}, 
title     = {DF-Net: Unsupervised Joint Learning of Depth and Flow using Cross-Task Consistency}, 
booktitle = {European Conference on Computer Vision},
year      = {2018}
}

Acknowledgement

Code is heavily borrowed from several great works, including SfMLearner, monodepth, and UnFlow. We thank Shih-Yang Su for the code review.
