
ppliuboy / DDFlow

License: MIT

Programming Languages

Python

Projects that are alternatives to or similar to DDFlow

  • Hidden Two Stream: Caffe implementation for "Hidden Two-Stream Convolutional Networks for Action Recognition". Stars: ✭ 179 (+77.23%). Mutual labels: unsupervised-learning, optical-flow
  • Joint-Motion-Estimation-and-Segmentation: [MICCAI'18] Joint Learning of Motion Estimation and Segmentation for Cardiac MR Image Sequences. Stars: ✭ 45 (-55.45%). Mutual labels: optical-flow, unsupervised-learning
  • ARFlow: The official PyTorch implementation of the paper "Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation". Stars: ✭ 134 (+32.67%). Mutual labels: unsupervised-learning, optical-flow
  • Back2future.pytorch: Unsupervised Learning of Multi-Frame Optical Flow with Occlusions. Stars: ✭ 104 (+2.97%). Mutual labels: unsupervised-learning, optical-flow
  • back2future: Unsupervised Learning of Multi-Frame Optical Flow with Occlusions. Stars: ✭ 39 (-61.39%). Mutual labels: optical-flow, unsupervised-learning
  • GuidedNet: Caffe implementation for "Guided Optical Flow Learning". Stars: ✭ 28 (-72.28%). Mutual labels: optical-flow, unsupervised-learning
  • UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss. Stars: ✭ 239 (+136.63%). Mutual labels: unsupervised-learning, optical-flow
  • CC: Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation. Stars: ✭ 348 (+244.55%). Mutual labels: unsupervised-learning, optical-flow
  • PCLNet: Unsupervised Learning for Optical Flow Estimation Using Pyramid Convolution LSTM. Stars: ✭ 29 (-71.29%). Mutual labels: optical-flow, unsupervised-learning
  • deepOF: TensorFlow implementation for "Guided Optical Flow Learning". Stars: ✭ 26 (-74.26%). Mutual labels: optical-flow, unsupervised-learning
  • SelFlow: Self-Supervised Learning of Optical Flow. Stars: ✭ 319 (+215.84%). Mutual labels: unsupervised-learning, optical-flow
  • VoxelMorph: Unsupervised Learning for Image Registration. Stars: ✭ 1,057 (+946.53%). Mutual labels: unsupervised-learning, optical-flow
  • Karate Club: An API-Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020). Stars: ✭ 1,190 (+1078.22%). Mutual labels: unsupervised-learning
  • pytoflow: The Python version of toflow → https://github.com/anchen1011/toflow. Stars: ✭ 83 (-17.82%). Mutual labels: optical-flow
  • Self-Supervised Learning Overview: 📜 Self-Supervised Learning from Images: an up-to-date reading list. Stars: ✭ 73 (-27.72%). Mutual labels: unsupervised-learning
  • Concrete Autoencoders. Stars: ✭ 68 (-32.67%). Mutual labels: unsupervised-learning
  • FICM: 🎮 [IJCAI'20][ICLR'19 Workshop] Flow-based Intrinsic Curiosity Module. Playing Super Mario with an RL agent and FICM! Stars: ✭ 92 (-8.91%). Mutual labels: optical-flow
  • Grounder: Implementation of "Grounding of Textual Phrases in Images by Reconstruction" in TensorFlow. Stars: ✭ 83 (-17.82%). Mutual labels: unsupervised-learning
  • Insta-DM: Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021). Stars: ✭ 67 (-33.66%). Mutual labels: unsupervised-learning
  • SINE: A PyTorch Implementation of "SINE: Scalable Incomplete Network Embedding" (ICDM 2018). Stars: ✭ 67 (-33.66%). Mutual labels: unsupervised-learning

DDFlow: Learning Optical Flow with Unlabeled Data Distillation

The official TensorFlow implementation of DDFlow (AAAI 2019)

Requirements

  • Software: The code was developed with Python 2 or Python 3, OpenCV 3, TensorFlow 1.8, and Anaconda. Anaconda is optional; without it, you may need to install any missing packages yourself (a quick check is sketched below).
  • Hardware: A GPU with 12 GB of memory or more is recommended. A multi-GPU version is also implemented; please use multiple GPUs when available.
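
Before running anything, a quick sanity check can confirm the environment matches the versions listed above. This is a minimal sketch; the expected version numbers in the comments reflect the requirements above, not hard constraints:

```python
# Minimal environment check for the requirements listed above.
import cv2
import tensorflow as tf

print(cv2.__version__)             # expect something like 3.x
print(tf.__version__)              # expect something like 1.8.x
print(tf.test.is_gpu_available())  # True if TensorFlow can use a GPU
```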

Usage

By default, running "python main.py" produces the testing results using the pre-trained KITTI model.

Please refer to the configuration file template config for a detailed description of the different operating modes.
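
The option names used throughout this README (mode, training_mode, data_list_file, img_dir, is_restore_model, restore_model) all live in that template. Assuming the template is an INI-style file readable by Python's configparser (an assumption about the format, not a guarantee), its contents can be inspected like this:

```python
# Illustrative only: print every option the `config` template defines,
# assuming it is an INI-style file readable by configparser.
import configparser

cfg = configparser.ConfigParser()
cfg.read('config')
for section in cfg.sections():
    for key, value in cfg.items(section):
        print('%s.%s = %s' % (section, key, value))
```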

Testing

  • Edit config, set mode = test.
  • Create or edit a data list file in which the first column is the first image name, the second column is the second image name, and the third column is the name under which the result is saved. Edit config and set data_list_file to the path of this file (see the sketch after this list).
  • Edit config and set img_dir to the directory containing your images.
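
For concreteness, here is one way such a list could be generated for a directory of consecutively numbered frames. The helper name, the whitespace-separated column layout, and the paths are illustrative assumptions, not part of the project's API:

```python
import os

def write_test_list(img_dir, list_path):
    # Pair consecutive frames: column 1 is the first image, column 2 is
    # the second image, column 3 is the name used to save the result.
    frames = sorted(f for f in os.listdir(img_dir)
                    if f.lower().endswith(('.png', '.jpg')))
    with open(list_path, 'w') as f:
        for first, second in zip(frames, frames[1:]):
            save_name = os.path.splitext(first)[0] + '_flow.png'
            f.write('%s %s %s\n' % (first, second, save_name))

# Hypothetical paths, for illustration only.
write_test_list('data/kitti/images', 'data/kitti/test_list.txt')
```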

Training

  • Datasets: Please download Flying Chairs, KITTI 2012 (multi-view extension), KITTI 2015 (multi-view extension) and Sintel.
  • Here we take the KITTI 2015 dataset as an example; other datasets follow a similar training procedure. If you want to fully reproduce the results from scratch, please follow the training procedure in the paper.
  • To reduce computation cost, this implementation fixes the teacher model and pre-computes its optical flow and occlusion maps, which differs slightly from the implementation in the paper. Under this setting, we achieve similar performance at much lower computation cost.
  • Step 1: Training without data distillation
    • Edit config, set mode = train.
    • Set training_mode=no_data_distillation
    • Train the model without census transform or occlusion handling for 100k steps (or more). Specifically, edit the function create_train_op and set optim_loss to losses['abs_robust_mean']['no_occlusion']; a sketch of these stage-dependent loss selections follows this list. If you want to add regularization, add it to optim_loss.
    • If you need to restore the model from a checkpoint, set is_restore_model=True and set restore_model to the checkpoint directory.
    • Train the model with both census transform and occlusion handling for 300k steps (or more). Specifically, edit create_train_op and set optim_loss to losses['census']['occlusion'].
  • Step 2: Generate reliable optical flow and occlusion map
    • Edit config, set mode = generate_fake_flow_occlusion.
    • Run the code to generate both the flow and the occlusion maps.
  • Step 3: Training with data distillation
    • Edit config, set mode = train.
    • Set training_mode=data_distillation
    • Train the model with census transform, occlusion handling, and data distillation for 300k steps (or more). Specifically, edit create_train_op and set optim_loss to losses['census']['occlusion'] + losses['distillation']['data_distillation'].
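
To make the three stages concrete, here is a minimal sketch of how the stage-dependent optim_loss selections above might look inside create_train_op. The losses key paths are quoted from the steps; the function signature, the stage argument, and the choice of Adam optimizer are illustrative assumptions, not the repository's code:

```python
import tensorflow as tf

def create_train_op(losses, learning_rate, training_mode, stage):
    # Pick the scalar loss to minimize, following the key paths quoted
    # in the steps above. `losses` is a nested dict of scalar tensors.
    if training_mode == 'no_data_distillation':
        if stage == 'warmup':   # first 100k steps: no census, no occlusion
            optim_loss = losses['abs_robust_mean']['no_occlusion']
        else:                   # next 300k steps: census + occlusion handling
            optim_loss = losses['census']['occlusion']
    else:                       # 'data_distillation' (Step 3)
        optim_loss = (losses['census']['occlusion']
                      + losses['distillation']['data_distillation'])
    # Any regularization terms would be added to optim_loss here.
    optimizer = tf.train.AdamOptimizer(learning_rate)
    return optimizer.minimize(optim_loss)
```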

Pre-trained Models

See models for our pre-trained models on different datasets.

Citation

If you find DDFlow useful in your research, please consider citing:

@inproceedings{Liu:2019:DDFlow,
  title     = {DDFlow: Learning Optical Flow with Unlabeled Data Distillation},
  author    = {Pengpeng Liu and Irwin King and Michael R. Lyu and Jia Xu},
  booktitle = {AAAI},
  year      = {2019}
}

Acknowledgement

Part of our code is adapted from PWC-Net and UnFlow; we thank the authors for their contributions.
