qijiezhao / Py Denseflow
Extract TVL1 optical flows in python (multi-process && multi-server)
Stars: ✭ 159
Programming Languages
python
139335 projects - #7 most used programming language
Projects that are alternatives of or similar to Py Denseflow
Eskf
ROS Error-State Kalman Filter based on PX4/ecl. Performs GPS/Magnetometer/Vision Pose/Optical Flow/RangeFinder fusion with IMU
Stars: ✭ 63 (-60.38%)
Mutual labels: optical-flow
Vcn
Volumetric Correspondence Networks for Optical Flow, NeurIPS 2019.
Stars: ✭ 118 (-25.79%)
Mutual labels: optical-flow
Deep Learning For Tracking And Detection
Collection of papers, datasets, code and other resources for object tracking and detection using deep learning
Stars: ✭ 1,920 (+1107.55%)
Mutual labels: optical-flow
Dispnet Flownet Docker
Dockerfile and runscripts for DispNet and FlowNet1 (estimation of disparity and optical flow)
Stars: ✭ 78 (-50.94%)
Mutual labels: optical-flow
Back2future.pytorch
Unsupervised Learning of Multi-Frame Optical Flow with Occlusions
Stars: ✭ 104 (-34.59%)
Mutual labels: optical-flow
Netdef models
Repository for different network models related to flow/disparity (ECCV 18)
Stars: ✭ 130 (-18.24%)
Mutual labels: optical-flow
Pwc Net pytorch
pytorch implementation of "PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume"
Stars: ✭ 111 (-30.19%)
Mutual labels: optical-flow
Flownet2 Docker
Dockerfile and runscripts for FlowNet 2.0 (estimation of optical flow)
Stars: ✭ 137 (-13.84%)
Mutual labels: optical-flow
Pytoflow
The py version of toflow → https://github.com/anchen1011/toflow
Stars: ✭ 83 (-47.8%)
Mutual labels: optical-flow
Ddflow
DDFlow: Learning Optical Flow with Unlabeled Data Distillation
Stars: ✭ 101 (-36.48%)
Mutual labels: optical-flow
Arflow
The official PyTorch implementation of the paper "Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation".
Stars: ✭ 134 (-15.72%)
Mutual labels: optical-flow
Pwc Net
PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume, CVPR 2018 (Oral)
Stars: ✭ 1,142 (+618.24%)
Mutual labels: optical-flow
Tfvos
Semi-Supervised Video Object Segmentation (VOS) with Tensorflow. Includes implementation of *MaskRNN: Instance Level Video Object Segmentation (NIPS 2017)* as part of the NIPS Paper Implementation Challenge.
Stars: ✭ 151 (-5.03%)
Mutual labels: optical-flow
Voxelmorph
Unsupervised Learning for Image Registration
Stars: ✭ 1,057 (+564.78%)
Mutual labels: optical-flow
Pysteps
Python framework for short-term ensemble prediction systems.
Stars: ✭ 159 (+0%)
Mutual labels: optical-flow
Frvsr
Frame-Recurrent Video Super-Resolution (official repository)
Stars: ✭ 157 (-1.26%)
Mutual labels: optical-flow
Video2tfrecord
Easily convert RGB video data (e.g. .avi) to the TensorFlow tfrecords file format for training e.g. a NN in TensorFlow. This implementation allows to limit the number of frames per video to be stored in the tfrecords.
Stars: ✭ 137 (-13.84%)
Mutual labels: optical-flow
Py-denseflow
This is a Python port of denseflow, which extracts video frames and optical-flow images, using the TVL1 algorithm by default.
Requirements:
- numpy
- cv2
- PIL.Image
- multiprocess
- scikit-video (optional)
- scipy
Installation
Install the requirements:
pip install -r requirements.txt
Usage
denseflow.py supports two modes: 'run' and 'debug'.
The 'debug' mode is intended for checking video paths and video-reading methods (IPython.embed is suggested).
Simply run the following command:
python denseflow.py --new_dir=denseflow_py --num_workers=1 --step=1 --bound=20 --mode=debug
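The `--bound` flag caps flow magnitudes before they are saved as 8-bit images. A minimal sketch of that kind of mapping (the exact scaling in denseflow.py may differ; `flow_to_image` is an illustrative name, not the project's actual function):

```python
import numpy as np

def flow_to_image(flow_component, bound=20):
    """Clip one flow component to [-bound, bound] and map it to [0, 255].

    This mirrors how denseflow-style tools store flow as grayscale images;
    the exact rounding/scaling here is an assumption, not copied from
    denseflow.py. Zero flow maps to mid-gray.
    """
    clipped = np.clip(flow_component, -bound, bound)
    return np.round((clipped + bound) * 255.0 / (2 * bound)).astype(np.uint8)

fx = np.array([[-25.0, 0.0, 25.0]])
print(flow_to_image(fx, bound=20))
```

A larger `--bound` preserves faster motion at the cost of quantization resolution for small displacements.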
In 'run' mode, multi-process execution is supported, as well as multi-server execution via manually set s_/e_ video IDs.
For example, if server 0 needs to process 3,000 videos with 4 processes working in parallel:
python denseflow.py --new_dir=denseflow_py --num_workers=4 --step=1 --bound=20 --mode=run --s_=0 --e_=3000
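The s_/e_ video-ID range could be partitioned across worker processes along these lines (a hypothetical sketch: `split_range` and `process_chunk` are illustrative names, not denseflow.py's actual code):

```python
from multiprocessing import Pool

def split_range(start, end, num_workers):
    """Partition video indices [start, end) into near-equal chunks,
    one per worker. Any remainder is spread over the first workers."""
    size, rem = divmod(end - start, num_workers)
    chunks, lo = [], start
    for i in range(num_workers):
        hi = lo + size + (1 if i < rem else 0)
        chunks.append((lo, hi))
        lo = hi
    return chunks

def process_chunk(bounds):
    """Stand-in for extracting frames/flow for videos lo..hi-1."""
    lo, hi = bounds
    return hi - lo

if __name__ == "__main__":
    chunks = split_range(0, 3000, 4)  # --s_=0 --e_=3000, 4 workers
    with Pool(4) as pool:
        processed = sum(pool.map(process_chunk, chunks))
    print(chunks, processed)
```

Across servers, the same idea applies one level up: each server takes a disjoint s_/e_ slice of the full video list, then splits its slice among local processes.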
Feel free to report any bugs you find.