michael-fonder / M4Depth

Licence: other
Official implementation of the network presented in the paper "M4Depth: A motion-based approach for monocular depth estimation on video sequences"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to M4Depth

FMT-Firmware
FMT Autopilot Embedded System
Stars: ✭ 207 (+233.87%)
Mutual labels:  uav, drone
QGISFMV
QGIS Full Motion Video (FMV)
Stars: ✭ 104 (+67.74%)
Mutual labels:  uav, drone
FlyingCarUdacity
🛩️⚙️ 3D Planning, PID Control, Extended Kalman Filter for the Udacity Flying Car Nanodegree // FCND-Term1
Stars: ✭ 16 (-74.19%)
Mutual labels:  uav, drone
multi uav simulator
A quadrotor swarm simulator based on ROS (Robot Operating System).
Stars: ✭ 73 (+17.74%)
Mutual labels:  uav, drone
dji-tello
Java API for the DJI Tello Drone.
Stars: ✭ 13 (-79.03%)
Mutual labels:  uav, drone
ufomap
UFOMap: An Efficient Probabilistic 3D Mapping Framework That Embraces the Unknown
Stars: ✭ 117 (+88.71%)
Mutual labels:  uav, drone
UAV-Stereo-Vision
A program for controlling a micro-UAV for obstacle detection and collision avoidance using disparity mapping
Stars: ✭ 30 (-51.61%)
Mutual labels:  uav, drone
Dronin
The dRonin flight controller software.
Stars: ✭ 238 (+283.87%)
Mutual labels:  uav, drone
olympe
Python controller library for Parrot Drones
Stars: ✭ 62 (+0%)
Mutual labels:  uav, drone
FisheyeDistanceNet
FisheyeDistanceNet
Stars: ✭ 33 (-46.77%)
Mutual labels:  depth, depth-estimation
uav core
The main integrator of MRS UAV packages in ROS, part of the "mrs_uav_system".
Stars: ✭ 28 (-54.84%)
Mutual labels:  uav, drone
grvc-ual
An abstraction layer for unmanned aerial vehicles
Stars: ✭ 35 (-43.55%)
Mutual labels:  uav, drone
ESP32
DroneBridge for ESP32. A short range wifi based telemetry link. Support for MAVLink, MSP & LTM (iNAV).
Stars: ✭ 183 (+195.16%)
Mutual labels:  uav, drone
ZeroPilot-SW
Software for WARG custom autopilot
Stars: ✭ 13 (-79.03%)
Mutual labels:  uav, drone
Faster
3D Trajectory Planner in Unknown Environments
Stars: ✭ 246 (+296.77%)
Mutual labels:  uav, drone
gobot
Golang framework for robotics, drones, and the Internet of Things (IoT)
Stars: ✭ 7,869 (+12591.94%)
Mutual labels:  uav, drone
Qgroundcontrol
Cross-platform ground control station for drones (Android, iOS, Mac OS, Linux, Windows)
Stars: ✭ 2,026 (+3167.74%)
Mutual labels:  uav, drone
Gymfc
A universal flight control tuning framework
Stars: ✭ 210 (+238.71%)
Mutual labels:  uav, drone
zubax gnss
Zubax GNSS module
Stars: ✭ 45 (-27.42%)
Mutual labels:  uav, drone
groundsdk-android
Parrot Ground SDK for Android
Stars: ✭ 17 (-72.58%)
Mutual labels:  uav, drone

M4Depth

This is the reference TensorFlow implementation for training and testing depth estimation models using the method described in

M4Depth: A motion-based approach for monocular depth estimation on video sequences

Michaël Fonder, Damien Ernst and Marc Van Droogenbroeck

arXiv pdf


Some samples produced by our method: the first row shows the RGB picture captured by the camera, the second the ground-truth depth map, and the last the results produced by our method.

If you find our work useful in your research, please consider citing our paper:

@article{Fonder2021M4Depth,
  title   = {M4Depth: A motion-based approach for monocular depth estimation on video sequences},
  author  = {Michael Fonder and Damien Ernst and Marc Van Droogenbroeck},
  journal = {arXiv},
  month   = {May},
  year    = {2021}
}

If you use the Mid-Air dataset in your research, please consider citing the related paper:

@INPROCEEDINGS{Fonder2019MidAir,
  author    = {Michael Fonder and Marc Van Droogenbroeck},
  title     = {Mid-Air: A multi-modal dataset for extremely low altitude drone flights},
  booktitle = {Conference on Computer Vision and Pattern Recognition Workshop (CVPRW)},
  year      = {2019},
  month     = {June}
} 

Dependencies

Assuming a fresh Anaconda distribution, you can install the dependencies with:

conda install tensorflow-gpu=1.15 h5py pyquaternion numpy 
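
To verify that the GPU build of TensorFlow is picked up correctly before training, a quick sanity check (our suggestion, not part of the original setup; tf.test.is_gpu_available is the TF 1.x API) is:

python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"

This should print a 1.15.x version string and True on a correctly configured machine.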

Formatting data

Our code works with TensorFlow protobuffer files; data for training and testing therefore need to be encoded properly before being passed to the network.

Mid-Air dataset

To reproduce the results of our paper, you can use the Mid-Air dataset for training and testing our network. For this, you will first need to download the required data on your computer. The procedure to get them is the following:

  1. Go to the download page of the Mid-Air dataset
  2. Select the "Left RGB" and "Stereo Disparity" image types
  3. Move to the end of the page and enter your email to get the download links (the volume of selected data should be equal to 316.5 GB)
  4. Follow the procedure given at the beginning of the download page to download and extract the dataset

Once the dataset is downloaded, you can generate the required protobuffer files by running the following script:

python3 midair-protobuf_generation.py --db_path path/to/midair-root --output_dir desired/protobuf-location --write

This script generates trajectory sequences with a length of 8 frames and automatically creates the train and test splits for Mid-Air in separate subdirectories.
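
As an optional sanity check (our suggestion; the file name below is hypothetical and depends on how the script names its outputs), you can count the serialized sequences in one of the generated protobuffer files with the TF 1.x record iterator:

import tensorflow as tf

# Point this at one of the files produced by midair-protobuf_generation.py;
# the exact file name here is hypothetical.
path = "desired/protobuf-location/train/example.tfrecords"
count = sum(1 for _ in tf.io.tf_record_iterator(path))
print("%d serialized sequences in %s" % (count, path))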

Custom data

You can also train or test our network on your own data. You can generate your own protobuffer files by repurposing our midair-protobuf_generation.py script. When creating your own protobuffer files, you should pay attention to two constraints: all sequences should have the same length, and each element of a sequence should come with the following data (a serialization sketch is given below):

  • "image/color_i" : the binary data of the jpeg picture encoding the color data of the frame
  • "Image/depth_i" : the binary data of the 16-bit png file encoding the stereo disparity map
  • "data/omega_i" : a list of three float32 numbers corresponding to the angular rotation between two consecutive frames
  • "data/trans_i" : a list of three float32 numbers corresponding to the translation between two consecutive frames

The subscript i has to be replaced by the index of the data within the trajectory. Translations and rotations are expressed in the axis system of the standard camera reference frame.
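
As an illustration, here is a minimal sketch of how one element of a sequence could be serialized with the TF 1.x tf.train.Example API. The helper names and output path are ours, chosen for the example; refer to midair-protobuf_generation.py for the authoritative encoding:

import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _floats_feature(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

def add_frame(feat, i, jpeg_bytes, png16_bytes, omega, trans):
    # Adds the four entries for frame i of a sequence to the feature dict `feat`.
    feat["image/color_%d" % i] = _bytes_feature(jpeg_bytes)   # JPEG-encoded RGB frame
    feat["image/depth_%d" % i] = _bytes_feature(png16_bytes)  # 16-bit PNG disparity map
    feat["data/omega_%d" % i] = _floats_feature(omega)        # 3 float32: rotation between consecutive frames
    feat["data/trans_%d" % i] = _floats_feature(trans)        # 3 float32: translation between consecutive frames
    return feat

# Once every frame of a fixed-length sequence has been added:
# example = tf.train.Example(features=tf.train.Features(feature=feat))
# with tf.io.TFRecordWriter("sequences.tfrecords") as writer:  # hypothetical path
#     writer.write(example.SerializeToString())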

Training

You can launch a training or a finetuning run (if the log_dir already exists) by executing the following command line:

python3 m4depth_pipeline.py --train_datadir=path/to/protobuf/dir --log_dir=path/to/logdir --dataset=midair --arch_depth=6 --db_seq_len=8 --seq_len=6 --num_batches=200000 -b=3 -g=1 --summary_interval_secs=900 --save_interval_secs=1800

If needed, other options are available for the training phase; they are described in the pipeline_options.py and m4depth_options.py files. Please note that the code can run on multiple GPUs to speed up training.

Testing/Evaluation

You can launch the evaluation of your test samples by executing the following command line:

python3 m4depth_pipeline.py --test_datadir=path/to/protobuf/dir --log_dir=path/to/logdir --dataset=midair --arch_depth=6 --db_seq_len=8 --seq_len=8 -b=3 -g=1

If needed, other options are available for the evaluation phase; they are described in the pipeline_options.py and m4depth_options.py files.

Pretrained model

We provide pretrained weights for our model in the "trained_weights" directory. Testing or evaluating a dataset with these weights can be done by executing the following command line:

python3 m4depth_pipeline.py --test_datadir=path/to/protobuf/dir --log_dir=trained_weights/M4Depth-d6 --dataset=midair --arch_depth=6 --db_seq_len=8 --seq_len=8 -b=3 -g=1