
CnnDepth / tx2_fcnn_node

License: MIT
ROS node for real-time FCNN depth reconstruction


Projects that are alternatives of or similar to tx2_fcnn_node

G2LTex
Code for CVPR 2018 paper --- Texture Mapping for 3D Reconstruction with RGB-D Sensor
Stars: ✭ 104 (+1.96%)
Mutual labels:  depth, slam
Crypto Exchange
Pulls together list of crypto exchanges to interact with their API's in a uniform fashion.
Stars: ✭ 241 (+136.27%)
Mutual labels:  depth
Depth
Add some Depth to your fragments
Stars: ✭ 789 (+673.53%)
Mutual labels:  depth
Unsupervised Depth Completion Visual Inertial Odometry
Tensorflow implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
Stars: ✭ 109 (+6.86%)
Mutual labels:  depth
Flutter Neumorphic
A complete, ready to use, Neumorphic ui kit for Flutter, 🕶️ dark mode compatible
Stars: ✭ 988 (+868.63%)
Mutual labels:  depth
Goleft
goleft is a collection of bioinformatics tools distributed under MIT license in a single static binary
Stars: ✭ 175 (+71.57%)
Mutual labels:  depth
Depth clustering
🚕 Fast and robust clustering of point clouds generated with a Velodyne sensor.
Stars: ✭ 657 (+544.12%)
Mutual labels:  depth
maks
Motion Averaging
Stars: ✭ 52 (-49.02%)
Mutual labels:  slam
Android 3d Layout
Wow effect, transform your layout into 3D views
Stars: ✭ 199 (+95.1%)
Mutual labels:  depth
360sd Net
Pytorch implementation of ICRA 2020 paper "360° Stereo Depth Estimation with Learnable Cost Volume"
Stars: ✭ 94 (-7.84%)
Mutual labels:  depth
Sdr
Code for 'Segment-based Disparity Refinement with Occlusion Handling for Stereo Matching'
Stars: ✭ 80 (-21.57%)
Mutual labels:  depth
Monodepth360
Master's project implementing depth estimation for spherical images using unsupervised learning with CNNs.
Stars: ✭ 41 (-59.8%)
Mutual labels:  depth
Pdi
PDI: Panorama Depth Image
Stars: ✭ 180 (+76.47%)
Mutual labels:  depth
Astradotnetdemo
Simple .Net solution to demonstrate working with Orbbec Astra (Pro) depth sensors from C#
Stars: ✭ 28 (-72.55%)
Mutual labels:  depth
so dso place recognition
A Fast and Robust Place Recognition Approach for Stereo Visual Odometry using LiDAR Descriptors
Stars: ✭ 52 (-49.02%)
Mutual labels:  slam
Sfmlearner Pytorch
Pytorch version of SfmLearner from Tinghui Zhou et al.
Stars: ✭ 718 (+603.92%)
Mutual labels:  depth
Retouch
🎬 An OpenGL application for editing and retouching images using depth-maps in 2.5D
Stars: ✭ 63 (-38.24%)
Mutual labels:  depth
Logosdistort
Uses matrix3d perspective distortions to create 3d scenes in the browser. Inspired by HelloMonday
Stars: ✭ 142 (+39.22%)
Mutual labels:  depth
Awesome-Self-Driving
an awesome list of self-driving algorithms, software, tools
Stars: ✭ 74 (-27.45%)
Mutual labels:  slam
li slam ros2
ROS2 package of tightly-coupled lidar inertial ndt/gicp slam
Stars: ✭ 160 (+56.86%)
Mutual labels:  slam

tx2_fcnn_node

Real-time Vision-based Depth Reconstruction with NVidia Jetson for Monocular SLAM

ROS node for real-time FCNN-based depth reconstruction (as described in the paper cited below). The supported platforms are the NVidia Jetson TX2 and x86_64 PCs running GNU/Linux (aarch64 should work as well, but has not been tested).

Publications

If you use this work in an academic context, please cite the following publication(s):

@inproceedings{Bokovoy2019,
  author={Bokovoy, A. and Muravyev, K. and Yakovlev, K.},
  title={Real-time vision-based depth reconstruction with NVIDIA Jetson},
  booktitle={2019 European Conference on Mobile Robots (ECMR)},
  year={2019},
  doi={10.1109/ECMR.2019.8870936},
  url={https://www.scopus.com/inward/record.uri?eid=2-s2.0-85074429057&doi=10.1109%2fECMR.2019.8870936&partnerID=40&md5=b87bcba0803147012ee1062f867cc4ef},
}

System requirements

  • Linux-based system with aarch64 or x86_64 architecture or NVidia Jetson TX2.
  • NVidia graphics card.

Prerequisites

  1. ROS Kinetic or higher.
  2. TensorRT 5.0 or higher.
  3. CUDA 9.0 or higher.
  4. cuDNN + cuBLAS.
  5. GStreamer-1.0.
  6. glib2.0.
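The exact package names differ between JetPack and desktop installs, so the following sanity check for the items above is only a sketch; the grep patterns and tool names are assumptions about typical package naming, not something this project ships:

```shell
# Rough prerequisite probe (package/tool names are typical, not guaranteed).
# Each check falls back to an informative message instead of aborting.
command -v nvcc >/dev/null && nvcc --version | tail -n1 \
  || echo "CUDA toolkit (nvcc) not found"
dpkg -l 2>/dev/null | grep -E 'nvinfer|cudnn' >/dev/null \
  && echo "TensorRT/cuDNN packages found" \
  || echo "TensorRT/cuDNN packages not found"
pkg-config --modversion gstreamer-1.0 2>/dev/null \
  || echo "GStreamer 1.0 development files not found"
pkg-config --modversion glib-2.0 2>/dev/null \
  || echo "glib-2.0 development files not found"
```

Every probe prints either a version or a "not found" line, so the script never exits non-zero and can run before starting the build.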

Optional:

  • RTAB-MAP

Compile

Assuming you already have ROS and the CUDA-related tools installed:

  1. Install the remaining prerequisites:
$ sudo apt-get update
$ sudo apt-get install -y libqt4-dev qt4-dev-tools \
       libglew-dev glew-utils libgstreamer1.0-dev \
       libgstreamer-plugins-base1.0-dev libglib2.0-dev \
       gstreamer1.0-plugins-good
$ sudo apt-get install -y libopencv-calib3d-dev libopencv-dev
  2. Navigate to your catkin workspace and clone the repository:
$ git clone https://github.com/CnnDepth/tx2_fcnn_node.git
$ cd tx2_fcnn_node && git submodule update --init --recursive
  3. Build the node:

Navigate to your catkin workspace folder.

a) On Jetson:

$ catkin_make

b) On an x86_64 PC:

$ catkin_make --cmake-args -DPATH_TO_TENSORRT_LIB=/usr/lib/x86_64-linux-gnu \
              -DPATH_TO_TENSORRT_INCLUDE=/usr/include -DPATH_TO_CUDNN=/usr/lib/x86_64-linux-gnu \
              -DPATH_TO_CUBLAS=/usr/lib/x86_64-linux-gnu

Change the paths accordingly.
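If the TensorRT, cuDNN, or cuBLAS files live somewhere else on your system, one quick way to find the directories to substitute into the `-D` options is shown below; the paths in the comments are only typical examples, not guaranteed locations:

```shell
# Locate the libraries/headers; pass their parent directories to catkin_make.
# "|| true" keeps the search from aborting on unreadable directories.
find /usr -name 'libnvinfer.so*' 2>/dev/null || true   # -> PATH_TO_TENSORRT_LIB
find /usr -name 'NvInfer.h'      2>/dev/null || true   # -> PATH_TO_TENSORRT_INCLUDE
find /usr -name 'libcudnn.so*'   2>/dev/null || true   # -> PATH_TO_CUDNN
find /usr -name 'libcublas.so*'  2>/dev/null || true   # -> PATH_TO_CUBLAS
```

Each command prints the matching file paths (if any); use the containing directory of each match in the corresponding `-D` option.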

  4. Build the TensorRT engine:

Compile engine builder.

$ catkin_make --cmake-args -DBUILD_ENGINE_BUILDER=1

Download UFF models.

$ roscd tx2_fcnn_node
$ sh ./download_models.sh

Compile the engine.

$ cd engine
$ rosrun tx2_fcnn_node fcrn_engine_builder --uff=./resnet_nonbt_shortcuts_320x240.uff --uffInput=tf/Placeholder \
  --output=tf/Reshape --height=240 --width=320 --engine=./test_engine.trt --fp16
  5. Run:
$ roslaunch tx2_fcnn_node cnn_only.launch

or with RTAB-MAP

$ roslaunch tx2_fcnn_node rtabmap_cnn.launch

Run in a container

  1. Build the image:
$ cd docker
$ docker build . -t rt-ros-docker
  2. Run the image:
$ nvidia-docker run --device=/dev/video0:/dev/video0 -it --rm rt-ros-docker
  3. Create a ROS workspace:
$ mkdir -p catkin_ws/src && cd catkin_ws/src
$ catkin_init_workspace
$ cd ..
$ catkin_make
$ source devel/setup.bash
  4. Build tx2_fcnn_node:
$ cd src
$ git clone https://github.com/CnnDepth/tx2_fcnn_node.git
$ cd tx2_fcnn_node && git submodule update --init --recursive
$ catkin_make
  5. Run the node:
$ rosrun tx2_fcnn_node tx2_fcnn_node

Nodes

tx2_fcnn_node

Reads images from a camera or an image topic and computes the depth map.

Subscribed Topics

Published Topics

Parameters

  • input_width (int, default: 320)

    Input image width for TensorRT engine

  • input_height (int, default: 240)

    Input image height for TensorRT engine

  • use_camera (bool, default: true)

    If true, the internal camera is used as the image source; if false, the /image topic is used.

  • camera_mode (int, default: -1)

    Only takes effect if use_camera:=true. Sets the camera device to open; -1 selects the default device.

  • camera_link (string, default: "camera_link")

    Name of the camera's frame_id.

  • depth_link (string, default: "depth_link")

    Name of the depth image's frame_id.

  • engine_name (string, default: "test_engine.trt")

    Name of the compiled TensorRT engine file, located in the "engine" folder.

  • calib_name (string, default: "tx2_camera_calib.yaml")

    Name of the calibration file, obtained with the camera_calib node. May be in either .yaml or .ini format.

  • input_name (string, default: "tf/Placeholder")

    Name of the input of the TensorRT engine.

  • output_name (string, default: "tf/Reshape")

    Name of the output of the TensorRT engine.

  • mean_r (float, default: 123.0)

    R channel mean value, used during FCNN training.

  • mean_g (float, default: 115.0)

    G channel mean value, used during FCNN training.

  • mean_b (float, default: 101.0)

    B channel mean value, used during FCNN training.
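As a worked example, the parameters above can be set from a launch file. The sketch below is an assumption about usage, not a copy of the shipped cnn_only.launch; /your/rgb/topic is a placeholder, and the remap line only matters when use_camera is false:

```xml
<launch>
  <node pkg="tx2_fcnn_node" type="tx2_fcnn_node" name="tx2_fcnn_node" output="screen">
    <!-- engine geometry and bindings must match the compiled TensorRT engine -->
    <param name="input_width"  value="320"/>
    <param name="input_height" value="240"/>
    <param name="engine_name"  value="test_engine.trt"/>
    <param name="input_name"   value="tf/Placeholder"/>
    <param name="output_name"  value="tf/Reshape"/>
    <!-- take frames from a ROS topic instead of the on-board camera -->
    <param name="use_camera"   value="false"/>
    <!-- point the node's /image input at your actual RGB topic -->
    <remap from="/image" to="/your/rgb/topic"/>
  </node>
</launch>
```

Parameters left out of the launch file fall back to the defaults listed above.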

Sample models

Models pre-trained on the NYU Depth v2 dataset are available at http://pathplanning.ru/public/ECMR-2019/engines/. The models are stored in UFF format and can be converted into TensorRT engines using tensorrt_samples.

Troubleshooting

Stack smashing

If you run this node on Ubuntu 16.04 or older, it may fail to start with a "stack smashing detected" log message. To fix this, remove the XML.* files in the Thirdparty/fcrn-inference/jetson-utils directory and recompile the project.

Inverted image

If you run this node on a Jetson, the RGB and depth images may appear flipped. To fix this, open the Thirdparty/fcrn-inference/jetson-utils/camera/gstCamera.cpp file in a text editor, go to lines 344-348, change the value of the flipMethod constant to 0, and recompile the project.
