
rvarun7777 / FisheyeDistanceNet

Licence: other
FisheyeDistanceNet

Projects that are alternatives to or similar to FisheyeDistanceNet

SGDepth
[ECCV 2020] Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance
Stars: ✭ 162 (+390.91%)
Mutual labels:  depth-estimation, depth-prediction, monocular-depth-estimation
learning-topology-synthetic-data
Tensorflow implementation of Learning Topology from Synthetic Data for Unsupervised Depth Completion (RAL 2021 & ICRA 2021)
Stars: ✭ 22 (-33.33%)
Mutual labels:  depth, depth-estimation, self-supervised-learning
DiverseDepth
The code and data of DiverseDepth
Stars: ✭ 150 (+354.55%)
Mutual labels:  depth-estimation, depth-prediction, monocular-depth-estimation
sc_depth_pl
Pytorch Lightning Implementation of SC-Depth (V1, V2...) for Unsupervised Monocular Depth Estimation.
Stars: ✭ 86 (+160.61%)
Mutual labels:  depth-estimation, self-supervised-learning
Depth estimation
Deep learning model to estimate the depth of image.
Stars: ✭ 62 (+87.88%)
Mutual labels:  depth-estimation, monocular-depth-estimation
SfMLearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
Stars: ✭ 1,661 (+4933.33%)
Mutual labels:  depth-prediction, self-supervised-learning
adareg-monodispnet
Repository for Bilateral Cyclic Constraint and Adaptive Regularization for Unsupervised Monocular Depth Prediction (CVPR2019)
Stars: ✭ 22 (-33.33%)
Mutual labels:  depth-prediction, self-supervised-learning
rectified-features
[ECCV 2020] Single image depth prediction allows us to rectify planar surfaces in images and extract view-invariant local features for better feature matching
Stars: ✭ 57 (+72.73%)
Mutual labels:  depth-estimation, monocular-depth-estimation
improving_segmentation_with_selfsupervised_depth
[CVPR21] Implementation of our work "Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation"
Stars: ✭ 189 (+472.73%)
Mutual labels:  depth-estimation, self-supervised-learning
defish
Algorithmic correction of fisheye lens distortion
Stars: ✭ 56 (+69.7%)
Mutual labels:  fisheye, fisheye-lens-distortion
EPCDepth
[ICCV 2021] Excavating the Potential Capacity of Self-Supervised Monocular Depth Estimation
Stars: ✭ 105 (+218.18%)
Mutual labels:  depth-estimation, monocular-depth-estimation
M4Depth
Official implementation of the network presented in the paper "M4Depth: A motion-based approach for monocular depth estimation on video sequences"
Stars: ✭ 62 (+87.88%)
Mutual labels:  depth, depth-estimation
temporal-depth-segmentation
Source code (train/test) accompanying the paper entitled "Veritatem Dies Aperit - Temporally Consistent Depth Prediction Enabled by a Multi-Task Geometric and Semantic Scene Understanding Approach" in CVPR 2019 (https://arxiv.org/abs/1903.10764).
Stars: ✭ 20 (-39.39%)
Mutual labels:  depth-prediction, monocular-depth-estimation
Visualizing-CNNs-for-monocular-depth-estimation
official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"
Stars: ✭ 120 (+263.64%)
Mutual labels:  depth-estimation, monocular-depth-estimation
GeoSup
Code for Geo-Supervised Visual Depth Prediction
Stars: ✭ 27 (-18.18%)
Mutual labels:  depth
project-defude
Refocus an image just by clicking on it with no additional data
Stars: ✭ 69 (+109.09%)
Mutual labels:  depth-estimation
Self-Supervised-Embedding-Fusion-Transformer
The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion.
Stars: ✭ 57 (+72.73%)
Mutual labels:  self-supervised-learning
BYOL
Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
Stars: ✭ 102 (+209.09%)
Mutual labels:  self-supervised-learning
FKD
A Fast Knowledge Distillation Framework for Visual Recognition
Stars: ✭ 49 (+48.48%)
Mutual labels:  self-supervised-learning
video_repres_mas
code for CVPR-2019 paper: Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics
Stars: ✭ 63 (+90.91%)
Mutual labels:  self-supervised-learning

OmniDet: Surround View Cameras based Multi-task Visual Perception Network for Autonomous Driving

The repository contains boilerplate code to encourage further research into building a unified perception model for autonomous driving.

FisheyeDistanceNet: Self-Supervised Scale-Aware Distance Estimation using Monocular Fisheye Camera for Autonomous Driving

Varun Ravi Kumar, Sandesh Athni Hiremath, Markus Bach, Stefan Milz, Christian Witt, Clément Pinard, Senthil Yogamani and Patrick Mäder

Accepted to ICRA 2020

FisheyeDistanceNet Website

Abstract

Fisheye cameras are commonly used in applications such as autonomous driving and surveillance to provide a large field of view (> 180°). However, they come at the cost of strong non-linear distortions, which require more complex algorithms. In this paper, we explore Euclidean distance estimation on fisheye cameras for automotive scenes. Obtaining accurate and dense depth supervision is difficult in practice, but self-supervised learning approaches show promising results and could potentially overcome the problem. We present a novel self-supervised scale-aware framework for learning Euclidean distance and ego-motion from raw monocular fisheye videos without applying rectification. While it is possible to perform a piecewise linear approximation of the fisheye projection surface and apply standard rectilinear models, this has its own set of issues, such as resampling distortion and discontinuities in transition regions. To encourage further research in this area, we will release our dataset as part of the WoodScape project. We further evaluate the proposed algorithm on the KITTI dataset and obtain state-of-the-art results comparable to other self-supervised monocular methods. Qualitative results on an unseen fisheye video demonstrate impressive performance.
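
As an illustration of the self-supervised signal described above, the sketch below shows the SSIM + L1 photometric reconstruction loss that methods in this family minimize between a target frame and a view synthesized from a neighbouring frame. This is a minimal PyTorch sketch of the standard objective, not the paper's exact fisheye formulation; the weight alpha = 0.85 and the 3×3 pooled SSIM follow common practice in self-supervised depth estimation rather than anything stated in the abstract.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, recon, alpha=0.85):
    """SSIM + L1 photometric error between a target image and a view
    synthesized from a neighbouring frame, both [B, 3, H, W] in [0, 1]."""
    # Per-pixel L1 term, averaged over the colour channels.
    l1 = (target - recon).abs().mean(1, keepdim=True)

    # Single-scale SSIM computed with 3x3 average pooling, as is common
    # in self-supervised depth pipelines.
    mu_x = F.avg_pool2d(target, 3, 1, 1)
    mu_y = F.avg_pool2d(recon, 3, 1, 1)
    sigma_x = F.avg_pool2d(target * target, 3, 1, 1) - mu_x * mu_x
    sigma_y = F.avg_pool2d(recon * recon, 3, 1, 1) - mu_y * mu_y
    sigma_xy = F.avg_pool2d(target * recon, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2  # stabilizers for images in [0, 1]
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x * mu_x + mu_y * mu_y + c1) * (sigma_x + sigma_y + c2))
    dssim = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)

    # Weighted combination; alpha = 0.85 is the value popularized by
    # Monodepth and reused by many follow-up self-supervised methods.
    return alpha * dssim + (1 - alpha) * l1
```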

Method

[Figure: ego masks, static-pixel masks, and distance estimates on input fisheye frames]

The first row shows our ego masks, described in the section Masking Static Pixels and Ego Mask: $\mathcal{M}_{t \to t'}$ and $\mathcal{M}_{t' \to t}$ indicate which pixel coordinates are valid when constructing $\hat{I}_{t'}$ from $I_{t}$ and $\hat{I}_{t}$ from $I_{t'}$, respectively. The second row shows the masking of static pixels computed after 2 epochs, where black pixels are filtered out of the photometric loss (i.e. $\mu_p = 0$). This prevents dynamic objects moving at a speed similar to the ego car, as well as low-texture regions, from contaminating the loss. The masks are computed for the forward and backward sequences from the input sequence and the reconstructed images using Eq. 11 in our paper. The third row shows the distance estimates corresponding to their input frames. Finally, the vehicle's odometry data is used to resolve the scale-factor ambiguity.
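
Eq. 11 itself is not reproduced here. As a stand-in, the sketch below shows the widely used auto-masking test from Monodepth2, which serves the same purpose of filtering static pixels from the photometric loss, together with a hypothetical helper (`resolve_scale`, a name assumed for this example) that rescales the predicted translation to the metric magnitude reported by the vehicle's odometry.

```python
import torch

def static_pixel_mask(reproj_err, identity_reproj_err):
    """Keep a pixel only where warping the source frame explains it better
    than leaving the source frame untouched. Dynamic objects moving at the
    ego vehicle's speed and low-texture regions fail this test, so they are
    dropped from the photometric loss (mask = 0, the black pixels above).
    This is the Monodepth2 auto-mask, shown as a stand-in for Eq. 11."""
    return (reproj_err < identity_reproj_err).float()

def resolve_scale(t_pred, t_odom, eps=1e-7):
    """Hypothetical helper: monocular translation is only defined up to
    scale, so rescale the network's predicted translation [B, 3] to match
    the metric translation magnitude from the vehicle's odometry."""
    scale = t_odom.norm(dim=-1, keepdim=True) / (t_pred.norm(dim=-1, keepdim=True) + eps)
    return t_pred * scale
```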

[Full paper] [YouTube]

Demo

Quantitative Results

[Figure: quantitative results]

Citation

Please use the following citation when referencing our work:

@article{kumar2019fisheyedistancenet,
  title={FisheyeDistanceNet: Self-Supervised Scale-Aware Distance Estimation using Monocular Fisheye Camera for Autonomous Driving},
  author={Kumar, Varun Ravi and Hiremath, Sandesh Athni and Milz, Stefan and Witt, Christian and Pinard, Cl{\'e}ment and Yogamani, Senthil and M{\"a}der, Patrick},
  journal={arXiv preprint arXiv:1910.04076},
  year={2019}
}