
JunjH / Visualizing-CNNs-for-monocular-depth-estimation

License: MIT
Official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"


Projects that are alternatives of or similar to Visualizing-CNNs-for-monocular-depth-estimation

FisheyeDistanceNet
FisheyeDistanceNet
Stars: ✭ 33 (-72.5%)
Mutual labels:  depth-estimation, monocular-depth-estimation
SGDepth
[ECCV 2020] Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance
Stars: ✭ 162 (+35%)
Mutual labels:  depth-estimation, monocular-depth-estimation
EPCDepth
[ICCV 2021] Excavating the Potential Capacity of Self-Supervised Monocular Depth Estimation
Stars: ✭ 105 (-12.5%)
Mutual labels:  depth-estimation, monocular-depth-estimation
Depth estimation
Deep learning model to estimate the depth of image.
Stars: ✭ 62 (-48.33%)
Mutual labels:  depth-estimation, monocular-depth-estimation
DiverseDepth
The code and data of DiverseDepth
Stars: ✭ 150 (+25%)
Mutual labels:  depth-estimation, monocular-depth-estimation
rectified-features
[ECCV 2020] Single image depth prediction allows us to rectify planar surfaces in images and extract view-invariant local features for better feature matching
Stars: ✭ 57 (-52.5%)
Mutual labels:  depth-estimation, monocular-depth-estimation
All4Depth
Self-Supervised Depth Estimation on Monocular Sequences
Stars: ✭ 58 (-51.67%)
Mutual labels:  depth-estimation
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+617.5%)
Mutual labels:  interpretability
meg
Molecular Explanation Generator
Stars: ✭ 14 (-88.33%)
Mutual labels:  interpretability
Dual-CNN-Models-for-Unsupervised-Monocular-Depth-Estimation
Dual CNN Models for Unsupervised Monocular Depth Estimation
Stars: ✭ 36 (-70%)
Mutual labels:  depth-estimation
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (+303.33%)
Mutual labels:  interpretability
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (-24.17%)
Mutual labels:  interpretability
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (-8.33%)
Mutual labels:  interpretability
free-lunch-saliency
Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-87.5%)
Mutual labels:  interpretability
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (-8.33%)
Mutual labels:  interpretability
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Stars: ✭ 33 (-72.5%)
Mutual labels:  interpretability
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-85.83%)
Mutual labels:  interpretability
BridgeDepthFlow
Bridging Stereo Matching and Optical Flow via Spatiotemporal Correspondence, CVPR 2019
Stars: ✭ 114 (-5%)
Mutual labels:  depth-estimation
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-87.5%)
Mutual labels:  interpretability
mmn
Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-67.5%)
Mutual labels:  interpretability

Visualization of Convolutional Neural Networks for Monocular Depth Estimation


Junjie Hu, Yan Zhang, Takayuki Okatani, "Visualization of Convolutional Neural Networks for Monocular Depth Estimation," ICCV, 2019. paper

Introduction

We attempt to interpret CNNs for monocular depth estimation by locating the pixels of the input image that are most relevant to depth inference. We formulate this as an optimization problem: identify the smallest number of image pixels from which the CNN can estimate a depth map with the minimum difference from the estimate obtained from the entire image.
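The objective above can be sketched numerically: a per-pixel mask is scored by how closely the depth estimated from the masked input matches the full-image estimate, plus a penalty on how many pixels the mask keeps. The sketch below is illustrative only (NumPy stand-ins, not the authors' PyTorch code); `mask_objective`, `toy_depth_fn`, and the weight value are assumptions for the example.

```python
import numpy as np

def mask_objective(depth_fn, image, mask, sparsity_weight=0.1):
    """Illustrative sketch of the selection objective: keep the
    masked-input depth close to the full-image depth while selecting
    as few pixels as possible (soft mask in [0, 1])."""
    full_depth = depth_fn(image)            # estimate from the entire image
    masked_depth = depth_fn(image * mask)   # estimate from selected pixels only
    fidelity = np.abs(masked_depth - full_depth).mean()  # match full estimate
    sparsity = np.abs(mask).mean()          # fraction of pixels kept
    return fidelity + sparsity_weight * sparsity

# Toy check: with a stand-in "network" and the all-ones mask, the
# fidelity term vanishes and only the sparsity penalty remains.
toy_depth_fn = lambda x: x.mean(keepdims=True)  # placeholder for the CNN
img = np.ones((8, 8))
loss = mask_objective(toy_depth_fn, img, np.ones_like(img))  # → 0.1
```

In the paper this trade-off is optimized by training a mask-prediction network rather than by optimizing each mask independently; the sketch only shows the shape of the loss being traded off.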

Predicted Masks

Extensive experimental results show:

  • CNNs seem to select edges in input images based not on their strength but on their importance for inferring scene geometry.

  • CNNs tend to attend not only to the boundary but also to the interior region of each individual object.

  • Image regions around vanishing points are important for depth estimation in outdoor scenes.

Please check our paper for more details.

Dependencies

  • python 2.7
  • pytorch 0.3.1

Running

Download the trained networks for depth estimation: Depth estimation networks

Download the trained network for mask prediction: Mask prediction network

Download the NYU-v2 dataset: NYU-v2 dataset

  • Test

    python test.py
  • Train

    python train.py

Citation

If you use the code or the pre-processed data, please cite:

@inproceedings{Hu2019VisualizationOC,
  title={Visualization of Convolutional Neural Networks for Monocular Depth Estimation},
  author={Junjie Hu and Yan Zhang and Takayuki Okatani},
  booktitle={IEEE International Conf. on Computer Vision (ICCV)},
  year={2019}
}

@inproceedings{Hu2019RevisitingSI,
  title={Revisiting Single Image Depth Estimation: Toward Higher Resolution Maps With Accurate Object Boundaries},
  author={Junjie Hu and Mete Ozay and Yan Zhang and Takayuki Okatani},
  booktitle={2019 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2019}
}