
javiermcebrian / glcapsnet

License: Apache-2.0
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.

Programming Languages

python
shell
Dockerfile

Projects that are alternatives to or similar to glcapsnet

Self Driving Golf Cart
Be Driven 🚘
Stars: ✭ 147 (+345.45%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Apollo perception ros
Object detection / tracking / fusion based on Apollo r3.0.0 perception module in ROS
Stars: ✭ 179 (+442.42%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Autonomousdrivingcookbook
Scenarios, tutorials and demos for Autonomous Driving
Stars: ✭ 1,939 (+5775.76%)
Mutual labels:  autonomous-driving, autonomous-vehicles
JuliaAutonomy
Julia sample codes for Autonomy, Robotics and Self-Driving Algorithms.
Stars: ✭ 21 (-36.36%)
Mutual labels:  autonomous-driving, autonomous-vehicles
autonomous-delivery-robot
Repository for Autonomous Delivery Robot project of IvLabs, VNIT
Stars: ✭ 65 (+96.97%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Lidarobstacledetection
Lidar Obstacle Detection
Stars: ✭ 90 (+172.73%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Jetson Car
Autonomous racing car using NVIDIA Jetson TX2 with an end-to-end driving approach. Paper: https://arxiv.org/abs/1604.07316
Stars: ✭ 172 (+421.21%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Dig Into Apollo
Apollo notes - Apollo learning notes for beginners.
Stars: ✭ 903 (+2636.36%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Carma Platform
CARMA Platform is built on robot operating system (ROS) and utilizes open source software (OSS) that enables Cooperative Driving Automation (CDA) features to allow Automated Driving Systems to interact and cooperate with infrastructure and other vehicles through communication.
Stars: ✭ 243 (+636.36%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Rtm3d
Unofficial PyTorch implementation of "RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving" (ECCV 2020)
Stars: ✭ 211 (+539.39%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Novel Deep Learning Model For Traffic Sign Detection Using Capsule Networks
A capsule network that achieves outstanding performance on the German traffic sign dataset
Stars: ✭ 88 (+166.67%)
Mutual labels:  autonomous-driving, autonomous-vehicles
loco car
Software for LOCO, our autonomous drifting RC car.
Stars: ✭ 44 (+33.33%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Autonomous driving
ROS package for basic autonomous lane tracking and object detection
Stars: ✭ 67 (+103.03%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Pylot
Modular autonomous driving platform running on the CARLA simulator and real-world vehicles.
Stars: ✭ 104 (+215.15%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Constrained attention filter
(ECCV 2020) Tensorflow implementation of A Generic Visualization Approach for Convolutional Neural Networks
Stars: ✭ 36 (+9.09%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Pythonrobotics
Python sample codes for robotics algorithms.
Stars: ✭ 13,934 (+42124.24%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Carla
Open-source simulator for autonomous driving research.
Stars: ✭ 7,012 (+21148.48%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Ultra Fast Lane Detection
Ultra Fast Structure-aware Deep Lane Detection (ECCV 2020)
Stars: ✭ 688 (+1984.85%)
Mutual labels:  autonomous-driving, autonomous-vehicles
Awesome Self Driving Car
An awesome list of self-driving cars
Stars: ✭ 185 (+460.61%)
Mutual labels:  autonomous-driving, autonomous-vehicles
dreyeve
[TPAMI 2018] Predicting the Driver’s Focus of Attention: the DR(eye)VE Project. A deep neural network learnt to reproduce the human driver focus of attention (FoA) in a variety of real-world driving scenarios.
Stars: ✭ 88 (+166.67%)
Mutual labels:  autonomous-driving, autonomous-vehicles

GLCapsNet

Code for the paper entitled Interpretable Global-Local Dynamics for the Prediction of Eye Fixations in Autonomous Driving Scenarios, published in IEEE Access. Supplementary material (videos and images) is provided along with the paper on the IEEE Access site.


Global-Local Capsule Network (GLCapsNet) block diagram. It predicts eye fixations based on several contextual conditions of the scene, which are represented as combinations of several spatio-temporal features (RGB, Optical Flow and Semantic Segmentation). Its hierarchical multi-task approach routes Feature Capsules to Condition Capsules both globally and locally, which allows for the interpretation of visual attention in autonomous driving scenarios.

Docker environment

How to use it?

  • Install nvidia-docker
  • Configure environment-manager.sh:
    • image_name: the name of the Docker image
    • data_folder: the path to the storage (mounted as volume)
    • src_folder: the path to the local copy of this source code (mounted as volume)
  • Run environment-manager.sh:
    • service: one of the service names defined in docker-config.json, which specifies the path to the child Dockerfile and the tag of the CUDA base image to use.
    • action: the operation to perform on the environment (see the example invocation below)
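
For reference, an invocation might look like the line below; the exact argument handling (flags vs. positional arguments) is defined by environment-manager.sh itself, so <service> and <action> are placeholders rather than verified syntax:

./environment-manager.sh <service> <action>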

How to create a new environment?

Experiments

How to run it?

  • Generate the input features:
  • The usage is defined in execute.py:
    • mode: train, test (efficient computation of metrics), predict (sample-wise prediction for saving data to disk)
    • feature: rgb, of (optical flow), segmentation_probabilities (semantic segmentation)
    • conv_block: the kind of convolutional module to use from conv_blocks.py
    • caps_block: the kind of capsule-based module to use from caps_blocks.py
    • experiment_id: folder name of the experiment with datetime
    • do_visual: save visual predictions
  • The execution generates the following:
/path_output_in_config/[all,rgb,of,segmentation_probabilities]/conv_block/caps_block/experiment_id/config_train.py
/path_output_in_config/[all,rgb,of,segmentation_probabilities]/conv_block/caps_block/experiment_id/checkpoints/weights.h5
/path_output_in_config/[all,rgb,of,segmentation_probabilities]/conv_block/caps_block/experiment_id/logs/tensorboard-logs
/path_output_in_config/[all,rgb,of,segmentation_probabilities]/conv_block/caps_block/experiment_id/logs/log.csv
/path_output_in_config/[all,rgb,of,segmentation_probabilities]/conv_block/caps_block/experiment_id/logs/trace_sampling.npy
/path_output_in_config/[all,rgb,of,segmentation_probabilities]/conv_block/caps_block/experiment_id/predictions/[test_id,prediction_id]/[resulting_files]
  • The training command to use for each predefined config file is described below (note that the dataset and some other files must be generated first, and the paths in each config file have to be adapted):
    • 00_branches:
      • rgb: python3.6 execute.py -m train -f rgb --conv_block cnn_generic_branch
      • of: python3.6 execute.py -m train -f of --conv_block cnn_generic_branch
      • segmentation_probabilities: python3.6 execute.py -m train -f segmentation_probabilities --conv_block cnn_generic_branch
    • 01_sf: python3.6 execute.py -m train -f all --conv_block cnn_generic_fusion
    • 02_gf: python3.6 execute.py -m train -f all --conv_block cnn_generic_fusion
    • 03_sc: python3.6 execute.py -m train -f all --conv_block cnn_generic_branch --caps_block ns_sc
    • 04_ns_sc: python3.6 execute.py -m train -f all --conv_block cnn_generic_branch --caps_block ns_sc
    • 05_triple_ns_sc: python3.6 execute.py -m train -f all --conv_block cnn_generic_branch --caps_block triple_ns_sc
    • 06_mask_triple_ns_sc: python3.6 execute.py -m train -f all --conv_block cnn_generic_branch --caps_block mask_triple_ns_sc
    • 07_mt_mask_triple_ns_sc: python3.6 execute.py -m train -f all --conv_block cnn_generic_branch --caps_block glcapsnet
    • 08_glcapsnet: python3.6 execute.py -m train -f all --conv_block cnn_generic_branch --caps_block glcapsnet
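  • After training, the same entry point can be reused for evaluation or sample-wise prediction by switching the mode. For example, with the 08_glcapsnet configuration (the experiment_id and do_visual options are documented in execute.py; check there for their exact spelling):
    • test: python3.6 execute.py -m test -f all --conv_block cnn_generic_branch --caps_block glcapsnet
    • predict: python3.6 execute.py -m predict -f all --conv_block cnn_generic_branch --caps_block glcapsnet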

How to create new models?

Same I/O schema

  • Keep the input features, conditions and targets as for the already developed models:
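
As a purely illustrative sketch (the tf.keras-style API, layer structure and tensor shapes below are assumptions, not the actual conventions of conv_blocks.py), a new convolutional block that keeps the documented inputs (rgb, of, segmentation_probabilities) and a single fixation-map output could look roughly like this:

# Illustrative sketch only: shapes and layers are placeholders, not the
# repository's real configuration.
from tensorflow.keras import layers, Model

def cnn_my_new_branch(input_shape=(128, 160, 3)):
    # One input per documented feature; the real per-feature shapes are defined
    # by the repository's config files.
    inputs = [layers.Input(shape=input_shape, name=name)
              for name in ("rgb", "of", "segmentation_probabilities")]
    encoded = []
    for tensor in inputs:
        x = layers.Conv2D(32, 3, padding="same", activation="relu")(tensor)
        x = layers.MaxPooling2D(2)(x)
        encoded.append(x)
    fused = layers.Concatenate()(encoded)
    # Single-channel map mirroring the eye-fixation target of the existing models.
    fixation_map = layers.Conv2D(1, 1, activation="sigmoid", name="fixation_map")(fused)
    return Model(inputs, fixation_map, name="cnn_my_new_branch")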

New I/O schema:

Requirements

Model function names are required to be unique per conv_block or caps_block, as the code manages the executions via those names.
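
For intuition, a minimal sketch of such name-based dispatch (assumed behaviour, not the repository's actual code) could look like:

# Minimal sketch: the value passed via --conv_block / --caps_block is resolved
# to a function of the same name in the corresponding module, which is why
# duplicated function names would make the selection ambiguous.
import conv_blocks

def get_conv_block(name):
    block_fn = getattr(conv_blocks, name, None)
    if block_fn is None:
        raise ValueError("Unknown conv_block: {}".format(name))
    return block_fn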

Citation

If you use portions of this code or ideas from the paper, please cite our work:

@article{martinez2020glcapsnet,
  title={Interpretable Global-Local Dynamics for the Prediction of Eye Fixations in Autonomous Driving Scenarios},
  author={J. {Martínez-Cebrián} and M. {Fernández-Torres} and F. {Díaz-de-María}},
  journal={IEEE Access},
  volume={8},
  pages={217068-217085},
  year={2020},
  publisher={IEEE},
  doi={10.1109/ACCESS.2020.3041606}
}

Questions

Please email any question or comment to [email protected]. I will be happy to discuss anything related to the topic of the paper.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].