turlucode / ros-docker-gui

Licence: BSD-3-Clause license
ROS Docker Containers with X11 (GUI) support [Linux]


Robot Operating System (ROS) Docker Containers with X11 support [Linux]

This project aims to provide different versions of ROS as Docker containers with GUI support. This means your local OS is no longer bound to the version of ROS you are using: you can run any version of ROS on any Linux distribution, thanks to the amazing power of Docker!

Getting Started

The idea is to have hardware-accelerated GUIs in Docker. This has generally proven to be a challenging task, but graphics-card vendors like NVIDIA already provide solutions for their platforms. To make this work, the host's X11 socket is shared with the container as an external volume.
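As a minimal sketch of what gets shared, the socket file corresponding to your DISPLAY must exist on the host (the x11_socket_path helper below is hypothetical, not part of this repo):

```shell
# Map a DISPLAY value like ":0" or ":1.0" to the host X11 socket file
# that is shared with the container (helper name is hypothetical).
x11_socket_path() {
    disp="${1#:}"       # strip the leading ':'
    disp="${disp%%.*}"  # drop the screen suffix, e.g. "1.0" -> "1"
    printf '/tmp/.X11-unix/X%s\n' "$disp"
}

# Warn early if the socket is missing, e.g. when run outside a GUI session
if [ ! -S "$(x11_socket_path "${DISPLAY:-:0}")" ]; then
    echo "No X11 socket for DISPLAY=${DISPLAY:-:0}" >&2
fi
```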

Current Support

Currently this project supports HW-accelerated containers for:

  • NVIDIA graphics cards (via nvidia-docker)
  • Integrated GPUs ([CPU] images, via /dev/dri)

Support for other graphics cards will follow!

Supported ROS Images

Indigo
  • Integrated Graphics: yes
  • NVIDIA Graphics: CUDA 8 (cuDNN 6,7), CUDA 10 (cuDNN 7), CUDA 10.1 (cuDNN 7)
  • OpenCV: 2.x (default), 3.x

Kinetic
  • Integrated Graphics: yes
  • NVIDIA Graphics: CUDA 8 (cuDNN 6), CUDA 10 (cuDNN 7), CUDA 10.1 (cuDNN 7)
  • OpenCV: 2.x (default), 3.x

Melodic
  • Integrated Graphics: yes
  • NVIDIA Graphics: CUDA 10 (cuDNN 7), CUDA 10.1 (cuDNN 7), CUDA 11.4.2 (cuDNN 8)
  • OpenCV: 3.x (default)

Noetic
  • Integrated Graphics: yes
  • NVIDIA Graphics: CUDA 11.4.2 (cuDNN 8)
  • OpenCV: 4.x (default)

Bouncy (ROS2)
  • Integrated Graphics: yes
  • NVIDIA Graphics: no support yet
  • OpenCV: 3.x (Ubuntu 18.04)

You can also see the complete list by running:

make

Integrated GPU

This repository supports ROS Docker images that rely on the CPU's integrated GPU for graphics.
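The [CPU] images pass the host's DRI device nodes through to the container, so /dev/dri must exist on the host. A quick sanity check (a suggestion, not part of the repo's tooling; the dri_ready helper is hypothetical):

```shell
# dri_ready: check that a DRI device directory exists (hypothetical
# helper; the [CPU] images need the host's /dev/dri passed through).
dri_ready() { [ -d "${1:-/dev/dri}" ]; }

if dri_ready; then
    ls /dev/dri          # typically shows card0, renderD128, ...
else
    echo "/dev/dri not found - no DRM-capable GPU driver loaded" >&2
fi
```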

Build desired Docker Image

You can use the Docker images labeled with [CPU] to get support for your integrated GPU:

# Prints Help
make

# E.g. Build ROS Indigo
make cpu_ros_indigo

Running the image (as root)

Once the container has been built, you can issue the following command to run it:

docker run --rm -it --privileged --net=host --ipc=host \
--device=/dev/dri:/dev/dri \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
-v $HOME/.Xauthority:/home/$(id -un)/.Xauthority -e XAUTHORITY=/home/$(id -un)/.Xauthority \
-e ROS_IP=127.0.0.1 \
turlucode/ros-indigo:cpu

A terminator window will pop up, and you know the rest! :)

Important Remark: This will launch the container as root. This might have unwanted effects! If you want to run it as the current user, see next section.

Running the image (as current user)

You can also run the image as the current Linux user by passing the DOCKER_USER_* variables like this:

docker run --rm -it --privileged --net=host --ipc=host \
--device=/dev/dri:/dev/dri \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
-v $HOME/.Xauthority:/home/$(id -un)/.Xauthority -e XAUTHORITY=/home/$(id -un)/.Xauthority \
-e DOCKER_USER_NAME=$(id -un) \
-e DOCKER_USER_ID=$(id -u) \
-e DOCKER_USER_GROUP_NAME=$(id -gn) \
-e DOCKER_USER_GROUP_ID=$(id -g) \
-e ROS_IP=127.0.0.1 \
turlucode/ros-indigo:cpu

Important Remark:

  • Please note that you need to pass the Xauthority to the correct user's home directory.

  • You may need to run xhost si:localuser:$USER or, in the worst case, xhost local:root if you get errors like Error: cannot open display

  • See also this section for other options.

NVIDIA Graphics Card

For machines with an NVIDIA graphics card, you need the nvidia-docker plugin.

IMPORTANT: This repo supports nvidia-docker version 2.x!!!

For nvidia-docker-v1.0 support, check the corresponding branch

Install nvidia-docker-plugin

Assuming the NVIDIA drivers and Docker® Engine are properly installed (see installation)

Ubuntu 14.04/16.04/18.04, Debian Jessie/Stretch

# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi

CentOS 7 (docker-ce), RHEL 7.4/7.5 (docker-ce), Amazon Linux 1/2

If you are not using the official docker-ce package on CentOS/RHEL, use the next section.

# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo yum remove nvidia-docker

# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | \
  sudo tee /etc/yum.repos.d/nvidia-docker.repo

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo yum install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi

If yum reports a conflict on /etc/docker/daemon.json with the docker package, you need to use the next section instead.

For docker-ce on ppc64le, look at the FAQ.

Arch-linux

# Install nvidia-docker and nvidia-docker-plugin
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f

sudo rm /usr/bin/nvidia-docker /usr/bin/nvidia-docker-plugin

# Install nvidia-docker2 from AUR and reload the Docker daemon configuration
yaourt -S aur/nvidia-docker
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi

Proceed only if nvidia-smi works

If the nvidia-smi test was successful you may proceed. Otherwise please visit the official NVIDIA support.

Remarks & Troubleshooting

  • If your NVIDIA driver version is 410.x or greater, you need to choose the CUDA 10 images.

Build desired Docker Image

You can either browse to directory of the version you want to install and issue manually a docker build command or just use the makefile:

# Prints Help
make

# E.g. Build ROS Indigo
make nvidia_ros_indigo

Note: The build process takes a while.

For nvidia-driver >= 410.x you need to build the CUDA 10 images for compatibility!

Running the image (as root)

Once the container has been built, you can issue the following command to run it:

docker run --rm -it --runtime=nvidia --privileged --net=host --ipc=host \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
-v $HOME/.Xauthority:/root/.Xauthority -e XAUTHORITY=/root/.Xauthority \
-v <PATH_TO_YOUR_CATKIN_WS>:/root/catkin_ws \
-e ROS_IP=<HOST_IP or HOSTNAME> \
turlucode/ros-indigo:nvidia

A terminator window will pop up, and you know the rest! :)

Important Remark: This will launch the container as root. This might have unwanted effects! If you want to run it as the current user, see next section.
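Instead of hard-coding <HOST_IP or HOSTNAME>, one common convention (an assumption, not something this repo prescribes) is to derive ROS_IP from the host's primary address and pass that to -e ROS_IP=:

```shell
# first_word: pick the first entry from a space-separated list, such as
# the address list printed by `hostname -I` (helper name is hypothetical).
first_word() { set -- $1; printf '%s\n' "$1"; }

# Fall back to loopback when no external address is available.
ROS_IP=$(first_word "$(hostname -I 2>/dev/null || true)")
ROS_IP=${ROS_IP:-127.0.0.1}
echo "Using ROS_IP=$ROS_IP"
```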

Running the image (as current user)

You can also run the image as the current Linux user by passing the DOCKER_USER_* variables like this:

docker run --rm -it --runtime=nvidia --privileged --net=host --ipc=host \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
-v $HOME/.Xauthority:/home/$(id -un)/.Xauthority -e XAUTHORITY=/home/$(id -un)/.Xauthority \
-e DOCKER_USER_NAME=$(id -un) \
-e DOCKER_USER_ID=$(id -u) \
-e DOCKER_USER_GROUP_NAME=$(id -gn) \
-e DOCKER_USER_GROUP_ID=$(id -g) \
-e ROS_IP=127.0.0.1 \
turlucode/ros-indigo:nvidia

Important Remark:

  • Please note that you need to pass the Xauthority to the correct user's home directory.
  • You may need to run xhost si:localuser:$USER or, in the worst case, xhost local:root if you get errors like Error: cannot open display

Other options

Mount your ssh-keys

For both root and custom user use:

-v $HOME/.ssh:/root/.ssh

For the custom-user the container will make sure to copy them to the right location.

Mount your local catkin_ws

To mount your local catkin_ws you can just use the following docker feature:

# for root user
-v $HOME/<some_path>/catkin_ws:/root/catkin_ws
# for local user
-v $HOME/<some_path>/catkin_ws:/home/$(id -un)/catkin_ws

Passing a camera device

If you have a virtual device node like /dev/video0, e.g. a compatible USB camera, you can pass it to the Docker container like this:

--device /dev/video0
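For example, extending the root-user invocation of the [CPU] image shown earlier (the device path and image tag are just examples; adjust them to your setup):

```shell
docker run --rm -it --privileged --net=host --ipc=host \
  --device=/dev/dri:/dev/dri \
  --device=/dev/video0 \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  turlucode/ros-indigo:cpu
```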

Tools

Visual Studio Code

You have the option to create a new container that contains Visual Studio Code. This allows you to use Visual Studio Code within the ROS Docker image and, in turn, use it for development and debugging.

Create image

To create the new image run:

make tools_vscode <existing_ros_docker_image>

# E.g.
make tools_vscode turlucode/ros-indigo:cuda10.1-cudnn7-opencv3
# which creates the image turlucode/ros-indigo:cuda10.1-cudnn7-opencv3-vscode

This will create a new Docker image, named <existing_ros_docker_image>-vscode, that uses <existing_ros_docker_image> as its base. If the base image doesn't exist, the command will terminate with an error, so make sure you build the ROS Docker image first, before using it as a base image to install Visual Studio Code.

Run image

You can run the newly created image the same way you run the rest of the ROS images. If you want to keep the Visual Studio Code configuration consistent across runs, you need to mount .vscode, e.g.:

# Mount argument for the docker run command:
-v <local_path_to_store_configuration>:/home/$(id -un)/.vscode

If you are running the images as root then you need to follow the Visual Studio Code recommendations, which state:

You are trying to start vscode as a super user which is not recommended. If you really want to, you must specify an alternate user data directory using the --user-data-dir argument.

So act accordingly.
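For example, launching it inside the container as the warning requests (the directory path is just an example):

```shell
# Run VS Code as root with an explicit user-data directory,
# as its warning message requires (path is an example).
code --user-data-dir=/root/.vscode-root
```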

Base images

NVIDIA

The images in this repository are based on the following work:

OpenCV Build References

Issues and Contributing
