anibali / Docker Pytorch

License: MIT
A Docker image for PyTorch

Projects that are alternatives of or similar to Docker Pytorch

Dokai
Collection of Docker images for ML/DL and video processing projects
Stars: ✭ 58 (-88.51%)
Mutual labels:  cuda, docker-image
Deep Learning Boot Camp
A community run, 5-day PyTorch Deep Learning Bootcamp
Stars: ✭ 1,270 (+151.49%)
Mutual labels:  cuda, docker-image
docker python-opencv-ffmpeg
Dockerfile containing FFmpeg, OpenCV4 and Python2/3, based on Ubuntu LTS
Stars: ✭ 38 (-92.48%)
Mutual labels:  docker-image, cuda
Icpcuda
Super fast implementation of ICP in CUDA for compute capable devices 3.5 or higher
Stars: ✭ 416 (-17.62%)
Mutual labels:  cuda
Tensorflow Cmake
TensorFlow examples in C, C++, Go and Python without bazel but with cmake and FindTensorFlow.cmake
Stars: ✭ 418 (-17.23%)
Mutual labels:  cuda
Tsdf Fusion Python
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
Stars: ✭ 464 (-8.12%)
Mutual labels:  cuda
Docker Yapi
Docker image for the YApi API management platform.
Stars: ✭ 479 (-5.15%)
Mutual labels:  docker-image
Docker Puppeteer
docker image with Google Puppeteer installed
Stars: ✭ 415 (-17.82%)
Mutual labels:  docker-image
Xray Oxygen
🌀 Oxygen Engine 2.0. [Preview] Discord: https://discord.gg/P3aMf66
Stars: ✭ 481 (-4.75%)
Mutual labels:  cuda
Caer
High-performance Vision library in Python. Scale your research, not boilerplate.
Stars: ✭ 452 (-10.5%)
Mutual labels:  cuda
Open3d
Open3D: A Modern Library for 3D Data Processing
Stars: ✭ 5,860 (+1060.4%)
Mutual labels:  cuda
Tsdf Fusion
Fuse multiple depth frames into a TSDF voxel volume.
Stars: ✭ 426 (-15.64%)
Mutual labels:  cuda
Uwsgi Nginx Docker
Docker image with uWSGI and Nginx for applications in Python 3.5 and above and Python 2.7 (as Flask) in a single container. Optionally with Alpine Linux.
Stars: ✭ 466 (-7.72%)
Mutual labels:  docker-image
Accel
(Mirror of GitLab) GPGPU Framework for Rust
Stars: ✭ 420 (-16.83%)
Mutual labels:  cuda
Klar
Integration of Clair and Docker Registry
Stars: ✭ 480 (-4.95%)
Mutual labels:  docker-image
H2o4gpu
H2Oai GPU Edition
Stars: ✭ 416 (-17.62%)
Mutual labels:  cuda
Bitnami Docker Wordpress
Bitnami Docker Image for WordPress
Stars: ✭ 476 (-5.74%)
Mutual labels:  docker-image
Docker Aosp
🏗 Minimal Android AOSP build environment with handy automation wrapper scripts
Stars: ✭ 440 (-12.87%)
Mutual labels:  docker-image
Containers
Bioinformatics containers
Stars: ✭ 435 (-13.86%)
Mutual labels:  docker-image
Bitcracker
BitCracker is the first open source password cracking tool for memory units encrypted with BitLocker
Stars: ✭ 463 (-8.32%)
Mutual labels:  cuda

PyTorch Docker image


Ubuntu + PyTorch + CUDA (optional)

Requirements

In order to use this image you must have Docker Engine installed. Instructions for setting up Docker Engine are available on the Docker website.
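
To quickly confirm that Docker Engine is installed and the daemon is running (a general sanity check, not specific to this image), you can run:

$ docker --version
$ docker run --rm hello-world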

CUDA requirements

If you have a CUDA-compatible NVIDIA graphics card, you can use a CUDA-enabled version of the PyTorch image to enable hardware acceleration. I have only tested this on Ubuntu Linux.

Firstly, ensure that you install the appropriate NVIDIA drivers. On Ubuntu, I've found that the easiest way to get the right driver version set up is to install, via the official NVIDIA CUDA download page, a version of CUDA at least as new as the image you intend to use. For example, if you intend to use the cuda-10.1 image, then setting up CUDA 10.1 or CUDA 10.2 should ensure that you have the correct graphics drivers.
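
Once the drivers are installed, you can sanity-check them with nvidia-smi; on reasonably recent drivers, the header of its output also reports the highest CUDA version that the driver supports:

$ nvidia-smi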

You will also need to install the NVIDIA Container Toolkit to enable GPU device access within Docker containers. This can be found at NVIDIA/nvidia-docker.
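
With the toolkit installed, a quick end-to-end check (using the CUDA 10.2 image mentioned below as an example; substitute whichever tag you actually use) is to ask PyTorch inside a container whether it can see the GPU:

docker run --rm --gpus=all \
  anibali/pytorch:1.5.0-cuda10.2 \
  python3 -c "import torch; print(torch.cuda.is_available())"

If everything is set up correctly, this should print True.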

Prebuilt images

Prebuilt images are available on Docker Hub under the name anibali/pytorch.

For example, you can pull an image with PyTorch 1.5.0 and CUDA 10.2 using:

$ docker pull anibali/pytorch:1.5.0-cuda10.2
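
After pulling, the image should appear in your local image list:

$ docker images anibali/pytorch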

Usage

Running PyTorch scripts

It is possible to run PyTorch programs inside a container using the python3 command. For example, if you are within a directory containing some PyTorch project with entrypoint main.py, you could run it with the following command:

docker run --rm -it --init \
  --gpus=all \
  --ipc=host \
  --user="$(id -u):$(id -g)" \
  --volume="$PWD:/app" \
  anibali/pytorch python3 main.py

Here's a description of the Docker command-line options shown above:

  • --gpus=all: Required if using CUDA, optional otherwise. Passes the graphics cards from the host to the container. You can also more precisely control which graphics cards are exposed using this option (see documentation at https://github.com/NVIDIA/nvidia-docker). A CPU-only example is shown after this list.
  • --ipc=host: Required if using multiprocessing, as explained at https://github.com/pytorch/pytorch#docker-image.
  • --user="$(id -u):$(id -g)": Sets the user inside the container to match your user and group ID. Optional, but useful for writing files with the correct ownership.
  • --volume="$PWD:/app": Mounts the current working directory into the container. The default working directory inside the container is /app. Optional.
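
If you do not have a CUDA-capable GPU, or simply want to run on the CPU, the same command works with the --gpus=all option dropped. For example:

docker run --rm -it --init \
  --ipc=host \
  --user="$(id -u):$(id -g)" \
  --volume="$PWD:/app" \
  anibali/pytorch python3 main.py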

Running graphical applications

If you are running on a Linux host, you can get code running inside the Docker container to display graphics using the host X server (this allows you to use OpenCV's imshow, for example). Here we describe a quick-and-dirty (but INSECURE) way of doing this. For a more comprehensive guide on GUIs and Docker check out http://wiki.ros.org/docker/Tutorials/GUI.

On the host run:

sudo xhost +local:root

You can revoke these access permissions later with sudo xhost -local:root. Now, when you run a container, make sure you add the options -e "DISPLAY" and --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw". This provides the container with your X11 socket for communication and your display ID. Here's an example:

docker run --rm -it --init \
  --gpus=all \
  -e "DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  anibali/pytorch python3 -c "import tkinter; tkinter.Tk().mainloop()"