dominiek / Deep Base

Deep learning base image for Docker (Tensorflow, Caffe, MXNet, Torch, Openface, etc.)

Labels

makefile

Projects that are alternatives of or similar to Deep Base

Awesome Blogdown
An awesome curated list of blogs built using blogdown
Stars: ✭ 80 (-4.76%)
Mutual labels:  makefile
Dokku Wordpress
A simple repository that will guide you through deploying wordpress on dokku
Stars: ✭ 82 (-2.38%)
Mutual labels:  makefile
Sqlite3 Android
SQLite CLI and Library build scripts for Android
Stars: ✭ 83 (-1.19%)
Mutual labels:  makefile
Learn machine learning
Road to Machine Learning
Stars: ✭ 81 (-3.57%)
Mutual labels:  makefile
Openwrt Vlmcsd
a package for vlmcsd
Stars: ✭ 81 (-3.57%)
Mutual labels:  makefile
Make Docker Command
Seamlessly execute commands (composer, bower, compass) in isolation using docker and make.
Stars: ✭ 82 (-2.38%)
Mutual labels:  makefile
Freedom Tools
Tools for SiFive's Freedom Platform
Stars: ✭ 80 (-4.76%)
Mutual labels:  makefile
Riscv Sbi Doc
Documentation for the RISC-V Supervisor Binary Interface
Stars: ✭ 84 (+0%)
Mutual labels:  makefile
Avrqueue
Queueing Library for AVR and Arduino
Stars: ✭ 81 (-3.57%)
Mutual labels:  makefile
Wiki
Archive of free60.org mediawiki
Stars: ✭ 83 (-1.19%)
Mutual labels:  makefile
Corteza Docs
Documentation, manual, instructions
Stars: ✭ 81 (-3.57%)
Mutual labels:  makefile
Docker Trino Cluster
Multiple node presto cluster on docker container
Stars: ✭ 81 (-3.57%)
Mutual labels:  makefile
Op Build
Buildroot overlay for Open Power
Stars: ✭ 82 (-2.38%)
Mutual labels:  makefile
K8s Mediaserver Operator
Repository for k8s Mediaserver Operator project
Stars: ✭ 81 (-3.57%)
Mutual labels:  makefile
Kodi Standalone Service
A systemd service to allow for standalone operation of kodi.
Stars: ✭ 83 (-1.19%)
Mutual labels:  makefile
Openwrt Kcptun
kcptun for OpenWrt
Stars: ✭ 80 (-4.76%)
Mutual labels:  makefile
Ont Assembly Polish
ONT assembly and Illumina polishing pipeline
Stars: ✭ 82 (-2.38%)
Mutual labels:  makefile
Ergodone
ErgoDox using pro micro. Original work by Dox. Brainhole association present
Stars: ✭ 84 (+0%)
Mutual labels:  makefile
Device Sony Yuga
Stars: ✭ 83 (-1.19%)
Mutual labels:  makefile
Vala Object
Use Vala from Ruby, Python, Lua, JavaScript (Node.js, gjs, seed) and many other languages
Stars: ✭ 82 (-2.38%)
Mutual labels:  makefile

Deep Learning Base Image

Today's deep learning frameworks require an extraordinary amount of work to install and run. This Docker image bundles all of the popular deep learning frameworks into a single container. Ubuntu Linux is the base OS of choice: CUDA requires it, and all of the bundled DL frameworks play nicely with it.

Supported DL frameworks:

  • TensorFlow
  • Caffe
  • MXNet
  • Torch
  • OpenFace

Other ML frameworks:

Usage

For GPU usage, see the GPU Usage section below.

Run the latest version. All DL frameworks are available at your fingertips:

docker run -it dominiek/deep-base:latest python
import tensorflow
import matplotlib
matplotlib.use('Agg')
import caffe
import openface
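
As a quick sanity check, the same imports can also be run as a one-off container that prints the TensorFlow version (a minimal sketch; the frameworks chosen here are just examples):

docker run --rm dominiek/deep-base:latest python -c "import matplotlib; matplotlib.use('Agg'); import caffe; import tensorflow as tf; print(tf.__version__)"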

Or a specific version tag:

docker pull dominiek/deep-base:v1.3
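
After pulling, you can confirm which deep-base tags are available locally with a standard Docker command:

docker images dominiek/deep-base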

To use deep-base as the base image for your deployment's Docker container, specify the appropriate FROM directive in your Dockerfile:

FROM dominiek/deep-base:v1.3
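
For example, a minimal Dockerfile for your own image could look like the sketch below (app.py and the my-dl-app tag are illustrative names, not part of this project):

FROM dominiek/deep-base:v1.3
COPY app.py /workdir/app.py
CMD ["python", "/workdir/app.py"]

Build and run it with the usual Docker commands, for example docker build -t my-dl-app . followed by docker run my-dl-app.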

To run code from the host OS, simply mount the source code directory:

mkdir code
echo 'import tensorflow' > code/app.py
docker run --volume `pwd`/code:/code -it dominiek/deep-base:latest python /code/app.py
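
The same pattern works for data: mount a separate directory for datasets or model checkpoints next to the code (the /data path below is an arbitrary choice, not something the image requires):

mkdir data
docker run --volume `pwd`/code:/code --volume `pwd`/data:/data -it dominiek/deep-base:latest python /code/app.py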

GPU Usage

GPU support requires additional libraries such as NVIDIA CUDA and cuDNN. There is a separate Docker repository for the GPU version:

FROM dominiek/deep-base-gpu:v1.3

Running the GPU image requires binding the host OS's CUDA libraries and devices into the container. The host must have the same CUDA version installed as deep-base uses internally (CUDA 8.0).

The most reliable way to do this is to use NVIDIA Docker:

nvidia-docker run -it dominiek/deep-base-gpu /bin/bash
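
Once inside the container, running nvidia-smi is a quick way to confirm that the GPUs are visible (this assumes the host's driver utilities are mounted into the container and on the PATH, which nvidia-docker normally arranges):

nvidia-smi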

Alternatively, you can use vanilla Docker and bind the libraries and devices yourself:

export CUDA_SO=$(\ls /usr/lib/x86_64-linux-gnu/libcuda.* | xargs -I{} echo '-v {}:{}')
export CUDA_DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
docker run --privileged $CUDA_SO $CUDA_DEVICES -it dominiek/deep-base-gpu /bin/bash
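
Before running the command above, it is worth confirming on the host that the libraries and devices the two exports glob for actually exist; if either listing comes back empty, the bind mounts will be empty as well:

ls /usr/lib/x86_64-linux-gnu/libcuda.*
ls /dev/nvidia*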

Now, to make sure that the GPU hardware is working correctly, use the cuda_device_query command inside the container:

root@<container-id>:/workdir# cuda_device_query
...
Result = PASS
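
The same check can be run non-interactively as a one-off container, which is convenient in scripts (this simply combines the nvidia-docker invocation above with the cuda_device_query command shipped in the image):

nvidia-docker run --rm dominiek/deep-base-gpu cuda_device_query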

Build a customized Docker image

This is optional. To start the build process, execute:

  make docker.build

During the build process, small tests are run to make sure the compiled Python bindings load properly.
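
Once the build completes, a quick smoke test of the freshly built image is to import one of the frameworks (this assumes the Makefile tags the result as dominiek/deep-base:latest; adjust the tag if your build names it differently):

docker run --rm dominiek/deep-base:latest python -c 'import tensorflow'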

For GPU support (requires CUDA-compatible host hardware and Linux host OS):

  make docker.build.gpu

Performance

There is a CPU and a GPU version of this Docker container. The latter requires CUDA-compatible hardware, which includes AWS GPU instances. When running Docker on a Linux host OS, no virtual machine is used and all CUDA hardware can be fully utilized.

Note, however, that on Windows and Mac OS X Docker runs inside a virtual machine such as VirtualBox, which does not support GPU passthrough. This means no GPU can be used on these host OSes. The recommended pattern is to use the virtualized setup for local development on Windows/Mac, and to use Linux for staging and production environments.

TODO

  • Add an MNIST example that can be run easily
  • Create a benchmark utility that shows the performance of the frameworks in a running instance
  • Use OpenBLAS for frameworks that support it
  • Reduce the size footprint of the image