
tlkh / AI Lab

License: GPL-3.0
All-in-one AI container for rapid prototyping


Projects that are alternatives of or similar to Ai Lab

Deep Learning Boot Camp
A community run, 5-day PyTorch Deep Learning Bootcamp
Stars: ✭ 1,270 (+212.81%)
Mutual labels:  nvidia, data-science, cuda
Gprmax
gprMax is open source software that simulates electromagnetic wave propagation using the Finite-Difference Time-Domain (FDTD) method for numerical modelling of Ground Penetrating Radar (GPR)
Stars: ✭ 268 (-33.99%)
Mutual labels:  nvidia, cuda
Polyaxon
Machine Learning Platform for Kubernetes (MLOps tools for experimentation and automation)
Stars: ✭ 2,966 (+630.54%)
Mutual labels:  data-science, jupyter
Lantern
Data exploration glue
Stars: ✭ 292 (-28.08%)
Mutual labels:  data-science, jupyter
cuda-toolkit
GitHub Action to install CUDA
Stars: ✭ 34 (-91.63%)
Mutual labels:  cuda, nvidia
opencv-cuda-docker
Dockerfiles for OpenCV compiled with CUDA, opencv_contrib modules and Python 3 bindings
Stars: ✭ 55 (-86.45%)
Mutual labels:  cuda, nvidia
Deep Diamond
A fast Clojure Tensor & Deep Learning library
Stars: ✭ 288 (-29.06%)
Mutual labels:  nvidia, cuda
lane detection
Lane detection for the Nvidia Jetson TX2 using OpenCV4Tegra
Stars: ✭ 15 (-96.31%)
Mutual labels:  cuda, nvidia
Tensorwatch
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Stars: ✭ 3,191 (+685.96%)
Mutual labels:  data-science, jupyter
Thrust
The C++ parallel algorithms library.
Stars: ✭ 3,595 (+785.47%)
Mutual labels:  nvidia, cuda
Quantitative Notebooks
Educational notebooks on quantitative finance, algorithmic trading, financial modelling and investment strategy
Stars: ✭ 356 (-12.32%)
Mutual labels:  data-science, jupyter
JetScan
JetScan : GPU accelerated portable RGB-D reconstruction system
Stars: ✭ 77 (-81.03%)
Mutual labels:  cuda, nvidia
peakperf
Achieve peak performance on x86 CPUs and NVIDIA GPUs
Stars: ✭ 33 (-91.87%)
Mutual labels:  cuda, nvidia
Torch-TensorRT
PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT
Stars: ✭ 1,216 (+199.51%)
Mutual labels:  cuda, nvidia
ONNX-Runtime-with-TensorRT-and-OpenVINO
Docker scripts for building ONNX Runtime with TensorRT and OpenVINO in manylinux environment
Stars: ✭ 15 (-96.31%)
Mutual labels:  cuda, nvidia
Gophernotes
The Go kernel for Jupyter notebooks and nteract.
Stars: ✭ 3,100 (+663.55%)
Mutual labels:  data-science, jupyter
Ilgpu
ILGPU JIT Compiler for high-performance .Net GPU programs
Stars: ✭ 374 (-7.88%)
Mutual labels:  nvidia, cuda
Nvidia Modded Inf
Modified nVidia .inf files to run drivers on all video cards, research & telemetry free drivers
Stars: ✭ 227 (-44.09%)
Mutual labels:  nvidia, cuda
Plotoptix
Data visualisation in Python based on OptiX 7.2 ray tracing framework.
Stars: ✭ 252 (-37.93%)
Mutual labels:  nvidia, cuda
Komputation
Komputation is a neural network framework for the Java Virtual Machine written in Kotlin and CUDA C.
Stars: ✭ 295 (-27.34%)
Mutual labels:  nvidia, cuda


All-in-one AI development container for rapid prototyping, compatible with the nvidia-docker GPU-accelerated container runtime as well as JupyterHub. This is designed as a lighter and more portable alternative to the various cloud provider "Deep Learning Virtual Machines". Get up and running with a wide range of machine learning and deep learning tasks by pulling and running the container on your workstation, in the cloud, or within JupyterHub.

What's included?

(Images: included frameworks and IDE tools)

Using the AI Lab Container

This image can be used together with NVIDIA GPUs on workstations, servers, and cloud instances. It can also be used via JupyterHub deployments, as no additional ports are required for things like TensorBoard. Please note that the following instructions assume you already have the NVIDIA drivers and container runtime installed. If not, here are some quick instructions.
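For reference, a rough sketch of that host setup on Ubuntu follows. The driver version and repository URLs here follow the standard nvidia-docker2 install flow, but treat them as assumptions and defer to the linked instructions for your distribution:

```shell
# Install an NVIDIA driver (418 or newer recommended; package name varies by distro)
sudo apt-get update && sudo apt-get install -y nvidia-driver-418

# Add the nvidia-docker apt repository and install the container runtime
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2

# Restart Docker so it picks up the NVIDIA runtime
sudo systemctl restart docker
```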

Pulling the container

docker pull nvaitc/ai-lab:20.03

Running an interactive shell (bash)

nvidia-docker run --rm -it nvaitc/ai-lab:20.03 bash

Run Jupyter Notebook

The additional command line flags define the following options:

  • forward port 8888 to your host machine
  • mount /home/$USER as the working directory (/home/jovyan)
nvidia-docker run --rm \
 -p 8888:8888 \
 -v /home/$USER:/home/jovyan \
 nvaitc/ai-lab:20.03

Run JupyterLab by replacing tree with lab in the browser address bar.

By default, the Jupyter interface has a blank password. To set your own password, pass it in the NB_PASSWD environment variable as follows:

nvidia-docker run --rm \
 -p 8888:8888 \
 -v /home/$USER:/home/jovyan \
 -e NB_PASSWD='mypassword' \
 nvaitc/ai-lab:20.03

Run Batch Job

It is also perfectly possible to run a batch job with this container, be it on a workstation or as part of a larger cluster with a scheduler that can schedule Docker containers.

nvidia-docker run --rm nvaitc/ai-lab:20.03 bash -c 'echo "Hello world!" && python3 script.py'

Additional Instructions

For extended instructions, please take a look at: INSTRUCTIONS.md.

INSTRUCTIONS.md contains full instructions and addresses common questions on deploying to public cloud (GCP/AWS), as well as using PyTorch DataLoader or troubleshooting permission issues with some setups.

If you have any ideas or suggestions, please feel free to open an issue.

FAQ

1. Can I modify/build this container myself?

Sure! The Dockerfile is provided in this repository. All you need is a fast internet connection and about an hour to build the container from scratch.

Should you only require some extra packages, you can build your own Docker image using nvaitc/ai-lab as the base image.

For a detailed guide, check out BUILD.md.
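As a sketch, a derived image that layers a couple of extra packages on top of the base image could look like the following (the two package names are just illustrative examples):

```dockerfile
# Use the AI Lab image as the base
FROM nvaitc/ai-lab:20.03

# Install extra Python packages into the already-configured environment
RUN pip3 install --no-cache-dir mxnet-cu101 gluoncv
```

Build it with something like docker build -t my-ai-lab . and then run it exactly as you would the stock image.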

2. Do you support MXNet/some-package?

See Point 1 above for how to add MXNet (or another package) to the container. I chose not to distribute MXNet with the container because it is less widely used and large in size, and it can easily be installed with pip since the environment is already properly configured. If there is a package that you would like to see added, open an issue.
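For example, installing MXNet from a notebook terminal inside a running container is a one-liner; the exact package name depends on the CUDA version, so mxnet-cu101 below is an assumption:

```shell
# Install a CUDA-enabled MXNet build into the user site-packages
pip3 install --user mxnet-cu101
```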

3. Do you support multi-node or multi-GPU tasks?

Multi-GPU has been tested with tf.distribute and Horovod, and it works as expected. Multi-node has not been tested.
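As a sketch, a multi-GPU Horovod run inside the container might be launched like this (the script name train.py is a placeholder, and -np should match your GPU count):

```shell
# Launch 4 Horovod worker processes, one per GPU, inside the container
nvidia-docker run --rm \
 -v /home/$USER:/home/jovyan \
 nvaitc/ai-lab:20.03 \
 horovodrun -np 4 python3 train.py
```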

4. Can I get hardware accelerated GUI (OpenGL) applications?

Yes! Be sure to pull the vnc version of the container e.g. nvaitc/ai-lab:20.03-vnc and use the "New" menu in Jupyter Notebook to launch a new VNC Desktop. This will allow you to use a virtual desktop interface. Next, you need to allow the container to access your host X server (this may be a security concern for some people).

xhost +local:root
nvidia-docker run --rm \
 -e "DISPLAY" \
 -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
 -p 8888:8888 \
 -v /home/$USER:/home/jovyan \
 nvaitc/ai-lab:20.03-vnc

Next, start your application adding vglrun in front of the application command (e.g. vglrun glxgears). You can see a video of SuperTuxKart running in the VNC desktop here.

5. How does this contrast with NGC containers?

NVIDIA GPU Cloud (NGC) features NVIDIA tuned, tested, certified, and maintained containers for deep learning and HPC frameworks that take full advantage of NVIDIA GPUs on supported systems, such as NVIDIA DGX products. We recommend the use of NGC containers for performance critical and production workloads.

The AI Lab container was designed for students and researchers. The container is primarily designed to create a frictionless experience (by including all frameworks) during the initial prototyping and exploration phase, with a focus on iteration with fast feedback and less focus on deciding on specific approaches or frameworks. This is not an official NVIDIA product!

If you would like to use NGC containers in an AI Lab like container, there is an example of how you can build one yourself. Take a look at tf-amp.Dockerfile. Do note that you are restricted from distributing derivative images from NGC containers in a public Docker registry.

6. What GPUs do you support?

The container supports compute capability 6.0, 6.1, 7.0, 7.5:

  • Pascal (P100, GTX 10-series)
  • Volta (V100, Titan V)
  • Turing (T4, RTX 20-series)
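If you are unsure which GPU you have, nvidia-smi can tell you. Note that the compute_cap query field is only available on relatively recent drivers, so listing the GPU name is the safe fallback:

```shell
# List the GPUs present on the host
nvidia-smi -L

# On newer drivers, query the compute capability directly
nvidia-smi --query-gpu=name,compute_cap --format=csv
```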

7. Any detailed system requirements?

  1. Ubuntu 18.04+, or a close derivative distro
  2. NVIDIA drivers (>=418, or >=410 Tesla-ready driver)
  3. NVIDIA container runtime (nvidia-docker)
  4. NVIDIA Pascal, Volta or Turing GPU
    • If you have a GTX 10-series or newer GPU, you're fine
    • K80 and GTX 9-series cards are not supported
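Once the requirements above are in place, a quick smoke test is to run nvidia-smi inside the container; if your GPU shows up in the output, the driver and container runtime are wired up correctly:

```shell
nvidia-docker run --rm nvaitc/ai-lab:20.03 nvidia-smi
```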

Support

  • Core Maintainer: Timothy Liu (tlkh)
  • This is not an official NVIDIA product!
  • The website, its software and all content found on it are provided on an “as is” and “as available” basis. NVIDIA/NVAITC does not give any warranties, whether express or implied, as to the suitability or usability of the website, its software or any of its content. NVIDIA/NVAITC will not be liable for any loss, whether such loss is direct, indirect, special or consequential, suffered by any party as a result of their use of the libraries or content. Any usage of the libraries is done at the user’s own risk and the user will be solely responsible for any damage to any computer system or loss of data that results from such activities.
  • Please open an issue if you encounter problems or have a feature request

Adapted from the Jupyter Docker Stacks

