
Azure / DistributedDeepLearning

License: MIT
Tutorials on running distributed deep learning on Batch AI

Programming Languages

shell
Jupyter Notebook
python

Projects that are alternatives to or similar to DistributedDeepLearning

docker-nvidia-glx-desktop
MATE Desktop container designed for Kubernetes supporting OpenGL GLX and Vulkan for NVIDIA GPUs with WebRTC and HTML5, providing an open source remote cloud graphics or game streaming platform. Spawns its own fully isolated X Server instead of using the host X server, therefore not requiring /tmp/.X11-unix host sockets or host configuration.
Stars: ✭ 47 (+104.35%)
Mutual labels:  nvidia, nvidia-docker
handbrake-nvenc-docker
Handbrake GUI with Web browser and VNC access. Supports NVENC encoding
Stars: ✭ 32 (+39.13%)
Mutual labels:  nvidia, nvidia-docker
GPU-Jupyterhub
Setting up a Jupyterhub Dockercontainer to spawn Jupyter Notebooks with GPU support (containing Tensorflow, Pytorch and Keras)
Stars: ✭ 23 (+0%)
Mutual labels:  nvidia, nvidia-docker
mpu
A shim driver that allows in-docker nvidia-smi to show the correct process list without modifying anything
Stars: ✭ 27 (+17.39%)
Mutual labels:  nvidia, nvidia-docker
fahclient
Dockerized Folding@home client with NVIDIA GPU support to help battle COVID-19
Stars: ✭ 38 (+65.22%)
Mutual labels:  nvidia, nvidia-docker
basecls
A codebase & model zoo for pretrained backbones based on MegEngine.
Stars: ✭ 29 (+26.09%)
Mutual labels:  distributed-training
ansible-nvidia
No description or website provided.
Stars: ✭ 32 (+39.13%)
Mutual labels:  nvidia-docker
libai
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
Stars: ✭ 284 (+1134.78%)
Mutual labels:  distributed-training
horovod-ansible
Create Horovod cluster easily using Ansible
Stars: ✭ 22 (-4.35%)
Mutual labels:  distributed-training
nvhtop
A tool for enriching the output of nvidia-smi, forked from peci1/nvidia-htop.
Stars: ✭ 21 (-8.7%)
Mutual labels:  nvidia
CuAssembler
An unofficial cuda assembler, for all generations of SASS, hopefully :)
Stars: ✭ 168 (+630.43%)
Mutual labels:  nvidia
nvbench
CUDA Kernel Benchmarking Library
Stars: ✭ 213 (+826.09%)
Mutual labels:  nvidia
rtx-voice-script
A python script that takes an input MP3/FLAC and outputs an acapella/background noise stripped WAV using the power of NVIDIA's RTX Voice
Stars: ✭ 50 (+117.39%)
Mutual labels:  nvidia
learn-gpgpu
Algorithms implemented in CUDA + resources about GPGPU
Stars: ✭ 37 (+60.87%)
Mutual labels:  nvidia
CUDAfy.NET
CUDAfy .NET allows easy development of high performance GPGPU applications completely from .NET. It's developed in C#.
Stars: ✭ 56 (+143.48%)
Mutual labels:  nvidia
PLSC
Paddle Large Scale Classification Tools, supports ArcFace, CosFace, PartialFC, Data Parallel + Model Parallel. Models include ResNet, ViT, DeiT, FaceViT.
Stars: ✭ 113 (+391.3%)
Mutual labels:  distributed-training
nvidia-auto-installer-for-fedora-linux
A CLI tool which lets you install proprietary NVIDIA drivers and much more easily on Fedora Linux (32 or above and Rawhide)
Stars: ✭ 270 (+1073.91%)
Mutual labels:  nvidia
Geforce-Kepler-patcher
Install NVIDIA binary files on the Snapshot disk for macOS Monterey 12
Stars: ✭ 285 (+1139.13%)
Mutual labels:  nvidia
fedora-prime
Simple program to switch between Intel and NVIDIA GPUs
Stars: ✭ 24 (+4.35%)
Mutual labels:  nvidia
nix-install-vendor-gl
Ensure that a system-compatible OpenGL driver is available for `nix-shell`-encapsulated programs.
Stars: ✭ 22 (-4.35%)
Mutual labels:  nvidia

Distributed Training on Batch AI

This repo is a tutorial on how to train a CNN model in a distributed fashion using Batch AI. The scenario covered is image classification, but the solution can be generalized for other deep learning scenarios such as segmentation and object detection.

Distributed training diagram

Image classification is a common task in computer vision applications and is often tackled by training a convolutional neural network (CNN). For particularly large models with large datasets, the training process can take weeks or months on a single GPU. In some situations, the models are so large that it isn't possible to fit reasonable batch sizes onto the GPU. Using distributed training in these situations helps shorten the training time.

In this specific scenario, a ResNet50 CNN is trained using Horovod on the ImageNet dataset as well as on synthetic data. The tutorial demonstrates how to accomplish this using three of the most popular deep learning frameworks: TensorFlow, Keras, and PyTorch.

There are a number of ways to train a deep learning model in a distributed fashion, including data parallel and model parallel approaches based on synchronous or asynchronous updates. Currently the most common scenario is data parallel with synchronous updates: it is the easiest to implement and is sufficient for the majority of use cases. In data parallel distributed training with synchronous updates, the model is replicated across N hardware devices and a mini-batch of training samples is divided into N micro-batches (see Figure 2). Each device performs the forward and backward pass for its micro-batch, and when it finishes it shares the resulting updates with the other devices. These are then used to calculate the updated weights for the entire mini-batch, and the weights are synchronized across the replicas. This is the scenario covered in the GitHub repository, although the same architecture can also be used for model parallel training and asynchronous updates.
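To make the data parallel pattern concrete, the sketch below shows a single-file Horovod/PyTorch training loop on synthetic, ImageNet-shaped data. It is a minimal illustration only, not the code from the repository's HorovodPytorch tutorial; the model choice, batch size, learning rate scaling, and dataset sizes are all assumptions made for brevity.

# Minimal sketch: data parallel training with synchronous updates via Horovod.
# One process per GPU; launch with e.g. `horovodrun -np 4 python train.py`.
import torch
import torch.nn as nn
import torch.utils.data as data
import torchvision.models as models
import horovod.torch as hvd

hvd.init()                                    # start Horovod, one process per GPU
torch.cuda.set_device(hvd.local_rank())       # pin this process to its local GPU

# Synthetic ImageNet-shaped data; the sampler gives each worker its own shard.
dataset = data.TensorDataset(torch.randn(256, 3, 224, 224),
                             torch.randint(0, 1000, (256,)))
sampler = data.distributed.DistributedSampler(
    dataset, num_replicas=hvd.size(), rank=hvd.rank())
loader = data.DataLoader(dataset, batch_size=32, sampler=sampler)

model = models.resnet50().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1 * hvd.size())

# Average gradients across workers on every step (synchronous updates) and
# make sure every replica starts from identical weights.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

loss_fn = nn.CrossEntropyLoss()
for epoch in range(2):
    sampler.set_epoch(epoch)                  # reshuffle shards each epoch
    for x, y in loader:
        x, y = x.cuda(), y.cuda()
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)           # forward pass on this micro-batch
        loss.backward()                       # backward pass; gradients are allreduced
        optimizer.step()                      # all replicas apply the same update
    if hvd.rank() == 0:
        print(f"epoch {epoch} loss {loss.item():.3f}")

Scaling the learning rate by the number of workers, as above, is the convention Horovod recommends for synchronous data parallel training, since the effective mini-batch grows with the number of replicas.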

Prerequisites

Setup

Before you begin, make sure you are logged in to your Docker Hub account by running the following on your machine:

docker login 

Setup Execution Environment

Before you can run anything, you will need to set up the environment in which you will execute the Batch AI commands. There are a number of dependencies, so we provide a dockerfile that takes care of them for you. If you don't want to use Docker, simply look inside the Docker directory at the dockerfile and environment.yml file for the dependencies. To build the container, run the following (replace all instances of <dockerhub account> with your own Docker Hub account name):

make build dockerhub=<dockerhub account>
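For reference, the build target essentially runs docker build against the dockerfile in the Docker directory and tags the image with your account name. The image name and tag below are illustrative assumptions, not necessarily what the Makefile actually uses:

docker build -t <dockerhub account>/distributed-deep-learning -f Docker/dockerfile .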

Then run the following command to start the environment (replace <data_location> with a location on your file system; make sure it has at least 300GB of free space for the ImageNet dataset):

make jupyter dockerhub=<dockerhub account> data=<data_location>
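Roughly speaking, this target starts the container with your data location mounted into it and the notebook port published. The command below is only an approximation of what the Makefile does; the mount point and image name are assumptions:

docker run -it -p 9999:9999 -v <data_location>:/data <dockerhub account>/distributed-deep-learning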

This will start a Jupyter notebook server on port 9999. Simply point your browser to the IP address or DNS name of your machine. From there you can navigate to 00_DataProcessing.ipynb to process the ImageNet data.

Once you have worked through the two prerequisite notebooks, 00_DataProcessing.ipynb and 01_CreateResources.ipynb, you can navigate to the tutorials for each of the frameworks: HorovodTF, HorovodPytorch, and HorovodKeras.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
