kubeflow / mpi-operator

License: Apache-2.0
Kubernetes Operator for Allreduce-style Distributed Training

Programming Languages

Go

Projects that are alternatives to or similar to MPI Operator

Theano-MPI
MPI Parallel framework for training deep learning models built in Theano
Stars: ✭ 55 (-71.05%)
Mutual labels:  mpi, distributed-computing
frovedis
Framework of vectorized and distributed data analytics
Stars: ✭ 59 (-68.95%)
Mutual labels:  mpi, distributed-computing
Easylambda
distributed dataflows with functional list operations for data processing with C++14
Stars: ✭ 475 (+150%)
Mutual labels:  distributed-computing, mpi
Horovod
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
Stars: ✭ 11,943 (+6185.79%)
Mutual labels:  mpi
Local Cluster
Easy local cluster creation for Elixir to aid in unit testing
Stars: ✭ 142 (-25.26%)
Mutual labels:  distributed-computing
Quda
QUDA is a library for performing calculations in lattice QCD on GPUs.
Stars: ✭ 166 (-12.63%)
Mutual labels:  mpi
Mlcomp
Distributed DAG (Directed acyclic graph) framework for machine learning with UI
Stars: ✭ 183 (-3.68%)
Mutual labels:  distributed-computing
Core
parallel finite element unstructured meshes
Stars: ✭ 124 (-34.74%)
Mutual labels:  mpi
Hpcinfo
Information about many aspects of high-performance computing. Wiki content moved to ~/docs.
Stars: ✭ 171 (-10%)
Mutual labels:  mpi
Future.apply
🚀 R package: future.apply - Apply Function to Elements in Parallel using Futures
Stars: ✭ 159 (-16.32%)
Mutual labels:  distributed-computing
Sysmon
An intuitive remotely-accessible system performance monitoring and task management tool for servers and headless Raspberry Pi setups.
Stars: ✭ 158 (-16.84%)
Mutual labels:  distributed-computing
Dizk
Java library for distributed zero knowledge proof systems
Stars: ✭ 140 (-26.32%)
Mutual labels:  distributed-computing
Klyng
A message-passing distributed computing framework for node.js
Stars: ✭ 167 (-12.11%)
Mutual labels:  distributed-computing
Orleans.clustering.kubernetes
Orleans Membership provider for Kubernetes
Stars: ✭ 140 (-26.32%)
Mutual labels:  distributed-computing
Cuneiform
Cuneiform distributed programming language
Stars: ✭ 175 (-7.89%)
Mutual labels:  distributed-computing
Dash
DASH, the C++ Template Library for Distributed Data Structures with Support for Hierarchical Locality for HPC and Data-Driven Science
Stars: ✭ 134 (-29.47%)
Mutual labels:  mpi
Libgrape Lite
🍇 A C++ library for parallel graph processing 🍇
Stars: ✭ 169 (-11.05%)
Mutual labels:  mpi
Geni
A Clojure dataframe library that runs on Spark
Stars: ✭ 152 (-20%)
Mutual labels:  distributed-computing
Wukong Agent
Web scan foundation framework
Stars: ✭ 153 (-19.47%)
Mutual labels:  distributed-computing
Hydra Express
A module which wraps Hydra and ExpressJS into a library for building distributed applications - such as microservices
Stars: ✭ 166 (-12.63%)
Mutual labels:  distributed-computing

MPI Operator

The MPI Operator makes it easy to run allreduce-style distributed training on Kubernetes. Please check out this blog post for an introduction to MPI Operator and its industry adoption.

Installation

You can deploy the operator with default settings by running the following commands:

git clone https://github.com/kubeflow/mpi-operator
cd mpi-operator
kubectl create -f deploy/v1alpha2/mpi-operator.yaml
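
To confirm that the operator came up, you can look for its pod across namespaces. This assumes the pod name contains mpi-operator, which may vary with the manifest version:

kubectl get pods --all-namespaces | grep mpi-operator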

Alternatively, follow the getting started guide to deploy Kubeflow.

An alpha version of MPI support was introduced with Kubeflow 0.2.0. You must be using a version of Kubeflow newer than 0.2.0.

You can check whether the MPIJob custom resource definition is installed via:

kubectl get crd

The output should include mpijobs.kubeflow.org like the following:

NAME                                       AGE
...
mpijobs.kubeflow.org                       4d
...
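
For more detail than the name and age, you can also describe the CRD itself (the resource name is taken from the output above):

kubectl describe crd mpijobs.kubeflow.org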

If it is not included, you can add it as follows using kustomize:

git clone https://github.com/kubeflow/mpi-operator
cd mpi-operator/manifests
kustomize build overlays/kubeflow | kubectl apply -f -

Note that since Kubernetes v1.14, kustomize has been built into kubectl as a subcommand, so you can also run the following command instead:

kubectl kustomize base | kubectl apply -f -
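
The built-in kustomize support also works with the Kubeflow overlay used above, for example:

kubectl kustomize overlays/kubeflow | kubectl apply -f -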

Creating an MPI Job

You can create an MPI job by defining an MPIJob config file. See the TensorFlow benchmark example config file for launching a multi-node TensorFlow benchmark training job. You may modify the config file based on your requirements.

cat examples/v1alpha2/tensorflow-benchmarks.yaml

Deploy the MPIJob resource to start training:

kubectl create -f examples/v1alpha2/tensorflow-benchmarks.yaml
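
For a quick experiment, you can also apply a minimal MPIJob inline. The sketch below is an illustration only: the job name, container names, and launcher command are placeholders, while the field names and the image mirror the v1alpha2 example shown in the next section.

kubectl apply -f - <<EOF
apiVersion: kubeflow.org/v1alpha2
kind: MPIJob
metadata:
  name: my-mpi-job                  # placeholder name
spec:
  slotsPerWorker: 1
  cleanPodPolicy: Running
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
          - name: launcher
            image: mpioperator/tensorflow-benchmarks:latest
            command:
            - mpirun
            - --allow-run-as-root
            - -np
            - "1"                   # one worker x one slot per worker
            - python
            - scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py
            - --model=resnet101
            - --variable_update=horovod
    Worker:
      replicas: 1
      template:
        spec:
          containers:
          - name: worker
            image: mpioperator/tensorflow-benchmarks:latest
EOF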

Monitoring an MPI Job

Once the MPIJob resource is created, you should be able to see pods created to match the specified number of GPUs. You can also monitor the job status from the status section. Here is sample output when the job has successfully completed.

kubectl get -o yaml mpijobs tensorflow-benchmarks
apiVersion: kubeflow.org/v1alpha2
kind: MPIJob
metadata:
  creationTimestamp: "2019-07-09T22:15:51Z"
  generation: 1
  name: tensorflow-benchmarks
  namespace: default
  resourceVersion: "5645868"
  selfLink: /apis/kubeflow.org/v1alpha2/namespaces/default/mpijobs/tensorflow-benchmarks
  uid: 1c5b470f-a297-11e9-964d-88d7f67c6e6d
spec:
  cleanPodPolicy: Running
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
          - command:
            - mpirun
            - --allow-run-as-root
            - -np
            - "2"
            - -bind-to
            - none
            - -map-by
            - slot
            - -x
            - NCCL_DEBUG=INFO
            - -x
            - LD_LIBRARY_PATH
            - -x
            - PATH
            - -mca
            - pml
            - ob1
            - -mca
            - btl
            - ^openib
            - python
            - scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py
            - --model=resnet101
            - --batch_size=64
            - --variable_update=horovod
            image: mpioperator/tensorflow-benchmarks:latest
            name: tensorflow-benchmarks
    Worker:
      replicas: 1
      template:
        spec:
          containers:
          - image: mpioperator/tensorflow-benchmarks:latest
            name: tensorflow-benchmarks
            resources:
              limits:
                nvidia.com/gpu: 2
  slotsPerWorker: 2
status:
  completionTime: "2019-07-09T22:17:06Z"
  conditions:
  - lastTransitionTime: "2019-07-09T22:15:51Z"
    lastUpdateTime: "2019-07-09T22:15:51Z"
    message: MPIJob default/tensorflow-benchmarks is created.
    reason: MPIJobCreated
    status: "True"
    type: Created
  - lastTransitionTime: "2019-07-09T22:15:54Z"
    lastUpdateTime: "2019-07-09T22:15:54Z"
    message: MPIJob default/tensorflow-benchmarks is running.
    reason: MPIJobRunning
    status: "False"
    type: Running
  - lastTransitionTime: "2019-07-09T22:17:06Z"
    lastUpdateTime: "2019-07-09T22:17:06Z"
    message: MPIJob default/tensorflow-benchmarks successfully completed.
    reason: MPIJobSucceeded
    status: "True"
    type: Succeeded
  replicaStatuses:
    Launcher:
      succeeded: 1
    Worker: {}
  startTime: "2019-07-09T22:15:51Z"
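
To check just the completion condition rather than dumping the whole object, a jsonpath query along these lines should work (the job name and the Succeeded condition type come from the output above):

kubectl get mpijobs tensorflow-benchmarks -o jsonpath='{.status.conditions[?(@.type=="Succeeded")].status}'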

Training should run for 100 steps and take a few minutes on a GPU cluster. You can inspect the logs to watch training progress. Once the job starts, access the logs from the launcher pod:

PODNAME=$(kubectl get pods -l mpi_job_name=tensorflow-benchmarks,mpi_role_type=launcher -o name)
kubectl logs -f ${PODNAME}
TensorFlow:  1.14
Model:       resnet101
Dataset:     imagenet (synthetic)
Mode:        training
SingleSess:  False
Batch size:  128 global
             64 per device
Num batches: 100
Num epochs:  0.01
Devices:     ['horovod/gpu:0', 'horovod/gpu:1']
NUMA bind:   False
Data format: NCHW
Optimizer:   sgd
Variables:   horovod

...

40	images/sec: 154.4 +/- 0.7 (jitter = 4.0)	8.280
40	images/sec: 154.4 +/- 0.7 (jitter = 4.1)	8.482
50	images/sec: 154.8 +/- 0.6 (jitter = 4.0)	8.397
50	images/sec: 154.8 +/- 0.6 (jitter = 4.2)	8.450
60	images/sec: 154.5 +/- 0.5 (jitter = 4.1)	8.321
60	images/sec: 154.5 +/- 0.5 (jitter = 4.4)	8.349
70	images/sec: 154.5 +/- 0.5 (jitter = 4.0)	8.433
70	images/sec: 154.5 +/- 0.5 (jitter = 4.4)	8.430
80	images/sec: 154.8 +/- 0.4 (jitter = 3.6)	8.199
80	images/sec: 154.8 +/- 0.4 (jitter = 3.8)	8.404
90	images/sec: 154.6 +/- 0.4 (jitter = 3.7)	8.418
90	images/sec: 154.6 +/- 0.4 (jitter = 3.6)	8.459
100	images/sec: 154.2 +/- 0.4 (jitter = 4.0)	8.372
100	images/sec: 154.2 +/- 0.4 (jitter = 4.0)	8.542
----------------------------------------------------------------
total images/sec: 308.27
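
When you are done, deleting the MPIJob resource cleans up the job, for example by deleting the same manifest that created it:

kubectl delete -f examples/v1alpha2/tensorflow-benchmarks.yaml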

Docker Images

Docker images are built and pushed automatically to mpioperator on Docker Hub. You can use the following Dockerfiles to build the images yourself:
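
If you build locally, a generic build-and-tag command would look something like the following; the Dockerfile path and image tag are placeholders rather than paths taken from the repository:

docker build -f path/to/Dockerfile -t example.com/mpi-operator:dev .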
