
sylabs / Wlm Operator

Licence: other
Singularity implementation of k8s operator for interacting with SLURM.

Programming Languages

go
31211 projects - #10 most used programming language

Projects that are alternatives of or similar to Wlm Operator

Bonny
The Elixir based Kubernetes Development Framework
Stars: ✭ 190 (+143.59%)
Mutual labels:  k8s, kubernetes-operator
td-redis-operator
A powerful cloud-native Redis operator, proven in large-scale production, supporting distributed clusters, active/standby failover, and other cache-cluster solutions.
Stars: ✭ 327 (+319.23%)
Mutual labels:  k8s, kubernetes-operator
Rbacsync
Automatically sync groups into Kubernetes RBAC
Stars: ✭ 197 (+152.56%)
Mutual labels:  k8s, kubernetes-operator
K8gb
A cloud native Kubernetes Global Balancer
Stars: ✭ 113 (+44.87%)
Mutual labels:  k8s, kubernetes-operator
couchdb-operator
prototype kubernetes operator for couchDB
Stars: ✭ 17 (-78.21%)
Mutual labels:  k8s, kubernetes-operator
Cronjobber
Cronjobber is a cronjob controller for Kubernetes with support for time zones
Stars: ✭ 169 (+116.67%)
Mutual labels:  k8s, kubernetes-operator
kotary
Managing Kubernetes Quota with confidence
Stars: ✭ 85 (+8.97%)
Mutual labels:  k8s, kubernetes-operator
Eunomia
A GitOps Operator for Kubernetes
Stars: ✭ 130 (+66.67%)
Mutual labels:  k8s, kubernetes-operator
siddhi-operator
Operator allows you to run stream processing logic directly on a Kubernetes cluster
Stars: ✭ 16 (-79.49%)
Mutual labels:  k8s, kubernetes-operator
grafana-operator
An operator for Grafana that installs and manages Grafana instances, Dashboards and Datasources through Kubernetes/OpenShift CRs
Stars: ✭ 449 (+475.64%)
Mutual labels:  k8s, kubernetes-operator
K8s Mediaserver Operator
Repository for k8s Mediaserver Operator project
Stars: ✭ 81 (+3.85%)
Mutual labels:  k8s, kubernetes-operator
Wfl
A Simple Way of Creating Job Workflows in Go running in Processes, Containers, Tasks, Pods, or Jobs
Stars: ✭ 30 (-61.54%)
Mutual labels:  hpc, k8s
infinispan-operator
Infinispan Operator
Stars: ✭ 32 (-58.97%)
Mutual labels:  k8s, kubernetes-operator
mloperator
Machine Learning Operator & Controller for Kubernetes
Stars: ✭ 85 (+8.97%)
Mutual labels:  k8s, kubernetes-operator
rabbitmq-operator
RabbitMQ Kubernetes operator
Stars: ✭ 16 (-79.49%)
Mutual labels:  k8s, kubernetes-operator
Sens8
Kubernetes controller for Sensu checks
Stars: ✭ 42 (-46.15%)
Mutual labels:  k8s, kubernetes-operator
Slurm In Docker
Slurm in Docker - Exploring Slurm using CentOS 7 based Docker images
Stars: ✭ 63 (-19.23%)
Mutual labels:  hpc
Parenchyma
An extensible HPC framework for CUDA, OpenCL and native CPU.
Stars: ✭ 71 (-8.97%)
Mutual labels:  hpc
Kubedev
A simpler and more powerful Kubernetes Dashboard
Stars: ✭ 62 (-20.51%)
Mutual labels:  k8s
Kubeadm Dind Cluster
[EOL] A Kubernetes multi-node test cluster based on kubeadm
Stars: ✭ 1,112 (+1325.64%)
Mutual labels:  k8s

WLM-operator

The singularity-cri and wlm-operator projects were created by Sylabs to explore interaction between the Kubernetes and HPC worlds. In 2020, rather than dilute our efforts over a large number of projects, we have focused on Singularity itself and our supporting services. We're also looking forward to introducing new features and technologies in 2021.

At this point we have archived the repositories to indicate that they aren't under active development or maintenance. We recognize there is still interest in singularity-cri and wlm-operator, and we'd like these projects to find a home within a community that can further develop and maintain them. The code is open-source under the Apache License 2.0, to be compatible with other projects in the k8s ecosystem.

Please reach out to us via [email protected] if you are interested in establishing a new home for the projects.



WLM operator is a Kubernetes operator implementation capable of submitting and monitoring WLM jobs while using Kubernetes features such as smart scheduling and volumes.

WLM operator connects a Kubernetes node with a whole WLM cluster, which enables multi-cluster scheduling. In other words, Kubernetes integrates with WLM as one-to-many.

Each WLM partition (queue) is represented as a dedicated virtual node in Kubernetes. WLM operator can automatically discover WLM partition resources (CPUs, memory, nodes, wall time) and propagate them to Kubernetes by labeling the virtual node. Those node labels are respected during Slurm job scheduling, so a job will land only on a suitable partition with enough resources.

Right now wlm-operator supports only Slurm clusters, but adding support for another WLM is straightforward: implement a gRPC server for it, using the current Slurm implementation as a reference.
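To give a feel for the work involved, here is a hedged sketch of the operations such a server has to cover. The WorkloadManager interface and fakeWLM type below are hypothetical illustrations, not part of the actual red-box protocol (whose protobuf definitions live in the repository):

```go
package main

import "fmt"

// WorkloadManager is a hypothetical subset of the operations a WLM
// adapter must expose over gRPC: submit a batch script, cancel a job,
// and report a job's status.
type WorkloadManager interface {
	SubmitJob(script string) (jobID int64, err error)
	CancelJob(jobID int64) error
	JobStatus(jobID int64) (string, error)
}

// fakeWLM is an in-memory stand-in implementation used for illustration;
// a real adapter would shell out to (or link against) the target WLM.
type fakeWLM struct {
	nextID int64
	status map[int64]string
}

func (f *fakeWLM) SubmitJob(script string) (int64, error) {
	f.nextID++
	f.status[f.nextID] = "PENDING"
	return f.nextID, nil
}

func (f *fakeWLM) CancelJob(id int64) error {
	f.status[id] = "CANCELLED"
	return nil
}

func (f *fakeWLM) JobStatus(id int64) (string, error) {
	return f.status[id], nil
}

func main() {
	var wlm WorkloadManager = &fakeWLM{status: map[int64]string{}}
	id, _ := wlm.SubmitJob("#!/bin/sh\nsrun hostname")
	s, _ := wlm.JobStatus(id)
	fmt.Println(id, s) // 1 PENDING
}
```

A Slurm-shaped adapter would implement the same surface by invoking sbatch, scancel and sacct, which is essentially what red-box does.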

Installation

Since wlm-operator is built with Go modules there is no need to create a standard Go workspace. If you still prefer keeping the source code under GOPATH, make sure GO111MODULE is set.
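For example, when building from a checkout that sits inside GOPATH (outside GOPATH this is unnecessary on Go 1.11+):

```shell
# Force module mode even when the source tree lives under GOPATH.
export GO111MODULE=on
```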

Prerequisites

  • Go 1.11+

Installation steps

The installation process connects Kubernetes with a Slurm cluster.

NOTE: the process below describes installation for a single Slurm cluster; the same steps should be performed for each cluster to be connected.

  1. Create a new Kubernetes node with Singularity-CRI on the Slurm login host. Make sure you set up NoSchedule taint so that no random pod will be scheduled there.

  2. Create a new dedicated user on the Slurm login host. All submitted Slurm jobs will be executed on behalf of that user. Make sure the user has execute permissions for the following Slurm binaries: sbatch, scancel, sacct and scontrol.
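A quick way to verify the permission requirement is a small helper run as that user. This is a convenience sketch, not part of wlm-operator; the binary list comes from the step above:

```shell
# check_wlm_bins: report which of the given binaries the current user
# can invoke; returns non-zero if any are missing from PATH.
check_wlm_bins() {
    missing=0
    for bin in "$@"; do
        if ! command -v "$bin" >/dev/null 2>&1; then
            echo "missing: $bin" >&2
            missing=1
        fi
    done
    return "$missing"
}

# After defining the function in the dedicated user's shell, run e.g.:
#   check_wlm_bins sbatch scancel sacct scontrol
```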

  3. Clone the repo.

git clone https://github.com/sylabs/wlm-operator
  4. Build and start red-box, a gRPC proxy between Kubernetes and a Slurm cluster.
cd wlm-operator && make

Use the dedicated user from step 2 to run red-box, e.g. set up User in a systemd red-box.service unit. By default red-box listens on /var/run/syslurm/red-box.sock, so you have to make sure the user has read and write permissions for /var/run/syslurm.
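A minimal red-box.service might look like the following sketch; the user name slurm-agent, the binary path, and the flags are illustrative assumptions, not values shipped with the project:

```ini
[Unit]
Description=red-box gRPC proxy for wlm-operator
After=network.target

[Service]
# Assumed dedicated user from step 2.
User=slurm-agent
# Assumed install path of the binary built by `make`.
ExecStart=/usr/local/bin/red-box
# RuntimeDirectory makes systemd create /run/syslurm owned by the
# service user, satisfying the socket-path permission requirement.
RuntimeDirectory=syslurm
Restart=on-failure

[Install]
WantedBy=multi-user.target
```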

  5. Set up Slurm operator in Kubernetes.
kubectl apply -f deploy/crds/slurm_v1alpha1_slurmjob.yaml
kubectl apply -f deploy/operator-rbac.yaml
kubectl apply -f deploy/operator.yaml

This will create a new CRD that introduces SlurmJob to Kubernetes. After that, a Kubernetes controller for the SlurmJob CRD is set up as a Deployment.

  6. Start up the configurator that will bring up a virtual node for each partition in the Slurm cluster.
kubectl apply -f deploy/configurator.yaml

After all those steps the Kubernetes cluster is ready to run SlurmJobs.

Usage

The most convenient way to submit Slurm jobs is with YAML files; take a look at the basic examples.

We will walk through a basic example of how to submit jobs to Slurm in Vagrant.

apiVersion: wlm.sylabs.io/v1alpha1
kind: SlurmJob
metadata:
  name: cow
spec:
  batch: |
    #!/bin/sh
    #SBATCH --nodes=1
    #SBATCH --output cow.out
    srun singularity pull -U library://sylabsed/examples/lolcow
    srun singularity run lolcow_latest.sif
    srun rm lolcow_latest.sif
  nodeSelector:
    wlm.sylabs.io/containers: singularity
  results:
    from: cow.out
    mount:
      name: data
      hostPath:
        path: /home/job-results
        type: DirectoryOrCreate

In the example above we run the lolcow Singularity container in Slurm and collect the results to /home/job-results on the k8s node where the job was scheduled. Generally, job results can be collected to any supported k8s volume.

The Slurm job specification will be processed by the operator, and a dummy pod will be scheduled in order to transfer the job specification to a specific queue. That dummy pod has no actual physical process under the hood; instead, its specification is used to schedule the Slurm job directly on a connected cluster. To collect results, another pod will be created with UID and GID 1000 (the default values), so you should make sure it has write access to the volume where you want to store the results (the host directory /home/job-results in the example above). The UID and GID are inherited from the virtual kubelet that spawns the pod, and the virtual kubelet inherits them from the configurator (see runAsUser in configurator.yaml).

After that you can submit the cow job:

$ kubectl apply -f examples/cow.yaml 
slurmjob.wlm.sylabs.io "cow" created

$ kubectl get slurmjob
NAME   AGE   STATUS
cow    66s   Succeeded


$ kubectl get pod
NAME                             READY   STATUS         RESTARTS   AGE
cow-job                          0/1     Job finished   0          17s
cow-job-collect                  0/1     Completed      0          9s

Validate that the job results appeared on the node:

$ ls -la /home/job-results
cow-job
  
$ ls /home/job-results/cow-job 
cow.out

$ cat cow.out
WARNING: No default remote in use, falling back to: https://library.sylabs.io
 _________________________________________
/ It is right that he too should have his \
| little chronicle, his memories, his     |
| reason, and be able to recognize the    |
| good in the bad, the bad in the worst,  |
| and so grow gently old all down the     |
| unchanging days and die one day like    |
| any other day, only shorter.            |
|                                         |
\ -- Samuel Beckett, "Malone Dies"        /
 -----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Results collection

The Slurm operator supports collecting results into a k8s volume, so that a user doesn't need access to the Slurm cluster to analyze job results.

However, some configuration is required for this feature to work. More specifically, results can be collected from the login host only (i.e. where red-box is running), while a Slurm job can be scheduled on an arbitrary Slurm worker node. This means some kind of shared storage should be configured among the Slurm nodes, so that regardless of which Slurm worker node runs a job, the results also appear on the login host. NOTE: result collection is a network- and IO-consuming task, so collecting large files (e.g. a 1 GB result of an ML job) may not be a great idea.

Let's walk through the basic configuration steps. In the following it is assumed that the file cow.out from the example above is being collected. This file can be found on the Slurm worker node that executed the job; more specifically, in the folder from which the job was submitted (i.e. red-box's working directory). Configuration for other result files will differ in the shared paths only:

$RESULTS_DIR = red-box's working directory

Share $RESULTS_DIR among all Slurm nodes, e.g. set up an NFS share for $RESULTS_DIR.
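As a sketch of the NFS option, the login host could export the directory to the Slurm nodes. The path and network below are illustrative assumptions, not project defaults:

```
# /etc/exports on the login host; substitute red-box's actual working
# directory for the path, and the Slurm nodes' subnet for 10.0.0.0/24.
/home/slurm-agent/jobs 10.0.0.0/24(rw,sync,no_subtree_check)
```

Each Slurm worker node then mounts that export at the same path, so job output written on any worker is visible to red-box on the login host.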

Configuring red-box

By default red-box performs automatic resource discovery for all partitions. However, it's possible to set available resources for a partition manually in a config file. The following resources can be specified: nodes, cpu_per_node, mem_per_node and wall_time. Additionally, you can specify partition features there, e.g. available software or hardware. The config path should be passed to red-box with the --config flag.

Config example:

partition1:
  nodes: 10
  mem_per_node: 2048 # in MBs
  cpu_per_node: 8
  wall_time: 10h 
partition2:
  nodes: 10
  # mem, cpu and wall_time will be discovered automatically
partition3:
  additional_features:
    - name: singularity
      version: 3.2.0
    - name: nvidia-gpu
      version: 2080ti-cuda-7.0
      quantity: 20

Vagrant

If you want to try wlm-operator locally before updating your production cluster, use Vagrant, which will automatically install and configure all the necessary software:

cd vagrant
vagrant up && vagrant ssh k8s-master

NOTE: vagrant up may take about 15 minutes, as the k8s cluster is installed from scratch.

Vagrant will spin up two VMs: a k8s master and a k8s worker node with Slurm installed. If you wish to set up more workers, feel free to modify the N parameter in Vagrantfile.
