
dgruber / Wfl

License: BSD-2-Clause
A Simple Way of Creating Job Workflows in Go running in Processes, Containers, Tasks, Pods, or Jobs


Projects that are alternatives of or similar to Wfl

Maestrowf
A tool to easily orchestrate general computational workflows both locally and on supercomputers
Stars: ✭ 72 (+140%)
Mutual labels:  workflow, hpc
Wlm Operator
Singularity implementation of k8s operator for interacting with SLURM.
Stars: ✭ 78 (+160%)
Mutual labels:  hpc, k8s
Argo Workflows
Workflow engine for Kubernetes
Stars: ✭ 10,024 (+33313.33%)
Mutual labels:  workflow, k8s
Pegasus
Pegasus Workflow Management System - Automate, recover, and debug scientific computations.
Stars: ✭ 110 (+266.67%)
Mutual labels:  workflow, hpc
Kubo Release
Kubernetes BOSH release
Stars: ✭ 153 (+410%)
Mutual labels:  cloud-foundry, k8s
Jug
Parallel programming with Python
Stars: ✭ 337 (+1023.33%)
Mutual labels:  workflow, hpc
Polyaxon
Machine Learning Platform for Kubernetes (MLOps tools for experimentation and automation)
Stars: ✭ 2,966 (+9786.67%)
Mutual labels:  workflow, k8s
Cromwell
Scientific workflow engine designed for simplicity & scalability. Trivially transition between one off use cases to massive scale production environments
Stars: ✭ 655 (+2083.33%)
Mutual labels:  workflow, hpc
Workflow
A Swift and Kotlin library for making composable state machines, and UIs driven by those state machines.
Stars: ✭ 860 (+2766.67%)
Mutual labels:  workflow
Dart
Self-service data workflow management
Stars: ✭ 15 (-50%)
Mutual labels:  workflow
Dragonview
Visual Analytics Tool for Dragonfly Network-based Supercomputers
Stars: ✭ 10 (-66.67%)
Mutual labels:  hpc
Sooty
The SOC Analysts all-in-one CLI tool to automate and speed up workflow.
Stars: ✭ 867 (+2790%)
Mutual labels:  workflow
Fugitive
Simple command line tool to make git more intuitive, along with useful GitHub addons.
Stars: ✭ 20 (-33.33%)
Mutual labels:  workflow
Mqrun
Automate MaxQuant
Stars: ✭ 10 (-66.67%)
Mutual labels:  workflow
Airflow Maintenance Dags
A series of DAGs/Workflows to help maintain the operation of Airflow
Stars: ✭ 914 (+2946.67%)
Mutual labels:  workflow
Fluentd
Log shipping mechanism for Deis Workflow
Stars: ✭ 10 (-66.67%)
Mutual labels:  k8s
Automate Sketch
Make your workflow more efficient.
Stars: ✭ 856 (+2753.33%)
Mutual labels:  workflow
Sv Callers
Snakemake-based workflow for detecting structural variants in WGS data
Stars: ✭ 28 (-6.67%)
Mutual labels:  workflow
Blog Post Workflow
Show your latest blog posts from any sources or StackOverflow activity or Youtube Videos on your GitHub profile/project readme automatically using the RSS feed
Stars: ✭ 910 (+2933.33%)
Mutual labels:  workflow
Slugrunner
Buildpack application runner for Deis Workflow.
Stars: ✭ 14 (-53.33%)
Mutual labels:  k8s

wfl - A Simple and Pluggable Workflow Language for Go

Don't confuse wfl with WFL.


Update: To reflect underlying drmaa2os changes that separate the different backends more clearly, some context creation functions have been moved to pkg/context. This avoids having to deal with dependencies of bigger libraries like Kubernetes or Docker when they are not used.

Creating process, container, pod, task, or job workflows based on the raw interfaces of operating systems, Docker, Singularity, Kubernetes, Cloud Foundry, and HPC job schedulers can be tedious. Lots of repetitive code is required, and each workload management system has a different API.

wfl abstracts away the underlying details of processes, containers, and workload management systems. It provides a simple, unified interface that lets you quickly define and execute a job workflow and switch between different execution backends without changing the workflow itself.

wfl does not come with many features, but it is simple to use and sufficient for defining and running jobs and job workflows with inter-job dependencies.

In its simplest form a process can be started and waited for:

    wfl.NewWorkflow(wfl.NewProcessContext()).Run("convert", "image.jpg", "image.png").Wait()

If the output of the command needs to be displayed on the terminal, you can set the output paths in the default JobTemplate (see below) configuration:

    template := drmaa2interface.JobTemplate{
        ErrorPath:  "/dev/stderr",
        OutputPath: "/dev/stdout",
    }
    flow := wfl.NewWorkflow(wfl.NewProcessContextByCfg(wfl.ProcessConfig{
        DefaultTemplate: template,
    }))
    flow.Run("echo", "hello").Wait()

Running a job as a Docker container requires a different context (and the container image must already be pulled):

    import (
        "github.com/dgruber/drmaa2interface"
        "github.com/dgruber/wfl"
        "github.com/dgruber/wfl/pkg/context/docker"
    )

    ctx := docker.NewDockerContextByCfg(docker.Config{DefaultDockerImage: "golang:latest"})
    wfl.NewWorkflow(ctx).Run("sleep", "60").Wait()

Starting a Docker container that exposes ports, without a run command, requires more configuration, which can be provided by using a JobTemplate together with the RunT() method:

    jt := drmaa2interface.JobTemplate{
        JobCategory: "swaggerapi/swagger-editor",
    }
    jt.ExtensionList = map[string]string{"exposedPorts": "80:8080/tcp"}
    
    wfl.NewJob(wfl.NewWorkflow(docker.NewDockerContext())).RunT(jt).Wait()

Starting a Kubernetes batch job and waiting for its completion is not much different:

    wfl.NewWorkflow(kubernetes.NewKubernetesContext()).Run("sleep", "60").Wait()

wfl also supports submitting jobs to HPC schedulers such as SLURM or Grid Engine:

    wfl.NewWorkflow(libdrmaa.NewLibDRMAAContext()).Run("sleep", "60").Wait()

wfl aims to work for any kind of workload. It works on a Mac or a Raspberry Pi in the same way as on a high-performance compute cluster. What's missing: at small scale you will probably miss data management, i.e. moving results from one job to another; that is deliberately not implemented. At large scale you will miss checkpoint and restart functionality, or high availability of the workflow process itself.

wfl works with four simple primitives: context, workflow, job, and job template.

Experimental: Jobs can also be processed in job control streams.

First support for logging is also available. Log levels can be controlled by an environment variable (export WFL_LOGLEVEL=DEBUG, or INFO/WARNING/ERROR/NONE). Applications can use the same logging facility by getting the logger from the workflow (workflow.Logger()) or by registering their own logger in a workflow (workflow.SetLogger(Logger interface)). The default level is ERROR.
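A minimal sketch of wiring this up (the concrete Logger interface is defined in the wfl sources; os.Setenv here stands in for exporting the variable in the shell):

    // Select the log level before creating the workflow
    // (DEBUG, INFO, WARNING, ERROR, or NONE; default is ERROR).
    os.Setenv("WFL_LOGLEVEL", "DEBUG")

    flow := wfl.NewWorkflow(wfl.NewProcessContext())

    // The application can reuse wfl's logging facility,
    // or register its own implementation via flow.SetLogger(...).
    logger := flow.Logger()
    _ = logger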

Getting Started

Dependencies of wfl (like drmaa2) are vendored in. The only external package required to be installed manually is the drmaa2interface.

    go get github.com/dgruber/drmaa2interface

Context

A context defines the execution backend for the workflow. Contexts can be easily created with the New functions defined in context.go or in the separate packages found in pkg/context.

For creating a context which executes the jobs of a workflow in operating system processes use:

    wfl.NewProcessContext()

If the workflow needs to be executed in containers the DockerContext can be used:

    docker.NewDockerContext()

If the Docker context needs to be configured with a default Docker image (used by Run(), or by RunT() when no JobCategory, i.e. the Docker image, is set), then NewDockerContextByCfg() can be called:

    docker.NewDockerContextByCfg(docker.Config{DefaultDockerImage: "golang:latest"})

When you want to run the workflow as Cloud Foundry tasks the CloudFoundryContext can be used:

    cloudfoundry.NewCloudFoundryContext()

Without a config it uses environment variables to access the Cloud Foundry cloud controller API.

For submitting Kubernetes batch jobs a Kubernetes context exists.

   ctx := kubernetes.NewKubernetesContext()

Note that each job requires a container image, which can be specified via the JobTemplate's JobCategory. When the same container image is used within the whole job workflow, it makes sense to set it in the Kubernetes config:

   ctx := kubernetes.NewKubernetesContextByCfg(kubernetes.Config{DefaultImage: "busybox:latest"})
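When a single job needs a different image, the JobTemplate's JobCategory can override it per job. A sketch combining this with RunT(), using only fields described in this README:

    jt := drmaa2interface.JobTemplate{
        RemoteCommand: "sleep",
        Args:          []string{"60"},
        JobCategory:   "busybox:latest", // container image for this job
    }
    wfl.NewJob(wfl.NewWorkflow(ctx)).RunT(jt).Wait()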

Singularity containers can be executed within the Singularity context. When a DefaultImage is set (like in the Kubernetes context), the Run() methods can be used; otherwise the container image must be specified in the JobTemplate's JobCategory field separately for each job. The DefaultImage can always be overridden by the JobCategory. Note that each task / job executes a separate Singularity container process.

   ctx := wfl.NewSingularityContextByCfg(wfl.SingularityConfig{DefaultImage: ""})

For working with HPC schedulers the libdrmaa context can be used. This context requires libdrmaa.so to be available in the library path at runtime. Grid Engine ships libdrmaa.so, but LD_LIBRARY_PATH typically needs to be set. For SLURM, libdrmaa.so often needs to be built first.

Since cgo is used under the hood (drmaa2os, which uses go drmaa), some compiler flags need to be set at build time. Those flags depend on the workload manager used. Best check the go drmaa project for the right flags.

Building for SLURM requires:

    export CGO_LDFLAGS="-L$SLURM_DRMAA_ROOT/lib"
    export CGO_CFLAGS="-DSLURM -I$SLURM_DRMAA_ROOT/include"

Once all is set, a libdrmaa context can be created:

   ctx := libdrmaa.NewLibDRMAAContext()

The JobCategory is whatever the workload manager associates with it; typically it is a set of submission parameters. A basic example is here.

Workflow

A workflow encapsulates a set of jobs using the same backend (context). Depending on the execution backend it can be seen as a namespace.

It can be created by using:

    wf := wfl.NewWorkflow(ctx)

Errors during creation can be caught with

    wf := wfl.NewWorkflow(ctx).OnError(func(e error) {panic(e)})

or with

    if wf.HasError() {
        panic(wf.Error())
    }

Job

Jobs are the main objects in wfl. A job defines helper methods, many of which return the job object itself to allow easy call chaining. A job can also be seen as a container and control unit for tasks. Tasks are often mapped to jobs of the underlying workload manager (as in Kubernetes, HPC schedulers, etc.).

In some systems, job-related resources must be deleted after the job is finished and no more information needs to be queried about its execution. This functionality is implemented in the DRMAA2 Reap() method, which can be executed via ReapAll() for each task in the job object. Afterwards the job object should not be used anymore, as some information might no longer be available.
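A minimal sketch of cleaning up once a job has finished:

    job := wfl.NewWorkflow(ctx).Run("echo", "done").Wait()
    // Free all job-related resources in the workload manager;
    // the job object must not be used after this call.
    job.ReapAll()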

Methods can be classified as blocking, non-blocking, job-template based, function based, and error handlers.

Job Submission

| Function Name | Purpose | Blocking |
|---------------|---------|----------|
| Run() | Starts a process or container, or submits a task, and returns immediately | no |
| RunT() | Like Run() but with a JobTemplate as parameter | no |
| RunArray() | Submits a bulk job which runs many iterations of the same command | no |
| Resubmit() | Submits a job n times (Run().Run().Run()...) | no |
| RunEvery() | Submits a task every d time.Duration | yes |
| RunEveryT() | Like RunEvery() but with a JobTemplate as parameter | yes |
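For illustration, a sketch of resubmitting the same command and waiting for all submissions; the exact Resubmit() parameter (assumed here to be the number of additional submissions) should be checked against the wfl sources:

    flow := wfl.NewWorkflow(wfl.NewProcessContext())
    // Submit "echo" once, resubmit it twice, and wait for all tasks.
    flow.Run("echo", "task").Resubmit(2).Synchronize()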

Job Control

| Function Name | Purpose |
|---------------|---------|
| Suspend() | Stops a task from execution (e.g. by sending SIGTSTP to the process group) |
| Resume() | Continues a stopped task (e.g. by sending SIGCONT) |
| Kill() | Stops a process (SIGKILL), container, task, or job immediately |

Function Execution

| Function Name | Purpose | Blocking |
|---------------|---------|----------|
| Do() | Executes a Go function | yes |
| Then() | Waits for the end of the process and executes a Go function | yes |
| OnSuccess() | Executes a function if the task ran successfully (exit code 0) | yes |
| OnFailure() | Executes a function if the task failed (exit code != 0) | yes |
| OnError() | Executes a function if the task could not be created | yes |
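A sketch of the function-based methods; the callback signatures are assumptions to be checked against the wfl sources (the finished DRMAA2 job for Then(), an error for OnError()):

    flow := wfl.NewWorkflow(wfl.NewProcessContext())
    flow.Run("echo", "hello").
        Then(func(j drmaa2interface.Job) {
            fmt.Println("finished job with ID", j.GetID())
        }).
        OnError(func(e error) {
            fmt.Println("could not submit:", e)
        })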

Blocker

| Function Name | Purpose | Blocking |
|---------------|---------|----------|
| After() | Blocks for a given amount of time, then continues | yes |
| Wait() | Waits until the most recently submitted task has finished | yes |
| Synchronize() | Waits until all submitted tasks have finished | yes |

Job Flow Control

| Function Name | Purpose | Blocking |
|---------------|---------|----------|
| ThenRun() | Wait() for the last task, followed by an asynchronous Run() | partially |
| ThenRunT() | ThenRun() with a JobTemplate | partially |
| OnSuccessRun() | Wait(); if Success(), then Run() | partially |
| OnSuccessRunT() | OnSuccessRun() with a JobTemplate as parameter | partially |
| OnFailureRun() | Wait(); if Failed(), then Run() | partially |
| OnFailureRunT() | OnFailureRun() with a JobTemplate as parameter | partially |
| Retry() | wait() + !success() + resubmit() + wait() + !success() | yes |
| AnyFailed() | Checks if any of the tasks in the job failed | yes |
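These methods keep chained dependencies concise, as in this sketch:

    flow := wfl.NewWorkflow(wfl.NewProcessContext())
    // "false" exits non-zero, so the failure handler runs; the ThenRun()
    // task starts after its predecessor regardless of the exit state.
    flow.Run("false").
        OnFailureRun("echo", "first task failed").
        ThenRun("echo", "runs afterwards").
        Wait()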

Job Status and General Checks

| Function Name | Purpose | Blocking |
|---------------|---------|----------|
| JobID() | Returns the ID of the submitted job | no |
| JobInfo() | Returns the DRMAA2 JobInfo of the job | no |
| Template() | | no |
| State() | | no |
| LastError() | | no |
| Failed() | | no |
| Success() | | no |
| ExitStatus() | | no |
| ReapAll() | Cleans up all job-related resources from the workload manager by calling DRMAA2 Reap() on all tasks; do not use the job object afterwards | no |
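A short sketch combining the status checks (assuming JobID() reports the most recently submitted task and ExitStatus() returns an int):

    flow := wfl.NewWorkflow(wfl.NewProcessContext())
    job := flow.Run("sleep", "1")
    fmt.Println("submitted job", job.JobID())
    job.Wait()
    if job.Success() {
        fmt.Println("exit status:", job.ExitStatus())
    } else {
        fmt.Println("last error:", job.LastError())
    }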

JobTemplate

JobTemplates specify the details of a job. In the simplest case the job is specified by the application name and its arguments, as typically done in the OS shell. In that case the Run() methods (and ThenRun(), OnSuccessRun(), OnFailureRun()) can be used. Job-template based methods (like RunT()) can be avoided completely by providing a default template when creating the context (...ByCfg()); each Run() then inherits the settings (like JobCategory for the container image name and OutputPath for redirecting output to stdout). If more details are required for specifying the jobs, the RunT() methods need to be used. Currently the DRMAA2 Go JobTemplate is used. In most cases only RemoteCommand, Args, WorkingDirectory, JobCategory, JobEnvironment, and StageInFiles are evaluated. Functionality and semantics are up to the underlying drmaa2os job tracker.
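A sketch of a template restricted to the evaluated fields listed above:

    jt := drmaa2interface.JobTemplate{
        RemoteCommand:    "python",
        Args:             []string{"process.py", "input.csv"},
        WorkingDirectory: "/tmp",
        JobEnvironment:   map[string]string{"MODE": "test"},
    }
    wfl.NewJob(wfl.NewWorkflow(ctx)).RunT(jt).Wait()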

The Template object provides helper functions for job templates and is required as a generator of job streams. For an example see here.

Examples

For examples please have a look into the examples directory. template is a canonical example of a pre-processing job, followed by parallel execution, followed by a post-processing job.

test is a use case for testing: it compiles all examples with the local Go compiler and then within a Docker container using the golang:latest image, and reports errors.

cloudfoundry demonstrates how a Cloud Foundry task can be created.

Singularity containers can also be created, which is helpful for managing a simple Singularity wfl container workflow within a single HPC job, either to fully exploit all resources or to reduce the number of HPC jobs.

Creating a Workflow which is Executed as OS Processes

The allocated context defines which workload management system / job execution backend is used.

    ctx := wfl.NewProcessContext()

Different contexts can be used within a single program. That way multi-clustering, potentially across different cloud solutions, is supported.
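A sketch of selecting the backend at runtime (useDocker is a hypothetical application flag; both constructors are assumed to return the same context type):

    // Pick the execution backend without touching the workflow itself.
    ctx := wfl.NewProcessContext()
    if useDocker { // hypothetical flag, not part of wfl
        ctx = docker.NewDockerContextByCfg(docker.Config{DefaultDockerImage: "busybox:latest"})
    }
    wfl.NewWorkflow(ctx).Run("sleep", "1").Wait()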

Using a context a workflow can be established.

    wfl.NewWorkflow(wfl.NewProcessContext())

Handling an error during workflow generation can be done by specifying a function which is only called in the case of an error.

    wfl.NewWorkflow(wfl.NewProcessContext()).OnError(func(e error) {
        panic(e)
    })

The workflow is used in order to instantiate the first job using the Run() method.

    wfl.NewWorkflow(wfl.NewProcessContext()).Run("sleep", "123")

But you can also create an initial job like that:

    job := wfl.NewJob(wfl.NewWorkflow(wfl.NewProcessContext()))

For more detailed settings (like resource limits) the DRMAA2 job template can be used as parameter for RunT().

Jobs allow the execution of workload as well as expressing dependencies.

    wfl.NewWorkflow(wfl.NewProcessContext()).Run("sleep", "2").ThenRun("sleep", "1").Wait()

The line above executes two OS processes sequentially and waits until the last job in chain is finished.

In the following example the two sleep processes are executed in parallel. Wait() only waits for the sleep 1 job; hence sleep 2 may still run after the Wait() call returns.

    wfl.NewWorkflow(wfl.NewProcessContext()).Run("sleep", "2").Run("sleep", "1").Wait()

Running two jobs in parallel and waiting until all jobs are finished can be done with Synchronize():

    wfl.NewWorkflow(wfl.NewProcessContext()).Run("sleep", "2").Run("sleep", "1").Synchronize()

Jobs can also be suspended (stopped) and resumed (continued) - if supported by the execution backend (like OS, Docker).

    wf.Run("sleep", "1").After(time.Millisecond * 100).Suspend().After(time.Millisecond * 100).Resume().Wait()

The exit status is available as well. ExitStatus() blocks until the previously submitted job is finished.

    wfl.NewWorkflow(ctx).Run("echo", "hello").ExitStatus()

In order to run jobs depending on the exit status, the OnFailureRun() and OnSuccessRun() methods can be used:

    wf.Run("false").OnFailureRun("true").OnSuccessRun("false")

For executing a function on a submission error, OnError() can be used.
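For example:

    wf.Run("/not/existing/binary").OnError(func(e error) {
        fmt.Println("could not create the task:", e)
    })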

More methods can be found in the sources.

Basic Workflow Patterns

Sequence

The successor task runs after the completion of the predecessor task.

    flow := wfl.NewWorkflow(ctx)
    flow.Run("echo", "first task").ThenRun("echo", "second task")
    ...

or

    flow := wfl.NewWorkflow(ctx)
    job := flow.Run("echo", "first task")
    job.Wait()
    job.Run("echo", "second task")
    ...

Parallel Split

After completion of a task run multiple branches of tasks.


    flow := wfl.NewWorkflow(ctx)
    flow.Run("echo", "first task").Wait()

    notifier := wfl.NewNotifier()

    go func() {
        wfl.NewJob(wfl.NewWorkflow(ctx)).
            TagWith("BranchA").
            Run("sleep", "1").
            ThenRun("sleep", "3").
            Synchronize().
            Notify(notifier)
    }()

    go func() {
        wfl.NewJob(wfl.NewWorkflow(ctx)).
            TagWith("BranchB").
            Run("sleep", "1").
            ThenRun("sleep", "3").
            Synchronize().
            Notify(notifier)
    }()

    notifier.ReceiveJob()
    notifier.ReceiveJob()

    ...

Synchronization of Tasks

Wait until all tasks of a job which are running in parallel are finished.

    flow := wfl.NewWorkflow(ctx)
    flow.Run("echo", "first task").
        Run("echo", "second task").
        Run("echo", "third task").
        Synchronize()

Synchronization of Branches

Wait until all branches of a workflow are finished.


    notifier := wfl.NewNotifier()

    go func() {
        wfl.NewJob(wfl.NewWorkflow(ctx)).
            TagWith("BranchA").
            Run("sleep", "1").
            Wait().
            Notify(notifier)
    }()

    go func() {
        wfl.NewJob(wfl.NewWorkflow(ctx)).
            TagWith("BranchB").
            Run("sleep", "1").
            Wait().
            Notify(notifier)
    }()

    notifier.ReceiveJob()
    notifier.ReceiveJob()

    ...

Exclusive Choice

    flow := wfl.NewWorkflow(ctx)
    job := flow.Run("echo", "first task")
    job.Wait()

    if job.Success() {
        // do something
    } else {
        // do something else
    }
    ...

Fork Pattern

When a task is finished, n tasks need to be started in parallel.

    job := wfl.NewWorkflow(ctx).Run("echo", "first task").
        ThenRun("echo", "parallel task 1").
        Run("echo", "parallel task 2").
        Run("echo", "parallel task 3")
    ...

or

    flow := wfl.NewWorkflow(ctx)
    
    job := flow.Run("echo", "first task")
    job.Wait()
    for i := 1; i <= 3; i++ {
        job.Run("echo", fmt.Sprintf("parallel task %d", i))
    }
    ...

For missing functionality or bugs please open an issue on GitHub. Contributions welcome!
