SESYNC-ci / rslurm

Licence: other
Submit R code to a Slurm cluster

Projects that are alternatives of or similar to rslurm

HPC
A collection of various resources, examples, and executables for the general NREL HPC user community's benefit. Use the following website for accessing documentation.
Stars: ✭ 64 (+60%)
Mutual labels:  slurm
slurmR
slurmR: A Lightweight Wrapper for Slurm
Stars: ✭ 43 (+7.5%)
Mutual labels:  slurm
launcher-scripts
(DEPRECATED) A set of launcher scripts to be used with OAR and Slurm for running jobs on the UL HPC platform
Stars: ✭ 14 (-65%)
Mutual labels:  slurm
a-minimalist-guide
Walkthroughs for DSL, AirSim, the Vector Institute, and more
Stars: ✭ 37 (-7.5%)
Mutual labels:  slurm
uchuva
A scientific web portal that allows users to create and submit workflows to HTCondor (Dagman), Slurm, OpenLava (LSF), Torque (PBS)
Stars: ✭ 17 (-57.5%)
Mutual labels:  slurm
puppet-slurm
A Puppet module designed to configure and manage SLURM(see https://slurm.schedmd.com/), an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters
Stars: ✭ 18 (-55%)
Mutual labels:  slurm
stui
A Slurm dashboard for the terminal.
Stars: ✭ 36 (-10%)
Mutual labels:  slurm
spart
spart: a user-oriented partition info command for slurm
Stars: ✭ 14 (-65%)
Mutual labels:  slurm
future.batchtools
🚀 R package future.batchtools: A Future API for Parallel and Distributed Processing using batchtools
Stars: ✭ 77 (+92.5%)
Mutual labels:  slurm
omnia
An open-source toolkit for deploying and managing high performance clusters for HPC, AI, and data analytics workloads.
Stars: ✭ 128 (+220%)
Mutual labels:  slurm
slurm-mail
Slurm-Mail is a drop in replacement for Slurm's e-mails to give users much more information about their jobs compared to the standard Slurm e-mails.
Stars: ✭ 47 (+17.5%)
Mutual labels:  slurm
task-spooler
A scheduler for GPU/CPU tasks
Stars: ✭ 77 (+92.5%)
Mutual labels:  slurm
torchx
TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and support for E2E production ML pipelines when you're ready.
Stars: ✭ 165 (+312.5%)
Mutual labels:  slurm
awflow
Reproducible research and reusable acyclic workflows in Python. Execute code on HPC systems as if you executed them on your personal computer!
Stars: ✭ 15 (-62.5%)
Mutual labels:  slurm
SlurmClusterManager.jl
julia package for running code on slurm clusters
Stars: ✭ 27 (-32.5%)
Mutual labels:  slurm

rslurm: submit R code to a Slurm cluster

[Badges: CRAN checks · RStudio mirror downloads · R build status · Project Status: Active – stable, usable, and actively developed · CRAN status · DOI]

About

Development of this R package was supported by the National Socio-Environmental Synthesis Center (SESYNC) under funding received from the National Science Foundation grants DBI-1052875 and DBI-1639145.

The package was developed by Philippe Marchand and Ian Carroll, with Mike Smorul and Rachael Blake contributing. Quentin Read is the current maintainer.

Installation

You can install the released version of rslurm from CRAN with:

install.packages("rslurm")

And the development version from GitHub with:

# install.packages("devtools")
devtools::install_github("SESYNC-ci/rslurm")

Documentation

Package documentation is accessible from the R console through package?rslurm and online.

Example

Note that job submission is only possible on a system with access to a Slurm workload manager (i.e. a system where the command line utilities squeue or sinfo return information from a Slurm head node).

To illustrate a typical rslurm workflow, we use a simple function that takes a mean and standard deviation as parameters, generates a million normal deviates and returns the sample mean and standard deviation.

test_func <- function(par_mu, par_sd) {
    samp <- rnorm(10^6, par_mu, par_sd)
    c(s_mu = mean(samp), s_sd = sd(samp))
}
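Before involving the cluster, a quick local sanity check (no Slurm required) confirms that the function behaves as expected; the seed value below is arbitrary and only ensures reproducibility:

```r
set.seed(123)  # arbitrary seed, for reproducibility only

test_func <- function(par_mu, par_sd) {
    samp <- rnorm(10^6, par_mu, par_sd)
    c(s_mu = mean(samp), s_sd = sd(samp))
}

# With a million samples, the estimates should be very close
# to the requested mean (5) and standard deviation (0.5)
test_func(par_mu = 5, par_sd = 0.5)
```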

We then create a parameter data frame where each row is a parameter set and each column matches an argument of the function.

pars <- data.frame(par_mu = 1:10,
                   par_sd = seq(0.1, 1, length.out = 10))

We can now pass that function and the parameter data frame to slurm_apply, specifying the number of cluster nodes to use and the number of CPUs per node.

library(rslurm)
sjob <- slurm_apply(test_func, pars, jobname = 'test_apply',
                    nodes = 2, cpus_per_node = 2, submit = FALSE)

The output of slurm_apply is a slurm_job object that stores a few pieces of information (job name, job ID, and the number of nodes) needed to retrieve the job’s output. Because we set submit = FALSE, the job scripts are generated in a _rslurm_test_apply folder but not actually sent to the scheduler.
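Once a job has actually been submitted and has finished running, its results can be collected from the slurm_job object. A minimal sketch, assuming sjob was created as above with submit = TRUE on a system with Slurm access:

```r
library(rslurm)

# Wait for the job to complete, then combine the output of all
# function calls into a single data frame (one row per parameter set)
res <- get_slurm_out(sjob, outtype = "table")

# Delete the temporary _rslurm_test_apply folder once the
# results have been retrieved
cleanup_files(sjob)
```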

See Get started for more information.
