
Many computational endeavors benefit from some form of parallelization, and SLURM arrays provide a relatively simple way to do “embarrassingly parallel” processing (read more about parallelization here). Additionally, submitting a SLURM array puts all parts of the job (tasks) under one job ID, which simplifies canceling or otherwise managing dozens, hundreds, or thousands of tasks.
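For instance, the whole array can be managed through its single job ID, and individual tasks can be addressed as jobID_index. As a quick sketch (the job ID 123456 below is hypothetical):

scancel 123456

Will cancel every task in the array, while

scancel 123456_7

Will cancel only task 7.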

Basics

Array submission works via the “--array” option of sbatch (read more about it here). It takes either a range of array indices (a start, a stop, and an optional step size) or a comma-separated list of indices; each index is given its own task ID.

For example:

--array=0-10

Will create a job with tasks 0-10 (11 tasks in total).

--array=5-7

Will create a job with tasks 5, 6, and 7 (3 in total).

The “:” operator will specify a step size

--array=0-12:3

Will create a job with tasks 0, 3, 6, 9, and 12.

The “%” operator will specify a max number of tasks that can run at any given time

--array=0-10%5

Will create tasks 0-10, but only five will run at a time: tasks 0-4 start initially, and task 5 starts once one of them finishes.
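You can watch the throttle in action with squeue (again using a hypothetical job ID):

squeue -j 123456

Running tasks are listed individually (e.g. 123456_0), while pending tasks are typically collapsed into a single line such as 123456_[5-10%5].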

Using these options without any other configuration in your script will run the same job once for each task ID.

Further configuration

As mentioned above, creating an array of tasks without further configuration runs an identical job for each task. That has its uses, but most of the time you will want each task to work on different data or run a different script.

SLURM assigns each array index a task ID, which your batch script can access via the $SLURM_ARRAY_TASK_ID environment variable. This ID can then be used in a variety of ways to change the data or program used on a task-by-task basis. Below are some examples of how to use task IDs to configure individual tasks – note that there are more possibilities than those shown here, and that multiple approaches can be combined.
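As a minimal illustration (the account name is a placeholder), the following array job simply prints each task’s ID to its output file:

#!/bin/bash

#SBATCH --account=placeholder
#SBATCH --time=00:05:00
#SBATCH --array=0-3

echo "This is task ${SLURM_ARRAY_TASK_ID}"

Each of the four tasks runs the same script but sees a different value of the variable.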

For example:

If you have a series of R scripts that you’d like to run with the same computational requirements, you could do the following.

#!/bin/bash

#SBATCH --account=placeholder
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --array=0-12

module load gcc
module load swset
module load r/3.6.1

# Build a bash array of every "coolscript" R script in the directory
list=(/home/name/dir/coolscript*.R)

# Run the script whose index in the array matches this task's ID
Rscript ${list[$SLURM_ARRAY_TASK_ID]}

This builds a bash array of the R scripts in the directory with the “coolscript” prefix and runs each one in its own task, matched by its index in that array. Note that this example assumes you have exactly 13 “coolscript” R scripts in the “/home/name/dir/” directory (one per task ID 0-12), and that the shell sorts glob matches lexicographically, so coolscript10.R would come before coolscript2.R.
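If you’d like to confirm the index-to-script mapping before submitting, a quick interactive check along the same lines (using the same hypothetical directory) is:

list=(/home/name/dir/coolscript*.R)

# Print which script each task ID would run
for i in "${!list[@]}"; do echo "task $i -> ${list[$i]}"; done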

You can also feed the task ID as an argument to the program/script your job will run.

#!/bin/bash

#SBATCH --account=placeholder
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --array=0-12

module load gcc
module load swset
module load r/3.6.1

# Pass this task's ID to the R script as a command-line argument
Rscript /home/name/dir/coolscript.R ${SLURM_ARRAY_TASK_ID}

This option is especially useful for changing output file names automatically based on task ID, but assumes that the script or program you run knows what to do with the argument you’ve provided.
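Relatedly, SLURM itself can separate each task’s log file for you: the --output filename pattern accepts %A (the master job ID) and %a (the array index). For example (the filename itself is arbitrary):

#SBATCH --output=coolscript_%A_%a.out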

Lastly, you may want to change the data fed into a given program or script based on task ID.

#!/bin/bash

#SBATCH --account=placeholder
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --array=0-12

module load gcc
module load swset
module load r/3.6.1

# Build a bash array of every "simulated_data" CSV file in the directory
data=(/home/name/dir/simulated_data*.csv)

# Feed this task's CSV file to the script as an argument
Rscript /home/name/dir/coolscript.R ${data[$SLURM_ARRAY_TASK_ID]}

Again, this assumes that you have 13 .csv files with the prefix “simulated_data” in the “/home/name/dir/” directory.
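Another common variation on this idea (a sketch, assuming a hypothetical file /home/name/dir/params.txt with one argument per line) is to read each task’s input from a specific line of a text file:

# sed prints line number (task ID + 1), so task 0 reads line 1, and so on
param=$(sed -n "$((SLURM_ARRAY_TASK_ID + 1))p" /home/name/dir/params.txt)

Rscript /home/name/dir/coolscript.R "${param}"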

There are certainly more ways to use this system than those I’ve laid out here – please add any others you can think of that others might find useful!
