How to submit an array of tasks to SLURM

Many computational endeavors benefit from some form of parallelization, and SLURM provides a way to do “embarrassingly parallel” processing relatively simply (read more about parallelization here). Additionally, submitting a SLURM array puts all parts of the job (tasks) under one job ID, which simplifies canceling or otherwise managing dozens/hundreds/thousands of tasks.

Basics

Array submission works via the “--array” option of sbatch (read more about it here). This option takes the start, stop, and an optional step size for your array indices – each index is given its own task ID, and each task is submitted as a batch job with the #SBATCH directives listed in your header.

For example:

--array=0-10

Will create a job with tasks 0 - 10 (11 tasks in total).

--array=5-7

Will create a job with tasks 5, 6, and 7 (3 in total).

The “:” operator will specify a step size.

--array=0-12:3

Will create a job with tasks 0, 3, 6, 9, and 12.

The “%” operator will specify a max number of tasks that can run at any given time.

--array=0-10%5

Will create tasks 0 - 10, but only tasks 0 - 4 will run initially; task 5 will start when one of them finishes.

Using these options without any other configuration in your script will result in an identical task being run for each task ID.
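Putting this together, a minimal array job script might look like the following sketch (the job name, time limit, and echoed message are placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=array_example   # placeholder job name
#SBATCH --array=0-10               # 11 tasks, IDs 0 through 10
#SBATCH --time=00:10:00            # placeholder time limit

# Without further configuration, every task runs this same command.
# SLURM sets SLURM_ARRAY_TASK_ID for each task; we default to 0 here
# so the script can also be sanity-checked outside of SLURM.
echo "Hello from task ${SLURM_ARRAY_TASK_ID:-0}"
```

Submitting this with sbatch puts all eleven tasks under a single job ID, as described above.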

Further configuration

As mentioned above, creating an array of tasks within a job without any other configuration will create identical jobs for each task. That isn’t useless, but it is often more valuable to have each task operate on different data.

SLURM assigns a task ID to each array index, which can be accessed via $SLURM_ARRAY_TASK_ID in your bash script. This ID can then be used in a variety of ways to change the data used or program run on a task-by-task basis. Below are some examples of how to use the task IDs to configure individual tasks – note that there are more possibilities than those mentioned here, and that multiple approaches can be combined.

For example:

If you have a series of R scripts that you’d like to run with the same computational requirements, you could do the following.
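A sketch of such a submission script (the “/home/name/dir/” directory and “coolscript” prefix are this example’s hypothetical names):

```shell
#!/bin/bash
#SBATCH --job-name=coolscripts     # placeholder job name
#SBATCH --array=0-12               # one task per script (13 scripts)

# SLURM sets SLURM_ARRAY_TASK_ID; default to 0 so the selection
# logic can be sanity-checked outside of SLURM
task_id=${SLURM_ARRAY_TASK_ID:-0}

# Gather every R script with the "coolscript" prefix into a bash array
scripts=(/home/name/dir/coolscript*.R)

# Run the script whose position in the array matches this task's ID
Rscript "${scripts[$task_id]}"
```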

This will create a bash array of the R scripts in the directory with the “coolscript” prefix and run each one in its own task based on its index in that array. Note that this example assumes you have 13 “coolscript” R scripts in the “/home/name/dir/” directory.

You can also feed the task ID as an argument to the program/script your job will run.
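For example, the following sketch passes the task ID as the first argument to a hypothetical R script named analysis.R:

```shell
#!/bin/bash
#SBATCH --job-name=taskid_arg      # placeholder job name
#SBATCH --array=0-12

# Default to 0 so the script can be sanity-checked outside of SLURM
task_id=${SLURM_ARRAY_TASK_ID:-0}

# Pass the task ID as the first argument; analysis.R (a hypothetical
# name) must know what to do with it – for example, appending the ID
# to its output file names
Rscript /home/name/dir/analysis.R "$task_id"
```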

This option is especially useful for changing output file names automatically based on task ID, but assumes that the script or program you run knows what to do with the argument you’ve provided.

Lastly, you may want to change the data fed into a given program or script based on task ID.
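One way to sketch this is to index into a bash array of the “simulated_data” CSV files (process_data.R is a hypothetical script name):

```shell
#!/bin/bash
#SBATCH --job-name=per_task_data   # placeholder job name
#SBATCH --array=0-12               # one task per data file (13 files)

# Default to 0 so the selection logic can be checked outside of SLURM
task_id=${SLURM_ARRAY_TASK_ID:-0}

# Gather the CSV files with the "simulated_data" prefix into a bash array
data=(/home/name/dir/simulated_data*.csv)

# Hand this task's file to the (hypothetical) processing script
Rscript /home/name/dir/process_data.R "${data[$task_id]}"
```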

Again, this assumes that you have 13 .csv files with the prefix “simulated_data” in the “/home/name/dir/” directory.

If you would rather provide a file that contains a list that you would like to iterate through, you can read in the file using the readarray command, and then assign this to your “data” variable, as shown below.
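A sketch, assuming a hypothetical list file named file_list.txt with one entry per line (process_data.R is likewise a placeholder name):

```shell
#!/bin/bash
#SBATCH --job-name=listed_data     # placeholder job name
#SBATCH --array=0-12

# Default to 0 so the selection logic can be checked outside of SLURM
task_id=${SLURM_ARRAY_TASK_ID:-0}

# Read one entry per line from the list file into the "data" array;
# the -t flag strips the trailing newline from each line
readarray -t data < /home/name/dir/file_list.txt

# Feed this task's entry to the (hypothetical) processing script
Rscript /home/name/dir/process_data.R "${data[$task_id]}"
```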

There are certainly more ways to use this system than those I’ve laid out here – please add any others you can think of that people might find useful!