BinAC2/Migrate Moab to Slurm jobs

From bwHPC Wiki
Revision as of 12:19, 24 June 2025 by F Bartusch

BinAC 1 Queues -> BinAC 2 Slurm partitions

BinAC 2 uses the Slurm scheduler instead of Moab/Torque. Please refer to the BinAC 2 Slurm partitions page for an overview of the available Slurm partitions.

Moab/Torque flags -> Slurm flags

Replace the Moab/Torque job specification flags in your job scripts with their corresponding Slurm counterparts.

Commonly used Moab job specification flags and their Slurm equivalents

Option                          | Moab (msub)                        | Slurm (sbatch)
Script directive                | #PBS                               | #SBATCH
Job name                        | -N <name>                          | --job-name=<name> (-J <name>)
Account                         | -A <account>                       | --account=<account> (-A <account>)
Queue                           | -q <queue>                         | --partition=<partition> (-p <partition>)
Wall time limit                 | -l walltime=<hh:mm:ss>             | --time=<hh:mm:ss> (-t <hh:mm:ss>)
Node count                      | -l nodes=<count>                   | --nodes=<count> (-N <count>)
Core count                      | -l procs=<count>                   | --ntasks=<count> (-n <count>)
Process count per node          | -l ppn=<count>                     | --ntasks-per-node=<count>
Core count per process          | ---                                | --cpus-per-task=<count> (-c <count>)
Memory limit per node           | -l mem=<limit>                     | --mem=<limit>
Memory limit per process        | -l pmem=<limit>                    | --mem-per-cpu=<limit>
Job array                       | -t <array indices>                 | --array=<indices> (-a <indices>)
Node-exclusive job              | -l naccesspolicy=singlejob         | --exclusive
Initial working directory       | -d <directory> (default: $HOME)    | --chdir=<directory> (-D <directory>) (default: submission directory)
Standard output file            | -o <file path>                     | --output=<file> (-o <file>)
Standard error file             | -e <file path>                     | --error=<file> (-e <file>)
Combine stdout/stderr to stdout | -j oe                              | --output=<combined stdout/stderr file>
Mail notification events        | -m <event>                         | --mail-type=<events> (valid types include: NONE, BEGIN, END, FAIL, ALL)
Export environment to job       | -V                                 | --export=ALL (default)
Don't export environment to job | (default)                          | --export=NONE
Export environment variables    | -v <var[=value][,var2=value2,...]> | --export=<var[=value][,var2=value2,...]>
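As a concrete illustration of the table above, here is a sketch of a typical Moab/Torque job header (kept as comments) and its Slurm translation. The job name, partition, resource values, and program name are placeholders, not BinAC-specific recommendations:

```shell
#!/bin/bash
# Former Moab/Torque header, for comparison:
#   #PBS -N my_job
#   #PBS -q short
#   #PBS -l nodes=1:ppn=8
#   #PBS -l walltime=02:00:00
#   #PBS -l mem=16gb
#
# Equivalent Slurm header (partition name is a placeholder;
# see the BinAC 2 Slurm partitions page for real partition names):
#SBATCH --job-name=my_job
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=02:00:00
#SBATCH --mem=16gb

./my_program   # replace with your actual command
```

Submit the converted script with `sbatch job.sh` instead of `msub job.sh`.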

Moab/Torque environment variables

Attention: By default, Moab does not export any environment variables to the job's runtime environment. Slurm, in contrast, exports most of your login environment variables to the job's runtime environment. This includes environment variables from software modules that were loaded at job submission time (and also the $HOSTNAME variable). See the sbatch man page for a complete list of flags and environment variables.
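If you relied on Moab's clean-environment default, you can reproduce it in Slurm with the `--export` flag. A short sketch (the script name and variable values are placeholders):

```shell
# Reproduce Moab's default behavior: start the job with a clean environment
sbatch --export=NONE job.sh

# Export only selected variables to the job, like Moab's -v flag:
sbatch --export=OMP_NUM_THREADS=8,RUN_ID=test1 job.sh
```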

Replace the Moab/Torque environment variables in your job scripts with their corresponding Slurm counterparts.

Information                | Moab                 | Torque            | Slurm
Job name                   | $MOAB_JOBNAME        | $PBS_JOBNAME      | $SLURM_JOB_NAME
Job ID                     | $MOAB_JOBID          | $PBS_JOBID        | $SLURM_JOB_ID
Submit directory           | $MOAB_SUBMITDIR      | $PBS_O_WORKDIR    | $SLURM_SUBMIT_DIR
Number of nodes allocated  | $MOAB_NODECOUNT      | $PBS_NUM_NODES    | $SLURM_JOB_NUM_NODES (and: $SLURM_NNODES)
Node list                  | $MOAB_NODELIST       | cat $PBS_NODEFILE | $SLURM_JOB_NODELIST
Number of processes        | $MOAB_PROCCOUNT      | $PBS_TASKNUM      | $SLURM_NTASKS
Requested tasks per node   | ---                  | $PBS_NUM_PPN      | $SLURM_NTASKS_PER_NODE
Requested CPUs per task    | ---                  | ---               | $SLURM_CPUS_PER_TASK
Job array index            | $MOAB_JOBARRAYINDEX  | $PBS_ARRAY_INDEX  | $SLURM_ARRAY_TASK_ID
Job array range            | $MOAB_JOBARRAYRANGE  | ---               | $SLURM_ARRAY_TASK_COUNT
Queue name                 | $MOAB_CLASS          | $PBS_QUEUE        | $SLURM_JOB_PARTITION
QOS name                   | $MOAB_QOS            | ---               | $SLURM_JOB_QOS
Allocated tasks per node   | ---                  | $PBS_NUM_PPN      | $SLURM_TASKS_PER_NODE
Job user                   | $MOAB_USER           | $PBS_O_LOGNAME    | $SLURM_JOB_USER
Hostname                   | $MOAB_MACHINE        | $PBS_O_HOST       | $SLURMD_NODENAME
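To show how these variables replace their Torque counterparts in practice, here is a minimal sketch of a job script body (job name and task count are placeholders). Each line notes the Torque variable it replaces:

```shell
#!/bin/bash
#SBATCH --job-name=env_demo
#SBATCH --ntasks=4

# $SLURM_JOB_ID replaces $PBS_JOBID, $SLURM_JOB_NAME replaces $PBS_JOBNAME:
echo "Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) in partition ${SLURM_JOB_PARTITION}"

# $SLURM_SUBMIT_DIR replaces $PBS_O_WORKDIR. Note: Torque jobs start in
# $HOME, so scripts often begin with "cd $PBS_O_WORKDIR"; Slurm jobs
# already start in the submission directory, so this cd is only needed
# if you changed the working directory with --chdir:
cd "${SLURM_SUBMIT_DIR}"

# $SLURM_NTASKS replaces $PBS_TASKNUM:
echo "Running with ${SLURM_NTASKS} tasks on nodes: ${SLURM_JOB_NODELIST}"
```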