BinAC/Moab

Attention: As of February 1st, 2025, Moab® is no longer licensed. As a consequence, the tools previously provided by the module system/moab/9.1.3 (such as checkjob) are no longer available.

Torque scheduler

Any calculation on the bwForCluster BinAC compute nodes requires the user to define the calculation as a single command or a sequence of commands, together with the required run time, number of CPU cores and main memory, and to submit all of this, i.e. the batch job, to a resource and workload management software. Job submission is therefore done with commands of the Torque scheduler. Torque queues and runs user jobs based on fair-share policies.

Torque Commands

Some of the most frequently used Torque commands for non-administrators working on the bwForCluster BinAC:

Torque command   Brief explanation
qsub             Submits a job and queues it in an input queue
qstat            Lists your jobs and shows job details
qdel             Cancels your own jobs

Job Submission : qsub

Batch jobs are submitted with the command qsub. The main purpose of qsub is to specify the resources that are needed to run the job; qsub then places the batch job in a queue. When the batch job actually starts depends on the availability of the requested resources and on the fair-share value.

qsub Command Parameters

The syntax and use of qsub can be displayed via:

$ man qsub

qsub options can be used from the command line or in your job script.

qsub Options

Command line       Script                  Purpose
-l resources       #PBS -l resources       Defines the resources that are required by the job.
                                           See the description below for this important flag.
-N name            #PBS -N name            Gives a user specified name to the job.
-o filename        #PBS -o filename        Defines the file name to be used for the standard output stream of
                                           the batch job. By default the file with the defined file name is
                                           placed under your job submit directory. To place it under a different
                                           location, expand the file name by the relative or absolute path of
                                           the destination.
-q queue           #PBS -q queue           Defines the queue class.
-v variable=arg    #PBS -v variable=arg    Expands the list of environment variables that are exported to the job.
-S Shell           #PBS -S Shell           Declares the shell (state path+name, e.g. /bin/bash) that interprets
                                           the job script.
-m bea             #PBS -m bea             Send email when the job begins (b), ends (e) or aborts (a).
-M name@uni.de     #PBS -M name@uni.de     Send email to the specified email address "name@uni.de".
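
As an illustration only, several of the options above could be combined in the header of a job script; the following sketch uses placeholder values (job name, queue, resources and mail address are arbitrary examples):

#!/bin/bash
#PBS -N example_job                       # user specified job name
#PBS -q short                             # queue class
#PBS -l nodes=1:ppn=1,walltime=00:30:00   # resource request
#PBS -o example_job.out                   # file for the standard output stream
#PBS -m bea                               # send mail when the job begins, ends or aborts
#PBS -M name@uni.de                       # mail address
echo "Job running on $(hostname)"

The same options could equally well be given on the qsub command line instead of in the script.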

qsub -l resource_list

The -l option is one of the most important qsub options. It is used to specify a number of resource requirements for your job. Multiple resource strings are separated by commas.

qsub -l resource_list

resource                  Purpose
-l nodes=2:ppn=16         Number of nodes and number of processes per node.
-l walltime=600           Wall-clock time. Default units are seconds;
-l walltime=01:30:00      the HH:MM:SS format is also accepted.
-l pmem=1000mb            Maximum amount of physical memory used by any single process of the job.
                          Allowed units are kb, mb, gb. Be aware that processes are either MPI tasks
                          (for MPI parallel programs) or threads (for multithreaded programs).
-l mem=6000mb             Maximum amount of physical memory used by the whole job, i.e. the memory
                          for all MPI tasks or all threads of the job.
-l advres=res_name        Specifies the reservation "res_name" required to run the job.
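
Since multiple resource strings are separated by commas, a combined request could look like the following sketch (all values are placeholders taken from the table above):

$ qsub -l nodes=2:ppn=16,walltime=01:30:00,pmem=1000mb job.sh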

qsub -q queues

Queue classes define maximum resources such as walltime, nodes and processes per node, and the partition of the compute system. Note that the queue settings of the bwHPC clusters are not identical, but differ due to their different prerequisites, such as HPC performance, scalability and throughput levels. Details can be found here:

  • bwForCluster BinAC queue settings

With the change from MOAB to Torque, you may have to adapt your job scripts in order to use certain queues (see the example after this list):

  • short/long/gpu: no change needed
  • smp: add :smp to the node/proc resource request, e.g.: -l nodes=x:ppn=n:smp
  • inter: add :inter to the node/proc resource request, e.g.: -l nodes=x:ppn=n:inter
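
For example, a job for the smp queue could be requested as follows (a sketch; the ppn value is an arbitrary placeholder):

$ qsub -q smp -l nodes=1:ppn=4:smp job.sh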

qsub Examples

Serial Programs

To submit a serial job that runs the script job.sh and that requires 5000 MB of main memory and 3 hours of wall clock time

a) execute:

$ qsub -q short -N test -l nodes=1:ppn=1,walltime=3:00:00,mem=5000mb   job.sh

or b) add after the initial line of your script job.sh the lines (here with a high memory request):

#PBS -l nodes=1:ppn=1
#PBS -l walltime=3:00:00
#PBS -l mem=200gb
#PBS -N test

and execute the modified script with the command line option -q smp, as the compute nodes only have 128 GB of memory:

$ qsub -q smp job.sh

Note that qsub command line options overrule script options.

Multithreaded Programs

Multithreaded programs operate faster than serial programs on CPUs with multiple cores.
Moreover, multiple threads of one process share resources such as memory.
For multithreaded programs based on Open Multi-Processing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).
To submit a batch job called OpenMP_Test that runs a fourfold threaded program omp_executable, which requires 6000 MByte of total physical memory and a total wall clock time of 3 hours:

  • generate the script job_omp.sh containing the following lines:
#!/bin/bash
#PBS -l nodes=1:ppn=4
#PBS -l walltime=3:00:00
#PBS -l mem=6000mb
#PBS -v EXECUTABLE=./omp_executable
#PBS -v MODULE=<placeholder>
#PBS -N OpenMP_Test

#Usually you should set
export KMP_AFFINITY=compact,1,0
#export KMP_AFFINITY=verbose,compact,1,0 prints messages concerning the supported affinity
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE

module load ${MODULE}
export OMP_NUM_THREADS=${PBS_NUM_PPN}
echo "Executable ${EXECUTABLE} running on ${PBS_NUM_PPN} cores with ${OMP_NUM_THREADS} threads"
startexe=${EXECUTABLE}
echo $startexe
exec $startexe

When using the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If necessary, replace <placeholder> with the module file required to enable the OpenMP environment. Then execute the script job_omp.sh, adding the queue class short as a qsub option:

$ qsub -q short job_omp.sh

Note that qsub command line options overrule script options, e.g.

$ qsub -l mem=2000mb -q short job_omp.sh

overwrites the script setting of 6000 MByte with 2000 MByte.

MPI Parallel Programs

MPI parallel programs run faster than serial programs on multi CPU and multi core systems. N-fold spawned processes of the MPI program, i.e., MPI tasks, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes.
Multiple MPI tasks cannot be launched by the MPI parallel program itself, but must be started via mpirun, e.g. 4 MPI tasks of my_par_program:

$ mpirun -n 4 my_par_program

Generate a script job_ompi.sh for OpenMPI containing the following lines:

#!/bin/bash
module load mpi/openmpi/<placeholder_for_version>
# Use when loading OpenMPI in version 1.8.x:
mpirun --bind-to core --map-by core -report-bindings my_par_program
# Use instead (and comment out the line above) when loading an old OpenMPI version 1.6.x:
# mpirun -bind-to-core -bycore -report-bindings my_par_program

Attention: Do NOT add the mpirun option -n <number_of_processes> or any other option defining processes or nodes, since Torque instructs mpirun about the number of processes and the node hostnames. Always use the mpirun options --bind-to core and --map-by core|socket|node (OpenMPI version 1.8.x). Type mpirun --help for an explanation of the different arguments of the --map-by option.
Considering 4 OpenMPI tasks on a single node, each requiring 1000 MByte, and running for 1 hour, execute:

$ qsub -q short -l nodes=1:ppn=4,pmem=1000mb,walltime=01:00:00 job_ompi.sh

Multithreaded + MPI parallel Programs

Multithreaded + MPI parallel programs operate faster than serial programs on multiple CPUs with multiple cores. All threads of one process share resources such as memory. In contrast, MPI tasks do not share memory but can be spawned over different nodes.
Multiple MPI tasks using OpenMPI must be launched via mpirun. For multithreaded programs based on Open Multi-Processing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).
For OpenMPI, a job script job_ompi_omp.sh that runs an MPI program with 4 tasks and a fivefold threaded program ompi_omp_program, requiring 6000 MByte of physical memory per process/thread (with 5 threads per MPI task this amounts to 5*6000 MByte = 30000 MByte per MPI task) and a total wall clock time of 3 hours, looks like:

#!/bin/bash
#PBS -l nodes=2:ppn=10
#PBS -l walltime=03:00:00
#PBS -l pmem=6000mb
#PBS -v MPI_MODULE=mpi/ompi
#PBS -v OMP_NUM_THREADS=5
#PBS -v MPIRUN_OPTIONS="--bind-to core --map-by socket:PE=5 -report-bindings"
#PBS -v EXECUTABLE=./ompi_omp_program
#PBS -N test_ompi_omp

module load ${MPI_MODULE}
TASK_COUNT=$((${PBS_NP}/${OMP_NUM_THREADS}))   # total number of allocated cores / threads per task = number of MPI tasks
echo "${EXECUTABLE} running on ${PBS_NP} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads"
startexe="mpirun -n ${TASK_COUNT} ${MPIRUN_OPTIONS} ${EXECUTABLE}"
echo $startexe
exec $startexe

Execute the script job_ompi_omp.sh adding the queue class multinode to your qsub command:

$ qsub -q multinode job_ompi_omp.sh
  • With the mpirun option --bind-to core MPI tasks and OpenMP threads are bound to physical cores.
  • With the option --map-by socket:PE=<value> (neighbored) MPI tasks will be attached to different sockets and each MPI task is bound to the (in <value>) specified number of cpus. <value> must be set to ${OMP_NUM_THREADS}.
  • Old OpenMPI version 1.6.x: With the mpirun option -bind-to-core MPI tasks and OpenMP threads are bound to physical cores.
  • With the option -bysocket (neighbored) MPI tasks will be attached to different sockets and the option -cpus-per-proc <value> binds each MPI task to the (in <value>) specified number of cpus. <value> must be set to ${OMP_NUM_THREADS}.
  • The option -report-bindings shows the bindings between MPI tasks and physical cores.
  • The mpirun options --bind-to core and --map-by socket|...|node:PE=<value> should always be used when running a multithreaded MPI program. (For the old OpenMPI version 1.6.x: the mpirun options -bind-to-core, -bysocket|-bynode and -cpus-per-proc <value> should always be used.)

Handling job script options and arguments

Job script options and arguments, as used in the following call:

$ ./job.sh -n 10

cannot be passed when the script is submitted with qsub, since qsub does not forward them to job.sh (when run directly, job.sh would see $1 = -n, $2 = 10).

Solution A:

Submit a wrapper script, e.g. wrapper.sh:

$ qsub -q singlenode wrapper.sh

which simply contains all options and arguments of job.sh. The script wrapper.sh would at least contain the following lines:

#!/bin/bash
./job.sh -n 10

Solution B:

Add after the header of your BASH script job.sh the following lines:

## check if $SCRIPT_FLAGS is "set"
if [ -n "${SCRIPT_FLAGS}" ] ; then
   ## but if positional parameters are already present
   ## we are going to ignore $SCRIPT_FLAGS
   if [ -z "${*}"  ] ; then
      set -- ${SCRIPT_FLAGS}
   fi
fi

These lines modify your BASH script to read options and arguments from the environment variable $SCRIPT_FLAGS. Now submit your script job.sh as follows:

$ qsub -q singlenode -v SCRIPT_FLAGS='-n 10' job.sh


Environment Variables

Once an eligible compute job starts on the compute system, PBS (our resource manager) adds the following variables to the job's environment:

PBS variables

Environment variable   Description
PBS_JOBID              Job ID
PBS_JOBNAME            Job name
PBS_NUM_NODES          Number of nodes allocated to job
PBS_QUEUE              Partition name the job is running in
PBS_NP                 Number of processors allocated to job
PBS_O_WORKDIR          Directory of job submission
PBS_O_LOGNAME          User name
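
A short sketch of how these variables might typically be used inside a job script (the resource request is a placeholder):

#!/bin/bash
#PBS -l nodes=1:ppn=4,walltime=00:10:00
cd "${PBS_O_WORKDIR}"   # change to the directory the job was submitted from
echo "Job ${PBS_JOBID} (${PBS_JOBNAME}) is running in queue ${PBS_QUEUE}"
echo "Allocated nodes: ${PBS_NUM_NODES}, processors: ${PBS_NP}"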

Interpreting PBS exit codes

  • The PBS Server logs and accounting logs record the ‘exit status’ of jobs.
  • Zero or positive exit status is the status of the top-level shell.
  • Certain negative exit statuses are used internally and will never be reported to the user.
  • Exit status values greater than 128 (or on some systems 256; see wait(2) or waitpid(2) for more information) indicate which signal killed the job.
  • To interpret (or 'decode') the signal contained in the exit status value, subtract the base value from the exit status.
    For example, an exit status of 143 indicates the job was killed with SIGTERM (143 - 128 = 15, and signal 15 is SIGTERM).
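
As a small illustration of this decoding rule, a shell sketch (assuming the base value 128) could look like:

exit_status=143                      # example value from above
if [ "${exit_status}" -gt 128 ]; then
    signal=$((exit_status - 128))    # 143 - 128 = 15, i.e. SIGTERM
    echo "Job was killed by signal ${signal}"
else
    echo "Job exited with status ${exit_status}"
fi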

Job termination

  • The exit code of a batch job follows standard Unix conventions.
  • Typically, exit code 0 means successful completion.
  • Codes 1-127 are generated from the job calling exit() with a non-zero value to indicate an error.
  • Exit codes 129-255 represent jobs terminated by Unix signals.
  • Each signal has a corresponding value which is indicated in the job exit code.

Job termination signals

Specific job exit codes are also supplied by the underlying resource manager of the cluster's batch system (Torque). More detailed information can be found in the corresponding Torque documentation.

Submitting Termination Signal

Here is an example of how to 'save' the termination signal (exit code) in a typical bwHPC submit script.

[...]
echo "### Calling YOUR_PROGRAM command ..."
mpirun -np 'NUMBER_OF_CORES' $YOUR_PROGRAM_BIN_DIR/runproc ... (options)  2>&1
exit_code=$?   # capture the exit code right after the mpirun call
[ "$exit_code" -eq 0 ] && echo "all clean..." || \
   echo "Executable ${YOUR_PROGRAM_BIN_DIR}/runproc finished with exit code ${exit_code}"
[...]
  • Do not use 'time mpirun'! The exit code would then be the one returned by the time command (the first program) and not the exit code of your application run.
  • You do not need an exit $exit_code in the scripts.

List your jobs and show job details : qstat

Displays information about active, eligible, blocked, and/or recently completed jobs. When used without flags, this command displays all jobs in active, idle, and non-queued states.

  • Show all your jobs: qstat -u $USER
  • Show details about a specific job: qstat -f JOBID
  • For further options of qstat read the manpage of qstat.

Canceling own jobs : qdel

The qdel <JobId> command is used to selectively cancel the specified job(s) (active, idle, or non-queued) from the queue.

Note that you can only cancel your own jobs.

Access

This command can be run by any Administrator and by the owner of the job.

Argument   Format     Default   Description
JOB ID     <STRING>   (none)    Job id of the job(s) to cancel; see the example use of qdel below.

For further usage information, read the man page of qdel (man qdel).

Example Use of qdel

Example use of qdel run on BinAC

[...calc_repo-0]$ qsub bwhpc-fasta-example.pbs
8374356              # this is the JobId
$
$ qstat -f 8374356
Job Id: 8374356
    Job_Name = bwhpc-fasta-example.pbs
    Job_Owner = tu_iioba01@login03
    resources_used.cput = 00:00:02
    resources_used.energy_used = 0
    resources_used.mem = 1580kb
    resources_used.vmem = 56404kb
    resources_used.walltime = 00:00:03
    job_state = R
[...]

$ # now cancel the job
$ qdel 8374356
Terminated

$ qstat -f 8374356 | grep job_state
    job_state = C