Helix/Slurm

General information about Slurm

The bwForCluster Helix uses Slurm as batch system.

Slurm Command Overview

Slurm command   Brief explanation
sbatch          Submits a job and queues it in an input queue
salloc          Requests resources for an interactive job
squeue          Displays information about active, eligible, blocked, and/or recently completed jobs
scontrol        Displays detailed job state information
sstat           Displays status information about a running job
scancel         Cancels a job

Job Submission

Batch jobs are submitted with the command:

$ sbatch <job-script>
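For example, submitting a script named job.sh (the file name is illustrative) returns the ID of the newly queued job:

$ sbatch job.sh
Submitted batch job 12345678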

A job script contains options for Slurm in lines beginning with #SBATCH as well as your commands which you want to execute on the compute nodes. For example:

#!/bin/bash
#SBATCH --partition=single
#SBATCH --ntasks=1
#SBATCH --time=00:20:00
#SBATCH --mem=1gb
#SBATCH --export=NONE
echo 'Hello world'

This job requests one core (--ntasks=1) and 1 GB of memory (--mem=1gb) for 20 minutes (--time=00:20:00) on nodes provided by the partition 'single'.

For better reproducibility of jobs it is recommended to use the option --export=NONE, which prevents the propagation of environment variables from the submit session into the job environment, and to load all required software modules within the job script.
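As a minimal sketch, a job script that combines --export=NONE with explicit module loading could look like this (the module and program names are placeholders taken from the examples below; replace them with what your application needs):

#!/bin/bash
#SBATCH --partition=single
#SBATCH --ntasks=1
#SBATCH --time=00:20:00
#SBATCH --mem=1gb
#SBATCH --export=NONE
# Load all required software inside the script, independent of the submit session
module load compiler/gnu
./my_serial_program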

Partitions

On bwForCluster Helix it is necessary to request a partition with '--partition=<partition_name>' on job submission. Within a partition, job allocations are automatically routed to the most suitable compute node(s) for the requested resources (e.g. number of nodes and cores, memory, number of GPUs). If no partition is requested, the devel partition is used as the default.

The partitions devel and single are operated in shared mode, i.e. jobs from different users can run on the same node. Jobs can get exclusive access to compute nodes in these partitions with the "--exclusive" option. The partitions cpu-multi and gpu-multi are operated in exclusive mode. Jobs in these partitions automatically get exclusive access to the requested compute nodes.
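As an illustrative sketch, a job in the single partition gets a node for itself by adding the --exclusive option to the script header (the resource values are placeholders):

#!/bin/bash
#SBATCH --partition=single
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
#SBATCH --exclusive
./my_serial_program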

GPUs are requested with the option "--gres=gpu:<number-of-gpus>".

Partition   Node Access Policy   Node Types             Default                                     Limits
devel       shared               cpu, gpu4              ntasks=1, time=00:10:00, mem-per-cpu=2gb    nodes=2, time=00:30:00
single      shared               cpu, fat, gpu4, gpu8   ntasks=1, time=00:30:00, mem-per-cpu=2gb    nodes=1, time=120:00:00
cpu-multi   job exclusive        cpu                    nodes=2, time=00:30:00                      nodes=32, time=48:00:00
gpu-multi   job exclusive        gpu4                   nodes=2, time=00:30:00                      nodes=8, time=48:00:00
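For example, a job header requesting two full GPU nodes in the gpu-multi partition with four GPUs per node might look as follows (the walltime and the program name my_gpu_program are placeholders):

#!/bin/bash
#SBATCH --partition=gpu-multi
#SBATCH --nodes=2
#SBATCH --gres=gpu:4
#SBATCH --time=01:00:00
srun ./my_gpu_program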

Constraints

It is possible to request explicitly the CPU manufacturer of compute nodes with the option "--constraint=<constraint_name>".

Constraint   Meaning
amd          request AMD nodes (default)
intel        request Intel nodes (when available)
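For instance, Intel nodes can be requested by adding the constraint to an otherwise ordinary job script (a sketch; the other values are placeholders):

#!/bin/bash
#SBATCH --partition=single
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
#SBATCH --constraint=intel
./my_serial_program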

Examples

Here you can find some example scripts for batch jobs.

Serial Programs

#!/bin/bash
#SBATCH --partition=single
#SBATCH --ntasks=1
#SBATCH --time=20:00:00
#SBATCH --mem=4gb
./my_serial_program

Notes:

  • Jobs with "--mem" up to 248gb can run on all node types associated with the single partition.

Multi-threaded Programs

#!/bin/bash
#SBATCH --partition=single
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --time=01:30:00
#SBATCH --mem=50gb
export OMP_NUM_THREADS=${SLURM_NTASKS}
./my_multithreaded_program

Notes:

  • Jobs with "--ntasks-per-node" up to 64 and "--mem" up to 248gb can run on all node types associated with the single partition.

MPI Programs

#!/bin/bash
#SBATCH --partition=cpu-multi
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
#SBATCH --time=12:00:00
#SBATCH --mem=50gb
module load compiler/gnu
module load mpi/openmpi
srun ./my_mpi_program

Notes:

  • "--mem" requests the memory per node. The maximum is 248gb.

GPU Programs

#!/bin/bash
#SBATCH --partition=single
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --gres=gpu:4
#SBATCH --time=12:00:00
#SBATCH --mem=200gb
module load devel/cuda
export OMP_NUM_THREADS=${SLURM_NTASKS}
./my_cuda_program

Notes:

  • The number of GPUs per node is requested with the option "--gres=gpu:<number-of-gpus>".
  • It is possible to request a certain GPU type with the option "--gres=gpu:<gpu-type>:<number-of-gpus>". For <gpu-type> use the 'GPU Type' listed in the last line of the Compute Nodes table (see the sketch below).
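A minimal sketch of such a request, assuming a hypothetical GPU type named 'A40' (replace it with a type from the Compute Nodes table; the other values are illustrative):

#!/bin/bash
#SBATCH --partition=single
#SBATCH --nodes=1
#SBATCH --gres=gpu:A40:2
#SBATCH --time=01:00:00
#SBATCH --mem=50gb
module load devel/cuda
./my_cuda_program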

Interactive Jobs

Interactive jobs must NOT run on the login nodes. Resources for interactive jobs are requested with salloc. The following example requests an interactive session on 1 core for 2 hours:

$ salloc --partition=single --ntasks=1 --time=2:00:00

After execution of this command, wait until the queueing system has granted you the requested resources. Once they are granted, you are automatically logged in on the allocated compute node.

If you use applications or tools which provide a GUI, enable X-forwarding for your interactive session with:

$ salloc --partition=single --ntasks=1 --time=2:00:00 --x11

Once the walltime limit has been reached you will be automatically logged out from the compute node.
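Once the session starts, commands run directly on the allocated compute node. A short sketch of a typical session (module and program names are placeholders taken from the examples above):

$ salloc --partition=single --ntasks=1 --time=2:00:00
$ module load compiler/gnu
$ ./my_serial_program
$ exit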

Job Monitoring

Information about submitted jobs

For an overview of your submitted jobs use the command:

$ squeue

To get detailed information about a specific job use the command:

$ scontrol show job <jobid>

Information about resource usage of running jobs

You can monitor the resource usage of running jobs with the sstat command. For example:

$ sstat --format=JobId,AveCPU,AveRSS,MaxRSS -j <jobid>

This will show average CPU time, average and maximum memory consumption of all tasks in the running job.

The command 'sstat -e' lists all fields that can be specified with the '--format' option.

Interactive access to running jobs

It is also possible to attach an interactive shell to a running job with command:

$ srun --jobid=<jobid> --overlap --pty /bin/bash

Commands like 'top' show you the busiest processes on the node. To exit 'top', type 'q'.

To monitor your GPU processes use the command 'nvidia-smi'.
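As an illustrative sequence, to check the GPU utilization of a running GPU job, attach a shell to the job and call nvidia-smi on the allocated node (the job ID is a placeholder):

$ srun --jobid=12345678 --overlap --pty /bin/bash
$ nvidia-smi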

Job Feedback

You get feedback on resource usage and job efficiency for completed jobs with the command:

$ seff <jobid>

Job feedback is also attached to the regular output file of a job.

Example Output:

============================= JOB FEEDBACK =============================
Job ID: 12345678
Cluster: helix
User/Group: hd_ab123/hd_hd
State: COMPLETED (exit code 0)
Nodes: 2
Cores per node: 64
CPU Utilized: 3-04:11:46
CPU Efficiency: 97.90% of 3-05:49:52 core-walltime
Job Wall-clock time: 00:36:29
Memory Utilized: 432.74 GB (estimated maximum)
Memory Efficiency: 85.96% of 503.42 GB (251.71 GB/node)

Explanation:

  • Nodes: Number of allocated nodes for the job.
  • Cores per node: Number of physical cores per node allocated for the job.
  • CPU Utilized: Sum of utilized core time.
  • CPU Efficiency: 'CPU Utilized' with respect to core-walltime (= 'Nodes' x 'Cores per node' x 'Job Wall-clock time') in percent.
  • Job Wall-clock time: runtime of the job.
  • Memory Utilized: Sum of memory used. For multi node MPI jobs the sum is only correct when srun is used instead of mpirun.
  • Memory Efficiency: 'Memory Utilized' with respect to total allocated memory for the job.

Accounting

Jobs are billed for allocated CPU cores, memory and GPUs.

To see the accounting data of a specific job:

$ sacct -j <jobid> --format=user,jobid,account,nnodes,ncpus,time,elapsed,AllocTRES%50

To retrieve the job history of a specific user for a certain time frame:

$ sacct -u <user> -S 2022-08-20 -E 2022-08-30 --format=user,jobid,account,nnodes,ncpus,time,elapsed,AllocTRES%50

Overview of free resources

On the login nodes the following command shows what resources are available for immediate use:

$ sinfo_t_idle