Helix/Slurm
Revision as of 22:13, 12 July 2022
General information about Slurm
The bwForCluster Helix uses Slurm as its batch system.
- Slurm documentation: https://slurm.schedmd.com/documentation.html
- Slurm cheat sheet: https://slurm.schedmd.com/pdfs/summary.pdf
- Slurm tutorials: https://slurm.schedmd.com/tutorials.html
Slurm Command Overview
Slurm commands | Brief explanation |
---|---|
sbatch | Submits a job and queues it in an input queue |
squeue | Displays information about active, eligible, blocked, and/or recently completed jobs |
scontrol | Displays detailed job state information |
scancel | Cancels a job |
Job Submission
Batch jobs are submitted with the command:
$ sbatch <job-script>
A job script contains options for Slurm in lines beginning with #SBATCH, as well as the commands you want to execute on the compute nodes. For example:
#!/bin/bash
#SBATCH --partition=single
#SBATCH --ntasks=1
#SBATCH --time=00:20:00
#SBATCH --mem=1gb
#SBATCH --export=NONE
echo 'Hello world'
This job requests one core (--ntasks=1) and 1 GB of memory (--mem=1gb) for 20 minutes (--time=00:20:00) on nodes provided by the partition 'single'.
For better reproducibility of jobs it is recommended to use the option --export=NONE, which prevents the propagation of environment variables from the submit session into the job environment, and to load all required software modules within the job script.
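Putting these recommendations together, a job script with --export=NONE might look like the following sketch (the commented module name is hypothetical; replace it with a module actually available on Helix):

```shell
#!/bin/bash
#SBATCH --partition=single
#SBATCH --ntasks=4
#SBATCH --time=01:00:00
#SBATCH --mem=4gb
#SBATCH --export=NONE

# With --export=NONE the job starts with a clean environment, so required
# software modules must be loaded here (module name is an assumption):
# module load devel/python

# Inside a running job, Slurm sets variables such as SLURM_NTASKS;
# the fallback value is only used when testing the script outside Slurm.
echo "Running on $(hostname) with ${SLURM_NTASKS:-4} tasks"
```

The #SBATCH lines are plain shell comments, so the script can also be executed directly to check the non-Slurm parts before submission.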
Partitions
On the bwForCluster Helix production system it is necessary to request a partition with '--partition=<partition_name>' on job submission. Within a partition, job allocations are routed automatically to the most suitable compute node(s) for the requested resources (e.g. number of nodes and cores, memory, number of GPUs). If no partition is requested, the devel partition is used as the default.
The partitions devel and single are operated in shared mode, i.e. jobs from different users can run on the same node. Jobs can get exclusive access to compute nodes in these partitions with the "--exclusive" option. The partition multi is operated in exclusive mode: jobs in this partition automatically get exclusive access to the requested compute nodes.
GPUs are requested with the option "--gres=gpu:<number-of-gpus>".
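A single-GPU job could be sketched as follows (the partition choice and the payload are assumptions; pick a partition that actually provides GPUs and replace the echo with your GPU application):

```shell
#!/bin/bash
#SBATCH --partition=single      # assumption: choose a partition with GPU nodes
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1            # request one GPU on the node
#SBATCH --time=00:30:00
#SBATCH --mem=8gb
#SBATCH --export=NONE

# On the compute node, Slurm restricts the job to the granted GPUs,
# typically via CUDA_VISIBLE_DEVICES; the fallback only applies when
# the script is run outside a Slurm allocation.
echo "Visible GPUs: ${CUDA_VISIBLE_DEVICES:-none}"
```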
Partition | Node Access Policy | Node Type | Default | Limits |
---|---|---|---|---|
devel | shared | amd | ntasks=1, time=00:10:00, mem-per-cpu=1gb | nodes=1, time=00:30:00 |
single | shared | amd | ntasks=1, time=00:30:00, mem-per-cpu=1gb | nodes=1, time=120:00:00 |
multi | job exclusive | amd | nodes=2, time=00:30:00 | nodes=128, time=48:00:00 |
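A multi-node job for the multi partition might look like this sketch (the task layout and the commented MPI binary are hypothetical; since the partition is job exclusive, no "--exclusive" option is needed):

```shell
#!/bin/bash
#SBATCH --partition=multi
#SBATCH --nodes=2               # within the partition limits (up to 128 nodes)
#SBATCH --ntasks-per-node=4     # assumption: adjust to your application
#SBATCH --time=12:00:00
#SBATCH --export=NONE

# Typically an MPI program is launched here with srun, e.g.:
# srun ./my_mpi_program          # hypothetical binary

# SLURM_JOB_NUM_NODES is set inside a running job; the fallback value
# is only used when testing the script outside Slurm.
echo "Nodes allocated: ${SLURM_JOB_NUM_NODES:-2}"
```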