Helix/Slurm
General information about Slurm
The bwForCluster Helix uses Slurm as batch system.
- Slurm documentation: https://slurm.schedmd.com/documentation.html
- Slurm cheat sheet: https://slurm.schedmd.com/pdfs/summary.pdf
- Slurm tutorials: https://slurm.schedmd.com/tutorials.html
Slurm Command Overview
Slurm commands | Brief explanation
---|---
sbatch | Submits a job and queues it in an input queue
squeue | Displays information about active, eligible, blocked, and/or recently completed jobs
scontrol | Displays detailed job state information
scancel | Cancels a job
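A typical interactive workflow with these commands might look like the following sketch; the job ID 123456 and the script name job.sh are placeholders:
$ sbatch job.sh                # submit the job script "job.sh"
Submitted batch job 123456
$ squeue -u $USER              # list your own pending and running jobs
$ scontrol show job 123456     # display detailed state information for the job
$ scancel 123456               # cancel the job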
Job Submission
Batch jobs are submitted with the command:
$ sbatch <job-script>
A job script contains options for Slurm in lines beginning with #SBATCH, as well as the commands you want to execute on the compute nodes. For example:
#!/bin/bash
#SBATCH --partition=XXX
#SBATCH --ntasks=1
#SBATCH --time=00:20:00
#SBATCH --mem=1gb
#SBATCH --export=NONE
echo 'Hello world'
This job requests one core (--ntasks=1) and 1 GB of memory (--mem=1gb) for 20 minutes (--time=00:20:00) on nodes provided by the partition 'XXX'.
For better reproducibility of jobs it is recommended to use the option --export=NONE, which prevents environment variables from the submit session from propagating into the job environment, and to load the required software modules within the job script.
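As a minimal sketch, a job script following this recommendation could look as follows; the partition name 'XXX', the module name and the Python script are placeholders and must be adapted to your environment:
#!/bin/bash
#SBATCH --partition=XXX
#SBATCH --ntasks=1
#SBATCH --time=00:20:00
#SBATCH --mem=1gb
#SBATCH --export=NONE

# Load the required software modules inside the job script
# (the module name is a placeholder; check 'module avail' for what is installed)
module load devel/python

python my_script.py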