Helix/Slurm

== General Information ==

The bwForCluster Helix uses Slurm as its batch system.

== Slurm Command Overview ==

{| class="wikitable"
! Slurm command !! Brief explanation
|-
| sbatch || Submits a job and queues it in an input queue
|-
| squeue || Displays information about active, eligible, blocked, and/or recently completed jobs
|-
| scontrol || Displays detailed job state information
|-
| scancel || Cancels a job
|}
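
For orientation, a typical sequence on a login node might look as follows (the job ID 123456 is only an illustration; sbatch reports the actual ID at submission time):

<pre>
$ sbatch job.sh              # submit the job script job.sh
$ squeue -u $USER            # list your own pending and running jobs
$ scontrol show job 123456   # show detailed state information for one job
$ scancel 123456             # cancel that job
</pre>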

== Job Submission ==

Batch jobs are submitted with the command:

<pre>
$ sbatch <job-script>
</pre>
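
On success, sbatch prints the ID assigned to the job, which is needed for commands such as scontrol and scancel; the ID shown here is just an example:

<pre>
$ sbatch job.sh
Submitted batch job 123456
</pre>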

A job script contains options for Slurm in lines beginning with #SBATCH, followed by the commands you want to execute on the compute nodes. For example:

<pre>
#!/bin/bash
#SBATCH --partition=XXX
#SBATCH --ntasks=1
#SBATCH --time=00:20:00
#SBATCH --mem=1gb
#SBATCH --export=NONE
echo 'Hello world'
</pre>

This job requests one core (--ntasks=1) and 1 GB of memory (--mem=1gb) for 20 minutes (--time=00:20:00) on nodes provided by the partition 'XXX'.
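
The same options can also be given on the sbatch command line, where they take precedence over the corresponding #SBATCH lines in the script. For example, to rerun the script above with a longer time limit:

<pre>
$ sbatch --time=01:00:00 <job-script>
</pre>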

For better reproducibility of jobs, it is recommended to use the option --export=NONE to prevent the propagation of environment variables from the submit session into the job environment, and to load the required software modules in the job script.
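
Putting this together, a job script following this recommendation could look like the sketch below; the module name is hypothetical, so check 'module avail' on Helix for the actual names:

<pre>
#!/bin/bash
#SBATCH --partition=XXX
#SBATCH --ntasks=1
#SBATCH --time=00:20:00
#SBATCH --mem=1gb
#SBATCH --export=NONE
# Load required software here instead of relying on the submit session
# (the module name is a hypothetical example).
module load devel/python
python3 --version
</pre>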