BwForCluster JUSTUS 2 Slurm HOWTO

{{Justus2}}

= Slurm Howto =


== Preface ==


This is a collection of howtos and convenient commands that I initially
wrote for internal use at Ulm only. Scripts and commands have been tested
within our Slurm test environment at JUSTUS (running Slurm 19.05 at the moment).

Maybe you find this collection useful, but use it at your own risk. Things
may behave differently with different Slurm versions and configurations.


== GENERAL ==


=== How to find Slurm FAQ? ===


https://slurm.schedmd.com/faq.html


=== How to find a Slurm cheat sheet? ===


https://slurm.schedmd.com/pdfs/summary.pdf


=== How to get more information? ===


(Almost) every Slurm command has a man page. Use it.

Online versions: https://slurm.schedmd.com/man_index.html
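
For example, to look up the submission options used throughout this page:
<pre>
$ man sbatch        # full manual page for the batch submission command
$ sbatch --help     # short summary of available options
</pre>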


== JOB SUBMISSION ==


=== How to submit an interactive job? ===


Use the [https://slurm.schedmd.com/srun.html srun] command, e.g.:
<pre>$ srun --nodes=1 --ntasks-per-node=8 --pty bash</pre>
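
The same resource options that sbatch accepts can also be passed to srun, for instance to request a wall time and memory limit explicitly (the values below are only placeholders):
<pre>$ srun --nodes=1 --ntasks-per-node=8 --time=02:00:00 --mem=16gb --pty bash</pre>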


=== How to enable X11 forwarding for an interactive job? ===


Use the --x11 flag, e.g.:
<pre>
$ srun --nodes=1 --ntasks-per-node=8 --pty --x11 bash     # run shell with X11 forwarding enabled
$ srun --nodes=1 --ntasks-per-node=8 --pty --x11 xterm    # directly launch terminal window on node
</pre>

Note:
* For X11 forwarding to work, you must also enable X11 forwarding for your ssh login from your local computer to the cluster, i.e.:
<pre>local> ssh -X <username>@justus2.uni-ulm.de</pre>

=== How to submit a batch job? ===

Use the [https://slurm.schedmd.com/sbatch.html sbatch] command:

<pre>$ sbatch <job-script></pre>
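
A minimal job script could look like the following sketch; the job name, resource values, and the program to run are placeholders that need to be adapted:
<pre>
#!/bin/bash
#SBATCH --job-name=my_job          # placeholder job name
#SBATCH --nodes=1                  # number of nodes
#SBATCH --ntasks-per-node=8        # tasks (cores) per node
#SBATCH --time=01:00:00            # wall time limit (hh:mm:ss)
#SBATCH --mem=8gb                  # memory per node

# load required software modules here, then start the actual program
srun ./my_program                  # placeholder executable
</pre>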

=== How to convert Moab batch job scripts to Slurm? ===

Replace Moab/Torque job specification flags and environment variables in your job
scripts with their corresponding Slurm counterparts; a translated example header is shown below the table.

'''Commonly used Moab job specification flags and their Slurm equivalents*'''

{| width=750px class="wikitable"
! Option !! Moab (msub) !! Slurm (sbatch)
|-
| Script directive || #MSUB || #SBATCH
|-
| Job name || -N <name> || --job-name=<name> (-J <name>)
|-
| Account || -A <account> || --account=<account> (-A <account>)
|-
| Queue || -q <queue> || --partition=<partition> (-p <partition>)
|-
| Wall time limit || -l walltime=<hh:mm:ss> || --time=<hh:mm:ss> (-t <hh:mm:ss>)
|-
| Node count || -l nodes=<count> || --nodes=<count> (-N <count>)
|-
| Core count || -l procs=<count> || --ntasks=<count> (-n <count>)
|-
| Process count per node || -l ppn=<count> || --ntasks-per-node=<count>
|-
| Core count per process || || --cpus-per-task=<count>
|-
| Memory limit per node || -l mem=<limit> || --mem=<limit>
|-
| Memory limit per process || -l pmem=<limit> || --mem-per-cpu=<limit>
|-
| Job array || -t <array indices> || --array=<indices> (-a <indices>)
|-
| Node exclusive job || -l naccesspolicy=singlejob || --exclusive
|-
| Initial working directory || -d <directory> (default: $HOME) || --chdir=<directory> (-D <directory>) (default: submission directory)
|-
| Standard output file || -o <file path> || --output=<file> (-o <file>)
|-
| Standard error file || -e <file path> || --error=<file> (-e <file>)
|-
| Combine stdout/stderr to stdout || -j oe || --output=<combined stdout/stderr file>
|-
| Mail notification events || -m <event> || --mail-type=<events> (valid types include: NONE, BEGIN, END, FAIL, ALL)
|-
| Export environment to job || -V || --export=ALL (default)
|-
| Don't export environment to job || (default) || --export=NONE
|-
| Export environment variables to job || -v <var[=value][,var2=value2[, ...]]> || --export=<var[=value][,var2=value2[,...]]>
|}
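
To illustrate the mapping above, a Moab job header and a direct Slurm translation might look as follows; the job name, queue/partition, and resource values are only example placeholders:
<pre>
# Moab/Torque directives (msub)
#MSUB -N test_job
#MSUB -q <queue>
#MSUB -l nodes=1
#MSUB -l ppn=8
#MSUB -l walltime=02:00:00
#MSUB -l mem=16gb

# Equivalent Slurm directives (sbatch)
#SBATCH --job-name=test_job
#SBATCH --partition=<partition>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=02:00:00
#SBATCH --mem=16gb
</pre>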
