JUSTUS2/Running Your Calculations

{{Justus2}}

The JUSTUS 2 cluster uses [https://slurm.schedmd.com/ Slurm] for scheduling compute jobs.

= JUSTUS 2 Slurm Howto =

This page gives only a very basic introduction. Please see the '''[[bwForCluster JUSTUS 2 Slurm HOWTO|JUSTUS 2 Slurm HOWTO]]''' for many more examples and commands for common tasks.

= Slurm Command Overview =

{| width=750px class="wikitable"
! Slurm commands !! Brief explanation
|-
| [https://slurm.schedmd.com/sbatch.html sbatch] || Submits a job and queues it in an input queue
|-
| [https://slurm.schedmd.com/salloc.html salloc] || Requests resources for an interactive job
|-
| [https://slurm.schedmd.com/squeue.html squeue] || Displays information about active, eligible, blocked, and/or recently completed jobs
|-
| [https://slurm.schedmd.com/scontrol.html scontrol] || Displays detailed job state information
|-
| [https://slurm.schedmd.com/sstat.html sstat] || Displays status information about a running job
|-
| [https://slurm.schedmd.com/scancel.html scancel] || Cancels a job
|}

= Submitting Jobs on the bwForCluster JUSTUS 2 =

Batch jobs are submitted with the command:

<source lang=bash>$ sbatch <job-script> </source>

A job script contains options for Slurm in lines beginning with #SBATCH, as well as the commands you want to execute on the compute nodes. For example:

<source lang='bash'>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:14:00
#SBATCH --mem=1gb

echo 'Here starts the calculation'
</source>

You can override options from the script on the command line:

<source lang=bash>$ sbatch --time=03:00:00 <job-script> </source>
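
Several options can be combined on the command line. For example, the walltime and the job name (shown later in the NAME column of squeue) can be set at submission; the name ''testrun'' is just an illustrative placeholder:

<source lang=bash>$ sbatch --job-name=testrun --time=00:30:00 <job-script> </source>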

Note: <font color="red"> Compute jobs must not read from or write to the global file systems for temporary calculation data (e.g. as a swap/scratch file for the calculation). </font>

Use local storage for this purpose: /tmp in the ramdisk for small files, or /scratch (see [[BwForCluster_JUSTUS_2_Slurm_HOWTO#How_to_request_local_scratch_.28SSD.2FNVMe.29_at_job_submission.3F|How to request NVME]]) for larger amounts of data.

To keep such data off the central file systems, you often have to configure the program you are using to write its temporary files elsewhere.

If the program looks for its files in the current working directory, copy the input files to a temporary directory, run the calculation there, and copy or save the results at the end; otherwise your results are deleted by the automated cleanup that runs after the job.

The diskless nodes provide a RAM disk that can use at most half of the node's total RAM. Note that the files created there plus the memory requirement of your job must fit into the node's total memory.

There are more diskless nodes than nodes with local disks, so if your job can run on a diskless node, you should choose this option.

Example job script requesting 700 GB of local scratch space and copying files:

<source lang='bash'>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:14:00
#SBATCH --mem=1gb
#SBATCH --gres=scratch:700

# copy input file to the local scratch directory
cp $HOME/inputfiles/myinput.inp $SCRATCH

# switch to the scratch directory
cd $SCRATCH

echo 'Here starts the calculation'
myprogram --input=$SCRATCH/myinput.inp
# calculation ends

# copy results back to the home directory
cp outfile.out results2.txt $HOME/resultdir/job12345

# clean up the scratch directory
rm myinput.inp outfile.out results2.txt
</source>
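
For jobs whose temporary files are small, the /tmp ramdisk on the diskless nodes can be used instead of local scratch. The following is only a sketch: the program name, file names and the 8 GB memory value are illustrative placeholders, and the memory request must cover both the program itself and the files kept in /tmp.

<source lang='bash'>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:14:00
#SBATCH --mem=8gb    # must cover the program's memory AND the files written to the /tmp ramdisk

# create a private working directory in the ramdisk and remove it when the job ends
WORKDIR=$(mktemp -d /tmp/${SLURM_JOB_ID}.XXXXXX)
trap 'rm -rf "$WORKDIR"' EXIT

cp $HOME/inputfiles/myinput.inp "$WORKDIR"
cd "$WORKDIR"

myprogram --input=myinput.inp

# copy the results back before the trap removes the working directory
cp outfile.out $HOME/resultdir/
</source>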

= Testing Your Jobs =

JUSTUS 2 has three compute nodes reserved for jobs with a walltime under 15 minutes. To test whether your job starts properly, simply specify a short walltime, e.g. --time=00:14:00, and the job should start very quickly.
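
For example, to try out an existing job script with a short test walltime:

<source lang=bash>$ sbatch --time=00:14:00 <job-script> </source>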

= Monitoring Your Jobs =

== squeue ==

After you have submitted a job, you can see it waiting with the <code>squeue</code> command (also read the man page with <code>man squeue</code> for more information on how to use the command):

<source lang='shell'>
> squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
6260301 standard r_60_b_2 ul_yxz1 PD 0:00 1 (AssocGrpMemRunMinutes)
</source>

The output shows:

* JOBID: a unique number assigned to your job
* PARTITION: the partition the job was routed to (the cluster is divided into different types of nodes)
* NAME: the name you gave your job with the --job-name= option
* USER: your username
* ST: the state the job is in; R = running, PD = pending, CD = completed (see the man page for a full list of states)
* TIME: how long the job has been running
* NODES: how many nodes were requested
* NODELIST(REASON): either the node(s) the job is running on, or the reason why it has not started yet
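
On a busy cluster it helps to restrict the output to your own jobs; -u and -l are standard squeue options:

<source lang='shell'>
# show only your own jobs, in the long output format
> squeue -u $USER -l
</source>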

== scontrol ==

You can show more detailed information on a specific job with the <code>scontrol</code> command, e.g. for the job with ID 6260301 listed above:

<source lang='shell'>
> scontrol show job 6260301
</source>
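
If you are mainly interested in why a job has not started yet, you can filter the output; JobState and Reason are fields of the scontrol output:

<source lang='shell'>
# show only the job state and the pending reason
> scontrol show job 6260301 | grep -E 'JobState|Reason'
</source>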

== Monitoring a Started Job ==

After a job has started, you can ssh to the node(s) the job is running on, using the node name from NODELIST, e.g. if your job runs on n0603:

<source lang='shell'>
> ssh n0603
</source>
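
On the node you can then inspect your processes, e.g. with top. Alternatively, sstat (see the command overview above) reports statistics about a running job directly from the login node; the format fields below are standard sstat fields, and the job ID is the one from the example above:

<source lang='shell'>
# on the compute node: watch your own processes
> top -u $USER

# from a login node: CPU and memory statistics of the running job
> sstat --format=JobID,AveCPU,AveRSS,MaxRSS -j 6260301
</source>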

= Partitions =

Job allocations at JUSTUS 2 are routed automatically to the most suitable compute node(s) that can provide the requested resources for the job (e.g. number of cores, memory, local scratch space). This is to prevent fragmentation of the cluster system and to ensure the most efficient usage of the available compute resources. Thus, there is no point in requesting a partition in batch job scripts, i.e. users '''should not''' specify a partition with "-p, --partition=<partition_name>" on job submission. This is of particular importance if you adapt job scripts from other cluster systems (e.g. bwUniCluster 2.0) to JUSTUS 2.

= Job Priorities =

Job priorities at JUSTUS 2 depend on [https://slurm.schedmd.com/priority_multifactor.html multiple factors]:

* Age: The amount of time a job has been waiting in the queue, eligible to be scheduled.
* Fairshare: The difference between the portion of the computing resources allocated to an association and the amount of resources that has been consumed.

'''Notes:'''

Jobs that are pending because the user reached one of the resource usage limits (see below) are not eligible to be scheduled and, thus, do not accrue priority by their age.

Fairshare does '''not''' introduce a fixed allotment in the sense that a user's ability to run new jobs is cut off as soon as a fixed target utilization is reached. Instead, the fairshare factor ensures that jobs from users who were under-served in the past are given higher priority than jobs from users who were over-served in the past. This keeps individual groups from monopolizing the resources in the long term, which would be unfair to groups that have not used their fair share for quite some time.

Slurm features '''backfilling''', meaning that the scheduler will start lower priority jobs if doing so does not delay the expected start time of '''any''' higher priority job. Since the expected start time of pending jobs depends upon the expected completion time of running jobs, reasonably accurate time limits are valuable for backfill scheduling to work well. This '''[https://youtu.be/OKhWwem1XZg?t=161 video]''' gives an illustrative description of how backfilling works.

In summary, an approximate model of Slurm's behavior for scheduling jobs is this:

* Step 1: Can the job in position one (highest priority) start now?
* Step 2: If it can, remove it from the queue, start it, and continue with step 1.
* Step 3: If it cannot, look at the next job.
* Step 4: Can it start now, without delaying the start time of any job before it in the queue?
* Step 5: If it can, remove it from the queue, start it, recalculate which nodes are free, look at the next job, and continue with step 4.
* Step 6: If it cannot, look at the next job, and continue with step 4.

As soon as a new job is submitted and as soon as a job finishes, Slurm restarts its main scheduling cycle with step 1. A simplified sketch of this model is shown below.
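
The following is purely an illustrative sketch of this model in bash (not Slurm's actual implementation): jobs are reduced to a requested core count, the free core count is made up, and the "does not delay any higher-priority job" test of real backfilling is only hinted at in the comments.

<source lang='bash'>
#!/bin/bash
# Illustrative model of the scheduling cycle described above.
free_cores=96               # assumed number of currently free cores
queue=(48 96 24)            # pending jobs (requested cores), ordered by priority

for cores in "${queue[@]}"; do
    if (( cores <= free_cores )); then
        # Steps 2/5: the job fits now, start it and update the free resources
        echo "start job requesting $cores cores"
        (( free_cores -= cores ))
    else
        # Steps 3/6: the job cannot start now, look at the next job.
        # Real backfilling additionally checks that starting a later job does
        # not delay the expected start time of this higher-priority job.
        echo "job requesting $cores cores must wait"
    fi
done
</source>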

= Usage Limits/Throttling Policies =

While the fairshare factor ensures a fair long-term balance of resource utilization between users and groups, there are additional usage limits that constrain the total cumulative resources in use at a given time. This is to prevent individual users from monopolizing large fractions of the whole cluster system in the short term.

* The '''maximum walltime''' for a job is '''14 days''' (336 hours): --time=336:00:00 or --time=14-0

* The maximum '''number of cores''' allocated at any given time by running jobs is '''1920''' per user (aggregated over all running jobs). This translates to 40 nodes. An equivalent limit for allocated memory also applies. If this limit is reached, new jobs will be queued (with REASON: AssocGrpCpuLimit) but only allowed to run after resources have been relinquished.

* The maximum amount of '''remaining allocated core-minutes''' per user is '''3300000''' (aggregated over all running jobs). For example, if a user has a 4-core job running that will complete in 1 hour and a 2-core job that will complete in 6 hours, this translates to 4 * 1 * 60 + 2 * 6 * 60 = 16 * 60 = 960 remaining core-minutes (a small sketch of this calculation follows after this list). Once a user reaches the limit, no more jobs are allowed to start (REASON: AssocGrpCPURunMinutesLimit). As the jobs continue to run, the remaining core time decreases and eventually allows more jobs to start in a staggered way. This limit also '''couples the maximum walltime and the number of cores that can be allocated''' for this amount of time. Thus, shorter walltimes allow more resources to be allocated at a given time (but capped by the maximum number of cores above). Watch this '''[https://youtu.be/OKhWwem1XZg?t=306 video]''' for an illustrative description. An equivalent limit applies for the remaining time of memory allocations, in which case jobs may be held back from starting with REASON: AssocGrpMemRunMinutes.

* The maximum '''number of GPUs''' allocated by running jobs is '''4''' per user. If this limit is reached, new jobs will be queued (with REASON: AssocGrpGRES) but only allowed to run after GPU resources have been relinquished.
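
The following is a small sketch of the core-minutes calculation from the example above; the job list is hard-coded for illustration and not queried from Slurm.

<source lang='bash'>
#!/bin/bash
# 4-core job with 1 hour (60 min) left, 2-core job with 6 hours (360 min) left
jobs="4:60 2:360"                 # cores:remaining_minutes for each running job

total=0
for j in $jobs; do
    cores=${j%%:*}
    minutes=${j##*:}
    (( total += cores * minutes ))
done

echo "remaining core-minutes: $total"   # 4*60 + 2*360 = 960, far below the 3300000 limit
</source>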

'''Note:''' Usage limits are subject to change.

= Other Considerations =

== Default Values ==

Default values for jobs are:

* Runtime: --time=02:00:00 (2 hours)
* Nodes: --nodes=1 (one node)
* Tasks: --ntasks-per-node=1 (one task per node)
* Cores: --cpus-per-task=1 (one core per task)
* Memory: --mem-per-cpu=2gb (2 GB per core)

== Node Access Policy ==

The node access policy for jobs is "'''exclusive user'''": nodes are exclusively allocated to a single user, and '''multiple jobs (up to 48) of the same user can run on a single node''' at any time.

'''Note:''' This implies that for '''sub-node jobs''' it is advisable, for efficient resource utilization and maximum job throughput, to '''adjust the number of cores to be an integer divisor of 48''' (the total number of cores on each node). For example, two 24-core jobs can run simultaneously on one and the same node, while two 32-core jobs will always have to allocate two separate nodes and leave 16 cores unused on each of them. Users must therefore always '''think carefully about how many cores to request''' and whether their applications really benefit from allocating more cores for their jobs. Similar considerations apply, at the same time, to the '''requested amount of memory per job'''.

Think of it as the scheduler playing a game of multi-dimensional Tetris, where the dimensions are the number of cores, the amount of memory and other resources. '''Users can support this by making resource allocations that allow the scheduler to pack their jobs as densely as possible on the nodes.''' An example of a cleanly packing sub-node job header is sketched below.
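
For example, a sketch of a sub-node job header that packs cleanly; the memory and walltime values are purely illustrative:

<source lang='bash'>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=24     # 24 is an integer divisor of 48, so two such jobs can share one node
#SBATCH --mem-per-cpu=3gb      # illustrative: 24 cores x 3 GB = 72 GB, so two such jobs also fit into memory
#SBATCH --time=12:00:00
</source>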

== Memory Management ==

The '''wait time of a job also depends largely on the amount of requested resources''' and the number of nodes that can provide this amount of resources. This must be taken into account '''in particular when requesting a certain amount of memory'''.

For example, there is a total of 692 compute nodes in JUSTUS 2, of which 456 nodes have 192 GB RAM. However, '''not the entire amount of physical RAM is available for user jobs''', because the operating system, system services and local file systems also require a certain amount of RAM.

This means that if a job requests 192 GB RAM per node (i.e. --mem=192gb, or --ntasks-per-node=48 and --mem-per-cpu=4gb), Slurm will rule out 456 of the 692 nodes as suitable for this job and consider only 220 of the 692 nodes as eligible for running it.

The following table provides an overview of how much memory can be allocated by user jobs on the various node types and how many nodes can serve this memory requirement:

{| width=500px class="wikitable"
! Physical RAM on node !! Available RAM on node !! Number of suitable nodes
|-
| 192 GB || 187 GB || 692
|-
| 384 GB || 376 GB || 220
|-
| 768 GB || 754 GB || 28
|-
| 1536 GB || 1510 GB || 8
|}
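
For example, a full-node job that should stay eligible for all 692 nodes must not request more than the 187 GB available on the smallest nodes. A sketch of such a job header (the walltime is illustrative):

<source lang='bash'>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48
#SBATCH --mem=187gb     # fits the available RAM of the 192 GB nodes, so all 692 nodes remain eligible
#SBATCH --time=24:00:00
</source>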

Also note that allocated memory is factored into the resource usage accounting for fair share. This means that over-requesting memory may have a negative impact on the priority of subsequent jobs.