JUSTUS2/Slurm

The bwForCluster JUSTUS 2 is a state-wide high-performance compute resource dedicated to Computational Chemistry and Quantum Sciences in Baden-Württemberg, Germany.

1 Submitting Jobs on the bwForCluster JUSTUS 2

The JUSTUS 2 cluster uses Slurm for scheduling compute jobs. In order to get started with Slurm at JUSTUS 2, please visit our Slurm HOWTO for JUSTUS 2.
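For orientation, a typical interaction with Slurm consists of submitting a batch script and then monitoring it with the standard Slurm commands. A minimal sketch (job.sh is a placeholder name for your own batch script):

 sbatch job.sh            # submit the batch script to the scheduler
 squeue -u $USER          # list your pending and running jobs
 scancel <jobid>          # cancel a job by its job ID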

1.1 Partitions

Job allocations at JUSTUS 2 are routed automatically to the most suitable compute node(s) that can provide the requested resources for the job (e.g. number of cores, memory, local scratch space). This prevents fragmentation of the cluster system and ensures the most efficient use of the available compute resources. Thus, there is no point in requesting a partition in batch job scripts, i.e. users should not specify any partition ("-p, --partition=<partition_name>") on job submission. This is of particular importance if you adapt job scripts from other cluster systems (e.g. bwUniCluster 2.0) to JUSTUS 2.
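For illustration, a batch script header for JUSTUS 2 might look like the following sketch (the resource values are arbitrary placeholders, not recommendations); note that it deliberately contains no partition directive:

 #!/bin/bash
 #SBATCH --nodes=1               # number of nodes
 #SBATCH --ntasks-per-node=48    # tasks (cores) per node
 #SBATCH --time=24:00:00         # walltime limit
 #SBATCH --mem=180gb             # memory per node
 # note: no "#SBATCH --partition=..." line, Slurm routes the job automatically
 srun ./my_application           # placeholder for the actual program call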

1.2 Job Priorities

Job priorities at JUSTUS 2 depend on multiple factors:

  • Age: The amount of time a job has been waiting in the queue, eligible to be scheduled.
  • Fairshare: The difference between the portion of the computing resource allocated to an association and the amount of resources that has been consumed.

Notes:

Jobs that are pending because the user reached one of the resource usage limits (see below) are not eligible to be scheduled and, thus, do not accrue priority by their age.

Fairshare does not impose a fixed allotment whereby a user's ability to run new jobs is cut off as soon as a fixed target utilization is reached. Instead, the fairshare factor ensures that jobs from users who were under-served in the past are given higher priority than jobs from users who were over-served in the past. This keeps individual groups from monopolizing the resources in the long term, which would be unfair to groups that have not used their fair share for quite some time.

Slurm features backfilling, meaning that the scheduler will start lower-priority jobs if doing so does not delay the expected start time of any higher-priority job. Since the expected start time of pending jobs depends on the expected completion time of running jobs, reasonably accurate time limits are valuable for backfill scheduling to work well. This video gives an illustrative description of how backfilling works.
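To inspect these factors from the command line, the standard Slurm commands can be used; a brief sketch (the available output fields may vary with the site configuration):

 sprio -u $USER            # priority factors (age, fairshare, ...) of your pending jobs
 sshare -U                 # your current fairshare usage and factor
 squeue -u $USER --start   # expected start times of your pending jobs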

1.3 Usage Limits/Throttling Policies

While the fairshare factor ensures a fair long-term balance of resource utilization between users and groups, there are additional usage limits that constrain the total amount of resources a user can occupy at any given time. This is to prevent individual users from monopolizing large fractions of the whole cluster system in the short term.

  • The maximum walltime for a job is 14 days (336 hours)
 --time=336:00:00 or --time=14-0
  • The maximum number of cores in use at any given time is 1920 per user (aggregated over all running jobs). This translates to 40 nodes. An equivalent limit for allocated memory also applies. If this limit is reached, new jobs will be queued (with REASON: AssocGrpCpuLimit) but only allowed to run after resources have been relinquished.
  • The maximum amount of remaining allocated core-minutes per user is 3,300,000 (aggregated over all running jobs). For example, if a user has a 4-core job running that will complete in 1 hour and a 2-core job that will complete in 6 hours, this translates to 4 * 1 * 60 + 2 * 6 * 60 = 16 * 60 = 960 remaining core-minutes. Once a user reaches the limit, no more jobs are allowed to start (REASON: AssocGrpCPURunMinutesLimit). As the running jobs proceed, the remaining core time decreases and eventually allows more jobs to start in a staggered way. This limit also couples the maximum walltime with the number of cores that can be allocated for that time: shorter walltimes allow more resources to be allocated at a given time (but capped by the maximum number of cores above); see the worked example below. Watch this video for an illustrative description. An equivalent limit applies to the remaining allocation time for memory.
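The following back-of-the-envelope sketch illustrates how the core-minute budget couples walltime and core count (the 1920-core and 3,300,000 core-minute limits are taken from above; the derived numbers are approximate):

 1920 cores * 336 h * 60 min/h = 38,707,200 core-minutes  -> far above the 3,300,000 limit
 1920 cores *  28 h * 60 min/h =  3,225,600 core-minutes  -> fits; ~28 hours is roughly the longest
                                                             walltime at which all 1920 cores can run at once
  163 cores * 336 h * 60 min/h =  3,286,080 core-minutes  -> fits; at the full 14-day walltime only
                                                             about 163 cores fit within the budget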

Note:

Usage limits are subject to change.

1.4 Other Considerations

The wait time of a job also depends largely on the amount of requested resources and on the number of nodes that can provide that amount of resources. This must be taken into account in particular when requesting a certain amount of memory.

For example, there are a total of 500 standard nodes in JUSTUS 2, of which 456 nodes have 192 GB RAM and 44 nodes have 384 GB RAM. However, not all of the physical RAM is available exclusively for user jobs, because the operating system, system services and local file systems also require a certain amount of RAM.

This means that if a job requests exactly 192 GB RAM per node (i.e. --mem=192gb, or --ntasks-per-node=48 and --mem-per-cpu=4gb), then Slurm will rule out 456 of the 500 standard nodes as unsuitable for this job and will consider only the 44 standard nodes with 384 GB RAM for scheduling it.

The following table provides an overview of how much memory can be allocated by user jobs on the various node types:

Physical RAM on node    Available RAM on node
192 GB                  187 GB
384 GB                  376 GB
768 GB                  754 GB
1536 GB                 1510 GB
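To keep all 500 standard nodes eligible for a job, it is therefore advisable to request no more memory than is available on the smallest standard nodes. A minimal sketch, assuming the values from the table above and 48 cores per standard node (the figures are illustrative, not recommendations):

 #SBATCH --ntasks-per-node=48
 #SBATCH --mem=187gb             # fits on the 192 GB nodes as well as on all larger ones
 # or, equivalently, per core:
 #SBATCH --mem-per-cpu=3800mb    # 48 * 3800 MB = 182,400 MB, safely below 187 GB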