JUSTUS2/Slurm

From bwHPC Wiki

The JUSTUS 2 cluster uses Slurm for scheduling compute jobs.

To get started with Slurm at JUSTUS 2, please visit our [[bwForCluster JUSTUS 2 Slurm HOWTO|Slurm HOWTO]] for JUSTUS 2.
 
== Partitions ==
Job allocations at JUSTUS 2 are routed automatically to the most suitable compute node(s) that fulfill the requested resources (e.g. number of cores, memory, local scratch space). This prevents fragmentation of the cluster system and ensures the most efficient use of the available compute resources. There is no need to request a specific partition in your batch job scripts, i.e. do not specify "-p, --partition=<partition_name>" on job submission.
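
For illustration, a minimal batch script could look like the following sketch. The resource values are placeholders and the local scratch request (--gres=scratch:<GB>) is an assumption about the syntax used on JUSTUS 2; consult the Slurm HOWTO for the options actually supported. The point is that no "-p/--partition" option appears anywhere in the script.

<pre>
#!/bin/bash
# Example resource requests -- Slurm routes the job to suitable node(s) automatically.
#SBATCH --nodes=1              # one compute node
#SBATCH --ntasks-per-node=8    # number of cores on that node
#SBATCH --mem=16gb             # memory per node
#SBATCH --gres=scratch:100     # local scratch space in GB (assumed syntax; see the Slurm HOWTO)
#SBATCH --time=02:00:00        # wall clock limit
# Note: no "-p/--partition" line -- the scheduler selects the partition itself.

./my_program                   # placeholder for the actual application
</pre>

The script is submitted as usual with "sbatch <scriptname>"; Slurm then picks the node(s) that best match the requested resources.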
