JUSTUS2/Running Your Calculations
The JUSTUS 2 cluster uses Slurm for scheduling compute jobs.
To get started with Slurm at JUSTUS 2, please visit our [[bwForCluster JUSTUS 2 Slurm HOWTO|Slurm HOWTO]] for JUSTUS 2.
== Partitions ==
Job allocations at JUSTUS 2 are automatically routed to the most suitable compute node(s) that fulfill the requested resources (e.g. number of cores, memory, local scratch space). This prevents fragmentation of the cluster system and ensures the most efficient usage of the available compute resources. There is no need to request a specific partition in your batch job scripts, i.e. do not specify "-p, --partition=<partition_name>" on job submission.
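For illustration, here is a minimal sketch of a batch script that requests resources without naming a partition. The job name, resource values, program name, and the "--gres=scratch" syntax for local scratch space are assumptions for this example; consult the Slurm HOWTO for the exact options supported at JUSTUS 2.

<pre>
#!/bin/bash
#SBATCH --job-name=my_calc      # job name (placeholder)
#SBATCH --nodes=1               # one compute node
#SBATCH --ntasks=8              # eight cores
#SBATCH --mem=16G               # memory per node
#SBATCH --time=02:00:00         # wall-clock limit (hh:mm:ss)
#SBATCH --gres=scratch:100      # local scratch in GB (assumed site-specific syntax, see Slurm HOWTO)
# Note: no "-p, --partition" line; the scheduler selects suitable nodes automatically.

srun ./my_program               # placeholder for your actual application
</pre>

Submitting this script with "sbatch" lets the scheduler route the job to whichever nodes match the requested cores, memory, and scratch space.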