JUSTUS2/Running Your Calculations

The bwForCluster JUSTUS 2 is a state-wide high-performance compute resource dedicated to Computational Chemistry and Quantum Sciences in Baden-Württemberg, Germany.

The JUSTUS 2 cluster uses Slurm for scheduling compute jobs.

To get started with Slurm at JUSTUS 2, please visit our Slurm HOWTO for JUSTUS 2.
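As a first orientation, the following is a minimal sketch of a Slurm batch script and how it might be submitted. The module name and program are placeholders, not actual JUSTUS 2 software names; please refer to the Slurm HOWTO for authoritative examples.

<pre>
#!/bin/bash
# Minimal single-node job sketch (placeholder values, adapt to your workload)
#SBATCH --nodes=1              # run on a single node
#SBATCH --ntasks=8             # number of cores / MPI tasks
#SBATCH --time=02:00:00        # wall time limit (hh:mm:ss)
#SBATCH --mem=16G              # total memory for the job
#SBATCH --job-name=my_calc     # job name shown in the queue

module load chem/myprogram     # placeholder module name
srun my_program input.inp > output.log
</pre>

Submit the script with "sbatch jobscript.slurm" and monitor it with "squeue -u $USER".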

== Partitions ==

Job allocations at JUSTUS 2 are routed automatically to the most suitable compute node(s) that can provide the requested resources for the job (e.g. number of cores, memory, local scratch space). This prevents fragmentation of the cluster system and ensures the most efficient usage of the available compute resources. There is no need to request a specific partition in your batch job scripts, i.e. users '''must not''' specify "-p, --partition=<partition_name>" on job submission.
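In practice this means a job script only describes the resources it needs and omits any partition line. The sketch below illustrates this; the "--gres=scratch" request for local scratch space is an assumption based on common Slurm setups and may differ on JUSTUS 2, so check the Slurm HOWTO for the exact syntax.

<pre>
#!/bin/bash
# Resource-driven job script: no "-p/--partition" line, Slurm selects suitable nodes
#SBATCH --ntasks=16            # number of cores
#SBATCH --mem=32G              # memory for the job
#SBATCH --gres=scratch:100     # local scratch space (assumed syntax, size in GB)
#SBATCH --time=24:00:00        # wall time limit

srun ./my_simulation           # placeholder executable
</pre>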