BinAC2/SLURM Partitions
== Partitions ==

The bwForCluster BinAC 2 provides two partitions (i.e. queues) for job submission. Within a partition, job allocations are automatically routed to the most suitable compute node(s) for the requested resources (e.g. number of nodes and cores, memory, number of GPUs).

<!--
All partitions are operated in shared mode, that is, jobs from different users can be executed on the same node. However, one can get exclusive access to compute nodes by using the "--exclusive" option.

GPUs are requested with the option "--gres=gpu:<number-of-gpus>".
-->
{| class="wikitable"
|-
! style="width:20%"| Partition
! style="width:20%"| Node Access Policy
! style="width:20%"| Node Types
! style="width:20%"| Default
! style="width:20%"| Limits
|-
| compute (default)
| shared
| cpu
| ntasks=1, time=00:10:00, mem-per-cpu=1gb
| nodes=2, time=14-00:00:00
|-
| gpu
| shared
| gpu
| ntasks=1, time=00:10:00, mem-per-cpu=1gb
| nodes=1, time=14-00:00:00
|-
|}
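For illustration, a minimal sketch of a batch script for the compute partition that overrides the defaults listed above; the job name, resource values, and program call are placeholders, not recommended settings:

<pre>
#!/bin/bash
#SBATCH --partition=compute      # default partition, stated here for clarity
#SBATCH --ntasks=4               # default would be ntasks=1
#SBATCH --time=02:00:00          # default is 00:10:00, limit is 14-00:00:00
#SBATCH --mem-per-cpu=2gb        # default is 1gb
#SBATCH --job-name=my_job        # placeholder job name

# Replace with your actual program call.
srun ./my_program
</pre>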
=== Parallel Jobs ===

In order to submit parallel jobs to the InfiniBand part of the cluster, i.e., for fast inter-node communication, select the appropriate nodes via the <code>--constraint=ib</code> option in your job script. For less demanding parallel jobs, you may try the <code>--constraint=eth</code> option, which uses 100Gb/s Ethernet instead of the low-latency 100Gb/s InfiniBand.
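As a sketch, a two-node MPI job requesting the InfiniBand nodes could look as follows; the task count and program name are placeholders that depend on your application and the nodes' core count:

<pre>
#!/bin/bash
#SBATCH --partition=compute
#SBATCH --constraint=ib          # request InfiniBand nodes for fast inter-node communication
#SBATCH --nodes=2                # the compute partition allows at most 2 nodes
#SBATCH --ntasks-per-node=16     # placeholder, adjust to the nodes' core count
#SBATCH --time=1-00:00:00
#SBATCH --mem-per-cpu=2gb

# srun starts one MPI rank per task; replace with your actual MPI program.
srun ./my_mpi_program
</pre>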
=== GPU Jobs ===

BinAC 2 provides different GPU models for computations. Please select the appropriate GPU type and count via the <code>--gres=gpu:<type>:[1..N]</code> option in your job script, as listed in the table below.
{| class="wikitable"
|-
! style="width:20%"| GPU
! style="width:20%"| GPU Memory
! style="width:20%"| # GPUs per Node [N]
! style="width:20%"| Submit Option
|-
| Nvidia A30
| 24GB
| 2
| <code>--gres=gpu:a30:[1..N]</code>
|-
| Nvidia A100
| 80GB
| 4
| <code>--gres=gpu:a100:[1..N]</code>
|-
|}
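For instance, a minimal sketch of a job script requesting a single A30 GPU; runtime, memory, and the program call are placeholders:

<pre>
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a30:1         # one Nvidia A30; for A100 nodes use e.g. gpu:a100:1
#SBATCH --ntasks=1
#SBATCH --time=04:00:00          # limit is 14-00:00:00
#SBATCH --mem-per-cpu=4gb        # placeholder memory request

# Replace with your actual GPU application.
srun ./my_gpu_program
</pre>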