BinAC2/SLURM Partitions

== Partitions ==


The bwForCluster BinAC 2 provides two partitions (i.e., queues) for job submission. Within a partition, job allocations are routed automatically to the most suitable compute node(s) for the requested resources (e.g., number of nodes and cores, memory, number of GPUs).

The <code>gpu</code> partition runs at most 8 jobs per user at the same time. In addition, a single user can use at most 4 A100 and 8 A30 GPUs simultaneously.
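To see how many of your own jobs currently count against these limits, and what the partition itself is configured to allow, the standard SLURM client tools can be used (a hedged sketch; the commands are generic SLURM, not BinAC 2 specific settings):

<pre>
# List your own running and pending jobs in the gpu partition
squeue -u $USER -p gpu

# Show the configured properties and limits of the gpu partition
scontrol show partition gpu
</pre>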
<!--
All partitions are operated in shared mode, that is, jobs from different users can be executed on the same node. However, one can get exclusive access to compute nodes by using the "--exclusive" option.

GPUs are requested with the option "--gres=gpu:<number-of-gpus>".
-->

{| class="wikitable"
{| class="wikitable"
|-
|-
! style="width:20%"| Partition
! style="width:10%"| Partition
! style="width:20%"| Node Access Policy
! style="width:10%"| Node Access Policy
! style="width:20%"| Node Types
! style="width:10%"| Node Types
! style="width:20%"| Default
! style="width:20%"| Default
! style="width:20%"| Limits
! style="width:20%"| Limits
|-
|-
| compute (default)
| eth
| shared
| shared
| cpu
| cpu
| ntasks=1, time=00:10:00, mem-per-cpu=2gb
| ntasks=1, time=00:10:00, mem-per-cpu=1gb
| nodes=2, time=00:30:00
| nodes=2, time=14-00:00:00
|-
|-
| ib
| gpu
| shared
| shared
| cpu
| gpu
| ntasks=1, time=00:30:00, mem-per-cpu=2gb
| ntasks=1, time=00:10:00, mem-per-cpu=1gb
| time=14-00:00:00</br>MaxJobsPerUser: 8</br>MaxTRESPerUser: <code>gres/gpu:a100=4,gres/gpu:a30=8</code>
| nodes=1, time=120:00:00
|-
|}
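A job is directed to a partition with the <code>--partition</code> option; without it, the default <code>compute</code> partition is used. A minimal sketch of a job script (job time, memory, and program name are placeholders, not recommendations):

<pre>
#!/bin/bash
#SBATCH --partition=gpu          # omit this line to use the default "compute" partition
#SBATCH --ntasks=1
#SBATCH --time=02:00:00          # must stay within the partition limit of 14-00:00:00
#SBATCH --mem-per-cpu=4gb        # overrides the 1gb default
#SBATCH --gres=gpu:a30:1         # GPU jobs also need a GPU request, see "GPU Jobs" below

./my_program                     # placeholder for your application
</pre>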

=== Parallel Jobs ===

To submit parallel jobs to the InfiniBand part of the cluster, i.e., to use fast inter-node communication, select the appropriate nodes via the <code>--constraint=ib</code> option in your job script. For less demanding parallel jobs, you may use the <code>--constraint=eth</code> option, which uses 100 Gb/s Ethernet instead of the low-latency 100 Gb/s InfiniBand.
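A minimal multi-node sketch, assuming an MPI application launched with <code>srun</code> (node and task counts, walltime, and the program name are placeholders):

<pre>
#!/bin/bash
#SBATCH --partition=compute      # default partition, shown here for clarity
#SBATCH --constraint=ib          # request InfiniBand nodes; use --constraint=eth for Ethernet nodes
#SBATCH --nodes=2                # the compute partition allows at most 2 nodes per job
#SBATCH --ntasks-per-node=8
#SBATCH --time=04:00:00

srun ./my_mpi_program            # placeholder for your MPI application
</pre>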

=== GPU Jobs ===

BinAC 2 provides different GPU models for computations. Please select the appropriate GPU type and the number of GPUs with the <code>--gres=gpu:<gpu_type>:N</code> option in your job script:

{| class="wikitable"
|-
! style="width:20%"| GPU
! style="width:20%"| GPU Memory
! style="width:20%"| # GPUs per Node [N]
! style="width:20%"| Submit Option
|-
| Nvidia A30
| 24GB
| 2
| <code>--gres=gpu:a30:N</code>
|-
| Nvidia A100
| 80GB
| 4
| <code>--gres=gpu:a100:N</code>
|-
|}
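For example, a hedged sketch of a job script requesting two A100 GPUs on a single node (walltime, CPU count, and program name are placeholders):

<pre>
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:2        # two A100 GPUs; at most 4 A100 per user at a time
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=12:00:00

./my_gpu_program                 # placeholder for your GPU application
</pre>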
