BinAC2/SLURM Partitions

== Partitions ==

The bwForCluster BinAC 2 provides two partitions (i.e., queues) for job submission. Within a partition, job allocations are routed automatically to the most suitable compute node(s) for the requested resources (e.g., number of nodes and cores, memory, number of GPUs).

{| class="wikitable"
|-
! Partition !! Node Access Policy !! Node Types !! Default !! Limits
|-
| compute (default) || shared || cpu || ntasks=1, time=00:10:00, mem-per-cpu=1gb || nodes=2, time=14-00:00:00
|-
| gpu || shared || gpu || ntasks=1, time=00:10:00, mem-per-cpu=1gb || nodes=1, time=14-00:00:00
|}
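For illustration, a minimal batch script for the compute partition might look as follows. The job name, resource values, and program call are placeholders, not recommendations:

<pre>
#!/bin/bash
#SBATCH --job-name=example         # placeholder job name
#SBATCH --partition=compute        # default partition; this line may be omitted
#SBATCH --ntasks=1                 # one task (matches the partition default)
#SBATCH --time=01:00:00            # wall time; partition limit is 14-00:00:00
#SBATCH --mem-per-cpu=4gb          # raises the default of mem-per-cpu=1gb

./my_program                       # placeholder for your application
</pre>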


=== Parallel Jobs ===

In order to submit parallel jobs to the InfiniBand part of the cluster, i.e., for fast inter-node communication, please select the appropriate nodes via the <code>--constraint=ib</code> option in your job script. For less demanding parallel jobs, you may try the <code>--constraint=eth</code> option, which uses 100 Gb/s Ethernet instead of the low-latency 100 Gb/s InfiniBand.
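A minimal sketch of a multi-node MPI job requesting InfiniBand nodes; the task counts, wall time, and program name are illustrative assumptions:

<pre>
#!/bin/bash
#SBATCH --partition=compute
#SBATCH --nodes=2                  # multi-node job; partition limit is nodes=2
#SBATCH --ntasks-per-node=4        # placeholder task count
#SBATCH --time=02:00:00
#SBATCH --constraint=ib            # request InfiniBand nodes for fast inter-node communication

srun ./my_mpi_program              # placeholder for your MPI application
</pre>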

=== GPU Jobs ===

BinAC 2 provides different GPU models for computations. Please select the appropriate GPU type and the number of GPUs with the <code>--gres=gpu:aXX:N</code> option in your job script.

{| class="wikitable"
|-
! GPU !! GPU Memory !! # GPUs per Node [N] !! Submit Option
|-
| Nvidia A30 || 24GB || 2 || <code>--gres=gpu:a30:N</code>
|-
| Nvidia A100 || 80GB || 4 || <code>--gres=gpu:a100:N</code>
|}
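As a sketch, a GPU job requesting a single A100 could be submitted like this; wall time, memory, and the program call are placeholder values:

<pre>
#!/bin/bash
#SBATCH --partition=gpu            # GPU partition (see table above)
#SBATCH --ntasks=1
#SBATCH --time=04:00:00
#SBATCH --mem-per-cpu=8gb
#SBATCH --gres=gpu:a100:1          # one Nvidia A100; use gpu:a30:N on the A30 nodes

./my_gpu_program                   # placeholder for your GPU application
</pre>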