DACHS/Hardware and Architecture

Architecture of DACHS

The Datenanalyse Cluster der Hochschulen (DACHS) is a parallel computer with distributed memory, connected via InfiniBand and Ethernet. The compute nodes contain at least two AMD processors, at least 384 GB of local memory, 2 TB of local NVMe-based disk storage, and accelerators as shown in the table below. BeeGFS provides a fast and scalable filesystem via InfiniBand to all login and compute nodes.
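
Whether a node is attached to the InfiniBand fabric can be checked with the standard InfiniBand diagnostic tools (a minimal sketch, assuming the usual rdma-core/infiniband-diags packages are installed):

 # List the InfiniBand devices visible on the node
 ibv_devinfo
 
 # Show port state and link rate of the HDR100 adapter
 ibstat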

The operating system is Rocky Linux 9.4 (which is based on RHEL). With regard to software, setup and general usage, the system is kept in line with the bwHPC clusters, in particular bwUniCluster.
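
A quick check on a login node confirms the release and the processor layout described above; only standard Linux tools are used:

 # Print the operating system release (expected: Rocky Linux 9.4)
 cat /etc/os-release
 
 # Show CPU model, socket and core counts (e.g. two AMD EPYC 9254 on a login node)
 lscpu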


Components of DACHS

                         Compute nodes "L40S"    Compute nodes "H100"    Compute nodes "AMD_APU"    Login
Number of nodes          45                      1                       1                          2
Processors               AMD EPYC 9254           AMD EPYC 9454           AMD MI300A                 AMD EPYC 9254
Number of sockets        2                       2                       4                          2
Processor frequency      2.9 GHz                 2.75 GHz                2.1 GHz                    2.9 GHz
Total number of cores    48                      96                      96                         48
Main memory              384 GB                  1536 GB                 512 GB                     384 GB
Local SSD                1.92 TB NVMe            1.92 TB NVMe            1.92 TB NVMe               1.92 TB NVMe
Accelerators             1x NVIDIA L40S          8x NVIDIA H100          4x AMD MI300A              -
Accelerator memory       48 GB                   8x 80 GB                4x 128 GB                  -
Interconnect             IB HDR100               IB HDR100               IB HDR100                  IB HDR100

Table 1: Properties of the nodes
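
Because the setup follows bwUniCluster, jobs are expected to be submitted through the Slurm batch system; the following is only a sketch of how a single-GPU job on an L40S node might be requested. The partition name and resource limits are assumptions and need to be checked against the actual DACHS configuration.

 #!/bin/bash
 # Sketch of a Slurm job script; the partition name "gpu" is a placeholder.
 #SBATCH --partition=gpu
 #SBATCH --nodes=1
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=12
 #SBATCH --gres=gpu:1            # request one NVIDIA L40S accelerator
 #SBATCH --time=01:00:00
 
 # Show the GPU that was assigned to the job
 nvidia-smi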

Storage Architecture

The system features a 700 TB BeeGFS filesystem that is available on the login and compute nodes.
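
The BeeGFS mount and its remaining capacity can be checked from any login or compute node by filtering on the filesystem type, so no specific mount point has to be known:

 # List the BeeGFS mounts and show their size and free space
 mount -t beegfs
 df -h -t beegfs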