Helix/Hardware

1 System Architecture

The bwForCluster Helix is a high-performance supercomputer with a high-speed interconnect. It is composed of login nodes, compute nodes, and parallel storage systems connected by fast data networks. It is connected to the Internet via Baden-Württemberg's extended LAN BelWü.
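
Once logged in, a quick way to see how this layout is exposed to users is to query the batch system described in the next section. The following is a minimal sketch assuming standard Slurm client tools; the partitions and nodes it prints depend on the cluster's actual Slurm configuration, and the node name in the second command is a made-up placeholder.

<pre>
# Summarize the available partitions and how many nodes each contains.
sinfo --summarize

# Show the hardware details Slurm records for one compute node.
# "helix0001" is a hypothetical node name used only for illustration.
scontrol show node helix0001
</pre>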

2 Operating System and Software

  • Operating system: RedHat
  • Queuing system: Slurm
  • Access to application software: Environment Modules (see the job script sketch below)
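
As a hedged illustration of how these pieces fit together, a batch job on a Slurm cluster with Environment Modules typically looks like the sketch below. The module name, resource values, and application command are placeholders, not settings documented on this page.

<pre>
#!/bin/bash
#SBATCH --job-name=example      # name shown in the queue
#SBATCH --ntasks=1              # number of tasks (processes)
#SBATCH --cpus-per-task=4       # CPU cores per task
#SBATCH --mem=8gb               # memory for the job
#SBATCH --time=01:00:00         # wall-clock time limit

# Load the required application software via Environment Modules.
# "foo/1.0" is a placeholder module name, not one provided by this page.
module load foo/1.0

# Start the application on the allocated resources.
srun ./my_application
</pre>

Such a script is submitted with <code>sbatch job.sh</code>, and its state can be checked with <code>squeue</code>.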

3 Compute Nodes

3.1 AMD Nodes

{| class="wikitable"
|-
! style="width:8%"|
! style="width:10%"| CPU
! style="width:10%"| FAT
! style="width:10%"| GPU4
! style="width:10%"| GPU4
! style="width:10%"| GPU8
|-
!scope="column"| Node Type
| standard
| <span style="color:#ff0000"> best (out of service) </span>
| best-sky
| best-cas
| <span style="color:#ff0000"> fat (out of service) </span>
|-
!scope="column"| Architecture
| Haswell
| Haswell
| Sky Lake
| Cascade Lake
| Haswell
|-
!scope="column"| Quantity
| 476
| 164
| 24
| 5
| 4
|-
!scope="column" | Processors
| 2 x Intel Xeon E5-2630v3
| 2 x Intel Xeon E5-2640v3
| 2 x Intel Xeon Gold 6130
| 2 x Intel Xeon Gold 6230
| 4 x Intel Xeon E5-4620v3
|-
!scope="column" | Processor Frequency (GHz)
| 2.4
| 2.6
| 2.1
| 2.1
| 2.0
|-
!scope="column" | Number of Cores
| 16
| 16
| 32
| 40
| 40
|-
!scope="column" | Working Memory (GB)
| 64
| 128
| 192
| 384
| 1536
|-
!scope="column" | Local Disk (GB)
| 128 (SSD)
| 128 (SSD)
| 512 (SSD)
| 480 (SSD)
| 9000 (SATA)
|-
!scope="column" | Interconnect
| QDR
| FDR
| EDR
| EDR
| FDR
|}
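
For orientation, the table values map directly onto Slurm resource requests. The sketch below asks for one full node of the "standard" type (16 cores, 64 GB of working memory); the partition name used to reach that node type is an assumption, since this page does not document the cluster's partition layout.

<pre>
# Request one full "standard" node: 16 cores and (almost) all of its 64 GB.
# Slightly less than the physical 64 GB is requested because part of the
# memory is reserved for the operating system.
# "--partition=single" is a placeholder; the real partition names may differ.
sbatch --partition=single --nodes=1 --ntasks-per-node=16 --mem=60gb job.sh
</pre>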

3.2 Intel Nodes

4 Storage Architecture

A single storage system provides a large parallel file system based on IBM Spectrum Scale, which serves $HOME, workspaces, and temporary job data.
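
As a usage sketch: on bwHPC clusters, workspaces on such a parallel file system are typically managed with the HLRS workspace tools. This page does not state how workspaces are managed on Helix, so the commands below are an assumption about the tooling, not documented behaviour.

<pre>
# Allocate a workspace named "mydata" for 30 days (assumes the HLRS
# workspace tools are available; not confirmed by this page).
ws_allocate mydata 30

# List existing workspaces and their expiration dates.
ws_list

# Release the workspace once the data is no longer needed.
ws_release mydata
</pre>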

5 Network

The components of the cluster are connected via two independent networks: a management network (Ethernet and IPMI) and an InfiniBand fabric for MPI communication and storage access. The InfiniBand backbone is a fully non-blocking fabric with a data rate of 200 Gbit/s. The compute nodes are attached at different data rates (see the Interconnect row of the node table), according to the requirements of the configuration.
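
To check which InfiniBand rate a given node actually provides, the standard InfiniBand diagnostics can be run on that node, assuming they are installed (this page does not say whether they are).

<pre>
# Show the state of the local InfiniBand port(s); the "Rate:" field reports
# the link speed in Gbit/s (e.g. 40 for QDR, 56 for FDR, 100 for EDR).
ibstat
</pre>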