Helix/Hardware


== System Architecture ==

The bwForCluster Helix is a high performance supercomputer with a high-speed interconnect. It is composed of login nodes, compute nodes, and parallel storage systems connected by fast data networks. It is connected to the Internet via Baden-Württemberg's extended LAN BelWü.

== Operating System and Software ==

* Operating system: RedHat
* Queuing system: Slurm
* Access to application software: Environment Modules
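
The queuing system and the node inventory can be inspected directly on a login node. The following is a minimal sketch that only assumes the standard Slurm client commands (sinfo, scontrol) are in the PATH; partition and node names are site-specific and are not taken from this page.

<pre>
#!/usr/bin/env python3
# Minimal sketch: query the Slurm queuing system from a login node.
# Only the standard Slurm commands sinfo and scontrol are assumed to exist.
import subprocess

def run(cmd):
    """Run a command and return its standard output as text."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Summary of partitions, node counts and node states.
print(run(["sinfo", "--summarize"]))

# Detailed view of all nodes (CPUs, real memory, GRES such as GPUs).
print(run(["scontrol", "show", "node"]))
</pre>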

== Compute Nodes ==

=== AMD Nodes ===

Common features of all AMD nodes:

* Processors: 2 x AMD Milan EPYC 7513
* Processor Frequency: 2.6 GHz
* Number of Cores per Node: 64
* Local disk space: None

{| class="wikitable"
!
! colspan="2" | CPU Nodes
! colspan="3" | GPU Nodes
|-
!scope="column" | Node Type
| cpu
| fat
| gpu4
| gpu4
| gpu8
|-
!scope="column" | Quantity
| xxx
| xxx
| xxx
| xxx
| xxx
|-
!scope="column" | Working Memory (GB)
| 256
| 2048
| 256
| 256
| 2048
|-
!scope="column" | Interconnect
| 1x HDR100
| 1x HDR100
| 2x HDR100
| 2x HDR200
| 4x HDR200
|}
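
The per-node figures above can be double-checked from inside a job with standard Linux interfaces. The sketch below is not specific to Helix; note that os.cpu_count() reports logical CPUs, which is twice the number of physical cores when simultaneous multithreading is enabled.

<pre>
#!/usr/bin/env python3
# Minimal sketch: report the logical CPU count and total memory of the node
# a job is running on. Uses only the Python standard library and /proc.
import os

logical_cpus = os.cpu_count()  # logical CPUs (2x physical cores if SMT is on)

mem_total_gb = 0.0
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemTotal:"):
            mem_total_gb = int(line.split()[1]) / 1024 / 1024  # kB -> GB
            break

print(f"logical CPUs: {logical_cpus}")
print(f"total memory: {mem_total_gb:.0f} GB")
</pre>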

=== Intel Nodes ===

== Storage Architecture ==

There is one storage system providing a large parallel file system based on IBM Spectrum Scale for $HOME, for workspaces, and for temporary job data.
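
For job data, a workspace on the parallel file system is the intended location rather than $HOME. The sketch below assumes the common HPC workspace tools (ws_allocate, ws_list) are installed; the tool names and the 30-day lifetime are assumptions, not taken from this page.

<pre>
#!/usr/bin/env python3
# Minimal sketch: allocate a workspace for job data instead of writing to $HOME.
# Assumes the ws_allocate/ws_list workspace tools are installed on the cluster.
import subprocess

# Allocate (or re-use) a workspace named "myproject" for 30 days; ws_allocate
# prints the workspace path on standard output.
path = subprocess.run(["ws_allocate", "myproject", "30"],
                      check=True, capture_output=True, text=True).stdout.strip()
print("workspace path:", path)

# List existing workspaces and their remaining lifetimes.
subprocess.run(["ws_list"], check=True)
</pre>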

== Network ==

The components of the cluster are connected via two independent networks: a management network (Ethernet and IPMI) and an InfiniBand fabric for MPI communication and storage access. The InfiniBand backbone is a fully non-blocking fabric with a data speed of 200 Gbit/s. The compute nodes are connected with different data speeds according to the requirements of the configuration.
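
A minimal way to exercise the MPI fabric described above is a hello-world that prints which node each rank runs on. The sketch assumes an MPI installation and the mpi4py package are available (for example via an environment module); neither is stated on this page.

<pre>
# Minimal sketch: print the rank, total number of ranks, and host name of each
# MPI process. Launch with, for example: mpirun -n 4 python3 hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"rank {comm.Get_rank()} of {comm.Get_size()} on {MPI.Get_processor_name()}")
</pre>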