bwForCluster MLS&WISO Production Hardware

1 System Architecture

The production part of the bwForCluster MLS&WISO is a high-performance computing resource dedicated to research in Molecular Life Science, Economics, and Social Sciences. It features different types of compute nodes with a common software environment.


1.1 Basic Software Features

  • Operating system: CentOS
  • Queuing system: Slurm
  • Access to application software: Environment Modules

1.2 Compute Nodes

The production part of the bwForCluster MLS&WISO comprises several types of compute nodes. In addition to the "Standard" nodes, the "Best" and "Fat" nodes target compute jobs with special demands on processor performance and memory capacity, while the "Coprocessor" nodes are equipped with various Nvidia GPUs.

The "Standard" nodes are situated in Mannheim, while the "Best", "Fat", and "Coprocessor" nodes are hosted in Heidelberg. Both sites are connected via a 160 GBit/s Infiniband link.

Due to several extensions, the cluster also provides compute nodes based on the newer Intel Skylake and Intel Cascade Lake architectures.

1.2.1 CPU Nodes

Standard
  • Node Type: standard
  • Architecture: Haswell
  • Quantity: 476
  • Processors: 2 x Intel Xeon E5-2630v3
  • Processor Frequency: 2.4 GHz
  • Number of Cores: 16
  • Working Memory: 64 GB
  • Local Disk: 128 GB (SSD)
  • Interconnect: QDR

Best
  • Node Type: best
  • Architecture: Haswell
  • Quantity: 184
  • Processors: 2 x Intel Xeon E5-2640v3
  • Processor Frequency: 2.6 GHz
  • Number of Cores: 16
  • Working Memory: 128 GB
  • Local Disk: 128 GB (SSD)
  • Interconnect: FDR

Best (Skylake)
  • Node Type: best-sky
  • Architecture: Skylake
  • Quantity: 24
  • Processors: 2 x Intel Xeon Gold 6130
  • Processor Frequency: 2.1 GHz
  • Number of Cores: 32
  • Working Memory: 192 GB
  • Local Disk: 512 GB (SSD)
  • Interconnect: EDR

Best (Cascade Lake)
  • Node Type: best-cas
  • Architecture: Cascade Lake
  • Quantity: 8
  • Processors: 2 x Intel Xeon Gold 6230
  • Processor Frequency: 2.1 GHz
  • Number of Cores: 40
  • Working Memory: 384 GB
  • Local Disk: 480 GB (SSD)
  • Interconnect: EDR

Fat
  • Node Type: fat
  • Architecture: Haswell
  • Quantity: 8
  • Processors: 4 x Intel Xeon E5-4620v3
  • Processor Frequency: 2.0 GHz
  • Number of Cores: 40
  • Working Memory: 1536 GB
  • Local Disk: 9000 GB (SATA)
  • Interconnect: FDR

Fat (Ivy Bridge)
  • Node Type: fat-ivy
  • Architecture: Ivy Bridge
  • Quantity: 4
  • Processors: 4 x Intel Xeon E5-4620v2
  • Processor Frequency: 2.6 GHz
  • Number of Cores: 32
  • Working Memory: 1024 GB
  • Local Disk: 128 GB (SSD)
  • Interconnect: FDR
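
When estimating how many cores of a node a job can use, the memory per core is often the limiting factor. The following Python sketch is not part of the cluster software or official documentation; it merely recomputes the nominal memory-per-core ratio from the installed values listed above (the memory actually available to jobs is somewhat lower, since the operating system reserves part of it).

  # Nominal memory per core for each CPU node type, taken from the list above.
  # Note: these are installed totals; the batch system exposes slightly less.
  NODE_TYPES = {
      # node type: (cores per node, working memory in GB)
      "standard": (16, 64),
      "best":     (16, 128),
      "best-sky": (32, 192),
      "best-cas": (40, 384),
      "fat":      (40, 1536),
      "fat-ivy":  (32, 1024),
  }

  for name, (cores, mem_gb) in NODE_TYPES.items():
      print(f"{name:9s} {mem_gb / cores:5.1f} GB per core")

For example, a "standard" node offers about 4 GB per core, whereas a "fat" node offers roughly 38 GB per core.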

1.2.2 Coprocessor Nodes

GPU (Tesla K80)
  • Node Type: gpu
  • Architecture: Haswell
  • Quantity: 18
  • Processors: 2 x Intel Xeon E5-2630v3
  • Processor Frequency: 2.4 GHz
  • Number of Cores: 16
  • Working Memory: 64 GB
  • Local Disk: 128 GB (SSD)
  • Interconnect: FDR
  • Coprocessors: 1 x Nvidia Tesla K80
  • Number of GPUs: 2
  • GPU Type: K80

GPU (Skylake, Titan Xp)
  • Node Type: gpu-sky
  • Architecture: Skylake
  • Quantity: 1
  • Processors: 2 x Intel Xeon Gold 6130
  • Processor Frequency: 2.1 GHz
  • Number of Cores: 32
  • Working Memory: 192 GB
  • Local Disk: 512 GB (SSD)
  • Interconnect: EDR
  • Coprocessors: 4 x Nvidia Titan Xp (12 GB)
  • Number of GPUs: 4
  • GPU Type: TITAN

GPU (Skylake, Tesla V100)
  • Node Type: gpu-sky
  • Architecture: Skylake
  • Quantity: 1
  • Processors: 2 x Intel Xeon Gold 6130
  • Processor Frequency: 2.1 GHz
  • Number of Cores: 32
  • Working Memory: 384 GB
  • Local Disk: 512 GB (SSD)
  • Interconnect: EDR
  • Coprocessors: 4 x Nvidia Tesla V100 (16 GB)
  • Number of GPUs: 4
  • GPU Type: V100

GPU (Skylake, GeForce GTX 1080Ti)
  • Node Type: gpu-sky
  • Architecture: Skylake
  • Quantity: 2
  • Processors: 2 x Intel Xeon Gold 6130
  • Processor Frequency: 2.1 GHz
  • Number of Cores: 32
  • Working Memory: 384 GB
  • Local Disk: 512 GB (SSD)
  • Interconnect: EDR
  • Coprocessors: 4 x Nvidia GeForce GTX 1080Ti (11 GB)
  • Number of GPUs: 4
  • GPU Type: GTX1080

GPU (Skylake, GeForce RTX 2080Ti)
  • Node Type: gpu-sky
  • Architecture: Skylake
  • Quantity: 4
  • Processors: 2 x Intel Xeon Gold 6130
  • Processor Frequency: 2.1 GHz
  • Number of Cores: 32
  • Working Memory: 384 GB
  • Local Disk: 512 GB (SSD)
  • Interconnect: EDR
  • Coprocessors: 4 x Nvidia GeForce RTX 2080Ti (11 GB)
  • Number of GPUs: 4
  • GPU Type: RTX2080

GPU (Cascade Lake, Tesla V100)
  • Node Type: gpu-cas
  • Architecture: Cascade Lake
  • Quantity: 3
  • Processors: 2 x Intel Xeon Gold 6230
  • Processor Frequency: 2.1 GHz
  • Number of Cores: 40
  • Working Memory: 384 GB
  • Local Disk: 480 GB (SSD)
  • Interconnect: EDR
  • Coprocessors: 4 x Nvidia Tesla V100 (16 GB)
  • Number of GPUs: 4
  • GPU Type: V100
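
Note that the Tesla K80 is a dual-GPU board, which is why the "gpu" nodes list one K80 card but two GPU devices. Inside a running job it can be useful to verify which of these GPUs are actually visible to the allocation. The snippet below is a minimal sketch, not an official tool of the cluster; it assumes that the NVIDIA driver utility nvidia-smi is on the PATH of the coprocessor node.

  # Minimal sketch: list the GPUs visible to the current job by querying
  # nvidia-smi (assumed to be installed and on the PATH of the GPU node).
  import subprocess

  result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True, check=True)
  gpus = [line for line in result.stdout.splitlines() if line.startswith("GPU ")]
  print(f"{len(gpus)} GPU(s) visible to this job:")
  for line in gpus:
      print(" ", line)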

2 Storage Architecture

There are two separate storage systems, one for $HOME and one for workspaces. Both use the parallel file system BeeGFS. In addition, each compute node provides high-speed temporary storage on its node-local solid-state disk, accessible via the $TMPDIR environment variable. For details and best practices see File Systems.

$HOME
  • Visibility: global
  • Lifetime: permanent
  • Capacity: 36 TB
  • Quotas: 100 GB
  • Backup: no

Workspaces
  • Visibility: global
  • Lifetime: workspace lifetime
  • Capacity: 384 TB
  • Quotas: none
  • Backup: no

$TMPDIR
  • Visibility: node local
  • Lifetime: batch job walltime
  • Capacity: 128 GB per node (9 TB per fat node)
  • Quotas: none
  • Backup: no
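
Because $TMPDIR points to the node-local SSD, it is the natural place for I/O-intensive temporary files during a job. The following Python sketch only illustrates the idea, under the assumption that the batch system sets $TMPDIR on the compute node; it is not prescribed usage.

  # Minimal sketch: create a scratch directory on the node-local disk via $TMPDIR.
  # Assumption: $TMPDIR is set by the batch system on the compute nodes; the
  # fallback to the system default only matters when testing the snippet elsewhere.
  import os
  import tempfile

  scratch_root = os.environ.get("TMPDIR", tempfile.gettempdir())
  workdir = tempfile.mkdtemp(prefix="job_scratch_", dir=scratch_root)
  print("node-local scratch directory:", workdir)

  # Write temporary files into 'workdir' during the job. Anything worth keeping
  # must be copied to a workspace or $HOME before the job's walltime ends,
  # because the node-local storage only lives as long as the job.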

3 Network

The components of the cluster at both sites are connected via two independent networks: a management network (Ethernet and IPMI) and an Infiniband fabric for MPI communication and storage access. The Infiniband interconnect in Mannheim is Quad Data Rate (QDR) and fully non-blocking across all "Standard" nodes. The Infiniband fabric in Heidelberg is Fourteen Data Rate (FDR) and likewise fully non-blocking across all "Best", "Fat", and "Coprocessor" nodes.

The cluster sites in Mannheim and Heidelberg are 28 km apart and are linked via optical fibre and Mellanox MetroX TX6240 long-haul appliances, which transparently aggregate four 40 GBit/s links into a single 160 GBit/s connection. The latency is only slightly above the hard limit set by the speed of light, effectively merging the two parts into a single high-performance computing resource.
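
The speed-of-light limit mentioned above can be quantified with a short back-of-the-envelope calculation. The refractive index of the fibre (about 1.47, i.e. light in glass travels at roughly two thirds of its vacuum speed) is a typical value assumed here, as is treating the 28 km site distance as the cable length; the actual cable route and the MetroX appliances add further delay on top of this lower bound.

  # Back-of-the-envelope lower bound for the one-way propagation delay between
  # the Mannheim and Heidelberg sites (assumptions: 28 km cable length, typical
  # single-mode fibre with refractive index ~1.47).
  C_VACUUM_KM_PER_S = 299_792.458   # speed of light in vacuum
  REFRACTIVE_INDEX = 1.47           # assumed, typical for single-mode fibre
  DISTANCE_KM = 28

  one_way_delay_s = DISTANCE_KM * REFRACTIVE_INDEX / C_VACUUM_KM_PER_S
  print(f"one-way propagation delay >= {one_way_delay_s * 1e6:.0f} microseconds")

This bound alone amounts to roughly 140 microseconds one way, about two orders of magnitude above typical Infiniband latencies within a single fabric.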