BwForCluster MLS&WISO Production Hardware
1 System Architecture
The production part of bwForCluster MLS&WISO is a high-performance computing cluster dedicated to research in Molecular Life Science, Economics, and the Social Sciences. It features different types of compute nodes with a common software environment.
1.1 Basic Software Features
- Operating system: CentOS
- Queuing system: Slurm
- Access to application software: Environment Modules
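Application software is made available through Environment Modules and compute jobs are submitted to Slurm. The following is a minimal sketch of this workflow; the module name `gromacs` and the resource requests are illustrative examples, not site-specific defaults:

```bash
#!/bin/bash
#SBATCH --ntasks=1          # single task
#SBATCH --time=00:30:00     # walltime limit
#SBATCH --mem=4gb           # memory request

module avail                # list the software provided via Environment Modules
module load gromacs         # illustrative module name
gmx --version               # run the loaded application
```

Saved as `job.sh`, the script is submitted with `sbatch job.sh`; pending and running jobs can be inspected with `squeue`.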
1.2 Compute Nodes
The production part of the bwForCluster MLS&WISO comprises several types of compute nodes. In addition to "Standard" nodes, "Best" and "Fat" nodes target compute jobs with special demands on processor power and memory capacity, while "Coprocessor" nodes feature various Nvidia GPUs.
The "Standard" nodes are situated in Mannheim, while the "Best", "Fat", and "Coprocessor" nodes are hosted in Heidelberg. Both sites are connected via a 160 GBit/s Infiniband link.
Due to several extensions, the cluster also provides compute nodes based on the newer Intel Skylake and Cascade Lake architectures.
1.2.1 CPU Nodes
| | Standard | Best | Best (Skylake) | Best (Cascade Lake) | Fat |
|---|---|---|---|---|---|
| Node Type | standard | best (out of service) | best-sky | best-cas | fat (out of service) |
| Architecture | Haswell | Haswell | Skylake | Cascade Lake | Haswell |
| Processors | 2 x Intel Xeon E5-2630v3 | 2 x Intel Xeon E5-2640v3 | 2 x Intel Xeon Gold 6130 | 2 x Intel Xeon Gold 6230 | 4 x Intel Xeon E5-4620v3 |
| Processor Frequency (GHz) | 2.4 | 2.6 | 2.1 | 2.1 | 2.0 |
| Number of Cores | 16 | 16 | 32 | 40 | 40 |
| Working Memory (GB) | 64 | 128 | 192 | 384 | 1536 |
| Local Disk (GB) | 128 (SSD) | 128 (SSD) | 512 (SSD) | 480 (SSD) | 9000 (SATA) |
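The names in the "Node Type" row are typically used to select a node class at job submission; whether they are exposed as Slurm partitions or as node features is part of the site configuration and not documented here. A hedged sketch requesting a Cascade Lake "Best" node, assuming the node type is selectable as a feature via `--constraint`:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40    # matches the 40 cores of a best-cas node
#SBATCH --constraint=best-cas   # assumed feature name, taken from the "Node Type" row above
#SBATCH --time=00:10:00

srun hostname                   # report which node was allocated
```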
1.2.2 Coprocessor Nodes
| | GPU | GPU (Skylake) | GPU (Skylake) | GPU (Skylake) | GPU (Skylake) | GPU (Cascade Lake) | GPU (Cascade Lake) | GPU (Cascade Lake) | GPU (Cascade Lake) |
|---|---|---|---|---|---|---|---|---|---|
| Node Type | gpu (out of service) | gpu-sky | gpu-sky | gpu-sky | gpu-sky | gpu-cas | gpu-cas | gpu-cas | gpu-cas |
| Processors | 2 x Intel Xeon E5-2630v3 | 2 x Intel Xeon Gold 6130 | 2 x Intel Xeon Gold 6130 | 2 x Intel Xeon Gold 6130 | 2 x Intel Xeon Gold 6130 | 2 x Intel Xeon Gold 6230 | 2 x Intel Xeon Gold 6230 | 2 x Intel Xeon Gold 6240R | 2 x Intel Xeon Gold 6240R |
| Processor Frequency (GHz) | 2.4 | 2.1 | 2.1 | 2.1 | 2.1 | 2.1 | 2.1 | 2.4 | 2.4 |
| Number of Cores | 16 | 32 | 32 | 32 | 32 | 40 | 40 | 48 | 48 |
| Working Memory (GB) | 64 | 192 | 384 | 384 | 384 | 384 | 384 | 384 | 384 |
| Local Disk (GB) | 128 (SSD) | 512 (SSD) | 512 (SSD) | 512 (SSD) | 512 (SSD) | 480 (SSD) | 480 (SSD) | 480 (SSD) | 480 (SSD) |
| Coprocessors | 1 x Nvidia Tesla K80 | 4 x Nvidia Titan Xp (12 GB) | 4 x Nvidia Tesla V100 (16 GB) | 4 x Nvidia GeForce GTX 1080Ti (11 GB) | 4 x Nvidia GeForce RTX 2080Ti (11 GB) | 4 x Nvidia Tesla V100 (16 GB) | 4 x Nvidia Tesla V100S (32 GB) | 4 x Nvidia GeForce RTX 3090 (24 GB) | 4 x Nvidia Quadro RTX 8000 (48 GB) |
| Number of GPUs | 2 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
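GPUs on the "Coprocessor" nodes are requested through Slurm's generic resources (GRES). A minimal sketch, assuming a `gpu` GRES, a node-type feature named after the "Node Type" row, and CUDA provided via a module (all three names are assumptions, not documented defaults):

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:4            # request all four GPUs of a gpu-sky/gpu-cas node
#SBATCH --constraint=gpu-cas    # assumed feature name from the "Node Type" row
#SBATCH --time=00:10:00

module load cuda                # illustrative module name
nvidia-smi                      # show the GPUs visible to the job
```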
2 Storage Architecture
There is one storage system for $HOME and for workspaces providing a large parallel file system based on IBM Spectrum Scale. Additionally, each compute node provides high-speed temporary storage on the node-local solid state disk via the $TMPDIR environment variable. For details and best practices see File Systems.
| | $HOME | Workspaces | $TMPDIR |
|---|---|---|---|
| Visibility | global | global | node local |
| Lifetime | permanent | workspace lifetime | batch job walltime |
| Capacity | 5.2 PB | 5.2 PB | 128 GB per node (9 TB per fat node) |
| Quotas | 200 GB | 10 TB | none |
- global: all nodes access the same file system.
- node local: each node has its own file system.
- permanent: files are stored permanently.
- workspace lifetime: files are removed at end of workspace lifetime.
- batch job walltime: files are removed at end of the batch job.
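Because files under $TMPDIR are removed when the batch job ends, a common pattern for I/O-heavy jobs is to stage input data to the node-local SSD, work on it there, and copy the results back before the job finishes. A minimal sketch; the paths are illustrative and `md5sum` stands in for the actual computation:

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# stage input data from the global file system to the fast node-local SSD
cp "$HOME/project/input.dat" "$TMPDIR/"

# work on the node-local copy (md5sum is a stand-in for the real computation)
cd "$TMPDIR"
md5sum input.dat > output.dat

# copy results back before the job ends; $TMPDIR is cleaned up afterwards
cp output.dat "$HOME/project/"
```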
The components of the cluster at both sites are connected via two independent networks: a management network (Ethernet and IPMI) and an Infiniband fabric for MPI communication and storage access. The Infiniband interconnect in Mannheim is Quad Data Rate (QDR) and fully non-blocking across all "Standard" nodes. The Infiniband fabric in Heidelberg is Fourteen Data Rate (FDR) and likewise fully non-blocking across all "Best", "Fat", and "Coprocessor" nodes.
The cluster sites in Mannheim and Heidelberg, 28 km apart, are linked via optical fibre and Mellanox MetroX TX6240 LongHaul appliances, which transparently aggregate four 40 GBit/s links into a single 160 GBit/s connection. The latency is only slightly above the hard limit set by the speed of light, effectively merging the two parts into a single high-performance computing resource.