BwUniCluster2.0/Hardware and Architecture

1 Architecture of bwUniCluster 2.0

The bwUniCluster 2.0 is a parallel computer with distributed memory. Each node of the system consists of at least two Intel Xeon processors, local memory, disks, network adapters and, optionally, accelerators (NVIDIA Tesla V100). All nodes are connected by a fast InfiniBand interconnect (HDR/HDR100; the "HPC Broadwell" partition uses FDR). In addition, the parallel file system Lustre, which is connected by coupling the InfiniBand fabric of the file servers to the InfiniBand switch of the compute cluster, is attached to bwUniCluster 2.0 to provide a fast and scalable parallel file system.
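
Since the memory is distributed, a parallel application that spans several nodes exchanges data explicitly by message passing over the interconnect. The following is a minimal sketch of this model, assuming an MPI library and the mpi4py Python bindings are available (e.g. through the software modules); it prints one line per MPI process and broadcasts a value from rank 0:

# Minimal distributed-memory "hello world" using MPI via mpi4py.
# Assumes an MPI library and mpi4py are available (e.g. through the
# software modules); launch e.g. with: mpirun -n 4 python mpi_hello.py
from mpi4py import MPI
comm = MPI.COMM_WORLD              # communicator spanning all started processes
rank = comm.Get_rank()             # unique ID of this process
size = comm.Get_size()             # total number of processes
node = MPI.Get_processor_name()    # hostname of the node this rank runs on
print(f"Rank {rank} of {size} running on {node}")
# Data is exchanged explicitly: rank 0 broadcasts a value to all other ranks.
data = comm.bcast(42 if rank == 0 else None, root=0)
assert data == 42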

The operating system on each node is Red Hat Enterprise Linux (RHEL) 7.x. A number of additional software packages, such as the SLURM workload manager, have been installed on top. Some of these components are of special interest to end users and are briefly discussed in this document; others, which are mainly relevant to system administrators, are not covered here.

The individual nodes of the system act in different roles. According to the services they supply, the nodes are separated into disjoint groups. From an end user's point of view, the different groups of nodes are login nodes, compute nodes, file server nodes and administrative server nodes.

Login Nodes

The login nodes are the only nodes that are directly accessible to end users. They are used for interactive login, file management, program development and interactive pre- and post-processing. Several nodes are dedicated to this service, but they are all reachable under a single address: a DNS round-robin alias distributes the login sessions across the individual login nodes.
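
A minimal sketch of how such a round-robin alias behaves, assuming Python is available and using a placeholder hostname rather than the actual login alias:

# List the addresses behind a DNS round-robin login alias.
# The hostname below is a placeholder, not the actual login alias of
# bwUniCluster 2.0; see the access documentation for the real name.
import socket
LOGIN_ALIAS = "bwunicluster.example.org"   # placeholder alias
# getaddrinfo returns one entry per address record; a round-robin alias
# typically resolves to several addresses, one per login node.
addresses = sorted({info[4][0] for info in socket.getaddrinfo(LOGIN_ALIAS, 22)})
for addr in addresses:
    print(addr)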

Compute Nodes

The majority of the nodes are compute nodes, which are managed by a batch system. Users submit their jobs to the SLURM batch system, and a job is executed when the required resources become available (depending on its fair-share priority).
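
A minimal sketch of this workflow, assuming access to sbatch on a login node; the partition name, time limit and memory request shown here are illustrative assumptions, not the actual limits of bwUniCluster 2.0:

# Sketch: submit a batch job to SLURM by piping a job script to sbatch.
# The partition name, time limit and memory request are illustrative
# assumptions; check the batch system documentation for the real values.
import subprocess
job_script = """#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --partition=single
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --mem=2000mb
echo "Running on $(hostname)"
"""
# sbatch accepts the job script on standard input and prints the job ID.
result = subprocess.run(["sbatch"], input=job_script, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())    # e.g. "Submitted batch job 1234567"

Equivalently, the job script can be stored in a file and submitted with sbatch <filename>.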

File Server Nodes

The parallel file system Lustre runs on dedicated file server nodes; it is connected to the cluster by coupling the InfiniBand fabric of the file servers to the independent InfiniBand switch of the compute cluster. In addition to this shared file space there is also local storage on the disks of each node (for details see the chapter "File Systems").
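
A minimal sketch for inspecting how a file is distributed across the Lustre file servers, assuming the standard Lustre client tool lfs is available on the node and using a placeholder path:

# Sketch: inspect how a file on Lustre is striped across the file servers'
# object storage targets, using the standard Lustre client tool "lfs".
# The path is a placeholder; use a file in your own home directory or workspace.
import subprocess
path = "/path/to/a/file/on/lustre"   # placeholder path
# "lfs getstripe" reports stripe count, stripe size and the OSTs used.
subprocess.run(["lfs", "getstripe", path], check=True)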

Administrative Server Nodes

Other nodes deliver additional services such as resource management, external network connectivity and administration. These nodes can be accessed directly by system administrators only.

2 Components of bwUniCluster 2.0

{| class="wikitable"
|-
!
! Compute nodes "Thin"
! Compute nodes "HPC"
! Compute nodes "HPC Broadwell"
! Compute nodes "Fat"
! GPU x4
! GPU x8
! Login
|-
!scope="row"| Number of nodes
| 100 || 360 || 352 || 6 || 14 || 24 || 4 + 2 (Broadwell)
|-
!scope="row"| Processors
| Intel Xeon Gold 6230 || Intel Xeon Gold 6230 || Intel Xeon E5-2660 v4 || Intel Xeon Gold 6230 || Intel Xeon Gold 6230 || Intel Xeon Gold 6248 ||
|-
!scope="row"| Number of sockets
| 2 || 2 || 2 || 4 || 2 || 2 || 2
|-
!scope="row"| Processor frequency (GHz)
| 2.1 || 2.1 || 2.0 || 2.1 || 2.1 || 2.1 ||
|-
!scope="row"| Total number of cores
| 40 || 40 || 28 || 80 || 40 || 40 || 40 / 20 (Broadwell)
|-
!scope="row"| Main memory
| 96 GB || 96 GB || 128 GB || 3 TB || 384 GB || 768 GB || 384 GB / 128 GB (Broadwell)
|-
!scope="row"| Local disk
| 960 GB SATA || 960 GB SATA || 480 GB SATA || 4.8 TB NVMe || 3.2 TB NVMe || 6.4 TB NVMe ||
|-
!scope="row"| Accelerators
| - || - || - || - || 4x NVIDIA Tesla V100 || 8x NVIDIA Tesla V100 ||
|-
!scope="row"| Interconnect
| IB HDR100 (blocking) || IB HDR100 || IB FDR || IB HDR || IB HDR || IB HDR || IB HDR100 (blocking)
|}

3 File Systems

New Lustre parallel file systems with a total capacity of 5 PB and aggregate throughput of 72 GB/s were procured with bwUniCluster 2.0.