Helix/Hardware

System Architecture

The bwForCluster Helix is a high-performance supercomputer with a high-speed interconnect. It is composed of login nodes, compute nodes, and parallel storage systems connected by fast data networks. It is connected to the Internet via BelWü, the extended LAN of the state of Baden-Württemberg.

Operating System and Software

  • Operating system: Red Hat
  • Queuing system: Slurm
  • Access to application software: Environment Modules (see the example batch script below)
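
As a minimal sketch of how these components fit together, the following batch script loads an MPI implementation via Environment Modules and launches a job through Slurm. The module name, resource values, and program name are placeholders, not Helix-specific recommendations:

  #!/bin/bash
  #SBATCH --nodes=2               # number of compute nodes
  #SBATCH --ntasks-per-node=4     # MPI ranks per node
  #SBATCH --time=00:10:00         # wall-clock limit (hh:mm:ss)
  #SBATCH --job-name=example

  # Load an MPI implementation via Environment Modules
  # (placeholder name; check 'module avail' for what is installed).
  module load mpi/openmpi

  # Launch the MPI program on the allocated nodes; inter-node
  # traffic runs over the InfiniBand fabric described below.
  srun ./my_mpi_program

The script is submitted with "sbatch jobscript.sh" and scheduled by Slurm onto free compute nodes.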

Compute Nodes

AMD Nodes

Intel Nodes

Storage Architecture

There is one storage system providing a large parallel file system based on IBM Spectrum Scale. It hosts the $HOME directories, workspaces, and temporary job data (see the workspace example below).
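
On other bwForCluster systems, workspaces are managed with the workspace command-line tools; assuming these are also available on Helix, a workspace for job data could be allocated as follows (name and duration are placeholders):

  # Allocate a workspace named "run42" for 30 days; the command
  # prints the path of the new directory on the parallel file system.
  ws_allocate run42 30

  # List existing workspaces with their expiry dates.
  ws_list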

Network

The components of the cluster are connected via two independent networks: a management network (Ethernet and IPMI) and an InfiniBand fabric for MPI communication and storage access. The InfiniBand backbone is a fully non-blocking fabric with a data rate of 200 Gbit/s. The compute nodes are connected at different data rates according to the requirements of the configuration.
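
Because nodes are attached at different data rates, the negotiated link speed of a particular node can be inspected from a shell on that node, assuming the standard InfiniBand diagnostic utilities are installed:

  # Show state and rate of the local InfiniBand ports; the "Rate"
  # field reports the negotiated link speed in Gbit/s.
  ibstat

  # More detailed device information, including active link width and speed.
  ibv_devinfo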