Infiniband
← This page is part of the [[HPC Glossary]]

----
'''Infiniband''' and '''Omni-Path''' are high-speed networks often used in HPC systems for their low latency. The name comes from "infinite bandwidth": several cables can be connected to the same machine, so the aggregate bandwidth can in principle be scaled without limit.
Infiniband is the original network technology, distributed by Mellanox (now part of Nvidia).
Omni-Path (OPA) is a very similar technology, originally created by Intel but now sold by a separate company, Cornelis Networks.
Infiniband and Omni-Path do not natively use the TCP protocol but RDMA ("remote direct memory access"); TCP/IP is normally provided on top (e.g. via IPoIB, "IP over Infiniband").
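As an illustration of the RDMA side, here is a minimal sketch using the libibverbs API (the standard user-space RDMA library) that lists the RDMA-capable devices visible on a node; the device name "mlx5_0" in the comment is just a typical example, not something the article specifies:

<syntaxhighlight lang="c">
/* Minimal sketch: list RDMA devices via libibverbs.
   Compile with: gcc list_rdma.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (devices == NULL || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found (is the RDMA stack loaded?)\n");
        return 1;
    }
    for (int i = 0; i < num_devices; i++) {
        /* ibv_get_device_name() returns the kernel name, e.g. "mlx5_0" */
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(devices[i]));
    }
    ibv_free_device_list(devices);
    return 0;
}
</syntaxhighlight>

Real RDMA applications (MPI libraries, parallel file system clients) build on this same verbs API, but usually indirectly through higher-level libraries.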
Usual setups reserve the high-speed network for compute jobs and data transfer with the file systems, while SSH logins and other maintenance traffic go over a standard Ethernet connection to the compute nodes.
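On such a node, the two networks typically show up as separate interfaces. A minimal sketch that lists them follows; note that interface names like "ib0" (IPoIB) and "eth0" (Ethernet) are common conventions but vary per site, so they are assumptions here:

<syntaxhighlight lang="c">
/* Minimal sketch: print each IPv4 interface and its address,
   e.g. the Ethernet management interface vs. the IPoIB interface. */
#include <stdio.h>
#include <sys/types.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct ifaddrs *ifaddr;
    if (getifaddrs(&ifaddr) == -1) {
        perror("getifaddrs");
        return 1;
    }
    for (struct ifaddrs *ifa = ifaddr; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET)
            continue;
        char buf[INET_ADDRSTRLEN];
        struct sockaddr_in *sa = (struct sockaddr_in *)ifa->ifa_addr;
        inet_ntop(AF_INET, &sa->sin_addr, buf, sizeof(buf));
        /* "ib0" conventionally names an IPoIB interface; "eth0"/"eno1"
           typically carry the management Ethernet network */
        printf("%-8s %s\n", ifa->ifa_name, buf);
    }
    freeifaddrs(ifaddr);
    return 0;
}
</syntaxhighlight>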
See the Wikipedia articles on [https://en.wikipedia.org/wiki/InfiniBand InfiniBand] and [https://en.wikipedia.org/wiki/Omni-Path Omni-Path] for more detailed information.