Helix/Migration Guide

Here we list important changes on bwForCluster Helix compared to the previous system bwForCluster MLS&WISO.
   
== Registration ==

The registration process for a [[Registration/bwForCluster|bwForCluster]] applies.
Note for step B: Compute projects ("Rechenvorhaben") that were approved for bwForCluster MLS&WISO (Production) are also valid for bwForCluster Helix.

== Data access during migration phase ==
* '''$HOME:''' You start with a fresh home directory on bwForCluster Helix. Your old home directory from bwForCluster MLS&WISO remains available on the login nodes under /mnt/mls_home/ (read-only) until 14.10.2022. Please copy the data you still need before that date (see the example below).
* '''Workspaces:''' The workspaces are shared between both systems and can be used on both with full read and write access. The data migration happens automatically in the background; no action is required on your part.
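For example, copying home directory data on a Helix login node could look like the following minimal sketch. The directory layout below /mnt/mls_home/ and the directory name my_project are placeholders, not the actual structure; adjust them to what you see on the system.

<pre>
# On a Helix login node: the old home directory is mounted read-only.
# The path layout below /mnt/mls_home/ is an assumption - check it first.
ls /mnt/mls_home/

# Copy a directory you still need into the new home directory.
# rsync preserves timestamps and permissions and can safely be re-run.
rsync -av /mnt/mls_home/$USER/my_project/ "$HOME"/my_project/
</pre>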
== CPU Manufacturer ==
 
The CPU manufacturer has changed from Intel to AMD. All compute nodes are now equipped with the same AMD processors. It is recommended that you recompile your applications for the new hardware.
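As a minimal sketch, rebuilding a C application on a Helix node so that it is optimized for the AMD CPUs might look like this. The compiler module name and the flags are examples, not a prescribed toolchain.

<pre>
# Load a compiler (the module name is an example - check 'module avail').
module load compiler/gnu

# Rebuild on the new system; -march=native targets the CPU of the node
# the compiler runs on, i.e. the new AMD processors.
gcc -O2 -march=native -o my_app my_app.c
</pre>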
   
 
== Software Modules ==

* Basic and frequently used software modules are already available; more will follow. For special requests, please submit a support ticket.
* MPI modules are now provided in a hierarchical manner (see the example after this list). That means:
** You need to load a compiler module first; only then are suitable MPI modules visible with 'module avail' and available to load.
** 'module spider' shows all available MPI modules.
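A short sketch of how the hierarchy behaves in practice (the module system is based on Lmod; the module names below are examples and may differ on the system):

<pre>
# Without a compiler loaded, hierarchical MPI modules are hidden from
# 'module avail'; 'module spider' still lists them all.
module spider

# Load a compiler first (example name), then the matching MPI modules
# become visible and loadable.
module load compiler/gnu
module avail
module load mpi/openmpi        # example name
</pre>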
   
== Batch System ==

Partitions:
* The 'single' partition now contains all node types, including GPU nodes.
* The partition for multi-node CPU jobs is now named 'cpu-multi' (see the example job script below).
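As an illustration, a minimal Slurm job script for the renamed 'cpu-multi' partition could look as follows. The resource values, module names, and program name are placeholders, not recommendations.

<pre>
#!/bin/bash
#SBATCH --partition=cpu-multi        # multi-node CPU partition
#SBATCH --nodes=2                    # example values - adjust to your job
#SBATCH --ntasks-per-node=64
#SBATCH --time=01:00:00

module load compiler/gnu mpi/openmpi # example module names
srun ./my_mpi_app                    # placeholder for your application
</pre>

The script is submitted as usual with 'sbatch'.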
   
 
== Access to SDS@hd ==

You can access your storage space on SDS@hd directly in /mnt/sds-hd/ on all login and compute nodes. Kerberos tickets are no longer needed.
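For example, assuming a hypothetical storage project directory named sd00x000, the data can be used in place on any login or compute node:

<pre>
# 'sd00x000' is a hypothetical project directory name - use your own.
ls /mnt/sds-hd/sd00x000/

# Data can be read and written directly, e.g. copying results from a job:
cp results.dat /mnt/sds-hd/sd00x000/
</pre>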
