Difference between revisions of "Helix/Migration Guide"

From bwHPC Wiki
Revision as of 09:51, 17 August 2022

1 Data

1.1 Home directory

You start with a fresh home directory on bwForCluster Helix. Your home directory from bwForCluster MLS&WISO is available on the login nodes under /mnt/mls_home/ (read-only). Please copy what you need.
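For example, the copy can be done with rsync on a Helix login node. This is a minimal sketch; `my_project` is a placeholder for whatever you actually want to carry over, and `/mnt/mls_home/` is the read-only mount named above.

```shell
# Copy selected data from the old (read-only) home into the new one.
# Run on a Helix login node; $USER expands to your own account name.
# 'my_project' is a hypothetical directory name.
rsync -avh /mnt/mls_home/$USER/my_project/ $HOME/my_project/
```

Note the trailing slashes: with rsync they copy the *contents* of the source directory into the destination rather than nesting a second `my_project` inside it.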

1.2 Workspaces

The workspaces are identical and can be used on both systems (read and write).
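Since the workspaces are shared, the same workspace commands should resolve to the same data on either system. A sketch, assuming the usual bwHPC workspace tools are available and `my_ws` is a hypothetical workspace name:

```shell
ws_list         # list your existing workspaces (the same set is visible from both systems)
ws_find my_ws   # print the filesystem path of workspace 'my_ws'
```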


2 Changes

Here you can find a list of important changes on bwForCluster Helix compared to the previous system bwForCluster MLS&WISO.

2.1 CPU manufacturer

The CPU manufacturer has changed from Intel to AMD. All compute nodes are now equipped with the same AMD processors. It is recommended that you recompile your applications.

2.2 Software Modules

Basic and the most commonly used software modules are already available.

MPI modules are now provided in a hierarchical manner. That means:

  • You need to load a compiler module first; only then do suitable MPI modules appear in 'module avail' and become loadable.
  • 'module spider' shows all available MPI modules.
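A typical session under such a module hierarchy might look as follows. This is a sketch; the exact module names (e.g. `compiler/gcc`, `mpi/openmpi`) are assumptions, not the guaranteed names on Helix:

```shell
module avail               # MPI modules are not listed yet
module load compiler/gcc   # loading a compiler exposes the MPI builds that match it
module avail               # now lists MPI modules built with this compiler
module load mpi/openmpi    # load one of them
module spider openmpi      # alternatively: show all available OpenMPI modules directly
```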

2.3 Batch System

Partitions:

  • The 'single' partition now contains all node types, including GPU nodes.
  • The partition for multi-node CPU jobs is now named 'cpu-multi'.
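A multi-node CPU job would therefore request the renamed partition in its batch script. A sketch with placeholder resource values; `myapp` is a hypothetical MPI program:

```shell
#!/bin/bash
#SBATCH --partition=cpu-multi    # renamed partition for multi-node CPU jobs
#SBATCH --nodes=2                # placeholder resource requests
#SBATCH --ntasks-per-node=64
#SBATCH --time=01:00:00

srun ./myapp                     # 'myapp' stands in for your own MPI application
```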

2.4 Access to SDS@hd

You can access your storage space on SDS@hd directly in /mnt/sds-hd/ on all login and compute nodes. Kerberos tickets are no longer needed.
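Accessing the data is then an ordinary filesystem operation. A sketch; `sd16a001` is a hypothetical SDS@hd storage project acronym standing in for your own:

```shell
# Works on all login and compute nodes; no Kerberos ticket required.
ls /mnt/sds-hd/sd16a001/
```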