BwForCluster MLS&WISO Production Login

From bwHPC Wiki
Revision as of 16:08, 25 November 2015 by F Schmitt (talk | contribs) (initiale Version, nur Mannheimer Login-Knoten)

1 Login

After registration (see here), the bwForCluster MLS&WISO can be accessed via Secure Shell (SSH); no other login method is allowed. From Linux machines, you can log in with the following command. For other SSH clients, fill in the UserID and the login server name accordingly.

ssh <UserID>@<login-server>

To run graphical applications, pass the -X option to OpenSSH to enable X11 forwarding:

ssh -X <UserID>@<login-server>

The bwForCluster MLS&WISO has four dedicated login nodes. The selection of the login node is done automatically. If you log in multiple times, different sessions might run on different login nodes. In general, you should use the generic hostname given above to allow us to balance the load over the login nodes.
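If you connect frequently, you can let your SSH client remember the user and host. A minimal sketch of an ~/.ssh/config entry; the alias mlswiso is an example of our own choosing, and <login-server> and ma_username stand for the generic login hostname and your actual UserID:

```
# Hypothetical ~/.ssh/config entry; replace <login-server> with the
# generic login hostname and ma_username with your own UserID.
Host mlswiso
    HostName <login-server>
    User ma_username
    ForwardX11 yes
```

With this in place, `ssh mlswiso` connects to the generic hostname (so load balancing across the login nodes still works) with X11 forwarding enabled.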

1.1 About UserID / Username

<UserID> in the ssh command is a placeholder for your login name on the cluster. It consists of a prefix denoting your home organization, an underscore, and your username at that organization. Prefixes and resulting usernames are as follows:

Site Prefix Username
Freiburg fr fr_username
Hohenheim ho ho_username
Karlsruhe ka ka_username
Konstanz kn kn_username
Mannheim ma ma_username
Stuttgart st st_username
Tübingen tu tu_username
Ulm ul ul_username
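The composition rule from the table above can be sketched as a short shell snippet; the prefix and username here (ma, jdoe) are illustrative examples only:

```shell
# Compose the cluster UserID from the site prefix and the username
# at your home organization (example values, not real accounts).
prefix="ma"       # site prefix, e.g. Mannheim
username="jdoe"   # your username at your home organization
userid="${prefix}_${username}"
echo "$userid"    # prints ma_jdoe
```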

1.2 Allowed activities on login nodes

The login nodes are the access point to the compute system and to its $HOME directory. They are shared among all users of the cluster, so activities on them are limited primarily to setting up your batch jobs. Acceptable activities also include:

  • compilation of your program code and
  • short pre- and postprocessing of your batch jobs.

To guarantee usability for all users of the bwForCluster MLS&WISO, you must not run compute jobs on the login nodes. Compute jobs must be submitted as Batch Jobs. Any compute job running on a login node will be terminated without notice.

2 Further reading

  • Compute jobs must be submitted as Batch Jobs. Cluster-specific information is here.
  • The available hardware is described here.
  • Details and best practices regarding data storage are described here.