BwForCluster MLS&WISO Production Login
1 Login
After registration (see here), the bwForCluster MLS&WISO can be accessed via Secure Shell (SSH); SSH is the only allowed login method. From Linux machines, you can log in with the following command. For other SSH client programs, fill in the UserID and login server name accordingly.
ssh <UserID>@bwfor.cluster.uni-mannheim.de
To run graphical applications, pass the -X flag to OpenSSH to enable X11 forwarding:
ssh -X <UserID>@bwfor.cluster.uni-mannheim.de
The bwForCluster MLS&WISO has four dedicated login nodes. The selection of the login node is done automatically. If you log in multiple times, different sessions might run on different login nodes. In general, you should use the generic hostname given above to allow us to balance the load over the login nodes.
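To avoid retyping the login details, you can store them in your OpenSSH client configuration. A minimal sketch, assuming a hypothetical user ID `ma_jdoe` (substitute your own):

```
# ~/.ssh/config -- "ma_jdoe" is a hypothetical example user ID
Host bwfor
    HostName bwfor.cluster.uni-mannheim.de
    User ma_jdoe
    ForwardX11 yes
```

With this entry in place, `ssh bwfor` connects to the generic hostname, so load balancing across the login nodes still works.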
1.1 About UserID / Username
<UserID> in the ssh command is a placeholder for your full cluster username, which consists of a prefix denoting your home organization followed by your username at that organization. Prefixes and resulting usernames are as follows:
| Site | Prefix | Username |
|---|---|---|
| Freiburg | fr | fr_username |
| Hohenheim | ho | ho_username |
| Karlsruhe | ka | ka_username |
| Konstanz | kn | kn_username |
| Mannheim | ma | ma_username |
| Stuttgart | st | st_username |
| Tübingen | tu | tu_username |
| Ulm | ul | ul_username |
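As a concrete illustration, a hypothetical user `jdoe` from Mannheim would combine the site prefix `ma` with their local username:

```shell
# Build the full cluster user ID from site prefix and local username.
# "jdoe" is a hypothetical example account, not a real user.
prefix="ma"
localuser="jdoe"
echo "ssh ${prefix}_${localuser}@bwfor.cluster.uni-mannheim.de"
```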
1.2 Allowed activities on login nodes
The login nodes are the access point to the compute system and your $HOME directory. They are shared by all users of the cluster, so your activities on the login nodes are limited primarily to setting up your batch jobs. Permitted activities also include:
- compilation of your program code and
- short pre- and postprocessing of your batch jobs.
To guarantee usability for all users of the bwForCluster MLS&WISO, you must not run your compute jobs on the login nodes. Compute jobs must be submitted as Batch Jobs. Any compute job running on the login nodes will be terminated without notice.
2 Further reading
- Scientific software is made accessible using the Environment Modules system
- Compute jobs must be submitted as Batch Jobs. Cluster-specific information is here.
- The available hardware is described here.
- Details and best practices regarding data storage are described here.