BwForCluster MLS&WISO Production Login
After registration, the bwForCluster MLS&WISO (Production) can be accessed via Secure Shell (SSH). SSH is the only permitted login method.
From Linux machines, you can log in using the following command:
ssh <UserID>@bwfor.cluster.uni-mannheim.de or
ssh <UserID>@bwforcluster.bwservices.uni-heidelberg.de
To run graphical applications, you need to enable X11 forwarding with the -X option:
ssh -X <UserID>@bwfor.cluster.uni-mannheim.de or
ssh -X <UserID>@bwforcluster.bwservices.uni-heidelberg.de
For SSH client programs with a graphical user interface, fill in the UserID and login server name accordingly. On Windows you can use PuTTY or MobaXterm. The latter comes with an integrated X11 server, which is required for working with graphical applications.
The bwForCluster MLS&WISO (Production) has four dedicated login nodes. The selection of the login node is done automatically. If you log in multiple times, different sessions might run on different login nodes. In general, you should use the generic hostnames given above to allow us to balance the load over the login nodes.
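The login commands above can be shortened with a client-side SSH configuration. The following is a minimal sketch assuming an OpenSSH client; the alias name mlswiso is made up, and <UserID> must be replaced with your actual UserID:

```
# ~/.ssh/config -- "mlswiso" is a hypothetical alias for the Mannheim login address
Host mlswiso
    HostName bwfor.cluster.uni-mannheim.de
    User <UserID>
    ForwardX11 yes    # equivalent to the -X option for graphical applications
```

With this in place, `ssh mlswiso` is equivalent to the longer command above.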
There is a separate login node for the preparation of jobs for the Skylake nodes with the hostname: bwforcluster-sky.bwservices.uni-heidelberg.de
1.1 About UserID
<UserID> in the ssh commands above is a placeholder for your username at your home organization, preceded by a prefix denoting that organization. Prefixes and resulting UserIDs are as follows:
1.2 Allowed activities on login nodes
The login nodes are the access point to the compute system and to your $HOME directory. The login nodes are shared with all users of the cluster. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:
- compilation of your program code and
- short pre- and postprocessing of your batch jobs.
To guarantee usability for all users of the bwForCluster, you must not run compute jobs on the login nodes. Compute jobs must be submitted as Batch Jobs. Any compute job running on a login node will be terminated without notice.
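As an illustration of the batch-job requirement, a minimal job script might look like the following. This is only a sketch: the scheduler is assumed here to be Slurm, and the job name and walltime limit are made-up examples; consult the cluster-specific batch job documentation for the actual scheduler and resource options:

```
#!/bin/bash
#SBATCH --job-name=example      # job name (hypothetical)
#SBATCH --ntasks=1              # request a single task
#SBATCH --time=00:10:00         # walltime limit (made-up value)

# The computation below runs on a compute node, not on a login node.
echo "Running on $(hostname)"
```

Assuming Slurm, such a script would be submitted from a login node with `sbatch jobscript.sh`; only the short submission itself happens on the login node.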
2 Further reading
- Scientific software is made accessible using the Environment Modules system
- Compute jobs must be submitted as Batch Jobs. Cluster-specific information is here.
- The available hardware is described here.
- Details and best practices regarding data storage are described here.
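For the Environment Modules system mentioned above, typical usage on the cluster looks like the following sketch. The module name devel/gcc is an assumption for illustration; run `module avail` to see what is actually installed:

```
module avail              # list all available software modules
module load devel/gcc     # load a module (name is hypothetical)
module list               # show currently loaded modules
module unload devel/gcc   # unload the module again
```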