BwForCluster MLS&WISO Production Login
Contents
1 Login
1.1 Prerequisites for login
1.2 Login to bwForCluster MLS&WISO
1.3 Establishing Network Access
1.4 About UserID
1.5 Allowed activities on login nodes
2 Further reading
1 Login
1.1 Prerequisites for login
- You have registered your account at the registration server for MLS&WISO.
  - If this is still missing, please log in to the registration server of MLS&WISO and click on "Register" in section "MLS&WISO".
- You have set a service password for MLS&WISO.
  - If you have not done so already, please log in to the registration server of MLS&WISO and select "Set Password" in section "MLS&WISO".
- You have set up a time-based one-time password (TOTP) for the two-factor authentication (2FA) login.
  - If this is still missing, please follow the instructions for registering a new 2FA token on the following page: BwForCluster MLS&WISO Production 2FA tokens.
- Your IP address is within the IP range of your university, i.e. you are either working on a computer on your campus or connected to your university via a virtual private network (VPN).
  - If you have an external IP address (e.g. when working from home) and are not connected to your university via VPN, you cannot connect to MLS&WISO. Please consult your university's documentation on how to establish a VPN connection.
1.2 Login to bwForCluster MLS&WISO
After registration, the bwForCluster MLS&WISO (Production) can be accessed via Secure Shell (SSH). SSH is the only allowed login method.
From Linux machines, you can log in using the following command:
ssh <UserID>@bwfor.cluster.uni-mannheim.de
or
ssh <UserID>@bwforcluster.bwservices.uni-heidelberg.de
To run graphical applications, you need to enable X11 forwarding with the -X option:
ssh -X <UserID>@bwfor.cluster.uni-mannheim.de
or
ssh -X <UserID>@bwforcluster.bwservices.uni-heidelberg.de
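To verify that X11 forwarding works, you can inspect the DISPLAY variable or start a small graphical test application after logging in (xclock is a common choice, assuming it is installed on the login node):
echo $DISPLAY    # should print something like localhost:10.0
xclock           # should open a small clock window on your local screen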
For SSH client programs with a graphical user interface, fill in the UserID and login server name accordingly. On Windows you can use PuTTY or MobaXterm. The latter comes with an integrated X11 server, which is required for working with graphical applications.
The bwForCluster MLS&WISO (Production) has four dedicated login nodes. The selection of the login node is done automatically. If you log in multiple times, different sessions might run on different login nodes. In general, you should use the generic hostnames given above to allow us to balance the load over the login nodes.
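As a convenience, you can store the connection settings in your local ~/.ssh/config file so that a short alias suffices for login. This is a minimal sketch; the alias name mlswiso and the UserID hd_username are placeholders you need to adapt:
# ~/.ssh/config (on your local machine)
Host mlswiso
    HostName bwfor.cluster.uni-mannheim.de
    User hd_username
    ForwardX11 yes
With this entry in place, ssh mlswiso replaces the full command shown above.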
There is a separate login node for preparing jobs for the Skylake nodes, with the hostname bwforcluster-sky.bwservices.uni-heidelberg.de (temporarily down).
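Once this node is available again, login works the same way as for the generic hostnames:
ssh <UserID>@bwforcluster-sky.bwservices.uni-heidelberg.de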
1.3 Establishing Network Access
Access to the bwForCluster MLS&WISO is currently limited to IP addresses from the so-called BelWue networks. All universities in Baden-Württemberg are connected to BelWue, so if you are on your campus network (e.g. in your office or on the Campus WiFi) you should be able to connect without restrictions. If you are outside of one of the BelWue networks (e.g. in your home office), a VPN connection to your home institution has to be established first.
1.4 About UserID
<UserID> in the ssh commands is a placeholder consisting of a prefix denoting your home organization and your local username there. Prefixes and the resulting UserIDs are as follows (see the example after the table):
| Site       | Prefix | UserID      |
|------------|--------|-------------|
| Freiburg   | fr     | fr_username |
| Heidelberg | hd     | hd_username |
| Hohenheim  | ho     | ho_username |
| Karlsruhe  | ka     | ka_username |
| Konstanz   | kn     | kn_username |
| Mannheim   | ma     | ma_username |
| Stuttgart  | st     | st_username |
| Tübingen   | tu     | tu_username |
| Ulm        | ul     | ul_username |
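For example, a hypothetical user with the local username jdoe at Heidelberg University would log in with the UserID hd_jdoe:
ssh hd_jdoe@bwfor.cluster.uni-mannheim.de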
1.5 Allowed activities on login nodes
The login nodes are the access point to the compute system and to your $HOME directory, and they are shared among all users of the cluster. Therefore, activities on the login nodes are limited primarily to setting up your batch jobs. Acceptable activities also include:
- compilation of your program code and
- short pre- and postprocessing of your batch jobs.
To guarantee usability for all users of the bwForCluster, you must not run compute jobs on the login nodes. Compute jobs must be submitted as Batch Jobs (see the sketch below). Any compute job running on a login node will be terminated without notice.
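The following is a minimal sketch of a batch job script, assuming a Slurm-based batch system; the partition name single, the resource limits, and the program name my_program are placeholders, so please consult the cluster's Batch Jobs documentation for the scheduler and queues actually in use:
#!/bin/bash
#SBATCH --partition=single   # placeholder partition name
#SBATCH --ntasks=1           # one task
#SBATCH --time=00:10:00      # 10 minutes walltime
#SBATCH --mem=1G             # memory request
# Run the actual computation on a compute node, not on the login node
./my_program
Such a script would then be submitted from a login node with sbatch jobscript.sh.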
2 Further reading
- Scientific software is made accessible using the Environment Modules system (a short usage sketch follows at the end of this page).
- Compute jobs must be submitted as Batch Jobs. Cluster-specific information is here.
- The available hardware is described here.
- Details and best practices regarding data storage are described here.
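As a brief illustration of the Environment Modules system mentioned above, software is listed and loaded with the module command; the module name math/matlab is a placeholder, as the available names differ per cluster:
module avail             # list all available software modules
module load math/matlab  # load a module (placeholder name)
module list              # show currently loaded modules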