<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=H+Haefner</id>
	<title>bwHPC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=H+Haefner"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/e/Special:Contributions/H_Haefner"/>
	<updated>2026-04-11T12:42:38Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8519</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8519"/>
		<updated>2021-03-31T13:31:50Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step A: bwUniCluster entitlement for registration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose Tier 3 high performance computing (HPC)&lt;br /&gt;
cluster, co-financed by Baden-Württemberg&#039;s Ministry of Science, Research and the Arts and the shareholders:&lt;br /&gt;
&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of several universities of applied sciences in Baden-Württemberg, see below) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; to [[bwUniCluster_2.0|bwUniCluster 2.0]], a user account is required. All members of the shareholder universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (not bwUniCluster 2.0)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for its own respective members. Some universities have established online processes or provide downloadable entitlement application forms. If there is no link behind the name of an institution in the following list, please contact the local IT support services: &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/ism/Accessform_bwUniCluster_DE_EN.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [https://uni-tuebingen.de/de/155157 Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Heilbronn&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Mannheim&lt;br /&gt;
* Hochschule Offenburg&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration, service password and 2-factor authentication ==&lt;br /&gt;
&lt;br /&gt;
After completing step A, i.e., after successful issuing of the bwUniCluster entitlement, you have to register for the service. To do so, please visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and complete the following steps.&lt;br /&gt;
&lt;br /&gt;
1. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-register-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation. Enter the user ID / username and password of your home organisation - this is usually the same password used for your e-mail account and other services - and click on &#039;&#039;&#039;Login&#039;&#039;&#039;, &#039;&#039;&#039;Einloggen&#039;&#039;&#039; or something similar.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/]. If you are logging into bwIDM for the first time, there will be a summary screen which shows the account details your home institution is providing to the central system. Please check that all data is valid and then click on &#039;&#039;&#039;Continue&#039;&#039;&#039; or &#039;&#039;&#039;Weiter&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Once you have successfully logged into the bwIDM system, you will be greeted by a home screen showing all state-wide services you have access to. There will be a box labelled &amp;quot;bwUniCluster&amp;quot;. Click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039; to start the registration process.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-2-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Since August 13, 2020, a &#039;&#039;&#039;2-factor authentication&#039;&#039;&#039; (2FA) mechanism has been enforced to improve security. If you have never registered a 2FA token on bwIDM before, the following error message will appear:&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-3-red.png|center|]]&lt;br /&gt;
&lt;br /&gt;
Click on the [https://bwidm.scc.kit.edu/user/twofa.xhtml link] or on the &#039;&#039;&#039;My Tokens&#039;&#039;&#039; entry in the main menu. The instructions for registering a new 2FA token can be found on the following page: [[bwUniCluster 2.0 User Access/2FA Tokens]]. Please complete them before proceeding.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met, you might be able to correct the issue by following the instructions. In all other cases please [[Registration_Support_-_bwUniCluster|contact your local hotline]].&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Read the Terms of Use (&#039;&#039;&#039;Nutzungsbedingungen und -richtlinien&#039;&#039;&#039;), check the box beside &#039;&#039;&#039;I have read and accepted the terms of use&#039;&#039;&#039; and click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Set a service password for the bwUniCluster and click on &#039;&#039;&#039;Save&#039;&#039;&#039; or &#039;&#039;&#039;Speichern&#039;&#039;&#039;. Logging in with the password of your home organisation, like on the former bwUniCluster 1, is no longer possible. Please make sure to use a strong password which is different from any other password you are currently using or have used on other systems. You will also be asked to change the service password regularly.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-5-red.png|center|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: Fill out the bwUniCluster questionnaire ==&lt;br /&gt;
&lt;br /&gt;
Filling out the bwUniCluster questionnaire at&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
is mandatory for all users. The input is used solely to improve our support activities and for capacity planning of future HPC resources. &#039;&#039;&#039;If the questionnaire is not filled out, access to bwUniCluster 2.0 is blocked 14 days after registration.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing the Service Password ==&lt;br /&gt;
&lt;br /&gt;
Your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the web registration (see step 8 of Step B). At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# Go to [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# Authenticate yourself via the user id / username and password provided by your home institution&lt;br /&gt;
# Find the entry &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# Enter the new password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;) will be shown&lt;br /&gt;
# Proceed to log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
== Contact / Support ==&lt;br /&gt;
If you have questions or problems concerning the bwUniCluster (2.0) registration, please [[bwUniCluster 2.0 Support|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Establishing network access = &lt;br /&gt;
&lt;br /&gt;
Access to bwUniCluster 2.0 is &#039;&#039;&#039;limited to IP addresses from the so-called BelWü networks&#039;&#039;&#039;. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the campus WiFi) you should be able to connect to bwUniCluster 2.0 without restrictions. If you are outside of one of the BelWü networks (e.g. in your home office instead of your campus office), a VPN connection to your home institution has to be established first (see e.g. [1] for the KIT).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
After finishing the web registration and making sure that you are on a network from which you have access to bwUniCluster 2.0 (e.g. by establishing a VPN connection), the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039; based login. Recommended SSH client applications are:&lt;br /&gt;
&lt;br /&gt;
* the ssh (OpenSSH) command-line client included in all Linux distributions and in macOS, typically run from the application &#039;&#039;Terminal&#039;&#039;&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The main hostname required to connect to bwUniCluster 2.0 is &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; or &#039;&#039;&#039;uc2.scc.kit.edu&#039;&#039;&#039;. The system has four login nodes and we use so-called &#039;&#039;DNS round-robin scheduling&#039;&#039; to load-balance the incoming connections between the nodes. If you open multiple SSH sessions to bwUniCluster 2.0, these sessions will be established to different login nodes, so processes started in one session might not be visible in other sessions.&lt;br /&gt;
&lt;br /&gt;
The older Broadwell extension partition of the former bwUniCluster 1 is connected to bwUniCluster 2.0. You can use the hostname &#039;&#039;&#039;uc1e.scc.kit.edu&#039;&#039;&#039; to connect to the login nodes of this partition. &lt;br /&gt;
&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login3.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login4.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, fourth login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login1.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login2.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
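If you want to avoid the round-robin scheduling and reach the same login node in every session, you can, for example, address one of the nodes from the table directly (the UserID is a placeholder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@uc2-login3.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;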
&lt;br /&gt;
Only the secure shell (&#039;&#039;SSH&#039;&#039;) protocol is allowed for login. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039; are not allowed for security reasons.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usernames ==&lt;br /&gt;
&lt;br /&gt;
Your username will be the same as the one provided by your home institution, but &#039;&#039;&#039;prefixed&#039;&#039;&#039; with two characters and an underscore indicating your home institution. For example: If you are a member of the University of Konstanz and your local username is ab1234, your username on bwUniCluster 2.0 is kn_ab1234.&lt;br /&gt;
&lt;br /&gt;
The following list contains all prefixes currently in use:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Heilbronn || &#039;&#039;hn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Mannheim || &#039;&#039;mn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Offenburg || &#039;&#039;of_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: OpenSSH ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux, macOS and *BSD come with a built-in SSH client provided by the OpenSSH project. More recent versions of Windows 10 and the Windows Subsystem for Linux also come with a built-in OpenSSH client.&lt;br /&gt;
&lt;br /&gt;
To use this client, simply open a command line terminal (the exact process differs on every operating system, but usually involves starting an application called &#039;&#039;&#039;Terminal&#039;&#039;&#039; or &#039;&#039;&#039;Command Prompt&#039;&#039;&#039;) and enter the following command to connect to bwUniCluster 2.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on a Linux or Unix system running the X Window System (X11) and want to use a GUI-based application on bwUniCluster 2.0, you can use the &#039;&#039;-X&#039;&#039; option for the ssh command to set up X11 forwarding:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
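If you connect frequently, an entry in the client-side OpenSSH configuration file ~/.ssh/config can save typing. This is only a minimal sketch; the host alias uc2 and the UserID are placeholders you may change:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ~/.ssh/config on your local machine&lt;br /&gt;
Host uc2&lt;br /&gt;
    HostName uc2.scc.kit.edu&lt;br /&gt;
    User &amp;lt;UserID&amp;gt;&lt;br /&gt;
    ForwardX11 yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards the connection can be opened with a plain &#039;&#039;ssh uc2&#039;&#039;.&lt;br /&gt;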
&lt;br /&gt;
Windows users requiring X11 forwarding for graphical applications should use &#039;&#039;&#039;MobaXterm&#039;&#039;&#039; instead.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039; on Windows. &#039;&#039;MobaXterm&#039;&#039; provides a built-in X11 server, allowing you to start GUI-based software.&lt;br /&gt;
 &lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039; and fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click on &#039;&#039;&#039;OK&#039;&#039;&#039;. A terminal will open in which you can enter your credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example login process ==&lt;br /&gt;
&lt;br /&gt;
After the connection has been initiated, a successful login process will go through the following three steps:&lt;br /&gt;
&lt;br /&gt;
1. The system asks for a &#039;&#039;&#039;One-Time Password&#039;&#039;&#039;. Generate one using the Software or Hardware Token registered on the bwIDM system (see [[bwUniCluster 2.0 User Access/2FA Tokens]]) and enter it after the &#039;&#039;&#039;Your OTP:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
2. The system asks for your service password. Enter it after the &#039;&#039;&#039;Password:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
3. You are greeted by the bwUniCluster 2.0 banner followed by a shell.&lt;br /&gt;
&lt;br /&gt;
The result should look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login example.png|center|]]&lt;br /&gt;
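For reference, the sequence of prompts in plain text form looks roughly as follows (the UserID is a placeholder, banner output is abbreviated):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
Your OTP:&lt;br /&gt;
Password:&lt;br /&gt;
... bwUniCluster 2.0 banner followed by a shell prompt ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;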
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The &amp;quot;Your OTP:&amp;quot; prompt never appears and the connection hangs/times out instead&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: You are most likely not on a network from which access to the bwUniCluster 2.0 system is allowed. Please check if you might have to establish a VPN connection first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the One-Time Password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: The One-Time Password was not generated with the correct token. Make sure you are using the Software Token registered on bwIDM to generate the One-Time Password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the service password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the service password set on bwIDM and not the password valid for your home institution. Unlike the bwUniCluster 1, the bwUniCluster 2.0 only accepts the service password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: There is an error message from the pam_ses_open.sh script&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Your account is in the &amp;quot;LOST_ACCESS&amp;quot; state because the entitlement is no longer valid, the questionnaire was not filled out, or there was a problem during the communication between your home institution and the central bwIDM system. Please try the following steps:&lt;br /&gt;
&lt;br /&gt;
* Log into [https://bwidm.scc.kit.edu bwIDM], look for the bwUniCluster entry and click on &#039;&#039;&#039;Registry info&#039;&#039;&#039;. Your &amp;quot;Status:&amp;quot; should be &amp;quot;ACTIVE&amp;quot;. If it is not, please wait about ten minutes, since logging into bwIDM triggers a refresh and the problem may resolve itself. If the status does not change to ACTIVE after a longer period of time, please contact the support channels.&lt;br /&gt;
&lt;br /&gt;
* If you have not filled out the questionnaire, please do so at [https://zas.bwhpc.de/shib/en/bwunicluster_survey.php https://zas.bwhpc.de/shib/en/bwunicluster_survey.php] and then wait about ten minutes before attempting to log into the HPC system again.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared with all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Your activities may also include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
In contrast to the bwUniCluster 1 and many other HPC systems, it is &#039;&#039;&#039;no longer possible to self-manage your SSH keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039;. Existing files will no longer be evaluated. SSH keys have to be managed via the central bwIDM system instead. Please refer to the user guide for this functionality:&lt;br /&gt;
&lt;br /&gt;
[[bwUniCluster 2.0 User Access/SSH Keys]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave the bwUniCluster 2.0, follow the deregistration checklist:&lt;br /&gt;
# Transfer all your data in $HOME and your workspaces to your local computer/storage and afterwards delete all your data (see the example after this checklist)&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username and your home-organisational password and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
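Data transfer (step 1) can, for example, be done with standard command line tools such as rsync or scp from your local machine. The following is only a minimal sketch; the local target directories and the workspace path are placeholders you have to adapt:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# copy your $HOME directory to a local backup directory&lt;br /&gt;
$ rsync -av &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu:~/ /path/to/local/backup/home/&lt;br /&gt;
# copy a workspace in the same way&lt;br /&gt;
$ rsync -av &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu:/path/to/workspace/ /path/to/local/backup/workspace/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;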
Note that Step 2 will automatically unsubscribe you from the bwunicluster mail list.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8072</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8072"/>
		<updated>2020-12-18T09:38:14Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* sbatch -p queue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command &#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on BwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;6 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=40, (threads-per-core=2) &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=80, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| dev_gpu_4&lt;br /&gt;
| gpu_4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=376000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;1 node is reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000mb, nodes=14, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000mb, nodes=10, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define the time, number of tasks and memory if these are not explicitly given with the sbatch command. The resource options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
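For a complete submission, the queue is usually combined with the resource options described above inside a job script. The following is only a minimal sketch; the job script name and the executable are placeholders, and the requested resources must stay within the limits of the chosen queue listed in the table above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=single&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
#SBATCH --mem=5000mb&lt;br /&gt;
&lt;br /&gt;
./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script is then submitted with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch &amp;lt;my_jobscript&amp;gt;.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;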
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with small memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs requesting more than 8 GByte of memory, you must allocate resources for so-called interactive jobs using the command salloc on a login node. For example, for a serial application on a compute node that requires 5000 MByte of memory, with the interactive run limited to 2 hours, the following command has to be executed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After executing this command, &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will then be logged in automatically on the granted core. To run a serial program on that core, you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that in this example your serial job must run for less than 2 hours, otherwise the job will be killed by the system during runtime.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can now also start a graphical X11 terminal connected to the dedicated resource, which is available for 2 hours. You can start it with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that, once the walltime limit has been reached, the resources - i.e. the compute node - will automatically be revoked.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or on many compute nodes (e.g. 5 nodes with 40 cores each) usually requires a certain amount of memory in GByte (e.g. 50 GByte) and a maximum time (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores with 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node. If you want to run MPI programs, you can simply type mpirun &amp;lt;program_name&amp;gt;; your program will then run on all 200 cores. A very simple example for starting a parallel job is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt with the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; an example for this can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI, you must start &amp;lt;my_mpi_program&amp;gt; with the command mpiexec.hydra (instead of mpirun).&lt;br /&gt;
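A minimal example (the program name is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpiexec.hydra -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;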
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8071</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8071"/>
		<updated>2020-12-18T09:37:10Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* sbatch -p queue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command &#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on BwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;6 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=40, (threads-per-core=2) &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=80, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| dev_gpu_4&lt;br /&gt;
| gpu_4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=376000, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000, nodes=14, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-cpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000, nodes=10, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define the time, number of tasks and memory if these are not explicitly given with the sbatch command. The resource options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with small memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs requesting more than 8 GByte of memory, you must allocate resources for so-called interactive jobs using the command salloc on a login node. For example, for a serial application on a compute node that requires 5000 MByte of memory, with the interactive run limited to 2 hours, the following command has to be executed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will be logged in automatically on the granted core! To run a serial program on the granted core you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that your serial job must run less than 2 hours in this example, else the job will be killed during runtime by the system. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can also start now a graphical X11-terminal connecting you to the dedicated resource that is available for 2 hours. You can start it by the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that, once the walltime limit has been reached the resources - i.e. the compute node - will automatically be revoked.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or on many compute nodes (e.g. 5 nodes with 40 cores each) usually requires a certain amount of memory in GByte (e.g. 50 GByte) and a maximum time (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores requiring 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node. If you want to run MPI-programs, you can do it by simply typing mpirun &amp;lt;program_name&amp;gt;. Then your program will be run on 200 cores. A very simple example for starting a parallel job can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt by the commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; an example for this can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI you must start &amp;lt;my_mpi_program&amp;gt; by the command mpiexec.hydra (instead of mpirun).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8070</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8070"/>
		<updated>2020-12-18T09:25:46Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* sbatch -p queue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command &#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on BwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;6 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=40, (threads-per-core=2) &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=80, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| dev_gpu_4&lt;br /&gt;
| gpu_4&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000, nodes=14, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-cpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000, nodes=10, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define the time, number of tasks and memory if these are not explicitly given with the sbatch command. The resource options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with small memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs requesting more than 8 GByte of memory, you must allocate resources for so-called interactive jobs using the command salloc on a login node. For example, for a serial application on a compute node that requires 5000 MByte of memory, with the interactive run limited to 2 hours, the following command has to be executed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will be logged in automatically on the granted core! To run a serial program on the granted core you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that your serial job must run less than 2 hours in this example, else the job will be killed during runtime by the system. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can also start now a graphical X11-terminal connecting you to the dedicated resource that is available for 2 hours. You can start it by the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that, once the walltime limit has been reached the resources - i.e. the compute node - will automatically be revoked.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or on many compute nodes (e.g. 5 nodes with 40 cores each) usually requires a certain amount of memory in GByte (e.g. 50 GByte) and a maximum time (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores requiring 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node. If you want to run MPI-programs, you can do it by simply typing mpirun &amp;lt;program_name&amp;gt;. Then your program will be run on 200 cores. A very simple example for starting a parallel job can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt by the commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; an example for this can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI, you must start &amp;lt;my_mpi_program&amp;gt; with the command mpiexec.hydra instead of mpirun.&lt;br /&gt;
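For example, the 50-task run shown above could be started with Intel MPI like this (a sketch, assuming the corresponding Intel MPI module has already been loaded):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpiexec.hydra -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;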
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8069</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8069"/>
		<updated>2020-12-18T09:24:30Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* sbatch -p queue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command &#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on BwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;6 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=40, (threads-per-core=2) &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
| nodes=1&lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=80, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| dev_gpu_4&lt;br /&gt;
| gpu_4&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000, nodes=14, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000, nodes=10, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define time, number of tasks and memory if these are not explicitly given with the sbatch command. The resource options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple &amp;lt;job_script&amp;gt;&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple &amp;lt;job_script&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with small memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs that need more than 8 GByte of memory, you must allocate resources for so-called interactive jobs with the command salloc on a login node. For example, for a serial application that requires 5000 MByte of memory on a compute node and an interactive run limited to 2 hours, execute the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will be logged in automatically on the granted core! To run a serial program on the granted core you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that in this example your serial job must finish within 2 hours; otherwise it will be killed by the system at runtime. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can also start a graphical X11 terminal connected to the dedicated resource, which is available for 2 hours in this example. Start it with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, the resources - i.e. the compute node - will be revoked automatically.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or several compute nodes (e.g. 5 nodes with 40 cores each) usually requires a certain amount of memory per node (e.g. 50 GByte) and a maximum time (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores with 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node. If you want to run MPI programs, simply type mpirun &amp;lt;program_name&amp;gt;; your program will then run on all 200 cores. A very simple example of starting a parallel job is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt by the commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; an example for this can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI, you must start &amp;lt;my_mpi_program&amp;gt; with the command mpiexec.hydra instead of mpirun.&lt;br /&gt;
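For example, the 50-task run shown above could be started with Intel MPI like this (a sketch, assuming the corresponding Intel MPI module has already been loaded):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpiexec.hydra -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;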
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8068</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8068"/>
		<updated>2020-12-18T09:23:59Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* sbatch -p queue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command &#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on BwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;6 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=40, (threads-per-core=2) &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
| nodes=1&lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=80, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| dev_gpu_4&lt;br /&gt;
| gpu_4&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000, nodes=14, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000, nodes=10, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define time, number of tasks and memory if these are not explicitly given with the sbatch command. The resource options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple &amp;lt;job_script&amp;gt;&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple &amp;lt;job_script&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with small memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs that need more than 8 GByte of memory, you must allocate resources for so-called interactive jobs with the command salloc on a login node. For example, for a serial application that requires 5000 MByte of memory on a compute node and an interactive run limited to 2 hours, execute the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will be logged in automatically on the granted core! To run a serial program on the granted core you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that in this example your serial job must finish within 2 hours; otherwise it will be killed by the system at runtime. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can also start a graphical X11 terminal connected to the dedicated resource, which is available for 2 hours in this example. Start it with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, the resources - i.e. the compute node - will be revoked automatically.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or several compute nodes (e.g. 5 nodes with 40 cores each) usually requires a certain amount of memory per node (e.g. 50 GByte) and a maximum time (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores with 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node. If you want to run MPI programs, simply type mpirun &amp;lt;program_name&amp;gt;; your program will then run on all 200 cores. A very simple example of starting a parallel job is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt by the commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; an example for this can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI, you must start &amp;lt;my_mpi_program&amp;gt; with the command mpiexec.hydra instead of mpirun.&lt;br /&gt;
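For example, the 50-task run shown above could be started with Intel MPI like this (a sketch, assuming the corresponding Intel MPI module has already been loaded):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpiexec.hydra -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;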
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8067</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8067"/>
		<updated>2020-12-18T09:21:46Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* sbatch -p queue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command &#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on BwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;6 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=40, (threads-per-core=2) &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
| nodes=1&lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=80, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| dev_gpu_4&lt;br /&gt;
| gpu_4&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000, nodes=14, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000, nodes=10, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define time, number of tasks and memory if these are not explicitly given with the sbatch command. The resource options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple &amp;lt;job_script&amp;gt;&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple &amp;lt;job_script&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with small memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs that need more than 8 GByte of memory, you must allocate resources for so-called interactive jobs with the command salloc on a login node. For example, for a serial application that requires 5000 MByte of memory on a compute node and an interactive run limited to 2 hours, execute the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will be logged in automatically on the granted core! To run a serial program on the granted core you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that in this example your serial job must finish within 2 hours; otherwise it will be killed by the system at runtime. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can also start a graphical X11 terminal connected to the dedicated resource, which is available for 2 hours in this example. Start it with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, the resources - i.e. the compute node - will be revoked automatically.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or several compute nodes (e.g. 5 nodes with 40 cores each) usually requires a certain amount of memory per node (e.g. 50 GByte) and a maximum time (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores with 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node. If you want to run MPI programs, simply type mpirun &amp;lt;program_name&amp;gt;; your program will then run on all 200 cores. A very simple example of starting a parallel job is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt by the commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; an example for this can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI, you must start &amp;lt;my_mpi_program&amp;gt; with the command mpiexec.hydra instead of mpirun.&lt;br /&gt;
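For example, the 50-task run shown above could be started with Intel MPI like this (a sketch, assuming the corresponding Intel MPI module has already been loaded):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpiexec.hydra -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;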
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8066</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8066"/>
		<updated>2020-12-18T09:17:08Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* sbatch -p queue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command &#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on BwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;6 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=40, (threads-per-core=2) &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
| nodes=1&lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=80, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| dev_gpu_4&lt;br /&gt;
| gpu_4&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000, nodes=14, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000, nodes=10, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define time, number of tasks and memory if these are not explicitly given with the sbatch command. The resource options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple &amp;lt;job_script&amp;gt;&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple &amp;lt;job_script&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with small memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs that need more than 8 GByte of memory, you must allocate resources for so-called interactive jobs with the command salloc on a login node. For example, for a serial application that requires 5000 MByte of memory on a compute node and an interactive run limited to 2 hours, execute the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will be logged in automatically on the granted core! To run a serial program on the granted core you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that in this example your serial job must finish within 2 hours; otherwise it will be killed by the system at runtime. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can also start a graphical X11 terminal connected to the dedicated resource, which is available for 2 hours in this example. Start it with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, the resources - i.e. the compute node - will be revoked automatically.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or several compute nodes (e.g. 5 nodes with 40 cores each) usually requires a certain amount of memory per node (e.g. 50 GByte) and a maximum time (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores with 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node. If you want to run MPI programs, simply type mpirun &amp;lt;program_name&amp;gt;; your program will then run on all 200 cores. A very simple example of starting a parallel job is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt by the commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; an example for this can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI, you must start &amp;lt;my_mpi_program&amp;gt; with the command mpiexec.hydra instead of mpirun.&lt;br /&gt;
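For example, the 50-task run shown above could be started with Intel MPI like this (a sketch, assuming the corresponding Intel MPI module has already been loaded):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpiexec.hydra -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;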
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-register-red.png&amp;diff=8065</id>
		<title>File:Bwidm-register-red.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-register-red.png&amp;diff=8065"/>
		<updated>2020-12-17T15:28:32Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8064</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8064"/>
		<updated>2020-12-17T15:27:52Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and the shareholders:&lt;br /&gt;
&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of several universities of applied sciences in Baden-Württemberg, see below) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; [[bwUniCluster_2.0|bwUniCluster 2.0]] a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (not bwUniCluster 2.0)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own respective members. Some have established on-line processes or provide downloads of the entitlement application forms. If there is no link behind the name of an institution in the following list, please contact the local IT support services: &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [https://uni-tuebingen.de/de/155157 Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Furtwangen&lt;br /&gt;
* Hochschule Heilbronn&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration, service password and 2-factor authentication ==&lt;br /&gt;
&lt;br /&gt;
After completing step A, i.e. after the bwUniCluster entitlement has been successfully issued, you have to register for the service. To do so, please visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and complete the following steps.&lt;br /&gt;
&lt;br /&gt;
1. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-register-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation. Enter the user ID / username and password of your home organisation - this is usually the same password used for your e-mail account and other services - and click on &#039;&#039;&#039;Login&#039;&#039;&#039;, &#039;&#039;&#039;Einloggen&#039;&#039;&#039; or something similar.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/]. If you are logging into bwIDM for the first time, there will be a summary screen which shows the account details your home institution is providing to the central system. Please check that all data is valid and then click on &#039;&#039;&#039;Continue&#039;&#039;&#039; or &#039;&#039;&#039;Weiter&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Once you have successfully logged into the bwIDM system, you will be greeted by a home screen showing all state-wide services you have access to. There will be a box labelled &amp;quot;bwUniCluster&amp;quot;. Click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039; to start the registration process.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-2-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Since August 13, 2020, a &#039;&#039;&#039;2-factor authentication&#039;&#039;&#039; (2FA) mechanism has been enforced to improve security. If you have never registered a 2FA token on bwIDM before, the following error message will appear:&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-3-red.png|center|]]&lt;br /&gt;
&lt;br /&gt;
Click on the [https://bwidm.scc.kit.edu/user/twofa.xhtml Link] or on the &#039;&#039;&#039;My Tokens&#039;&#039;&#039; link in the main menu. The instructions for registering a new 2FA token can be found on the following page: [[bwUniCluster 2.0 User Access/2FA Tokens]]. Please complete them before proceeding.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met, you might be able to correct the issue by following the instructions. In all other cases please [[Registration_Support_-_bwUniCluster|contact your local hotline]].&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Read the Terms of Use (&#039;&#039;&#039;Nutzungsbedingungen und -richtlinien&#039;&#039;&#039;), check the box besides &#039;&#039;&#039;I have read and accepted the terms of use&#039;&#039;&#039; and click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Set a service password for the bwUniCluster and click on &#039;&#039;&#039;Save&#039;&#039;&#039; or &#039;&#039;&#039;Speichern&#039;&#039;&#039;. Logging in with the password of your home organisation, like on the former bwUniCluster 1, is no longer possible. Please make sure to use a strong password which is different from any other password you are currently using or have used on other systems. You will also be asked to change the service password regularly.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-5-red.png|center|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: Fill out the bwUniCluster questionnaire ==&lt;br /&gt;
&lt;br /&gt;
Filling out the bwUniCluster questionnaire on&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
is mandatory for all users. The input is solely used to improve our support activities and for capacity planning of future HPC resources. &#039;&#039;&#039;If the questionnaire is not filled out, access to bwUniCluster 2.0 is blocked 14 days after the registration.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing the Service Password ==&lt;br /&gt;
&lt;br /&gt;
Your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the web registration (compare step 8 of chapter 1.2). At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# Go to [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# Authenticate yourself via the user id / username and password provided by your home institution&lt;br /&gt;
# Find the entry &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# Enter the new password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;Password has been changed&amp;quot;) will be shown.&lt;br /&gt;
# Proceed to log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
== Contact / Support ==&lt;br /&gt;
If you have questions or problems concerning the bwUniCluster (2.0) registration, please [[bwUniCluster 2.0 Support|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Establishing network access = &lt;br /&gt;
&lt;br /&gt;
Access to bwUniCluster 2.0 is &#039;&#039;&#039;limited to IP addresses from the so-called BelWü networks&#039;&#039;&#039;. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the campus WiFi) you should be able to connect to bwUniCluster 2.0 without restrictions. If you are outside one of the BelWü networks (e.g. working from home instead of from your campus office), a VPN connection to your home institution has to be established first (see e.g. [1] for the KIT).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
After finishing the web registration and making sure that you are on a network from which you have access to bwUniCluster 2.0 (e.g. by establishing a VPN connection), the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Recommended SSH client applications are:&lt;br /&gt;
&lt;br /&gt;
* the ssh (OpenSSH) command-line client included in all Linux distributions and macOS, usually started from the application &#039;&#039;terminal&#039;&#039;&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The main hostname required to connect to bwUniCluster 2.0 is &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; or &#039;&#039;&#039;uc2.scc.kit.edu&#039;&#039;&#039;. The system has four login nodes and we use so-called &#039;&#039;DNS round-robin scheduling&#039;&#039; to load-balance the incoming connections between the nodes. If you open multiple SSH sessions to bwUniCluster 2.0, these sessions will be established to different login nodes, so processes started in one session might not be visible in other sessions.&lt;br /&gt;
&lt;br /&gt;
The older Broadwell extension partition of the former bwUniCluster 1 is connected to bwUniCluster 2.0. You can use the hostname &#039;&#039;&#039;uc1e.scc.kit.edu&#039;&#039;&#039; to connect to the login nodes of this partition. &lt;br /&gt;
&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login3.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login4.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, fourth login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login1.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login2.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
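&lt;br /&gt;
For example, if you need to check on processes started in an earlier session, you can bypass the round-robin alias and connect to that specific login node directly:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@uc2-login1.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;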
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; is allowed for login. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039; are not allowed for security reasons.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usernames ==&lt;br /&gt;
&lt;br /&gt;
Your username will be the same as the one provided by your home institution, but &#039;&#039;&#039;prefixed&#039;&#039;&#039; with two characters and an underscore indicating your home institution. For example: if you are a member of the University of Konstanz and your local username is ab1234, your username on bwUniCluster 2.0 is kn_ab1234.&lt;br /&gt;
&lt;br /&gt;
The following list contains all prefixes currently in use:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Heilbronn || &#039;&#039;hn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
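&lt;br /&gt;
Applied to the example above, the Konstanz user ab1234 would therefore log in with the prefixed username:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh kn_ab1234@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;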
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: OpenSSH ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux, macOS and *BSD come with a built-in SSH client provided by the OpenSSH project. More recent versions of Windows 10 and the Windows Subsystem for Linux also come with a built-in OpenSSH client.&lt;br /&gt;
&lt;br /&gt;
To use this client, simply open a command line terminal (the exact process differs on every operating system, but usually involves starting an application called &#039;&#039;&#039;Terminal&#039;&#039;&#039; or &#039;&#039;&#039;Command Prompt&#039;&#039;&#039;) and enter the following command to connect to bwUniCluster 2.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on a Linux or Unix system running the X Window System (X11) and want to use a GUI-based application on bwUniCluster 2.0, you can use the &#039;&#039;-X&#039;&#039; option for the ssh command to set up X11 forwarding:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows users requiring X11 forwarding for graphical applications should use &#039;&#039;&#039;MobaXterm&#039;&#039;&#039; instead.&lt;br /&gt;
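&lt;br /&gt;
If you connect regularly, an entry in the OpenSSH client configuration file ~/.ssh/config on your local machine can save typing. The following is only a minimal sketch; the host alias &#039;&#039;uc2&#039;&#039; is an arbitrary example and &#039;&#039;ForwardX11 yes&#039;&#039; corresponds to the &#039;&#039;-X&#039;&#039; option shown above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ~/.ssh/config on your local machine&lt;br /&gt;
Host uc2&lt;br /&gt;
    HostName uc2.scc.kit.edu&lt;br /&gt;
    User &amp;lt;UserID&amp;gt;&lt;br /&gt;
    ForwardX11 yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With such an entry, &#039;&#039;ssh uc2&#039;&#039; is equivalent to the full commands above.&lt;br /&gt;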
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039; on Windows. &#039;&#039;MobaXterm&#039;&#039; provides a built-in X11 server that allows starting GUI-based software.&lt;br /&gt;
&lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039; and fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click &#039;&#039;&#039;OK&#039;&#039;&#039;. A terminal window opens in which you can enter your credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example login process ==&lt;br /&gt;
&lt;br /&gt;
After the connection has been initiated, a successful login process will go through the following three steps:&lt;br /&gt;
&lt;br /&gt;
1. The system asks for a &#039;&#039;&#039;One-Time Password&#039;&#039;&#039;. Generate one using the Software or Hardware Token registered on the bwIDM system (see [[bwUniCluster 2.0 User Access/2FA Tokens]]) and enter it after the &#039;&#039;&#039;Your OTP:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
2. The system asks for your service password. Enter it after the &#039;&#039;&#039;Password:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
3. You are greeted by the bwUniCluster 2.0 banner followed by a shell.&lt;br /&gt;
&lt;br /&gt;
The result should look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login example.png|center|]]&lt;br /&gt;
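&lt;br /&gt;
In plain text, the prompt sequence looks roughly like this (illustrative sketch only; kn_ab1234 is the example username from the Usernames section, and the banner output is omitted):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh kn_ab1234@uc2.scc.kit.edu&lt;br /&gt;
Your OTP:      # step 1: one-time password from your registered 2FA token&lt;br /&gt;
Password:      # step 2: bwIDM service password&lt;br /&gt;
# step 3: banner and shell prompt follow&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;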
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The &amp;quot;Your OTP:&amp;quot; prompt never appears and the connection hangs/times out instead&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: You are most likely not on a network from which access to the bwUniCluster 2.0 system is allowed. Please check if you might have to establish a VPN connection first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the One-Time Password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: The One-Time Password was generated with the wrong token. Make sure you are using the Software or Hardware Token registered on bwIDM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the service password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: The wrong password was entered. Make sure you are using the service password set on bwIDM and not the password of your home institution. Unlike bwUniCluster 1, bwUniCluster 2.0 only accepts the service password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: There is an error message from the pam_ses_open.sh script&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Your account is in the &amp;quot;LOST_ACCESS&amp;quot; state because the entitlement is no longer valid, the questionnaire was not filled out, or there was a problem during the communication between your home institution and the central bwIDM system. Please try the following steps:&lt;br /&gt;
&lt;br /&gt;
* Log into [https://bwidm.scc.kit.edu bwIDM], look for the bwUniCluster entry and click on &#039;&#039;&#039;Registry info&#039;&#039;&#039;. Your &amp;quot;Status:&amp;quot; should be &amp;quot;ACTIVE&amp;quot;. If it is not, please wait about ten minutes: logging into bwIDM triggers a refresh, so the problem may resolve itself. If the status still does not change to ACTIVE after a longer period of time, please contact the support channels.&lt;br /&gt;
&lt;br /&gt;
* If you have not filled out the questionnaire, please do so on [https://zas.bwhpc.de/shib/en/bwunicluster_survey.php https://zas.bwhpc.de/shib/en/bwunicluster_survey.php] and then wait for about ten minutes before attempting to log into the HPC system again.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared with all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Your activities may also include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
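&lt;br /&gt;
As an illustration only (assuming the Slurm-based queueing system described on the [[bwUniCluster Batch Jobs|batch job pages]]; script and program names below are placeholders), a long-running compilation or pre-/postprocessing step would be wrapped in a small job script and submitted with &#039;&#039;sbatch&#039;&#039; instead of being started interactively on a login node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=&amp;lt;partition&amp;gt;   # choose a partition from the batch job documentation&lt;br /&gt;
#SBATCH --ntasks=1                # resource requests are examples only&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
# long-running compilation or pre-/postprocessing goes here&lt;br /&gt;
./my_preprocessing_step&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Such a script would be submitted with &#039;&#039;sbatch myjob.sh&#039;&#039;; the partitions, limits and options that actually apply are documented on the [[bwUniCluster Batch Jobs|batch job pages]].&lt;br /&gt;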
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
In contrast to the bwUniCluster 1 and many other HPC systems it is &#039;&#039;&#039;no longer possible to self-manage your SSH Keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039;. Existing files will no longer be evaluated. SSH Keys have to be managed via the central bwIDM system instead. Please refer to the user guide for this functionality:&lt;br /&gt;
&lt;br /&gt;
[[bwUniCluster 2.0 User Access/SSH Keys]]&lt;br /&gt;
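&lt;br /&gt;
Generating a key pair on your local machine still works the usual way; only the registration of the public key changes. A minimal sketch (key type and comment are examples; the exact upload steps are described on the page linked above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen -t ed25519 -C &amp;quot;bwUniCluster&amp;quot;&lt;br /&gt;
$ cat ~/.ssh/id_ed25519.pub    # register this public key via bwIDM, not via authorized_keys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;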
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
An overview of the first and most important steps on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave bwUniCluster 2.0, follow this deregistration checklist:&lt;br /&gt;
# Transfer all your data in $HOME and in your workspaces to your local computer/storage, and afterwards delete all your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that Step 2 will automatically unsubscribe you from the bwunicluster mail list.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-5-red.png&amp;diff=8063</id>
		<title>File:Bwidm-5-red.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-5-red.png&amp;diff=8063"/>
		<updated>2020-12-17T15:20:59Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-3-red.png&amp;diff=8062</id>
		<title>File:Bwidm-3-red.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-3-red.png&amp;diff=8062"/>
		<updated>2020-12-17T15:19:49Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8061</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8061"/>
		<updated>2020-12-17T15:19:27Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and the shareholders:&lt;br /&gt;
&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of several universities of applied sciences in Baden-Württemberg, see below) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; [[bwUniCluster_2.0|bwUniCluster 2.0]] a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (not bwUniCluster 2.0)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own respective members. Some have established on-line processes or provide downloads of the entitlement application forms. If there is no link behind the name of an institution in the following list, please contact the local IT support services: &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [https://uni-tuebingen.de/de/155157 Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Furtwangen&lt;br /&gt;
* Hochschule Heilbronn&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration, service password and 2-factor authentication ==&lt;br /&gt;
&lt;br /&gt;
After completing step A, i.e. after the bwUniCluster entitlement has been successfully issued, you have to register yourself for the service. To do so, please visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and complete the following steps.&lt;br /&gt;
&lt;br /&gt;
1. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-1-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation. Enter the user ID / username and password of your home organisation - this is usually the same password used for your e-mail account and other services - and click on &#039;&#039;&#039;Login&#039;&#039;&#039;, &#039;&#039;&#039;Einloggen&#039;&#039;&#039; or something similar.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/]. If you are logging into bwIDM for the first time, there will be a summary screen which shows the account details your home institution is providing to the central system. Please check that all data is valid and then click on &#039;&#039;&#039;Continue&#039;&#039;&#039; or &#039;&#039;&#039;Weiter&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Once you have successfully logged into the bwIDM system, you will be greeted by a home screen showing all state-wide services you have access to. There will be a box labelled &amp;quot;bwUniCluster&amp;quot;. Click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039; to start the registration process.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-2-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Since August 13, 2020, a &#039;&#039;&#039;2-factor authentication&#039;&#039;&#039; mechanism (2FA) has been enforced to improve security. If you have never registered a 2FA token on bwIDM before, the following error message will appear:&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-3-red.png|center|]]&lt;br /&gt;
&lt;br /&gt;
Click on the [https://bwidm.scc.kit.edu/user/twofa.xhtml Link] or on the &#039;&#039;&#039;My Tokens&#039;&#039;&#039; link in the main menu. The instructions for registering a new 2FA token can be found on the following page: [[bwUniCluster 2.0 User Access/2FA Tokens]]. Please complete them before proceeding.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met, you might be able to correct the issue by following the instructions. In all other cases please [[Registration_Support_-_bwUniCluster|contact your local hotline]].&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Read the Terms of Use (&#039;&#039;&#039;Nutzungsbedingungen und -richtlinien&#039;&#039;&#039;), check the box next to &#039;&#039;&#039;I have read and accepted the terms of use&#039;&#039;&#039; and click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Set a service password for the bwUniCluster and click on &#039;&#039;&#039;Save&#039;&#039;&#039; or &#039;&#039;&#039;Speichern&#039;&#039;&#039;. Logging in with the password of your home organisation, like on the former bwUniCluster 1, is no longer possible. Please make sure to use a strong password which is different from any other password you are currently using or have used on other systems. You will also be asked to change the service password regularly.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-5-red.png|center|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: Fill out the bwUniCluster questionnaire ==&lt;br /&gt;
&lt;br /&gt;
Filling out the bwUniCluster questionnaire on&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
is mandatory for all users. The input is solely used to improve our support activities and for capacity planning of future HPC resources. &#039;&#039;&#039;If the questionnaire is not filled out, access to bwUniCluster 2.0 is blocked 14 days after the registration.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing the Service Password ==&lt;br /&gt;
&lt;br /&gt;
Your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the web registration (see step 8 of Step B above). At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# Go to [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# Authenticate yourself via the user id / username and password provided by your home institution&lt;br /&gt;
# Find the entry &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# Enter the new password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;) will be shown.&lt;br /&gt;
# Proceed to log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
== Contact / Support ==&lt;br /&gt;
If you have questions or problems concerning the bwUniCluster (2.0) registration, please [[bwUniCluster 2.0 Support|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Establishing network access = &lt;br /&gt;
&lt;br /&gt;
Access to bwUniCluster 2.0 is &#039;&#039;&#039;limited to IP addresses from the so-called BelWü networks&#039;&#039;&#039;. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the campus WiFi) you should be able to connect to bwUniCluster 2.0 without restrictions. If you are outside of one of the BelWü networks (e.g. working from your home office instead of your campus office), a VPN connection to your home institution has to be established first (see e.g. [1] for KIT).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
After finishing the web registration and making sure that you are on a network from which you have access to bwUniCluster 2.0 (e.g. by establishing a VPN connection), the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Recommended SSH client applications are:&lt;br /&gt;
&lt;br /&gt;
* the ssh (OpenSSH) command-line client included in all Linux distributions and in macOS, typically used from the &#039;&#039;Terminal&#039;&#039; application&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The main hostnames for connecting to bwUniCluster 2.0 are &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; and &#039;&#039;&#039;uc2.scc.kit.edu&#039;&#039;&#039;. The system has four login nodes and uses so-called &#039;&#039;DNS round-robin scheduling&#039;&#039; to load-balance the incoming connections between them. If you open multiple SSH sessions to bwUniCluster 2.0, these sessions may be established on different login nodes, so processes started in one session might not be visible in other sessions.&lt;br /&gt;
&lt;br /&gt;
The older Broadwell extension partition of the former bwUniCluster 1 is connected to bwUniCluster 2.0. You can use the hostname &#039;&#039;&#039;uc1e.scc.kit.edu&#039;&#039;&#039; to connect to the login nodes of this partition. &lt;br /&gt;
&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login3.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login4.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, fourth login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login1.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login2.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; is allowed for login. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039; are not allowed for security reasons.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usernames ==&lt;br /&gt;
&lt;br /&gt;
Your username will be the same as the one provided by your home institution, but &#039;&#039;&#039;prefixed&#039;&#039;&#039; with two characters and an underscore indicating your home institution. For example: if you are a member of the University of Konstanz and your local username is ab1234, your username on bwUniCluster 2.0 is kn_ab1234.&lt;br /&gt;
&lt;br /&gt;
The following list contains all prefixes currently in use:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Heilbronn || &#039;&#039;hn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: OpenSSH ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux, macOS and *BSD come with a built-in SSH client provided by the OpenSSH project. More recent versions of Windows 10 and the Windows Subsystem for Linux also come with a built-in OpenSSH client.&lt;br /&gt;
&lt;br /&gt;
To use this client, simply open a command line terminal (the exact process differs on every operating system, but usually involves starting an application called &#039;&#039;&#039;Terminal&#039;&#039;&#039; or &#039;&#039;&#039;Command Prompt&#039;&#039;&#039;) and enter the following command to connect to bwUniCluster 2.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on a Linux or Unix system running the X Window System (X11) and want to use a GUI-based application on bwUniCluster 2.0, you can use the &#039;&#039;-X&#039;&#039; option for the ssh command to set up X11 forwarding:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows users requiring X11 forwarding for graphical applications should use &#039;&#039;&#039;MobaXterm&#039;&#039;&#039; instead.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039; on Windows. &#039;&#039;MobaXterm&#039;&#039; provides a built-in X11 server that allows starting GUI-based software.&lt;br /&gt;
&lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039; and fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click &#039;&#039;&#039;OK&#039;&#039;&#039;. A terminal window opens in which you can enter your credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example login process ==&lt;br /&gt;
&lt;br /&gt;
After the connection has been initiated, a successful login process will go through the following three steps:&lt;br /&gt;
&lt;br /&gt;
1. The system asks for a &#039;&#039;&#039;One-Time Password&#039;&#039;&#039;. Generate one using the Software or Hardware Token registered on the bwIDM system (see [[bwUniCluster 2.0 User Access/2FA Tokens]]) and enter it after the &#039;&#039;&#039;Your OTP:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
2. The system asks for your service password. Enter it after the &#039;&#039;&#039;Password:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
3. You are greeted by the bwUniCluster 2.0 banner followed by a shell.&lt;br /&gt;
&lt;br /&gt;
The result should look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login example.png|center|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The &amp;quot;Your OTP:&amp;quot; prompt never appears and the connection hangs/times out instead&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: You are most likely not on a network from which access to the bwUniCluster 2.0 system is allowed. Please check if you might have to establish a VPN connection first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the One-Time Password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: The One-Time Password was generated with the wrong token. Make sure you are using the Software or Hardware Token registered on bwIDM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the service password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: The wrong password was entered. Make sure you are using the service password set on bwIDM and not the password of your home institution. Unlike bwUniCluster 1, bwUniCluster 2.0 only accepts the service password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: There is an error message from the pam_ses_open.sh script&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Your account is in the &amp;quot;LOST_ACCESS&amp;quot; state because the entitlement is no longer valid, the questionnaire was not filled out, or there was a problem during the communication between your home institution and the central bwIDM system. Please try the following steps:&lt;br /&gt;
&lt;br /&gt;
* Log into [https://bwidm.scc.kit.edu bwIDM], look for the bwUniCluster entry and click on &#039;&#039;&#039;Registry info&#039;&#039;&#039;. Your &amp;quot;Status:&amp;quot; should be &amp;quot;ACTIVE&amp;quot;. If it is not, please wait about ten minutes: logging into bwIDM triggers a refresh, so the problem may resolve itself. If the status still does not change to ACTIVE after a longer period of time, please contact the support channels.&lt;br /&gt;
&lt;br /&gt;
* If you have not filled out the questionnaire, please do so on [https://zas.bwhpc.de/shib/en/bwunicluster_survey.php https://zas.bwhpc.de/shib/en/bwunicluster_survey.php] and then wait for about ten minutes before attempting to log into the HPC system again.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared with all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Your activities may also include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
In contrast to the bwUniCluster 1 and many other HPC systems it is &#039;&#039;&#039;no longer possible to self-manage your SSH Keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039;. Existing files will no longer be evaluated. SSH Keys have to be managed via the central bwIDM system instead. Please refer to the user guide for this functionality:&lt;br /&gt;
&lt;br /&gt;
[[bwUniCluster 2.0 User Access/SSH Keys]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
An overview of the first and most important steps on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave bwUniCluster 2.0, follow this deregistration checklist:&lt;br /&gt;
# Transfer all your data in $HOME and in your workspaces to your local computer/storage, and afterwards delete all your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that Step 2 will automatically unsubscribe you from the bwunicluster mail list.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-1-red.png&amp;diff=8060</id>
		<title>File:Bwidm-1-red.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-1-red.png&amp;diff=8060"/>
		<updated>2020-12-17T15:14:49Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8059</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8059"/>
		<updated>2020-12-17T15:11:54Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and the shareholders:&lt;br /&gt;
&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of several universities of applied sciences in Baden-Württemberg, see below) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; [[bwUniCluster_2.0|bwUniCluster 2.0]] a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (not bwUniCluster 2.0)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own respective members. Some have established on-line processes or provide downloads of the entitlement application forms. If there is no link behind the name of an institution in the following list, please contact the local IT support services: &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [https://uni-tuebingen.de/de/155157 Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Furtwangen&lt;br /&gt;
* Hochschule Heilbronn&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration, service password and 2-factor authentication ==&lt;br /&gt;
&lt;br /&gt;
After completing step A, i.e. after the bwUniCluster entitlement has been successfully issued, you have to register yourself for the service. To do so, please visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and complete the following steps.&lt;br /&gt;
&lt;br /&gt;
1. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-1-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation. Enter the user ID / username and password of your home organisation - this is usually the same password used for your e-mail account and other services - and click on &#039;&#039;&#039;Login&#039;&#039;&#039;, &#039;&#039;&#039;Einloggen&#039;&#039;&#039; or something similar.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/]. If you are logging into bwIDM for the first time, there will be a summary screen which shows the account details your home institution is providing to the central system. Please check that all data is valid and then click on &#039;&#039;&#039;Continue&#039;&#039;&#039; or &#039;&#039;&#039;Weiter&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Once you have successfully logged into the bwIDM system, you will be greeted by a home screen showing all state-wide services you have access to. There will be a box labelled &amp;quot;bwUniCluster&amp;quot;. Click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039; to start the registration process.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-2-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Since August 13, 2020, a &#039;&#039;&#039;2-factor authentication&#039;&#039;&#039; mechanism (2FA) has been enforced to improve security. If you have never registered a 2FA token on bwIDM before, the following error message will appear:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements 2fa.png|center|]]&lt;br /&gt;
&lt;br /&gt;
Click on the [https://bwidm.scc.kit.edu/user/twofa.xhtml Link] or on the &#039;&#039;&#039;My Tokens&#039;&#039;&#039; link in the main menu. The instructions for registering a new 2FA token can be found on the following page: [[bwUniCluster 2.0 User Access/2FA Tokens]]. Please complete them before proceeding.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met, you might be able to correct the issue by following the instructions. In all other cases please [[Registration_Support_-_bwUniCluster|contact your local hotline]].&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Read the Terms of Use (&#039;&#039;&#039;Nutzungsbedingungen und -richtlinien&#039;&#039;&#039;), check the box next to &#039;&#039;&#039;I have read and accepted the terms of use&#039;&#039;&#039; and click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Set a service password for the bwUniCluster and click on &#039;&#039;&#039;Save&#039;&#039;&#039; or &#039;&#039;&#039;Speichern&#039;&#039;&#039;. Logging in with the password of your home organisation, like on the former bwUniCluster 1, is no longer possible. Please make sure to use a strong password which is different from any other password you are currently using or have used on other systems. You will also be asked to change the service password regularly.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm service password.png|center|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: Fill out the bwUniCluster questionnaire ==&lt;br /&gt;
&lt;br /&gt;
Filling out the bwUniCluster questionnaire on&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
is mandatory for all users. The input is solely used to improve our support activities and for capacity planning of future HPC resources. &#039;&#039;&#039;If the questionnaire is not filled out, access to bwUniCluster 2.0 is blocked 14 days after the registration.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing the Service Password ==&lt;br /&gt;
&lt;br /&gt;
Your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the web registration (see step 8 of Step B above). At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# Go to [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# Authenticate yourself via the user id / username and password provided by your home institution&lt;br /&gt;
# Find the entry &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# Enter the new password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;) will be shown.&lt;br /&gt;
# Proceed to log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
== Contact / Support ==&lt;br /&gt;
If you have questions or problems concerning the bwUniCluster (2.0) registration, please [[bwUniCluster 2.0 Support|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Establishing network access = &lt;br /&gt;
&lt;br /&gt;
Access to bwUniCluster 2.0 is &#039;&#039;&#039;limited to IP addresses from the so-called BelWü networks&#039;&#039;&#039;. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the campus WiFi) you should be able to connect to bwUniCluster 2.0 without restrictions. If you are outside of one of the BelWü networks (e.g. working from your home office instead of your campus office), a VPN connection to your home institution has to be established first (see e.g. [1] for KIT).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
After finishing the web registration and making sure that you are on a network from which you have access to bwUniCluster 2.0 (e.g. by establishing a VPN connection), the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Recommended SSH client applications are:&lt;br /&gt;
&lt;br /&gt;
* the ssh (OpenSSH) command-line client included in all Linux distributions and in macOS, typically used from the &#039;&#039;Terminal&#039;&#039; application&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The main hostnames for connecting to bwUniCluster 2.0 are &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; and &#039;&#039;&#039;uc2.scc.kit.edu&#039;&#039;&#039;. The system has four login nodes and uses so-called &#039;&#039;DNS round-robin scheduling&#039;&#039; to load-balance the incoming connections between them. If you open multiple SSH sessions to bwUniCluster 2.0, these sessions may be established on different login nodes, so processes started in one session might not be visible in other sessions.&lt;br /&gt;
&lt;br /&gt;
The older Broadwell extension partition of the former bwUniCluster 1 is connected to bwUniCluster 2.0. You can use the hostname &#039;&#039;&#039;uc1e.scc.kit.edu&#039;&#039;&#039; to connect to the login nodes of this partition. &lt;br /&gt;
&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login3.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login4.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, fourth login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login1.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login2.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; is allowed for login. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039; are not allowed for security reasons.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usernames ==&lt;br /&gt;
&lt;br /&gt;
Your username will be the same as the one provided by your home institution, but &#039;&#039;&#039;prefixed&#039;&#039;&#039; with two characters and an underscore indicating your home institution. For example: if you are a member of the University of Konstanz and your local username is ab1234, your username on bwUniCluster 2.0 is kn_ab1234.&lt;br /&gt;
&lt;br /&gt;
The following list contains all prefixes currently in use:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Heilbronn || &#039;&#039;hn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: OpenSSH ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux, macOS and *BSD come with a built-in SSH client provided by the OpenSSH project. More recent versions of Windows 10 and the Windows Subsystem for Linux also come with a built-in OpenSSH client.&lt;br /&gt;
&lt;br /&gt;
To use this client, simply open a command line terminal (the exact process differs on every operating system, but usually involves starting an application called &#039;&#039;&#039;Terminal&#039;&#039;&#039; or &#039;&#039;&#039;Command Prompt&#039;&#039;&#039;) and enter the following command to connect to bwUniCluster 2.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on a Linux or Unix system running the X Window System (X11) and want to use a GUI-based application on bwUniCluster 2.0, you can use the &#039;&#039;-X&#039;&#039; option for the ssh command to set up X11 forwarding:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows users requiring X11 forwarding for graphical applications should use &#039;&#039;&#039;MobaXterm&#039;&#039;&#039; instead.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039; on Windows. &#039;&#039;MobaXterm&#039;&#039; provides a built-in X11 server that allows you to start GUI-based software.&lt;br /&gt;
 &lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039; and fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click on &#039;OK&#039;. A terminal will open in which you can enter your credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example login process ==&lt;br /&gt;
&lt;br /&gt;
After the connection has been initiated, a successful login process will go through the following three steps:&lt;br /&gt;
&lt;br /&gt;
1. The system asks for a &#039;&#039;&#039;One-Time Password&#039;&#039;&#039;. Generate one using the Software or Hardware Token registered on the bwIDM system (see [[bwUniCluster 2.0 User Access/2FA Tokens]]) and enter it after the &#039;&#039;&#039;Your OTP:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
2. The system asks for your service password. Enter it after the &#039;&#039;&#039;Password:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
3. You are greeted by the bwUniCluster 2.0 banner followed by a shell.&lt;br /&gt;
&lt;br /&gt;
The result should look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login example.png|center|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The &amp;quot;Your OTP:&amp;quot; prompt never appears and the connection hangs/times out instead&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: You are most likely not on a network from which access to the bwUniCluster 2.0 system is allowed. Please check if you might have to establish a VPN connection first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the One-Time Password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the correct Software Token to generate the One-Time Password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the service password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the service password set on bwIDM and not the password of your home institution. Unlike bwUniCluster 1, bwUniCluster 2.0 accepts only the service password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: There is an error message from the pam_ses_open.sh script&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Your account is in the &amp;quot;LOST_ACCESS&amp;quot; state because the entitlement is no longer valid, the questionnaire was not filled out, or there was a problem in the communication between your home institution and the central bwIDM system. Please try the following steps:&lt;br /&gt;
&lt;br /&gt;
* Log into [https://bwidm.scc.kit.edu bwIDM], look for the bwUniCluster entry and click on &#039;&#039;&#039;Registry info&#039;&#039;&#039;. Your &amp;quot;Status:&amp;quot; should be &amp;quot;ACTIVE&amp;quot;. If it is not, please wait about ten minutes, since logging into bwIDM triggers a refresh and the problem might fix itself. If the status does not change to ACTIVE after a longer period of time, please contact the support channels.&lt;br /&gt;
&lt;br /&gt;
* If you have not filled out the questionnaire, please do so on [https://zas.bwhpc.de/shib/en/bwunicluster_survey.php https://zas.bwhpc.de/shib/en/bwunicluster_survey.php] and then wait for about ten minutes before attempting to log into the HPC system again.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared with all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
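&lt;br /&gt;
As a rough, illustrative sketch only (assuming the Slurm-based batch system documented on the [[bwUniCluster Batch Jobs|batch jobs]] page; the script name and resource values are placeholders), a longer compilation could be wrapped in a job script and submitted instead of being run on a login node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat long_build.sh&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# request one task, 30 minutes walltime and 4000 MB of memory (adjust to your workload)&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=00:30:00&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
&lt;br /&gt;
make -j $SLURM_NTASKS   # the long-running work itself&lt;br /&gt;
&lt;br /&gt;
$ sbatch long_build.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;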
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
In contrast to bwUniCluster 1 and many other HPC systems, it is &#039;&#039;&#039;no longer possible to self-manage your SSH keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039;. Existing files are no longer evaluated. SSH keys have to be managed via the central bwIDM system instead. Please refer to the user guide for this functionality:&lt;br /&gt;
&lt;br /&gt;
[[bwUniCluster 2.0 User Access/SSH Keys]]&lt;br /&gt;
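&lt;br /&gt;
The key pair itself is still created on your local machine as usual; only the registration of the public key moves from ~/.ssh/authorized_keys to bwIDM. A minimal sketch with OpenSSH (the key type and file name are only one common choice):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_bwunicluster&lt;br /&gt;
$ cat ~/.ssh/id_ed25519_bwunicluster.pub   # register this public key via bwIDM&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;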
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
First steps and other important information on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave the bwUniCluster 2.0, follow the deregister checklist:&lt;br /&gt;
# Transfer all your data in $HOME and workspace to your local computer/storage and then delete all your data on the cluster (one possible copy command is sketched after this list)&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username and your home-organisational password and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
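&lt;br /&gt;
One possible way to copy data to your local machine before deleting it on the cluster is scp (or rsync) run from your local computer; the directory names below are placeholders only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scp -r &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu:my_results ./bwunicluster_backup&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;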
Note that Step 2 will automatically unsubscribe you from the bwunicluster mail list.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8058</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8058"/>
		<updated>2020-12-17T15:10:24Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and the shareholders:&lt;br /&gt;
&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of several universities of applied sciences in Baden-Württemberg, see below) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; [[bwUniCluster_2.0|bwUniCluster 2.0]] a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (not bwUniCluster 2.0)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own respective members. Some have established on-line processes or provide downloads of the entitlement application forms. If there is no link behind the name of an institution in the following list, please contact the local IT support services: &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [https://uni-tuebingen.de/de/155157 Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Furtwangen&lt;br /&gt;
* Hochschule Heilbronn&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration, service password and 2-factor authentication ==&lt;br /&gt;
&lt;br /&gt;
After completing step A, i.e., after successful issuing of the bwUniCluster entitlement, you have to register yourself for the service. To do so, please visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and complete the following steps.&lt;br /&gt;
&lt;br /&gt;
1. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:BwIDM-1.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation. Enter the user ID / username and password of your home organisation - this is usually the same password used for your e-mail account and other services - and click on &#039;&#039;&#039;Login&#039;&#039;&#039;, &#039;&#039;&#039;Einloggen&#039;&#039;&#039; or something similar.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/]. If you are logging into bwIDM for the first time, there will be a summary screen which shows the account details your home institution is providing to the central system. Please check that all data is valid and then click on &#039;&#039;&#039;Continue&#039;&#039;&#039; or &#039;&#039;&#039;Weiter&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Once you have successfully logged into the bwIDM system, you will be greeted by a home screen showing all state-wide services you have access to. There will be a box labelled &amp;quot;bwUniCluster&amp;quot;. Click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039; to start the registration process.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-2-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Since August 13, 2020, a &#039;&#039;&#039;2-factor authentication&#039;&#039;&#039; mechanism (2FA) has been enforced to improve security. If you have never registered a 2FA token on bwIDM before, the following error message will appear:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements 2fa.png|center|]]&lt;br /&gt;
&lt;br /&gt;
Click on the [https://bwidm.scc.kit.edu/user/twofa.xhtml Link] or on the &#039;&#039;&#039;My Tokens&#039;&#039;&#039; link in the main menu. The instructions for registering a new 2FA token can be found on the following page: [[bwUniCluster 2.0 User Access/2FA Tokens]]. Please complete them before proceeding.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met, you might be able to correct the issue by following the instructions. In all other cases please [[Registration_Support_-_bwUniCluster|contact your local hotline]].&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Read the Terms of Use (&#039;&#039;&#039;Nutzungsbedingungen und -richtlinien&#039;&#039;&#039;), check the box beside &#039;&#039;&#039;I have read and accepted the terms of use&#039;&#039;&#039; and click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Set a service password for the bwUniCluster and click on &#039;&#039;&#039;Save&#039;&#039;&#039; or &#039;&#039;&#039;Speichern&#039;&#039;&#039;. Logging in with the password of your home organisation, like on the former bwUniCluster 1, is no longer possible. Please make sure to use a strong password which is different from any other password you are currently using or have used on other systems. You will also be asked to change the service password regularly.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm service password.png|center|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: Fill out the bwUniCluster questionnaire ==&lt;br /&gt;
&lt;br /&gt;
Filling out the bwUniCluster questionnaire on&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
is mandatory for all users. The input is solely used to improve our support activities and for capacity planning of future HPC resources. &#039;&#039;&#039;If the questionnaire is not filled out, access to bwUniCluster 2.0 is blocked 14 days after registration.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing the Service Password ==&lt;br /&gt;
&lt;br /&gt;
Your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the web registration (see step 8 of Step B above). At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# Go to [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# Authenticate yourself via the user id / username and password provided by your home institution&lt;br /&gt;
# Find the entry &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# Enter the new password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;) will be shown&lt;br /&gt;
# Proceed to log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
== Contact / Support ==&lt;br /&gt;
If you have questions or problems concerning the bwUniCluster (2.0) registration, please [[bwUniCluster 2.0 Support|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Establishing network access = &lt;br /&gt;
&lt;br /&gt;
Access to bwUniCluster 2.0 is &#039;&#039;&#039;limited to IP addresses from the so-called BelWü networks&#039;&#039;&#039;. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the campus WiFi) you should be able to connect to bwUniCluster 2.0 without restrictions. If you are outside one of the BelWü networks (e.g. in your home office instead of your campus office), a VPN connection to your home institution has to be established first (see e.g. [1] for the KIT).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
After finishing the web registration and making sure that you are on a network from which you have access to bwUniCluster 2.0 (e.g. by establishing a VPN connection), the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Recommended SSH client applications are:&lt;br /&gt;
&lt;br /&gt;
* the ssh (OpenSSH) command-line client included in all Linux distributions and in macOS, used from a &#039;&#039;terminal&#039;&#039; application&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The main hostname required to connect to bwUniCluster 2.0 is &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; or &#039;&#039;&#039;uc2.scc.kit.edu&#039;&#039;&#039;. The system has four login nodes and we use so-called &#039;&#039;DNS round-robin scheduling&#039;&#039; to load-balance the incoming connections between the nodes. If you open multiple SSH sessions to bwUniCluster 2.0, these sessions will be established to different login nodes, so processes started in one session might not be visible in other sessions.&lt;br /&gt;
&lt;br /&gt;
The older Broadwell extension partition of the former bwUniCluster 1 is connected to bwUniCluster 2.0. You can use the hostname &#039;&#039;&#039;uc1e.scc.kit.edu&#039;&#039;&#039; to connect to the login nodes of this partition. &lt;br /&gt;
&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login3.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login4.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, fourth login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login1.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login2.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; can be used to log in. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039; are not allowed for security reasons.&lt;br /&gt;
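&lt;br /&gt;
If you need a session on one particular login node, you can pass its hostname from the table above directly to the OpenSSH client, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@uc2-login3.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;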
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usernames ==&lt;br /&gt;
&lt;br /&gt;
Your username will be the same as the one provided by your home institution, but &#039;&#039;&#039;prefixed&#039;&#039;&#039; with two characters and an underscore indicating your home institution. For example, if you are a member of the University of Konstanz and your local username is ab1234, your username on bwUniCluster 2.0 is kn_ab1234.&lt;br /&gt;
&lt;br /&gt;
The following list contains all prefixes currently in use:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Heilbronn || &#039;&#039;hn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
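&lt;br /&gt;
For illustration, with the hypothetical local username ab1234 from the example above, a member of the University of Konstanz would therefore connect as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh kn_ab1234@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;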
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: OpenSSH ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux, macOS and *BSD come with a built-in SSH client provided by the OpenSSH project. More recent versions of Windows 10 and the Windows Subsystem for Linux also come with a built-in OpenSSH client.&lt;br /&gt;
&lt;br /&gt;
To use this client, simply open a command line terminal (the exact process differs on every operating system, but usually involves starting an application called &#039;&#039;&#039;Terminal&#039;&#039;&#039; or &#039;&#039;&#039;Command Prompt&#039;&#039;&#039;) and enter the following command to connect to bwUniCluster 2.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on a Linux or Unix system running the X Window System (X11) and want to use a GUI-based application on bwUniCluster 2.0, you can use the &#039;&#039;-X&#039;&#039; option for the ssh command to set up X11 forwarding:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows users requiring X11 forwarding for graphical applications should use &#039;&#039;&#039;MobaXterm&#039;&#039;&#039; instead.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039; on Windows. &#039;&#039;MobaXterm&#039;&#039; provides a built-in X11 server that allows you to start GUI-based software.&lt;br /&gt;
 &lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039; and fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click on &#039;OK&#039;. A terminal will open in which you can enter your credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example login process ==&lt;br /&gt;
&lt;br /&gt;
After the connection has been initiated, a successful login process will go through the following three steps:&lt;br /&gt;
&lt;br /&gt;
1. The system asks for a &#039;&#039;&#039;One-Time Password&#039;&#039;&#039;. Generate one using the Software or Hardware Token registered on the bwIDM system (see [[bwUniCluster 2.0 User Access/2FA Tokens]]) and enter it after the &#039;&#039;&#039;Your OTP:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
2. The system asks for your service password. Enter it after the &#039;&#039;&#039;Password:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
3. You are greeted by the bwUniCluster 2.0 banner followed by a shell.&lt;br /&gt;
&lt;br /&gt;
The result should look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login example.png|center|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The &amp;quot;Your OTP:&amp;quot; prompt never appears and the connection hangs/times out instead&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: You are most likely not on a network from which access to the bwUniCluster 2.0 system is allowed. Please check if you might have to establish a VPN connection first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the One-Time Password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the correct Software Token to generate the One-Time Password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the service password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the service password set on bwIDM and not the password of your home institution. Unlike bwUniCluster 1, bwUniCluster 2.0 accepts only the service password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: There is an error message from the pam_ses_open.sh script&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Your account is in the &amp;quot;LOST_ACCESS&amp;quot; state because the entitlement is no longer valid, the questionnaire was not filled out, or there was a problem in the communication between your home institution and the central bwIDM system. Please try the following steps:&lt;br /&gt;
&lt;br /&gt;
* Log into [https://bwidm.scc.kit.edu bwIDM], look for the bwUniCluster entry and click on &#039;&#039;&#039;Registry info&#039;&#039;&#039;. Your &amp;quot;Status:&amp;quot; should be &amp;quot;ACTIVE&amp;quot;. If it is not, please wait about ten minutes, since logging into bwIDM triggers a refresh and the problem might fix itself. If the status does not change to ACTIVE after a longer period of time, please contact the support channels.&lt;br /&gt;
&lt;br /&gt;
* If you have not filled out the questionnaire, please do so on [https://zas.bwhpc.de/shib/en/bwunicluster_survey.php https://zas.bwhpc.de/shib/en/bwunicluster_survey.php] and then wait for about ten minutes before attempting to log into the HPC system again.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared with all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
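&lt;br /&gt;
As a rough, illustrative sketch only (assuming the Slurm-based batch system documented on the [[bwUniCluster Batch Jobs|batch jobs]] page; the script name and resource values are placeholders), a longer compilation could be wrapped in a job script and submitted instead of being run on a login node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat long_build.sh&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# request one task, 30 minutes walltime and 4000 MB of memory (adjust to your workload)&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=00:30:00&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
&lt;br /&gt;
make -j $SLURM_NTASKS   # the long-running work itself&lt;br /&gt;
&lt;br /&gt;
$ sbatch long_build.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;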
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
In contrast to bwUniCluster 1 and many other HPC systems, it is &#039;&#039;&#039;no longer possible to self-manage your SSH keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039;. Existing files are no longer evaluated. SSH keys have to be managed via the central bwIDM system instead. Please refer to the user guide for this functionality:&lt;br /&gt;
&lt;br /&gt;
[[bwUniCluster 2.0 User Access/SSH Keys]]&lt;br /&gt;
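&lt;br /&gt;
The key pair itself is still created on your local machine as usual; only the registration of the public key moves from ~/.ssh/authorized_keys to bwIDM. A minimal sketch with OpenSSH (the key type and file name are only one common choice):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_bwunicluster&lt;br /&gt;
$ cat ~/.ssh/id_ed25519_bwunicluster.pub   # register this public key via bwIDM&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;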
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
First steps and other important information on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave the bwUniCluster 2.0, follow the deregister checklist:&lt;br /&gt;
# Transfer all your data in $HOME and workspace to your local computer/storage and then delete all your data on the cluster (one possible copy command is sketched after this list)&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username and your home-organisational password and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
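&lt;br /&gt;
One possible way to copy data to your local machine before deleting it on the cluster is scp (or rsync) run from your local computer; the directory names below are placeholders only:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scp -r &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu:my_results ./bwunicluster_backup&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;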
Note that Step 2 will automatically unsubscribe you from the bwunicluster mail list.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8057</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8057"/>
		<updated>2020-12-17T15:10:07Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and the shareholders:&lt;br /&gt;
&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of several universities of applied sciences in Baden-Württemberg, see below) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; [[bwUniCluster_2.0|bwUniCluster 2.0]] a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (not bwUniCluster 2.0)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own respective members. Some have established on-line processes or provide downloads of the entitlement application forms. If there is no link behind the name of an institution in the following list, please contact the local IT support services: &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [https://uni-tuebingen.de/de/155157 Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Furtwangen&lt;br /&gt;
* Hochschule Heilbronn&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration, service password and 2-factor authentication ==&lt;br /&gt;
&lt;br /&gt;
After completing step A, i.e., after successful issuing of the bwUniCluster entitlement, you have to register yourself for the service. To do so, please visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and complete the following steps.&lt;br /&gt;
&lt;br /&gt;
1. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-1.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation. Enter the user ID / username and password of your home organisation - this is usually the same password used for your e-mail account and other services - and click on &#039;&#039;&#039;Login&#039;&#039;&#039;, &#039;&#039;&#039;Einloggen&#039;&#039;&#039; or something similar.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/]. If you are logging into bwIDM for the first time, there will be a summary screen which shows the account details your home institution is providing to the central system. Please check that all data is valid and then click on &#039;&#039;&#039;Continue&#039;&#039;&#039; or &#039;&#039;&#039;Weiter&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Once you have successfully logged into the bwIDM system, you will be greeted by a home screen showing all state-wide services you have access to. There will be a box labelled &amp;quot;bwUniCluster&amp;quot;. Click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039; to start the registration process.&lt;br /&gt;
&lt;br /&gt;
[[File:Bwidm-2-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Since August 13, 2020, a &#039;&#039;&#039;2-factor authentication&#039;&#039;&#039; mechanism (2FA) has been enforced to improve security. If you have never registered a 2FA token on bwIDM before, the following error message will appear:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements 2fa.png|center|]]&lt;br /&gt;
&lt;br /&gt;
Click on the [https://bwidm.scc.kit.edu/user/twofa.xhtml Link] or on the &#039;&#039;&#039;My Tokens&#039;&#039;&#039; link in the main menu. The instructions for registering a new 2FA token can be found on the following page: [[bwUniCluster 2.0 User Access/2FA Tokens]]. Please complete them before proceeding.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met, you might be able to correct the issue by following the instructions. In all other cases please [[Registration_Support_-_bwUniCluster|contact your local hotline]].&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Read the Terms of Use (&#039;&#039;&#039;Nutzungsbedingungen und -richtlinien&#039;&#039;&#039;), check the box beside &#039;&#039;&#039;I have read and accepted the terms of use&#039;&#039;&#039; and click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Set a service password for the bwUniCluster and click on &#039;&#039;&#039;Save&#039;&#039;&#039; or &#039;&#039;&#039;Speichern&#039;&#039;&#039;. Logging in with the password of your home organisation, like on the former bwUniCluster 1, is no longer possible. Please make sure to use a strong password which is different from any other password you are currently using or have used on other systems. You will also be asked to change the service password regularly.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm service password.png|center|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: Fill out the bwUniCluster questionnaire ==&lt;br /&gt;
&lt;br /&gt;
Filling out the bwUniCluster questionnaire on&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
is mandatory for all users. The input is solely used to improve our support activities and for capacity planning of future HPC resources. &#039;&#039;&#039;If the questionnaire is not filled out, access to bwUniCluster 2.0 is blocked 14 days after registration.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing the Service Password ==&lt;br /&gt;
&lt;br /&gt;
Your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the web registration (see step 8 of Step B above). At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# Go to [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# Authenticate yourself via the user id / username and password provided by your home institution&lt;br /&gt;
# Find the entry &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# Enter the new password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;) will be shown&lt;br /&gt;
# Proceed to log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
== Contact / Support ==&lt;br /&gt;
If you have questions or problems concerning the bwUniCluster (2.0) registration, please [[bwUniCluster 2.0 Support|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Establishing network access = &lt;br /&gt;
&lt;br /&gt;
Access to bwUniCluster 2.0 is &#039;&#039;&#039;limited to IP addresses from the so-called BelWü networks&#039;&#039;&#039;. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the campus WiFi) you should be able to connect to bwUniCluster 2.0 without restrictions. If you are outside one of the BelWü networks (e.g. in your home office instead of your campus office), a VPN connection to your home institution has to be established first (see e.g. [1] for the KIT).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
After finishing the web registration and making sure that you are on a network from which you have access to bwUniCluster 2.0 (e.g. by establishing a VPN connection), the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Recommended SSH client applications are:&lt;br /&gt;
&lt;br /&gt;
* the ssh (OpenSSH) command-line client included in all Linux distributions and in macOS, used from a &#039;&#039;terminal&#039;&#039; application&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The main hostname required to connect to bwUniCluster 2.0 is &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; or &#039;&#039;&#039;uc2.scc.kit.edu&#039;&#039;&#039;. The system has four login nodes and we use so-called &#039;&#039;DNS round-robin scheduling&#039;&#039; to load-balance the incoming connections between the nodes. If you open multiple SSH sessions to bwUniCluster 2.0, these sessions will be established to different login nodes, so processes started in one session might not be visible in other sessions.&lt;br /&gt;
&lt;br /&gt;
The older Broadwell extension partition of the former bwUniCluster 1 is connected to bwUniCluster 2.0. You can use the hostname &#039;&#039;&#039;uc1e.scc.kit.edu&#039;&#039;&#039; to connect to the login nodes of this partition. &lt;br /&gt;
&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login3.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login4.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, fourth login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login1.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login2.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; can be used to log in. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039; are not allowed for security reasons.&lt;br /&gt;
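&lt;br /&gt;
If you need a session on one particular login node, you can pass its hostname from the table above directly to the OpenSSH client, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@uc2-login3.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;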
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usernames ==&lt;br /&gt;
&lt;br /&gt;
Your username will be the same as the one provided by your home institution, but &#039;&#039;&#039;prefixed&#039;&#039;&#039; with two characters and an underscore indicating your home institution. For example, if you are a member of the University of Konstanz and your local username is ab1234, your username on bwUniCluster 2.0 is kn_ab1234.&lt;br /&gt;
&lt;br /&gt;
The following list contains all prefixes currently in use:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Heilbronn || &#039;&#039;hn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
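&lt;br /&gt;
For illustration, with the hypothetical local username ab1234 from the example above, a member of the University of Konstanz would therefore connect as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh kn_ab1234@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;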
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: OpenSSH ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux, macOS and *BSD come with a built-in SSH client provided by the OpenSSH project. More recent versions of Windows 10 and the Windows Subsystem for Linux also come with a built-in OpenSSH client.&lt;br /&gt;
&lt;br /&gt;
To use this client, simply open a command line terminal (the exact process differs on every operating system, but usually involves starting an application called &#039;&#039;&#039;Terminal&#039;&#039;&#039; or &#039;&#039;&#039;Command Prompt&#039;&#039;&#039;) and enter the following command to connect to bwUniCluster 2.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on a Linux or Unix system running the X Window System (X11) and want to use a GUI-based application on bwUniCluster 2.0, you can use the &#039;&#039;-X&#039;&#039; option for the ssh command to set up X11 forwarding:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows users requiring X11 forwarding for graphical applications should use &#039;&#039;&#039;MobaXterm&#039;&#039;&#039; instead.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039; on Windows. &#039;&#039;MobaXterm&#039;&#039; provides a built-in X11 server that allows you to start GUI-based software.&lt;br /&gt;
 &lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039; and fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click on &#039;OK&#039;. A terminal will open in which you can enter your credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example login process ==&lt;br /&gt;
&lt;br /&gt;
After the connection has been initiated, a successful login process will go through the following three steps:&lt;br /&gt;
&lt;br /&gt;
1. The system asks for a &#039;&#039;&#039;One-Time Password&#039;&#039;&#039;. Generate one using the Software or Hardware Token registered on the bwIDM system (see [[bwUniCluster 2.0 User Access/2FA Tokens]]) and enter it after the &#039;&#039;&#039;Your OTP:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
2. The system asks for your service password. Enter it after the &#039;&#039;&#039;Password:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
3. You are greeted by the bwUniCluster 2.0 banner followed by a shell.&lt;br /&gt;
&lt;br /&gt;
The result should look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login example.png|center|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The &amp;quot;Your OTP:&amp;quot; prompt never appears and the connection hangs/times out instead&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: You are most likely not on a network from which access to the bwUniCluster 2.0 system is allowed. Please check if you might have to establish a VPN connection first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the One-Time Password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the correct Software Token to generate the One-Time Password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the service password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the service password set on bwIDM and not the password of your home institution. Unlike bwUniCluster 1, bwUniCluster 2.0 accepts only the service password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: There is an error message from the pam_ses_open.sh script&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Your account is in the &amp;quot;LOST_ACCESS&amp;quot; state because the entitlement is no longer valid, the questionnaire was not filled out, or there was a problem in the communication between your home institution and the central bwIDM system. Please try the following steps:&lt;br /&gt;
&lt;br /&gt;
* Log into [https://bwidm.scc.kit.edu bwIDM], look for the bwUniCluster entry and click on &#039;&#039;&#039;Registry info&#039;&#039;&#039;. Your &amp;quot;Status:&amp;quot; should be &amp;quot;ACTIVE&amp;quot;. If it is not, please wait about ten minutes, since logging into bwIDM triggers a refresh and the problem might fix itself. If the status does not change to ACTIVE after a longer period of time, please contact the support channels.&lt;br /&gt;
&lt;br /&gt;
* If you have not filled out the questionnaire, please do so on [https://zas.bwhpc.de/shib/en/bwunicluster_survey.php https://zas.bwhpc.de/shib/en/bwunicluster_survey.php] and then wait for about ten minutes before attempting to log into the HPC system again.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared with all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
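&lt;br /&gt;
As a hedged illustration only (the available partitions and options are documented on the [[bwUniCluster Batch Jobs|batch jobs]] page; the resource limits below are placeholders), a longer compilation could be wrapped in a small job script and submitted to the queueing system with &#039;&#039;sbatch compile_job.sh&#039;&#039; instead of being run on a login node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# compile_job.sh -- illustrative job script; adjust resources and build command&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=00:30:00&lt;br /&gt;
make&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;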
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
In contrast to bwUniCluster 1 and many other HPC systems, it is &#039;&#039;&#039;no longer possible to self-manage your SSH keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039;. Existing files will no longer be evaluated. SSH keys have to be managed via the central bwIDM system instead. Please refer to the user guide for this functionality:&lt;br /&gt;
&lt;br /&gt;
[[bwUniCluster 2.0 User Access/SSH Keys]]&lt;br /&gt;
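&lt;br /&gt;
Note that the key pair itself is still created locally with standard tools before the public key is registered in bwIDM; the page above describes the actual upload. A minimal, illustrative example (key file name and comment are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_uc2 -C bwUniCluster2&lt;br /&gt;
$ cat ~/.ssh/id_ed25519_uc2.pub&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;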
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave the bwUniCluster 2.0, follow the deregister checklist:&lt;br /&gt;
# Transfer all your data in $HOME and your workspaces to your local computer/storage and then delete your data on the cluster (see the sketch after this checklist)&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username and your home-organisational password and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
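A hedged sketch of the data transfer in step 1, assuming &#039;&#039;rsync&#039;&#039; over SSH; the username and the local target directory are placeholders, and workspaces would need to be copied analogously:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ rsync -av kn_ab1234@uc2.scc.kit.edu:~/ /path/to/local/backup/uc2_home/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;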
Note that Step 2 will automatically unsubscribe you from the bwunicluster mail list.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-2-red.png&amp;diff=8056</id>
		<title>File:Bwidm-2-red.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Bwidm-2-red.png&amp;diff=8056"/>
		<updated>2020-12-17T15:03:27Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8055</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8055"/>
		<updated>2020-12-17T15:03:10Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8054</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8054"/>
		<updated>2020-12-17T15:01:24Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8053</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8053"/>
		<updated>2020-12-17T14:59:00Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and the shareholders:&lt;br /&gt;
&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of several universities of applied sciences in Baden-Württemberg, see below) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; [[bwUniCluster_2.0|bwUniCluster 2.0]] a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (not bwUniCluster 2.0)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own respective members. Some have established on-line processes or provide downloads of the entitlement application forms. If there is no link behind the name of an institution in the following list, please contact the local IT support services: &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [https://uni-tuebingen.de/de/155157 Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Furtwangen&lt;br /&gt;
* Hochschule Heilbronn&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration, service password and 2-factor authentication ==&lt;br /&gt;
&lt;br /&gt;
After completing step A, i.e., after successful issuing of the bwUniCluster entitlement, you have to register yourself for the service. To do so, please visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and complete the following steps.&lt;br /&gt;
&lt;br /&gt;
1. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:BwIDM-1-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation. Enter the user ID / username and password of your home organisation - this is usually the same password used for your e-mail account and other services - and click on &#039;&#039;&#039;Login&#039;&#039;&#039;, &#039;&#039;&#039;Einloggen&#039;&#039;&#039; or something similar.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/]. If you are logging into bwIDM for the first time, there will be a summary screen which shows the account details your home institution is providing to the central system. Please check that all data is valid and then click on &#039;&#039;&#039;Continue&#039;&#039;&#039; or &#039;&#039;&#039;Weiter&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Once you have successfully logged into the bwIDM system, you will be greeted by a home screen showing all state-wide services you have access to. There will be a box labelled &amp;quot;bwUniCluster&amp;quot;. Click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039; to start the registration process.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm service list.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Since August 13, 2020, a &#039;&#039;&#039;2-factor authentication&#039;&#039;&#039; mechanism (2FA) has been enforced to improve security. If you have never registered a 2FA token on bwIDM before, the following error message will appear:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements 2fa.png|center|]]&lt;br /&gt;
&lt;br /&gt;
Click on the [https://bwidm.scc.kit.edu/user/twofa.xhtml Link] or on the &#039;&#039;&#039;My Tokens&#039;&#039;&#039; link in the main menu. The instructions for registering a new 2FA token can be found on the following page: [[bwUniCluster 2.0 User Access/2FA Tokens]]. Please complete them before proceeding.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met, you might be able to correct the issue by following the instructions. In all other cases please [[Registration_Support_-_bwUniCluster|contact your local hotline]].&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Read the Terms of Use (&#039;&#039;&#039;Nutzungsbedingungen und -richtlinien&#039;&#039;&#039;), check the box beside &#039;&#039;&#039;I have read and accepted the terms of use&#039;&#039;&#039; and click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Set a service password for the bwUniCluster and click on &#039;&#039;&#039;Save&#039;&#039;&#039; or &#039;&#039;&#039;Speichern&#039;&#039;&#039;. Logging in with the password of your home organisation, like on the former bwUniCluster 1, is no longer possible. Please make sure to use a strong password which is different from any other password you are currently using or have used on other systems. You will also be asked to change the service password regularly.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm service password.png|center|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: Fill out the bwUniCluster questionnaire ==&lt;br /&gt;
&lt;br /&gt;
Filling out the bwUniCluster questionnaire on&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
is mandatory for all users. The input is solely used to improve our support activities and for capacity planning of future HPC resources. &#039;&#039;&#039;If the questionnaire is not filled out, access to bwUniCluster 2.0 is blocked 14 days after registration.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing the Service Password ==&lt;br /&gt;
&lt;br /&gt;
Your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the web registration (see step 8 of Step B above). At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# Go to [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# Authenticate yourself via the user id / username and password provided by your home institution&lt;br /&gt;
# Find the entry &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# Enter the new password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;Password has been changed&amp;quot;) will be shown&lt;br /&gt;
# Proceed to log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
== Contact / Support ==&lt;br /&gt;
If you have questions or problems concerning the bwUniCluster (2.0) registration, please [[bwUniCluster 2.0 Support|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Establishing network access = &lt;br /&gt;
&lt;br /&gt;
Access to bwUniCluster 2.0 is &#039;&#039;&#039;limited to IP addresses from the so-called BelWü networks&#039;&#039;&#039;. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the campus WiFi) you should be able to connect to bwUniCluster 2.0 without restrictions. If you are outside of one of the BelWü networks (e.g. in your home office instead of your campus office), a VPN connection to your home institution has to be established first (see e.g. [1] for the KIT).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
After finishing the web registration and making sure that you are on a network from which you have access to bwUniCluster 2.0 (e.g. by establishing a VPN connection), the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Recommended SSH client applications are:&lt;br /&gt;
&lt;br /&gt;
* the ssh command (OpenSSH), included in all Linux distributions and in macOS; run it from a &#039;&#039;terminal&#039;&#039; application&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The main hostname required to connect to bwUniCluster 2.0 is &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; or &#039;&#039;&#039;uc2.scc.kit.edu&#039;&#039;&#039;. The system has four login nodes and we use so-called &#039;&#039;DNS round-robin scheduling&#039;&#039; to load-balance the incoming connections between the nodes. If you open multiple SSH sessions to bwUniCluster 2.0, these sessions will be established to different login nodes, so processes started in one session might not be visible in other sessions.&lt;br /&gt;
&lt;br /&gt;
The older Broadwell extension partition of the former bwUniCluster 1 is connected to bwUniCluster 2.0. You can use the hostname &#039;&#039;&#039;uc1e.scc.kit.edu&#039;&#039;&#039; to connect to the login nodes of this partition. &lt;br /&gt;
&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login3.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login4.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, fourth login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login1.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login2.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
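&lt;br /&gt;
For example, to open a session on one specific login node with the OpenSSH client described below (a minimal sketch; &amp;lt;UserID&amp;gt; stands for your own username):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@uc2-login3.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;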
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; is allowed for login. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039; are not allowed for security reasons.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usernames ==&lt;br /&gt;
&lt;br /&gt;
Your username will be the same as the one provided by your home institution, but &#039;&#039;&#039;prefixed&#039;&#039;&#039; with two characters and an underscore indicating your home institution. For example, if you are a member of the University of Konstanz and your local username is ab1234, your username on bwUniCluster 2.0 is kn_ab1234.&lt;br /&gt;
&lt;br /&gt;
The following list contains all prefixes currently in use:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Heilbronn || &#039;&#039;hn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: OpenSSH ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux, macOS and *BSD come with a built-in SSH client provided by the OpenSSH project. More recent versions of Windows 10 and the Windows Subsystem for Linux also come with a built-in OpenSSH client.&lt;br /&gt;
&lt;br /&gt;
To use this client, simply open a command line terminal (the exact process differs on every operating system, but usually involves starting an application called &#039;&#039;&#039;Terminal&#039;&#039;&#039; or &#039;&#039;&#039;Command Prompt&#039;&#039;&#039;) and enter the following command to connect to bwUniCluster 2.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on a Linux or Unix system running the X Window System (X11) and want to use a GUI-based application on bwUniCluster 2.0, you can use the &#039;&#039;-X&#039;&#039; option for the ssh command to set up X11 forwarding:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
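&lt;br /&gt;
Once the X11-forwarded session is open, you can check that forwarding works by starting a simple graphical program, for instance (a sketch assuming &#039;&#039;xterm&#039;&#039; is installed on the login nodes):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;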
&lt;br /&gt;
Windows users requiring X11 forwarding for graphical applications should use &#039;&#039;&#039;MobaXterm&#039;&#039;&#039; instead.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039; on Windows. &#039;&#039;MobaXterm&#039;&#039; provides a built-in X11 server, allowing you to start GUI-based software.&lt;br /&gt;
 &lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039;, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click &#039;&#039;&#039;OK&#039;&#039;&#039;. A terminal window will open in which you can enter your credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example login process ==&lt;br /&gt;
&lt;br /&gt;
After the connection has been initiated, a successful login process will go through the following three steps:&lt;br /&gt;
&lt;br /&gt;
1. The system asks for a &#039;&#039;&#039;One-Time Password&#039;&#039;&#039;. Generate one using the Software or Hardware Token registered on the bwIDM system (see [[bwUniCluster 2.0 User Access/2FA Tokens]]) and enter it after the &#039;&#039;&#039;Your OTP:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
2. The system asks for your service password. Enter it after the &#039;&#039;&#039;Password:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
3. You are greeted by the bwUniCluster 2.0 banner followed by a shell.&lt;br /&gt;
&lt;br /&gt;
The result should look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login example.png|center|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The &amp;quot;Your OTP:&amp;quot; prompt never appears and the connection hangs/times out instead&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: You are most likely not on a network from which access to the bwUniCluster 2.0 system is allowed. Please check if you might have to establish a VPN connection first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the One-Time Password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the correct Software Token to generate the One-Time Password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the service password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the service password set on bwIDM and not the password valid for your home institution. Unlike the bwUniCluster 1, the bwUniCluster 2.0 only accepts the service password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: There is an error message from the pam_ses_open.sh script&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Your account is in the &amp;quot;LOST_ACCESS&amp;quot; state because the entitlement is no longer valid, the questionnaire was not filled out, or there was a problem during the communication between your home institution and the central bwIDM system. Please try the following steps:&lt;br /&gt;
&lt;br /&gt;
* Log into [https://bwidm.scc.kit.edu bwIDM], look for the bwUniCluster entry and click on &#039;&#039;&#039;Registry info&#039;&#039;&#039;. Your &amp;quot;Status:&amp;quot; should be &amp;quot;ACTIVE&amp;quot;. If it is not, please wait about ten minutes, since logging into bwIDM triggers a refresh and the problem might fix itself. If the status still does not change to ACTIVE, please contact the support channels.&lt;br /&gt;
&lt;br /&gt;
* If you have not filled out the questionnaire, please do so on [https://zas.bwhpc.de/shib/en/bwunicluster_survey.php https://zas.bwhpc.de/shib/en/bwunicluster_survey.php] and then wait about ten minutes before attempting to log into the HPC system again.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared with all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Your activities may also include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
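&lt;br /&gt;
If, for example, a compilation takes too long for a login node, it can be wrapped in a small job script and handed to the queueing system instead. The following is only a minimal sketch assuming the Slurm-based batch system described on the [[bwUniCluster Batch Jobs|batch jobs page]]; the queue name and resource limits are placeholders and must be taken from that documentation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=&amp;lt;queue&amp;gt;    # queue name, see the batch jobs documentation&lt;br /&gt;
#SBATCH --ntasks=1               # a single task is sufficient for a compilation&lt;br /&gt;
#SBATCH --time=00:30:00          # walltime limit (hh:mm:ss)&lt;br /&gt;
#SBATCH --mem=4000               # memory in MB&lt;br /&gt;
&lt;br /&gt;
# long-running compilation that must not block a login node&lt;br /&gt;
make&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script would then be submitted with &#039;&#039;sbatch&#039;&#039;, e.g. &#039;&#039;sbatch compile_job.sh&#039;&#039;, where the file name is again just an example.&lt;br /&gt;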
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
In contrast to bwUniCluster 1 and many other HPC systems, it is &#039;&#039;&#039;no longer possible to self-manage your SSH keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039;. Existing files are no longer evaluated. SSH keys have to be managed via the central bwIDM system instead. Please refer to the user guide for this functionality:&lt;br /&gt;
&lt;br /&gt;
[[bwUniCluster 2.0 User Access/SSH Keys]]&lt;br /&gt;
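&lt;br /&gt;
The key pair itself is still generated locally with the usual OpenSSH tooling; only the resulting public key is then uploaded through the bwIDM web interface as described in the guide above. A minimal sketch (the file name is just an example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_bwunicluster&lt;br /&gt;
$ cat ~/.ssh/id_ed25519_bwunicluster.pub    # paste this public key into bwIDM&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;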
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave bwUniCluster 2.0, follow this deregistration checklist:&lt;br /&gt;
# Transfer all your data in $HOME and in your workspaces to your local computer/storage and then delete all your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter the user ID / username and password of your home organisation and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that step 2 will automatically unsubscribe you from the bwunicluster mailing list.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8052</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8052"/>
		<updated>2020-12-17T14:15:29Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and the shareholders:&lt;br /&gt;
&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of several universities of applied sciences in Baden-Württemberg, see below) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; [[bwUniCluster_2.0|bwUniCluster 2.0]] a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (not bwUniCluster 2.0)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own respective members. Some have established on-line processes or provide downloads of the entitlement application forms. If there is no link behind the name of an institution in the following list, please contact the local IT support services: &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [https://uni-tuebingen.de/de/155157 Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Furtwangen&lt;br /&gt;
* Hochschule Heilbronn&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration, service password and 2-factor authentication ==&lt;br /&gt;
&lt;br /&gt;
After completing step A, i.e., after successful issuing of the bwUniCluster entitlement, you have to register yourself for the service. To do so, please visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and complete the following steps.&lt;br /&gt;
&lt;br /&gt;
1. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:bwIDM-1-red.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation. Enter the user ID / username and password of your home organisation - this is usually the same password used for your e-mail account and other services - and click on &#039;&#039;&#039;Login&#039;&#039;&#039;, &#039;&#039;&#039;Einloggen&#039;&#039;&#039; or something similar.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/]. If you are logging into bwIDM for the first time, there will be a summary screen which shows the account details your home institution is providing to the central system. Please check that all data is valid and then click on &#039;&#039;&#039;Continue&#039;&#039;&#039; or &#039;&#039;&#039;Weiter&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Once you have successfully logged into the bwIDM system, you will be greeted by a home screen showing all state-wide services you have access to. There will be a box labelled &amp;quot;bwUniCluster&amp;quot;. Click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039; to start the registration process.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm service list.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Since August 13, 2020, a &#039;&#039;&#039;2-factor authentication&#039;&#039;&#039; mechanism (2FA) has been enforced to improve security. If you have never registered a 2FA token on bwIDM before, the following error message will appear:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements 2fa.png|center|]]&lt;br /&gt;
&lt;br /&gt;
Click on the [https://bwidm.scc.kit.edu/user/twofa.xhtml Link] or on the &#039;&#039;&#039;My Tokens&#039;&#039;&#039; link in the main menu. The instructions for registering a new 2FA token can be found on the following page: [[bwUniCluster 2.0 User Access/2FA Tokens]]. Please complete them before proceeding.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met, you might be able to correct the issue by following the instructions. In all other cases please [[Registration_Support_-_bwUniCluster|contact your local hotline]].&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Read the Terms of Use (&#039;&#039;&#039;Nutzungsbedingungen und -richtlinien&#039;&#039;&#039;), check the box beside &#039;&#039;&#039;I have read and accepted the terms of use&#039;&#039;&#039; and click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Set a service password for the bwUniCluster and click on &#039;&#039;&#039;Save&#039;&#039;&#039; or &#039;&#039;&#039;Speichern&#039;&#039;&#039;. Logging in with the password of your home organisation, like on the former bwUniCluster 1, is no longer possible. Please make sure to use a strong password which is different from any other password you are currently using or have used on other systems. You will also be asked to change the service password regularly.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm service password.png|center|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: Fill out the bwUniCluster questionnaire ==&lt;br /&gt;
&lt;br /&gt;
Filling out the bwUniCluster questionnaire on&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
is mandatory for all users. The input is solely used to improve our support activities and for capacity planning of future HPC resources. &#039;&#039;&#039;If the questionnaire is not filled out, access to bwUniCluster 2.0 is blocked 14 days after registration.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing the Service Password ==&lt;br /&gt;
&lt;br /&gt;
Your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the web registration (see step 8 of Step B above). At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# Go to [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# Authenticate yourself via the user id / username and password provided by your home institution&lt;br /&gt;
# Find the entry &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# Enter the new password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;Password has been changed&amp;quot;) will be shown&lt;br /&gt;
# Proceed to log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
== Contact / Support ==&lt;br /&gt;
If you have questions or problems concerning the bwUniCluster (2.0) registration, please [[bwUniCluster 2.0 Support|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Establishing network access = &lt;br /&gt;
&lt;br /&gt;
Access to bwUniCluster 2.0 is &#039;&#039;&#039;limited to IP addresses from the so-called BelWü networks&#039;&#039;&#039;. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the campus WiFi) you should be able to connect to bwUniCluster 2.0 without restrictions. If you are outside of one of the BelWü networks (e.g. in your home office instead of your campus office), a VPN connection to your home institution has to be established first (see e.g. [1] for the KIT).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
After finishing the web registration and making sure that you are on a network from which you have access to bwUniCluster 2.0 (e.g. by establishing a VPN connection), the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Recommended SSH client applications are:&lt;br /&gt;
&lt;br /&gt;
* the ssh command (OpenSSH), included in all Linux distributions and in macOS; run it from a &#039;&#039;terminal&#039;&#039; application&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The main hostname required to connect to bwUniCluster 2.0 is &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; or &#039;&#039;&#039;uc2.scc.kit.edu&#039;&#039;&#039;. The system has four login nodes and we use so-called &#039;&#039;DNS round-robin scheduling&#039;&#039; to load-balance the incoming connections between the nodes. If you open multiple SSH sessions to bwUniCluster 2.0, these sessions will be established to different login nodes, so processes started in one session might not be visible in other sessions.&lt;br /&gt;
&lt;br /&gt;
The older Broadwell extension partition of the former bwUniCluster 1 is connected to bwUniCluster 2.0. You can use the hostname &#039;&#039;&#039;uc1e.scc.kit.edu&#039;&#039;&#039; to connect to the login nodes of this partition. &lt;br /&gt;
&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login3.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login4.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, fourth login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login1.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login2.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; is allowed for login. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039; are not allowed for security reasons.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usernames ==&lt;br /&gt;
&lt;br /&gt;
Your username will be the same as the one provided by your home institution, but &#039;&#039;&#039;prefixed&#039;&#039;&#039; with two characters and an underscore indicating your home institution. For example, if you are a member of the University of Konstanz and your local username is ab1234, your username on bwUniCluster 2.0 is kn_ab1234.&lt;br /&gt;
&lt;br /&gt;
The following list contains all prefixes currently in use:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Heilbronn || &#039;&#039;hn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: OpenSSH ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux, macOS and *BSD come with a built-in SSH client provided by the OpenSSH project. More recent versions of Windows 10 and the Windows Subsystem for Linux also come with a built-in OpenSSH client.&lt;br /&gt;
&lt;br /&gt;
To use this client, simply open a command line terminal (the exact process differs on every operating system, but usually involves starting an application called &#039;&#039;&#039;Terminal&#039;&#039;&#039; or &#039;&#039;&#039;Command Prompt&#039;&#039;&#039;) and enter the following command to connect to bwUniCluster 2.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on a Linux or Unix system running the X Window System (X11) and want to use a GUI-based application on bwUniCluster 2.0, you can use the &#039;&#039;-X&#039;&#039; option for the ssh command to set up X11 forwarding:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows users requiring X11 forwarding for graphical applications should use &#039;&#039;&#039;MobaXterm&#039;&#039;&#039; instead.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039; on Windows. &#039;&#039;MobaXterm&#039;&#039; provides a built-in X11 server, allowing you to start GUI-based software.&lt;br /&gt;
 &lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039;, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click &#039;&#039;&#039;OK&#039;&#039;&#039;. A terminal window will open in which you can enter your credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example login process ==&lt;br /&gt;
&lt;br /&gt;
After the connection has been initiated, a successful login process will go through the following three steps:&lt;br /&gt;
&lt;br /&gt;
1. The system asks for a &#039;&#039;&#039;One-Time Password&#039;&#039;&#039;. Generate one using the Software or Hardware Token registered on the bwIDM system (see [[bwUniCluster 2.0 User Access/2FA Tokens]]) and enter it after the &#039;&#039;&#039;Your OTP:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
2. The system asks for your service password. Enter it after the &#039;&#039;&#039;Password:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
3. You are greeted by the bwUniCluster 2.0 banner followed by a shell.&lt;br /&gt;
&lt;br /&gt;
The result should look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login example.png|center|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The &amp;quot;Your OTP:&amp;quot; prompt never appears and the connection hangs/times out instead&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: You are most likely not on a network from which access to the bwUniCluster 2.0 system is allowed. Please check if you might have to establish a VPN connection first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the One-Time Password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the correct Software Token to generate the One-Time Password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the service password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the service password set on bwIDM and not the password valid for your home institution. Unlike the bwUniCluster 1, the bwUniCluster 2.0 only accepts the service password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: There is an error message from the pam_ses_open.sh script&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Your account is in the &amp;quot;LOST_ACCESS&amp;quot; state because the entitlement is no longer valid, the questionnaire was not filled out, or there was a problem during the communication between your home institution and the central bwIDM system. Please try the following steps:&lt;br /&gt;
&lt;br /&gt;
* Log into [https://bwidm.scc.kit.edu bwIDM], look for the bwUniCluster entry and click on &#039;&#039;&#039;Registry info&#039;&#039;&#039;. Your &amp;quot;Status:&amp;quot; should be &amp;quot;ACTIVE&amp;quot;. If it is not, please wait about ten minutes, since logging into bwIDM triggers a refresh and the problem might fix itself. If the status still does not change to ACTIVE, please contact the support channels.&lt;br /&gt;
&lt;br /&gt;
* If you have not filled out the questionnaire, please do so on [https://zas.bwhpc.de/shib/en/bwunicluster_survey.php https://zas.bwhpc.de/shib/en/bwunicluster_survey.php] and then wait about ten minutes before attempting to log into the HPC system again.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared with all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Your activities may also include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
In contrast to bwUniCluster 1 and many other HPC systems, it is &#039;&#039;&#039;no longer possible to self-manage your SSH keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039;. Existing files are no longer evaluated. SSH keys have to be managed via the central bwIDM system instead. Please refer to the user guide for this functionality:&lt;br /&gt;
&lt;br /&gt;
[[bwUniCluster 2.0 User Access/SSH Keys]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave bwUniCluster 2.0, follow this deregistration checklist:&lt;br /&gt;
# Transfer all your data in $HOME and in your workspaces to your local computer/storage and then delete all your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter the user ID / username and password of your home organisation and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that step 2 will automatically unsubscribe you from the bwunicluster mailing list.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:BwIDM-1.png&amp;diff=8051</id>
		<title>File:BwIDM-1.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:BwIDM-1.png&amp;diff=8051"/>
		<updated>2020-12-17T14:12:57Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8050</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=8050"/>
		<updated>2020-12-17T14:04:42Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Step B: Web Registration, service password and 2-factor authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and the shareholders:&lt;br /&gt;
&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of several universities of applied sciences in Baden-Württemberg, see below) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; [[bwUniCluster_2.0|bwUniCluster 2.0]] a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (not bwUniCluster 2.0)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own respective members. Some have established on-line processes or provide downloads of the entitlement application forms. If there is no link behind the name of an institution in the following list, please contact the local IT support services: &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [https://uni-tuebingen.de/de/155157 Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Furtwangen&lt;br /&gt;
* Hochschule Heilbronn&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration, service password and 2-factor authentication ==&lt;br /&gt;
&lt;br /&gt;
After completing step A, i.e., after successful issuing of the bwUniCluster entitlement, you have to register yourself for the service. To do so, please visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and complete the following steps.&lt;br /&gt;
&lt;br /&gt;
1. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm select institution_new.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation. Enter the user ID / username and password of your home organisation - this is usually the same password used for your e-mail account and other services - and click on &#039;&#039;&#039;Login&#039;&#039;&#039;, &#039;&#039;&#039;Einloggen&#039;&#039;&#039; or something similar.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/]. If you are logging into bwIDM for the first time, there will be a summary screen which shows the account details your home institution is providing to the central system. Please check that all data is valid and then click on &#039;&#039;&#039;Continue&#039;&#039;&#039; or &#039;&#039;&#039;Weiter&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Once you have successfully logged into the bwIDM system, you will be greeted by a home screen showing all state-wide services you have access to. There will be a box labelled &amp;quot;bwUniCluster&amp;quot;. Click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039; to start the registration process.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm service list.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Since August 13, 2020, a &#039;&#039;&#039;2-factor authentication&#039;&#039;&#039; (2FA) mechanism has been enforced to improve security. If you have never registered a 2FA token on bwIDM before, the following error message will appear:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements 2fa.png|center|]]&lt;br /&gt;
&lt;br /&gt;
Click on the [https://bwidm.scc.kit.edu/user/twofa.xhtml Link] or on the &#039;&#039;&#039;My Tokens&#039;&#039;&#039; link in the main menu. The instructions for registering a new 2FA token can be found on the following page: [[bwUniCluster 2.0 User Access/2FA Tokens]]. Please complete them before proceeding.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met, you might be able to correct the issue by following the instructions. In all other cases, please [[Registration_Support_-_bwUniCluster|contact your local hotline]].&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Read the Terms of Use (&#039;&#039;&#039;Nutzungsbedingungen und -richtlinien&#039;&#039;&#039;), check the box besides &#039;&#039;&#039;I have read and accepted the terms of use&#039;&#039;&#039; and click on &#039;&#039;&#039;Register&#039;&#039;&#039; or &#039;&#039;&#039;Registrieren&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Set a service password for the bwUniCluster and click on &#039;&#039;&#039;Save&#039;&#039;&#039; or &#039;&#039;&#039;Speichern&#039;&#039;&#039;. Logging in with the password of your home organisation, like on the former bwUniCluster 1, is no longer possible. Please make sure to use a strong password which is different from any other password you are currently using or have used on other systems. You will also be asked to change the service password regularly.&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login bwidm service password.png|center|]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: Fill out the bwUniCluster questionnaire ==&lt;br /&gt;
&lt;br /&gt;
Filling out the bwUniCluster questionnaire on&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
is mandatory for all users. The input is solely used to improve our support activities and for capacity planning of future HPC resources. &#039;&#039;&#039;If the questionnaire is not filled out, access to bwUniCluster 2.0 will be blocked 14 days after registration.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing the Service Password ==&lt;br /&gt;
&lt;br /&gt;
Your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the web registration (see step 8 of Step B). At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# Go to [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# Authenticate yourself via the user id / username and password provided by your home institution&lt;br /&gt;
# Find the entry &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# Enter the new password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for the service&amp;quot;) will be shown&lt;br /&gt;
# Proceed to log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
== Contact / Support ==&lt;br /&gt;
If you have questions or problems concerning the bwUniCluster (2.0) registration, please [[bwUniCluster 2.0 Support|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Establishing network access = &lt;br /&gt;
&lt;br /&gt;
Access to bwUniCluster 2.0 is &#039;&#039;&#039;limited to IP addresses from the so-called BelWü networks&#039;&#039;&#039;. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the campus WiFi), you should be able to connect to bwUniCluster 2.0 without restrictions. If you are outside of the BelWü networks (e.g. working from home instead of from your campus office), a VPN connection to your home institution has to be established first (see e.g. [1] for the KIT).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&lt;br /&gt;
After finishing the web registration and making sure that you are on a network from which you have access to bwUniCluster 2.0 (e.g. by establishing a VPN connection), the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Recommended SSH client applications are:&lt;br /&gt;
&lt;br /&gt;
* the ssh (OpenSSH) command included in all Linux distributions and in macOS, typically run from the application &#039;&#039;terminal&#039;&#039;&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The main hostname used to connect to bwUniCluster 2.0 is &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; or &#039;&#039;&#039;uc2.scc.kit.edu&#039;&#039;&#039;. The system has four login nodes and we use so-called &#039;&#039;DNS round-robin scheduling&#039;&#039; to load-balance the incoming connections between the nodes. If you open multiple SSH sessions to bwUniCluster 2.0, these sessions may be established on different login nodes, so processes started in one session might not be visible in other sessions.&lt;br /&gt;
&lt;br /&gt;
The older Broadwell extension partition of the former bwUniCluster 1 is connected to bwUniCluster 2.0. You can use the hostname &#039;&#039;&#039;uc1e.scc.kit.edu&#039;&#039;&#039; to connect to the login nodes of this partition. &lt;br /&gt;
&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login3.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc2-login4.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 2.0, fourth login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login1.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc1e-login2.scc.kit.edu&#039;&#039;&#039; || Broadwell partition, second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
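For example, to open an SSH session on a specific login node, the generic hostname can simply be replaced by one of the hostnames above (the &amp;lt;UserID&amp;gt; placeholder stands for your username as described in the next section):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@uc2-login1.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;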
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; is allowed for login. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039; are not allowed for security reasons.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usernames ==&lt;br /&gt;
&lt;br /&gt;
Your username will be the same as the one provided by your home institution, but &#039;&#039;&#039;prefixed&#039;&#039;&#039; with two characters and an underscore indicating your home institution. For example: if you are a member of the University of Konstanz and your local username is ab1234, your username on bwUniCluster 2.0 is kn_ab1234.&lt;br /&gt;
&lt;br /&gt;
The following list contains all prefixes currently in use:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Heilbronn || &#039;&#039;hn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
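Combined with the SSH clients described below, a purely illustrative login of a University of Konstanz member with local username ab1234 would therefore look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh kn_ab1234@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;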
&lt;br /&gt;
== Client application: OpenSSH ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux, macOS and *BSD come with a built-in SSH client provided by the OpenSSH project. More recent versions of Windows 10 and the Windows Subsystem for Linux also come with a built-in OpenSSH client.&lt;br /&gt;
&lt;br /&gt;
To use this client, simply open a command line terminal (the exact process differs on every operating system, but usually involves starting an application called &#039;&#039;&#039;Terminal&#039;&#039;&#039; or &#039;&#039;&#039;Command Prompt&#039;&#039;&#039;) and enter the following command to connect to bwUniCluster 2.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are on a Linux or Unix system running the X Window System (X11) and want to use a GUI-based application on bwUniCluster 2.0, you can use the &#039;&#039;-X&#039;&#039; option for the ssh command to set up X11 forwarding:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows users requiring X11 forwarding for graphical applications should use &#039;&#039;&#039;MobaXterm&#039;&#039;&#039; instead.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Client application: MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039; on Windows. &#039;&#039;MobaXterm&#039;&#039; provides a built-in X11 server that allows you to start GUI-based software.&lt;br /&gt;
 &lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039; and fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click on &#039;&#039;&#039;OK&#039;&#039;&#039;. A terminal will open in which you can enter your credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example login process ==&lt;br /&gt;
&lt;br /&gt;
After the connection has been initiated, a successful login process will go through the following three steps:&lt;br /&gt;
&lt;br /&gt;
1. The system asks for a &#039;&#039;&#039;One-Time Password&#039;&#039;&#039;. Generate one using the Software or Hardware Token registered on the bwIDM system (see [[bwUniCluster 2.0 User Access/2FA Tokens]]) and enter it after the &#039;&#039;&#039;Your OTP:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
2. The system asks for your service password. Enter it after the &#039;&#039;&#039;Password:&#039;&#039;&#039; prompt.&lt;br /&gt;
&lt;br /&gt;
3. You are greeted by the bwUniCluster 2.0 banner followed by a shell.&lt;br /&gt;
&lt;br /&gt;
The result should look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:BwUniCluster 2.0 access login example.png|center|]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The &amp;quot;Your OTP:&amp;quot; prompt never appears and the connection hangs/times out instead&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: You are most likely not on a network from which access to the bwUniCluster 2.0 system is allowed. Please check if you might have to establish a VPN connection first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the One-Time Password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the correct Software Token to generate the One-Time Password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: The system asks for the service password multiple times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Make sure you are using the service password set on bwIDM and not the password valid for your home institution. Unlike the bwUniCluster 1, the bwUniCluster 2.0 only accepts the service password.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Issue: There is an error message from the pam_ses_open.sh script&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Likely cause: Your account is in the &amp;quot;LOST_ACCESS&amp;quot; state because the entitlement is no longer valid, the questionnaire was not filled out, or there was a problem in the communication between your home institution and the central bwIDM system. Please try the following steps:&lt;br /&gt;
&lt;br /&gt;
* Log into [https://bwidm.scc.kit.edu bwIDM], look for the bwUniCluster entry and click on &#039;&#039;&#039;Registry info&#039;&#039;&#039;. Your &amp;quot;Status:&amp;quot; should be &amp;quot;ACTIVE&amp;quot;. If it is not, please wait about ten minutes, since logging into bwIDM triggers a refresh and the problem might resolve itself. If the status does not change to ACTIVE after a longer period of time, please contact the support channels.&lt;br /&gt;
&lt;br /&gt;
* If you have not filled out the questionnaire, please do so on [https://zas.bwhpc.de/shib/en/bwunicluster_survey.php https://zas.bwhpc.de/shib/en/bwunicluster_survey.php] and then wait about ten minutes before attempting to log into the HPC system again.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared by all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
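As a minimal sketch of how such work can be moved off the login nodes (partition, time limit and build command are placeholders only; see the [[bwUniCluster Batch Jobs|queueing system]] documentation for details), a longer compilation could for instance be wrapped into a batch job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 60 -n 1 --wrap=&amp;quot;make -j 4&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;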
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
In contrast to the bwUniCluster 1 and many other HPC systems it is &#039;&#039;&#039;no longer possible to self-manage your SSH Keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039;. Existing files will no longer be evaluated. SSH Keys have to be managed via the central bwIDM system instead. Please refer to the user guide for this functionality:&lt;br /&gt;
&lt;br /&gt;
[[bwUniCluster 2.0 User Access/SSH Keys]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave the bwUniCluster 2.0, follow the deregister checklist:&lt;br /&gt;
# Transfer all your data in $HOME and in your workspaces to your local computer/storage and delete all your data on the cluster afterwards&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that step 2 will automatically unsubscribe you from the bwunicluster mailing list.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:BwUniCluster_2.0_access_login_bwidm_service_list_new.png&amp;diff=8049</id>
		<title>File:BwUniCluster 2.0 access login bwidm service list new.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:BwUniCluster_2.0_access_login_bwidm_service_list_new.png&amp;diff=8049"/>
		<updated>2020-12-17T14:03:36Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Maintenance/2020-10&amp;diff=7209</id>
		<title>BwUniCluster2.0/Maintenance/2020-10</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Maintenance/2020-10&amp;diff=7209"/>
		<updated>2020-10-01T12:55:03Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Compilers, Libaries and Runtime Environments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following changes are planned for the maintenance interval starting on 06.10.2020 (Tuesday) at 10 AM and ending on 13.10.2020 (Tuesday) at 10 AM.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;This information is still preliminary and subject to change.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Operating system =&lt;br /&gt;
&lt;br /&gt;
* The operating system version will be upgraded to Red Hat Enterprise Linux (RHEL) 8.2. We recommend re-compiling all applications after the upgrade.&lt;br /&gt;
&lt;br /&gt;
* The Linux kernel version will be upgraded to 4.18 (from 3.10).&lt;br /&gt;
&lt;br /&gt;
* The Mellanox OFED InfiniBand stack will be upgraded to version 4.9 (from 4.7). If you maintain libraries which directly link against OFED libraries (e.g. MPI or PGAS libraries compiled in your $HOME directory), we recommend re-compiling these libraries after the upgrade.&lt;br /&gt;
&lt;br /&gt;
= Compilers, Libraries and Runtime Environments =&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;GNU C Library&#039;&#039; (glibc) will be upgraded to version 2.28 (from 2.17). This version will no longer automatically print backtraces after stack checking failures due to security hardening, which might affect the debugging of applications.&lt;br /&gt;
&lt;br /&gt;
* The default system compiler will be upgraded to &#039;&#039;GCC&#039;&#039; version 8.3.1 (from 4.8.5).&lt;br /&gt;
&lt;br /&gt;
* The Intel compiler module versions will not change.&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;Clang&#039;&#039; 8 module will probably be replaced in favor of Clang 10. The final decision on this has not been made yet. Clang 9 will continue to be available in any case.&lt;br /&gt;
&lt;br /&gt;
* The PGI Compiler will be updated to version 20.7 (from 19.1). It includes the NVIDIA environment in version 11.0  (e.g. nvcc).&lt;br /&gt;
&lt;br /&gt;
* The system &#039;&#039;Python&#039;&#039; version will be upgraded to 3.6. If no other Python module is loaded, the &#039;&#039;python&#039;&#039; command will default to Python 3.6, while Python 2.7 can still be used with the &#039;&#039;python2&#039;&#039; command.&lt;br /&gt;
&lt;br /&gt;
* The system &#039;&#039;Perl&#039;&#039; version will be upgraded to 5.26.&lt;br /&gt;
&lt;br /&gt;
= Development tools =&lt;br /&gt;
&lt;br /&gt;
* The default system version of the &#039;&#039;GNU Make&#039;&#039; build utility will be upgraded to 4.2.1. Please note the new &#039;&#039;--output-sync&#039;&#039; command line option, which allows better debugging of parallel builds (see the example at the end of this section).&lt;br /&gt;
&lt;br /&gt;
* The default system version of the &#039;&#039;autotools&#039;&#039; package will be upgraded to 2.69. The &amp;quot;devel/autotools/2.69&amp;quot; software module is no longer necessary and will be removed.&lt;br /&gt;
&lt;br /&gt;
* The default system version of the &#039;&#039;CMake&#039;&#039; build system will be upgraded to 3.11.&lt;br /&gt;
&lt;br /&gt;
* The default system version of the &#039;&#039;Git&#039;&#039; version control utility will be upgraded to 2.18.4. The &amp;quot;devel/git/2.18.4&amp;quot; software module is no longer necessary and will be removed.&lt;br /&gt;
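As a small illustration of the new GNU Make option mentioned above (the job count of 8 is an arbitrary example value), the output of a parallel build can be grouped per target:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ make -j 8 --output-sync=target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;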
&lt;br /&gt;
= Userspace tools =&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;screen&#039;&#039; utility is no longer supported by Red Hat. The recommended replacement is &#039;&#039;tmux&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
= Batch system =&lt;br /&gt;
&lt;br /&gt;
* The Slurm version will be upgraded to the most recent minor version available at the start of the maintenance interval. At this moment this would be version 20.02.5 (instead of 20.02.3).&lt;br /&gt;
&lt;br /&gt;
* The maximum total number of jobs of a single user in all queues (running and waiting) will be reduced to 100 (from 200).&lt;br /&gt;
&lt;br /&gt;
= Graphics stack =&lt;br /&gt;
&lt;br /&gt;
* The KDE Plasma desktop environment is no longer supported by Red Hat. &#039;&#039;start_vnc_desktop&#039;&#039; will default to GNOME 3.28 instead.&lt;br /&gt;
&lt;br /&gt;
* The NVIDIA driver will be upgraded to the latest version available at the start of the maintenance interval.&lt;br /&gt;
&lt;br /&gt;
= Containers =&lt;br /&gt;
&lt;br /&gt;
* Network Namespaces will be disabled due to security concerns.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=6622</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=6622"/>
		<updated>2020-07-03T13:22:43Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* sbatch -p queue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command&#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on bwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;4 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=90000mb, ntasks-per-node=40, (threads-per-core)=2&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=40, (threads-per-core=2) &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
|&lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=80, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000, nodes=14, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-cpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000, nodes=10, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define time, number of tasks and memory if these are not explicitly given with the sbatch command. The resource list options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
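In practice, the queue option is combined with further resource requests and a job script, for example (the script name and the resource values are placeholders only):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p dev_multiple -N 2 -t 20 ./job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;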
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with small memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs requesting more than 8 GByte of memory, you must allocate resources for so-called interactive jobs using the command salloc on a login node. For a serial application on a compute node that requires 5000 MByte of memory, with the interactive run limited to 2 hours, the following command has to be executed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will be logged in automatically on the granted core! To run a serial program on the granted core you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that your serial job must run less than 2 hours in this example, else the job will be killed during runtime by the system. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can also start now a graphical X11-terminal connecting you to the dedicated resource that is available for 2 hours. You can start it by the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that, once the walltime limit has been reached the resources - i.e. the compute node - will automatically be revoked.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or on many compute nodes (e.g. 5 nodes with 40 cores each) usually requires an amount of memory in GByte (e.g. 50 GByte) and a maximum time (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores requiring 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node. If you want to run MPI programs, you can do so by simply typing mpirun &amp;lt;program_name&amp;gt;; your program will then run on 200 cores. A very simple example for starting a parallel job is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt by the commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; an example for this can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI, you must start &amp;lt;my_mpi_program&amp;gt; with the command mpiexec.hydra (instead of mpirun).&lt;br /&gt;
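Analogous to the mpirun examples above, the Intel MPI call would then look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpiexec.hydra &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;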
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=6621</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=6621"/>
		<updated>2020-07-03T13:17:52Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* sbatch -p queue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command&#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on bwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;4 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=90000mb, ntasks-per-node=80&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=80 &amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=80 &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=56 &amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=56&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=56&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=56&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
|&lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=160&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000, nodes=14, ntasks-per-node=80&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-cpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000, nodes=10, ntasks-per-node=80&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define time, number of tasks and memory if these are not explicitly given with the sbatch command. The resource list options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with small memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs requesting more than 8 GByte of memory, you must allocate resources for so-called interactive jobs using the command salloc on a login node. For a serial application on a compute node that requires 5000 MByte of memory, with the interactive run limited to 2 hours, the following command has to be executed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will be logged in automatically on the granted core! To run a serial program on the granted core you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that your serial job must run less than 2 hours in this example, else the job will be killed during runtime by the system. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can also start now a graphical X11-terminal connecting you to the dedicated resource that is available for 2 hours. You can start it by the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that, once the walltime limit has been reached the resources - i.e. the compute node - will automatically be revoked.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or on many compute nodes (e.g. 5 nodes with 40 cores each) usually requires an amount of memory in GByte (e.g. 50 GByte) and a maximum time (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores requiring 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node. If you want to run MPI programs, you can do so by simply typing mpirun &amp;lt;program_name&amp;gt;; your program will then run on 200 cores. A very simple example for starting a parallel job is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt by the commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; an example for this can be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI, you must start &amp;lt;my_mpi_program&amp;gt; with the command mpiexec.hydra (instead of mpirun).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6456</id>
		<title>BwUniCluster2.0/Batch System Migration Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6456"/>
		<updated>2020-04-23T14:39:45Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* General Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;While the former bwUniCluster 1 system used the combination Moab/SLURM for the batch system queue, bwUniCluster 2.0 uses only SLURM. This means that most job scripts and workflows which relied on Moab-specific pragmas and commands have to be changed.&lt;br /&gt;
&lt;br /&gt;
=General Overview=&lt;br /&gt;
&lt;br /&gt;
Job parameters can be passed to SLURM in the same ways which were possible with Moab.&lt;br /&gt;
&lt;br /&gt;
* Instead of the #MSUB or #PBS pragmas, the pragma #SBATCH has to be used within job files.&lt;br /&gt;
&lt;br /&gt;
* Instead of the Moab commands, the corresponding SLURM commands have to be used.&lt;br /&gt;
&lt;br /&gt;
A general mapping of Moab to SLURM commands can be found in the following table:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Moab command !! SLURM command&lt;br /&gt;
|-&lt;br /&gt;
| msub || sbatch&lt;br /&gt;
|-&lt;br /&gt;
| msub -I || salloc&lt;br /&gt;
|-&lt;br /&gt;
| canceljob || scancel&lt;br /&gt;
|-&lt;br /&gt;
| showq || squeue&lt;br /&gt;
|-&lt;br /&gt;
| checkjob $JOBID || scontrol show job $JOBID&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the next tables you find the most common MOAB job specification flags and environment variables. You have to replace all MOAB flags and environment variables in your batch scripts with their corresponding Slurm counterparts.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Commonly used MOAB job specification flags and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Option !! Moab (msub) !! Slurm (sbatch)&lt;br /&gt;
|-&lt;br /&gt;
| Script directive                            || #MSUB                                  || #SBATCH&lt;br /&gt;
|-&lt;br /&gt;
| Job name                                    || -N &amp;lt;name&amp;gt;                              || --job-name=&amp;lt;name&amp;gt;  (-J &amp;lt;name&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Account                                     || -A &amp;lt;account&amp;gt;                           || --account=&amp;lt;account&amp;gt; (-A &amp;lt;account&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Queue                                       || -q &amp;lt;queue&amp;gt;                             || --partition=&amp;lt;partition&amp;gt; (-p &amp;lt;partition&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Wall time limit                             || -l walltime=&amp;lt;hh:mm:ss&amp;gt;                 || --time=&amp;lt;hh:mm:ss&amp;gt; (-t &amp;lt;hh:mm:ss&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node count                                  || -l nodes=&amp;lt;count&amp;gt;                       || --nodes=&amp;lt;count&amp;gt; (-N &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Core count                                  || -l procs=&amp;lt;count&amp;gt;                       || --ntasks=&amp;lt;count&amp;gt; (-n &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Process count per node                      || -l ppn=&amp;lt;count&amp;gt;                         || --ntasks-per-node=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Core count per process                      ||                                        || --cpus-per-task=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per node                       || -l mem=&amp;lt;limit&amp;gt;                         || --mem=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per process                    || -l pmem=&amp;lt;limit&amp;gt;                        || --mem-per-cpu=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Job array                                   || -t &amp;lt;array indices&amp;gt;                     || --array=&amp;lt;indices&amp;gt; (-a &amp;lt;indices&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node exclusive job                          || -l naccesspolicy=singlejob             || --exclusive&lt;br /&gt;
|-&lt;br /&gt;
| Initial working directory                   || -d &amp;lt;directory&amp;gt; (default: $HOME)        || --chdir=&amp;lt;directory&amp;gt; (-D &amp;lt;directory&amp;gt;) (default: submission directory)&lt;br /&gt;
|-&lt;br /&gt;
| Standard output file                        || -o &amp;lt;file path&amp;gt;                         || --output=&amp;lt;file&amp;gt; (-o &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Standard error file                         || -e &amp;lt;file path&amp;gt;                         || --error=&amp;lt;file&amp;gt;  (-e &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Combine stdout/stderr to stdout             || -j oe                                  || --output=&amp;lt;combined stdout/stderr file&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Mail notification events                    || -m &amp;lt;event&amp;gt;                             || --mail-type=&amp;lt;events&amp;gt; (valid types include: NONE, BEGIN, END, FAIL, ALL)&lt;br /&gt;
|-&lt;br /&gt;
| Export environment to job                   || -V                                     || --export=ALL (default)&lt;br /&gt;
|-&lt;br /&gt;
| Don&#039;t export environment to job             || (default)                              || --export=NONE&lt;br /&gt;
|-&lt;br /&gt;
| Export environment variables to job         || -v &amp;lt;var[=value][,var2=value2[, ...]]&amp;gt;  || --export=&amp;lt;var[=value][,var2=value2[,...]]&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* The default initial job working directory is $HOME for MOAB. For Slurm, the default working directory is the directory you submit your job from.&lt;br /&gt;
* By default, MOAB does not export any environment variables to the job&#039;s runtime environment. With Slurm, most of the login environment variables are exported to your job&#039;s runtime environment. This includes environment variables from software modules that were loaded at job submission time (and also the $HOSTNAME variable).&lt;br /&gt;
&lt;br /&gt;
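As an illustration only (job name, partition and resource values are placeholders, not recommendations), the directive block of a former Moab job script could be translated as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Former Moab directives (no longer evaluated):&lt;br /&gt;
#   #MSUB -N myjob&lt;br /&gt;
#   #MSUB -q multiple&lt;br /&gt;
#   #MSUB -l nodes=2,ppn=40&lt;br /&gt;
#   #MSUB -l walltime=02:00:00&lt;br /&gt;
# Slurm equivalents:&lt;br /&gt;
#SBATCH --job-name=myjob&lt;br /&gt;
#SBATCH --partition=multiple&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
&lt;br /&gt;
module load &amp;lt;mpi_module&amp;gt;   # placeholder: load the appropriate MPI module first&lt;br /&gt;
mpirun ./my_mpi_program       # start the actual computation&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Such a script would then be submitted with sbatch instead of msub.&lt;br /&gt;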
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Commonly used MOAB script environment variables and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Information                 !! MOAB                  !! Slurm                                     &lt;br /&gt;
|-&lt;br /&gt;
| Job name                     || $MOAB_JOBNAME        || $SLURM_JOB_NAME                           &lt;br /&gt;
|-&lt;br /&gt;
| Job ID                       || $MOAB_JOBID          || $SLURM_JOB_ID                             &lt;br /&gt;
|-&lt;br /&gt;
| Submit directory             || $MOAB_SUBMITDIR      || $SLURM_SUBMIT_DIR                         &lt;br /&gt;
|-&lt;br /&gt;
| Number of nodes allocated    || $MOAB_NODECOUNT      || $SLURM_JOB_NUM_NODES (and: $SLURM_NNODES) &lt;br /&gt;
|-&lt;br /&gt;
| Node list                    || $MOAB_NODELIST       || $SLURM_JOB_NODELIST                       &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes          || $MOAB_PROCCOUNT      || $SLURM_NTASKS                             &lt;br /&gt;
|-&lt;br /&gt;
| Requested tasks per node     || ---                  || $SLURM_NTASKS_PER_NODE                    &lt;br /&gt;
|-&lt;br /&gt;
| Requested CPUs per task      || ---                  || $SLURM_CPUS_PER_TASK                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array index              || $MOAB_JOBARRAYINDEX  || $SLURM_ARRAY_TASK_ID                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array range              || $MOAB_JOBARRAYRANGE  || $SLURM_ARRAY_TASK_COUNT                   &lt;br /&gt;
|-&lt;br /&gt;
| Queue name                   || $MOAB_CLASS          || $SLURM_JOB_PARTITION                      &lt;br /&gt;
|-&lt;br /&gt;
| QOS name                     || $MOAB_QOS            || $SLURM_JOB_QOS                            &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes per node || ---                  || $SLURM_TASKS_PER_NODE                     &lt;br /&gt;
|-&lt;br /&gt;
| Job user                     || $MOAB_USER           || $SLURM_JOB_USER                           &lt;br /&gt;
|-&lt;br /&gt;
| Hostname                     || $MOAB_MACHINE        || $SLURMD_NODENAME                          &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Serial Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n 1&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=1&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=1&#039;&#039;&#039;).&lt;br /&gt;
* Use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte. Note that Slurm has no short form for &#039;&#039;&#039;--mem&#039;&#039;&#039; (&#039;&#039;&#039;-m&#039;&#039;&#039; selects the task distribution instead).&lt;br /&gt;
* If you want to use one node exclusively, you must request the whole memory (&#039;&#039;&#039;--mem=96327&#039;&#039;&#039;).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a serial job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 60 -n 1 --mem=96327 ./job.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script job.sh (containing the execution of a serial program) is started running 60 minutes exclusively on a batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N 1&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=1&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); it can also be a number between 41 and 80 (because of active hyperthreading).&lt;br /&gt;
* Use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* Use the option &#039;&#039;&#039;--export&#039;&#039;&#039; to set the required environment variable OMP_NUM_THREADS for the batch job. Adding &#039;&#039;&#039;ALL&#039;&#039;&#039; means passing all interactively set environment variables to the batch job.&lt;br /&gt;
* If you want to use one node exclusively, you must either request the whole memory (&#039;&#039;--mem=96327&#039;&#039;) or set the number of threads greater than 39.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a multithreaded job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 1:00:00 -N 1 -c 20 --mem=50gb --export=ALL,OMP_NUM_THREADS=20 ./job_threaded.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded.sh&#039;&#039;&#039; (containing a multithreaded program) is started running 1 hour in shared mode on 20 cores requesting 50GB on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs within one node=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* If you want to use one node exclusively, you must either enter the whole memory (&#039;&#039;-m 96327&#039;&#039; or &#039;&#039;--mem=96327&#039;&#039;) or set the number of MPI tasks greater than 39.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 600 -n 10 -m 40000 ./job_mpi.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (which loads the appropriate MPI module and starts an MPI program) runs for 10 hours in shared mode on 10 cores, requesting 40000 MB of memory, on one batch node.&lt;br /&gt;
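&amp;lt;br&amp;gt;&lt;br /&gt;
A minimal sketch of what &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; might contain; the module name &#039;&#039;mpi/openmpi&#039;&#039; and the program name &#039;&#039;my_mpi_program&#039;&#039; are only placeholders, and the binding options follow the recommendation above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Sketch of a single-node MPI job script; module and program names are placeholders.&lt;br /&gt;
module load mpi/openmpi&lt;br /&gt;
# mpirun starts as many processes as tasks were requested with -n/--ntasks.&lt;br /&gt;
mpirun --bind-to core --map-by core ./my_mpi_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;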
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (28 on Broadwell nodes), because of the 40 (28) cores within one node; you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are used exclusively.&lt;br /&gt;
* You always use the nodes exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 48:00:00 -N 10 --ntasks-per-node=40  ./job_mpi.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (which loads the appropriate MPI module and starts an MPI program) runs for 2 days on 400 cores across ten batch nodes.&lt;br /&gt;
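&amp;lt;br&amp;gt;&lt;br /&gt;
The same request can also be written with #SBATCH directives inside the job script instead of command line options; a sketch (module and program names are only placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=multiple&lt;br /&gt;
#SBATCH --time=48:00:00&lt;br /&gt;
#SBATCH --nodes=10&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
# Module and program names below are placeholders.&lt;br /&gt;
module load mpi/openmpi&lt;br /&gt;
mpirun --bind-to core --map-by core ./my_mpi_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Such a script is then submitted without further options: $ sbatch ./job_mpi.sh&lt;br /&gt;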
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded + MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;z&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; usually should be 1 or 2 and &#039;&#039;&#039;&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039; should usually be 40 (28 on Broadwell nodes); you can utilize hyperthreading if you want.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are used exclusively.&lt;br /&gt;
* You always use the nodes exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by socket|node:PE=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; of the command mpirun must be used (see the sketch below).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
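A minimal sketch of what &#039;&#039;&#039;job_threaded_mpi.sh&#039;&#039;&#039; might contain for the submission shown below (module and program names are only placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Sketch of a hybrid MPI + OpenMP job script; module and program names are placeholders.&lt;br /&gt;
module load mpi/openmpi&lt;br /&gt;
# One OpenMP thread per allocated CPU of each MPI task (20 in the example below).&lt;br /&gt;
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK&lt;br /&gt;
# PE value as stated for this example in the text below.&lt;br /&gt;
mpirun --bind-to core --map-by socket:PE=10 ./my_hybrid_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;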
&#039;&#039;&#039;Example for a multithreaded + MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 2-12 -N 10 --ntasks-per-node=2 -c 20 ./job_threaded_mpi.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded_mpi.sh&#039;&#039;&#039; (which loads the appropriate MPI module and starts a multithreaded MPI program) runs for 2.5 days on 400 cores, with 20 MPI tasks and 20 threads per task, across ten batch nodes. Here the options &#039;&#039;&#039;--bind-to core --map-by socket:PE=10&#039;&#039;&#039; of the command mpirun must be used.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6455</id>
		<title>BwUniCluster2.0/Batch System Migration Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6455"/>
		<updated>2020-04-23T14:37:34Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* General Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;While the former bwUniCluster 1 system used the combination Moab/SLURM for the batch system queue, bwUniCluster 2.0 uses only SLURM. This means that most job scripts and workflows which relied on Moab-specific pragmas and commands have to be changed.&lt;br /&gt;
&lt;br /&gt;
=General Overview=&lt;br /&gt;
&lt;br /&gt;
Job parameters can be passed to SLURM in the same ways as they were passed to Moab.&lt;br /&gt;
&lt;br /&gt;
* Instead of the #MOAB or #PBS pragmas, the pragma #SBATCH has to be used within job files.&lt;br /&gt;
&lt;br /&gt;
* Instead of the Moab commands, the corresponding SLURM commands have to be used.&lt;br /&gt;
&lt;br /&gt;
A general mapping of Moab to SLURM commands can be found in the following table:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Moab command !! SLURM command&lt;br /&gt;
|-&lt;br /&gt;
| msub || sbatch&lt;br /&gt;
|-&lt;br /&gt;
| msub -I || salloc&lt;br /&gt;
|-&lt;br /&gt;
| canceljob || scancel&lt;br /&gt;
|-&lt;br /&gt;
| showq || squeue&lt;br /&gt;
|-&lt;br /&gt;
| checkjob $JOBID || scontrol show job $JOBID&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the next table you find the most common MOAB job specification flags and environment variables. You have to replace all MOAB flags and environment variables in your batch scripts by their corresponding Slurm counterparts.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Commonly used MOAB job specification flags and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Option !! Moab (msub) !! Slurm (sbatch)&lt;br /&gt;
|-&lt;br /&gt;
| Script directive                            || #MSUB                                  || #SBATCH&lt;br /&gt;
|-&lt;br /&gt;
| Job name                                    || -N &amp;lt;name&amp;gt;                              || --job-name=&amp;lt;name&amp;gt;  (-J &amp;lt;name&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Account                                     || -A &amp;lt;account&amp;gt;                           || --account=&amp;lt;account&amp;gt; (-A &amp;lt;account&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Queue                                       || -q &amp;lt;queue&amp;gt;                             || --partition=&amp;lt;partition&amp;gt; (-p &amp;lt;partition&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Wall time limit                             || -l walltime=&amp;lt;hh:mm:ss&amp;gt;                 || --time=&amp;lt;hh:mm:ss&amp;gt; (-t &amp;lt;hh:mm:ss&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node count                                  || -l nodes=&amp;lt;count&amp;gt;                       || --nodes=&amp;lt;count&amp;gt; (-N &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Core count                                  || -l procs=&amp;lt;count&amp;gt;                       || --ntasks=&amp;lt;count&amp;gt; (-n &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Process count per node                      || -l ppn=&amp;lt;count&amp;gt;                         || --ntasks-per-node=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Core count per process                      ||                                        || --cpus-per-task=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per node                       || -l mem=&amp;lt;limit&amp;gt;                         || --mem=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per process                    || -l pmem=&amp;lt;limit&amp;gt;                        || --mem-per-cpu=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Job array                                   || -t &amp;lt;array indices&amp;gt;                     || --array=&amp;lt;indices&amp;gt; (-a &amp;lt;indices&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node exclusive job                          || -l naccesspolicy=singlejob             || --exclusive&lt;br /&gt;
|-&lt;br /&gt;
| Initial working directory                   || -d &amp;lt;directory&amp;gt; (default: $HOME)        || --chdir=&amp;lt;directory&amp;gt; (-D &amp;lt;directory&amp;gt;) (default: submission directory)&lt;br /&gt;
|-&lt;br /&gt;
| Standard output file                        || -o &amp;lt;file path&amp;gt;                         || --output=&amp;lt;file&amp;gt; (-o &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Standard error file                         || -e &amp;lt;file path&amp;gt;                         || --error=&amp;lt;file&amp;gt;  (-e &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Combine stdout/stderr to stdout             || -j oe                                  || --output=&amp;lt;combined stdout/stderr file&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Mail notification events                    || -m &amp;lt;event&amp;gt;                             || --mail-type=&amp;lt;events&amp;gt; (valid types include: NONE, BEGIN, END, FAIL, ALL)&lt;br /&gt;
|-&lt;br /&gt;
| Export environment to job                   || -V                                     || --export=ALL (default)&lt;br /&gt;
|-&lt;br /&gt;
| Don&#039;t export environment to job             || (default)                              || --export=NONE&lt;br /&gt;
|-&lt;br /&gt;
| Export environment variables to job         || -v &amp;lt;var[=value][,var2=value2[, ...]]&amp;gt;  || --export=&amp;lt;var[=value][,var2=value2[,...]]&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* The default initial job working directory is $HOME for MOAB. For Slurm the default working directory is the directory you submit your job from.&lt;br /&gt;
* By default MOAB does not export any environment variables to the job&#039;s runtime environment. With Slurm, most of the login environment variables are exported to your job&#039;s runtime environment, including environment variables from software modules that were loaded at job submission time (and also the $HOSTNAME variable).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Commonly used MOAB script environment variables and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Information                 !! MOAB                  !! Slurm                                     &lt;br /&gt;
|-&lt;br /&gt;
| Job name                     || $MOAB_JOBNAME        || $SLURM_JOB_NAME                           &lt;br /&gt;
|-&lt;br /&gt;
| Job ID                       || $MOAB_JOBID          || $SLURM_JOB_ID                             &lt;br /&gt;
|-&lt;br /&gt;
| Submit directory             || $MOAB_SUBMITDIR      || $SLURM_SUBMIT_DIR                         &lt;br /&gt;
|-&lt;br /&gt;
| Number of nodes allocated    || $MOAB_NODECOUNT      || $SLURM_JOB_NUM_NODES (and: $SLURM_NNODES) &lt;br /&gt;
|-&lt;br /&gt;
| Node list                    || $MOAB_NODELIST       || $SLURM_JOB_NODELIST                       &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes          || $MOAB_PROCCOUNT      || $SLURM_NTASKS                             &lt;br /&gt;
|-&lt;br /&gt;
| Requested tasks per node     || ---                  || $SLURM_NTASKS_PER_NODE                    &lt;br /&gt;
|-&lt;br /&gt;
| Requested CPUs per task      || ---                  || $SLURM_CPUS_PER_TASK                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array index              || $MOAB_JOBARRAYINDEX  || $SLURM_ARRAY_TASK_ID                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array range              || $MOAB_JOBARRAYRANGE  || $SLURM_ARRAY_TASK_COUNT                   &lt;br /&gt;
|-&lt;br /&gt;
| Queue name                   || $MOAB_CLASS          || $SLURM_JOB_PARTITION                      &lt;br /&gt;
|-&lt;br /&gt;
| QOS name                     || $MOAB_QOS            || $SLURM_JOB_QOS                            &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes per node || ---                  || $SLURM_TASKS_PER_NODE                     &lt;br /&gt;
|-&lt;br /&gt;
| Job user                     || $MOAB_USER           || $SLURM_JOB_USER                           &lt;br /&gt;
|-&lt;br /&gt;
| Hostname                     || $MOAB_MACHINE        || $SLURMD_NODENAME                          &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
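These variables can be read directly inside a job script, for example to log a short job summary at start-up; a minimal sketch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Print some of the Slurm variables from the table above at job start.&lt;br /&gt;
echo Job $SLURM_JOB_ID named $SLURM_JOB_NAME in partition $SLURM_JOB_PARTITION&lt;br /&gt;
echo $SLURM_NTASKS tasks on $SLURM_JOB_NUM_NODES nodes: $SLURM_JOB_NODELIST&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;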
&lt;br /&gt;
=Serial Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n 1&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=1&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=1&#039;&#039;&#039;).&lt;br /&gt;
* Use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* If you want to use one node exclusively, you must enter the whole memory (&#039;&#039;&#039;-m 96327&#039;&#039;&#039; or &#039;&#039;&#039;--mem=96327&#039;&#039;&#039;).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a serial job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 60 -n 1 -m 96327 ./job.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script job.sh (which executes a serial program) runs for 60 minutes exclusively on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N 1&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=1&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); it can also be a number between 41 and 80 (because of active hyperthreading).&lt;br /&gt;
* Use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* Use the option &#039;&#039;&#039;--export&#039;&#039;&#039; to set the needed environment variable OMP_NUM_THREADS for the batch job. Adding &#039;&#039;&#039;ALL&#039;&#039;&#039; means to  pass all interactively set environment variables to the batch job.&lt;br /&gt;
* If you want to use one node exclusively, you must either enter the whole memory (&#039;&#039;-m 96327&#039;&#039; or &#039;&#039;--mem=96327&#039;&#039;) or set the number of threads greater than 39.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a multithreaded job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 1:00:00 -N 1 -c 20 -m 50gb --export=ALL,OMP_NUM_THREADS=20 ./job_threaded.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded.sh&#039;&#039;&#039; (containing a multithreaded program) runs for 1 hour in shared mode on 20 cores, requesting 50 GB of memory, on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs within one node=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* If you want to use one node exclusively, you must either enter the whole memory (&#039;&#039;-m 96327&#039;&#039; or &#039;&#039;--mem=96327&#039;&#039;) or set the number of MPI tasks greater than 39.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 600 -n 10 -m 40000 ./job_mpi.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (which loads the appropriate MPI module and starts an MPI program) runs for 10 hours in shared mode on 10 cores, requesting 40000 MB of memory, on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (28 on Broadwell nodes), because of the 40 (28) cores within one node; you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are used exclusively.&lt;br /&gt;
* You always use the nodes exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 48:00:00 -N 10 --ntasks-per-node=40  ./job_mpi.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (which loads the appropriate MPI module and starts an MPI program) runs for 2 days on 400 cores across ten batch nodes.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded + MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;z&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; usually should be 1 or 2 and &#039;&#039;&#039;&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039; should usually be 40 (28 on Broadwell nodes); you can utilize hyperthreading if you want.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are used exclusively.&lt;br /&gt;
* You always use the nodes exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by socket|node:PE=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; of the command mpirun must be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a multithreaded + MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 2-12 -N 10 --ntasks-per-node=2 -c 20 ./job_threaded_mpi.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded_mpi.sh&#039;&#039;&#039; (which loads the appropriate MPI module and starts a multithreaded MPI program) runs for 2.5 days on 400 cores, with 20 MPI tasks and 20 threads per task, across ten batch nodes. Here the options &#039;&#039;&#039;--bind-to core --map-by socket:PE=10&#039;&#039;&#039; of the command mpirun must be used.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6454</id>
		<title>BwUniCluster2.0/Batch System Migration Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6454"/>
		<updated>2020-04-23T14:28:18Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* General Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;While the former bwUniCluster 1 system used the combination Moab/SLURM for the batch system queue, bwUniCluster 2.0 uses only SLURM. This means that most job scripts and workflows which relied on Moab-specific pragmas and commands have to be changed.&lt;br /&gt;
&lt;br /&gt;
=General Overview=&lt;br /&gt;
&lt;br /&gt;
Job parameters can be passed to SLURM in the same ways as they were passed to Moab.&lt;br /&gt;
&lt;br /&gt;
* Instead of the #MOAB or #PBS pragmas, the pragma #SBATCH has to be used within job files.&lt;br /&gt;
&lt;br /&gt;
* Instead of the Moab commands, the corresponding SLURM commands have to be used.&lt;br /&gt;
&lt;br /&gt;
A general mapping of Moab to SLURM commands can be found in the following table:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Moab command !! SLURM command&lt;br /&gt;
|-&lt;br /&gt;
| msub || sbatch&lt;br /&gt;
|-&lt;br /&gt;
| msub -I || salloc&lt;br /&gt;
|-&lt;br /&gt;
| canceljob || scancel&lt;br /&gt;
|-&lt;br /&gt;
| showq || squeue&lt;br /&gt;
|-&lt;br /&gt;
| checkjob $JOBID || scontrol show job $JOBID&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the next table you find the most common MOAB job specification flags and environment variables. You have to replace all MOAB flags and environment variables in your batch scripts by their corresponding Slurm counterparts.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Commonly used MOAB job specification flags and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Option !! Moab (msub) !! Slurm (sbatch)&lt;br /&gt;
|-&lt;br /&gt;
| Script directive                            || #MSUB                                  || #SBATCH&lt;br /&gt;
|-&lt;br /&gt;
| Job name                                    || -N &amp;lt;name&amp;gt;                              || --job-name=&amp;lt;name&amp;gt;  (-J &amp;lt;name&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Account                                     || -A &amp;lt;account&amp;gt;                           || --account=&amp;lt;account&amp;gt; (-A &amp;lt;account&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Queue                                       || -q &amp;lt;queue&amp;gt;                             || --partition=&amp;lt;partition&amp;gt; (-p &amp;lt;partition&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Wall time limit                             || -l walltime=&amp;lt;hh:mm:ss&amp;gt;                 || --time=&amp;lt;hh:mm:ss&amp;gt; (-t &amp;lt;hh:mm:ss&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node count                                  || -l nodes=&amp;lt;count&amp;gt;                       || --nodes=&amp;lt;count&amp;gt; (-N &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Core count                                  || -l procs=&amp;lt;count&amp;gt;                       || --ntasks=&amp;lt;count&amp;gt; (-n &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Process count per node                      || -l ppn=&amp;lt;count&amp;gt;                         || --ntasks-per-node=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Core count per process                      ||                                        || --cpus-per-task=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per node                       || -l mem=&amp;lt;limit&amp;gt;                         || --mem=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per process                    || -l pmem=&amp;lt;limit&amp;gt;                        || --mem-per-cpu=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Job array                                   || -t &amp;lt;array indices&amp;gt;                     || --array=&amp;lt;indices&amp;gt; (-a &amp;lt;indices&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node exclusive job                          || -l naccesspolicy=singlejob             || --exclusive&lt;br /&gt;
|-&lt;br /&gt;
| Initial working directory                   || -d &amp;lt;directory&amp;gt; (default: $HOME)        || --chdir=&amp;lt;directory&amp;gt; (-D &amp;lt;directory&amp;gt;) (default: submission directory)&lt;br /&gt;
|-&lt;br /&gt;
| Standard output file                        || -o &amp;lt;file path&amp;gt;                         || --output=&amp;lt;file&amp;gt; (-o &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Standard error file                         || -e &amp;lt;file path&amp;gt;                         || --error=&amp;lt;file&amp;gt;  (-e &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Combine stdout/stderr to stdout             || -j oe                                  || --output=&amp;lt;combined stdout/stderr file&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Mail notification events                    || -m &amp;lt;event&amp;gt;                             || --mail-type=&amp;lt;events&amp;gt; (valid types include: NONE, BEGIN, END, FAIL, ALL)&lt;br /&gt;
|-&lt;br /&gt;
| Export environment to job                   || -V                                     || --export=ALL (default)&lt;br /&gt;
|-&lt;br /&gt;
| Don&#039;t export environment to job             || (default)                              || --export=NONE&lt;br /&gt;
|-&lt;br /&gt;
| Export environment variables to job         || -v &amp;lt;var[=value][,var2=value2[, ...]]&amp;gt;  || --export=&amp;lt;var[=value][,var2=value2[,...]]&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* The default initial job working directory is $HOME for Moab. For Slurm the default working directory is the directory you submit your job from.&lt;br /&gt;
* By default Moab does not export any environment variables to the job&#039;s runtime environment. With Slurm, most of the login environment variables are exported to your job&#039;s runtime environment, including environment variables from software modules that were loaded at job submission time (and also the $HOSTNAME variable).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Commonly used Moab/Torque script environment variables and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Information                 !! Moab                !! Torque               !! Slurm                                     &lt;br /&gt;
|-&lt;br /&gt;
| Job name                     || $MOAB_JOBNAME        || $PBS_JOBNAME        || $SLURM_JOB_NAME                           &lt;br /&gt;
|-&lt;br /&gt;
| Job ID                       || $MOAB_JOBID          || $PBS_JOBID          || $SLURM_JOB_ID                             &lt;br /&gt;
|-&lt;br /&gt;
| Submit directory             || $MOAB_SUBMITDIR      || $PBS_O_WORKDIR      || $SLURM_SUBMIT_DIR                         &lt;br /&gt;
|-&lt;br /&gt;
| Number of nodes allocated    || $MOAB_NODECOUNT      || $PBS_NUM_NODES      || $SLURM_JOB_NUM_NODES (and: $SLURM_NNODES) &lt;br /&gt;
|-&lt;br /&gt;
| Node list                    || $MOAB_NODELIST       || cat $PBS_NODEFILE   || $SLURM_JOB_NODELIST                       &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes          || $MOAB_PROCCOUNT      || $PBS_TASKNUM        || $SLURM_NTASKS                             &lt;br /&gt;
|-&lt;br /&gt;
| Requested tasks per node     || ---                    || $PBS_NUM_PPN        || $SLURM_NTASKS_PER_NODE                    &lt;br /&gt;
|-&lt;br /&gt;
| Requested CPUs per task      || ---                  || ---                 || $SLURM_CPUS_PER_TASK                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array index              || $MOAB_JOBARRAYINDEX  || $PBS_ARRAY_INDEX    || $SLURM_ARRAY_TASK_ID                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array range              || $MOAB_JOBARRAYRANGE  || -                   || $SLURM_ARRAY_TASK_COUNT                   &lt;br /&gt;
|-&lt;br /&gt;
| Queue name                   || $MOAB_CLASS          || $PBS_QUEUE          || $SLURM_JOB_PARTITION                      &lt;br /&gt;
|-&lt;br /&gt;
| QOS name                     || $MOAB_QOS            || ---                 || $SLURM_JOB_QOS                            &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes per node || ---                  || $PBS_NUM_PPN        || $SLURM_TASKS_PER_NODE                     &lt;br /&gt;
|-&lt;br /&gt;
| Job user                     || $MOAB_USER           || $PBS_O_LOGNAME      || $SLURM_JOB_USER                           &lt;br /&gt;
|-&lt;br /&gt;
| Hostname                     || $MOAB_MACHINE        || $PBS_O_HOST         || $SLURMD_NODENAME                          &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The following chapters contain short examples for the most common job types. For a full overview of SLURM see the [[BwUniCluster_2.0_Slurm_common_Features|full article]].&lt;br /&gt;
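&amp;lt;br&amp;gt;&lt;br /&gt;
As a concrete illustration of the flag mapping above, here is a Moab job header and one possible Slurm translation (job name, queue/partition and resource values are only placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Old Moab job header (placeholder values)&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -N testjob&lt;br /&gt;
#MSUB -q myqueue&lt;br /&gt;
#MSUB -l walltime=01:00:00&lt;br /&gt;
#MSUB -l nodes=1,ppn=4&lt;br /&gt;
#MSUB -l pmem=2000mb&lt;br /&gt;
&lt;br /&gt;
# Slurm translation according to the table above&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=testjob&lt;br /&gt;
#SBATCH --partition=single&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=4&lt;br /&gt;
#SBATCH --mem-per-cpu=2000mb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;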
&lt;br /&gt;
=Serial Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n 1&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=1&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=1&#039;&#039;&#039;).&lt;br /&gt;
* Use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* If you want to use one node exclusively, you must enter the whole memory (&#039;&#039;&#039;-m 96327&#039;&#039;&#039; or &#039;&#039;&#039;--mem=96327&#039;&#039;&#039;).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a serial job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 60 -n 1 -m 96327 ./job.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script job.sh (which executes a serial program) runs for 60 minutes exclusively on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N 1&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=1&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); it can also be a number between 41 and 80 (because of active hyperthreading).&lt;br /&gt;
* Use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* Use the option &#039;&#039;&#039;--export&#039;&#039;&#039; to set the needed environment variable OMP_NUM_THREADS for the batch job. Adding &#039;&#039;&#039;ALL&#039;&#039;&#039; means to  pass all interactively set environment variables to the batch job.&lt;br /&gt;
* If you want to use one node exclusively, you must either enter the whole memory (&#039;&#039;-m 96327&#039;&#039; or &#039;&#039;--mem=96327&#039;&#039;) or set the number of threads greater than 39.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a multithreaded job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 1:00:00 -N 1 -c 20 -m 50gb --export=ALL,OMP_NUM_THREADS=20 ./job_threaded.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded.sh&#039;&#039;&#039; (containing a multithreaded program) runs for 1 hour in shared mode on 20 cores, requesting 50 GB of memory, on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs within one node=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* If you want to use one node exclusively, you must either enter the whole memory (&#039;&#039;-m 96327&#039;&#039; or &#039;&#039;--mem=96327&#039;&#039;) or set the number of MPI tasks greater than 39.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 600 -n 10 -m 40000 ./job_mpi.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (which loads the appropriate MPI module and starts an MPI program) runs for 10 hours in shared mode on 10 cores, requesting 40000 MB of memory, on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (28 on Broadwell nodes), because of the 40 (28) cores within one node; you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are used exclusively.&lt;br /&gt;
* You always use the nodes exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 48:00:00 -N 10 --ntasks-per-node=40  ./job_mpi.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (which loads the appropriate MPI module and starts an MPI program) runs for 2 days on 400 cores across ten batch nodes.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded + MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;z&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; usually should be 1 or 2 and &#039;&#039;&#039;&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039; should usually be 40 (28 on Broadwell nodes); you can utilize hyperthreading if you want.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are used exclusively.&lt;br /&gt;
* You always use the nodes exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by socket|node:PE=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; of the command mpirun must be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a multithreaded + MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 2-12 -N 10 --ntasks-per-node=2 -c 20 ./job_threaded_mpi.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded_mpi.sh&#039;&#039;&#039; (which loads the appropriate MPI module and starts a multithreaded MPI program) runs for 2.5 days on 400 cores, with 20 MPI tasks and 20 threads per task, across ten batch nodes. Here the options &#039;&#039;&#039;--bind-to core --map-by socket:PE=10&#039;&#039;&#039; of the command mpirun must be used.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6453</id>
		<title>BwUniCluster2.0/Batch System Migration Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6453"/>
		<updated>2020-04-23T14:27:41Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* How to convert Moab batch job scripts to Slurm? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;While the former bwUniCluster 1 system used the combination Moab/SLURM for the batch system queue, bwUniCluster 2.0 uses only SLURM. This means that most job scripts and workflows which relied on Moab-specific pragmas and commands have to be changed.&lt;br /&gt;
&lt;br /&gt;
=General Overview=&lt;br /&gt;
&lt;br /&gt;
Job parameters can be passed to SLURM in the same ways as they were passed to Moab.&lt;br /&gt;
&lt;br /&gt;
* Instead of the #MOAB or #PBS pragmas, the pragma #SBATCH has to be used within job files.&lt;br /&gt;
&lt;br /&gt;
* Instead of the Moab commands, the corresponding SLURM commands have to be used.&lt;br /&gt;
&lt;br /&gt;
A general mapping of Moab to SLURM commands can be found in the following table:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Moab command !! SLURM command&lt;br /&gt;
|-&lt;br /&gt;
| msub || sbatch&lt;br /&gt;
|-&lt;br /&gt;
| msub -I || salloc&lt;br /&gt;
|-&lt;br /&gt;
| canceljob || scancel&lt;br /&gt;
|-&lt;br /&gt;
| showq || squeue&lt;br /&gt;
|-&lt;br /&gt;
| checkjob $JOBID || scontrol show job $JOBID&lt;br /&gt;
|}&lt;br /&gt;
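&lt;br /&gt;
For example, a typical submit-and-check sequence translates as follows (the job ID 123456 is only an illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Moab                        # Slurm&lt;br /&gt;
$ msub job.sh                 $ sbatch job.sh&lt;br /&gt;
$ showq                       $ squeue&lt;br /&gt;
$ checkjob 123456             $ scontrol show job 123456&lt;br /&gt;
$ canceljob 123456            $ scancel 123456&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;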
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the next table you find the most common MOAB job specification flags and environment variables. You have to replace all MOAB flags and environment variables in your batch scripts by their corresponding Slurm counterparts.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Commonly used MOAB job specification flags and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Option !! Moab (msub) !! Slurm (sbatch)&lt;br /&gt;
|-&lt;br /&gt;
| Script directive                            || #MSUB                                  || #SBATCH&lt;br /&gt;
|-&lt;br /&gt;
| Job name                                    || -N &amp;lt;name&amp;gt;                              || --job-name=&amp;lt;name&amp;gt;  (-J &amp;lt;name&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Account                                     || -A &amp;lt;account&amp;gt;                           || --account=&amp;lt;account&amp;gt; (-A &amp;lt;account&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Queue                                       || -q &amp;lt;queue&amp;gt;                             || --partition=&amp;lt;partition&amp;gt; (-p &amp;lt;partition&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Wall time limit                             || -l walltime=&amp;lt;hh:mm:ss&amp;gt;                 || --time=&amp;lt;hh:mm:ss&amp;gt; (-t &amp;lt;hh:mm:ss&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node count                                  || -l nodes=&amp;lt;count&amp;gt;                       || --nodes=&amp;lt;count&amp;gt; (-N &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Core count                                  || -l procs=&amp;lt;count&amp;gt;                       || --ntasks=&amp;lt;count&amp;gt; (-n &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Process count per node                      || -l ppn=&amp;lt;count&amp;gt;                         || --ntasks-per-node=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Core count per process                      ||                                        || --cpus-per-task=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per node                       || -l mem=&amp;lt;limit&amp;gt;                         || --mem=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per process                    || -l pmem=&amp;lt;limit&amp;gt;                        || --mem-per-cpu=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Job array                                   || -t &amp;lt;array indices&amp;gt;                     || --array=&amp;lt;indices&amp;gt; (-a &amp;lt;indices&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node exclusive job                          || -l naccesspolicy=singlejob             || --exclusive&lt;br /&gt;
|-&lt;br /&gt;
| Initial working directory                   || -d &amp;lt;directory&amp;gt; (default: $HOME)        || --chdir=&amp;lt;directory&amp;gt; (-D &amp;lt;directory&amp;gt;) (default: submission directory)&lt;br /&gt;
|-&lt;br /&gt;
| Standard output file                        || -o &amp;lt;file path&amp;gt;                         || --output=&amp;lt;file&amp;gt; (-o &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Standard error file                         || -e &amp;lt;file path&amp;gt;                         || --error=&amp;lt;file&amp;gt;  (-e &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Combine stdout/stderr to stdout             || -j oe                                  || --output=&amp;lt;combined stdout/stderr file&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Mail notification events                    || -m &amp;lt;event&amp;gt;                             || --mail-type=&amp;lt;events&amp;gt; (valid types include: NONE, BEGIN, END, FAIL, ALL)&lt;br /&gt;
|-&lt;br /&gt;
| Export environment to job                   || -V                                     || --export=ALL (default)&lt;br /&gt;
|-&lt;br /&gt;
| Don&#039;t export environment to job             || (default)                              || --export=NONE&lt;br /&gt;
|-&lt;br /&gt;
| Export environment variables to job         || -v &amp;lt;var[=value][,var2=value2[, ...]]&amp;gt;  || --export=&amp;lt;var[=value][,var2=value2[,...]]&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* The default initial job working directory is $HOME for Moab. For Slurm the default working directory is the directory you submit your job from.&lt;br /&gt;
* By default Moab does not export any environment variables to the job&#039;s runtime environment. With Slurm, most of the login environment variables are exported to your job&#039;s runtime environment, including environment variables from software modules that were loaded at job submission time (and also the $HOSTNAME variable).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Commonly used Moab/Torque script environment variables and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Information                 !! Moab                !! Torque               !! Slurm                                     &lt;br /&gt;
|-&lt;br /&gt;
| Job name                     || $MOAB_JOBNAME        || $PBS_JOBNAME        || $SLURM_JOB_NAME                           &lt;br /&gt;
|-&lt;br /&gt;
| Job ID                       || $MOAB_JOBID          || $PBS_JOBID          || $SLURM_JOB_ID                             &lt;br /&gt;
|-&lt;br /&gt;
| Submit directory             || $MOAB_SUBMITDIR      || $PBS_O_WORKDIR      || $SLURM_SUBMIT_DIR                         &lt;br /&gt;
|-&lt;br /&gt;
| Number of nodes allocated    || $MOAB_NODECOUNT      || $PBS_NUM_NODES      || $SLURM_JOB_NUM_NODES (and: $SLURM_NNODES) &lt;br /&gt;
|-&lt;br /&gt;
| Node list                    || $MOAB_NODELIST       || cat $PBS_NODEFILE   || $SLURM_JOB_NODELIST                       &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes          || $MOAB_PROCCOUNT      || $PBS_TASKNUM        || $SLURM_NTASKS                             &lt;br /&gt;
|-&lt;br /&gt;
| Requested tasks per node     || ---                    || $PBS_NUM_PPN        || $SLURM_NTASKS_PER_NODE                    &lt;br /&gt;
|-&lt;br /&gt;
| Requested CPUs per task      || ---                  || ---                 || $SLURM_CPUS_PER_TASK                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array index              || $MOAB_JOBARRAYINDEX  || $PBS_ARRAY_INDEX    || $SLURM_ARRAY_TASK_ID                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array range              || $MOAB_JOBARRAYRANGE  || -                   || $SLURM_ARRAY_TASK_COUNT                   &lt;br /&gt;
|-&lt;br /&gt;
| Queue name                   || $MOAB_CLASS          || $PBS_QUEUE          || $SLURM_JOB_PARTITION                      &lt;br /&gt;
|-&lt;br /&gt;
| QOS name                     || $MOAB_QOS            || ---                 || $SLURM_JOB_QOS                            &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes per node || ---                  || $PBS_NUM_PPN        || $SLURM_TASKS_PER_NODE                     &lt;br /&gt;
|-&lt;br /&gt;
| Job user                     || $MOAB_USER           || $PBS_O_LOGNAME      || $SLURM_JOB_USER                           &lt;br /&gt;
|-&lt;br /&gt;
| Hostname                     || $MOAB_MACHINE        || $PBS_O_HOST         || $SLURMD_NODENAME                          &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The following chapters contain short examples for the most common job types. For a full overview of SLURM see the [[BwUniCluster_2.0_Slurm_common_Features|full article]].&lt;br /&gt;
&lt;br /&gt;
=Serial Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n 1&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=1&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=1&#039;&#039;&#039;).&lt;br /&gt;
* Use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* If you want to use one node exclusively, you must enter the whole memory (&#039;&#039;&#039;-m 96327&#039;&#039;&#039; or &#039;&#039;&#039;--mem=96327&#039;&#039;&#039;).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a serial job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 60 -n 1 -m 96327 ./job.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script job.sh (which executes a serial program) runs for 60 minutes exclusively on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only one number is entered behind &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N 1&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=1&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); it can also be a number between 41 and 80 (because of active hyperthreading).&lt;br /&gt;
* Use the option &#039;&#039;&#039;-m&#039;&#039;&#039; or &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* Use the option &#039;&#039;&#039;--export&#039;&#039;&#039; to set the needed environment variable OMP_NUM_THREADS for the batch job. Adding &#039;&#039;&#039;ALL&#039;&#039;&#039; means to  pass all interactively set environment variables to the batch job.&lt;br /&gt;
* If you want to use one node exclusively, you must either enter the whole memory (&#039;&#039;-m 96327&#039;&#039; or &#039;&#039;--mem=96327&#039;&#039;) or set the number of threads greater than 39.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a multithreaded job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 1:00:00 -N 1 -c 20 -m 50gb --export=ALL,OMP_NUM_THREADS=20 ./job_threaded.sh &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded.sh&#039;&#039;&#039; (containing a multithreaded program) runs for 1 hour in shared mode on 20 cores, requesting 50 GB of memory, on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs within one node=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only a single number is given after &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* Use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* If you want to use one node exclusively, you must either request the whole memory (&#039;&#039;--mem=96327&#039;&#039;) or set the number of MPI tasks greater than 39.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 600 -n 10 --mem=40000 ./job_mpi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (containing an MPI program after loading the appropriate MPI module) is started and runs for 10 hours in shared mode on 10 cores, requesting 40000 MB on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only a single number is given after &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (28 for Broadwell nodes) (because of 40 (28) cores within one node); you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are always allocated exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 48:00:00 -N 10 --ntasks-per-node=40 ./job_mpi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (containing an MPI program after loading the appropriate MPI module) is started and runs for 2 days on 400 cores on ten batch nodes.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded + MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only a single number is given after &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;z&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; usually should be 1 or 2 and &#039;&#039;&#039;&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039; usually 40 (28 on Broadwell nodes); you can utilize hyperthreading if you want.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are always allocated exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by socket|node:PE=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; of the command mpirun must be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a multithreaded MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 2-12 -N 10 --ntasks-per-node=2 -c 20 ./job_threaded_mpi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded_mpi.sh&#039;&#039;&#039; (containing a multithreaded MPI program after loading the appropriate MPI module) is started and runs for 2.5 days on 400 cores with 20 MPI tasks and 20 threads per task on ten batch nodes. Here the options &#039;&#039;&#039;--bind-to core --map-by socket:PE=10&#039;&#039;&#039; of the command mpirun must be used.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6452</id>
		<title>BwUniCluster2.0/Batch System Migration Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_System_Migration_Guide&amp;diff=6452"/>
		<updated>2020-04-23T14:21:54Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* General Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;While the former bwUniCluster 1 system used the combination Moab/SLURM for the batch system queue, bwUniCluster 2.0 uses only SLURM. This means that most job scripts and workflows which relied on Moab-specific pragmas and commands have to be changed.&lt;br /&gt;
&lt;br /&gt;
=General Overview=&lt;br /&gt;
&lt;br /&gt;
Job parameters can be passed to SLURM in the same ways as with Moab.&lt;br /&gt;
&lt;br /&gt;
* Instead of the #MOAB or #PBS pragmas, the script directive #SBATCH has to be used within job files.&lt;br /&gt;
&lt;br /&gt;
* Instead of the Moab commands, the corresponding SLURM commands have to be used.&lt;br /&gt;
&lt;br /&gt;
A general mapping of Moab to SLURM commands can be found in the following table:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Moab command !! SLURM command&lt;br /&gt;
|-&lt;br /&gt;
| msub || sbatch&lt;br /&gt;
|-&lt;br /&gt;
| msub -I || salloc&lt;br /&gt;
|-&lt;br /&gt;
| canceljob || scancel&lt;br /&gt;
|-&lt;br /&gt;
| showq || squeue&lt;br /&gt;
|-&lt;br /&gt;
| checkjob $JOBID || scontrol show job $JOBID&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== How to convert Moab batch job scripts to Slurm? ==&lt;br /&gt;
&lt;br /&gt;
Replace Moab/Torque job specification flags and environment variables in your job&lt;br /&gt;
scripts by their corresponding Slurm counterparts.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Commonly used Moab job specification flags and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Option !! Moab (msub) !! Slurm (sbatch)&lt;br /&gt;
|-&lt;br /&gt;
| Script directive                            || #MSUB                                  || #SBATCH&lt;br /&gt;
|-&lt;br /&gt;
| Job name                                    || -N &amp;lt;name&amp;gt;                              || --job-name=&amp;lt;name&amp;gt;  (-J &amp;lt;name&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Account                                     || -A &amp;lt;account&amp;gt;                           || --account=&amp;lt;account&amp;gt; (-A &amp;lt;account&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Queue                                       || -q &amp;lt;queue&amp;gt;                             || --partition=&amp;lt;partition&amp;gt; (-p &amp;lt;partition&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Wall time limit                             || -l walltime=&amp;lt;hh:mm:ss&amp;gt;                 || --time=&amp;lt;hh:mm:ss&amp;gt; (-t &amp;lt;hh:mm:ss&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node count                                  || -l nodes=&amp;lt;count&amp;gt;                       || --nodes=&amp;lt;count&amp;gt; (-N &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Core count                                  || -l procs=&amp;lt;count&amp;gt;                       || --ntasks=&amp;lt;count&amp;gt; (-n &amp;lt;count&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Process count per node                      || -l ppn=&amp;lt;count&amp;gt;                         || --ntasks-per-node=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Core count per process                      ||                                        || --cpus-per-task=&amp;lt;count&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per node                       || -l mem=&amp;lt;limit&amp;gt;                         || --mem=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Memory limit per process                    || -l pmem=&amp;lt;limit&amp;gt;                        || --mem-per-cpu=&amp;lt;limit&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Job array                                   || -t &amp;lt;array indices&amp;gt;                     || --array=&amp;lt;indices&amp;gt; (-a &amp;lt;indices&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Node exclusive job                          || -l naccesspolicy=singlejob             || --exclusive&lt;br /&gt;
|-&lt;br /&gt;
| Initial working directory                   || -d &amp;lt;directory&amp;gt; (default: $HOME)        || --chdir=&amp;lt;directory&amp;gt; (-D &amp;lt;directory&amp;gt;) (default: submission directory)&lt;br /&gt;
|-&lt;br /&gt;
| Standard output file                        || -o &amp;lt;file path&amp;gt;                         || --output=&amp;lt;file&amp;gt; (-o &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Standard error file                         || -e &amp;lt;file path&amp;gt;                         || --error=&amp;lt;file&amp;gt;  (-e &amp;lt;file&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Combine stdout/stderr to stdout             || -j oe                                  || --output=&amp;lt;combined stdout/stderr file&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Mail notification events                    || -m &amp;lt;event&amp;gt;                             || --mail-type=&amp;lt;events&amp;gt; (valid types include: NONE, BEGIN, END, FAIL, ALL)&lt;br /&gt;
|-&lt;br /&gt;
| Export environment to job                   || -V                                     || --export=ALL (default)&lt;br /&gt;
|-&lt;br /&gt;
| Don&#039;t export environment to job             || (default)                              || --export=NONE&lt;br /&gt;
|-&lt;br /&gt;
| Export environment variables to job         || -v &amp;lt;var[=value][,var2=value2[, ...]]&amp;gt;  || --export=&amp;lt;var[=value][,var2=value2[,...]]&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* Default initial job working directory is $HOME for Moab. For Slurm the default working directory is where you submit your job from.&lt;br /&gt;
* By default Moab does not export any environment variables to the job&#039;s runtime environment. With Slurm, most of the login environment variables are exported to your job&#039;s runtime environment. This includes environment variables from software modules that were loaded at job submission time (and also the $HOSTNAME variable).&lt;br /&gt;
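&lt;br /&gt;
As an illustration of the table above, a minimal sketch of a converted job header could look as follows (the job name, queue/partition and resource values are only placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Former Moab style (deactivated here, do not use with Slurm):&lt;br /&gt;
##MSUB -N myjob&lt;br /&gt;
##MSUB -l nodes=2&lt;br /&gt;
##MSUB -l ppn=40&lt;br /&gt;
##MSUB -l walltime=02:00:00&lt;br /&gt;
&lt;br /&gt;
# Slurm counterparts according to the table above:&lt;br /&gt;
#SBATCH --job-name=myjob&lt;br /&gt;
#SBATCH --partition=multiple&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
&lt;br /&gt;
./my_program   # placeholder for your executable&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;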
&lt;br /&gt;
&#039;&#039;&#039;Commonly used Moab/Torque script environment variables and their Slurm equivalents&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Information                 !! Moab                !! Torque               !! Slurm                                     &lt;br /&gt;
|-&lt;br /&gt;
| Job name                     || $MOAB_JOBNAME        || $PBS_JOBNAME        || $SLURM_JOB_NAME                           &lt;br /&gt;
|-&lt;br /&gt;
| Job ID                       || $MOAB_JOBID          || $PBS_JOBID          || $SLURM_JOB_ID                             &lt;br /&gt;
|-&lt;br /&gt;
| Submit directory             || $MOAB_SUBMITDIR      || $PBS_O_WORKDIR      || $SLURM_SUBMIT_DIR                         &lt;br /&gt;
|-&lt;br /&gt;
| Number of nodes allocated    || $MOAB_NODECOUNT      || $PBS_NUM_NODES      || $SLURM_JOB_NUM_NODES (and: $SLURM_NNODES) &lt;br /&gt;
|-&lt;br /&gt;
| Node list                    || $MOAB_NODELIST       || cat $PBS_NODEFILE   || $SLURM_JOB_NODELIST                       &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes          || $MOAB_PROCCOUNT      || $PBS_TASKNUM        || $SLURM_NTASKS                             &lt;br /&gt;
|-&lt;br /&gt;
| Requested tasks per node     || ---                    || $PBS_NUM_PPN        || $SLURM_NTASKS_PER_NODE                    &lt;br /&gt;
|-&lt;br /&gt;
| Requested CPUs per task      || ---                  || ---                 || $SLURM_CPUS_PER_TASK                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array index              || $MOAB_JOBARRAYINDEX  || $PBS_ARRAY_INDEX    || $SLURM_ARRAY_TASK_ID                      &lt;br /&gt;
|-&lt;br /&gt;
| Job array range              || $MOAB_JOBARRAYRANGE  || -                   || $SLURM_ARRAY_TASK_COUNT                   &lt;br /&gt;
|-&lt;br /&gt;
| Queue name                   || $MOAB_CLASS          || $PBS_QUEUE          || $SLURM_JOB_PARTITION                      &lt;br /&gt;
|-&lt;br /&gt;
| QOS name                     || $MOAB_QOS            || ---                 || $SLURM_JOB_QOS                            &lt;br /&gt;
|-&lt;br /&gt;
| Number of processes per node || ---                   || $PBS_NUM_PPN        || $SLURM_TASKS_PER_NODE                     &lt;br /&gt;
|-&lt;br /&gt;
| Job user                     || $MOAB_USER           || $PBS_O_LOGNAME      || $SLURM_JOB_USER                           &lt;br /&gt;
|-&lt;br /&gt;
| Hostname                     || $MOAB_MACHINE        || $PBS_O_HOST         || $SLURMD_NODENAME                          &lt;br /&gt;
|}&lt;br /&gt;
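&lt;br /&gt;
As a short sketch (the resource values are arbitrary), these Slurm variables can be used directly inside a job script, e.g. to document where and how a job ran:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
&lt;br /&gt;
# Print some of the Slurm environment variables listed above&lt;br /&gt;
echo &amp;quot;Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) was submitted from ${SLURM_SUBMIT_DIR}&amp;quot;&lt;br /&gt;
echo &amp;quot;Running ${SLURM_NTASKS} tasks on ${SLURM_JOB_NUM_NODES} node(s): ${SLURM_JOB_NODELIST}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;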
&lt;br /&gt;
The following chapters contain short examples for the most common job types. For a full overview of SLURM see the [[BwUniCluster_2.0_Slurm_common_Features|full article]].&lt;br /&gt;
&lt;br /&gt;
=Serial Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only a single number is given after &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n 1&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=1&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=1&#039;&#039;&#039;).&lt;br /&gt;
* Use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* If you want to use one node exclusively, you must request the whole memory (&#039;&#039;&#039;--mem=96327&#039;&#039;&#039;).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a serial job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 60 -n 1 --mem=96327 ./job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script job.sh (containing the execution of a serial program) is started and runs for 60 minutes exclusively on a batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded Programs=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only a single number is given after &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N 1&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=1&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); it can also be a number between 41 and 80 (because of active hyperthreading).&lt;br /&gt;
* Use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* Use the option &#039;&#039;&#039;--export&#039;&#039;&#039; to set the needed environment variable OMP_NUM_THREADS for the batch job. Adding &#039;&#039;&#039;ALL&#039;&#039;&#039; means to pass all interactively set environment variables to the batch job.&lt;br /&gt;
* If you want to use one node exclusively, you must either request the whole memory (&#039;&#039;--mem=96327&#039;&#039;) or set the number of threads greater than 39.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a multithreaded job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 1:00:00 -N 1 -c 20 --mem=50gb --export=ALL,OMP_NUM_THREADS=20 ./job_threaded.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded.sh&#039;&#039;&#039; (containing a multithreaded program) is started and runs for 1 hour in shared mode on 20 cores, requesting 50 GB on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs within one node=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only a single number is given after &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-n &#039;&#039;x&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--ntasks=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=1,ppn=&#039;&#039;x&#039;&#039; &#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (because of 40 cores within one node); you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* Use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l pmem&#039;&#039;&#039;). The default unit is MegaByte.&lt;br /&gt;
* If you want to use one node exclusively, you must either request the whole memory (&#039;&#039;--mem=96327&#039;&#039;) or set the number of MPI tasks greater than 39.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -t 600 -n 10 --mem=40000 ./job_mpi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (containing an MPI program after loading the appropriate MPI module) is started and runs for 10 hours in shared mode on 10 cores, requesting 40000 MB on one batch node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only a single number is given after &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; can be a number between 1 and 40 (28 for Broadwell nodes) (because of 40 (28) cores within one node); you shouldn&#039;t utilize hyperthreading.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are always allocated exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by core|socket|node&#039;&#039;&#039; of the command mpirun should be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for an MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 48:00:00 -N 10 --ntasks-per-node=40 ./job_mpi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_mpi.sh&#039;&#039;&#039; (containing an MPI program after loading the appropriate MPI module) is started and runs for 2 days on 400 cores on ten batch nodes.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Multithreaded + MPI Parallel Programs on many nodes=&lt;br /&gt;
&lt;br /&gt;
* Use the time option &#039;&#039;&#039;-t&#039;&#039;&#039; or &#039;&#039;&#039;--time&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l walltime&#039;&#039;&#039;). If only a single number is given after &#039;&#039;&#039;-t&#039;&#039;&#039;, the default unit is minutes.&lt;br /&gt;
* Use the option &#039;&#039;&#039;-N &#039;&#039;y&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--nodes=&#039;&#039;y&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;--ntasks-per-node=&#039;&#039;x&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;-c &#039;&#039;z&#039;&#039;&#039;&#039;&#039; or &#039;&#039;&#039;--cpus-per-task=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; (instead of &#039;&#039;&#039;-l nodes=&#039;&#039;y&#039;&#039;,ppn=&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039;). &#039;&#039;&#039;&#039;&#039;x&#039;&#039;&#039;&#039;&#039; usually should be 1 or 2 and &#039;&#039;&#039;&#039;&#039;x*z&#039;&#039;&#039;&#039;&#039; usually 40 (28 on Broadwell nodes); you can utilize hyperthreading if you want.&lt;br /&gt;
* You shouldn&#039;t use the option &#039;&#039;&#039;--mem&#039;&#039;&#039; because the nodes are always allocated exclusively.&lt;br /&gt;
* Don&#039;t forget to load the appropriate MPI-module in your job script.&lt;br /&gt;
* If you are using OpenMPI, the options &#039;&#039;&#039;--bind-to core --map-by socket|node:PE=&#039;&#039;z&#039;&#039;&#039;&#039;&#039; of the command mpirun must be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Example for a multithreaded MPI job&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple -t 2-12 -N 10 --ntasks-per-node=2 -c 20 ./job_threaded_mpi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script &#039;&#039;&#039;job_threaded_mpi.sh&#039;&#039;&#039; (containing a multithreaded MPI program after loading the appropriate MPI module) is started and runs for 2.5 days on 400 cores with 20 MPI tasks and 20 threads per task on ten batch nodes. Here the options &#039;&#039;&#039;--bind-to core --map-by socket:PE=10&#039;&#039;&#039; of the command mpirun must be used.&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Slurm&amp;diff=6399</id>
		<title>BwUniCluster2.0/Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Slurm&amp;diff=6399"/>
		<updated>2020-04-20T11:53:04Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Job Submission : sbatch */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div id=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=  Slurm HPC Workload Manager = &lt;br /&gt;
== Specification == &lt;br /&gt;
Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any kind of calculation on the compute nodes of [[bwUniCluster 2.0|bwUniCluster 2.0]] requires the user to define the calculation as a sequence of commands or a single command, together with the required run time, number of CPU cores and main memory, and to submit all of this, i.e., the &#039;&#039;&#039;batch job&#039;&#039;&#039;, to a resource and workload managing software. On bwUniCluster 2.0 the workload managing software Slurm is installed. Therefore any job submission by the user has to be done with commands of the Slurm software. Slurm queues and runs user jobs based on fair sharing policies.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Slurm Commands (excerpt) ==&lt;br /&gt;
Some of the most used Slurm commands for non-administrators working on bwUniCluster 2.0.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Slurm commands !! Brief explanation&lt;br /&gt;
|-&lt;br /&gt;
| [[#Job Submission : sbatch|sbatch]] || Submits a job and queues it in an input queue [[https://slurm.schedmd.com/sbatch.html sbatch]] &lt;br /&gt;
|-&lt;br /&gt;
| [[#Detailed job information : scontrol show job|scontrol show job]] || Displays detailed job state information [[https://slurm.schedmd.com/scontrol.html scontrol]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#List of your submitted jobs : squeue|squeue]] || Displays information about active, eligible, blocked, and/or recently completed jobs [[https://slurm.schedmd.com/squeue.html squeue]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Start time of job or resources : squeue|squeue --start]] || Returns start time of submitted job or requested resources [[https://slurm.schedmd.com/squeue.html squeue]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Shows free resources : sinfo_t_idle|sinfo_t_idle]] || Shows what resources are available for immediate use [[https://slurm.schedmd.com/sinfo.html sinfo]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Canceling own jobs : scancel|scancel]] || Cancels a job (obsoleted!) [[https://slurm.schedmd.com/scancel.html scancel]]&lt;br /&gt;
|}&lt;br /&gt;
If your job was submitted to the &amp;quot;multiple&amp;quot; queue you can log into the allocated nodes via SSH as soon as the job is running.&lt;br /&gt;
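A minimal sketch of this workflow (the node name is only a placeholder taken from the NODELIST column of the squeue output):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue                # shows the state and the NODELIST of your jobs&lt;br /&gt;
$ ssh &amp;lt;nodename&amp;gt;        # log into one of the nodes allocated to your running job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;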
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* [https://slurm.schedmd.com/tutorials.html  Slurm Tutorials]&lt;br /&gt;
* [https://slurm.schedmd.com/pdfs/summary.pdf  Slurm command/option summary (2 pages)]&lt;br /&gt;
* [https://slurm.schedmd.com/man_index.html  Slurm Commands]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Job Submission : sbatch ==&lt;br /&gt;
Batch jobs are submitted by using the command &#039;&#039;&#039;sbatch&#039;&#039;&#039;. The main purpose of the &#039;&#039;&#039;sbatch&#039;&#039;&#039; command is to specify the resources that are needed to run the job. &#039;&#039;&#039;sbatch&#039;&#039;&#039; will then queue the batch job. However, the start of the batch job depends on the availability of the requested resources and the fair sharing value.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== sbatch Command Parameters ===&lt;br /&gt;
The syntax and use of &#039;&#039;&#039;sbatch&#039;&#039;&#039; can be displayed via:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ man sbatch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;sbatch&#039;&#039;&#039; options can be used from the command line or in your job script.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | sbatch Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -t &#039;&#039;time&#039;&#039;  or  --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| #SBATCH --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| Wall clock time limit.&amp;lt;br&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -N &#039;&#039;count&#039;&#039;  or  --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of nodes to be used.&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -n &#039;&#039;count&#039;&#039;  or  --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of tasks to be launched.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Maximum count (&amp;lt;= 28 and &amp;lt;= 40 resp.) of tasks per node.&amp;lt;br&amp;gt;(Replaces the option ppn of MOAB.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -c &#039;&#039;count&#039;&#039; or --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of CPUs required per (MPI-)task.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Memory in MegaByte per node.&amp;lt;br&amp;gt;(Default value is 128000 and 96000 MB resp., i.e. you should omit &amp;lt;br&amp;gt; the setting of this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Minimum Memory required per allocated CPU.&amp;lt;br&amp;gt;(Replaces the option pmem of MOAB. You should omit &amp;lt;br&amp;gt; the setting of this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| Notify user by email when certain event types occur.&amp;lt;br&amp;gt;Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
|  The specified mail-address receives email notification of state&amp;lt;br&amp;gt;changes as defined by --mail-type.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job output is stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job error messages are stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -J &#039;&#039;name&#039;&#039; or --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| Job name.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| #SBATCH --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| Identifies which environment variables from the submission &amp;lt;br&amp;gt; environment are propagated to the launched application. Default &amp;lt;br&amp;gt; is ALL. If you want to add variables to those of the submission&amp;lt;br&amp;gt; environment, the argument ALL must be included.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -A &#039;&#039;group-name&#039;&#039; or --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| #SBATCH --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| Charge the resources used by this job to the specified group. You may &amp;lt;br&amp;gt; need this option if your account is assigned to more &amp;lt;br&amp;gt; than one group. With the command &amp;quot;scontrol show job&amp;quot; the project &amp;lt;br&amp;gt; group the job is accounted on can be seen behind &amp;quot;Account=&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -p &#039;&#039;queue-name&#039;&#039; or --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| #SBATCH --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| Request a specific queue for the resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;LSDF&#039;&#039; or --constraint=&#039;&#039;LSDF&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=LSDF&lt;br /&gt;
| Job constraint LSDF Filesystems.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;BEEOND&#039;&#039; or --constraint=&#039;&#039;BEEOND&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=BEEOND&lt;br /&gt;
| Job constraint BeeOND file system.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== sbatch --partition  &#039;&#039;queues&#039;&#039; ====&lt;br /&gt;
Queue classes define maximum resources such as walltime, nodes and processes per node for each queue of the compute system. Details can be found here:&lt;br /&gt;
* [[BwUniCluster_2.0_Batch_Queues#sbatch_-p_queue|bwUniCluster 2.0 queue settings]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== sbatch Examples ===&lt;br /&gt;
==== Serial Programs ====&lt;br /&gt;
To submit a serial job that runs the script &#039;&#039;&#039;job.sh&#039;&#039;&#039; and that requires 5000 MB of main memory and 10 minutes of wall clock time&lt;br /&gt;
&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p dev_single -n 1 -t 10:00 --mem=5000  job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
b) add after the initial line of your script &#039;&#039;&#039;job.sh&#039;&#039;&#039; the lines (here with a high memory request):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
#SBATCH --mem=200gb&lt;br /&gt;
#SBATCH --job-name=simple&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and execute the modified script with the command line option &#039;&#039;--partition=fat&#039;&#039; (with &#039;&#039;--partition=(dev_)single&#039;&#039; maximum &#039;&#039;--mem=96gb&#039;&#039; is possible):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=fat job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options overrule script options.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded Programs ====&lt;br /&gt;
Multithreaded programs operate faster than serial programs on CPUs with multiple cores.&amp;lt;br&amp;gt;&lt;br /&gt;
Moreover, multiple threads of one process share resources such as memory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) a number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Because hyperthreading is switched on on bwUniCluster 2.0, the option --cpus-per-task (-c) must be set to 2*n, if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To submit a batch job called &#039;&#039;OpenMP_Test&#039;&#039; that runs a 40-fold threaded program &#039;&#039;omp_exe&#039;&#039; which requires 6000 MByte of total physical memory and total wall clock time of 40 minutes:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single --export=ALL,OMP_NUM_THREADS=40 -J OpenMP_Test -N 1 -c 80 -t 40 --mem=6000 ./omp_exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
* generate the script &#039;&#039;&#039;job_omp.sh&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=80&lt;br /&gt;
#SBATCH --time=40:00&lt;br /&gt;
#SBATCH --mem=6000mb   &lt;br /&gt;
#SBATCH --export=ALL,EXECUTABLE=./omp_exe&lt;br /&gt;
#SBATCH -J OpenMP_Test&lt;br /&gt;
&lt;br /&gt;
#Usually you should set&lt;br /&gt;
export KMP_AFFINITY=compact,1,0&lt;br /&gt;
#export KMP_AFFINITY=verbose,compact,1,0 prints messages concerning the supported affinity&lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_JOB_CPUS_PER_NODE}/2))&lt;br /&gt;
echo &amp;quot;Executable ${EXECUTABLE} running on ${SLURM_JOB_CPUS_PER_NODE} cores with ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=${EXECUTABLE}&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on the binding of threads to specific cores. If necessary, replace &amp;lt;placeholder&amp;gt; with the required modulefile to enable the OpenMP environment, then execute the script &#039;&#039;&#039;job_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;single&#039;&#039; as sbatch option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single job_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options overrule script options, e.g.,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=single --mem=200 job_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
overwrites the script setting of 6000 MByte with 200 MByte.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== MPI Parallel Programs ====&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi CPU and multi core systems. N-fold spawned processes of the MPI program, i.e., &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;,  run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Multiple MPI tasks must be launched via &#039;&#039;&#039;mpirun&#039;&#039;&#039;, e.g. 4 MPI tasks of &#039;&#039;my_par_program&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 4 my_par_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command runs 4 MPI tasks of &#039;&#039;my_par_program&#039;&#039; on the node you are currently logged in to.&lt;br /&gt;
To run this command with a loaded Intel MPI module, the environment variable I_MPI_HYDRA_BOOTSTRAP must be unset ( --&amp;gt; $ unset I_MPI_HYDRA_BOOTSTRAP).&lt;br /&gt;
&lt;br /&gt;
When running MPI parallel programs in a batch job, the interactive environment - particularly the loaded modules - will also be set in the batch job. If you want to set a defined module environment in your batch job, you have to purge all modules before loading the desired modules, as sketched below. &lt;br /&gt;
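A minimal sketch of such a defined module environment at the top of a job script (the version string is only a placeholder):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Start from a clean module environment inside the batch job&lt;br /&gt;
module purge&lt;br /&gt;
module load mpi/openmpi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;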
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI =====&lt;br /&gt;
&lt;br /&gt;
If you want to run jobs on batch nodes, generate a wrapper script &#039;&#039;job_ompi.sh&#039;&#039; for &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load mpi/openmpi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpirun --bind-to core --map-by core -report-bindings my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since Slurm instructs mpirun about the number of processes and node hostnames. &#039;&#039;&#039;ALWAYS&#039;&#039;&#039; use the MPI options &#039;&#039;&#039;&#039;&#039;--bind-to core&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;&#039;&#039;--map-by core|socket|node&#039;&#039;&#039;&#039;&#039;. Please type &#039;&#039;mpirun --help&#039;&#039; for an explanation of the different settings of the mpirun option &#039;&#039;--map-by&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Considering 4 OpenMPI tasks on a single node, each requiring 2000 MByte, and running for 1 hour, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -N 1 -n 4 --mem-per-cpu=2000 --time=01:00:00 ./job_ompi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Intel MPI =====&lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Use when a defined module environment related to Intel MPI is wished&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;   &lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since Slurm instructs mpirun about number of processes and node hostnames.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Launching and running 200 Intel MPI tasks on 5 nodes, each node requiring 80 GByte of memory, and running for 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=multiple -N 5 --ntasks-per-node=40 --mem=80gb -t 300 ./job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use 128 or more nodes, you must also set the environment variable as follows:           &amp;lt;BR&amp;gt;&lt;br /&gt;
export I_MPI_HYDRA_BRANCH_COUNT=-1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use the options perhost, ppn or rr, you must additionally set the environment variable I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off.&lt;br /&gt;
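A sketch of where these settings could be placed in the wrapper script &#039;&#039;job_impi.sh&#039;&#039;, before the mpiexec.hydra call (only needed in the situations described above):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Only for jobs using 128 or more nodes:&lt;br /&gt;
export I_MPI_HYDRA_BRANCH_COUNT=-1&lt;br /&gt;
# Only if the options perhost, ppn or rr are used:&lt;br /&gt;
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;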
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded + MPI parallel Programs ====&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multi CPUs with multiple cores. All threads of one process share resources such as memory. On the contrary MPI tasks do not share memory but can be spawned over different nodes. &#039;&#039;&#039;Because hyperthreading is switched on on bwUniCluster 2.0, the option --cpus-per-task (-c) must be set to 2*n, if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI with Multithreading =====&lt;br /&gt;
Multiple MPI tasks using &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; must be launched by the MPI parallel program &#039;&#039;&#039;mpirun&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;For OpenMPI&#039;&#039;&#039; a job-script to submit a batch job called &#039;&#039;job_ompi_omp.sh&#039;&#039; that runs an MPI program with 4 tasks and a 28-fold threaded program &#039;&#039;ompi_omp_program&#039;&#039; requiring 3000 MByte of physical memory per thread (using 28 threads per MPI task you will get 28*3000 MByte = 84000 MByte per MPI task) and total wall clock time of 3 hours looks like:&lt;br /&gt;
&amp;lt;!--b)--&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --cpus-per-task=56&lt;br /&gt;
#SBATCH --time=03:00:00&lt;br /&gt;
#SBATCH --mem=83gb    # 84000 MB = 84000/1024 GB = 82.03 GB&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/openmpi/3.1,EXECUTABLE=./ompi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_hybrid_%j.out&amp;quot;  &lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_CPUS_PER_TASK}/2))&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by socket:PE=${OMP_NUM_THREADS} -report-bindings&amp;quot;&lt;br /&gt;
export NUM_CORES=$((${SLURM_NTASKS}*${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXECUTABLE} running on ${NUM_CORES} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpirun -n ${SLURM_NTASKS} ${MPIRUN_OPTIONS} ${EXECUTABLE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_ompi_omp.sh&#039;&#039;&#039; by command sbatch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e ./job_ompi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* With the mpirun option &#039;&#039;--bind-to core&#039;&#039; MPI tasks and OpenMP threads are bound to physical cores.&lt;br /&gt;
* With the option &#039;&#039;--map-by node:PE=&amp;lt;value&amp;gt;&#039;&#039; (neighbored) MPI tasks will be attached to different nodes and each MPI task is bound to the first core of a node. &amp;lt;value&amp;gt; must be set to ${OMP_NUM_THREADS}.&lt;br /&gt;
* The option &#039;&#039;-report-bindings&#039;&#039; shows the bindings between MPI tasks and physical cores.&lt;br /&gt;
* The mpirun-options &#039;&#039;&#039;--bind-to core&#039;&#039;&#039;, &#039;&#039;&#039;--map-by socket|...|node:PE=&amp;lt;value&amp;gt;&#039;&#039;&#039; should always be used when running a multithreaded MPI program.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Intel MPI with Multithreading =====&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multi CPUs with multiple cores. All threads of one process share resources such as memory. On the contrary MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039; a job-script to submit a batch job called &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program with 10 tasks and a 40-fold threaded program &#039;&#039;impi_omp_program&#039;&#039; requiring 96000 MByte of total physical memory per task and total wall clock time of 1 hour looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=10&lt;br /&gt;
#SBATCH --cpus-per-task=80&lt;br /&gt;
#SBATCH --time=60&lt;br /&gt;
#SBATCH --mem=96000&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/impi,EXE=./impi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_impi_omp_%j.out&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=compact,1,0&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to Intel MPI is wished &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_CPUS_PER_TASK}/2))&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;-binding domain=omp:compact -print-rank-map -envall&amp;quot;&lt;br /&gt;
export NUM_PROCS=$((${SLURM_NTASKS}*${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${NUM_PROCS} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${SLURM_NTASKS} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on the binding of threads to specific cores. If you only run one MPI task per node please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&amp;lt;BR&amp;gt;&lt;br /&gt;
If you want to use 128 or more nodes, you must also set the environment variable as follows:           &amp;lt;BR&amp;gt;&lt;br /&gt;
export I_MPI_HYDRA_BRANCH_COUNT=-1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use the options perhost, ppn or rr, you must additionally set the environment variable I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; by command sbatch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple ./job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the bindings between MPI tasks and nodes (not very beneficial). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to a particular processor; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. If you choose 2 MPI tasks per node, you should use &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket, as sketched below. &lt;br /&gt;
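As a sketch based on the description above, for 2 MPI tasks per node the MPIRUN_OPTIONS line of the script &#039;&#039;job_impi_omp.sh&#039;&#039; would then be changed as follows:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Variant for 2 MPI tasks per node (one MPI process per socket);&lt;br /&gt;
# inner quotes are not needed because the argument contains no spaces&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;-binding cell=unit;map=bunch -print-rank-map -envall&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;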
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Chain jobs ====&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In those situations it is recommended to solve the problem by a job chain. A job chain is a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
####################################&lt;br /&gt;
## simple Slurm submitter script to setup   ## &lt;br /&gt;
## a chain of jobs using Slurm                    ##&lt;br /&gt;
####################################&lt;br /&gt;
## ver.  : 2018-11-27, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## https://slurm.schedmd.com/sbatch.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Differ slurm_opt depending on chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      slurm_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      slurm_opt=&amp;quot;-d ${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and sbatch command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   sbatch --export=myloop_counter=${myloop_counter} ${slurm_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store job ID for the next iteration by parsing the output of the sbatch command&lt;br /&gt;
   jobID=$(sbatch -p &amp;lt;queue&amp;gt; --export=ALL,myloop_counter=${myloop_counter} ${slurm_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;s/[S,a-z]* //g&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occured&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
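The jobscript &#039;&#039;chain_job.sh&#039;&#039; referenced above is not shown here; a minimal hypothetical sketch could look like this (resource values and the executable name are only placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
&lt;br /&gt;
# myloop_counter is passed in via --export by the submitter script above&lt;br /&gt;
echo &amp;quot;This is chain link number ${myloop_counter}&amp;quot;&lt;br /&gt;
./my_program   # placeholder for the real work of one chain link&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;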
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
&lt;br /&gt;
The nodes in the gpu_4 and gpu_8 queues have 4 or 8 NVIDIA Tesla V100 GPUs. Just submitting a job to these queues is not enough to also allocate one or more GPUs; you have to do so using the &amp;quot;--gres=gpu&amp;quot; parameter. You have to specify how many GPUs your job needs, e.g. &amp;quot;--gres=gpu:2&amp;quot; will request two GPUs.&lt;br /&gt;
&lt;br /&gt;
The GPU nodes are shared between multiple jobs if the jobs don&#039;t request all the GPUs in a node and there are enough resources to run more than one job. The individual GPUs are always bound to a single job and will not be shared between different jobs.&lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh the line requesting the GPUs:&amp;lt;br&amp;gt;   #SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=40&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 40 -t 02:00:00 --mem 4000 --gres=gpu:2 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
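For testing it can be convenient to allocate a GPU node interactively first; a minimal sketch (partition, time limit and number of GPUs are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p gpu_4 -n 1 -t 30 --gres=gpu:1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;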
If you start an interactive session on one of the GPU nodes, you can use the &amp;quot;nvidia-smi&amp;quot; command to list the GPUs allocated to your job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
Sun Mar 29 15:20:05 2020       &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla V100-SXM2...  Off  | 00000000:3A:00.0 Off |                    0 |&lt;br /&gt;
| N/A   29C    P0    39W / 300W |      9MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
|   1  Tesla V100-SXM2...  Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   30C    P0    41W / 300W |      8MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID   Type   Process name                             Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|    0     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
|    1     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== LSDF Online Storage ====&lt;br /&gt;
For special use cases the LSDF Online Storage can be used on the HPC cluster nodes of bwUniCluster 2.0. Please request this service separately ([https://www.lsdf.kit.edu/os/storagerequest/ LSDF Storage Request]).&lt;br /&gt;
To mount the LSDF Online Storage on the compute nodes during the job runtime,&lt;br /&gt;
the constraint flag &amp;quot;LSDF&amp;quot; has to be set.  &lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh the line requesting the LSDF Online Storage:&amp;lt;br&amp;gt;   #SBATCH --constraint=LSDF&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=120&lt;br /&gt;
#SBATCH --mem=200&lt;br /&gt;
#SBATCH --constraint=LSDF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 1 -t 2:00:00 --mem 200 -C LSDF job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the usage of the LSDF Online Storage&lt;br /&gt;
the following environment variables are available: $LSDF, $LSDFPROJECTS, $LSDFHOME.&lt;br /&gt;
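A minimal sketch of how these variables could be used inside a job script (the project directory name is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --constraint=LSDF&lt;br /&gt;
cd $LSDFPROJECTS/&amp;lt;your_project&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;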
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====BeeOND (BeeGFS On-Demand)====&lt;br /&gt;
&lt;br /&gt;
BeeOND instances are integrated into the prolog and epilog scripts of the cluster batch system Slurm. A private BeeOND file system can be requested for the compute nodes during the job runtime with the constraint flag &amp;quot;BEEOND&amp;quot; ([[BwUniCluster_2.0_Slurm_common_Features#sbatch_Command_Parameters|Slurm Command Parameters]])&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH ...&lt;br /&gt;
#SBATCH --constraint=BEEOND&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After your job has started you can find the private on-demand file system in the directory &#039;&#039;&#039;/mnt/odfs/$SLURM_JOB_ID&#039;&#039;&#039;. The mountpoint provides several pre-configured directories with different stripe counts:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#for small files (stripe count = 1)&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_1&lt;br /&gt;
#stripe count = 4&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_default&lt;br /&gt;
#stripe count = 8, 16 or 32, use these directories for medium-sized and large files or when using MPI-IO&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_8, /mnt/odfs/$SLURM_JOB_ID/stripe_16 or /mnt/odfs/$SLURM_JOB_ID/stripe_32&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you request fewer nodes than the stripe count, the stripe count is reduced to the number of nodes, e.g. if you request only 8 nodes, the directory with stripe count 16 effectively uses a stripe count of 8.&lt;br /&gt;
&lt;br /&gt;
The capacity of the private file system depends on the number of nodes. For each node you get 250 GB.&lt;br /&gt;
&lt;br /&gt;
!!! Be careful when creating large files: always use a directory with a sufficiently high stripe count.&lt;br /&gt;
For example, if your largest file is 1.1 TB, you have to use a stripe count larger than 4 (4 x 250 GB = 1 TB).  &lt;br /&gt;
&lt;br /&gt;
If you request 100 nodes for your job, the private file system has a capacity of 100 * 250 GB, i.e. approximately 25 TB.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommendation:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The private file system uses its own metadata server, which is started on the first node of the job. Depending on your application, this metadata server can consume a considerable amount of CPU power, so adding an extra node to your job may improve the usability of the on-demand file system. Start your application with the MPI option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -nolocal myapplication&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
With the -nolocal option, the node where mpirun is initiated is not used for your application. This node is then fully available for the metadata server of the requested on-demand file system.&lt;br /&gt;
&lt;br /&gt;
Example job script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#very simple example on how to use a private on-demand file system&lt;br /&gt;
#SBATCH -N 10&lt;br /&gt;
#SBATCH --constraint=BEEOND&lt;br /&gt;
&lt;br /&gt;
#create a workspace &lt;br /&gt;
ws_allocate myresults-$SLURM_JOB_ID 90&lt;br /&gt;
RESULTDIR=`ws_find myresults-$SLURM_JOB_ID`&lt;br /&gt;
&lt;br /&gt;
#Set ENV variable to on-demand file system&lt;br /&gt;
ODFSDIR=/mnt/odfs/$SLURM_JOB_ID/stripe_16/&lt;br /&gt;
&lt;br /&gt;
#start application and write results to on-demand file system&lt;br /&gt;
mpirun -nolocal myapplication -o $ODFSDIR/results&lt;br /&gt;
&lt;br /&gt;
#Copy the results back after your application has ended&lt;br /&gt;
rsync -av $ODFSDIR/results $RESULTDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Start time of job or resources : squeue --start ==&lt;br /&gt;
The command can be used by any user to display the estimated start time of a job, based on historical usage, the earliest available reservable resources and the priority-based backlog. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via the manpage (man squeue). &lt;br /&gt;
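A minimal usage sketch (the job ID is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue --start&lt;br /&gt;
$ squeue --start -j &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;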
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by &#039;&#039;&#039;any user&#039;&#039;&#039;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List of your submitted jobs : squeue ==&lt;br /&gt;
Displays information about your own active, pending and/or recently completed jobs. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via the manpage (man squeue).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by any user.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Flags ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Flag !! Description&lt;br /&gt;
|-&lt;br /&gt;
| -l, --long&lt;br /&gt;
| Report more of the available information for the selected jobs or job steps, subject to any constraints specified.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&#039;&#039;squeue&#039;&#039; example on bwUniCluster 2.0 &amp;lt;small&amp;gt;(Only your own jobs are displayed!)&amp;lt;/small&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue &lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18088744    single CPV.sbat   ab1234 PD       0:00      1 (Priority)&lt;br /&gt;
          18098414  multiple CPV.sbat   ab1234 PD       0:00      2 (Priority) &lt;br /&gt;
          18090089  multiple CPV.sbat   ab1234  R       2:27      2 uc2n[127-128]&lt;br /&gt;
$ squeue -l&lt;br /&gt;
            JOBID PARTITION     NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON) &lt;br /&gt;
         18088654    single CPV.sbat   ab1234 COMPLETI       4:29   2:00:00      1 uc2n374&lt;br /&gt;
         18088785    single CPV.sbat   ab1234  PENDING       0:00   2:00:00      1 (Priority)&lt;br /&gt;
         18098414  multiple CPV.sbat   ab1234  PENDING       0:00   2:00:00      2 (Priority)&lt;br /&gt;
         18088683    single CPV.sbat   ab1234  RUNNING       0:14   2:00:00      1 uc2n413  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* The output of &#039;&#039;squeue&#039;&#039; shows how many jobs of yours are running or pending and how many nodes are in use by your jobs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Shows free resources : sinfo_t_idle ==&lt;br /&gt;
The Slurm command sinfo is used to view partition and node information for a system running Slurm. It incorporates down time, reservations, and node state information in determining the available backfill window. The plain sinfo command can only be used by administrators.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
SCC has prepared a special script (sinfo_t_idle) to find out how many nodes are idle and available for immediate use in each partition. It is anticipated that users will use this information to submit jobs that meet these criteria and thus obtain quick job turnaround times. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be used by any user or administrator. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Example ===&lt;br /&gt;
* The following command displays what resources are available for immediate use for the whole partition.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sinfo_t_idle&lt;br /&gt;
Partition dev_multiple  :      8 nodes idle&lt;br /&gt;
Partition multiple      :    332 nodes idle&lt;br /&gt;
Partition dev_single    :      4 nodes idle&lt;br /&gt;
Partition single        :     76 nodes idle&lt;br /&gt;
Partition long          :     80 nodes idle&lt;br /&gt;
Partition fat           :      5 nodes idle&lt;br /&gt;
Partition dev_special   :    342 nodes idle&lt;br /&gt;
Partition special       :    342 nodes idle&lt;br /&gt;
Partition dev_multiple_e:      7 nodes idle&lt;br /&gt;
Partition multiple_e    :    335 nodes idle&lt;br /&gt;
Partition gpu_4         :     12 nodes idle&lt;br /&gt;
Partition gpu_8         :      6 nodes idle&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* In the above example, jobs can be started immediately in all partitions.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Detailed job information : scontrol show job ==&lt;br /&gt;
scontrol show job displays detailed job state information and diagnostic output for all of your jobs or for a single specified job. Detailed information is available for active, pending and recently completed jobs. The command scontrol is explained in detail on the webpage https://slurm.schedmd.com/scontrol.html or via the manpage (man scontrol). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of all your jobs in normal mode: scontrol show job&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of a job with &amp;lt;jobid&amp;gt; in normal mode: scontrol show job &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
* End users can use scontrol show job to view the status of their &#039;&#039;&#039;own jobs&#039;&#039;&#039; only. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Arguments ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Option !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|- style=&amp;quot;width:12%;&amp;quot; &lt;br /&gt;
| -d&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Detailed mode&lt;br /&gt;
| Example: Display the state with jobid 18089884 in detailed mode. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt;scontrol -d show job 18089884&amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Scontrol show job Example ===&lt;br /&gt;
Here is an example from bwUniCluster 2.0.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue    # show my own jobs (here the userid is replaced!)&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18089884  multiple CPV.sbat   bq0742  R      33:44      2 uc2n[165-166]&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
$ # now, see what&#039;s up with my job with jobid 18089884&lt;br /&gt;
$ &lt;br /&gt;
$ scontrol show job 18089884&lt;br /&gt;
&lt;br /&gt;
JobId=18089884 JobName=CPV.sbatch&lt;br /&gt;
   UserId=bq0742(8946) GroupId=scc(12345) MCS_label=N/A&lt;br /&gt;
   Priority=3 Nice=0 Account=kit QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   RunTime=00:35:06 TimeLimit=02:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2020-03-16T14:14:54 EligibleTime=2020-03-16T14:14:54&lt;br /&gt;
   AccrueTime=2020-03-16T14:14:54&lt;br /&gt;
   StartTime=2020-03-16T15:12:51 EndTime=2020-03-16T17:12:51 Deadline=N/A&lt;br /&gt;
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2020-03-16T15:12:51&lt;br /&gt;
   Partition=multiple AllocNode:Sid=uc2n995:5064&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=uc2n[165-166]&lt;br /&gt;
   BatchHost=uc2n165&lt;br /&gt;
   NumNodes=2 NumCPUs=160 NumTasks=80 CPUs/Task=1 ReqB:S:C:T=0:0:*:1&lt;br /&gt;
   TRES=cpu=160,mem=96320M,node=2,billing=160&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=40:0:*:1 CoreSpec=*&lt;br /&gt;
   MinCPUsNode=40 MinMemoryCPU=1204M MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) DelayBoot=00:00:00&lt;br /&gt;
   OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/CPV.sbatch&lt;br /&gt;
   WorkDir=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin&lt;br /&gt;
   StdErr=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   Power=&lt;br /&gt;
   MailUser=(null) MailType=NONE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can use standard Linux pipe commands to filter the very detailed output of scontrol show job.&lt;br /&gt;
* In which state is the job?&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -i State&lt;br /&gt;
   JobState=COMPLETED Reason=None Dependency=(null)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cancel Slurm Jobs ==&lt;br /&gt;
The scancel command is used to cancel jobs. The command scancel is explained in detail on the webpage https://slurm.schedmd.com/scancel.html or via manpage (man scancel).   &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Canceling own jobs : scancel ===&lt;br /&gt;
scancel is used to signal or cancel jobs, job arrays or job steps. The command is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scancel [-i] &amp;lt;job-id&amp;gt;&lt;br /&gt;
$ scancel -t &amp;lt;job_state_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Flag !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -i, --interactive&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Interactive mode.&lt;br /&gt;
| Cancel the job 987654 interactively. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -i 987654 &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| -t, --state&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Restrict the scancel operation to jobs in a certain state. &amp;lt;br&amp;gt; &amp;quot;job_state_name&amp;quot; may have a value of either &amp;quot;PENDING&amp;quot;, &amp;quot;RUNNING&amp;quot; or &amp;quot;SUSPENDED&amp;quot;.&lt;br /&gt;
| Cancel all jobs in state &amp;quot;PENDING&amp;quot;. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -t &amp;quot;PENDING&amp;quot; &amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
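For example, to cancel only your own pending jobs, the state filter can be combined with the user filter (a minimal sketch; scancel also accepts -u/--user):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scancel -t PENDING -u $USER&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;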
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resource Managers =&lt;br /&gt;
=== Batch Job (Slurm) Variables ===&lt;br /&gt;
The following environment variables of Slurm are added to your environment once your job has started&lt;br /&gt;
&amp;lt;small&amp;gt;(only an excerpt of the most important ones)&amp;lt;/small&amp;gt;.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Environment !! Brief explanation&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NODELIST &lt;br /&gt;
| List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NUM_NODES &lt;br /&gt;
| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_MEM_PER_NODE &lt;br /&gt;
| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| SLURM_NPROCS&lt;br /&gt;
| Total number of processes dedicated to the job &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CLUSTER_NAME&lt;br /&gt;
| Name of the cluster executing the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CPUS_PER_TASK &lt;br /&gt;
| Number of CPUs requested per task&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ACCOUNT&lt;br /&gt;
| Account name &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ID&lt;br /&gt;
| Job ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_NAME&lt;br /&gt;
| Job Name&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_PARTITION&lt;br /&gt;
| Partition/queue running the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_UID&lt;br /&gt;
| User ID of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_SUBMIT_DIR&lt;br /&gt;
| Job submit folder.  The directory from which sbatch was invoked. &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_USER&lt;br /&gt;
| User name of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_RESTART_COUNT&lt;br /&gt;
| Number of times job has restarted&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_PROCID&lt;br /&gt;
| Task ID (MPI rank)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_NTASKS&lt;br /&gt;
| The total number of tasks available for the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_ID&lt;br /&gt;
| Job step ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_NUM_TASKS&lt;br /&gt;
| Task count (number of MPI ranks)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_CONSTRAINT&lt;br /&gt;
| Job constraints&lt;br /&gt;
|}&lt;br /&gt;
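A minimal job script sketch that prints a few of these variables at runtime (resource values are placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --time=00:05:00&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) runs in partition ${SLURM_JOB_PARTITION}&amp;quot;&lt;br /&gt;
echo &amp;quot;Nodes: ${SLURM_JOB_NODELIST}, total tasks: ${SLURM_NTASKS}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;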
See also:&lt;br /&gt;
* [https://slurm.schedmd.com/sbatch.html#lbAI Slurm input and output environment variables]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Exit Codes ===&lt;br /&gt;
A job&#039;s exit code (also known as exit status, return code and completion code) is captured by SLURM and saved as part of the job record. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any non-zero exit code will be assumed to be a job failure and will result in a Job State of FAILED with a reason of &amp;quot;NonZeroExitCode&amp;quot;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The exit code is an 8 bit unsigned number ranging between 0 and 255. While it is possible for a job to return a negative exit code, SLURM will display it as an unsigned value in the 0 - 255 range.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Displaying Exit Codes and Signals ====&lt;br /&gt;
SLURM displays a job&#039;s exit code in the output of the &#039;&#039;&#039;scontrol show job&#039;&#039;&#039; command and in the sview utility.&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
When a signal was responsible for a job or step&#039;s termination, the signal number will be displayed after the exit code, delineated by a colon(:).&lt;br /&gt;
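For example, the exit code of a finished job can be filtered from the detailed output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scontrol show job &amp;lt;jobid&amp;gt; | grep ExitCode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;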
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Submitting Termination Signal ====&lt;br /&gt;
Here is an example of how to &#039;save&#039; the exit status of your application in a typical jobscript.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
mpirun  -np &amp;lt;#cores&amp;gt;  &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; ... (options)  2&amp;gt;&amp;amp;1&lt;br /&gt;
exit_code=$?&lt;br /&gt;
[ &amp;quot;$exit_code&amp;quot; -eq 0 ] &amp;amp;&amp;amp; echo &amp;quot;all clean...&amp;quot; || \&lt;br /&gt;
   echo &amp;quot;Executable &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; finished with exit code ${exit_code}&amp;quot;&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Do not prefix mpirun with &#039;&#039;&#039;time&#039;&#039;&#039;! The exit code will then be the one returned by the first program (time).&lt;br /&gt;
* You do not need an &#039;&#039;&#039;exit $exit_code&#039;&#039;&#039; in the scripts.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|bwUniCluster 2.0]]&lt;br /&gt;
[[#top|Back to top]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Acknowledgement&amp;diff=6398</id>
		<title>BwUniCluster2.0/Acknowledgement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Acknowledgement&amp;diff=6398"/>
		<updated>2020-04-20T10:57:00Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The HPC resource bwUniCluster (2.0) is funded by the Ministry of Science, Research and the Arts Baden-Württemberg and the Universities of the State of Baden-Württemberg:&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Ruprecht-Karls-Universität Heidelberg&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Therefore, any work performed on bwUniCluster (2.0) should cite it appropriately:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;quot;The authors acknowledge support by the state of Baden-Württemberg through bwHPC.&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Furthermore, please report any publication that cites bwUniCluster (2.0) to [mailto:publications@bwhpc.de publications@bwhpc.de] stating:&lt;br /&gt;
* DOI (if available)&lt;br /&gt;
* author(s)&lt;br /&gt;
* title &#039;&#039;or&#039;&#039; booktitle&lt;br /&gt;
* journal, volume, pages &#039;&#039;or&#039;&#039; editors, address, publisher &lt;br /&gt;
* year. &lt;br /&gt;
This information will be used for the evaluation of bwUniCluster (2.0).&lt;br /&gt;
&lt;br /&gt;
The publications will be referenced on the bwHPC website:&lt;br /&gt;
 https://www.bwhpc.de/user_publications.html&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Acknowledgement]]&lt;br /&gt;
[[Category:Acknowledgement|bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6397</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6397"/>
		<updated>2020-04-20T10:47:05Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* How to login to broadwell login nodes? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If you are compiling code on broadwell login nodes, your code will not optimally run on the new &amp;quot;Cascade Lake&amp;quot; nodes.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
To use the code only on the partition multiple_e:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code to be used on ALL partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6396</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6396"/>
		<updated>2020-04-20T10:46:46Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* How to compile code to be used on ALL partitions? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If you are compiling code on broadwell login nodes, your code will not optimally run on the new &amp;quot;Cascade Lake&amp;quot; nodes.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
To use the code only on the partition multiple_e:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code to be used on ALL partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6395</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6395"/>
		<updated>2020-04-20T10:45:58Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* What happens with code compiled for old partition whic is running on the extension partition? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If you are compiling code on broadwell login nodes, your code will not optimally run on the new &amp;quot;Cascade Lake&amp;quot; nodes.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
To use the code only on the partition multiple_e:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code to be used on ALL partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6394</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6394"/>
		<updated>2020-04-20T10:44:20Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* How to compile code on broadwell (extension) nodes? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If you are compiling code on broadwell login nodes, your code will not optimally run on the new &amp;quot;Cascade Lake&amp;quot; nodes.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
To use the code only on the partition multiple_e:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code to be used on ALL partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition which is running on the extension partition? ==&lt;br /&gt;
Code will run but significantly slower since AVX2 will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6393</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6393"/>
		<updated>2020-04-20T10:42:55Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* How to compile code on broadwell (extension) nodes? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If you are compiling code on broadwell login nodes, your code will not optimally run on the new &amp;quot;Cascade Lake&amp;quot; nodes.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -axCORE-AVX2 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To use the code only on the partition multiple_e:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code to be used on ALL partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition which is running on the extension partition? ==&lt;br /&gt;
Code will run but significantly slower since AVX2 will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6392</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6392"/>
		<updated>2020-04-20T10:41:35Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* How to compile code tobe used on ALL partitions? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If you are compiling code on broadwell login nodes, your code will not optimally run on the new &amp;quot;Cascade Lake&amp;quot; nodes.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -axCORE-AVX2 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code to be used on ALL partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition which is running on the extension partition? ==&lt;br /&gt;
Code will run but significantly slower since AVX2 will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6391</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6391"/>
		<updated>2020-04-20T10:41:22Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* How to compile the same code on old and extension partition? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If you are compiling code on broadwell login nodes, your code will not optimally run on the new &amp;quot;Cascade Lake&amp;quot; nodes.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -axCORE-AVX2 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code to be used on ALL partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition which is running on the extension partition? ==&lt;br /&gt;
Code will run but significantly slower since AVX2 will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6390</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6390"/>
		<updated>2020-04-20T10:22:22Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* How to login to broadwell login nodes? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If you are compiling code on broadwell login nodes, your code will not optimally run on the new &amp;quot;Cascade Lake&amp;quot; nodes.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -axCORE-AVX2 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile the same code on old and extension partition? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xAVX -axCORE-AVX2 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition which is running on the extension partition? ==&lt;br /&gt;
Code will run but significantly slower since AVX2 will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6389</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6389"/>
		<updated>2020-04-20T09:41:15Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* How to submit jobs to the broadwell (= extension) partition */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Even if you log in to the &#039;&#039;old&#039;&#039; uc1 login nodes, you can still use the broadwell nodes as &#039;compute nodes&#039; with the same procedure. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -axCORE-AVX2 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile the same code on old and extension partition? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xAVX -axCORE-AVX2 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition which is running on the extension partition? ==&lt;br /&gt;
Code will run but significantly slower since AVX2 will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Category:BwUniCluster_2.0&amp;diff=6388</id>
		<title>Category:BwUniCluster 2.0</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Category:BwUniCluster_2.0&amp;diff=6388"/>
		<updated>2020-04-20T09:38:46Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--&amp;lt;span style=&amp;quot;color:red;font-size:105%;&amp;quot;&amp;gt;Important note: bwUniCluster is &#039;&#039;&#039;not&#039;&#039;&#039; in production mode yet.&amp;lt;/span&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |&lt;br /&gt;
[[File:BwUniCluster_2.0_Feb2020.jpg|center|border|550px|Close-up of bwUniCluster by Robert Barthel, copyright: KIT (SCC)]] &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;&amp;quot; |&amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;Close-up of bwUniCluster © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
On 17.03.2020, the Steinbuch Centre for Computing (SCC) at Karlsruhe Institute of Technology (KIT) commissioned a new parallel computer system called &amp;quot;bwUniCluster 2.0+GFB-HPC&amp;quot; as a state service within the bwHPC framework. The bwUniCluster 2.0 replaces the predecessor system [[bwUniCluster]] and also includes the additional compute nodes which were procured as an extension to the bwUniCluster in November 2016.&lt;br /&gt;
&lt;br /&gt;
The modern bwUniCluster 2.0 system consists of more than 840 SMP nodes with 64-bit Intel Xeon processors. It provides the universities of the state of Baden-Württemberg with general compute resources and can be used free of charge by the staff of all universities in Baden-Württemberg. Users who currently have access to bwUniCluster will automatically also have access to bwUniCluster 2.0. There is no need to apply for new entitlements or to re-register.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- One column table --&amp;gt;&lt;br /&gt;
{| style=&amp;quot;width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:100%; border:1px solid #BBBBBB; background:#fff5fa; vertical-align:top; color:#000;&amp;quot; |&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; border:0px solid #BBBBBB; padding:4px;&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|{{Red}}| Overview: Changes from bwUniCluster 1&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* Access: (Nearly) no changes. Entitlements are still valid.&lt;br /&gt;
* Software: Switch to the Lmod module system, cleanup of older software module versions&lt;br /&gt;
* Hardware: Significantly more powerful hardware, 132 Volta V100 GPUs&lt;br /&gt;
* File systems: See the [[BwUniCluster_2.0_File_System_Migration_Guide|File system migration guide]]&lt;br /&gt;
* Batch system: See the [[BwUniCluster_2.0_Batch_System_Migration_Guide|Batch system migration guide]]&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Two-column table --&amp;gt;&lt;br /&gt;
{| style=&amp;quot;width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:50%; border:1px solid #BBBBBB; background:#f5fffa; vertical-align:top; color:#000;&amp;quot; |&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; border:0px solid #BBBBBB; padding:4px;&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|{{Green}}| Access&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* bwUniCluster [[BwUniCluster_2.0_User_Access|Registration and Login]]&lt;br /&gt;
* Registration [[bwUniCluster_2.0_Contact_and_Support|trouble issues]] &amp;amp;  [[BwUniCluster_2.0_User_Access#Deregistration|Deregistration]]&lt;br /&gt;
* [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|{{Green}}| Software&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[bwUniCluster_2.0_Software|Software and Environment Modules]]&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; border:0px solid #BBBBBB; padding:4px;&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|{{Green}}| Hardware&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[bwUniCluster_2.0_Hardware_and_Architecture|Hardware and Architecture]]&lt;br /&gt;
* [[BwUniCluster_2.0_Hardware_and_Architecture#File_Systems|File Systems]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;padding:2px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;width:50%; border:1px solid #BBBBBB; background:#f5faff; vertical-align:top; color:#000;&amp;quot; |&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; border:0px solid #BBBBBB; padding:4px;&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|{{Blue}}| Batch/Compute Jobs&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[bwUniCluster_2.0_Slurm_common_Features|Slurm common Features]]&lt;br /&gt;
* [[BwUniCluster_2.0_Batch_Queues|Batch Queues and interactive Jobs]]&lt;br /&gt;
|-&lt;br /&gt;
|{{Blue}}| [[BwHPC_Best_Practices_Repository|bwHPC Best Practice Guides]] / FAQs&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
&amp;lt;!-- * [[Compiler]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[Debugger]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[Numerical Libraries]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[Parallel Programming]] --&amp;gt;&lt;br /&gt;
* [[FAQ - bwUniCluster_broadwell_partition|FAQ - bwUniCluster 2.0 Broadwell partition]]&lt;br /&gt;
|-&lt;br /&gt;
|{{Blue}}| Miscellaneous&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [[bwUniCluster_Acknowledgement|Acknowledgement]] of work performed on bwUniCluster (2.0)&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
-----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Category:bwHPC_infrastructure]][[Category:bwHPC_Cluster]][[Category:bwCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Category:BwUniCluster_2.0&amp;diff=6387</id>
		<title>Category:BwUniCluster 2.0</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Category:BwUniCluster_2.0&amp;diff=6387"/>
		<updated>2020-04-20T09:35:27Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--&amp;lt;span style=&amp;quot;color:red;font-size:105%;&amp;quot;&amp;gt;Important note: bwUniCluster is &#039;&#039;&#039;not&#039;&#039;&#039; in production mode yet.&amp;lt;/span&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |&lt;br /&gt;
[[File:BwUniCluster_2.0_Feb2020.jpg|center|border|550px|Close-up of bwUniCluster by Robert Barthel, copyright: KIT (SCC)]] &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;&amp;quot; |&amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;Close-up of bwUniCluster © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
On 17.03.2020, the Steinbuch Centre for Computing (SCC) at Karlsruhe Institute of Technology (KIT) commissioned a new parallel computer system called &amp;quot;bwUniCluster 2.0+GFB-HPC&amp;quot; as a state service within the bwHPC framework. The bwUniCluster 2.0 replaces the predecessor system [[bwUniCluster]] and also includes the additional compute nodes which were procured as an extension to the bwUniCluster in November 2016.&lt;br /&gt;
&lt;br /&gt;
The modern bwUniCluster 2.0 system consists of more than 840 SMP nodes with 64-bit Intel Xeon processors. It provides the universities of the state of Baden-Württemberg with general compute resources and can be used free of charge by the staff of all universities in Baden-Württemberg. Users who currently have access to bwUniCluster will automatically also have access to bwUniCluster 2.0. There is no need to apply for new entitlements or to re-register.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- One column table --&amp;gt;&lt;br /&gt;
{| style=&amp;quot;width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:100%; border:1px solid #BBBBBB; background:#fff5fa; vertical-align:top; color:#000;&amp;quot; |&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; border:0px solid #BBBBBB; padding:4px;&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|{{Red}}| Overview: Changes from bwUniCluster 1&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* Access: (Nearly) no changes. Entitlements are still valid.&lt;br /&gt;
* Software: Switch to the Lmod module system, cleanup of older software module versions&lt;br /&gt;
* Hardware: Significantly more powerful hardware, 132 Volta V100 GPUs&lt;br /&gt;
* File systems: See the [[BwUniCluster_2.0_File_System_Migration_Guide|File system migration guide]]&lt;br /&gt;
* Batch system: See the [[BwUniCluster_2.0_Batch_System_Migration_Guide|Batch system migration guide]]&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Two-column table --&amp;gt;&lt;br /&gt;
{| style=&amp;quot;width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:50%; border:1px solid #BBBBBB; background:#f5fffa; vertical-align:top; color:#000;&amp;quot; |&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; border:0px solid #BBBBBB; padding:4px;&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|{{Green}}| Access&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* bwUniCluster [[BwUniCluster_2.0_User_Access|Registration and Login]]&lt;br /&gt;
* Registration [[bwUniCluster_2.0_Contact_and_Support|trouble issues]] &amp;amp;  [[BwUniCluster_2.0_User_Access#Deregistration|Deregistration]]&lt;br /&gt;
* [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|{{Green}}| Software&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[bwUniCluster_2.0_Software|Software and Environment Modules]]&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; border:0px solid #BBBBBB; padding:4px;&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|{{Green}}| Hardware&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[bwUniCluster_2.0_Hardware_and_Architecture|Hardware and Architecture]]&lt;br /&gt;
* [[BwUniCluster_2.0_Hardware_and_Architecture#File_Systems|File Systems]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;padding:2px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
| style=&amp;quot;width:50%; border:1px solid #BBBBBB; background:#f5faff; vertical-align:top; color:#000;&amp;quot; |&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; border:0px solid #BBBBBB; padding:4px;&amp;quot; |&lt;br /&gt;
|-&lt;br /&gt;
|{{Blue}}| Batch/Compute Jobs&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[bwUniCluster_2.0_Slurm_common_Features|Slurm common Features]]&lt;br /&gt;
* [[BwUniCluster_2.0_Batch_Queues|Batch Queues and interactive Jobs]]&lt;br /&gt;
|-&lt;br /&gt;
|{{Blue}}| [[BwHPC_Best_Practices_Repository|bwHPC Best Practice Guides]] / FAQs&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Compiler]]&lt;br /&gt;
* [[Debugger]]&lt;br /&gt;
* [[Numerical Libraries]]&lt;br /&gt;
* [[Parallel Programming]]&lt;br /&gt;
* [[FAQ - bwUniCluster_broadwell_partition|FAQ - bwUniCluster Broadwell partition]]&lt;br /&gt;
|-&lt;br /&gt;
|{{Blue}}| Miscellaneous&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [[bwUniCluster_Acknowledgement|Acknowledgement]] of work performed on bwUniCluster&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
-----&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Category:bwHPC_infrastructure]][[Category:bwHPC_Cluster]][[Category:bwCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6385</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=6385"/>
		<updated>2020-04-20T09:30:52Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: H Haefner moved page FAQ - bwUniCluster broadwell partition to FAQ - bwUniCluster 2.0 broadwell partition: bwUniCluster replaced by bwUniCluster 2.0&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to login to broadwell login nodes? ==&lt;br /&gt;
* You can directly login on broadwell partition login nodes using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If you are logged in to the &#039;&#039;old&#039;&#039; uc1 login nodes, you can nevertheless use the broadwell nodes as compute nodes via the usual job submission procedure. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -axCORE-AVX2 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xHost [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile the same code on old and extension partition? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ icc/ifort -xAVX -axCORE-AVX2 [-further_options]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens to code compiled for the old partition when it runs on the extension partition? ==&lt;br /&gt;
Code will run but significantly slower since AVX2 will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition ==&lt;br /&gt;
The submitted job will be distributed to the broadwell nodes if the queue is specified correctly, i.e.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Can I use my old multinode job script for the new broadwell partition? ==&lt;br /&gt;
Yes, but please note that all broadwell nodes do have &#039;&#039;&#039;28 cores per node&#039;&#039;&#039;.&lt;br /&gt;
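For example, a minimal sketch of such a resource request for a 2-node job (the job script name &#039;&#039;job.sh&#039;&#039; and the walltime are only placeholders; adapt them to your needs):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=2:ppn=28,walltime=01:00:00 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;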
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Slurm&amp;diff=6152</id>
		<title>BwUniCluster2.0/Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Slurm&amp;diff=6152"/>
		<updated>2020-04-02T11:07:30Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Intel MPI with Multithreading */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div id=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=  Slurm HPC Workload Manager = &lt;br /&gt;
== Specification == &lt;br /&gt;
Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any kind of calculation on the compute nodes of [[bwUniCluster 2.0|bwUniCluster 2.0]] requires the user to define calculations as a sequence of commands or single command together with required run time, number of CPU cores and main memory and submit all, i.e., the &#039;&#039;&#039;batch job&#039;&#039;&#039;, to a resource and workload managing software. bwUniCluster 2.0 has installed the workload managing software Slurm. Therefore any job submission by the user is to be executed by commands of the Slurm software. Slurm queues and runs user jobs based on fair sharing policies.  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Slurm Commands (excerpt) ==&lt;br /&gt;
Some of the most used Slurm commands for non-administrators working on bwUniCluster 2.0.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Slurm commands !! Brief explanation&lt;br /&gt;
|-&lt;br /&gt;
| [[#Job Submission : sbatch|sbatch]] || Submits a job and queues it in an input queue [[https://slurm.schedmd.com/sbatch.html sbatch]] &lt;br /&gt;
|-&lt;br /&gt;
| [[#Detailed job information : scontrol show job|scontrol show job]] || Displays detailed job state information [[https://slurm.schedmd.com/scontrol.html scontrol]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#List of your submitted jobs : squeue|squeue]] || Displays information about active, eligible, blocked, and/or recently completed jobs [[https://slurm.schedmd.com/squeue.html squeue]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Start time of job or resources : squeue --start|squeue --start]] || Returns start time of submitted job or requested resources [[https://slurm.schedmd.com/squeue.html squeue]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Shows free resources : sinfo_t_idle|sinfo_t_idle]] || Shows what resources are available for immediate use [[https://slurm.schedmd.com/sinfo.html sinfo]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Canceling own jobs : scancel|scancel]] || Cancels a job [[https://slurm.schedmd.com/scancel.html scancel]]&lt;br /&gt;
|}&lt;br /&gt;
If your job was submitted to the &amp;quot;multiple&amp;quot; queue you can log into the allocated nodes via SSH as soon as the job is running.&lt;br /&gt;
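A minimal sketch (the job ID and the node name are placeholders; use the node names reported for your own job):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue -j &amp;lt;jobid&amp;gt; -o &amp;quot;%N&amp;quot;    # show the node(s) allocated to the job&lt;br /&gt;
$ ssh uc2n123                      # log in to one of these nodes (placeholder name)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;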
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* [https://slurm.schedmd.com/tutorials.html  Slurm Tutorials]&lt;br /&gt;
* [https://slurm.schedmd.com/pdfs/summary.pdf  Slurm command/option summary (2 pages)]&lt;br /&gt;
* [https://slurm.schedmd.com/man_index.html  Slurm Commands]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Job Submission : sbatch ==&lt;br /&gt;
Batch jobs are submitted by using the command &#039;&#039;&#039;sbatch&#039;&#039;&#039;. The main purpose of the &#039;&#039;&#039;sbatch&#039;&#039;&#039; command is to specify the resources that are needed to run the job. &#039;&#039;&#039;sbatch&#039;&#039;&#039; will then queue the batch job. However, the start of the batch job depends on the availability of the requested resources and the fair sharing value.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== sbatch Command Parameters ===&lt;br /&gt;
The syntax and use of &#039;&#039;&#039;sbatch&#039;&#039;&#039; can be displayed via:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ man sbatch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;sbatch&#039;&#039;&#039; options can be used from the command line or in your job script.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | sbatch Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -t &#039;&#039;time&#039;&#039;  or  --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| #SBATCH --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| Wall clock time limit.&amp;lt;br&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -N &#039;&#039;count&#039;&#039;  or  --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of nodes to be used.&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -n &#039;&#039;count&#039;&#039;  or  --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of tasks to be launched.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Maximum count (&amp;lt;= 28 and &amp;lt;= 40 resp.) of tasks per node.&amp;lt;br&amp;gt;(Replaces the option ppn of MOAB.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -c &#039;&#039;count&#039;&#039; or --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of CPUs required per (MPI-)task.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Memory in MegaByte per node.&amp;lt;br&amp;gt;(Default value is 128000 and 96000 MB resp., i.e. you should omit &amp;lt;br&amp;gt; the setting of this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Minimum Memory required per allocated CPU.&amp;lt;br&amp;gt;(Replaces the option pmem of MOAB. You should omit &amp;lt;br&amp;gt; the setting of this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| Notify user by email when certain event types occur.&amp;lt;br&amp;gt;Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
|  The specified mail-address receives email notification of state&amp;lt;br&amp;gt;changes as defined by --mail-type.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job output is stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job error messages are stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -J &#039;&#039;name&#039;&#039; or --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| Job name.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| #SBATCH --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| Identifies which environment variables from the submission &amp;lt;br&amp;gt; environment are propagated to the launched application. Default &amp;lt;br&amp;gt; is ALL. If adding to the submission environment instead of &amp;lt;br&amp;gt; replacing it is intended,  the argument ALL must be added.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -A &#039;&#039;group-name&#039;&#039; or --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| #SBATCH --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| Charge resources used by this job to the specified group. You may &amp;lt;br&amp;gt; need this option if your account is assigned to more &amp;lt;br&amp;gt; than one group. With the command &amp;quot;scontrol show job&amp;quot; the project &amp;lt;br&amp;gt; group the job is accounted on can be seen behind &amp;quot;Account=&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -p &#039;&#039;queue-name&#039;&#039; or --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| #SBATCH --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| Request a specific queue for the resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;LSDF&#039;&#039; or --constraint=&#039;&#039;LSDF&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=LSDF&lt;br /&gt;
| Job constraint LSDF Filesystems.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;BEEOND&#039;&#039; or --constraint=&#039;&#039;BEEOND&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=BEEOND&lt;br /&gt;
| Job constraint BeeOND file system.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
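As a minimal sketch, here are several of these options combined in a job script header (the program name &#039;&#039;./my_program&#039;&#039; and all values are only placeholders; adapt them to your needs):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=dev_single&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=30:00&lt;br /&gt;
#SBATCH --job-name=my_test&lt;br /&gt;
#SBATCH --output=my_test_%j.out&lt;br /&gt;
#SBATCH --mail-type=END&lt;br /&gt;
&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;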
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== sbatch --partition  &#039;&#039;queues&#039;&#039; ====&lt;br /&gt;
Queue classes define the maximum resources, such as walltime, nodes and processes per node, for each queue of the compute system. Details can be found here:&lt;br /&gt;
* [[BwUniCluster_2.0_Batch_Queues#sbatch_-p_queue|bwUniCluster 2.0 queue settings]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== sbatch Examples ===&lt;br /&gt;
==== Serial Programs ====&lt;br /&gt;
To submit a serial job that runs the script &#039;&#039;&#039;job.sh&#039;&#039;&#039; and that requires 5000 MB of main memory and 10 minutes of wall clock time&lt;br /&gt;
&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p dev_single -n 1 -t 10:00 --mem=5000  job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
b) add after the initial line of your script &#039;&#039;&#039;job.sh&#039;&#039;&#039; the lines (here with a high memory request):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
#SBATCH --mem=200gb&lt;br /&gt;
#SBATCH --job-name=simple&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and execute the modified script with the command line option &#039;&#039;--partition=fat&#039;&#039; (with &#039;&#039;--partition=(dev_)single&#039;&#039; maximum &#039;&#039;--mem=96gb&#039;&#039; is possible):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=fat job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options overrule script options.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded Programs ====&lt;br /&gt;
Multithreaded programs operate faster than serial programs on CPUs with multiple cores.&amp;lt;br&amp;gt;&lt;br /&gt;
Moreover, multiple threads of one process share resources such as memory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Because hyperthreading is switched on on bwUniCluster 2.0, the option --cpus-per-task (-c) must be set to 2*n, if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To submit a batch job called &#039;&#039;OpenMP_Test&#039;&#039; that runs a 40-fold threaded program &#039;&#039;omp_exe&#039;&#039; which requires 6000 MByte of total physical memory and total wall clock time of 40 minutes:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single --export=ALL,OMP_NUM_THREADS=40 -J OpenMP_Test -N 1 -c 80 -t 40 --mem=6000 ./omp_exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
b) generate the script &#039;&#039;&#039;job_omp.sh&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=80&lt;br /&gt;
#SBATCH --time=40:00&lt;br /&gt;
#SBATCH --mem=6000mb   &lt;br /&gt;
#SBATCH --export=ALL,EXECUTABLE=./omp_exe&lt;br /&gt;
#SBATCH -J OpenMP_Test&lt;br /&gt;
&lt;br /&gt;
#Usually you should set&lt;br /&gt;
export KMP_AFFINITY=compact,1,0&lt;br /&gt;
#export KMP_AFFINITY=verbose,compact,1,0 prints messages concerning the supported affinity&lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_JOB_CPUS_PER_NODE}/2))&lt;br /&gt;
echo &amp;quot;Executable ${EXECUTABLE} running on ${SLURM_JOB_CPUS_PER_NODE} cores with ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=${EXECUTABLE}&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on the binding of threads to specific cores. If necessary, load the required modulefile to enable the OpenMP environment, then execute the script &#039;&#039;&#039;job_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;single&#039;&#039; as sbatch option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single job_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options overrule script options, e.g.,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=single --mem=200 job_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
overwrites the script setting of 6000 MByte with 200 MByte.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== MPI Parallel Programs ====&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi CPU and multi core systems. N-fold spawned processes of the MPI program, i.e., &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;,  run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Multiple MPI tasks must be launched via &#039;&#039;&#039;mpirun&#039;&#039;&#039;, e.g. 4 MPI tasks of &#039;&#039;my_par_program&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 4 my_par_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command runs 4 MPI tasks of &#039;&#039;my_par_program&#039;&#039; on the node you are logged in to.&lt;br /&gt;
To run this command with a loaded Intel MPI, the environment variable I_MPI_HYDRA_BOOTSTRAP must be unset first.&lt;br /&gt;
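For example (assuming an Intel MPI module is already loaded in your interactive session):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ unset I_MPI_HYDRA_BOOTSTRAP&lt;br /&gt;
$ mpirun -n 4 my_par_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;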
&lt;br /&gt;
When running MPI parallel programs in a batch job, the interactive environment - particularly the loaded modules - is inherited by the batch job. If you want a defined module environment in your batch job, you have to purge all modules before loading the desired ones. &lt;br /&gt;
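A minimal sketch of such a defined module environment at the beginning of a job script (the module names and versions are only placeholders; load the modules your program actually needs):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Remove all inherited modules, then load a defined set (example module names)&lt;br /&gt;
module purge&lt;br /&gt;
module load compiler/intel/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;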
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI =====&lt;br /&gt;
&lt;br /&gt;
If you want to run jobs on batch nodes, generate a wrapper script &#039;&#039;job_ompi.sh&#039;&#039; for &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load mpi/openmpi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpirun --bind-to core --map-by core -report-bindings my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since Slurm instructs mpirun about number of processes and node hostnames. Use &#039;&#039;&#039;ALWAYS&#039;&#039;&#039; the MPI options &#039;&#039;&#039;&#039;&#039;--bind-to core&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;&#039;&#039;--map-by core|socket|node&#039;&#039;&#039;&#039;&#039;. Please type &#039;&#039;mpirun --help&#039;&#039; for an explanation of the meaning of the different options of mpirun option &#039;&#039;--map-by&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To run 4 OpenMPI tasks on a single node, with each task requiring 2000 MByte of memory and a run time of 1 hour, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -N 1 -n 4 --mem-per-cpu=2000 --time=01:00:00 ./job_ompi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Intel MPI =====&lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Use when a defined module environment related to Intel MPI is wished&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;   &lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since Slurm instructs mpirun about number of processes and node hostnames.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 200 Intel MPI tasks on 5 nodes, with each node requiring 80 GByte of memory and a run time of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=multiple -N 5 --ntasks-per-node=40 --mem=80gb -t 300 ./job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use 128 or more nodes, you must also set the environment variable as follows:           &amp;lt;BR&amp;gt;&lt;br /&gt;
export I_MPI_HYDRA_BRANCH_COUNT=-1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use the options perhost, ppn or rr, you must additionally set the environment variable I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off.&lt;br /&gt;
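A minimal sketch using the ppn option (the value 20 is only an illustration; adapt it to your needs):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off&lt;br /&gt;
mpiexec.hydra -bootstrap slurm -ppn 20 my_par_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;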
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded + MPI parallel Programs ====&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multi CPUs with multiple cores. All threads of one process share resources such as memory. On the contrary MPI tasks do not share memory but can be spawned over different nodes. &#039;&#039;&#039;Because hyperthreading is switched on on bwUniCluster 2.0, the option --cpus-per-task (-c) must be set to 2*n, if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI with Multithreading =====&lt;br /&gt;
Multiple MPI tasks using &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; must be launched by the MPI parallel program &#039;&#039;&#039;mpirun&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;For OpenMPI&#039;&#039;&#039; a job-script to submit a batch job called &#039;&#039;job_ompi_omp.sh&#039;&#039; that runs an MPI program with 4 tasks and a 28-fold threaded program &#039;&#039;ompi_omp_program&#039;&#039; requiring 3000 MByte of physical memory per thread (using 28 threads per MPI task you will get 28*3000 MByte = 84000 MByte per MPI task) and a total wall clock time of 3 hours looks like:&lt;br /&gt;
&amp;lt;!--b)--&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --cpus-per-task=56&lt;br /&gt;
#SBATCH --time=03:00:00&lt;br /&gt;
#SBATCH --mem=83gb    # 84000 MB = 84000/1024 GB = 82.1 GB&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/openmpi/3.1,EXECUTABLE=./ompi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_hybrid_%j.out&amp;quot;  &lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_CPUS_PER_TASK}/2))&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by socket:PE=${OMP_NUM_THREADS} -report-bindings&amp;quot;&lt;br /&gt;
export NUM_CORES=$((${SLURM_NTASKS}*${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXECUTABLE} running on ${NUM_CORES} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpirun -n ${SLURM_NTASKS} ${MPIRUN_OPTIONS} ${EXECUTABLE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_ompi_omp.sh&#039;&#039;&#039; by command sbatch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e ./job_ompi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* With the mpirun option &#039;&#039;--bind-to core&#039;&#039; MPI tasks and OpenMP threads are bound to physical cores.&lt;br /&gt;
* With the option &#039;&#039;--map-by node:PE=&amp;lt;value&amp;gt;&#039;&#039; (neighbored) MPI tasks will be attached to different nodes and each MPI task is bound to the first core of a node. &amp;lt;value&amp;gt; must be set to ${OMP_NUM_THREADS}.&lt;br /&gt;
* The option &#039;&#039;-report-bindings&#039;&#039; shows the bindings between MPI tasks and physical cores.&lt;br /&gt;
* The mpirun-options &#039;&#039;&#039;--bind-to core&#039;&#039;&#039;, &#039;&#039;&#039;--map-by socket|...|node:PE=&amp;lt;value&amp;gt;&#039;&#039;&#039; should always be used when running a multithreaded MPI program.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Intel MPI with Multithreading =====&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multi CPUs with multiple cores. All threads of one process share resources such as memory. On the contrary MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039; a job-script to submit a batch job called &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program with 10 tasks and a 40-fold threaded program &#039;&#039;impi_omp_program&#039;&#039; requiring 96000 MByte of total physical memory per task and a total wall clock time of 1 hour looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=10&lt;br /&gt;
#SBATCH --cpus-per-task=80&lt;br /&gt;
#SBATCH --time=60&lt;br /&gt;
#SBATCH --mem=96000&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/impi,EXE=./impi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_impi_omp_%j.out&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=compact,1,0&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to Intel MPI is wished &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_CPUS_PER_TASK}/2))&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;-binding domain=omp:compact -print-rank-map -envall&amp;quot;&lt;br /&gt;
export NUM_PROCS=$((${SLURM_NTASKS}*${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${NUM_PROCS} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${SLURM_NTASKS} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on the binding of threads to specific cores. If you only run one MPI task per node, please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&amp;lt;BR&amp;gt;&lt;br /&gt;
If you want to use 128 or more nodes, you must also set the environment variable as follows:           &amp;lt;BR&amp;gt;&lt;br /&gt;
export I_MPI_HYDRA_BRANCH_COUNT=-1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use the options perhost, ppn or rr, you must additionally set the environment variable I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; by command sbatch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple ./job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the bindings between MPI tasks and nodes (not very beneficial). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to a particular processor; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. If you would choose 2 MPI tasks per node, you should choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Chain jobs ====&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In those situations it is recommended to solve the problem by a job chain. A job chain is a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
####################################&lt;br /&gt;
## simple Slurm submitter script to setup   ## &lt;br /&gt;
## a chain of jobs using Slurm                    ##&lt;br /&gt;
####################################&lt;br /&gt;
## ver.  : 2018-11-27, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## https://slurm.schedmd.com/sbatch.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Set slurm_opt depending on the chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      slurm_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      slurm_opt=&amp;quot;-d ${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and sbatch command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   sbatch --export=myloop_counter=${myloop_counter} ${slurm_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store the job ID for the next iteration by parsing the output of the sbatch command&lt;br /&gt;
   jobID=$(sbatch -p &amp;lt;queue&amp;gt; --export=ALL,myloop_counter=${myloop_counter} ${slurm_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;s/[S,a-z]* //g&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occured&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
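The chain link job script itself (here &#039;&#039;chain_job.sh&#039;&#039;) is not shown above; a minimal hypothetical sketch could look like this (the resource requests, program name and restart logic are only placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
&lt;br /&gt;
## The counter is passed in via --export by the submitter script above&lt;br /&gt;
echo &amp;quot;This is chain link number ${myloop_counter}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## Hypothetical: continue the calculation from the checkpoint of the previous link&lt;br /&gt;
./my_program --checkpoint checkpoint_${myloop_counter}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;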
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
&lt;br /&gt;
The nodes in the gpu_4 and gpu_8 queues have 4 or 8 NVIDIA Tesla V100 GPUs. Just submitting a job to these queues is not enough to also allocate one or more GPUs; you have to do so using the &amp;quot;--gres=gpu&amp;quot; parameter. You have to specify how many GPUs your job needs, e.g. &amp;quot;--gres=gpu:2&amp;quot; will request two GPUs.&lt;br /&gt;
&lt;br /&gt;
The GPU nodes are shared between multiple jobs if the jobs don&#039;t request all the GPUs in a node and there are enough resources to run more than one job. The individual GPUs are always bound to a single job and will not be shared between different jobs.&lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh the lines including the&lt;br /&gt;
information about the requested resources and GPUs, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=40&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 40 -t 02:00:00 --mem 4000 --gres=gpu:2 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
If you start an interactive session on one of the GPU nodes, you can use the &amp;quot;nvidia-smi&amp;quot; command to list the GPUs allocated to your job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
Sun Mar 29 15:20:05 2020       &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla V100-SXM2...  Off  | 00000000:3A:00.0 Off |                    0 |&lt;br /&gt;
| N/A   29C    P0    39W / 300W |      9MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
|   1  Tesla V100-SXM2...  Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   30C    P0    41W / 300W |      8MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID   Type   Process name                             Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|    0     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
|    1     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== LSDF Online Storage ====&lt;br /&gt;
On bwUniCluster 2.0 you can, for special use cases, use the LSDF Online Storage on the HPC cluster nodes. Please request this service separately ([https://www.lsdf.kit.edu/os/storagerequest/ LSDF Storage Request]).&lt;br /&gt;
To mount the LSDF Online Storage on the compute nodes during the job runtime,&lt;br /&gt;
the constraint flag &amp;quot;LSDF&amp;quot; has to be set.  &lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh the line including the&lt;br /&gt;
information about the LSDF Online Storage usage:&amp;lt;br&amp;gt;   #SBATCH --constraint=LSDF&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=120&lt;br /&gt;
#SBATCH --mem=200&lt;br /&gt;
#SBATCH --constraint=LSDF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 1 -t 2:00:00 --mem 200 -C LSDF job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the usage of the LSDF Online Storage&lt;br /&gt;
the following environment variables are available: $LSDF, $LSDFPROJECTS, $LSDFHOME.&lt;br /&gt;
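A minimal sketch of how these variables might be used in a job script (the project name and file names are only placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=120&lt;br /&gt;
#SBATCH --constraint=LSDF&lt;br /&gt;
&lt;br /&gt;
# copy results into your LSDF project directory (placeholder path)&lt;br /&gt;
cp results.dat ${LSDFPROJECTS}/&amp;lt;your_project&amp;gt;/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;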
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====BeeOND (BeeGFS On-Demand)====&lt;br /&gt;
&lt;br /&gt;
BeeOND instances are integrated into the prolog and epilog scripts of the cluster batch system, Slurm. BeeOND can be used on the compute nodes during the job runtime by setting the constraint flag &amp;quot;BEEOND&amp;quot; ([[BwUniCluster_2.0_Slurm_common_Features#sbatch_Command_Parameters|Slurm Command Parameters]]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH ...&lt;br /&gt;
#SBATCH --constraint=BEEOND&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After your job has started you can find the private on-demand file system in &#039;&#039;&#039;/mnt/odfs/$SLURM_JOB_ID&#039;&#039;&#039; directory. The mountpoint comes with three pre-configured directories:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#for small files (stripe count = 1)&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_1&lt;br /&gt;
#stripe count = 4&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_default&lt;br /&gt;
#stripe count = 8, 16 or 32, use these directories for medium sized and large files or when using MPI-IO&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_8, /mnt/odfs/$SLURM_JOB_ID/stripe_16 or /mnt/odfs/$SLURM_JOB_ID/stripe_32&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you request fewer nodes than the stripe count, the effective stripe count is limited to the number of nodes, e.g., if you only request 8 nodes, the directory with stripe count 16 effectively provides a stripe count of 8.&lt;br /&gt;
&lt;br /&gt;
The capacity of the private file system depends on the number of nodes. For each node you get 250Gbyte.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Be careful when creating large files:&#039;&#039;&#039; always use the directory with the maximum stripe count for large files.&lt;br /&gt;
For example, if your largest file is 1.1 TByte, you have to use a stripe count larger than 4 (4 x 250 GByte = 1 TByte).  &lt;br /&gt;
&lt;br /&gt;
If you request 100 nodes for your job, the private file system has a capacity of approximately 100 * 250 GByte ~ 25 TByte.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommendation:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The private file system uses its own metadata server, which is started on the first node(s) of the job. Depending on your application, the metadata server can consume a considerable amount of CPU power. Adding an extra node to your job may therefore improve the usability of the on-demand file system. Start your application with the MPI option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -nolocal myapplication&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
With the -nolocal option the node where mpirun is initiated is not used for your application. This node is then fully available for the metadata server of your requested on-demand file system.&lt;br /&gt;
&lt;br /&gt;
Example job script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#very simple example on how to use a private on-demand file system&lt;br /&gt;
#SBATCH -N 10&lt;br /&gt;
#SBATCH --constraint=BEEOND&lt;br /&gt;
&lt;br /&gt;
#create a workspace &lt;br /&gt;
ws_allocate myresults-$SLURM_JOB_ID 90&lt;br /&gt;
RESULTDIR=`ws_find myresults-$SLURM_JOB_ID`&lt;br /&gt;
&lt;br /&gt;
#Set ENV variable to on-demand file system&lt;br /&gt;
ODFSDIR=/mnt/odfs/$SLURM_JOB_ID/stripe_16/&lt;br /&gt;
&lt;br /&gt;
#start application and write results to on-demand file system&lt;br /&gt;
mpirun -nolocal myapplication -o $ODFSDIR/results&lt;br /&gt;
&lt;br /&gt;
#Copy back data after your job application end&lt;br /&gt;
rsync -av $ODFSDIR/results $RESULTDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Start time of job or resources : squeue --start ==&lt;br /&gt;
The command can be used by any user to display the estimated start time of a job, based on historical usage, the earliest available reservable resources, and the priority-based backlog. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue). &lt;br /&gt;
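For example, to show the estimated start times of all your pending jobs or of one specific job (the job ID is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue --start&lt;br /&gt;
$ squeue --start -j &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;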
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by &#039;&#039;&#039;any user&#039;&#039;&#039;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List of your submitted jobs : squeue ==&lt;br /&gt;
Displays information about &#039;&#039;&#039;your own&#039;&#039;&#039; active, pending and/or recently completed jobs. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by any user.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Flags ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Flag !! Description&lt;br /&gt;
|-&lt;br /&gt;
| -l, --long&lt;br /&gt;
| Report more of the available information for the selected jobs or job steps, subject to any constraints specified.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&#039;&#039;squeue&#039;&#039; example on bwUniCluster 2.0 &amp;lt;small&amp;gt;(Only your own jobs are displayed!)&amp;lt;/small&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue &lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18088744    single CPV.sbat   ab1234 PD       0:00      1 (Priority)&lt;br /&gt;
          18098414  multiple CPV.sbat   ab1234 PD       0:00      2 (Priority) &lt;br /&gt;
          18090089  multiple CPV.sbat   ab1234  R       2:27      2 uc2n[127-128]&lt;br /&gt;
$ squeue -l&lt;br /&gt;
            JOBID PARTITION     NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON) &lt;br /&gt;
         18088654    single CPV.sbat   ab1234 COMPLETI       4:29   2:00:00      1 uc2n374&lt;br /&gt;
         18088785    single CPV.sbat   ab1234  PENDING       0:00   2:00:00      1 (Priority)&lt;br /&gt;
         18098414  multiple CPV.sbat   ab1234  PENDING       0:00   2:00:00      2 (Priority)&lt;br /&gt;
         18088683    single CPV.sbat   ab1234  RUNNING       0:14   2:00:00      1 uc2n413  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* The output of &#039;&#039;squeue&#039;&#039; shows how many jobs of yours are running or pending and how many nodes are in use by your jobs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Shows free resources : sinfo_t_idle ==&lt;br /&gt;
The Slurm command sinfo is used to view partition and node information for a system running Slurm. It incorporates down time, reservations, and node state information in determining the available backfill window. The sinfo command can only be used by the administrator.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
SCC has prepared a special script (sinfo_t_idle) to find out how many processors are available for immediate use on the system. It is anticipated that users will use this information to submit jobs that meet these criteria and thus obtain quick job turnaround times. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be used by any user or administrator. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Example ===&lt;br /&gt;
* The following command displays which resources are available for immediate use in each partition.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sinfo_t_idle&lt;br /&gt;
Partition dev_multiple  :      8 nodes idle&lt;br /&gt;
Partition multiple      :    332 nodes idle&lt;br /&gt;
Partition dev_single    :      4 nodes idle&lt;br /&gt;
Partition single        :     76 nodes idle&lt;br /&gt;
Partition long          :     80 nodes idle&lt;br /&gt;
Partition fat           :      5 nodes idle&lt;br /&gt;
Partition dev_special   :    342 nodes idle&lt;br /&gt;
Partition special       :    342 nodes idle&lt;br /&gt;
Partition dev_multiple_e:      7 nodes idle&lt;br /&gt;
Partition multiple_e    :    335 nodes idle&lt;br /&gt;
Partition gpu_4         :     12 nodes idle&lt;br /&gt;
Partition gpu_8         :      6 nodes idle&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* For the above example, jobs in all partitions can be run immediately.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Detailed job information : scontrol show job ==&lt;br /&gt;
scontrol show job displays detailed job state information and diagnostic output for all or a specified job of yours. Detailed information is available for active, pending and recently completed jobs. The command scontrol is explained in detail on the webpage https://slurm.schedmd.com/scontrol.html or via manpage (man scontrol). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of all your jobs in normal mode: scontrol show job&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of a job with &amp;lt;jobid&amp;gt; in normal mode: scontrol show job &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
* End users can use scontrol show job to view the status of their &#039;&#039;&#039;own jobs&#039;&#039;&#039; only. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Arguments ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Option !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|- style=&amp;quot;width:12%;&amp;quot; &lt;br /&gt;
| -d&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Detailed mode&lt;br /&gt;
| Example: Display the state of the job with jobid 18089884 in detailed mode. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt;scontrol -d show job 18089884&amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Scontrol show job Example ===&lt;br /&gt;
Here is an example from bwUniCluster 2.0.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
squeue    # show my own jobs (here the userid is replaced!)&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18089884  multiple CPV.sbat   bq0742  R      33:44      2 uc2n[165-166]&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
$ # now, see what&#039;s up with my job with jobid 18089884&lt;br /&gt;
$ &lt;br /&gt;
$ scontrol show job 18089884&lt;br /&gt;
&lt;br /&gt;
JobId=18089884 JobName=CPV.sbatch&lt;br /&gt;
   UserId=bq0742(8946) GroupId=scc(12345) MCS_label=N/A&lt;br /&gt;
   Priority=3 Nice=0 Account=kit QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   RunTime=00:35:06 TimeLimit=02:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2020-03-16T14:14:54 EligibleTime=2020-03-16T14:14:54&lt;br /&gt;
   AccrueTime=2020-03-16T14:14:54&lt;br /&gt;
   StartTime=2020-03-16T15:12:51 EndTime=2020-03-16T17:12:51 Deadline=N/A&lt;br /&gt;
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2020-03-16T15:12:51&lt;br /&gt;
   Partition=multiple AllocNode:Sid=uc2n995:5064&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=uc2n[165-166]&lt;br /&gt;
   BatchHost=uc2n165&lt;br /&gt;
   NumNodes=2 NumCPUs=160 NumTasks=80 CPUs/Task=1 ReqB:S:C:T=0:0:*:1&lt;br /&gt;
   TRES=cpu=160,mem=96320M,node=2,billing=160&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=40:0:*:1 CoreSpec=*&lt;br /&gt;
   MinCPUsNode=40 MinMemoryCPU=1204M MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) DelayBoot=00:00:00&lt;br /&gt;
   OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/CPV.sbatch&lt;br /&gt;
   WorkDir=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin&lt;br /&gt;
   StdErr=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   Power=&lt;br /&gt;
   MailUser=(null) MailType=NONE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can use standard Linux pipe commands to filter the very detailed scontrol show job output.&lt;br /&gt;
* Which state is the job in?&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -i State&lt;br /&gt;
   JobState=COMPLETED Reason=None Dependency=(null)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
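* When was the job submitted and how long may it run? (A similar filter; the lines shown are the time-related lines from the scontrol listing above.)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -i time&lt;br /&gt;
   RunTime=00:35:06 TimeLimit=02:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2020-03-16T14:14:54 EligibleTime=2020-03-16T14:14:54&lt;br /&gt;
   AccrueTime=2020-03-16T14:14:54&lt;br /&gt;
   StartTime=2020-03-16T15:12:51 EndTime=2020-03-16T17:12:51 Deadline=N/A&lt;br /&gt;
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2020-03-16T15:12:51&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;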
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cancel Slurm Jobs ==&lt;br /&gt;
The scancel command is used to cancel jobs. The command scancel is explained in detail on the webpage https://slurm.schedmd.com/scancel.html or via manpage (man scancel).   &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Canceling own jobs : scancel ===&lt;br /&gt;
scancel is used to signal or cancel jobs, job arrays or job steps. The command is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scancel [-i] &amp;lt;job-id&amp;gt;&lt;br /&gt;
$ scancel -t &amp;lt;job_state_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Flag !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -i, --interactive&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Interactive mode.&lt;br /&gt;
| Cancel the job 987654 interactively. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -i 987654 &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| -t, --state&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Restrict the scancel operation to jobs in a certain state. &amp;lt;br&amp;gt; &amp;quot;job_state_name&amp;quot; may have a value of either &amp;quot;PENDING&amp;quot;, &amp;quot;RUNNING&amp;quot; or &amp;quot;SUSPENDED&amp;quot;.&lt;br /&gt;
| Cancel all jobs in state &amp;quot;PENDING&amp;quot;. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -t &amp;quot;PENDING&amp;quot; &amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resource Managers =&lt;br /&gt;
=== Batch Job (Slurm) Variables ===&lt;br /&gt;
The following environment variables of Slurm are added to your environment once your job has started&lt;br /&gt;
&amp;lt;small&amp;gt;(only an excerpt of the most important ones)&amp;lt;/small&amp;gt;.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Environment !! Brief explanation&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NODELIST &lt;br /&gt;
| List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NUM_NODES &lt;br /&gt;
| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_MEM_PER_NODE &lt;br /&gt;
| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| SLURM_NPROCS&lt;br /&gt;
| Total number of processes dedicated to the job &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CLUSTER_NAME&lt;br /&gt;
| Name of the cluster executing the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CPUS_PER_TASK &lt;br /&gt;
| Number of CPUs requested per task&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ACCOUNT&lt;br /&gt;
| Account name &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ID&lt;br /&gt;
| Job ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_NAME&lt;br /&gt;
| Job Name&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_PARTITION&lt;br /&gt;
| Partition/queue running the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_UID&lt;br /&gt;
| User ID of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_SUBMIT_DIR&lt;br /&gt;
| Job submit folder.  The directory from which sbatch was invoked. &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_USER&lt;br /&gt;
| User name of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_RESTART_COUNT&lt;br /&gt;
| Number of times job has restarted&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_PROCID&lt;br /&gt;
| Task ID (MPI rank)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_NTASKS&lt;br /&gt;
| The total number of tasks available for the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_ID&lt;br /&gt;
| Job step ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_NUM_TASKS&lt;br /&gt;
| Task count (number of MPI ranks)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_CONSTRAINT&lt;br /&gt;
| Job constraints&lt;br /&gt;
|}&lt;br /&gt;
See also:&lt;br /&gt;
* [https://slurm.schedmd.com/sbatch.html#lbAI Slurm input and output environment variables]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
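A minimal jobscript sketch that simply prints some of these variables after the job has started (the resource requests are arbitrary example values):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=5&lt;br /&gt;
&lt;br /&gt;
# Print a few of the Slurm variables listed above&lt;br /&gt;
echo &amp;quot;Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) runs in partition ${SLURM_JOB_PARTITION}&amp;quot;&lt;br /&gt;
echo &amp;quot;Cluster: ${SLURM_CLUSTER_NAME}, account: ${SLURM_JOB_ACCOUNT}&amp;quot;&lt;br /&gt;
echo &amp;quot;Nodes: ${SLURM_JOB_NUM_NODES} (${SLURM_JOB_NODELIST}), tasks: ${SLURM_NTASKS}&amp;quot;&lt;br /&gt;
echo &amp;quot;Submitted from: ${SLURM_SUBMIT_DIR}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;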
&lt;br /&gt;
=== Job Exit Codes ===&lt;br /&gt;
A job&#039;s exit code (also known as exit status, return code and completion code) is captured by SLURM and saved as part of the job record. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any non-zero exit code will be assumed to be a job failure and will result in a Job State of FAILED with a reason of &amp;quot;NonZeroExitCode&amp;quot;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The exit code is an 8 bit unsigned number ranging between 0 and 255. While it is possible for a job to return a negative exit code, SLURM will display it as an unsigned value in the 0 - 255 range.&lt;br /&gt;
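For example, the wrap-around can be checked directly in a shell; a process exiting with -1 is reported as 255:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bash -c &#039;exit -1&#039;; echo $?&lt;br /&gt;
255&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;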
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Displaying Exit Codes and Signals ====&lt;br /&gt;
SLURM displays a job&#039;s exit code in the output of the &#039;&#039;&#039;scontrol show job&#039;&#039;&#039; command and in the sview utility.&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
When a signal was responsible for a job or step&#039;s termination, the signal number will be displayed after the exit code, delineated by a colon (:).&lt;br /&gt;
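For example, the field can be filtered from the scontrol output; a job step killed by signal 9 would show ExitCode=0:9 (illustrative value):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scontrol show job &amp;lt;jobid&amp;gt; | grep -i ExitCode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;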
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Submitting Termination Signal ====&lt;br /&gt;
Here is an example of how to &#039;save&#039; the exit code (and thereby a possible termination signal) of the executable in a typical jobscript.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
mpirun  -np &amp;lt;#cores&amp;gt;  &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; ... (options)  2&amp;gt;&amp;amp;1&lt;br /&gt;
exit_code=$?   # capture the exit code right after the mpirun call&lt;br /&gt;
[ &amp;quot;$exit_code&amp;quot; -eq 0 ] &amp;amp;&amp;amp; echo &amp;quot;all clean...&amp;quot; || \&lt;br /&gt;
   echo &amp;quot;Executable &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; finished with exit code ${exit_code}&amp;quot;&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Do not use &#039;&#039;&#039;time mpirun&#039;&#039;&#039;! The exit code would then be the one returned by the first program (time).&lt;br /&gt;
* You do not need an &#039;&#039;&#039;exit $exit_code&#039;&#039;&#039; in the script.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|bwUniCluster 2.0]]&lt;br /&gt;
[[#top|Back to top]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Slurm&amp;diff=6151</id>
		<title>BwUniCluster2.0/Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Slurm&amp;diff=6151"/>
		<updated>2020-04-02T11:05:29Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* OpenMPI with Multithreading */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div id=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=  Slurm HPC Workload Manager = &lt;br /&gt;
== Specification == &lt;br /&gt;
Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any kind of calculation on the compute nodes of [[bwUniCluster 2.0|bwUniCluster 2.0]] requires the user to define the calculation as a sequence of commands or a single command together with the required run time, number of CPU cores and main memory, and to submit all of this, i.e., the &#039;&#039;&#039;batch job&#039;&#039;&#039;, to a resource and workload managing software. On bwUniCluster 2.0 the workload managing software Slurm is installed. Therefore any job submission by the user has to be carried out via commands of the Slurm software. Slurm queues and runs user jobs based on fair sharing policies.  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Slurm Commands (excerpt) ==&lt;br /&gt;
Some of the most commonly used Slurm commands for non-administrators working on bwUniCluster 2.0:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Slurm commands !! Brief explanation&lt;br /&gt;
|-&lt;br /&gt;
| [[#Job Submission : sbatch|sbatch]] || Submits a job and queues it in an input queue [[https://slurm.schedmd.com/sbatch.html sbatch]] &lt;br /&gt;
|-&lt;br /&gt;
| [[#Detailed job information : scontrol show job|scontrol show job]] || Displays detailed job state information [[https://slurm.schedmd.com/scontrol.html scontrol]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#List of your submitted jobs : squeue|squeue]] || Displays information about active, eligible, blocked, and/or recently completed jobs [[https://slurm.schedmd.com/squeue.html squeue]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Start time of job or resources : squeue|squeue --start]] || Returns start time of submitted job or requested resources [[https://slurm.schedmd.com/squeue.html squeue]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Shows free resources : sinfo_t_idle|sinfo_t_idle]] || Shows what resources are available for immediate use [[https://slurm.schedmd.com/sinfo.html sinfo]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Canceling own jobs : scancel|scancel]] || Cancels a job [[https://slurm.schedmd.com/scancel.html scancel]]&lt;br /&gt;
|}&lt;br /&gt;
If your job was submitted to the &amp;quot;multiple&amp;quot; queue you can log into the allocated nodes via SSH as soon as the job is running.&lt;br /&gt;
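For example, find the allocated nodes with squeue and log into one of them (uc2n165 is just an illustrative node name):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue        # the NODELIST column shows the allocated nodes&lt;br /&gt;
$ ssh uc2n165&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;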
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* [https://slurm.schedmd.com/tutorials.html  Slurm Tutorials]&lt;br /&gt;
* [https://slurm.schedmd.com/pdfs/summary.pdf  Slurm command/option summary (2 pages)]&lt;br /&gt;
* [https://slurm.schedmd.com/man_index.html  Slurm Commands]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Job Submission : sbatch ==&lt;br /&gt;
Batch jobs are submitted by using the command &#039;&#039;&#039;sbatch&#039;&#039;&#039;. The main purpose of the &#039;&#039;&#039;sbatch&#039;&#039;&#039; command is to specify the resources that are needed to run the job. &#039;&#039;&#039;sbatch&#039;&#039;&#039; will then queue the batch job. However, the start of the batch job depends on the availability of the requested resources and the fair sharing value.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== sbatch Command Parameters ===&lt;br /&gt;
The syntax and use of &#039;&#039;&#039;sbatch&#039;&#039;&#039; can be displayed via:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ man sbatch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;sbatch&#039;&#039;&#039; options can be used from the command line or in your job script.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | sbatch Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -t &#039;&#039;time&#039;&#039;  or  --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| #SBATCH --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| Wall clock time limit.&amp;lt;br&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -N &#039;&#039;count&#039;&#039;  or  --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of nodes to be used.&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -n &#039;&#039;count&#039;&#039;  or  --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of tasks to be launched.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Maximum count (&amp;lt;= 28 and &amp;lt;= 40 resp.) of tasks per node.&amp;lt;br&amp;gt;(Replaces the option ppn of MOAB.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -c &#039;&#039;count&#039;&#039; or --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of CPUs required per (MPI-)task.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Memory in MegaByte per node.&amp;lt;br&amp;gt;(Default value is 128000 and 96000 MB resp., i.e. you should omit &amp;lt;br&amp;gt; the setting of this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Minimum Memory required per allocated CPU.&amp;lt;br&amp;gt;(Replaces the option pmem of MOAB. You should omit &amp;lt;br&amp;gt; the setting of this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| Notify user by email when certain event types occur.&amp;lt;br&amp;gt;Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
|  The specified mail-address receives email notification of state&amp;lt;br&amp;gt;changes as defined by --mail-type.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job output is stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job error messages are stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -J &#039;&#039;name&#039;&#039; or --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| Job name.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| #SBATCH --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| Identifies which environment variables from the submission &amp;lt;br&amp;gt; environment are propagated to the launched application. Default &amp;lt;br&amp;gt; is ALL. If adding to the submission environment instead of &amp;lt;br&amp;gt; replacing it is intended,  the argument ALL must be added.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -A &#039;&#039;group-name&#039;&#039; or --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| #SBATCH --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| Change resources used by this job to specified group. You may &amp;lt;br&amp;gt; need this option if your account is assigned to more &amp;lt;br&amp;gt; than one group. By command &amp;quot;scontrol show job&amp;quot; the project &amp;lt;br&amp;gt; group the job is accounted on can be seen behind &amp;quot;Account=&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -p &#039;&#039;queue-name&#039;&#039; or --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| #SBATCH --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| Request a specific queue for the resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;LSDF&#039;&#039; or --constraint=&#039;&#039;LSDF&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=LSDF&lt;br /&gt;
| Job constraint LSDF Filesystems.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;BEEOND&#039;&#039; or --constraint=&#039;&#039;BEEOND&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=BEEOND&lt;br /&gt;
| Job constraint BeeOND file system.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== sbatch --partition  &#039;&#039;queues&#039;&#039; ====&lt;br /&gt;
Queue classes define maximum resources such as walltime, nodes and processes per node and queue of the compute system. Details can be found here:&lt;br /&gt;
* [[BwUniCluster_2.0_Batch_Queues#sbatch_-p_queue|bwUniCluster 2.0 queue settings]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== sbatch Examples ===&lt;br /&gt;
==== Serial Programs ====&lt;br /&gt;
To submit a serial job that runs the script &#039;&#039;&#039;job.sh&#039;&#039;&#039; and that requires 5000 MB of main memory and 10 minutes of wall clock time&lt;br /&gt;
&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p dev_single -n 1 -t 10:00 --mem=5000  job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
b) add after the initial line of your script &#039;&#039;&#039;job.sh&#039;&#039;&#039; the lines (here with a high memory request):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
#SBATCH --mem=200gb&lt;br /&gt;
#SBATCH --job-name=simple&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and execute the modified script with the command line option &#039;&#039;--partition=fat&#039;&#039; (with &#039;&#039;--partition=(dev_)single&#039;&#039; a maximum of &#039;&#039;--mem=96gb&#039;&#039; is possible):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=fat job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options overrule script options.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded Programs ====&lt;br /&gt;
Multithreaded programs operate faster than serial programs on CPUs with multiple cores.&amp;lt;br&amp;gt;&lt;br /&gt;
Moreover, multiple threads of one process share resources such as memory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Because hyperthreading is switched on on bwUniCluster 2.0, the option --cpus-per-task (-c) must be set to 2*n, if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To submit a batch job called &#039;&#039;OpenMP_Test&#039;&#039; that runs a 40-fold threaded program &#039;&#039;omp_exe&#039;&#039; which requires 6000 MByte of total physical memory and total wall clock time of 40 minutes:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single --export=ALL,OMP_NUM_THREADS=40 -J OpenMP_Test -N 1 -c 80 -t 40 --mem=6000 ./omp_exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
* generate the script &#039;&#039;&#039;job_omp.sh&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=80&lt;br /&gt;
#SBATCH --time=40:00&lt;br /&gt;
#SBATCH --mem=6000mb   &lt;br /&gt;
#SBATCH --export=ALL,EXECUTABLE=./omp_exe&lt;br /&gt;
#SBATCH -J OpenMP_Test&lt;br /&gt;
&lt;br /&gt;
#Usually you should set&lt;br /&gt;
export KMP_AFFINITY=compact,1,0&lt;br /&gt;
#export KMP_AFFINITY=verbose,compact,1,0 prints messages concerning the supported affinity&lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_JOB_CPUS_PER_NODE}/2))&lt;br /&gt;
echo &amp;quot;Executable ${EXECUTABLE} running on ${SLURM_JOB_CPUS_PER_NODE} cores with ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=${EXECUTABLE}&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If necessary, replace &amp;lt;placeholder&amp;gt; with the required modulefile to enable the OpenMP environment. Then execute the script &#039;&#039;&#039;job_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;single&#039;&#039; as sbatch option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single job_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options overrule script options, e.g.,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=single --mem=200 job_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
overwrites the script setting of 6000 MByte with 200 MByte.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== MPI Parallel Programs ====&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi CPU and multi core systems. N-fold spawned processes of the MPI program, i.e., &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;,  run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Multiple MPI tasks must be launched via &#039;&#039;&#039;mpirun&#039;&#039;&#039;, e.g. 4 MPI tasks of &#039;&#039;my_par_program&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 4 my_par_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command runs 4 MPI tasks of &#039;&#039;my_par_program&#039;&#039; on the node you are logged in to.&lt;br /&gt;
To run this command with a loaded Intel MPI, the environment variable I_MPI_HYDRA_BOOTSTRAP must be unset first ( --&amp;gt; $ unset I_MPI_HYDRA_BOOTSTRAP).&lt;br /&gt;
&lt;br /&gt;
When running MPI parallel programs in a batch job, the interactive environment - particularly the loaded modules - will also be set in the batch job. If you want a defined module environment in your batch job, you have to purge all modules before loading the desired modules. &lt;br /&gt;
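A minimal sketch of such a defined module environment at the top of a jobscript (the version string is a placeholder):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH ...&lt;br /&gt;
&lt;br /&gt;
# Start from a clean module environment, then load only what the job needs&lt;br /&gt;
module purge&lt;br /&gt;
module load mpi/openmpi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;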
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI =====&lt;br /&gt;
&lt;br /&gt;
If you want to run jobs on batch nodes, generate a wrapper script &#039;&#039;job_ompi.sh&#039;&#039; for &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load mpi/openmpi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpirun --bind-to core --map-by core -report-bindings my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since Slurm instructs mpirun about the number of processes and the node hostnames. &#039;&#039;&#039;ALWAYS&#039;&#039;&#039; use the MPI options &#039;&#039;&#039;&#039;&#039;--bind-to core&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;&#039;&#039;--map-by core|socket|node&#039;&#039;&#039;&#039;&#039;. Please type &#039;&#039;mpirun --help&#039;&#039; for an explanation of the different arguments of the mpirun option &#039;&#039;--map-by&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Considering 4 OpenMPI tasks on a single node, each requiring 2000 MByte, and running for 1 hour, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -N 1 -n 4 --mem-per-cpu=2000 --time=01:00:00 ./job_ompi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Intel MPI =====&lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Use when a defined module environment related to Intel MPI is wished&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;   &lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since Slurm instructs mpirun about number of processes and node hostnames.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Launching and running 200 Intel MPI tasks on 5 nodes, each requiring 80 GByte, and running for 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=multiple -N 5 --ntasks-per-node=40 --mem=80gb -t 300 ./job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use 128 or more nodes, you must also set the environment variable as follows:           &amp;lt;BR&amp;gt;&lt;br /&gt;
export I_MPI_HYDRA_BRANCH_COUNT=-1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use the options perhost, ppn or rr, you must additionally set the environment variable I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded + MPI parallel Programs ====&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multi CPUs with multiple cores. All threads of one process share resources such as memory. On the contrary MPI tasks do not share memory but can be spawned over different nodes. &#039;&#039;&#039;Because hyperthreading is switched on on bwUniCluster 2.0, the option --cpus-per-task (-c) must be set to 2*n, if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI with Multithreading =====&lt;br /&gt;
Multiple MPI tasks using &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; must be launched by the MPI parallel program &#039;&#039;&#039;mpirun&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;For OpenMPI&#039;&#039;&#039; a job-script to submit a batch job called &#039;&#039;job_ompi_omp.sh&#039;&#039; that runs an MPI program with 4 tasks and a 28-fold threaded program &#039;&#039;ompi_omp_program&#039;&#039; requiring 3000 MByte of physical memory per thread (using 28 threads per MPI task you will get 28*3000 MByte = 84000 MByte per MPI task) and a total wall clock time of 3 hours looks like:&lt;br /&gt;
&amp;lt;!--b)--&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --cpus-per-task=56&lt;br /&gt;
#SBATCH --time=03:00:00&lt;br /&gt;
#SBATCH --mem=83gb    # 84000 MB / 1024 = 82.03 GB, rounded up to 83 GB&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/openmpi/3.1,EXECUTABLE=./ompi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_hybrid_%j.out&amp;quot;  &lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_CPUS_PER_TASK}/2))&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by socket:PE=${OMP_NUM_THREADS} -report-bindings&amp;quot;&lt;br /&gt;
export NUM_CORES=$((${SLURM_NTASKS}*${OMP_NUM_THREADS}))   # arithmetic expansion, not a literal string&lt;br /&gt;
echo &amp;quot;${EXECUTABLE} running on ${NUM_CORES} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpirun -n ${SLURM_NTASKS} ${MPIRUN_OPTIONS} ${EXECUTABLE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_ompi_omp.sh&#039;&#039;&#039; by command sbatch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e ./job_ompi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* With the mpirun option &#039;&#039;--bind-to core&#039;&#039; MPI tasks and OpenMP threads are bound to physical cores.&lt;br /&gt;
* With the option &#039;&#039;--map-by node:PE=&amp;lt;value&amp;gt;&#039;&#039; neighboring MPI tasks will be attached to different nodes and each MPI task is bound to the first core of a node. &amp;lt;value&amp;gt; must be set to ${OMP_NUM_THREADS}.&lt;br /&gt;
* The option &#039;&#039;-report-bindings&#039;&#039; shows the bindings between MPI tasks and physical cores.&lt;br /&gt;
* The mpirun-options &#039;&#039;&#039;--bind-to core&#039;&#039;&#039;, &#039;&#039;&#039;--map-by socket|...|node:PE=&amp;lt;value&amp;gt;&#039;&#039;&#039; should always be used when running a multithreaded MPI program.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Intel MPI with Multithreading =====&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multi CPUs with multiple cores. All threads of one process share resources such as memory. On the contrary MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039; a job-script to submit a batch job called &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program with 10 tasks and a 40-fold threaded program &#039;&#039;impi_omp_program&#039;&#039; requiring 96000 MByte of total physical memory per task and a total wall clock time of 1 hour looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=10&lt;br /&gt;
#SBATCH --cpus-per-task=40&lt;br /&gt;
#SBATCH --time=60&lt;br /&gt;
#SBATCH --mem=96000&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/impi,EXE=./impi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_impi_omp_%j.out&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=compact,1,0&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to Intel MPI is wished &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;-binding domain=omp:compact -print-rank-map -envall&amp;quot;   # no nested quotes needed&lt;br /&gt;
export NUM_PROCS=$((${SLURM_NTASKS}*${OMP_NUM_THREADS}))   # arithmetic expansion instead of eval&lt;br /&gt;
echo &amp;quot;${EXE} running on ${NUM_PROCS} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${SLURM_NTASKS} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If you only run one MPI task per node, please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&amp;lt;BR&amp;gt;&lt;br /&gt;
If you want to use 128 or more nodes, you must also set the environment variable as follows:           &amp;lt;BR&amp;gt;&lt;br /&gt;
export I_MPI_HYDRA_BRANCH_COUNT=-1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use the options perhost, ppn or rr, you must additionally set the environment variable I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; by command sbatch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple ./job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the bindings between MPI tasks and nodes (not very beneficial). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to a particular processor; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. If you would choose 2 MPI tasks per node, you should choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Chain jobs ====&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In those situations it is recommended to solve the problem by a job chain. A job chain is a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
####################################&lt;br /&gt;
## simple Slurm submitter script to setup   ## &lt;br /&gt;
## a chain of jobs using Slurm                    ##&lt;br /&gt;
####################################&lt;br /&gt;
## ver.  : 2018-11-27, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## https://slurm.schedmd.com/sbatch.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Vary slurm_opt depending on chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      slurm_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      slurm_opt=&amp;quot;-d ${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and sbatch command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   sbatch --export=myloop_counter=${myloop_counter} ${slurm_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store job ID for next iteration by parsing the output of the sbatch command&lt;br /&gt;
   jobID=$(sbatch -p &amp;lt;queue&amp;gt; --export=ALL,myloop_counter=${myloop_counter} ${slurm_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;s/[S,a-z]* //g&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occured&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
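The submitter script expects a regular jobscript &#039;&#039;chain_job.sh&#039;&#039; next to it (see the variable chain_link_job above). A minimal sketch of such a chain link; the resource requests and the restart logic are placeholders, myloop_counter is passed in via --export by the submitter:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
&lt;br /&gt;
# myloop_counter is exported by the submitter script above&lt;br /&gt;
echo &amp;quot;This is chain link number ${myloop_counter}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;place the actual (re)start of your application here&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;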
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
&lt;br /&gt;
The nodes in the gpu_4 and gpu_8 queues have 4 or 8 NVIDIA Tesla V100 GPUs. Just submitting a job to these queues is not enough to also allocate one or more GPUs; you have to do so using the &amp;quot;--gres=gpu&amp;quot; parameter. You have to specify how many GPUs your job needs, e.g. &amp;quot;--gres=gpu:2&amp;quot; will request two GPUs.&lt;br /&gt;
&lt;br /&gt;
The GPU nodes are shared between multiple jobs if the jobs don&#039;t request all the GPUs in a node and there are enough resources to run more than one job. The individual GPUs are always bound to a single job and will not be shared between different jobs.&lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh the line including the&lt;br /&gt;
number of requested GPUs:&amp;lt;br&amp;gt;   #SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=40&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 40 -t 02:00:00 --mem 4000 --gres=gpu:2 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
If you start an interactive session on one of the GPU nodes, you can use the &amp;quot;nvidia-smi&amp;quot; command to list the GPUs allocated to your job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
Sun Mar 29 15:20:05 2020       &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla V100-SXM2...  Off  | 00000000:3A:00.0 Off |                    0 |&lt;br /&gt;
| N/A   29C    P0    39W / 300W |      9MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
|   1  Tesla V100-SXM2...  Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   30C    P0    41W / 300W |      8MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID   Type   Process name                             Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|    0     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
|    1     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== LSDF Online Storage ====&lt;br /&gt;
On bwUniCluster 2.0 you can, for special cases, use the LSDF Online Storage on the HPC cluster nodes. Please request this service separately ([https://www.lsdf.kit.edu/os/storagerequest/ LSDF Storage Request]).&lt;br /&gt;
To mount the LSDF Online Storage on the compute nodes during the job runtime,&lt;br /&gt;
the constraint flag &amp;quot;LSDF&amp;quot; has to be set.  &lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh the line including the&lt;br /&gt;
information about the LSDF Online Storage usage:&amp;lt;br&amp;gt;   #SBATCH --constraint=LSDF&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=120&lt;br /&gt;
#SBATCH --mem=200&lt;br /&gt;
#SBATCH --constraint=LSDF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 1 -t 2:00:00 --mem 200 -C LSDF job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the usage of the LSDF Online Storage&lt;br /&gt;
the following environment variables are available: $LSDF, $LSDFPROJECTS, $LSDFHOME.&lt;br /&gt;
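A minimal sketch of using one of these variables inside a jobscript (the project and file names are placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
#SBATCH --constraint=LSDF&lt;br /&gt;
&lt;br /&gt;
# Copy input data from the LSDF Online Storage into the working directory&lt;br /&gt;
cp ${LSDFPROJECTS}/&amp;lt;your_project&amp;gt;/&amp;lt;input_file&amp;gt; .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;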
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====BeeOND (BeeGFS On-Demand)====&lt;br /&gt;
&lt;br /&gt;
BeeOND instances are integrated into the prolog and epilog script of the cluster batch system, Slurm. It can be used on the compute nodes during the job runtime with the constraint flag &amp;quot;BEEOND&amp;quot; ([[BwUniCluster_2.0_Slurm_common_Features#sbatch_Command_Parameters|Slurm Command Parameters]])&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH ...&lt;br /&gt;
#SBATCH --constraint=BEEOND&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After your job has started you can find the private on-demand file system in &#039;&#039;&#039;/mnt/odfs/$SLURM_JOB_ID&#039;&#039;&#039; directory. The mountpoint comes with three pre-configured directories:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#for small files (stripe count = 1)&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_1&lt;br /&gt;
#stripe count = 4&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_default&lt;br /&gt;
#stripe count = 8, 16 or 32, use these directories for medium sized and large files or when using MPI-IO&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_8, /mnt/odfs/$SLURM_JOB_ID/stripe_16 or /mnt/odfs/$SLURM_JOB_ID/stripe_32&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you request fewer nodes than the stripe count, the stripe count will be limited to the number of nodes, e.g., if you only request 8 nodes, the directory with stripe count 16 effectively provides only a stripe count of 8.&lt;br /&gt;
&lt;br /&gt;
The capacity of the private file system depends on the number of nodes. For each node you get 250 GByte.&lt;br /&gt;
&lt;br /&gt;
!!! Be careful when creating large files: always use the directory with the maximum stripe count for large files.&lt;br /&gt;
For example, if your largest file is 1.1 TByte, then you have to use a stripe count larger than 4 (4 x 250 GByte = 1 TByte).  &lt;br /&gt;
&lt;br /&gt;
If you request 100 nodes for your job, the private file system has a capacity of approximately 100 * 250 GByte ~ 25 TByte.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommendation:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The private file system uses its own metadata server. This metadata server is started on the first node. Depending on your application, the metadata server can consume a decent amount of CPU power. Adding an extra node to your job may therefore improve the usability of the on-demand file system. Start your application with the MPI option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -nolocal myapplication&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
With the -nolocal option the node where mpirun is initiated is not used for your application. This node is then fully available for the metadata server of your requested on-demand file system.&lt;br /&gt;
&lt;br /&gt;
Example job script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#very simple example on how to use a private on-demand file system&lt;br /&gt;
#SBATCH -N 10&lt;br /&gt;
#SBATCH --constraint=BEEOND&lt;br /&gt;
&lt;br /&gt;
#create a workspace &lt;br /&gt;
ws_allocate myresults-$SLURM_JOB_ID 90&lt;br /&gt;
RESULTDIR=`ws_find myresults-$SLURM_JOB_ID`&lt;br /&gt;
&lt;br /&gt;
#Set ENV variable to on-demand file system&lt;br /&gt;
ODFSDIR=/mnt/odfs/$SLURM_JOB_ID/stripe_16/&lt;br /&gt;
&lt;br /&gt;
#start application and write results to on-demand file system&lt;br /&gt;
mpirun -nolocal myapplication -o $ODFSDIR/results&lt;br /&gt;
&lt;br /&gt;
#Copy back data after your job application end&lt;br /&gt;
rsync -av $ODFSDIR/results $RESULTDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Start time of job or resources : squeue --start ==&lt;br /&gt;
The command can be used by any user to display the estimated start time of a job, based on historical usage, the earliest available reservable resources, and the priority based backlog. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by &#039;&#039;&#039;any user&#039;&#039;&#039;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
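=== Example ===&lt;br /&gt;
* The following commands display the estimated start time of your pending jobs (&amp;lt;jobid&amp;gt; is a placeholder for one of your job IDs):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue --start                 # estimated start times of all your pending jobs&lt;br /&gt;
$ squeue --start -j &amp;lt;jobid&amp;gt;       # estimated start time of one specific job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;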
&lt;br /&gt;
== List of your submitted jobs : squeue ==&lt;br /&gt;
Displays information about YOUR active, pending and/or recently completed jobs. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by any user.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Flags ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Flag !! Description&lt;br /&gt;
|-&lt;br /&gt;
| -l, --long&lt;br /&gt;
| Report more of the available information for the selected jobs or job steps, subject to any constraints specified.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&#039;&#039;squeue&#039;&#039; example on bwUniCluster 2.0 &amp;lt;small&amp;gt;(Only your own jobs are displayed!)&amp;lt;/small&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue &lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18088744    single CPV.sbat   ab1234 PD       0:00      1 (Priority)&lt;br /&gt;
          18098414  multiple CPV.sbat   ab1234 PD       0:00      2 (Priority) &lt;br /&gt;
          18090089  multiple CPV.sbat   ab1234  R       2:27      2 uc2n[127-128]&lt;br /&gt;
$ squeue -l&lt;br /&gt;
            JOBID PARTITION     NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON) &lt;br /&gt;
         18088654    single CPV.sbat   ab1234 COMPLETI       4:29   2:00:00      1 uc2n374&lt;br /&gt;
         18088785    single CPV.sbat   ab1234  PENDING       0:00   2:00:00      1 (Priority)&lt;br /&gt;
         18098414  multiple CPV.sbat   ab1234  PENDING       0:00   2:00:00      2 (Priority)&lt;br /&gt;
         18088683    single CPV.sbat   ab1234  RUNNING       0:14   2:00:00      1 uc2n413  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* The output of &#039;&#039;squeue&#039;&#039; shows how many jobs of yours are running or pending and how many nodes are in use by your jobs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Shows free resources : sinfo_t_idle ==&lt;br /&gt;
The Slurm command sinfo is used to view partition and node information for a system running Slurm. It incorporates down time, reservations, and node state information when determining the available backfill window. The sinfo command itself is intended for administrators.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
SCC has prepared a special script (sinfo_t_idle) to find out how many processors are available for immediate use on the system. It is anticipated that users will use this information to submit jobs that fit into these idle resources and thus obtain quick job turnaround times. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be used by any user or administrator. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Example ===&lt;br /&gt;
* The following command displays which resources are available for immediate use in each partition.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sinfo_t_idle&lt;br /&gt;
Partition dev_multiple  :      8 nodes idle&lt;br /&gt;
Partition multiple      :    332 nodes idle&lt;br /&gt;
Partition dev_single    :      4 nodes idle&lt;br /&gt;
Partition single        :     76 nodes idle&lt;br /&gt;
Partition long          :     80 nodes idle&lt;br /&gt;
Partition fat           :      5 nodes idle&lt;br /&gt;
Partition dev_special   :    342 nodes idle&lt;br /&gt;
Partition special       :    342 nodes idle&lt;br /&gt;
Partition dev_multiple_e:      7 nodes idle&lt;br /&gt;
Partition multiple_e    :    335 nodes idle&lt;br /&gt;
Partition gpu_4         :     12 nodes idle&lt;br /&gt;
Partition gpu_8         :      6 nodes idle&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* In the above example, jobs in all partitions can be started immediately.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Detailed job information : scontrol show job ==&lt;br /&gt;
scontrol show job displays detailed job state information and diagnostic output for all or a specified job of yours. Detailed information is available for active, pending and recently completed jobs. The command scontrol is explained in detail on the webpage https://slurm.schedmd.com/scontrol.html or via manpage (man scontrol). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of all your jobs in normal mode: scontrol show job&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of a job with &amp;lt;jobid&amp;gt; in normal mode: scontrol show job &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
* End users can use scontrol show job to view the status of their &#039;&#039;&#039;own jobs&#039;&#039;&#039; only. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Arguments ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Option !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|- style=&amp;quot;width:12%;&amp;quot; &lt;br /&gt;
| -d&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Detailed mode&lt;br /&gt;
| Example: Display the state with jobid 18089884 in detailed mode. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt;scontrol -d show job 18089884&amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Scontrol show job Example ===&lt;br /&gt;
Here is an example from bwUniCluster 2.0.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
squeue    # show my own jobs (here the userid is replaced!)&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18089884  multiple CPV.sbat   bq0742  R      33:44      2 uc2n[165-166]&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
$ # now, see what&#039;s up with my job with jobid 18089884&lt;br /&gt;
$ &lt;br /&gt;
$ scontrol show job 18089884&lt;br /&gt;
&lt;br /&gt;
JobId=18089884 JobName=CPV.sbatch&lt;br /&gt;
   UserId=bq0742(8946) GroupId=scc(12345) MCS_label=N/A&lt;br /&gt;
   Priority=3 Nice=0 Account=kit QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   RunTime=00:35:06 TimeLimit=02:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2020-03-16T14:14:54 EligibleTime=2020-03-16T14:14:54&lt;br /&gt;
   AccrueTime=2020-03-16T14:14:54&lt;br /&gt;
   StartTime=2020-03-16T15:12:51 EndTime=2020-03-16T17:12:51 Deadline=N/A&lt;br /&gt;
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2020-03-16T15:12:51&lt;br /&gt;
   Partition=multiple AllocNode:Sid=uc2n995:5064&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=uc2n[165-166]&lt;br /&gt;
   BatchHost=uc2n165&lt;br /&gt;
   NumNodes=2 NumCPUs=160 NumTasks=80 CPUs/Task=1 ReqB:S:C:T=0:0:*:1&lt;br /&gt;
   TRES=cpu=160,mem=96320M,node=2,billing=160&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=40:0:*:1 CoreSpec=*&lt;br /&gt;
   MinCPUsNode=40 MinMemoryCPU=1204M MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) DelayBoot=00:00:00&lt;br /&gt;
   OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/CPV.sbatch&lt;br /&gt;
   WorkDir=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin&lt;br /&gt;
   StdErr=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   Power=&lt;br /&gt;
   MailUser=(null) MailType=NONE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can use standard Linux pipe commands to filter the very detailed scontrol show job output.&lt;br /&gt;
* In which state is the job?&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -i State&lt;br /&gt;
   JobState=COMPLETED Reason=None Dependency=(null)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
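* When does the job start and end? (a further filter sketch; 18089884 is the example job from above)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -E &amp;quot;RunTime|StartTime&amp;quot;&lt;br /&gt;
   RunTime=00:35:06 TimeLimit=02:00:00 TimeMin=N/A&lt;br /&gt;
   StartTime=2020-03-16T15:12:51 EndTime=2020-03-16T17:12:51 Deadline=N/A&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;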
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cancel Slurm Jobs ==&lt;br /&gt;
The scancel command is used to cancel jobs. The command scancel is explained in detail on the webpage https://slurm.schedmd.com/scancel.html or via manpage (man scancel).   &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Canceling own jobs : scancel ===&lt;br /&gt;
scancel is used to signal or cancel jobs, job arrays or job steps. The command is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scancel [-i] &amp;lt;job-id&amp;gt;&lt;br /&gt;
$ scancel -t &amp;lt;job_state_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Flag !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -i, --interactive&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Interactive mode.&lt;br /&gt;
| Cancel the job 987654 interactively. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -i 987654 &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| -t, --state&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Restrict the scancel operation to jobs in a certain state. &amp;lt;br&amp;gt; &amp;quot;job_state_name&amp;quot; may have a value of either &amp;quot;PENDING&amp;quot;, &amp;quot;RUNNING&amp;quot; or &amp;quot;SUSPENDED&amp;quot;.&lt;br /&gt;
| Cancel all jobs in state &amp;quot;PENDING&amp;quot;. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -t &amp;quot;PENDING&amp;quot; &amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
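For instance (a minimal sketch; the job ID is the example job from above, and the --user filter is added so the state-based cancel explicitly refers to your own jobs):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scancel 18089884                       # cancel one specific job&lt;br /&gt;
$ scancel --state=PENDING --user=$USER   # cancel all of your still pending jobs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;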
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resource Managers =&lt;br /&gt;
=== Batch Job (Slurm) Variables ===&lt;br /&gt;
The following environment variables of Slurm are added to your environment once your job has started&lt;br /&gt;
&amp;lt;small&amp;gt;(only an excerpt of the most important ones)&amp;lt;/small&amp;gt;.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Environment !! Brief explanation&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NODELIST &lt;br /&gt;
| List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NUM_NODES &lt;br /&gt;
| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_MEM_PER_NODE &lt;br /&gt;
| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| SLURM_NPROCS&lt;br /&gt;
| Total number of processes dedicated to the job &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CLUSTER_NAME&lt;br /&gt;
| Name of the cluster executing the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CPUS_PER_TASK &lt;br /&gt;
| Number of CPUs requested per task&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ACCOUNT&lt;br /&gt;
| Account name &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ID&lt;br /&gt;
| Job ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_NAME&lt;br /&gt;
| Job Name&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_PARTITION&lt;br /&gt;
| Partition/queue running the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_UID&lt;br /&gt;
| User ID of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_SUBMIT_DIR&lt;br /&gt;
| Job submit folder.  The directory from which sbatch was invoked. &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_USER&lt;br /&gt;
| User name of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_RESTART_COUNT&lt;br /&gt;
| Number of times job has restarted&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_PROCID&lt;br /&gt;
| Task ID (MPI rank)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_NTASKS&lt;br /&gt;
| The total number of tasks available for the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_ID&lt;br /&gt;
| Job step ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_NUM_TASKS&lt;br /&gt;
| Task count (number of MPI ranks)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_CONSTRAINT&lt;br /&gt;
| Job constraints&lt;br /&gt;
|}&lt;br /&gt;
See also:&lt;br /&gt;
* [https://slurm.schedmd.com/sbatch.html#lbAI Slurm input and output environment variables]&lt;br /&gt;
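A minimal sketch of how such variables can be used inside a job script (the resource values and the echo lines are purely illustrative):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
&lt;br /&gt;
# print some of the Slurm variables listed above into the job output&lt;br /&gt;
echo &amp;quot;Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) runs on ${SLURM_JOB_NUM_NODES} node(s): ${SLURM_JOB_NODELIST}&amp;quot;&lt;br /&gt;
echo &amp;quot;Submitted from ${SLURM_SUBMIT_DIR} on cluster ${SLURM_CLUSTER_NAME}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;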
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Exit Codes ===&lt;br /&gt;
A job&#039;s exit code (also known as exit status, return code and completion code) is captured by SLURM and saved as part of the job record. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any non-zero exit code will be assumed to be a job failure and will result in a Job State of FAILED with a reason of &amp;quot;NonZeroExitCode&amp;quot;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The exit code is an 8 bit unsigned number ranging between 0 and 255. While it is possible for a job to return a negative exit code, SLURM will display it as an unsigned value in the 0 - 255 range.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Displaying Exit Codes and Signals ====&lt;br /&gt;
SLURM displays a job&#039;s exit code in the output of the &#039;&#039;&#039;scontrol show job&#039;&#039;&#039; command and in the sview utility.&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
When a signal was responsible for a job or step&#039;s termination, the signal number will be displayed after the exit code, delineated by a colon (:).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Submitting Termination Signal ====&lt;br /&gt;
Here is an example of how to &#039;save&#039; the exit status of your executable in a typical job script.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
mpirun  -np &amp;lt;#cores&amp;gt;  &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; ... (options)  2&amp;gt;&amp;amp;1&lt;br /&gt;
exit_code=$?&lt;br /&gt;
[ &amp;quot;$exit_code&amp;quot; -eq 0 ] &amp;amp;&amp;amp; echo &amp;quot;all clean...&amp;quot; || \&lt;br /&gt;
   echo &amp;quot;Executable &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; finished with exit code ${exit_code}&amp;quot;&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Do not prefix the mpirun call with &#039;&#039;&#039;time&#039;&#039;&#039;! The exit code captured would then be the one returned by the time command, not by your executable.&lt;br /&gt;
* You do not need an &#039;&#039;&#039;exit $exit_code&#039;&#039;&#039; in the scripts.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|bwUniCluster 2.0]]&lt;br /&gt;
[[#top|Back to top]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Slurm&amp;diff=6150</id>
		<title>BwUniCluster2.0/Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Slurm&amp;diff=6150"/>
		<updated>2020-04-02T10:59:34Z</updated>

		<summary type="html">&lt;p&gt;H Haefner: /* Multithreaded + MPI parallel Programs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div id=&amp;quot;top&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
=  Slurm HPC Workload Manager = &lt;br /&gt;
== Specification == &lt;br /&gt;
Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any kind of calculation on the compute nodes of [[bwUniCluster 2.0|bwUniCluster 2.0]] requires the user to define the calculation as a sequence of commands or a single command, together with the required run time, number of CPU cores and main memory, and to submit all of this, i.e., the &#039;&#039;&#039;batch job&#039;&#039;&#039;, to a resource and workload managing software. On bwUniCluster 2.0 the workload managing software Slurm is installed. Therefore any job submission by the user is done via commands of the Slurm software. Slurm queues and runs user jobs based on fair sharing policies.  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Slurm Commands (excerpt) ==&lt;br /&gt;
Some of the most frequently used Slurm commands for non-administrators working on bwUniCluster 2.0:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Slurm commands !! Brief explanation&lt;br /&gt;
|-&lt;br /&gt;
| [[#Job Submission : sbatch|sbatch]] || Submits a job and queues it in an input queue [[https://slurm.schedmd.com/sbatch.html sbatch]] &lt;br /&gt;
|-&lt;br /&gt;
| [[#Detailed job information : scontrol show job|scontrol show job]] || Displays detailed job state information [[https://slurm.schedmd.com/scontrol.html scontrol]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#List of your submitted jobs : squeue|squeue]] || Displays information about active, eligible, blocked, and/or recently completed jobs [[https://slurm.schedmd.com/squeue.html squeue]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Start time of job or resources : squeue|squeue --start]] || Returns start time of submitted job or requested resources [[https://slurm.schedmd.com/squeue.html squeue]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Shows free resources : sinfo_t_idle|sinfo_t_idle]] || Shows what resources are available for immediate use [[https://slurm.schedmd.com/sinfo.html sinfo]]&lt;br /&gt;
|-&lt;br /&gt;
| [[#Canceling own jobs : scancel|scancel]] || Cancels or signals a job [[https://slurm.schedmd.com/scancel.html scancel]]&lt;br /&gt;
|}&lt;br /&gt;
If your job was submitted to the &amp;quot;multiple&amp;quot; queue you can log into the allocated nodes via SSH as soon as the job is running.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* [https://slurm.schedmd.com/tutorials.html  Slurm Tutorials]&lt;br /&gt;
* [https://slurm.schedmd.com/pdfs/summary.pdf  Slurm command/option summary (2 pages)]&lt;br /&gt;
* [https://slurm.schedmd.com/man_index.html  Slurm Commands]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Job Submission : sbatch ==&lt;br /&gt;
Batch jobs are submitted by using the command &#039;&#039;&#039;sbatch&#039;&#039;&#039;. The main purpose of the &#039;&#039;&#039;sbatch&#039;&#039;&#039; command is to specify the resources that are needed to run the job. &#039;&#039;&#039;sbatch&#039;&#039;&#039; will then queue the batch job. However, the start of the batch job depends on the availability of the requested resources and on the fair share value.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== sbatch Command Parameters ===&lt;br /&gt;
The syntax and use of &#039;&#039;&#039;sbatch&#039;&#039;&#039; can be displayed via:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ man sbatch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;sbatch&#039;&#039;&#039; options can be used from the command line or in your job script.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | sbatch Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -t &#039;&#039;time&#039;&#039;  or  --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| #SBATCH --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| Wall clock time limit.&amp;lt;br&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -N &#039;&#039;count&#039;&#039;  or  --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of nodes to be used.&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -n &#039;&#039;count&#039;&#039;  or  --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of tasks to be launched.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Maximum count (&amp;lt;= 28 and &amp;lt;= 40 resp.) of tasks per node.&amp;lt;br&amp;gt;(Replaces the option ppn of MOAB.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -c &#039;&#039;count&#039;&#039; or --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of CPUs required per (MPI-)task.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Memory in MegaByte per node.&amp;lt;br&amp;gt;(Default value is 128000 and 96000 MB resp., i.e. you should omit &amp;lt;br&amp;gt; the setting of this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Minimum Memory required per allocated CPU.&amp;lt;br&amp;gt;(Replaces the option pmem of MOAB. You should omit &amp;lt;br&amp;gt; the setting of this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| Notify user by email when certain event types occur.&amp;lt;br&amp;gt;Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
|  The specified mail-address receives email notification of state&amp;lt;br&amp;gt;changes as defined by --mail-type.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job output is stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job error messages are stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -J &#039;&#039;name&#039;&#039; or --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| Job name.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| #SBATCH --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| Identifies which environment variables from the submission &amp;lt;br&amp;gt; environment are propagated to the launched application. Default &amp;lt;br&amp;gt; is ALL. If adding to the submission environment instead of &amp;lt;br&amp;gt; replacing it is intended,  the argument ALL must be added.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -A &#039;&#039;group-name&#039;&#039; or --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| #SBATCH --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| Charge resources used by this job to the specified group. You may &amp;lt;br&amp;gt; need this option if your account is assigned to more &amp;lt;br&amp;gt; than one group. The project group the job is accounted on &amp;lt;br&amp;gt; is shown behind &amp;quot;Account=&amp;quot; in the output of &amp;quot;scontrol show job&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -p &#039;&#039;queue-name&#039;&#039; or --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| #SBATCH --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| Request a specific queue for the resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;LSDF&#039;&#039; or --constraint=&#039;&#039;LSDF&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=LSDF&lt;br /&gt;
| Job constraint LSDF Filesystems.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;BEEOND&#039;&#039; or --constraint=&#039;&#039;BEEOND&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=BEEOND&lt;br /&gt;
| Job constraint BeeOND file system.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
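A minimal sketch of a job script header combining several of the options above (program name and all values are placeholders, not recommendations; %j in the output file name is replaced by the job ID):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=my_job&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=30&lt;br /&gt;
#SBATCH --output=my_job_%j.out&lt;br /&gt;
#SBATCH --mail-type=END,FAIL&lt;br /&gt;
#SBATCH --mail-user=user@example.org&lt;br /&gt;
&lt;br /&gt;
./my_program   # placeholder for the actual work&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;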
&lt;br /&gt;
==== sbatch --partition  &#039;&#039;queues&#039;&#039; ====&lt;br /&gt;
Queue classes define the maximum resources, such as walltime, number of nodes and processes per node, for each queue of the compute system. Details can be found here:&lt;br /&gt;
* [[BwUniCluster_2.0_Batch_Queues#sbatch_-p_queue|bwUniCluster 2.0 queue settings]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== sbatch Examples ===&lt;br /&gt;
==== Serial Programs ====&lt;br /&gt;
To submit a serial job that runs the script &#039;&#039;&#039;job.sh&#039;&#039;&#039; and that requires 5000 MB of main memory and 10 minutes of wall clock time&lt;br /&gt;
&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p dev_single -n 1 -t 10:00 --mem=5000  job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
b) add after the initial line of your script &#039;&#039;&#039;job.sh&#039;&#039;&#039; the lines (here with a high memory request):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
#SBATCH --mem=200gb&lt;br /&gt;
#SBATCH --job-name=simple&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and execute the modified script with the command line option &#039;&#039;--partition=fat&#039;&#039; (with &#039;&#039;--partition=(dev_)single&#039;&#039; maximum &#039;&#039;--mem=96gb&#039;&#039; is possible):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=fat job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options overrule script options.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded Programs ====&lt;br /&gt;
Multithreaded programs operate faster than serial programs on CPUs with multiple cores.&amp;lt;br&amp;gt;&lt;br /&gt;
Moreover, multiple threads of one process share resources such as memory.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Because hyperthreading is switched on on bwUniCluster 2.0, the option --cpus-per-task (-c) must be set to 2*n, if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To submit a batch job called &#039;&#039;OpenMP_Test&#039;&#039; that runs a 40-fold threaded program &#039;&#039;omp_exe&#039;&#039; which requires 6000 MByte of total physical memory and total wall clock time of 40 minutes:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single --export=ALL,OMP_NUM_THREADS=40 -J OpenMP_Test -N 1 -c 80 -t 40 --mem=6000 ./omp_exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
* generate the script &#039;&#039;&#039;job_omp.sh&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=80&lt;br /&gt;
#SBATCH --time=40:00&lt;br /&gt;
#SBATCH --mem=6000mb   &lt;br /&gt;
#SBATCH --export=ALL,EXECUTABLE=./omp_exe&lt;br /&gt;
#SBATCH -J OpenMP_Test&lt;br /&gt;
&lt;br /&gt;
#Usually you should set&lt;br /&gt;
export KMP_AFFINITY=compact,1,0&lt;br /&gt;
#export KMP_AFFINITY=verbose,compact,1,0 prints messages concerning the supported affinity&lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_JOB_CPUS_PER_NODE}/2))&lt;br /&gt;
echo &amp;quot;Executable ${EXECUTABLE} running on ${SLURM_JOB_CPUS_PER_NODE} cores with ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=${EXECUTABLE}&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on the binding of threads to specific cores. If necessary, replace &amp;lt;placeholder&amp;gt; with the required modulefile to enable the OpenMP environment. Then execute the script &#039;&#039;&#039;job_omp.sh&#039;&#039;&#039;, adding the queue class &#039;&#039;single&#039;&#039; as sbatch option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single job_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options overrule script options, e.g.,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=single --mem=200 job_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
overwrites the script setting of 6000 MByte with 200 MByte.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== MPI Parallel Programs ====&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi CPU and multi core systems. N-fold spawned processes of the MPI program, i.e., &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;,  run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Multiple MPI tasks must be launched via &#039;&#039;&#039;mpirun&#039;&#039;&#039;, e.g. 4 MPI tasks of &#039;&#039;my_par_program&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 4 my_par_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command runs 4 MPI tasks of &#039;&#039;my_par_program&#039;&#039; on the node you are logged in to.&lt;br /&gt;
To run this command with a loaded Intel MPI, the environment variable I_MPI_HYDRA_BOOTSTRAP must be unset ($ unset I_MPI_HYDRA_BOOTSTRAP).&lt;br /&gt;
&lt;br /&gt;
When running MPI parallel programs in a batch job, the interactive environment - particularly the loaded modules - will also be set in the batch job. If you want a defined module environment in your batch job, you have to purge all modules before loading the desired ones, as sketched below. &lt;br /&gt;
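A minimal sketch of such a defined module environment at the top of a job script (the module name and version are placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# start from a clean module environment inside the batch job&lt;br /&gt;
module purge&lt;br /&gt;
module load mpi/openmpi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;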
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI =====&lt;br /&gt;
&lt;br /&gt;
If you want to run jobs on batch nodes, generate a wrapper script &#039;&#039;job_ompi.sh&#039;&#039; for &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load mpi/openmpi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpirun --bind-to core --map-by core -report-bindings my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since Slurm instructs mpirun about the number of processes and the node hostnames. &#039;&#039;&#039;ALWAYS&#039;&#039;&#039; use the MPI options &#039;&#039;&#039;&#039;&#039;--bind-to core&#039;&#039;&#039;&#039;&#039; and &#039;&#039;&#039;&#039;&#039;--map-by core|socket|node&#039;&#039;&#039;&#039;&#039;. Please type &#039;&#039;mpirun --help&#039;&#039; for an explanation of the different arguments of the mpirun option &#039;&#039;--map-by&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch 4 OpenMPI tasks on a single node, each requiring 2000 MByte of memory, with a run time of 1 hour, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single -N 1 -n 4 --mem-per-cpu=2000 --time=01:00:00 ./job_ompi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Intel MPI =====&lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Use when a defined module environment related to Intel MPI is wished&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;   &lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since Slurm instructs mpirun about number of processes and node hostnames.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch 200 Intel MPI tasks on 5 nodes (40 tasks per node), each node requiring 80 GByte of memory, with a run time of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=multiple -N 5 --ntasks-per-node=40 --mem=80gb -t 300 ./job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use 128 or more nodes, you must also set the environment variable as follows:           &amp;lt;BR&amp;gt;&lt;br /&gt;
export I_MPI_HYDRA_BRANCH_COUNT=-1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use the options perhost, ppn or rr, you must additionally set the environment variable I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded + MPI parallel Programs ====&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multi CPUs with multiple cores. All threads of one process share resources such as memory. On the contrary MPI tasks do not share memory but can be spawned over different nodes. &#039;&#039;&#039;Because hyperthreading is switched on on bwUniCluster 2.0, the option --cpus-per-task (-c) must be set to 2*n, if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI with Multithreading =====&lt;br /&gt;
Multiple MPI tasks using &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; must be launched with &#039;&#039;&#039;mpirun&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;For OpenMPI&#039;&#039;&#039; a job script to submit a batch job called &#039;&#039;job_ompi_omp.sh&#039;&#039; that runs an MPI program with 4 tasks and a 28-fold threaded program &#039;&#039;ompi_omp_program&#039;&#039; requiring 3000 MByte of physical memory per thread (using 28 threads per MPI task you will get 28*3000 MByte = 84000 MByte per MPI task) and a total wall clock time of 3 hours looks like:&lt;br /&gt;
&amp;lt;!--b)--&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --cpus-per-task=28&lt;br /&gt;
#SBATCH --time=03:00:00&lt;br /&gt;
#SBATCH --mem=83gb    # 84000 MB = 84000/1024 GB = 82.03 GB, rounded up to 83 GB&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/openmpi/3.1,EXECUTABLE=./ompi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_hybrid_%j.out&amp;quot;  &lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by socket:PE=${SLURM_CPUS_PER_TASK} -report-bindings&amp;quot;&lt;br /&gt;
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
export NUM_CORES=$((${SLURM_NTASKS}*${SLURM_CPUS_PER_TASK}))&lt;br /&gt;
echo &amp;quot;${EXECUTABLE} running on ${NUM_CORES} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpirun -n ${SLURM_NTASKS} ${MPIRUN_OPTIONS} ${EXECUTABLE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_ompi_omp.sh&#039;&#039;&#039; by command sbatch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple_e ./job_ompi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* With the mpirun option &#039;&#039;--bind-to core&#039;&#039; MPI tasks and OpenMP threads are bound to physical cores.&lt;br /&gt;
* With the option &#039;&#039;--map-by node:PE=&amp;lt;value&amp;gt;&#039;&#039; neighboring MPI tasks will be attached to different nodes and each MPI task is bound to the first core of a node. &amp;lt;value&amp;gt; must be set to ${OMP_NUM_THREADS}.&lt;br /&gt;
* The option &#039;&#039;-report-bindings&#039;&#039; shows the bindings between MPI tasks and physical cores.&lt;br /&gt;
* The mpirun-options &#039;&#039;&#039;--bind-to core&#039;&#039;&#039;, &#039;&#039;&#039;--map-by socket|...|node:PE=&amp;lt;value&amp;gt;&#039;&#039;&#039; should always be used when running a multithreaded MPI program.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Intel MPI with Multithreading =====&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multi CPUs with multiple cores. All threads of one process share resources such as memory. On the contrary MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched with &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039; a job script to submit a batch job called &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program with 10 tasks and a 40-fold threaded program &#039;&#039;impi_omp_program&#039;&#039; requiring 96000 MByte of total physical memory per task and a total wall clock time of 1 hour looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=10&lt;br /&gt;
#SBATCH --cpus-per-task=40&lt;br /&gt;
#SBATCH --time=60&lt;br /&gt;
#SBATCH --mem=96000&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/impi,EXE=./impi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_impi_omp_%j.out&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=compact,1,0&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to Intel MPI is wished &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;-binding domain=omp:compact -print-rank-map -envall&amp;quot;&lt;br /&gt;
export NUM_PROCS=$((${SLURM_NTASKS}*${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${NUM_PROCS} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${SLURM_NTASKS} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on the binding of threads to specific cores. If you only run one MPI task per node please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&amp;lt;BR&amp;gt;&lt;br /&gt;
If you want to use 128 or more nodes, you must also set the environment variable as follows:           &amp;lt;BR&amp;gt;&lt;br /&gt;
export I_MPI_HYDRA_BRANCH_COUNT=-1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you want to use the options perhost, ppn or rr, you must additionally set the environment variable I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; by command sbatch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p multiple ./job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the bindings between MPI tasks and nodes (not very beneficial). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to a particular processor; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. If you choose 2 MPI tasks per node, you should use &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Chain jobs ====&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In those situations it is recommended to solve the problem by a job chain. A job chain is a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
####################################&lt;br /&gt;
## simple Slurm submitter script to setup   ## &lt;br /&gt;
## a chain of jobs using Slurm                    ##&lt;br /&gt;
####################################&lt;br /&gt;
## ver.  : 2018-11-27, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## https://slurm.schedmd.com/sbatch.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Set slurm_opt depending on chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      slurm_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      slurm_opt=&amp;quot;-d ${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and sbatch command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   sbatch --export=myloop_counter=${myloop_counter} ${slurm_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store job ID for the next iteration by parsing the output of the sbatch command&lt;br /&gt;
   jobID=$(sbatch -p &amp;lt;queue&amp;gt; --export=ALL,myloop_counter=${myloop_counter} ${slurm_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;s/[S,a-z]* //g&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occured&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
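The submitter above expects a job script &#039;&#039;chain_job.sh&#039;&#039; in the working directory. A minimal, purely hypothetical sketch of such a chain link (resources, program name and restart handling are placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=30&lt;br /&gt;
&lt;br /&gt;
# myloop_counter is exported by the submitter script via --export&lt;br /&gt;
echo &amp;quot;This is chain link number ${myloop_counter} (job ${SLURM_JOB_ID})&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# placeholder: continue the calculation, e.g. from a restart file written by the previous link&lt;br /&gt;
./my_par_program --restart checkpoint.dat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;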
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
&lt;br /&gt;
The nodes in the gpu_4 and gpu_8 queues have 4 or 8 NVIDIA Tesla V100 GPUs. Just submitting a job to these queues is not enough to also allocate one or more GPUs; you have to do so using the &amp;quot;--gres=gpu&amp;quot; parameter. You have to specify how many GPUs your job needs, e.g. &amp;quot;--gres=gpu:2&amp;quot; will request two GPUs.&lt;br /&gt;
&lt;br /&gt;
The GPU nodes are shared between multiple jobs if the jobs don&#039;t request all the GPUs in a node and there are enough resources to run more than one job. The individual GPUs are always bound to a single job and will not be shared between different jobs.&lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh the lines including the&lt;br /&gt;
information about the requested GPUs:&amp;lt;br&amp;gt;   #SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=40&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 40 -t 02:00:00 --mem 4000 --gres=gpu:2 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
If you start an interactive session on one of the GPU nodes, you can use the &amp;quot;nvidia-smi&amp;quot; command to list the GPUs allocated to your job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
Sun Mar 29 15:20:05 2020       &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla V100-SXM2...  Off  | 00000000:3A:00.0 Off |                    0 |&lt;br /&gt;
| N/A   29C    P0    39W / 300W |      9MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
|   1  Tesla V100-SXM2...  Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   30C    P0    41W / 300W |      8MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID   Type   Process name                             Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|    0     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
|    1     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== LSDF Online Storage ====&lt;br /&gt;
On bwUniCluster 2.0 you can, for special use cases, access the LSDF Online Storage on the HPC cluster nodes. Please request this service separately ([https://www.lsdf.kit.edu/os/storagerequest/ LSDF Storage Request]).&lt;br /&gt;
To mount the LSDF Online Storage on the compute nodes during the job runtime,&lt;br /&gt;
the constraint flag &amp;quot;LSDF&amp;quot; has to be set.  &lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh the line including the&lt;br /&gt;
information about the LSDF Online Storage usage:&amp;lt;br&amp;gt;   #SBATCH --constraint=LSDF&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=120&lt;br /&gt;
#SBATCH --mem=200&lt;br /&gt;
#SBATCH --constraint=LSDF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 1 -t 2:00:00 --mem 200 -C LSDF job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the usage of the LSDF Online Storage&lt;br /&gt;
the following environment variables are available: $LSDF, $LSDFPROJECTS, $LSDFHOME.&lt;br /&gt;
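A minimal sketch of using these variables in a job script (the project directory, file names and program are placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=120&lt;br /&gt;
#SBATCH --constraint=LSDF&lt;br /&gt;
&lt;br /&gt;
# copy input from the LSDF Online Storage, run the program, copy results back&lt;br /&gt;
cp ${LSDFPROJECTS}/my_project/input.dat .&lt;br /&gt;
./my_program input.dat&lt;br /&gt;
cp results.dat ${LSDFPROJECTS}/my_project/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;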
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====BeeOND (BeeGFS On-Demand)====&lt;br /&gt;
&lt;br /&gt;
BeeOND instances are integrated into the prolog and epilog scripts of the cluster batch system Slurm. BeeOND can be used on the compute nodes during the job runtime with the constraint flag &amp;quot;BEEOND&amp;quot; ([[BwUniCluster_2.0_Slurm_common_Features#sbatch_Command_Parameters|Slurm Command Parameters]]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH ...&lt;br /&gt;
#SBATCH --constraint=BEEOND&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After your job has started you can find the private on-demand file system in &#039;&#039;&#039;/mnt/odfs/$SLURM_JOB_ID&#039;&#039;&#039; directory. The mountpoint comes with three pre-configured directories:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#for small files (stripe count = 1)&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_1&lt;br /&gt;
#stripe count = 4&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_default&lt;br /&gt;
#stripe count = 8, 16 or 32; use these directories for medium-sized and large files or when using MPI-IO&lt;br /&gt;
/mnt/odfs/$SLURM_JOB_ID/stripe_8, /mnt/odfs/$SLURM_JOB_ID/stripe_16 or /mnt/odfs/$SLURM_JOB_ID/stripe_32&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you request fewer nodes than the stripe count, the effective stripe count is limited to the number of nodes, e.g., if you only request 8 nodes, the directory with stripe count 16 effectively uses a stripe count of 8.&lt;br /&gt;
&lt;br /&gt;
The capacity of the private file system depends on the number of nodes. For each node you get 250 GByte.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Be careful when creating large files:&#039;&#039;&#039; always use a directory with a sufficiently high stripe count.&lt;br /&gt;
For example, if your largest file is 1.1 TByte, you have to use a stripe count larger than 4, since 4 x 250 GByte = 1 TByte.  &lt;br /&gt;
&lt;br /&gt;
If you request 100 nodes for your job, the private file system has a capacity of 100 * 250 GByte, i.e. approximately 25 TByte.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommendation:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The private file system uses its own metadata server, which is started on the first node of the job. Depending on your application, the metadata server consumes a considerable amount of CPU power, so adding an extra node to your job can improve the usability of the on-demand file system. Start your application with the MPI option:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mpirun -nolocal myapplication&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
With the -nolocal option the node where mpirun is initiated is not used for your application. This node is then fully available for the metadata server of your requested on-demand file system.&lt;br /&gt;
&lt;br /&gt;
Example job script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#very simple example on how to use a private on-demand file system&lt;br /&gt;
#SBATCH -N 10&lt;br /&gt;
#SBATCH --constraint=BEEOND&lt;br /&gt;
&lt;br /&gt;
#create a workspace &lt;br /&gt;
ws_allocate myresults-$SLURM_JOB_ID 90&lt;br /&gt;
RESULTDIR=`ws_find myresults-$SLURM_JOB_ID`&lt;br /&gt;
&lt;br /&gt;
#Set ENV variable to on-demand file system&lt;br /&gt;
ODFSDIR=/mnt/odfs/$SLURM_JOB_ID/stripe_16/&lt;br /&gt;
&lt;br /&gt;
#start application and write results to on-demand file system&lt;br /&gt;
mpirun -nolocal myapplication -o $ODFSDIR/results&lt;br /&gt;
&lt;br /&gt;
#Copy back data after your job application end&lt;br /&gt;
rsync -av $ODFSDIR/results $RESULTDIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Start time of job or resources : squeue --start ==&lt;br /&gt;
This command can be used by any user to display the estimated start time of a job, based on historical usage, the earliest available reservable resources, and the priority-based backlog. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by &#039;&#039;&#039;any user&#039;&#039;&#039;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
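=== Example ===&lt;br /&gt;
A short usage sketch (the job ID is a placeholder; the reported start times are estimates only):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue --start                    # estimated start times of all your pending jobs&lt;br /&gt;
$ squeue --start -j &amp;lt;jobid&amp;gt;      # estimated start time of one specific job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;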
&lt;br /&gt;
== List of your submitted jobs : squeue ==&lt;br /&gt;
Displays information about &#039;&#039;&#039;your own&#039;&#039;&#039; active, pending and/or recently completed jobs. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by any user.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Flags ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Flag !! Description&lt;br /&gt;
|-&lt;br /&gt;
| -l, --long&lt;br /&gt;
| Report more of the available information for the selected jobs or job steps, subject to any constraints specified.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&#039;&#039;squeue&#039;&#039; example on bwUniCluster 2.0 &amp;lt;small&amp;gt;(Only your own jobs are displayed!)&amp;lt;/small&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue &lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18088744    single CPV.sbat   ab1234 PD       0:00      1 (Priority)&lt;br /&gt;
          18098414  multiple CPV.sbat   ab1234 PD       0:00      2 (Priority) &lt;br /&gt;
          18090089  multiple CPV.sbat   ab1234  R       2:27      2 uc2n[127-128]&lt;br /&gt;
$ squeue -l&lt;br /&gt;
            JOBID PARTITION     NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON) &lt;br /&gt;
         18088654    single CPV.sbat   ab1234 COMPLETI       4:29   2:00:00      1 uc2n374&lt;br /&gt;
         18088785    single CPV.sbat   ab1234  PENDING       0:00   2:00:00      1 (Priority)&lt;br /&gt;
         18098414  multiple CPV.sbat   ab1234  PENDING       0:00   2:00:00      2 (Priority)&lt;br /&gt;
         18088683    single CPV.sbat   ab1234  RUNNING       0:14   2:00:00      1 uc2n413  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* The output of &#039;&#039;squeue&#039;&#039; shows how many jobs of yours are running or pending and how many nodes are in use by your jobs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Shows free resources : sinfo_t_idle ==&lt;br /&gt;
The Slurm command sinfo is used to view partition and node information for a system running Slurm. It incorporates down time, reservations, and node state information in determining the available backfill window.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
SCC has prepared a special script (sinfo_t_idle) to find out how many processors are available for immediate use on the system. It is anticipated that users will use this information to submit jobs that meet these criteria and thus obtain quick job turnaround times. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be used by any user or administrator. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Example ===&lt;br /&gt;
* The following command displays what resources are available for immediate use in each partition.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sinfo_t_idle&lt;br /&gt;
Partition dev_multiple  :      8 nodes idle&lt;br /&gt;
Partition multiple      :    332 nodes idle&lt;br /&gt;
Partition dev_single    :      4 nodes idle&lt;br /&gt;
Partition single        :     76 nodes idle&lt;br /&gt;
Partition long          :     80 nodes idle&lt;br /&gt;
Partition fat           :      5 nodes idle&lt;br /&gt;
Partition dev_special   :    342 nodes idle&lt;br /&gt;
Partition special       :    342 nodes idle&lt;br /&gt;
Partition dev_multiple_e:      7 nodes idle&lt;br /&gt;
Partition multiple_e    :    335 nodes idle&lt;br /&gt;
Partition gpu_4         :     12 nodes idle&lt;br /&gt;
Partition gpu_8         :      6 nodes idle&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* In the above example, jobs can be started immediately in all partitions.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Detailed job information : scontrol show job ==&lt;br /&gt;
scontrol show job displays detailed job state information and diagnostic output for all or a specified job of yours. Detailed information is available for active, pending and recently completed jobs. The command scontrol is explained in detail on the webpage https://slurm.schedmd.com/scontrol.html or via manpage (man scontrol). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of all your jobs in normal mode: scontrol show job&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of a job with &amp;lt;jobid&amp;gt; in normal mode: scontrol show job &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
* End users can use scontrol show job to view the status of their &#039;&#039;&#039;own jobs&#039;&#039;&#039; only. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Arguments ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Option !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|- style=&amp;quot;width:12%;&amp;quot; &lt;br /&gt;
| -d&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Detailed mode&lt;br /&gt;
| Example: Display the state with jobid 18089884 in detailed mode. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt;scontrol -d show job 18089884&amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Scontrol show job Example ===&lt;br /&gt;
Here is an example from bwUniCluster 2.0.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
squeue    # show my own jobs (here the userid is replaced!)&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18089884  multiple CPV.sbat   bq0742  R      33:44      2 uc2n[165-166]&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
$ # now, see what&#039;s up with my pending job with jobid 18089884&lt;br /&gt;
$ &lt;br /&gt;
$ scontrol show job 18089884&lt;br /&gt;
&lt;br /&gt;
JobId=18089884 JobName=CPV.sbatch&lt;br /&gt;
   UserId=bq0742(8946) GroupId=scc(12345) MCS_label=N/A&lt;br /&gt;
   Priority=3 Nice=0 Account=kit QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   RunTime=00:35:06 TimeLimit=02:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2020-03-16T14:14:54 EligibleTime=2020-03-16T14:14:54&lt;br /&gt;
   AccrueTime=2020-03-16T14:14:54&lt;br /&gt;
   StartTime=2020-03-16T15:12:51 EndTime=2020-03-16T17:12:51 Deadline=N/A&lt;br /&gt;
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2020-03-16T15:12:51&lt;br /&gt;
   Partition=multiple AllocNode:Sid=uc2n995:5064&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=uc2n[165-166]&lt;br /&gt;
   BatchHost=uc2n165&lt;br /&gt;
   NumNodes=2 NumCPUs=160 NumTasks=80 CPUs/Task=1 ReqB:S:C:T=0:0:*:1&lt;br /&gt;
   TRES=cpu=160,mem=96320M,node=2,billing=160&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=40:0:*:1 CoreSpec=*&lt;br /&gt;
   MinCPUsNode=40 MinMemoryCPU=1204M MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) DelayBoot=00:00:00&lt;br /&gt;
   OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/CPV.sbatch&lt;br /&gt;
   WorkDir=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin&lt;br /&gt;
   StdErr=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   Power=&lt;br /&gt;
   MailUser=(null) MailType=NONE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can use standard Linux pipe commands to filter the very detailed scontrol show job output.&lt;br /&gt;
* In which state is the job?&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -i State&lt;br /&gt;
   JobState=COMPLETED Reason=None Dependency=(null)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
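* On which nodes and with which resources does the job run? (a further filter sketch using the example job from above)&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -E &amp;quot;NodeList|TRES&amp;quot;&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=uc2n[165-166]&lt;br /&gt;
   TRES=cpu=160,mem=96320M,node=2,billing=160&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;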
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cancel Slurm Jobs ==&lt;br /&gt;
The scancel command is used to cancel jobs. The command scancel is explained in detail on the webpage https://slurm.schedmd.com/scancel.html or via manpage (man scancel).   &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Canceling own jobs : scancel ===&lt;br /&gt;
scancel is used to signal or cancel jobs, job arrays or job steps. The command is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scancel [-i] &amp;lt;job-id&amp;gt;&lt;br /&gt;
$ scancel -t &amp;lt;job_state_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Flag !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -i, --interactive&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Interactive mode.&lt;br /&gt;
| Cancel the job 987654 interactively. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -i 987654 &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| -t, --state&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Restrict the scancel operation to jobs in a certain state. &amp;lt;br&amp;gt; &amp;quot;job_state_name&amp;quot; may have a value of either &amp;quot;PENDING&amp;quot;, &amp;quot;RUNNING&amp;quot; or &amp;quot;SUSPENDED&amp;quot;.&lt;br /&gt;
| Cancel all jobs in state &amp;quot;PENDING&amp;quot;. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -t &amp;quot;PENDING&amp;quot; &amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
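&amp;lt;br&amp;gt;&lt;br /&gt;
As a combined usage sketch (the options are standard scancel options, not bwUniCluster-specific), the state filter can be combined with the -u/--user option so that only your own pending jobs are removed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scancel -t PENDING -u $USER&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;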
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resource Managers =&lt;br /&gt;
=== Batch Job (Slurm) Variables ===&lt;br /&gt;
The following Slurm environment variables are added to your environment once your job has started&lt;br /&gt;
&amp;lt;small&amp;gt;(only an excerpt of the most important ones; a short usage sketch follows the table)&amp;lt;/small&amp;gt;.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Environment !! Brief explanation&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| Number of CPUs per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NODELIST &lt;br /&gt;
| List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NUM_NODES &lt;br /&gt;
| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_MEM_PER_NODE &lt;br /&gt;
| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| SLURM_NPROCS&lt;br /&gt;
| Total number of processes dedicated to the job &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CLUSTER_NAME&lt;br /&gt;
| Name of the cluster executing the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CPUS_PER_TASK &lt;br /&gt;
| Number of CPUs requested per task&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ACCOUNT&lt;br /&gt;
| Account name &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ID&lt;br /&gt;
| Job ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_NAME&lt;br /&gt;
| Job Name&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_PARTITION&lt;br /&gt;
| Partition/queue running the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_UID&lt;br /&gt;
| User ID of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_SUBMIT_DIR&lt;br /&gt;
| Job submit folder. The directory from which sbatch was invoked. &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_USER&lt;br /&gt;
| User name of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_RESTART_COUNT&lt;br /&gt;
| Number of times the job has been restarted&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_PROCID&lt;br /&gt;
| Task ID (MPI rank)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_NTASKS&lt;br /&gt;
| The total number of tasks available for the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_ID&lt;br /&gt;
| Job step ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_NUM_TASKS&lt;br /&gt;
| Task count of the job step (number of MPI ranks)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_CONSTRAINT&lt;br /&gt;
| Job constraints&lt;br /&gt;
|}&lt;br /&gt;
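&amp;lt;br&amp;gt;&lt;br /&gt;
These variables can be used directly in a jobscript, for example to pass the granted task count to mpirun instead of hard-coding it. The following is only a minimal sketch; the #SBATCH values and the executable name my_mpi_program are placeholders, not bwUniCluster recommendations:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
#SBATCH --time=00:30:00&lt;br /&gt;
&lt;br /&gt;
# report where and under which ID the job runs&lt;br /&gt;
echo &amp;quot;Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) uses ${SLURM_JOB_NUM_NODES} node(s): ${SLURM_JOB_NODELIST}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# start in the directory from which sbatch was invoked&lt;br /&gt;
cd &amp;quot;${SLURM_SUBMIT_DIR}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# use the task count granted by Slurm instead of a hard-coded value&lt;br /&gt;
mpirun -np &amp;quot;${SLURM_NTASKS}&amp;quot; ./my_mpi_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;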
See also:&lt;br /&gt;
* [https://slurm.schedmd.com/sbatch.html#lbAI Slurm input and output environment variables]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Exit Codes ===&lt;br /&gt;
A job&#039;s exit code (also known as exit status, return code and completion code) is captured by SLURM and saved as part of the job record. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any non-zero exit code will be assumed to be a job failure and will result in a Job State of FAILED with a reason of &amp;quot;NonZeroExitCode&amp;quot;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The exit code is an 8-bit unsigned number ranging between 0 and 255. While it is possible for a job to return a negative exit code, SLURM will display it as an unsigned value in the 0 - 255 range.&lt;br /&gt;
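Because only this 8-bit value is kept, any exit value outside the 0 - 255 range wraps around modulo 256. A generic shell illustration (not specific to Slurm):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bash -c &#039;exit 300&#039; ; echo $?&lt;br /&gt;
44&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;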
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Displaying Exit Codes and Signals ====&lt;br /&gt;
SLURM displays a job&#039;s exit code in the output of the &#039;&#039;&#039;scontrol show job&#039;&#039;&#039; command and in the sview utility.&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
When a signal was responsible for a job or step&#039;s termination, the signal number will be displayed after the exit code, delineated by a colon (:).&lt;br /&gt;
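&amp;lt;br&amp;gt;&lt;br /&gt;
As an example, reusing the job shown above, the exit code can be filtered out of the scontrol output (the line is the same one already visible in the full listing):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -i ExitCode&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;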
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Submitting Termination Signal ====&lt;br /&gt;
Here is an example of how to &#039;save&#039; the exit code of your executable (and thus a possible termination signal) in a typical jobscript.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
mpirun  -np &amp;lt;#cores&amp;gt;  &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; ... (options)  2&amp;gt;&amp;amp;1&lt;br /&gt;
# capture the exit code immediately after the mpirun call&lt;br /&gt;
exit_code=$?&lt;br /&gt;
[ &amp;quot;$exit_code&amp;quot; -eq 0 ] &amp;amp;&amp;amp; echo &amp;quot;all clean...&amp;quot; || \&lt;br /&gt;
   echo &amp;quot;Executable &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; finished with exit code ${exit_code}&amp;quot;&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Do not prefix mpirun with &#039;&#039;&#039;time&#039;&#039;&#039;! The exit code captured would then be the one returned by the first program (time).&lt;br /&gt;
* You do not need an explicit &#039;&#039;&#039;exit $exit_code&#039;&#039;&#039; at the end of the script.&lt;br /&gt;
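&amp;lt;br&amp;gt;&lt;br /&gt;
If you additionally want your jobscript to react to the signal Slurm sends shortly before the walltime limit is reached, you can request such a signal with the sbatch option --signal and catch it with a shell trap. The following is only a sketch; the signal choice, the lead time of 300 seconds and the marker file are placeholders, not bwUniCluster defaults:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
#SBATCH --signal=B:USR1@300   # ask Slurm to send SIGUSR1 to the batch shell 300 s before the time limit&lt;br /&gt;
&lt;br /&gt;
# placeholder handler: write a marker file (or trigger a checkpoint) before the job is killed&lt;br /&gt;
trap &#039;echo &amp;quot;Received SIGUSR1, time limit is near&amp;quot; &amp;gt; timeout.marker&#039; USR1&lt;br /&gt;
&lt;br /&gt;
# run the executable in the background and wait, so that the trap can fire while it is still running&lt;br /&gt;
mpirun -np &amp;lt;#cores&amp;gt; &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; &amp;amp;&lt;br /&gt;
wait&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;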
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|bwUniCluster 2.0]]&lt;br /&gt;
[[#top|Back to top]]&lt;/div&gt;</summary>
		<author><name>H Haefner</name></author>
	</entry>
</feed>