<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=S+Shamsudeen</id>
	<title>bwHPC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=S+Shamsudeen"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/e/Special:Contributions/S_Shamsudeen"/>
	<updated>2026-05-12T00:38:51Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5123</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5123"/>
		<updated>2017-09-14T10:45:30Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* What happens with code compiled for old partition running on the extension partition? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to log in to the broadwell login nodes? ==&lt;br /&gt;
* You can log in directly to the broadwell partition login nodes using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to the &#039;&#039;old&#039;&#039; uc1 login nodes, you can still use the broadwell nodes as compute nodes via the queueing system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xHost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
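&lt;br /&gt;
Here icc/ifort means the flag applies to either the Intel C compiler (icc) or the Intel Fortran compiler (ifort). For example, to build a Fortran code on a uc1e login node (the file and program names are placeholders only):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ifort -O2 -xHost -o myprog myprog.f90&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;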
&lt;br /&gt;
== How to compile the same code on the old and extension partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xAVX -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
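&lt;br /&gt;
A minimal end-to-end example (file and program names are placeholders): the resulting binary uses AVX as its baseline on the old nodes and automatically dispatches to the additional CORE-AVX2 code path on the broadwell nodes.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc -O2 -xAVX -axCORE-AVX2 -o myprog myprog.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;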
&lt;br /&gt;
== What happens with code compiled for the old partition which is running on the extension partition? ==&lt;br /&gt;
The code will run, but significantly slower, since the AVX2 instructions will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition? ==&lt;br /&gt;
Submitted jobs will be dispatched to the broadwell nodes if the queue is specified correctly, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub -q multinode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
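&lt;br /&gt;
For example, a complete submission might look as follows (the job script name and the resource values are placeholders; the exact -l syntax and limits depend on the Moab configuration, so consult the batch system documentation):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub -q multinode -l nodes=2:ppn=28,walltime=00:30:00 ./myjob.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;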
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5122</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5122"/>
		<updated>2017-09-14T10:38:16Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* How to compile code working on the old and extension partition? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to log in to the broadwell login nodes? ==&lt;br /&gt;
* You can log in directly to the broadwell partition login nodes using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to the &#039;&#039;old&#039;&#039; uc1 login nodes, you can still use the broadwell nodes as compute nodes via the queueing system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xHost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile the same code on the old and extension partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xAVX -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition running on the extension partition? ==&lt;br /&gt;
The code will run, but significantly slower, since the AVX2 instructions will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition? ==&lt;br /&gt;
Submitted jobs will be dispatched to the broadwell nodes if the queue is specified correctly, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub -q multinode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5121</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5121"/>
		<updated>2017-09-14T10:36:52Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* How to compile code on broadwell (extension) nodes? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to log in to the broadwell login nodes? ==&lt;br /&gt;
* You can log in directly to the broadwell partition login nodes using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to the &#039;&#039;old&#039;&#039; uc1 login nodes, you can still use the broadwell nodes as compute nodes via the queueing system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xHost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code working on both the old and extension partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xAVX -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition running on the extension partition? ==&lt;br /&gt;
The code will run, but significantly slower, since the AVX2 instructions will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition? ==&lt;br /&gt;
Submitted jobs will be dispatched to the broadwell nodes if the queue is specified correctly, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub -q multinode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5120</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5120"/>
		<updated>2017-09-14T10:36:08Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* How to compile code on broadwell (= extension) nodes? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to log in to the broadwell login nodes? ==&lt;br /&gt;
* You can log in directly to the broadwell partition login nodes using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to the &#039;&#039;old&#039;&#039; uc1 login nodes, you can still use the broadwell nodes as compute nodes via the queueing system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xHost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code working on both the old and extension partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xAVX -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition running on the extension partition? ==&lt;br /&gt;
The code will run, but significantly slower, since the AVX2 instructions will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition? ==&lt;br /&gt;
Submitted jobs will be dispatched to the broadwell nodes if the queue is specified correctly, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub -q multinode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5119</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5119"/>
		<updated>2017-09-14T10:35:40Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Compilation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to log in to the broadwell login nodes? ==&lt;br /&gt;
* You can log in directly to the broadwell partition login nodes using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to the &#039;&#039;old&#039;&#039; uc1 login nodes, you can still use the broadwell nodes as compute nodes via the queueing system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code on broadwell (= extension) nodes? ==&lt;br /&gt;
On uc1 (old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xHost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code working on both the old and extension partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xAVX -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition running on the extension partition? ==&lt;br /&gt;
The code will run, but significantly slower, since the AVX2 instructions will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition? ==&lt;br /&gt;
Submitted jobs will be dispatched to the broadwell nodes if the queue is specified correctly, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub -q multinode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5118</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5118"/>
		<updated>2017-09-12T11:52:40Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka &amp;quot;extension&amp;quot; partition).&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
== How to log in to the broadwell login nodes? ==&lt;br /&gt;
* You can log in directly to the broadwell partition login nodes using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to the &#039;&#039;old&#039;&#039; uc1 login nodes, you can still use the broadwell nodes as compute nodes via the queueing system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== How to compile code working on broadwell (= extension) nodes? ==&lt;br /&gt;
On uc1 (= old) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xHost&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to compile code working on both the old and extension partitions? ==&lt;br /&gt;
On uc1e (= extension) login nodes:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
icc/ifort -xAVX -axCORE-AVX2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens with code compiled for the old partition running on the extension partition? ==&lt;br /&gt;
The code will run, but significantly slower, since the AVX2 instructions will not be used. Please recompile your code accordingly.&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;!--* To compile the same jobs on uc1 and uc1e, you have to use appropriate flags. Otherwise the submitted jobs will be crashed. Some of the main flags are:--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== How to submit jobs to the broadwell (= extension) partition? ==&lt;br /&gt;
Submitted jobs will be dispatched to the broadwell nodes if the queue is specified correctly, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub -q multinode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5117</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5117"/>
		<updated>2017-09-12T11:29:54Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster, co-financed by Baden-Württemberg&#039;s Ministry of Science, Research and the Arts and&lt;br /&gt;
the shareholders:&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Ruprecht-Karls-Universität Heidelberg&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of universities of applied sciences in Baden-Württemberg) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; to [[bwUniCluster]], a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
Each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own members. Details on the issuing process and/or the entitlement application forms are listed hereafter:  &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/zugang-zu-den-ressourcen.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[‎BWUniCluster_User_Access_Members_Uni_Heidelberg|Ruprecht-Karls-Universität Heidelberg]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., after issuing the bwUniCluster entitlement, please visit: &lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username and your home-organisational password and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# &amp;lt;div&amp;gt;Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; &amp;lt;br&amp;gt;[[File:bwRegWebApp_avail_services_pic01.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* all non-KIT members &#039;&#039;&#039;must&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
*#* KIT members &#039;&#039;&#039;may optionally&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: &amp;lt;span style=&amp;quot;color:red;font-size:100%;font-weight:bold&amp;quot;&amp;gt;Amendment:&amp;lt;/span&amp;gt; bwUniCluster questionnaire ==&lt;br /&gt;
Starting June 1st, 2015, usage of bwUniCluster mandatorily requires the&lt;br /&gt;
questionnaire&lt;br /&gt;
&lt;br /&gt;
   https://www.bwhpc-c5.de/en/ZAS/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
to be answered. The input is solely used for the&lt;br /&gt;
elaboration of future support activities and for planning future HPC&lt;br /&gt;
resources. From June 1st, 2015, login to bwUniCluster is only permitted&lt;br /&gt;
to users who have already answered the questionnaire. &#039;&#039;&#039;&#039;&#039;Exception&#039;&#039;&#039;: For the first 14 days after the web registration, login to bwUniCluster is permitted without participation in the questionnaire.&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; to log on matches that of your KIT&lt;br /&gt;
account, while for all non-KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; is the one you saved during the web registration (compare step 7 of chapter 1.2). &lt;br /&gt;
At any time, you can set a new bwUniCluster password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find on the left side &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service (i.e. bwUniCluster) password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# the page confirms with e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;the password for the service has been changed&amp;quot;)&lt;br /&gt;
# you can then log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==  Contact ==&lt;br /&gt;
If you have questions or problems concerning bwUniCluster registration, please [[Registration_Support_-_bwUniCluster|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
After finishing the web registration, bwUniCluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Available SSH applications are:&lt;br /&gt;
* the built-in &#039;&#039;ssh&#039;&#039; command under Linux and macOS, e.g. in the &#039;&#039;Terminal&#039;&#039; application&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows &lt;br /&gt;
&lt;br /&gt;
bwUniCluster has two dedicated login nodes. The login node is selected automatically based on round-robin scheduling. When you log in again, your session might therefore run&lt;br /&gt;
on the other login node.&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; is allowed for login. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039;&lt;br /&gt;
are not allowed for security reasons.&lt;br /&gt;
&lt;br /&gt;
== Login applications: Linux and macOS ==&lt;br /&gt;
&lt;br /&gt;
A connection to bwUniCluster can be established by the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using &#039;&#039;OpenSSH&#039;&#039; (usually installed on Linux-based systems) and you want to use a GUI-based application on bwUniCluster, you should use the&lt;br /&gt;
&#039;&#039;ssh&#039;&#039; command with the option &#039;&#039;-X&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Login applications: Windows ==&lt;br /&gt;
&lt;br /&gt;
=== Hints for using MobaXterm ===&lt;br /&gt;
&lt;br /&gt;
Note: The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039;. &#039;&#039;MobaXterm&#039;&#039; provides an X11 server, allowing you to start GUI-based software.&lt;br /&gt;
 &lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039;, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc1.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then click &#039;OK&#039;. A terminal will open in which you can enter your password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deprecated: Hints for using PuTTY ===&lt;br /&gt;
Start PuTTY and, in the &#039;&#039;PuTTY Configuration&#039;&#039; window under the category &#039;&#039;Session&#039;&#039;, fill in the following fields &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host Name        : &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
Connection type  : SSH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click &#039;&#039;Open&#039;&#039;, enter your password and accept the &#039;&#039;host key&#039;&#039;. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Deprecated: Hints for using WinSCP ===&lt;br /&gt;
&lt;br /&gt;
Start WinSCP, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
File Protocol    : SCP&lt;br /&gt;
Host name        : bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
User name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click &#039;Login&#039; and enter your password. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
== About UserID / Username ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;UserID&amp;gt;&#039;&#039;&#039; of the ssh command is a placeholder for your &#039;&#039;&#039;&#039;&#039;username&#039;&#039;&#039;&#039;&#039; at your home &lt;br /&gt;
organization together with a &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
{| width=450px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
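&lt;br /&gt;
For example, a (hypothetical) University of Konstanz user with the local username jdoe would log in as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh kn_jdoe@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;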
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster are the access point to the compute system and to your bwUniCluster $HOME directory. The login nodes are shared by all users of bwUniCluster. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Your activities may also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
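&lt;br /&gt;
For instance, a long-running compilation can itself be wrapped in a small job script and submitted to the queueing system. The script below is only a sketch (queue name, walltime and paths must be adapted to your project):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -q singlenode&lt;br /&gt;
#MSUB -l walltime=02:00:00&lt;br /&gt;
cd $HOME/myproject&lt;br /&gt;
make&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;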
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave bwUniCluster, follow the deregistration checklist:&lt;br /&gt;
# Transfer all your data in $HOME and your workspaces to your local computer/storage, then delete all your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username and your home-organisational password and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that step 2 will automatically unsubscribe you from the bwunicluster mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Access]][[Category:Access|bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5100</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5100"/>
		<updated>2017-09-06T12:58:22Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Job execution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
* You can log in directly to the broadwell partition using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to uc1, you can still use the broadwell nodes as compute nodes; submitted jobs will be distributed to the broadwell nodes as well. &lt;br /&gt;
    example: msub -q singlenode &lt;br /&gt;
             msub -q multinode&lt;br /&gt;
* For compilation, however, you have to use the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== Q1 ==&lt;br /&gt;
* To compile the same code for uc1 and uc1e, you have to use the appropriate flags. Otherwise the submitted jobs will crash. Some of the main flags are:&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== Q2 ==&lt;br /&gt;
If the appropriate flags are selected, the submitted jobs can be executed on uc1 and uc1e.&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5099</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5099"/>
		<updated>2017-09-06T12:48:05Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Compilation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
* You can log in directly to the broadwell partition using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to uc1, you can still use the broadwell nodes as compute nodes; submitted jobs will be distributed to the broadwell nodes as well. &lt;br /&gt;
    example: msub -q singlenode &lt;br /&gt;
             msub -q multinode&lt;br /&gt;
* For compilation, however, you have to use the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== Q1 ==&lt;br /&gt;
* To compile the same code for uc1 and uc1e, you have to use the appropriate flags. Otherwise the submitted jobs will crash. Some of the main flags are:&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== Q2 ==&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5098</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5098"/>
		<updated>2017-09-06T12:46:00Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Are there separate login nodes for the bwUniCluster broadwell partition? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
* You can log in directly to the broadwell partition using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to uc1, you can still use the broadwell nodes as compute nodes; submitted jobs will be distributed to the broadwell nodes as well. &lt;br /&gt;
    example: msub -q singlenode &lt;br /&gt;
             msub -q multinode&lt;br /&gt;
* For compilation, however, you have to use the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== Q1 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== Q2 ==&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5097</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5097"/>
		<updated>2017-09-06T12:43:09Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Are there separate login nodes for the bwUniCluster broadwell partition? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
* You can log in directly to the broadwell partition using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to uc1, you can still use the broadwell nodes as compute nodes; submitted jobs will be distributed to the broadwell nodes as well. &lt;br /&gt;
    example: msub -q singlenode &lt;br /&gt;
             msub -q multinode&lt;br /&gt;
* For compilation, however, you have to use the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== Q1 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== Q2 ==&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5096</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5096"/>
		<updated>2017-09-06T12:42:38Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Are there separate login nodes for the bwUniCluster broadwell partition? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
* You can log in directly to the broadwell partition using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to uc1, you can still use the broadwell nodes as compute nodes; submitted jobs will be distributed to the broadwell nodes as well. &lt;br /&gt;
    example: msub -q singlenode &lt;br /&gt;
             msub -q multinode&lt;br /&gt;
* For compilation, however, you have to use the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== Q1 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== Q2 ==&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5095</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5095"/>
		<updated>2017-09-06T12:39:21Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Are there separate login nodes for the bwUniCluster broadwell partition? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
* You can log in directly to the broadwell partition using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to uc1, you can still use the broadwell nodes as compute nodes&lt;br /&gt;
    example: msub -q singlenode &lt;br /&gt;
             msub -q multinode&lt;br /&gt;
* For compilation, however, you have to use the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== Q1 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== Q2 ==&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5094</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5094"/>
		<updated>2017-09-06T12:38:43Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Are there separate login nodes for the bwUniCluster broadwell partition? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
* You can log in directly to the broadwell partition using&lt;br /&gt;
   $ ssh username@uc1e.scc.kit.edu&lt;br /&gt;
* Even if you log in to uc1, you can still use the broadwell nodes as compute nodes&lt;br /&gt;
    e.g.: msub -q singlenode &lt;br /&gt;
          msub -q multinode&lt;br /&gt;
* For compilation, however, you have to use the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== Q1 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== Q2 ==&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5093</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5093"/>
		<updated>2017-09-06T12:28:04Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Login */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but primarily to be used for compiling code.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== Q1 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== Q2 ==&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5092</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5092"/>
		<updated>2017-09-06T09:12:50Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Hints for using PuTTY */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster, co-financed by Baden-Württemberg&#039;s Ministry of Science, Research and the Arts and&lt;br /&gt;
the shareholders:&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Ruprecht-Karls-Universität Heidelberg&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of universities of applied sciences in Baden-Württemberg) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; to [[bwUniCluster]], a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
Each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own members. Details on the issuing process and/or the entitlement application forms are listed hereafter:  &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/zugang-zu-den-ressourcen.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[‎BWUniCluster_User_Access_Members_Uni_Heidelberg|Ruprecht-Karls-Universität Heidelberg]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., after issuing the bwUniCluster entitlement, please visit: &lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username and your home-organisational password and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# &amp;lt;div&amp;gt;Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; &amp;lt;br&amp;gt;[[File:bwRegWebApp_avail_services_pic01.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* all non-KIT members &#039;&#039;&#039;must&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
*#* KIT members &#039;&#039;&#039;may optionally&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: &amp;lt;span style=&amp;quot;color:red;font-size:100%;font-weight:bold&amp;quot;&amp;gt;Amendment:&amp;lt;/span&amp;gt; bwUniCluster questionnaire ==&lt;br /&gt;
Starting June 1st, 2015, usage of bwUniCluster mandatorily requires the&lt;br /&gt;
questionnaire&lt;br /&gt;
&lt;br /&gt;
   https://www.bwhpc-c5.de/en/ZAS/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
to be answered. The input is solely used for the&lt;br /&gt;
elaboration of future support activities and for planning future HPC&lt;br /&gt;
resources. From June 1st, 2015, login to bwUniCluster is only permitted&lt;br /&gt;
to users who have already answered the questionnaire. &#039;&#039;&#039;&#039;&#039;Exception&#039;&#039;&#039;: For the first 14 days after the web registration, login to bwUniCluster is permitted without participation in the questionnaire.&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; to log on matches that of your KIT&lt;br /&gt;
account, while for all non-KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; is the one you saved during the web registration (compare step 7 of chapter 1.2). &lt;br /&gt;
At any time, you can set a new bwUniCluster password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find on the left side &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service (i.e. bwUniCluster) password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# the page confirms with e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;the password for the service has been changed&amp;quot;)&lt;br /&gt;
# you can then log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==  Contact ==&lt;br /&gt;
If you have questions or problems concerning bwUniCluster registration, please [[Registration_Support_-_bwUniCluster|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
After finishing the web registration, &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login.&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; has two dedicated login nodes. The login node is selected automatically based on round-robin scheduling. When you log in again, your session might therefore run&lt;br /&gt;
on the other login node.&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;&#039;SSH&#039;&#039;&#039; is allowed for login. Other protocols like &#039;&#039;&#039;telnet&#039;&#039;&#039; or &#039;&#039;&#039;rlogin&#039;&#039;&#039;&lt;br /&gt;
are not allowed for security reasons.&lt;br /&gt;
&lt;br /&gt;
A connection to &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; can be established by the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using &#039;&#039;&#039;OpenSSH&#039;&#039;&#039; (usually installed on Linux-based systems) and you want to use a GUI-based application on &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039;, you should use the&lt;br /&gt;
&#039;&#039;ssh&#039;&#039; command with the option &#039;&#039;-X&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
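If you log in frequently, the connection settings can be stored in your &#039;&#039;&#039;OpenSSH&#039;&#039;&#039; client configuration. A minimal sketch of an entry in &#039;&#039;~/.ssh/config&#039;&#039; (the alias &#039;&#039;bwuni&#039;&#039; is only an example):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# connection alias for bwUniCluster with X11 forwarding&lt;br /&gt;
Host bwuni&lt;br /&gt;
    HostName bwunicluster.scc.kit.edu&lt;br /&gt;
    User &amp;lt;UserID&amp;gt;&lt;br /&gt;
    ForwardX11 yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards a plain &#039;&#039;ssh bwuni&#039;&#039; opens the connection with X11 forwarding enabled.&lt;br /&gt;
&lt;br /&gt;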
== Hints for using PuTTY ==&lt;br /&gt;
Start PuTTY and, in the &#039;&#039;PuTTY Configuration&#039;&#039; window under the category &#039;&#039;Session&#039;&#039;, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host Name        : &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
Connection type  : SSH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click on &#039;&#039;Open&#039;&#039;, enter your password and accept the new &#039;&#039;host key&#039;&#039;. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
== Hints for using MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
Start MobaXterm, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc1.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then click on &#039;&#039;OK&#039;&#039;. A terminal will open in which you can enter your password.&lt;br /&gt;
&lt;br /&gt;
== Hints for using WinSCP ==&lt;br /&gt;
&lt;br /&gt;
Start WinSCP, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
File Protocol    : SCP&lt;br /&gt;
Host name        : bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
User name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click on &#039;&#039;Login&#039;&#039; and enter your password. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
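The same transfer works without a GUI as well; a minimal &#039;&#039;scp&#039;&#039; sketch (the file name &#039;&#039;results.dat&#039;&#039; is only an example):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scp results.dat &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu:&lt;br /&gt;
$ scp &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu:results.dat .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;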
&lt;br /&gt;
== About UserID / Username ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;UserID&amp;gt;&#039;&#039;&#039; in the ssh command is a placeholder for your &#039;&#039;&#039;&#039;&#039;username&#039;&#039;&#039;&#039;&#039; at your home&lt;br /&gt;
organization together with a &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
{| width=450px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
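For example, a hypothetical user &#039;&#039;mmuster&#039;&#039; of Universität Ulm would log in as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh ul_mmuster@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;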
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster are the access point to the compute system and to your bwUniCluster $HOME directory. The login nodes are shared with all users of bwUniCluster. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
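As a minimal sketch, a long-running compilation could be wrapped in a job script and handed to the queueing system via &#039;&#039;msub&#039;&#039; (script name and resource values below are illustrative; see the [[bwUniCluster Batch Jobs|batch jobs]] page for the valid options):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat compile.sh&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=1&lt;br /&gt;
#MSUB -l walltime=00:30:00&lt;br /&gt;
make&lt;br /&gt;
$ msub compile.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;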
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to leave the bwUniCluster permanently, follow this deregistration checklist:&lt;br /&gt;
# Transfer all data in your $HOME and workspace directories to your local computer/storage and then delete all of your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that step 2 will automatically unsubscribe you from the bwUniCluster mailing list.&lt;br /&gt;
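For step 1, the contents of your $HOME directory can, for example, be copied recursively to your local machine with &#039;&#039;rsync&#039;&#039; (the local target path is only an example):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ rsync -avz &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu:~/ /local/backup/bwuni_home/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;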
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Access]][[Category:Access|bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5091</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5091"/>
		<updated>2017-09-06T09:12:15Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Login */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and &lt;br /&gt;
the shareholders:&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Ruprecht-Karls-Universität Heidelberg&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of universities of applied sciences in Baden-Württemberg)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; to [[bwUniCluster]], a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
Each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for its own members. Details on the issuing process and/or the entitlement application forms are listed hereafter:&lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/zugang-zu-den-ressourcen.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[‎BWUniCluster_User_Access_Members_Uni_Heidelberg|Ruprecht-Karls-Universität Heidelberg]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., after issuing the bwUniCluster entitlement, please visit:&lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# &amp;lt;div&amp;gt;Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&amp;lt;br&amp;gt;[[File:bwRegWebApp_avail_services_pic01.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* for all non-KIT members &#039;&#039;&#039;mandatorily&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
*#* for KIT members &#039;&#039;&#039;optionally&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: &amp;lt;span style=&amp;quot;color:red;font-size:100%;font-weight:bold&amp;quot;&amp;gt;Amendment:&amp;lt;/span&amp;gt; bwUniCluster questionnaire ==&lt;br /&gt;
Starting June 1st, 2015, usage of bwUniCluster mandatorily requires the&lt;br /&gt;
questionnaire&lt;br /&gt;
&lt;br /&gt;
   https://www.bwhpc-c5.de/en/ZAS/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
to be answered. The input is solely used for the&lt;br /&gt;
elaboration of future support activities and for planning future HPC&lt;br /&gt;
resources. From June 1st, 2015, login to bwUniCluster is only permitted&lt;br /&gt;
to users who have already answered the questionnaire. &#039;&#039;&#039;&#039;&#039;Exception&#039;&#039;&#039;: For the first 14 days after the web registration login to bwUniCluster is permitted without participation in the questionnaire.&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; matches that of your KIT&lt;br /&gt;
account, while for all non-KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; is the one you saved during the web registration (compare step 7 of chapter 1.2).&lt;br /&gt;
At any time, you can set a new bwUniCluster password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; on the left side and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service password (i.e. your bwUniCluster password), repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button&lt;br /&gt;
# the page responds with e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;)&lt;br /&gt;
# from then on, log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==  Contact ==&lt;br /&gt;
If you have questions or problems concerning bwUniCluster registration, please [[Registration_Support_-_bwUniCluster|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
After finishing the web registration, &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login.&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; has two dedicated login nodes. The login node is selected automatically based on round-robin scheduling; when you log in again, your session&lt;br /&gt;
might run on the other login node.&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;&#039;SSH&#039;&#039;&#039; may be used to log in. Other protocols like &#039;&#039;&#039;telnet&#039;&#039;&#039; or &#039;&#039;&#039;rlogin&#039;&#039;&#039;&lt;br /&gt;
are not allowed for security reasons.&lt;br /&gt;
&lt;br /&gt;
A connection to &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; can be established by the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using &#039;&#039;&#039;OpenSSH&#039;&#039;&#039; (usually installed on Linux-based systems) and you want to use a GUI-based application on &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039;, use the&lt;br /&gt;
&#039;&#039;ssh&#039;&#039; command with the option &#039;&#039;-X&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hints for using PuTTY ==&lt;br /&gt;
Start PuTTY and, in the &#039;&#039;PuTTY Configuration&#039;&#039; window under the category &#039;&#039;Session&#039;&#039;, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host Name        : &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
Connection type  : SSH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click on &#039;&#039;Open&#039;&#039;, enter your password and accept the new &#039;&#039;host key&#039;&#039;. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
== Hints for using MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
Start MobaXterm, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc1.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then click on &#039;&#039;OK&#039;&#039;. A terminal will open in which you can enter your password.&lt;br /&gt;
&lt;br /&gt;
== Hints for using WinSCP ==&lt;br /&gt;
&lt;br /&gt;
Start WinSCP, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
File Protocol    : SCP&lt;br /&gt;
Host name        : bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
User name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click on &#039;&#039;Login&#039;&#039; and enter your password. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
== About UserID / Username ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;UserID&amp;gt;&#039;&#039;&#039; in the ssh command is a placeholder for your &#039;&#039;&#039;&#039;&#039;username&#039;&#039;&#039;&#039;&#039; at your home&lt;br /&gt;
organization together with a &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
{| width=450px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster are the access point to the compute system and to your bwUniCluster $HOME directory. The login nodes are shared with all users of bwUniCluster. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to leave the bwUniCluster permanently, follow this deregistration checklist:&lt;br /&gt;
# Transfer all data in your $HOME and workspace directories to your local computer/storage and then delete all of your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that step 2 will automatically unsubscribe you from the bwUniCluster mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Access]][[Category:Access|bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5090</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5090"/>
		<updated>2017-09-06T08:42:12Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Login */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and &lt;br /&gt;
the shareholders:&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Ruprecht-Karls-Universität Heidelberg&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of universities of applied sciences in Baden-Württemberg)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; to [[bwUniCluster]], a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
Each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for its own members. Details on the issuing process and/or the entitlement application forms are listed hereafter:&lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/zugang-zu-den-ressourcen.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[‎BWUniCluster_User_Access_Members_Uni_Heidelberg|Ruprecht-Karls-Universität Heidelberg]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., after issuing the bwUniCluster entitlement, please visit:&lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# &amp;lt;div&amp;gt;Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&amp;lt;br&amp;gt;[[File:bwRegWebApp_avail_services_pic01.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* for all non-KIT members &#039;&#039;&#039;mandatorily&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
*#* for KIT members &#039;&#039;&#039;optionally&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: &amp;lt;span style=&amp;quot;color:red;font-size:100%;font-weight:bold&amp;quot;&amp;gt;Amendment:&amp;lt;/span&amp;gt; bwUniCluster questionnaire ==&lt;br /&gt;
Starting June 1st, 2015, usage of bwUniCluster mandatorily requires the&lt;br /&gt;
questionnaire&lt;br /&gt;
&lt;br /&gt;
   https://www.bwhpc-c5.de/en/ZAS/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
to be answered. The input is solely used for the&lt;br /&gt;
elaboration of future support activities and for planning future HPC&lt;br /&gt;
resources. From June 1st, 2015, login to bwUniCluster is only permitted&lt;br /&gt;
to users who have already answered the questionnaire. &#039;&#039;&#039;&#039;&#039;Exception&#039;&#039;&#039;: For the first 14 days after the web registration login to bwUniCluster is permitted without participation in the questionnaire.&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; matches that of your KIT&lt;br /&gt;
account, while for all non-KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; is the one you saved during the web registration (compare step 7 of chapter 1.2).&lt;br /&gt;
At any time, you can set a new bwUniCluster password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; on the left side and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service password (i.e. your bwUniCluster password), repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button&lt;br /&gt;
# the page responds with e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;)&lt;br /&gt;
# from then on, log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==  Contact ==&lt;br /&gt;
If you have questions or problems concerning bwUniCluster registration, please [[Registration_Support_-_bwUniCluster|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
After finishing the web registration, &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login.&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; has two dedicated login nodes. The login node is selected automatically based on round-robin scheduling; when you log in again, your session&lt;br /&gt;
might run on the other login node.&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;&#039;SSH&#039;&#039;&#039; may be used to log in. Other protocols like &#039;&#039;&#039;telnet&#039;&#039;&#039; or &#039;&#039;&#039;rlogin&#039;&#039;&#039;&lt;br /&gt;
are not allowed for security reasons.&lt;br /&gt;
&lt;br /&gt;
A connection to &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; can be established by the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using &#039;&#039;&#039;OpenSSH&#039;&#039;&#039; (usually installed on Linux-based systems) and you want to use a GUI-based application on &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039;, use the&lt;br /&gt;
&#039;&#039;ssh&#039;&#039; command with the option &#039;&#039;-X&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hints for using PuTTY ==&lt;br /&gt;
Start PuTTY and, in the &#039;&#039;PuTTY Configuration&#039;&#039; window under the category &#039;&#039;Session&#039;&#039;, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host Name        : &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
Connection type  : SSH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click on &#039;&#039;Open&#039;&#039;, enter your password and accept the new &#039;&#039;host key&#039;&#039;. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
== Hints for using MobaXterm ==&lt;br /&gt;
&lt;br /&gt;
Start MobaXterm, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc1.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click on &#039;&#039;OK&#039;&#039;. A terminal will open in which you can enter your password.&lt;br /&gt;
&lt;br /&gt;
== Hints for using WinSCP ==&lt;br /&gt;
&lt;br /&gt;
Start WinSCP, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
File Protocol    : SCP&lt;br /&gt;
Host name        : bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
User name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click on &#039;&#039;Login&#039;&#039; and enter your password. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
== About UserID / Username ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;UserID&amp;gt;&#039;&#039;&#039; in the ssh command is a placeholder for your &#039;&#039;&#039;&#039;&#039;username&#039;&#039;&#039;&#039;&#039; at your home&lt;br /&gt;
organization together with a &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
{| width=450px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster are the access point to the compute system and to your bwUniCluster $HOME directory. The login nodes are shared with all users of bwUniCluster. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to leave the bwUniCluster permanently, follow this deregistration checklist:&lt;br /&gt;
# Transfer all data in your $HOME and workspace directories to your local computer/storage and then delete all of your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that step 2 will automatically unsubscribe you from the bwUniCluster mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Access]][[Category:Access|bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5089</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=5089"/>
		<updated>2017-09-06T08:34:06Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Login */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and &lt;br /&gt;
the shareholders:&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Ruprecht-Karls-Universität Heidelberg&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of universities of applied sciences in Baden-Württemberg)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; to [[bwUniCluster]], a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
Each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for its own members. Details on the issuing process and/or the entitlement application forms are listed hereafter:&lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/zugang-zu-den-ressourcen.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[‎BWUniCluster_User_Access_Members_Uni_Heidelberg|Ruprecht-Karls-Universität Heidelberg]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., after issuing the bwUniCluster entitlement, please visit:&lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# &amp;lt;div&amp;gt;Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&amp;lt;br&amp;gt;[[File:bwRegWebApp_avail_services_pic01.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* for all non-KIT members &#039;&#039;&#039;mandatorily&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
*#* for KIT members &#039;&#039;&#039;optionally&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: &amp;lt;span style=&amp;quot;color:red;font-size:100%;font-weight:bold&amp;quot;&amp;gt;Amendment:&amp;lt;/span&amp;gt; bwUniCluster questionnaire ==&lt;br /&gt;
Starting June 1st, 2015, usage of bwUniCluster mandatorily requires the&lt;br /&gt;
questionnaire&lt;br /&gt;
&lt;br /&gt;
   https://www.bwhpc-c5.de/en/ZAS/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
to be answered. The input is solely used for the&lt;br /&gt;
elaboration of future support activities and for planning future HPC&lt;br /&gt;
resources. From June 1st, 2015, login to bwUniCluster is only permitted&lt;br /&gt;
to users who have already answered the questionnaire. &#039;&#039;&#039;&#039;&#039;Exception&#039;&#039;&#039;: For the first 14 days after the web registration login to bwUniCluster is permitted without participation in the questionnaire.&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; matches that of your KIT&lt;br /&gt;
account, while for all non-KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; is the one you saved during the web registration (compare step 7 of chapter 1.2).&lt;br /&gt;
At any time, you can set a new bwUniCluster password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; on the left side and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service password (i.e. your bwUniCluster password), repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button&lt;br /&gt;
# the page responds with e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;)&lt;br /&gt;
# from then on, log in using the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==  Contact ==&lt;br /&gt;
If you have questions or problems concerning bwUniCluster registration, please [[Registration_Support_-_bwUniCluster|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
After finishing the web registration, &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login.&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; has two dedicated login nodes. The login node is selected automatically based on round-robin scheduling; when you log in again, your session&lt;br /&gt;
might run on the other login node.&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;&#039;SSH&#039;&#039;&#039; may be used to log in. Other protocols like &#039;&#039;&#039;telnet&#039;&#039;&#039; or &#039;&#039;&#039;rlogin&#039;&#039;&#039;&lt;br /&gt;
are not allowed for security reasons.&lt;br /&gt;
&lt;br /&gt;
A connection to &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; can be established by the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using &#039;&#039;&#039;OpenSSH&#039;&#039;&#039; (usually installed on Linux-based systems) and you want to use a GUI-based application on &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039;, use the&lt;br /&gt;
&#039;&#039;ssh&#039;&#039; command with the option &#039;&#039;-X&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hints for using PuTTY ==&lt;br /&gt;
Start PuTTY and, in the &#039;&#039;PuTTY Configuration&#039;&#039; window under the category &#039;&#039;Session&#039;&#039;, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host Name        : &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
Connection type  : SSH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click on &#039;&#039;Open&#039;&#039;, enter your password and accept the new &#039;&#039;host key&#039;&#039;. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
== Hints for using WinSCP ==&lt;br /&gt;
&lt;br /&gt;
Start WinSCP, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
File Protocol    : SCP&lt;br /&gt;
Host name        : bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
User name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then click on &#039;&#039;Login&#039;&#039; and enter your password. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
== About UserID / Username ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;UserID&amp;gt;&#039;&#039;&#039; in the ssh command is a placeholder for your &#039;&#039;&#039;&#039;&#039;username&#039;&#039;&#039;&#039;&#039; at your home&lt;br /&gt;
organization together with a &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
{| width=450px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster are the access point to the compute system and to your bwUniCluster $HOME directory. The login nodes are shared with all users of bwUniCluster. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to leave the bwUniCluster permanently, follow this deregistration checklist:&lt;br /&gt;
# Transfer all data in your $HOME and workspace directories to your local computer/storage and then delete all of your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that step 2 will automatically unsubscribe you from the bwUniCluster mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Access]][[Category:Access|bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5087</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5087"/>
		<updated>2017-08-29T12:22:03Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but they are primarily intended for compiling code; for merely submitting jobs, either set of login nodes is fine.&lt;br /&gt;
&lt;br /&gt;
= Compilation =&lt;br /&gt;
== Q1 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Job execution =&lt;br /&gt;
== Q2 ==&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5086</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5086"/>
		<updated>2017-08-29T12:19:44Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but they are primarily intended for compiling code; for merely submitting jobs, either set of login nodes is fine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5085</id>
		<title>BwUniCluster2.0/FAQ - broadwell partition</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/FAQ_-_broadwell_partition&amp;diff=5085"/>
		<updated>2017-08-29T12:18:02Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: Created page with &amp;quot;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.  __TOC__  == Are there separate login nodes for the bwUniCluster broadwell partition? == *...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;FAQs concerning best practice of bwUniCluster broadwell (aka &amp;quot;extension&amp;quot;) partition.&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== Are there separate login nodes for the bwUniCluster broadwell partition? ==&lt;br /&gt;
* Yes, but they are primarily intended for compiling code; for merely submitting jobs, either set of login nodes is fine.&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=4926</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=4926"/>
		<updated>2017-07-18T12:33:01Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and &lt;br /&gt;
the shareholders:&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Ruprecht-Karls-Universität Heidelberg&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of universities of applied sciences in Baden-Württemberg) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; to [[bwUniCluster]], a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; requires registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
Each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for its own members. Details on the issuing process and/or the entitlement application forms are listed hereafter:&lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/zugang-zu-den-ressourcen.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Heidelberg|Ruprecht-Karls-Universität Heidelberg]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* [http://www.kim.uni-hohenheim.de/bwhpc University of Hohenheim]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., once the bwUniCluster entitlement has been issued, please visit:&lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# &amp;lt;div&amp;gt;Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; &amp;lt;br&amp;gt;[[File:bwRegWebApp_avail_services_pic01.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* all non-KIT members must &#039;&#039;&#039;mandatorily&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
*#* KIT members may &#039;&#039;&#039;optionally&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: &amp;lt;span style=&amp;quot;color:red;font-size:100%;font-weight:bold&amp;quot;&amp;gt;Amendment:&amp;lt;/span&amp;gt; bwUniCluster questionnaire ==&lt;br /&gt;
Since June 1st, 2015, usage of bwUniCluster requires the&lt;br /&gt;
questionnaire&lt;br /&gt;
&lt;br /&gt;
   https://www.bwhpc-c5.de/en/ZAS/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
to be answered. The input is used solely for the&lt;br /&gt;
elaboration of future support activities and for the planning of future HPC&lt;br /&gt;
resources. Login to bwUniCluster is only permitted&lt;br /&gt;
to users who have already answered the questionnaire. &#039;&#039;&#039;&#039;&#039;Exception&#039;&#039;&#039;: for the first 14 days after the web registration, login to bwUniCluster is permitted without participation in the questionnaire.&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; to log on matches that of your KIT &lt;br /&gt;
account, while for all non-KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; is the one you set during the web registration (compare step 7 of chapter 1.2). &lt;br /&gt;
At any time, you can set a new bwUniCluster password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find on the left side &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service (i.e. bwUniCluster) password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button&lt;br /&gt;
# the page confirms with e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;the password for the service has been changed&amp;quot;)&lt;br /&gt;
# proceed to log in with the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==  Contact ==&lt;br /&gt;
If you have questions or problems concerning bwUniCluster registration, please [[Registration_Support_-_bwUniCluster|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
After finishing the web registration, &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login.&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; has two dedicated login nodes. The login node is selected automatically via round-robin scheduling, so a later session might run &lt;br /&gt;
on the other login node.&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;&#039;SSH&#039;&#039;&#039; is allowed for login. Other protocols like &#039;&#039;&#039;telnet&#039;&#039;&#039; or &#039;&#039;&#039;rlogin&#039;&#039;&#039;&lt;br /&gt;
are not allowed for security reasons.&lt;br /&gt;
&lt;br /&gt;
A connection to &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; can be established by the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using &#039;&#039;&#039;OpenSSH&#039;&#039;&#039; (usually installed on Linux based systems) and you want to use a GUI-based application on &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; you should use the &lt;br /&gt;
&#039;&#039;ssh&#039;&#039; command with the option &#039;&#039;-X&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
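&lt;br /&gt;
If you log in frequently, you may optionally define a host alias in your local &#039;&#039;~/.ssh/config&#039;&#039; file. A minimal sketch (the alias name &#039;&#039;uc&#039;&#039; is arbitrary):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host uc&lt;br /&gt;
    HostName bwunicluster.scc.kit.edu&lt;br /&gt;
    User &amp;lt;UserID&amp;gt;&lt;br /&gt;
    ForwardX11 yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards &#039;&#039;ssh uc&#039;&#039; replaces the full command above.&lt;br /&gt;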
&lt;br /&gt;
== Hints for using PuTTY ==&lt;br /&gt;
Start PuTTY. In the &#039;&#039;PuTTY Configuration&#039;&#039; window, under category &#039;&#039;Session&#039;&#039;, fill in the following fields &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host Name       : &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
Port            : 22&lt;br /&gt;
Connection type : SSH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and click on &#039;&#039;Open&#039;&#039;, enter your password and accept the &#039;&#039;host key&#039;&#039;. Note that you can also save the configured session via &#039;&#039;Save&#039;&#039; and load it later under the given name. &lt;br /&gt;
&lt;br /&gt;
== Hints for using WinSCP ==&lt;br /&gt;
&lt;br /&gt;
Start WinSCP, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
File Protocol    : SCP&lt;br /&gt;
Host name        : bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
User name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and click on &#039;&#039;Login&#039;&#039; and enter your password. Note that you can also save the configured session via &#039;&#039;Save&#039;&#039; and load it later under the given name.&lt;br /&gt;
&lt;br /&gt;
== About UserID / Username ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;UserID&amp;gt;&#039;&#039;&#039; of the ssh command is a placeholder for your &#039;&#039;&#039;&#039;&#039;username&#039;&#039;&#039;&#039;&#039; at your home &lt;br /&gt;
organization together with a &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
{| width=450px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
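&lt;br /&gt;
For example, a (hypothetical) member of Universität Freiburg with the home-organizational username &#039;&#039;jdoe&#039;&#039; would log in as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh fr_jdoe@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;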
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster are the access point to the compute system and to your bwUniCluster $HOME directory. The login nodes are shared among all users of bwUniCluster. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave the bwUniCluster, follow this deregistration checklist:&lt;br /&gt;
# Transfer all your data from $HOME and your workspaces to your local computer/storage and afterwards delete all your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that Step 2 will automatically unsubscribe you from the bwUniCluster mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Access]][[Category:Access|bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/First_Steps&amp;diff=4924</id>
		<title>BwUniCluster2.0/First Steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/First_Steps&amp;diff=4924"/>
		<updated>2017-07-18T12:25:38Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Command line interface ==&lt;br /&gt;
&lt;br /&gt;
All work is done via the command line interface. A quick guide can be found [https://indico.scc.kit.edu/indico/event/278/material/slides/10.pdf here].&lt;br /&gt;
&lt;br /&gt;
== Software setup ==&lt;br /&gt;
&lt;br /&gt;
Software provided by bwHPC has to be loaded as needed via the [[BwUniCluster_Environment_Modules|software module system]].&lt;br /&gt;
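&lt;br /&gt;
Typical module commands are sketched below (the module name &#039;&#039;compiler/intel&#039;&#039; is only an example; check &#039;&#039;module avail&#039;&#039; for the names actually installed):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail                  # list available software modules&lt;br /&gt;
$ module load compiler/intel    # load a module (name is an example)&lt;br /&gt;
$ module list                   # show currently loaded modules&lt;br /&gt;
$ module unload compiler/intel  # unload the module again&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;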
&lt;br /&gt;
== Computation == &lt;br /&gt;
&lt;br /&gt;
Any kind of computation on any HPC cluster is done as [[bwUniCluster Batch Jobs|batch jobs]]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
For setting up your environment to use compilers and installed software please visit:&lt;br /&gt;
* [[BwUniCluster_Environment_Modules|bwUniCluster user environment]]&lt;br /&gt;
For guides on how to submit compute jobs to bwUniCluster please visit:&lt;br /&gt;
* [[bwUniCluster Batch Jobs|bwUniCluster batch jobs]]--&amp;gt;&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/First_Steps&amp;diff=4922</id>
		<title>BwUniCluster2.0/First Steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/First_Steps&amp;diff=4922"/>
		<updated>2017-07-18T12:25:17Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: S Shamsudeen moved page BwUniCluster First Steps to First Steps on bwHPC cluster&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= First steps on HPC Cluster =&lt;br /&gt;
&lt;br /&gt;
== Command line interface ==&lt;br /&gt;
&lt;br /&gt;
All work is done via the command line interface. A quick guide can be found [https://indico.scc.kit.edu/indico/event/278/material/slides/10.pdf here].&lt;br /&gt;
&lt;br /&gt;
== Software setup ==&lt;br /&gt;
&lt;br /&gt;
Software provided by bwHPC has to be loaded as needed via the [[BwUniCluster_Environment_Modules|software module system]].&lt;br /&gt;
&lt;br /&gt;
== Computation == &lt;br /&gt;
&lt;br /&gt;
Any kind of computation on any HPC cluster is done as [[bwUniCluster Batch Jobs|batch jobs]]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
For setting up your environment to use compilers and installed software please visit:&lt;br /&gt;
* [[BwUniCluster_Environment_Modules|bwUniCluster user environment]]&lt;br /&gt;
For guides on how to submit compute jobs to bwUniCluster please visit:&lt;br /&gt;
* [[bwUniCluster Batch Jobs|bwUniCluster batch jobs]]--&amp;gt;&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/First_Steps&amp;diff=4921</id>
		<title>BwUniCluster2.0/First Steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/First_Steps&amp;diff=4921"/>
		<updated>2017-07-18T12:24:18Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: Created page with &amp;quot;= First steps on HPC Cluster =  == Command line interface ==  Any work is to be done via command line interface. A quick guide can be found [https://indico.scc.kit.edu/indico/...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= First steps on HPC Cluster =&lt;br /&gt;
&lt;br /&gt;
== Command line interface ==&lt;br /&gt;
&lt;br /&gt;
All work is done via the command line interface. A quick guide can be found [https://indico.scc.kit.edu/indico/event/278/material/slides/10.pdf here].&lt;br /&gt;
&lt;br /&gt;
== Software setup ==&lt;br /&gt;
&lt;br /&gt;
Software provided by bwHPC has to be loaded as needed via the [[BwUniCluster_Environment_Modules|software module system]].&lt;br /&gt;
&lt;br /&gt;
== Computation == &lt;br /&gt;
&lt;br /&gt;
Any kind of computation on any HPC cluster is done as [[bwUniCluster Batch Jobs|batch jobs]]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
For setting up your environment to use compilers and installed software please visit:&lt;br /&gt;
* [[BwUniCluster_Environment_Modules|bwUniCluster user environment]]&lt;br /&gt;
For guides on how to submit compute jobs to bwUniCluster please visit:&lt;br /&gt;
* [[bwUniCluster Batch Jobs|bwUniCluster batch jobs]]--&amp;gt;&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=4878</id>
		<title>BwUniCluster 2.0 User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster_2.0_User_Access&amp;diff=4878"/>
		<updated>2017-05-12T12:38:08Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* Step B: Amendment: bwUniCluster questionnaire */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and &lt;br /&gt;
the shareholders:&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Ruprecht-Karls-Universität Heidelberg&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of universities of applied sciences in Baden-Württemberg) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; to [[bwUniCluster]], a user account is required. All members of the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; requires registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster entitlement for registration ==&lt;br /&gt;
Each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for its own members. Details on the issuing process and/or the entitlement application forms are listed hereafter:&lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/zugang-zu-den-ressourcen.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Heidelberg|Ruprecht-Karls-Universität Heidelberg]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* [http://www.kim.uni-hohenheim.de/bwhpc University of Hohenheim]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., once the bwUniCluster entitlement has been issued, please visit:&lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# &amp;lt;div&amp;gt;Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; &amp;lt;br&amp;gt;[[File:bwRegWebApp_avail_services_pic01.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* all non-KIT members must &#039;&#039;&#039;mandatorily&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
*#* KIT members may &#039;&#039;&#039;optionally&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: &amp;lt;span style=&amp;quot;color:red;font-size:100%;font-weight:bold&amp;quot;&amp;gt;Amendment:&amp;lt;/span&amp;gt; bwUniCluster questionnaire ==&lt;br /&gt;
Since June 1st, 2015, usage of bwUniCluster requires the&lt;br /&gt;
questionnaire&lt;br /&gt;
&lt;br /&gt;
   https://www.bwhpc-c5.de/en/ZAS/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
to be answered. The input is used solely for the&lt;br /&gt;
elaboration of future support activities and for the planning of future HPC&lt;br /&gt;
resources. Login to bwUniCluster is only permitted&lt;br /&gt;
to users who have already answered the questionnaire. &#039;&#039;&#039;&#039;&#039;Exception&#039;&#039;&#039;: for the first 14 days after the web registration, login to bwUniCluster is permitted without participation in the questionnaire.&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; to log on matches that of your KIT &lt;br /&gt;
account, while for all non-KIT members your bwUniCluster &#039;&#039;&#039;password&#039;&#039;&#039; is the one you set during the web registration (compare step 7 of chapter 1.2). &lt;br /&gt;
At any time, you can set a new bwUniCluster password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find on the left side &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service (i.e. bwUniCluster) password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button&lt;br /&gt;
# the page confirms with e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;the password for the service has been changed&amp;quot;)&lt;br /&gt;
# proceed to log in with the new password&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==  Contact ==&lt;br /&gt;
If you have questions or problems concerning bwUniCluster registration, please [[Registration_Support_-_bwUniCluster|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
After finishing the web registration, &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login.&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; has two dedicated login nodes. The login node is selected automatically via round-robin scheduling, so a later session might run &lt;br /&gt;
on the other login node.&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;&#039;SSH&#039;&#039;&#039; is allowed for login. Other protocols like &#039;&#039;&#039;telnet&#039;&#039;&#039; or &#039;&#039;&#039;rlogin&#039;&#039;&#039;&lt;br /&gt;
are not allowed for security reasons.&lt;br /&gt;
&lt;br /&gt;
A connection to &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; can be established by the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using &#039;&#039;&#039;OpenSSH&#039;&#039;&#039; (usually installed on Linux based systems) and you want to use a GUI-based application on &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; you should use the &lt;br /&gt;
&#039;&#039;ssh&#039;&#039; command with the option &#039;&#039;-X&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hints for using PuTTY ==&lt;br /&gt;
Start PuTTY. In the &#039;&#039;PuTTY Configuration&#039;&#039; window, under category &#039;&#039;Session&#039;&#039;, fill in the following fields &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host Name       : &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu&lt;br /&gt;
Port            : 22&lt;br /&gt;
Connection type : SSH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and click on &#039;&#039;Open&#039;&#039;, enter your password and accept the &#039;&#039;host key&#039;&#039;. Note that you can also save the configured session via &#039;&#039;Save&#039;&#039; and load it later under the given name. &lt;br /&gt;
&lt;br /&gt;
== Hints for using WinSCP ==&lt;br /&gt;
&lt;br /&gt;
Start WinSCP, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
File Protocol    : SCP&lt;br /&gt;
Host name        : bwunicluster.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
User name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and click on &#039;&#039;Login&#039;&#039; and enter your password. Note that you can also save the configured session via &#039;&#039;Save&#039;&#039; and load it later under the given name.&lt;br /&gt;
&lt;br /&gt;
== About UserID / Username ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;UserID&amp;gt;&#039;&#039;&#039; of the ssh command is a placeholder for your &#039;&#039;&#039;&#039;&#039;username&#039;&#039;&#039;&#039;&#039; at your home &lt;br /&gt;
organization together with a &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
{| width=450px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster are the access point to the compute system and to your bwUniCluster $HOME directory. The login nodes are shared among all users of bwUniCluster. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Permitted activities also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= First steps on bwUniCluster =&lt;br /&gt;
For setting up your environment to use compilers and installed software please visit:&lt;br /&gt;
* [[BwUniCluster_Environment_Modules|bwUniCluster user environment]]&lt;br /&gt;
For guides on how to submit compute jobs to bwUniCluster please visit:&lt;br /&gt;
* [[bwUniCluster Batch Jobs|bwUniCluster batch jobs]]  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave the bwUniCluster, follow this deregistration checklist:&lt;br /&gt;
# Transfer all your data from $HOME and your workspaces to your local computer/storage and afterwards delete all your data on the cluster&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that Step 2 will automatically unsubscribe you from the bwUniCluster mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Access]][[Category:Access|bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4876</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4876"/>
		<updated>2017-05-12T08:16:28Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* msub -q queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=green size=+2&amp;gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | msub Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|  -I&lt;br /&gt;
| &lt;br /&gt;
|  Declares that the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px; color:#00a000&amp;quot;| develop*&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px; color:#00a000&amp;quot; | singlenode*&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px; color:#00a000&amp;quot; | verylong*&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px; color:#36c&amp;quot; | extralong**&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=6:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=6:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=14:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob (Only one job can run at a time)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px; color:#00a000&amp;quot; | multinode*&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broadwell&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=128:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broadwell&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px; color:#b32425&amp;quot; | special**&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broadwell&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px; color:#b32425&amp;quot; | dev_special**&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broadwell&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:10:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#00a000&amp;quot;&amp;gt; *Automatic routing.&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#b32425&amp;quot;&amp;gt;**Only accessible to predefined user groups. &amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#36c&amp;quot;&amp;gt; **Accessible only for a limited period of time, and also only to predefined user groups. &amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
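&lt;br /&gt;
Combining a queue class with an explicit resource list: a (hypothetical) 16-core single-node job running for 24 hours, with the script name &#039;&#039;job.sh&#039;&#039; as a placeholder, would be submitted as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q singlenode -l nodes=1:ppn=16,walltime=24:00:00 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;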
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
== Additional Moab Environments ==&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Moab Environment Variables|common set of MOAB environment variables]] by the following variable:&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|-&lt;br /&gt;
! Environment variable&lt;br /&gt;
! Description&lt;br /&gt;
|- &lt;br /&gt;
| MOAB_SUBMITDIR&lt;br /&gt;
| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
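&lt;br /&gt;
A common use of MOAB_SUBMITDIR, sketched here, is to change into the submission directory at the beginning of a job script so that relative paths behave as they did at submission time (&#039;&#039;my_par_program&#039;&#039; is a placeholder):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# change to the directory from which the job was submitted&lt;br /&gt;
cd $MOAB_SUBMITDIR&lt;br /&gt;
./my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;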
&lt;br /&gt;
== Additional Slurm Environments ==&lt;br /&gt;
Since the workload manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following SLURM environment variables are added to your environment once your job has started:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
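These variables can, for instance, be printed at the start of a job script to verify the resources actually granted (a minimal sketch):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# log the resources granted to this job&lt;br /&gt;
echo &amp;quot;Nodes:         ${SLURM_JOB_NUM_NODES} (${SLURM_JOB_NODELIST})&amp;quot;&lt;br /&gt;
echo &amp;quot;Total procs:   ${SLURM_NPROCS}&amp;quot;&lt;br /&gt;
echo &amp;quot;CPUs per node: ${SLURM_JOB_CPUS_PER_NODE}&amp;quot;&lt;br /&gt;
echo &amp;quot;Mem per node:  ${SLURM_MEM_PER_NODE}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;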
See also:&lt;br /&gt;
* [[Batch_Jobs#Batch_Job_.28Slurm.29_Variables_:_bwUniCluster|List of the most important Slurm environment variables]]&lt;br /&gt;
&lt;br /&gt;
== Interactive Job Monitoring per Node ==&lt;br /&gt;
By default nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh. The ssh access is limited by the set walltime. To obtain the nodes of your job, read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039; you can use the command expandnodes like:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
&lt;br /&gt;
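Using the expanded node list you can then, for example, check the load on each node of your running job (a sketch; assumes the job occupies its nodes exclusively):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for node in $(cat nodelist); do&lt;br /&gt;
    ssh $node uptime&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;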
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi-CPU and multi-core systems. The N-fold spawned processes of the MPI program, i.e. &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039;, containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add the mpirun option &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames. &lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 64 Intel MPI tasks on 4 nodes (16 tasks per node), each requiring 1000 MByte of memory, for 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multi-CPU, multi-core systems. All threads of one process share resources such as memory. In contrast, MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;O&#039;&#039;&#039;pen &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039; a job script to submit a batch job called &#039;&#039;job_impi_omp.sh&#039;&#039;, which runs an Intel MPI program with 8 tasks and ten threads per task (the program &#039;&#039;impi_omp_program&#039;&#039;), requiring 32000 MByte of total physical memory per task and a total wall clock time of 6 hours, looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding domain=omp -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
With the Intel compiler the environment variable KMP_AFFINITY switches on the binding of threads to specific cores. If you run only one MPI task per node, please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the binding between MPI tasks and nodes; it is purely informational. The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular cores; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
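For illustration, switching to the per-socket binding mentioned above would only change the MPIRUN_OPTIONS line of the job script, roughly as follows (a sketch; all other lines stay unchanged):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## one MPI process per socket instead of one per OpenMP domain&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding cell=unit;map=bunch -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;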
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. For a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -I -V -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V exports all environment variables to the compute node of the interactive session.&lt;br /&gt;
After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session, but wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will be automatically logged on to the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In such situations it is recommended to split the work into a job chain: a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple MOAB submitter script to setup        ## &lt;br /&gt;
## a chain of jobs for bwUniCluster             ##&lt;br /&gt;
##################################################&lt;br /&gt;
## ver.  : 2015-09-17, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_link_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_link_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## http://docs.adaptivecomputing.com/suite/8-0/enterprise/help.htm#topics/\&lt;br /&gt;
##    moabWorkloadManager/topics/jobAdministration/jobdependencies.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Differ msub_opt depending on chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      msub_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      ## Attention: do NOT use &#039;-W depend&#039; together with msub&lt;br /&gt;
      msub_opt=&amp;quot;-l depend=${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and msub command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store the job ID for the next iteration (strip empty lines from the msub output)&lt;br /&gt;
   jobID=$(msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;/^$/d&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occurred&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
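The jobscript &#039;&#039;chain_link_job.sh&#039;&#039; referenced above is an ordinary batch script; the only addition is that it can evaluate the variable &#039;&#039;myloop_counter&#039;&#039; passed in via &#039;&#039;msub -v&#039;&#039;. A minimal, purely illustrative sketch (the resource requests and the program name are placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=16&lt;br /&gt;
#MSUB -l walltime=00:30:00&lt;br /&gt;
#MSUB -N chain_link&lt;br /&gt;
&lt;br /&gt;
## chain link number, passed by the submitter script via &#039;msub -v&#039;&lt;br /&gt;
echo &amp;quot;Running chain link number ${myloop_counter:-1}&amp;quot;&lt;br /&gt;
cd ${MOAB_SUBMITDIR}&lt;br /&gt;
## placeholder: continue the computation from the previous link&#039;s results&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;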
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4844</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4844"/>
		<updated>2017-05-09T08:27:02Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* msub -q queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=green size=+2&amp;gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | msub Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|  -I&lt;br /&gt;
| &lt;br /&gt;
|  Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | extralong**&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=128:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | special**&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_special**&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:10:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if not explicitly given with the msub command. Resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
**Only accessible to predefined user groups.&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
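&lt;br /&gt;
* To combine a queue class with explicit resource requests, e.g. a complete thin node for 24 hours, submit (a sketch; &#039;&#039;job.sh&#039;&#039; is a placeholder for your own jobscript):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q singlenode -l nodes=1:ppn=16,walltime=24:00:00 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;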
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
== Additional Moab Environments ==&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Moab Environment Variables|common set of MOAB environment variables]] by the following variable:&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|-&lt;br /&gt;
! Environment variable&lt;br /&gt;
! Description&lt;br /&gt;
|- &lt;br /&gt;
| MOAB_SUBMITDIR&lt;br /&gt;
| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Additional Slurm Environments ==&lt;br /&gt;
Since the workload manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following environment variables of SLURM are added to your environment once your job has started:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
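For illustration, a jobscript can read these variables at runtime, e.g. to print the resources dedicated to the job (a minimal sketch):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
## print the resources SLURM has dedicated to this job&lt;br /&gt;
echo &amp;quot;Nodes           : ${SLURM_JOB_NODELIST}&amp;quot;&lt;br /&gt;
echo &amp;quot;Number of nodes : ${SLURM_JOB_NUM_NODES}&amp;quot;&lt;br /&gt;
echo &amp;quot;Procs per node  : ${SLURM_JOB_CPUS_PER_NODE}&amp;quot;&lt;br /&gt;
echo &amp;quot;Total procs     : ${SLURM_NPROCS}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;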
See also:&lt;br /&gt;
* [[Batch_Jobs#Batch_Job_.28Slurm.29_Variables_:_bwUniCluster|List of the most important Slurm environment variables]]&lt;br /&gt;
&lt;br /&gt;
== Interactive Job Monitoring per Node ==&lt;br /&gt;
By default, nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh. The ssh access is limited by the set walltime. To get the nodes of your job, read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039;, you can use the command expandnodes like:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi-CPU and multi-core systems. N-fold spawned processes of the MPI program, i.e., &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames. &lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 32 Intel MPI tasks on 4 nodes, each task requiring 1000 MByte of memory, with a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Multithreaded + MPI parallel programs run faster than serial programs on multi-CPU systems with multiple cores. All threads of one process share resources such as memory. In contrast, MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a jobscript to submit a batch job called &#039;&#039;job_impi_omp.sh&#039;&#039; that runs the tenfold threaded Intel MPI program &#039;&#039;impi_omp_program&#039;&#039; with 8 tasks, requiring 32000 MByte of total physical memory per task and a total wall clock time of 6 hours, looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding &amp;quot;domain=omp&amp;quot; -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If you run only one MPI task per node, please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the bindings between MPI tasks and nodes (not very beneficial). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular processors; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. For a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -I -V -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V defines that all environment variables are exported to the compute node of the interactive session.&lt;br /&gt;
After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will automatically be logged on to the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will automatically be logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
The CPU time requirements of many applications exceed the walltime limits of the job classes. In such cases it is recommended to split the work into a job chain. A job chain is a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple MOAB submitter script to setup        ## &lt;br /&gt;
## a chain of jobs for bwUniCluster             ##&lt;br /&gt;
##################################################&lt;br /&gt;
## ver.  : 2015-09-17, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_link_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_link_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## http://docs.adaptivecomputing.com/suite/8-0/enterprise/help.htm#topics/\&lt;br /&gt;
##    moabWorkloadManager/topics/jobAdministration/jobdependencies.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Differ msub_opt depending on chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      msub_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      ## Attention: do NOT use &#039;-W depend&#039; together with msub&lt;br /&gt;
      msub_opt=&amp;quot;-l depend=${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and msub command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store job ID for next iteration; strip empty lines from the msub output&lt;br /&gt;
   jobID=$(msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;/^$/d&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occurred&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
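The jobscript &#039;&#039;chain_link_job.sh&#039;&#039; referenced above is not shown here; a minimal sketch of such a chain link, which reads the variable myloop_counter passed by the submitter (the application and file names below are assumptions), could look like:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=16&lt;br /&gt;
#MSUB -l walltime=00:30:00&lt;br /&gt;
#MSUB -N chain_link&lt;br /&gt;
&lt;br /&gt;
## myloop_counter is exported by the submitter script via &#039;msub -v&#039;&lt;br /&gt;
echo &amp;quot;This is chain link number ${myloop_counter}&amp;quot;&lt;br /&gt;
## e.g. continue from the restart file written by the previous link&lt;br /&gt;
./my_simulation --restart restart.$((myloop_counter - 1)) --output restart.${myloop_counter}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;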
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4838</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4838"/>
		<updated>2017-05-08T12:49:49Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* msub -q queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=green size=+2&amp;gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | msub Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|  -I&lt;br /&gt;
| &lt;br /&gt;
|  Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | extralong**&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=128:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | special**&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_special**&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:10:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if not explicitly given with the msub command. Resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
**Only accessible to predefined user groups.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
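&lt;br /&gt;
* To combine a queue class with explicit resource requests, e.g. a complete thin node for 24 hours, submit (a sketch; &#039;&#039;job.sh&#039;&#039; is a placeholder for your own jobscript):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q singlenode -l nodes=1:ppn=16,walltime=24:00:00 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;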
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
== Additional Moab Environments ==&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Moab Environment Variables|common set of MOAB environment variables]] by the following variable:&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|-&lt;br /&gt;
! Environment variable&lt;br /&gt;
! Description&lt;br /&gt;
|- &lt;br /&gt;
| MOAB_SUBMITDIR&lt;br /&gt;
| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Additional Slurm Environments ==&lt;br /&gt;
Since the workload manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following environment variables of SLURM are added to your environment once your job has started:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
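For illustration, a jobscript can read these variables at runtime, e.g. to print the resources dedicated to the job (a minimal sketch):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
## print the resources SLURM has dedicated to this job&lt;br /&gt;
echo &amp;quot;Nodes           : ${SLURM_JOB_NODELIST}&amp;quot;&lt;br /&gt;
echo &amp;quot;Number of nodes : ${SLURM_JOB_NUM_NODES}&amp;quot;&lt;br /&gt;
echo &amp;quot;Procs per node  : ${SLURM_JOB_CPUS_PER_NODE}&amp;quot;&lt;br /&gt;
echo &amp;quot;Total procs     : ${SLURM_NPROCS}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;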
See also:&lt;br /&gt;
* [[Batch_Jobs#Batch_Job_.28Slurm.29_Variables_:_bwUniCluster|List of the most important Slurm environment variables]]&lt;br /&gt;
&lt;br /&gt;
== Interactive Job Monitoring per Node ==&lt;br /&gt;
By default, nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh. The ssh access is limited by the set walltime. To get the nodes of your job, read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039;, you can use the command expandnodes like:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi-CPU and multi-core systems. N-fold spawned processes of the MPI program, i.e., &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames. &lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 32 Intel MPI tasks on 4 nodes, each task requiring 1000 MByte of memory, with a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Multithreaded + MPI parallel programs run faster than serial programs on multi-CPU systems with multiple cores. All threads of one process share resources such as memory. In contrast, MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a jobscript to submit a batch job called &#039;&#039;job_impi_omp.sh&#039;&#039; that runs the tenfold threaded Intel MPI program &#039;&#039;impi_omp_program&#039;&#039; with 8 tasks, requiring 32000 MByte of total physical memory per task and a total wall clock time of 6 hours, looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding &amp;quot;domain=omp&amp;quot; -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If you run only one MPI task per node, please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the bindings between MPI tasks and nodes (not very beneficial). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular processors; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. For a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -I -V -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V defines that all environment variables are exported to the compute node of the interactive session.&lt;br /&gt;
After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will automatically be logged on to the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will automatically be logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
The CPU time requirements of many applications exceed the walltime limits of the job classes. In such cases it is recommended to split the work into a job chain. A job chain is a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple MOAB submitter script to setup        ## &lt;br /&gt;
## a chain of jobs for bwUniCluster             ##&lt;br /&gt;
##################################################&lt;br /&gt;
## ver.  : 2015-09-17, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_link_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_link_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## http://docs.adaptivecomputing.com/suite/8-0/enterprise/help.htm#topics/\&lt;br /&gt;
##    moabWorkloadManager/topics/jobAdministration/jobdependencies.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Differ msub_opt depending on chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      msub_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      ## Attention: do NOT use &#039;-W depend&#039; together with msub&lt;br /&gt;
      msub_opt=&amp;quot;-l depend=${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and msub command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store job ID for next iteration; strip empty lines from the msub output&lt;br /&gt;
   jobID=$(msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;/^$/d&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occurred&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
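Assuming the submitter script above is saved as &#039;&#039;chain_jobs.sh&#039;&#039; (the file name is arbitrary), a chain of e.g. 10 jobs whose links also start if the predecessor did not finish successfully can be submitted via:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ chmod u+x chain_jobs.sh&lt;br /&gt;
$ ./chain_jobs.sh 10 afterany&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;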
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4835</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4835"/>
		<updated>2017-05-08T10:46:08Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* msub -q queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=green size=+2&amp;gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | msub Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|  -I&lt;br /&gt;
| &lt;br /&gt;
|  Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | extralong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=128:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | special&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_special&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:10:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if not explicitly given with the msub command. Resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
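* Combining the queue class with an explicit resource list, a request for a complete fat node for one day could look as follows (&#039;&#039;job_script.sh&#039;&#039; is a placeholder for your own job script; the values fit the &#039;&#039;fat&#039;&#039; queue limits above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q fat -l nodes=1:ppn=32,walltime=1:00:00:00 job_script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;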
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
== Additional Moab Environments ==&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Moab Environment Variables|common set of MOAB environment variables]] by the following variable:&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|-&lt;br /&gt;
! Environment variable&lt;br /&gt;
! Description&lt;br /&gt;
|- &lt;br /&gt;
| MOAB_SUBMITDIR&lt;br /&gt;
| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
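A minimal sketch of using this variable in a job script, e.g. to run a program from the directory the job was submitted from (&#039;&#039;./my_program&#039;&#039; is a placeholder):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
## Change to the directory the job was submitted from&lt;br /&gt;
cd ${MOAB_SUBMITDIR}&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;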
&lt;br /&gt;
== Additional Slurm Environments ==&lt;br /&gt;
Since the workload manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following SLURM environment variables are added to your environment once your job has started:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
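A minimal sketch of using these variables inside a job script, e.g. to log the granted resources at job start:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
## Log the resources granted to this job&lt;br /&gt;
echo &amp;quot;Nodes    : ${SLURM_JOB_NUM_NODES} (${SLURM_JOB_NODELIST})&amp;quot;&lt;br /&gt;
echo &amp;quot;Processes: ${SLURM_NPROCS} in total, ${SLURM_JOB_CPUS_PER_NODE} per node&amp;quot;&lt;br /&gt;
echo &amp;quot;Memory   : ${SLURM_MEM_PER_NODE} MByte per node&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;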
See also:&lt;br /&gt;
* [[Batch_Jobs#Batch_Job_.28Slurm.29_Variables_:_bwUniCluster|List of almost all important Slurm environments]]&lt;br /&gt;
&lt;br /&gt;
== Interactive Job Monitoring per Node ==&lt;br /&gt;
By default, nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh; the ssh access is limited by the set walltime. To get the nodes of your job, you need to read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039;, you can use the command expandnodes:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
&lt;br /&gt;
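Building on this, the following sketch checks the load on every node of an exclusively running job (it assumes that expandnodes prints the node names separated by whitespace):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## Run &#039;uptime&#039; on each node of the job&lt;br /&gt;
## (assumes whitespace-separated expandnodes output)&lt;br /&gt;
for node in $(expandnodes $SLURM_JOB_NODELIST) ; do&lt;br /&gt;
   echo &amp;quot;=== ${node} ===&amp;quot;&lt;br /&gt;
   ssh ${node} uptime&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;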
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi-CPU and multi-core systems. The N-fold spawned processes of the MPI program, i.e. &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options such as &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining the number of processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames. &lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 32 Intel MPI tasks distributed over 4 nodes, each task requiring 1000 MByte of memory, with a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Programs that combine multithreading and MPI parallelism can run faster than serial programs on multi-CPU systems with multiple cores. All threads of one process share resources such as memory; in contrast, MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;O&#039;&#039;&#039;pen &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a job script &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program with 8 tasks, each executing the tenfold-threaded program &#039;&#039;impi_omp_program&#039;&#039;, requiring 32000 MByte of total physical memory per task and a total wall clock time of 6 hours looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding domain=omp -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If you run only one MPI task per node, set KMP_AFFINITY=compact,1,0 instead.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the binding between MPI tasks and nodes (informational only). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular processors; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
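For this socket-wise binding only the MPIRUN_OPTIONS line of the job script above needs to change, e.g. (a sketch; the option value is left unquoted so that it survives the later expansion of ${MPIRUN_OPTIONS}):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding cell=unit;map=bunch -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;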
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. Consider a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours; execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub  -I  -V  -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V ensures that all environment variables of your current session are exported to the compute node of the interactive session.&lt;br /&gt;
After execution of this command, &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will be automatically logged on to the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In such situations it is recommended to split the work into a chain of jobs, i.e. a sequence of jobs in which each job automatically starts its successor. The following submitter script sets up such a chain: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple MOAB submitter script to setup        ## &lt;br /&gt;
## a chain of jobs for bwUniCluster             ##&lt;br /&gt;
##################################################&lt;br /&gt;
## ver.  : 2015-09-17, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_link_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_link_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## http://docs.adaptivecomputing.com/suite/8-0/enterprise/help.htm#topics/\&lt;br /&gt;
##    moabWorkloadManager/topics/jobAdministration/jobdependencies.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Differ msub_opt depending on chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      msub_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      ## Attention: do NOT use &#039;-W depend&#039; together with msub&lt;br /&gt;
      msub_opt=&amp;quot;-l depend=${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and msub command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store job ID for the next iteration; strip empty lines from the msub output&lt;br /&gt;
   jobID=$(msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;/^$/d&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occurred&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
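The submitter script expects a job script &#039;&#039;chain_link_job.sh&#039;&#039; in the current working directory. A minimal sketch of such a chain link (the resource requests and the program call &#039;&#039;./my_program&#039;&#039; are placeholders):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=16&lt;br /&gt;
#MSUB -l walltime=12:00:00&lt;br /&gt;
&lt;br /&gt;
cd ${MOAB_SUBMITDIR}&lt;br /&gt;
## myloop_counter is passed by the submitter via &#039;msub -v&#039;&lt;br /&gt;
echo &amp;quot;Running chain link ${myloop_counter} of the job chain&amp;quot;&lt;br /&gt;
## Placeholder: continue the computation, e.g. from a restart file&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;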
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4834</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4834"/>
		<updated>2017-05-08T10:42:56Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* msub -q queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=green size=+2&amp;gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | msub Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|  -I&lt;br /&gt;
| &lt;br /&gt;
|  Declares that the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviations from or additions to the general [[Batch_Jobs|batch job]] settings.&lt;br /&gt;
&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. The details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | extralong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=128:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | special&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_special&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:10:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
== Additional Moab Environments ==&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Moab Environment Variables|common set of MOAB environment variables]] by the following variable:&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|-&lt;br /&gt;
! Environment variable&lt;br /&gt;
! Description&lt;br /&gt;
|- &lt;br /&gt;
| MOAB_SUBMITDIR&lt;br /&gt;
| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Additional Slurm Environments ==&lt;br /&gt;
Since the workload manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following SLURM environment variables are added to your environment once your job has started:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
See also:&lt;br /&gt;
* [[Batch_Jobs#Batch_Job_.28Slurm.29_Variables_:_bwUniCluster|List of almost all important Slurm environments]]&lt;br /&gt;
&lt;br /&gt;
== Interactive Job Monitoring per Node ==&lt;br /&gt;
By default, nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh; the ssh access is limited by the set walltime. To get the nodes of your job, you need to read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039;, you can use the command expandnodes:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi-CPU and multi-core systems. The N-fold spawned processes of the MPI program, i.e. &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options such as &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining the number of processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames. &lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 32 Intel MPI tasks distributed over 4 nodes, each task requiring 1000 MByte of memory, with a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Programs that combine multithreading and MPI parallelism can run faster than serial programs on multi-CPU systems with multiple cores. All threads of one process share resources such as memory; in contrast, MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;O&#039;&#039;&#039;pen &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a job script &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program with 8 tasks, each executing the tenfold-threaded program &#039;&#039;impi_omp_program&#039;&#039;, requiring 32000 MByte of total physical memory per task and a total wall clock time of 6 hours looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding domain=omp -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If you run only one MPI task per node, set KMP_AFFINITY=compact,1,0 instead.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the binding between MPI tasks and nodes (informational only). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular processors; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. Consider a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours; execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub  -I  -V  -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V ensures that all environment variables of your current session are exported to the compute node of the interactive session.&lt;br /&gt;
After execution of this command, &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will be automatically logged on to the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In such situations it is recommended to split the work into a chain of jobs, i.e. a sequence of jobs in which each job automatically starts its successor. The following submitter script sets up such a chain: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple MOAB submitter script to setup        ## &lt;br /&gt;
## a chain of jobs for bwUniCluster             ##&lt;br /&gt;
##################################################&lt;br /&gt;
## ver.  : 2015-09-17, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_link_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_link_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## http://docs.adaptivecomputing.com/suite/8-0/enterprise/help.htm#topics/\&lt;br /&gt;
##    moabWorkloadManager/topics/jobAdministration/jobdependencies.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Differ msub_opt depending on chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      msub_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      ## Attention: do NOT use &#039;-W depend&#039; together with msub&lt;br /&gt;
      msub_opt=&amp;quot;-l depend=${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and msub command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store job ID for the next iteration; strip empty lines from the msub output&lt;br /&gt;
   jobID=$(msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;/^$/d&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occurred&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4833</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4833"/>
		<updated>2017-05-08T10:41:08Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* msub -q queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=green size=+2&amp;gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | msub Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|  -I&lt;br /&gt;
| &lt;br /&gt;
|  Declares that the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviations from or additions to the general [[Batch_Jobs|batch job]] settings.&lt;br /&gt;
&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. The details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | extralong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=128:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | special&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_special&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:10:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
== Additional Moab Environments ==&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Moab Environment Variables|common set of MOAB environment variables]] by the following variable:&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|-&lt;br /&gt;
! Environment variable&lt;br /&gt;
! Description&lt;br /&gt;
|- &lt;br /&gt;
| MOAB_SUBMITDIR&lt;br /&gt;
| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Additional Slurm Environments ==&lt;br /&gt;
Since the workload manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following SLURM environment variables are added to your environment once your job has started:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
See also:&lt;br /&gt;
* [[Batch_Jobs#Batch_Job_.28Slurm.29_Variables_:_bwUniCluster|List of almost all important Slurm environments]]&lt;br /&gt;
&lt;br /&gt;
== Interactive Job Monitoring per Node ==&lt;br /&gt;
By default, nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh; the ssh access is limited by the set walltime. To get the nodes of your job, you need to read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039;, you can use the command expandnodes:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi-CPU and multi-core systems. The N-fold spawned processes of the MPI program, i.e. &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options such as &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining the number of processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames. &lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 32 Intel MPI tasks distributed over 4 nodes, each task requiring 1000 MByte of memory, with a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Programs that combine multithreading and MPI parallelism can run faster than serial programs on multi-CPU systems with multiple cores. All threads of one process share resources such as memory; in contrast, MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;O&#039;&#039;&#039;pen &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a job script &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program with 8 tasks, each executing the tenfold-threaded program &#039;&#039;impi_omp_program&#039;&#039;, requiring 32000 MByte of total physical memory per task and a total wall clock time of 6 hours looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding domain=omp -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
With the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If you run only one MPI task per node, please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the mapping between MPI tasks and nodes (rarely needed in practice). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular processor domains; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. For a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -I -V -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V defines that all environment variables are exported to the compute node of the interactive session.&lt;br /&gt;
After executing this command, &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session; wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will automatically be logged on to the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In such situations it is recommended to split the computation into a job chain, i.e. a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple MOAB submitter script to setup        ## &lt;br /&gt;
## a chain of jobs for bwUniCluster             ##&lt;br /&gt;
##################################################&lt;br /&gt;
## ver.  : 2015-09-17, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_link_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_link_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## http://docs.adaptivecomputing.com/suite/8-0/enterprise/help.htm#topics/\&lt;br /&gt;
##    moabWorkloadManager/topics/jobAdministration/jobdependencies.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Set msub_opt depending on the chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      msub_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      ## Attention: do NOT use &#039;-W depend&#039; together with msub&lt;br /&gt;
      msub_opt=&amp;quot;-l depend=${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and msub command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store the job ID for the next iteration; the sed call strips empty lines from the msub output&lt;br /&gt;
   jobID=$(msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;/^$/d&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occurred&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
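The submitter above expects the job script &#039;&#039;chain_link_job.sh&#039;&#039; in the working directory. A minimal sketch of such a script is given below; the resource requests and the program name &#039;&#039;my_chain_program&#039;&#039; are placeholders, not a prescribed bwUniCluster workflow.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=16&lt;br /&gt;
#MSUB -l walltime=00:30:00&lt;br /&gt;
#MSUB -N chain_link&lt;br /&gt;
&lt;br /&gt;
## myloop_counter is exported into the job environment by the submitter via &#039;msub -v&#039;&lt;br /&gt;
echo &amp;quot;This is chain link number ${myloop_counter}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## Placeholder restart logic: continue from the output of the previous chain link&lt;br /&gt;
./my_chain_program --restart-from step_$((${myloop_counter}-1)) --output step_${myloop_counter}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;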
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4832</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4832"/>
		<updated>2017-05-08T10:35:32Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* msub -q queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=green size=+2&amp;gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | msub Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|  -I&lt;br /&gt;
| &lt;br /&gt;
|  Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add to your msub command the correct queue class. Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | extralong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=128:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | dev_multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | special&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, number of processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
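&lt;br /&gt;
Queue class and resource list can be combined in a single msub call. As an illustration (the script name &#039;&#039;my_job.sh&#039;&#039; is a placeholder), a 4-day job on one thin node with 16 cores must be submitted to the queue &#039;&#039;verylong&#039;&#039;, since it exceeds the 3-day limit of &#039;&#039;singlenode&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q verylong -l nodes=1:ppn=16,walltime=4:00:00:00 my_job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;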
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
== Additional Moab Environments ==&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Moab Environment Variables|common set of MOAB environment variables]] by the following variable:&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|-&lt;br /&gt;
! Environment variable&lt;br /&gt;
! Description&lt;br /&gt;
|- &lt;br /&gt;
| MOAB_SUBMITDIR&lt;br /&gt;
| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Additional Slurm Environments ==&lt;br /&gt;
Since the workload manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following SLURM environment variables are added to your environment once your job has started:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
See also:&lt;br /&gt;
* [[Batch_Jobs#Batch_Job_.28Slurm.29_Variables_:_bwUniCluster|List of almost all important Slurm environments]]&lt;br /&gt;
&lt;br /&gt;
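As an illustration, a job script can read these variables at runtime, e.g. to log the resources actually granted to the job. A minimal sketch (not a complete job script):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
## Log the resources granted by SLURM for this job&lt;br /&gt;
echo &amp;quot;Nodes    : ${SLURM_JOB_NUM_NODES} (${SLURM_JOB_NODELIST})&amp;quot;&lt;br /&gt;
echo &amp;quot;Processes: ${SLURM_NPROCS} in total, ${SLURM_JOB_CPUS_PER_NODE} per node&amp;quot;&lt;br /&gt;
echo &amp;quot;Memory   : ${SLURM_MEM_PER_NODE} MByte per node&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;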
== Interactive Job Monitoring per Node ==&lt;br /&gt;
By default nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh; the ssh access is limited by the set walltime. To get the nodes of your job, read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039; you can use the command expandnodes like:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
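&lt;br /&gt;
For example, to inspect the CPU load on the first node of a running exclusive job from a login node (assuming the job has written the file &#039;&#039;nodelist&#039;&#039; as above):&lt;br /&gt;
&lt;br /&gt;
  ssh $(head -n 1 nodelist) top -b -n 1 | head -n 15&lt;br /&gt;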
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi-CPU and multi-core systems. The N-fold spawned processes of the MPI program, i.e. &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining the number of processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames.&lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 32 Intel MPI tasks (4 nodes with 16 processes each), each task requiring 1000 MByte of memory, with a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Multithreaded + MPI parallel programs run faster than serial programs on multi-CPU systems with multiple cores. All threads of one process share resources such as memory. In contrast, MPI tasks do not share memory but can be spawned over different nodes.&lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a job script &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program &#039;&#039;impi_omp_program&#039;&#039; with 8 tasks of 10 threads each (4 nodes with 20 cores each, i.e. 80 cores in total), requiring 32000 MByte of physical memory per task (i.e. 3200 MByte per core) and a total wall clock time of 6 hours, looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding domain=omp -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If you use more than one MPI task per node, please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  # additionally prints messages concerning the applied affinity&lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
With the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If you run only one MPI task per node, please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the mapping between MPI tasks and nodes (rarely needed in practice). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular processor domains; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. For a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -I -V -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V defines that all environment variables are exported to the compute node of the interactive session.&lt;br /&gt;
After executing this command, &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session; wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will automatically be logged on to the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In such situations it is recommended to split the computation into a job chain, i.e. a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple MOAB submitter script to setup        ## &lt;br /&gt;
## a chain of jobs for bwUniCluster             ##&lt;br /&gt;
##################################################&lt;br /&gt;
## ver.  : 2015-09-17, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_link_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_link_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## http://docs.adaptivecomputing.com/suite/8-0/enterprise/help.htm#topics/\&lt;br /&gt;
##    moabWorkloadManager/topics/jobAdministration/jobdependencies.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Set msub_opt depending on the chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      msub_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      ## Attention: do NOT use &#039;-W depend&#039; together with msub&lt;br /&gt;
      msub_opt=&amp;quot;-l depend=${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and msub command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store the job ID for the next iteration; the sed call strips empty lines from the msub output&lt;br /&gt;
   jobID=$(msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;/^$/d&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occurred&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
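The submitter above expects the job script &#039;&#039;chain_link_job.sh&#039;&#039; in the working directory. A minimal sketch of such a script is given below; the resource requests and the program name &#039;&#039;my_chain_program&#039;&#039; are placeholders, not a prescribed bwUniCluster workflow.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=16&lt;br /&gt;
#MSUB -l walltime=00:30:00&lt;br /&gt;
#MSUB -N chain_link&lt;br /&gt;
&lt;br /&gt;
## myloop_counter is exported into the job environment by the submitter via &#039;msub -v&#039;&lt;br /&gt;
echo &amp;quot;This is chain link number ${myloop_counter}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## Placeholder restart logic: continue from the output of the previous chain link&lt;br /&gt;
./my_chain_program --restart-from step_$((${myloop_counter}-1)) --output step_${myloop_counter}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;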
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4831</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4831"/>
		<updated>2017-05-08T09:25:37Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* msub -q queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=green size=+2&amp;gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | msub Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|  -I&lt;br /&gt;
| &lt;br /&gt;
|  Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add to your msub command the correct queue class. Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | extralong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot; &lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| broad&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4500mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=128:&#039;&#039;ppn&#039;&#039;=28, &#039;&#039;walltime&#039;&#039;=48:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, number of processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
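&lt;br /&gt;
Queue class and resource list can be combined in a single msub call. As an illustration (the script name &#039;&#039;my_job.sh&#039;&#039; is a placeholder), a 4-day job on one thin node with 16 cores must be submitted to the queue &#039;&#039;verylong&#039;&#039;, since it exceeds the 3-day limit of &#039;&#039;singlenode&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q verylong -l nodes=1:ppn=16,walltime=4:00:00:00 my_job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;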
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
== Additional Moab Environments ==&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Moab Environment Variables|common set of MOAB environment variables]] by the following variable:&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|-&lt;br /&gt;
! Environment variable&lt;br /&gt;
! Description&lt;br /&gt;
|- &lt;br /&gt;
| MOAB_SUBMITDIR&lt;br /&gt;
| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Additional Slurm Environments ==&lt;br /&gt;
Since the workload manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following SLURM environment variables are added to your environment once your job has started:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
See also:&lt;br /&gt;
* [[Batch_Jobs#Batch_Job_.28Slurm.29_Variables_:_bwUniCluster|List of almost all important Slurm environments]]&lt;br /&gt;
&lt;br /&gt;
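As an illustration, a job script can read these variables at runtime, e.g. to log the resources actually granted to the job. A minimal sketch (not a complete job script):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
## Log the resources granted by SLURM for this job&lt;br /&gt;
echo &amp;quot;Nodes    : ${SLURM_JOB_NUM_NODES} (${SLURM_JOB_NODELIST})&amp;quot;&lt;br /&gt;
echo &amp;quot;Processes: ${SLURM_NPROCS} in total, ${SLURM_JOB_CPUS_PER_NODE} per node&amp;quot;&lt;br /&gt;
echo &amp;quot;Memory   : ${SLURM_MEM_PER_NODE} MByte per node&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;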
== Interactive Job Monitoring per Node ==&lt;br /&gt;
By default nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh; the ssh access is limited by the set walltime. To get the nodes of your job, read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039; you can use the command expandnodes like:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
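&lt;br /&gt;
For example, to inspect the CPU load on the first node of a running exclusive job from a login node (assuming the job has written the file &#039;&#039;nodelist&#039;&#039; as above):&lt;br /&gt;
&lt;br /&gt;
  ssh $(head -n 1 nodelist) top -b -n 1 | head -n 15&lt;br /&gt;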
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi-CPU and multi-core systems. The N-fold spawned processes of the MPI program, i.e. &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add mpirun options &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining the number of processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames.&lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 32 Intel MPI tasks (4 nodes with 16 processes each), each task requiring 1000 MByte of memory, with a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Multithreaded + MPI parallel programs run faster than serial programs on multi-CPU systems with multiple cores. All threads of one process share resources such as memory. In contrast, MPI tasks do not share memory but can be spawned over different nodes.&lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a job script &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program &#039;&#039;impi_omp_program&#039;&#039; with 8 tasks of 10 threads each (4 nodes with 20 cores each, i.e. 80 cores in total), requiring 32000 MByte of physical memory per task (i.e. 3200 MByte per core) and a total wall clock time of 6 hours, looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding domain=omp -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If you use more than one MPI task per node, please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  # additionally prints messages concerning the applied affinity&lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
With the Intel compiler, the environment variable KMP_AFFINITY switches on binding of threads to specific cores. If you run only one MPI task per node, please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the mapping between MPI tasks and nodes (rarely needed in practice). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular processor domains; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. For a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -I -V -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V defines that all environment variables are exported to the compute node of the interactive session.&lt;br /&gt;
After executing this command, &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session; wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will automatically be logged on to the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In such situations it is recommended to split the computation into a job chain, i.e. a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple MOAB submitter script to setup        ## &lt;br /&gt;
## a chain of jobs for bwUniCluster             ##&lt;br /&gt;
##################################################&lt;br /&gt;
## ver.  : 2015-09-17, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_link_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_link_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## http://docs.adaptivecomputing.com/suite/8-0/enterprise/help.htm#topics/\&lt;br /&gt;
##    moabWorkloadManager/topics/jobAdministration/jobdependencies.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Set msub_opt depending on the chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      msub_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      ## Attention: do NOT use &#039;-W depend&#039; together with msub&lt;br /&gt;
      msub_opt=&amp;quot;-l depend=${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and msub command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store the job ID for the next iteration; the sed call strips empty lines from the msub output&lt;br /&gt;
   jobID=$(msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;/^$/d&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occurred&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
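The submitter above expects the job script &#039;&#039;chain_link_job.sh&#039;&#039; in the working directory. A minimal sketch of such a script is given below; the resource requests and the program name &#039;&#039;my_chain_program&#039;&#039; are placeholders, not a prescribed bwUniCluster workflow.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=16&lt;br /&gt;
#MSUB -l walltime=00:30:00&lt;br /&gt;
#MSUB -N chain_link&lt;br /&gt;
&lt;br /&gt;
## myloop_counter is exported into the job environment by the submitter via &#039;msub -v&#039;&lt;br /&gt;
echo &amp;quot;This is chain link number ${myloop_counter}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## Placeholder restart logic: continue from the output of the previous chain link&lt;br /&gt;
./my_chain_program --restart-from step_$((${myloop_counter}-1)) --output step_${myloop_counter}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;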
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4830</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=4830"/>
		<updated>2017-05-08T06:52:20Z</updated>

		<summary type="html">&lt;p&gt;S Shamsudeen: /* msub -q queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=green size=+2&amp;gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
=== Interactive Jobs ===&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | msub Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Script&lt;br /&gt;
! Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|  -I&lt;br /&gt;
| &lt;br /&gt;
|  Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add to your msub command the correct queue class. Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, number of processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job for longer than 3 days, please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;; a complete submission example follows below the list. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|fat nodes]], please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
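&lt;br /&gt;
A complete submission combining a queue class with an explicit resource request could look like the following sketch; the job script name &#039;&#039;my_job.sh&#039;&#039; is a placeholder:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q verylong -l nodes=1:ppn=16,walltime=4:00:00:00 my_job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;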
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
== Additional Moab Environments ==&lt;br /&gt;
The bwUniCluster extends the [[Batch_Jobs#Moab Environment Variables|common set of MOAB environment variables]] with the following variable:&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|-&lt;br /&gt;
! Environment variable&lt;br /&gt;
! Description&lt;br /&gt;
|- &lt;br /&gt;
| MOAB_SUBMITDIR&lt;br /&gt;
| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
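&lt;br /&gt;
A typical use of MOAB_SUBMITDIR is to change into the directory of job submission at the start of a job script. A minimal sketch; the program name &#039;&#039;my_program&#039;&#039; is a placeholder:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# change to the directory the job was submitted from&lt;br /&gt;
cd ${MOAB_SUBMITDIR}&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;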
&lt;br /&gt;
== Additional Slurm Environments ==&lt;br /&gt;
Since the workload manager MOAB on [[bwUniCluster]] uses SLURM as its resource manager, the following SLURM environment variables are added to your environment once your job has started:&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
See also:&lt;br /&gt;
* [[Batch_Jobs#Batch_Job_.28Slurm.29_Variables_:_bwUniCluster|List of the most important Slurm environment variables]]&lt;br /&gt;
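&lt;br /&gt;
These variables can, for instance, be printed at the start of a job script to document the dedicated resources; a minimal sketch:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# print the resources SLURM has dedicated to this job&lt;br /&gt;
echo &amp;quot;Nodes:           ${SLURM_JOB_NODELIST}&amp;quot;&lt;br /&gt;
echo &amp;quot;Number of nodes: ${SLURM_JOB_NUM_NODES}&amp;quot;&lt;br /&gt;
echo &amp;quot;CPUs per node:   ${SLURM_JOB_CPUS_PER_NODE}&amp;quot;&lt;br /&gt;
echo &amp;quot;Total processes: ${SLURM_NPROCS}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;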
&lt;br /&gt;
== Interactive Job Monitoring per Node ==&lt;br /&gt;
By default, nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh; the ssh access is limited by the set walltime. To get the nodes of your job, read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039; you can use the command expandnodes:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
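&lt;br /&gt;
Assuming the job has written the expanded list to the file &#039;&#039;nodelist&#039;&#039; as shown above, you can then, for example, check the load on each node from a login node; a minimal sketch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for node in $(cat nodelist) ; do&lt;br /&gt;
   ssh ${node} uptime&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;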
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi-CPU and multi-core systems. The N-fold spawned processes of the MPI program, i.e. &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039;, containing the following lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;&#039;&#039;Attention:&#039;&#039;&#039;&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add the mpirun option &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames. &lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To launch and run 32 Intel MPI tasks on 4 nodes, each requiring 1000 MByte of memory, for a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
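Alternatively, the resource requests can be embedded as #MSUB directives in the wrapper script itself, analogous to the multithreaded example below; a minimal sketch:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00&lt;br /&gt;
#MSUB -N test_impi&lt;br /&gt;
&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This script is then submitted via &#039;&#039;msub -q multinode job_impi.sh&#039;&#039; without further resource options.&lt;br /&gt;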
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Multithreaded MPI parallel programs operate faster than serial programs on multi-CPU systems with multiple cores. All threads of one process share resources such as memory. In contrast, MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a job script &#039;&#039;job_impi_omp.sh&#039;&#039; that runs the Intel MPI program &#039;&#039;impi_omp_program&#039;&#039; with 8 tasks and 10 threads per task, requiring 32000 MByte of total physical memory per task and a total wall clock time of 6 hours, looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding domain=omp -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
# If using more than one MPI task per node, please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY switches on the binding of threads to specific cores. If you run only one MPI task per node, set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the binding between MPI tasks and nodes (informative output only). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular processors; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
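For this alternative, only the MPIRUN_OPTIONS line of the job script above needs to change; a sketch:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## alternative socket-wise binding (one MPI process per socket)&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding cell=unit;map=bunch -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;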
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. Consider a serial application with a graphical frontend that requires 5000 MByte of memory and whose interactive run is limited to 2 hours; execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -I -V -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V ensures that all environment variables of your current session are exported to the compute node of the interactive session.&lt;br /&gt;
After executing this command, &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session; wait until the queueing system MOAB has granted you the requested resources on the compute system. Once they are granted, you will be logged on to the dedicated resource automatically. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In such situations it is recommended to solve the problem with a job chain. A job chain is a sequence of jobs in which each job automatically starts its successor. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple MOAB submitter script to setup        ## &lt;br /&gt;
## a chain of jobs for bwUniCluster             ##&lt;br /&gt;
##################################################&lt;br /&gt;
## ver.  : 2015-09-17, KIT, SCC&lt;br /&gt;
&lt;br /&gt;
## Define maximum number of jobs via positional parameter 1, default is 5&lt;br /&gt;
max_nojob=${1:-5}&lt;br /&gt;
&lt;br /&gt;
## Define your jobscript (e.g. &amp;quot;~/chain_link_job.sh&amp;quot;)&lt;br /&gt;
chain_link_job=${PWD}/chain_link_job.sh&lt;br /&gt;
&lt;br /&gt;
## Define type of dependency via positional parameter 2, default is &#039;afterok&#039;&lt;br /&gt;
dep_type=&amp;quot;${2:-afterok}&amp;quot;&lt;br /&gt;
## -&amp;gt; List of all dependencies:&lt;br /&gt;
## http://docs.adaptivecomputing.com/suite/8-0/enterprise/help.htm#topics/\&lt;br /&gt;
##    moabWorkloadManager/topics/jobAdministration/jobdependencies.html&lt;br /&gt;
&lt;br /&gt;
myloop_counter=1&lt;br /&gt;
## Submit loop&lt;br /&gt;
while [ ${myloop_counter} -le ${max_nojob} ] ; do&lt;br /&gt;
   ##&lt;br /&gt;
   ## Differ msub_opt depending on chain link number&lt;br /&gt;
   if [ ${myloop_counter} -eq 1 ] ; then&lt;br /&gt;
      msub_opt=&amp;quot;&amp;quot;&lt;br /&gt;
   else&lt;br /&gt;
      ## Attention: do NOT use &#039;-W depend&#039; together with msub&lt;br /&gt;
      msub_opt=&amp;quot;-l depend=${dep_type}:${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Print current iteration number and msub command&lt;br /&gt;
   echo &amp;quot;Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
   echo &amp;quot;   msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job}&amp;quot;&lt;br /&gt;
   ## Store the job ID for the next iteration by capturing the output of the msub command and removing empty lines&lt;br /&gt;
   jobID=$(msub -v myloop_counter=${myloop_counter} ${msub_opt} ${chain_link_job} 2&amp;gt;&amp;amp;1 | sed &#039;/^$/d&#039;)&lt;br /&gt;
   ##   &lt;br /&gt;
   ## Check if ERROR occurred&lt;br /&gt;
   if [[ &amp;quot;${jobID}&amp;quot; =~ &amp;quot;ERROR&amp;quot; ]] ; then&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; submission failed!&amp;quot; ; exit 1&lt;br /&gt;
   else&lt;br /&gt;
      echo &amp;quot;   -&amp;gt; job number = ${jobID}&amp;quot;&lt;br /&gt;
   fi&lt;br /&gt;
   ##&lt;br /&gt;
   ## Increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
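The jobscript &#039;&#039;chain_link_job.sh&#039;&#039; referenced above is not shown here; a minimal sketch of such a chain link, which receives its position in the chain via the variable myloop_counter passed by &#039;&#039;msub -v&#039;&#039; (the program name &#039;&#039;my_program&#039;&#039; is a placeholder), could look like:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=1&lt;br /&gt;
#MSUB -l walltime=00:10:00&lt;br /&gt;
#MSUB -N chain_link&lt;br /&gt;
&lt;br /&gt;
## myloop_counter is set by the submitter script via &#039;msub -v&#039;&lt;br /&gt;
echo &amp;quot;This is chain link number ${myloop_counter:-1}&amp;quot;&lt;br /&gt;
cd ${MOAB_SUBMITDIR}&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;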
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>S Shamsudeen</name></author>
	</entry>
</feed>