<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=T+Kienzle</id>
	<title>bwHPC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=T+Kienzle"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/e/Special:Contributions/T_Kienzle"/>
	<updated>2026-05-14T00:47:48Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=6139</id>
		<title>BwForCluster User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=6139"/>
		<updated>2020-03-25T15:37:49Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Issuing bwForCluster entitlement */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The usage of bwForCluster is free of charge. bwForClusters are customized to the requirements of particular research areas. &lt;br /&gt;
Each bwForCluster is/will be financed by the DFG (German Research Foundation) and by the Ministry of Science, Research and Arts of Baden-Württemberg based on a scientific grant proposal (see the proposal guidelines per Art. 91b GG).&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |[[File:Bwforreg.svg|center|border|500px|]] &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;&amp;quot; |&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Granting user access to a bwForCluster requires 3 steps:&lt;br /&gt;
&lt;br /&gt;
# Registration of a new, or joining of an already existing, &#039;&#039;rechenvorhaben (RV)&#039;&#039; at [https://www.bwhpc.de/zas.html &amp;quot;Zentrale Antragsseite (ZAS)&amp;quot;]. An &#039;&#039;RV&#039;&#039; defines the planned compute activities of a group of researchers. As a coworker of this group, you only need to register your membership in the corresponding &#039;&#039;RV&#039;&#039;.&lt;br /&gt;
# Application for a [[#Issuing bwForCluster entitlement | bwForCluster entitlement]] issued by your university.&lt;br /&gt;
# [[#Personal registration at bwForCluster | Personal registration at assigned cluster site ]]  based on approved &#039;&#039;RV&#039;&#039; [[File:Zas assignment icon.svg|25px|]] and issued bwForCluster entitlement [[File:bwfor entitlement icon.svg|25px|]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Steps 1 and 2 are prerequisites for step 3 and can be done in parallel. At which bwForCluster site to do step 3 is decided by the cluster assignment team (CAT) based on the data from step 1, i.e. at &#039;&#039;ZAS&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=  &#039;&#039;RV&#039;&#039; registration at &#039;&#039;ZAS&#039;&#039; =&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;rechenvorhaben (RV)&#039;&#039; registration at &#039;&#039;ZAS&#039;&#039; does &#039;&#039;&#039;not&#039;&#039;&#039; correspond to a typical compute project proposal as for other HPC clusters, since:&lt;br /&gt;
* there is no scientific review process,&lt;br /&gt;
* the registration asks only for brief details,&lt;br /&gt;
* it covers a group of coworkers, and&lt;br /&gt;
* only the &#039;&#039;RV&#039;&#039; responsible must submit the &#039;&#039;RV&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to register? ==&lt;br /&gt;
=== Register a new &#039;&#039;RV&#039;&#039; ===&lt;br /&gt;
Typically only the leader of a scientific work group or the senior scientist of a research group/collaboration needs to log in at&lt;br /&gt;
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_rv_registration.php&lt;br /&gt;
and fill in all mandatory fields of the given web form. Please note that for your convenience you can also switch to the [https://www.bwhpc-c5.de/ZAS/bwforcluster_rechenvorhaben.php German version] of the web form. The submitter of the &#039;&#039;rechenvorhaben (RV)&#039;&#039; will be assigned the role &#039;&#039;RV&#039;&#039; responsible.&lt;br /&gt;
&lt;br /&gt;
The web form consists of the following fields to be filled:&lt;br /&gt;
{| style=&amp;quot;width:100%;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:25%;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;| Field&lt;br /&gt;
! style=&amp;quot;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot; | Explanation&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV Title&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Define a short title for your planned compute activities (maximum of 255 characters including spaces).&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV Description&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Write a short abstract about your planned compute activities (maximum of 2048 characters including spaces). Research groups that contributed to the DFG grant proposal (Art. 91b GG) of the corresponding bwForCluster only need to give reference to that particular proposal.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Scientific field&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Tick one or several scientific fields. Once all bwForClusters are up and running, the full list of scientific fields will be supported and hence applicable for a &#039;&#039;rechenvorhaben (RV)&#039;&#039;. If your &#039;&#039;RV&#039;&#039; does not match any given scientific field, please state your scientific field in the given text box. Grayed-out scientific fields indicate that the corresponding bwForCluster(s) is/are not operational yet.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Field of activity&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Define whether your &#039;&#039;RV&#039;&#039; is primarily for research and/or teaching. If neither is applicable, use the text box.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Parallel paradigm&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State what parallel paradigm your code or application uses. Multiple ticks are allowed. Further information can be provided via text box. If you are not sure about it, please state the software you are using.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Programming language&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the programming language(s) of your code or application.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Numerical methods&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the numerical or &amp;quot;calculation&amp;quot; methods your code or application utilises. If you do not know, write &amp;quot;unknown&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Software packages&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State all software packages you will or want to use for your &#039;&#039;RV&#039;&#039;. Also include compilers, MPI, and numerical libraries (e.g. MKL, FFTW).&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated requested computing resources&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Roughly estimate how many CPU hours are required to finish your &#039;&#039;RV&#039;&#039;. To calculate &amp;quot;CPU hours&amp;quot;, multiply the &amp;quot;elapsed parallel computing time&amp;quot; by the &amp;quot;number of CPU cores&amp;quot; per job. &lt;br /&gt;
&#039;&#039;Example: Your code uses 4 CPU cores and has a wall time of 1h. This makes 4 CPU hours.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Planned maximum number of parallel used CPU cores per job&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Give an upper limit of how many CPU cores per job are required. Please cross-check with your statement about parallel paradigm. &lt;br /&gt;
&#039;&#039;Obviously, a sequential code can only use 1 CPU core per job.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated maximal memory requirements per CPU core&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Give an upper limit of how much RAM per CPU core your jobs require. Please give the value in gigabytes.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated requested persistent disk space for the &#039;&#039;RV&#039;&#039;&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State how much disk space in total you need for your &#039;&#039;RV&#039;&#039;. Please give the value in gigabytes. &lt;br /&gt;
&#039;&#039;Example: If your RV has 4 more coworkers and each of you produces 20 gigabytes of output by the end of the RV, your maximum disk space is 100 gigabytes.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated maximal temporary disk space per job&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State how much space your intermediate files require during a job run. Please give the value in gigabytes. &lt;br /&gt;
&#039;&#039;Example: The final output of your job is 0.1 gigabytes, but during the job 10 temporary files, each with a size of 1 gigabyte, are written to disk; hence the correct answer is 10 gigabytes.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Institute&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the full name of your Institute.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
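The resource estimates requested above reduce to simple multiplications. As a minimal sketch (the helper names are illustrative, and the numbers are the worked examples from the table, not defaults):&lt;br /&gt;

```python
# Back-of-the-envelope estimates for the ZAS web form.
# Helper names are illustrative; the numbers below are the
# worked examples from the table above.

def cpu_hours(cores, walltime_hours):
    """CPU hours = number of CPU cores x elapsed wall time per job."""
    return cores * walltime_hours

def persistent_disk_gb(people, output_per_person_gb):
    """Total persistent disk space for the RV."""
    return people * output_per_person_gb

def temp_disk_gb(n_temp_files, size_per_file_gb):
    """Temporary disk space needed during a single job run."""
    return n_temp_files * size_per_file_gb

print(cpu_hours(4, 1))            # 4 cores for 1 h -> 4 CPU hours
print(persistent_disk_gb(5, 20))  # you + 4 coworkers, 20 GB each -> 100 GB
print(temp_disk_gb(10, 1))        # 10 temporary files of 1 GB each -> 10 GB
```

These are upper-bound estimates; rough values are sufficient for the form.&lt;br /&gt;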
Note that the fields name, first name, organization, mail, and EPPN are auto-filled and cannot be changed. These are your credentials as provided by your home organization.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:110%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt;&lt;br /&gt;
# After submitting you will receive an email from &#039;&#039;ZAS&#039;&#039; confirming your submission. With this email you are given a unique&amp;lt;br&amp;gt;* acronym&amp;lt;br&amp;gt;* password&amp;lt;br&amp;gt;Please keep the password confidential. Only with both acronym and password can [[BwForCluster_User_Access#Register_for_an_RV_as_coworker | coworkers ]] be added to your &#039;&#039;rechenvorhaben&#039;&#039;.&lt;br /&gt;
# The cluster assignment team will be notified immediately about your submission.&lt;br /&gt;
# The cluster assignment team decides, based on your submitted data, which bwForCluster fits best and submits its decision to &#039;&#039;ZAS&#039;&#039;.&lt;br /&gt;
# &#039;&#039;ZAS&#039;&#039; notifies you in an email about your assigned bwForCluster and provides website details for [[BwForCluster_User_Access#Personal_registration_at_bwForCluster | step 3 ]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Register for an &#039;&#039;RV&#039;&#039; as coworker  ===&lt;br /&gt;
&lt;br /&gt;
Each coworker must add herself/himself to a &#039;&#039;rechenvorhaben (RV)&#039;&#039;.&lt;br /&gt;
To become a coworker of an &#039;&#039;RV&#039;&#039;, please log in at&lt;br /&gt;
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_collaboration.php&lt;br /&gt;
and provide the&lt;br /&gt;
* acronym&lt;br /&gt;
* password&lt;br /&gt;
of the &#039;&#039;RV&#039;&#039;. Your &#039;&#039;RV&#039;&#039; responsible will provide you with that information.&lt;br /&gt;
You will be assigned the role &#039;&#039;RV&#039;&#039; member.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:110%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt;&lt;br /&gt;
# After submitting the request for participating in an &#039;&#039;RV&#039;&#039;, the coworker will receive an email from &#039;&#039;ZAS&#039;&#039; about the further steps (i.e. [[#Personal registration at bwForCluster | personal registration at assigned bwForCluster]]). The membership status in the &#039;&#039;RV&#039;&#039; is set active by default.&lt;br /&gt;
# Each person with role &#039;&#039;RV&#039;&#039; responsible or &#039;&#039;RV&#039;&#039; manager will be notified automatically. They can check the list of &#039;&#039;RV&#039;&#039; members at https://www.bwhpc-c5.de/en/ZAS/info_rv.php by clicking on the &#039;&#039;RV&#039;&#039; acronym.&lt;br /&gt;
# The &#039;&#039;RV&#039;&#039; responsible can set any &#039;&#039;RV&#039;&#039; coworker to &#039;&#039;RV&#039;&#039; manager and vice versa as well as deactivate/reactivate any &#039;&#039;RV&#039;&#039; coworker or &#039;&#039;RV&#039;&#039; manager.&lt;br /&gt;
# An &#039;&#039;RV&#039;&#039; manager can deactivate/reactivate any &#039;&#039;RV&#039;&#039; coworker.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== RV extension === &lt;br /&gt;
Any RV is restricted to 1 year duration and the given compute resources during registration. Only the RV responsible can apply for an extension of the RV. The extension can be the duration by another year or the increase of computational resources. In any case, the RV responsible must login at:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:115%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt; &lt;br /&gt;
# After submission the RV responsible will receive an email from ZAS--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Roles in an &#039;&#039;RV&#039;&#039; ==&lt;br /&gt;
{| style=&amp;quot;width:100%;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:25%;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;| Role&lt;br /&gt;
! style=&amp;quot;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot; | Explanation&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV responsible&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Registers the &#039;&#039;rechenvorhaben (RV)&#039;&#039;, i.e. the planned compute activities. Can deactivate but also reactivate &#039;&#039;RV&#039;&#039; managers and coworkers. Can promote coworkers to managers and vice versa.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV manager&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | &#039;&#039;RV&#039;&#039; coworker with elevated rights to deactivate and reactivate other &#039;&#039;RV&#039;&#039; coworkers.  &lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV coworker&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Typical user who has registered her/himself via &#039;&#039;RV&#039;&#039; acronym and password.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Issuing bwForCluster entitlement =&lt;br /&gt;
&lt;br /&gt;
Each university issues the bwForCluster entitlement&lt;br /&gt;
&amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/pre&amp;gt;&lt;br /&gt;
only for their own members. &lt;br /&gt;
&lt;br /&gt;
The bwForCluster entitlement issued by a university assures the operator of a bwForCluster that the compute activities of its members comply with the German Foreign Trade Act (Außenwirtschaftsgesetz - AWG) and the German Foreign Trade Regulations (Außenwirtschaftsverordnung - AWV).&lt;br /&gt;
&lt;br /&gt;
The following universities have already established a process to issue the bwForCluster entitlement:&lt;br /&gt;
&lt;br /&gt;
* [[BwForCluster User Access Members Hochschule Esslingen|Hochschule Esslingen]]&lt;br /&gt;
* [[BwCluster User Access Uni Freiburg|University of Freiburg]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Heidelberg|University of Heidelberg]]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account/ University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Zugangsberechtigung_bwForCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwforcluster-access/ University of Stuttgart]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Tübingen|University of Tübingen]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Ulm|University of Ulm]]&lt;br /&gt;
&lt;br /&gt;
If you do not find your university in the list above, please contact your local service desk.&lt;br /&gt;
&lt;br /&gt;
= Personal registration at bwForCluster  = &lt;br /&gt;
&lt;br /&gt;
Once you have registered your own RV (&#039;&#039;rechenvorhaben&#039;&#039;)&lt;br /&gt;
or a membership in an RV, the cluster assignment team will provide&lt;br /&gt;
you with information about your designated cluster. You will receive&lt;br /&gt;
an email with a link to the website where you can create an account for yourself&lt;br /&gt;
on that cluster.&lt;br /&gt;
&lt;br /&gt;
Available bwForCluster registration servers (service providers):&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;72%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Registration server &lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Chemistry JUSTUS Ulm&lt;br /&gt;
| http://bwidm.rz.uni-ulm.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO (Production and Development)&lt;br /&gt;
| https://bwservices.uni-heidelberg.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster NEMO Freiburg&lt;br /&gt;
| https://bwservices.uni-freiburg.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster BinAC Tübingen&lt;br /&gt;
| https://bwservices.uni-tuebingen.de&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Please note, this step is different from your registration at ZAS, since&lt;br /&gt;
here you register yourself as a person. A user account can only be generated&lt;br /&gt;
based on your personal credentials. However, the resource requests of the RV in ZAS&lt;br /&gt;
will be cross-checked against the resources consumed by you and your RV coworkers&lt;br /&gt;
on this cluster.&lt;br /&gt;
&lt;br /&gt;
After steps 1 and 2 (RV approval and bwForCluster entitlement) please visit the&lt;br /&gt;
&lt;br /&gt;
* bwForCluster &#039;&#039;service provider&#039;&#039; registration website (see table above or email after RV approval):&lt;br /&gt;
*# Select your home organization from the list of organizations and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;.&lt;br /&gt;
*# You will be redirected to the &#039;&#039;Identity Provider&#039;&#039; of your home organization.&lt;br /&gt;
*# Enter your home-organizational user ID (might be user name, email, ...) and password and click &#039;&#039;&#039;Login&#039;&#039;&#039; or &#039;&#039;&#039;Anmelden&#039;&#039;&#039;.&lt;br /&gt;
*# When doing this for the first time you need to accept that your personal data is transferred to the &#039;&#039;service provider&#039;&#039;.&lt;br /&gt;
*# You will be redirected back to the cluster registration website.&lt;br /&gt;
*# Select &#039;&#039;&#039;Service Description&#039;&#039;&#039; of the designated cluster.&lt;br /&gt;
*# Click on &#039;&#039;&#039;Register&#039;&#039;&#039; link to register for this cluster.&lt;br /&gt;
*# Read and accept the terms and conditions of use and click on button &#039;&#039;&#039;Register&#039;&#039;&#039;.&lt;br /&gt;
*# Finally you will receive an email with instructions on how to log in to the cluster (please wait ~15 minutes before trying).&lt;br /&gt;
*# Click on &#039;&#039;&#039;Set Service Password&#039;&#039;&#039; and set a password for the cluster.&lt;br /&gt;
*# &#039;&#039;&#039;Note: Users from Tübingen accessing the BinAC do not set a separate service password&#039;&#039;&#039; and have to use their local account credentials instead.&lt;br /&gt;
*# &#039;&#039;&#039;Note:&#039;&#039;&#039; You can return to the registration website at any time, in order to review your registration details, change/reset your service password or deregister from the service by yourself.&lt;br /&gt;
&lt;br /&gt;
= Login to bwForCluster  = &lt;br /&gt;
&lt;br /&gt;
Personalized details about how to log in to the cluster are included&lt;br /&gt;
in an email sent after registration at the bwForCluster service provider.&lt;br /&gt;
&lt;br /&gt;
General instructions for the bwForCluster login can be found here:&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;73%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Login instructions&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Chemistry JUSTUS Ulm&lt;br /&gt;
| [[BwForCluster_Chemistry_Login|bwForCluster Chemistry Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO Production&lt;br /&gt;
| [[BwForCluster_MLS&amp;amp;WISO_Production_Login|bwForCluster MLS&amp;amp;WISO Production Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO Development&lt;br /&gt;
| [[BwForCluster_MLS&amp;amp;WISO_Development_Login|bwForCluster MLS&amp;amp;WISO Development Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster NEMO Freiburg&lt;br /&gt;
| [[bwForCluster NEMO Login]] &lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster BinAC Tübingen&lt;br /&gt;
| [[bwForCluster BinAC Login]] &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Access|bwForCluster]][[Category:bwForCluster_Chemistry]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]][[Category:bwForCluster_MLS&amp;amp;WISO_Development]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=6138</id>
		<title>BwForCluster User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=6138"/>
		<updated>2020-03-25T15:37:21Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Issuing bwForCluster entitlement */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The usage of bwForCluster is free of charge. bwForClusters are customized to the requirements of particular research areas. &lt;br /&gt;
Each bwForCluster is/will be financed by the DFG (German Research Foundation) and by the Ministry of Science, Research and Arts of Baden-Württemberg based on a scientific grant proposal (see the proposal guidelines per Art. 91b GG).&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |[[File:Bwforreg.svg|center|border|500px|]] &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;&amp;quot; |&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Granting user access to a bwForCluster requires 3 steps:&lt;br /&gt;
&lt;br /&gt;
# Registration of a new, or joining of an already existing, &#039;&#039;rechenvorhaben (RV)&#039;&#039; at [https://www.bwhpc.de/zas.html &amp;quot;Zentrale Antragsseite (ZAS)&amp;quot;]. An &#039;&#039;RV&#039;&#039; defines the planned compute activities of a group of researchers. As a coworker of this group, you only need to register your membership in the corresponding &#039;&#039;RV&#039;&#039;.&lt;br /&gt;
# Application for a [[#Issuing bwForCluster entitlement | bwForCluster entitlement]] issued by your university.&lt;br /&gt;
# [[#Personal registration at bwForCluster | Personal registration at assigned cluster site ]]  based on approved &#039;&#039;RV&#039;&#039; [[File:Zas assignment icon.svg|25px|]] and issued bwForCluster entitlement [[File:bwfor entitlement icon.svg|25px|]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Steps 1 and 2 are prerequisites for step 3 and can be done in parallel. At which bwForCluster site to do step 3 is decided by the cluster assignment team (CAT) based on the data from step 1, i.e. at &#039;&#039;ZAS&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=  &#039;&#039;RV&#039;&#039; registration at &#039;&#039;ZAS&#039;&#039; =&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;rechenvorhaben (RV)&#039;&#039; registration at &#039;&#039;ZAS&#039;&#039; does &#039;&#039;&#039;not&#039;&#039;&#039; correspond to a typical compute project proposal as for other HPC clusters, since:&lt;br /&gt;
* there is no scientific review process,&lt;br /&gt;
* the registration asks only for brief details,&lt;br /&gt;
* it covers a group of coworkers, and&lt;br /&gt;
* only the &#039;&#039;RV&#039;&#039; responsible must submit the &#039;&#039;RV&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to register? ==&lt;br /&gt;
=== Register a new &#039;&#039;RV&#039;&#039; ===&lt;br /&gt;
Typically only the leader of a scientific work group or the senior scientist of a research group/collaboration needs to log in at&lt;br /&gt;
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_rv_registration.php&lt;br /&gt;
and fill in all mandatory fields of the given web form. Please note that for your convenience you can also switch to the [https://www.bwhpc-c5.de/ZAS/bwforcluster_rechenvorhaben.php German version] of the web form. The submitter of the &#039;&#039;rechenvorhaben (RV)&#039;&#039; will be assigned the role &#039;&#039;RV&#039;&#039; responsible.&lt;br /&gt;
&lt;br /&gt;
The web form consists of the following fields to be filled:&lt;br /&gt;
{| style=&amp;quot;width:100%;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:25%;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;| Field&lt;br /&gt;
! style=&amp;quot;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot; | Explanation&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV Title&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Define a short title of your planned compute activities, maximum of 255 characters including spaces.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV Description&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Write a short abstract about your planned compute activities (maximum of 2048 characters including spaces). Research groups that contributed to the DFG grant proposal (Art. 91b GG) of the corresponding bwForCluster only need to give reference to that particular proposal.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Scientific field&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Tick one or several scientific fields. Once all bwForClusters are up and running, the full list of scientific fields will be supported and hence applicable for a &#039;&#039;rechenvorhaben (RV)&#039;&#039;. If your &#039;&#039;RV&#039;&#039; does not match any given scientific field, please state your scientific field in the given text box. Grayed-out scientific fields indicate that the corresponding bwForCluster(s) is/are not operational yet.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Field of activity&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Define whether your &#039;&#039;RV&#039;&#039; is primarily for research and/or teaching. If neither is applicable, use the text box.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Parallel paradigm&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State what parallel paradigm your code or application uses. Multiple ticks are allowed. Further information can be provided via text box. If you are not sure about it, please state the software you are using.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Programming language&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the programming language(s) of your code or application.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Numerical methods&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the numerical or &amp;quot;calculation&amp;quot; methods your code or application utilises. If you do not know, write &amp;quot;unknown&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Software packages&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State all software packages you will or want to use for your &#039;&#039;RV&#039;&#039;. Also include compilers, MPI, and numerical libraries (e.g. MKL, FFTW).&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated requested computing resources&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Roughly estimate how many CPU hours are required to finish your &#039;&#039;RV&#039;&#039;. To calculate &amp;quot;CPU hours&amp;quot;, multiply the &amp;quot;elapsed parallel computing time&amp;quot; by the &amp;quot;number of CPU cores&amp;quot; per job. &lt;br /&gt;
&#039;&#039;Example: Your code uses 4 CPU cores and has a wall time of 1h. This makes 4 CPU hours.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Planned maximum number of parallel used CPU cores per job&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Give an upper limit of how many CPU cores per job are required. Please cross-check with your statement about parallel paradigm. &lt;br /&gt;
&#039;&#039;Obviously, a sequential code can only use 1 CPU core per job.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated maximal memory requirements per CPU core&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Give an upper limit of how much RAM per CPU core your jobs require. Please give the value in gigabytes.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated requested persistent disk space for the &#039;&#039;RV&#039;&#039;&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State how much disk space in total you need for your &#039;&#039;RV&#039;&#039;. Please give the value in gigabytes. &lt;br /&gt;
&#039;&#039;Example: If your RV has 4 more coworkers and each of you produces 20 gigabytes of output by the end of the RV, your maximum disk space is 100 gigabytes.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated maximum temporary disk space per job&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State how much space your intermediate files require during a job run. Please give the value in gigabytes (GB). &lt;br /&gt;
&#039;&#039;Example: The final output of your job is 0.1 gigabytes, but during the job 10 temporary files of 1 gigabyte each are written to disk; hence the correct answer is 10 gigabytes.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Institute&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the full name of your institute.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
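The resource estimates requested by the form follow the simple arithmetic of the worked examples above; a minimal sketch (the function names are illustrative only and are not part of the ZAS form — the form just asks for the final numbers):

```python
def cpu_hours(cores, wall_time_hours):
    # CPU hours = number of CPU cores multiplied by the elapsed wall time
    return cores * wall_time_hours

def persistent_disk_gb(members, output_gb_per_member):
    # Total persistent disk space for the RV: combined output of all members
    return members * output_gb_per_member

# Examples from the table:
# 4 cores running for 1 hour of wall time -> 4 CPU hours
# 5 members (you plus 4 coworkers) writing 20 GB each -> 100 GB
```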
Note that the fields name, first name, organization, mail, and EPPN are auto-filled and cannot be changed. These are your credentials as provided by your home organization.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:110%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt;&lt;br /&gt;
# After submitting you will receive an email from &#039;&#039;ZAS&#039;&#039; confirming your submission. This email contains a unique&amp;lt;br&amp;gt;* acronym&amp;lt;br&amp;gt;* password&amp;lt;br&amp;gt;Please keep the password safe. Only with both acronym and password can [[BwForCluster_User_Access#Register_for_an_RV_as_coworker | coworkers ]] be added to your &#039;&#039;rechenvorhaben&#039;&#039;.&lt;br /&gt;
# The cluster assignment team will be notified immediately about your submission.&lt;br /&gt;
# The cluster assignment team decides, based on your submitted data, which bwForCluster fits best and submits its decision to &#039;&#039;ZAS&#039;&#039;.&lt;br /&gt;
# &#039;&#039;ZAS&#039;&#039; notifies you in an email about your assigned bwForCluster and provides website details for [[BwForCluster_User_Access#Personal_registration_at_bwForCluster | step 3 ]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Register for an &#039;&#039;RV&#039;&#039; as coworker  ===&lt;br /&gt;
&lt;br /&gt;
Each coworker must add herself/himself to a &#039;&#039;rechenvorhaben (RV)&#039;&#039;.&lt;br /&gt;
To become a coworker of an &#039;&#039;RV&#039;&#039;, please log in at&lt;br /&gt;
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_collaboration.php&lt;br /&gt;
and provide the&lt;br /&gt;
* acronym&lt;br /&gt;
* password&lt;br /&gt;
of the &#039;&#039;RV&#039;&#039;. Your &#039;&#039;RV&#039;&#039; responsible will provide you with that information.&lt;br /&gt;
You will be assigned the role of &#039;&#039;RV&#039;&#039; coworker.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:110%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt;&lt;br /&gt;
# After submitting the request for participating in an &#039;&#039;RV&#039;&#039;, the coworker will receive an email from &#039;&#039;ZAS&#039;&#039; about the further steps (i.e. [[#Personal registration at bwForCluster | personal registration at assigned bwForCluster]]). The membership status in the &#039;&#039;RV&#039;&#039; is set active by default.&lt;br /&gt;
# Each person with role &#039;&#039;RV&#039;&#039; responsible or &#039;&#039;RV&#039;&#039; manager will be notified automatically. They can check the list of &#039;&#039;RV&#039;&#039; members at https://www.bwhpc-c5.de/en/ZAS/info_rv.php by clicking on the &#039;&#039;RV&#039;&#039; acronym.&lt;br /&gt;
# The &#039;&#039;RV&#039;&#039; responsible can set any &#039;&#039;RV&#039;&#039; coworker to &#039;&#039;RV&#039;&#039; manager and vice versa as well as deactivate/reactivate any &#039;&#039;RV&#039;&#039; coworker or &#039;&#039;RV&#039;&#039; manager.&lt;br /&gt;
# An &#039;&#039;RV&#039;&#039; manager can deactivate/reactivate any &#039;&#039;RV&#039;&#039; coworker.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== RV extension === &lt;br /&gt;
Any RV is restricted to 1 year duration and the given compute resources during registration. Only the RV responsible can apply for an extension of the RV. The extension can be the duration by another year or the increase of computational resources. In any case, the RV responsible must login at:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:115%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt; &lt;br /&gt;
# After submission the RV responsible will receive an email from ZAS--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Roles in an &#039;&#039;RV&#039;&#039; ==&lt;br /&gt;
{| style=&amp;quot;width:100%;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:25%;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;| Role&lt;br /&gt;
! style=&amp;quot;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot; | Explanation&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV responsible&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Registers the &#039;&#039;rechenvorhaben (RV)&#039;&#039;, i.e. the planned compute activities. Can deactivate but also reactivate &#039;&#039;RV&#039;&#039; managers and coworkers. Can promote coworkers to managers and vice versa.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV manager&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | &#039;&#039;RV&#039;&#039; coworker with elevated rights to deactivate and reactivate other &#039;&#039;RV&#039;&#039; coworkers.  &lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV coworker&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Typical user who has registered her/himself via &#039;&#039;RV&#039;&#039; acronym and password.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Issuing bwForCluster entitlement =&lt;br /&gt;
&lt;br /&gt;
Each university issues the bwForCluster entitlement&lt;br /&gt;
&amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/pre&amp;gt;&lt;br /&gt;
only to its own members. &lt;br /&gt;
&lt;br /&gt;
The bwForCluster entitlement issued by a university assures the operator of a bwForCluster that the compute activities of that university&#039;s members comply with the German Foreign Trade Act (Außenwirtschaftsgesetz - AWG) and the German Foreign Trade Regulations (Außenwirtschaftsverordnung - AWV).&lt;br /&gt;
&lt;br /&gt;
The following universities have already established a process to issue the bwForCluster entitlement:&lt;br /&gt;
&lt;br /&gt;
* [[BwForCluster User Access Members Hochschule Esslingen|Hochschule Esslingen]]&lt;br /&gt;
* [[BwCluster User Access Uni Freiburg|University of Freiburg]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Heidelberg|University of Heidelberg]]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account/ University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Zugangsberechtigung_bwForCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwforcluster-access/ University of Stuttgart]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Tübingen|University of Tübingen]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Ulm|University of Ulm]]&lt;br /&gt;
&lt;br /&gt;
If you do not find your university in the list above, please contact your local service desk.&lt;br /&gt;
&lt;br /&gt;
= Personal registration at bwForCluster  = &lt;br /&gt;
&lt;br /&gt;
Once you have registered your own RV (&#039;&#039;rechenvorhaben&#039;&#039;)&lt;br /&gt;
or your membership in an RV, the cluster assignment team will provide&lt;br /&gt;
you with information about your designated cluster. You will receive&lt;br /&gt;
an email with a link to the website where you can create an account for yourself&lt;br /&gt;
on that cluster.&lt;br /&gt;
&lt;br /&gt;
Available bwForCluster registration servers (service providers):&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;72%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Registration server &lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Chemistry JUSTUS Ulm&lt;br /&gt;
| http://bwidm.rz.uni-ulm.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO (Production and Development)&lt;br /&gt;
| https://bwservices.uni-heidelberg.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster NEMO Freiburg&lt;br /&gt;
| https://bwservices.uni-freiburg.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster BinAC Tübingen&lt;br /&gt;
| https://bwservices.uni-tuebingen.de&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Please note, this step differs from your registration at ZAS:&lt;br /&gt;
here you register yourself as a person. A user account can only be&lt;br /&gt;
created based on your personal credentials. However, the resource requests of the RV in ZAS&lt;br /&gt;
will be cross-checked against the resources consumed by you and your RV coworkers&lt;br /&gt;
on this cluster.&lt;br /&gt;
&lt;br /&gt;
After steps 1 and 2 (RV approval and bwForCluster entitlement), please visit the&lt;br /&gt;
&lt;br /&gt;
* bwForCluster &#039;&#039;service provider&#039;&#039; registration website (see table above or the email after RV approval):&lt;br /&gt;
*# Select your home organization from the list of organizations and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;.&lt;br /&gt;
*# You will be redirected to the &#039;&#039;Identity Provider&#039;&#039; of your home organization.&lt;br /&gt;
*# Enter your home-organizational user ID (might be user name, email, ...) and password and click &#039;&#039;&#039;Login&#039;&#039;&#039; or &#039;&#039;&#039;Anmelden&#039;&#039;&#039;.&lt;br /&gt;
*# When doing this for the first time you need to accept that your personal data is transferred to the &#039;&#039;service provider&#039;&#039;.&lt;br /&gt;
*# You will be redirected back to the cluster registration website.&lt;br /&gt;
*# Select &#039;&#039;&#039;Service Description&#039;&#039;&#039; of the designated cluster.&lt;br /&gt;
*# Click the &#039;&#039;&#039;Register&#039;&#039;&#039; link to register for this cluster.&lt;br /&gt;
*# Read and accept the terms and conditions of use and click the &#039;&#039;&#039;Register&#039;&#039;&#039; button.&lt;br /&gt;
*# Finally you will receive an email with instructions on how to log in to the cluster (please wait ~15 minutes before trying).&lt;br /&gt;
*# Click on &#039;&#039;&#039;Set Service Password&#039;&#039;&#039; and set a password for the cluster.&lt;br /&gt;
*# &#039;&#039;&#039;Note: Users from Tübingen accessing the BinAC do not set a separate service password&#039;&#039;&#039; and have to use their local account credentials instead.&lt;br /&gt;
*# &#039;&#039;&#039;Note:&#039;&#039;&#039; You can return to the registration website at any time, in order to review your registration details, change/reset your service password or deregister from the service by yourself.&lt;br /&gt;
&lt;br /&gt;
= Login to bwForCluster  = &lt;br /&gt;
&lt;br /&gt;
Personalized details about how to log in to the cluster are included&lt;br /&gt;
in an email sent after registration at the bwForCluster service provider.&lt;br /&gt;
&lt;br /&gt;
General instructions for the bwForCluster login can be found here:&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;73%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Login instructions&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Chemistry JUSTUS Ulm&lt;br /&gt;
| [[BwForCluster_Chemistry_Login|bwForCluster Chemistry Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO Production&lt;br /&gt;
| [[BwForCluster_MLS&amp;amp;WISO_Production_Login|bwForCluster MLS&amp;amp;WISO Production Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO Development&lt;br /&gt;
| [[BwForCluster_MLS&amp;amp;WISO_Development_Login|bwForCluster MLS&amp;amp;WISO Development Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster NEMO Freiburg&lt;br /&gt;
| [[bwForCluster NEMO Login]] &lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster BinAC Tübingen&lt;br /&gt;
| [[bwForCluster BinAC Login]] &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Access|bwForCluster]][[Category:bwForCluster_Chemistry]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]][[Category:bwForCluster_MLS&amp;amp;WISO_Development]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=6137</id>
		<title>BwForCluster User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=6137"/>
		<updated>2020-03-25T15:36:49Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Issuing bwForCluster entitlement */ update hohenheim link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The usage of bwForCluster is free of charge. bwForClusters are customized to the requirements of particular research areas. &lt;br /&gt;
Each bwForCluster is/will be financed by the DFG (German Research Foundation) and by the Ministry of Science, Research and Arts of Baden-Württemberg based on a scientific grant proposal (compare the proposal guidelines as per Art. 91b GG).&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |[[File:Bwforreg.svg|center|border|500px|]] &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;&amp;quot; |&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Granting user access to a bwForCluster requires 3 steps:&lt;br /&gt;
&lt;br /&gt;
# Registration of a new or joining an already existing &#039;&#039;rechenvorhaben (RV)&#039;&#039;  at [https://www.bwhpc.de/zas.html &amp;quot;Zentrale Antragsseite (ZAS)&amp;quot;]. An &#039;&#039;RV&#039;&#039; defines the planned compute activities of a group of researchers. As coworker of this group you only need to register your membership in the corresponding &#039;&#039;RV&#039;&#039;.&lt;br /&gt;
# Application for a [[#Issuing bwForCluster entitlement | bwForCluster entitlement]] issued by your university.&lt;br /&gt;
# [[#Personal registration at bwForCluster | Personal registration at assigned cluster site ]]  based on approved &#039;&#039;RV&#039;&#039; [[File:Zas assignment icon.svg|25px|]] and issued bwForCluster entitlement [[File:bwfor entitlement icon.svg|25px|]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Steps 1 and 2 are prerequisites for step 3. Moreover, steps 1 and 2 can be done in parallel. The question of which bwForCluster site to use for step 3 will be answered by the cluster assignment team (CAT) based on the data from step 1, i.e. at &#039;&#039;ZAS&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=  &#039;&#039;RV&#039;&#039; registration at &#039;&#039;ZAS&#039;&#039; =&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;rechenvorhaben (RV)&#039;&#039; registration at &#039;&#039;ZAS&#039;&#039; does &#039;&#039;&#039;not&#039;&#039;&#039; correspond to typical compute project proposals as for other HPC clusters, since:&lt;br /&gt;
* there is no scientific reviewing process&lt;br /&gt;
* the registration requires only brief details,&lt;br /&gt;
* it covers a group of coworkers, and&lt;br /&gt;
* only the &#039;&#039;RV&#039;&#039; responsible must submit the &#039;&#039;RV&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to register? ==&lt;br /&gt;
=== Register a new &#039;&#039;RV&#039;&#039; ===&lt;br /&gt;
Typically only the leader of a scientific work group or the senior scientist of a research group/collaboration needs to log in at&lt;br /&gt;
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_rv_registration.php&lt;br /&gt;
and to fill in all mandatory fields of the given web form. Please note that for your convenience you can also switch to the [https://www.bwhpc-c5.de/ZAS/bwforcluster_rechenvorhaben.php German version] of the web form. The submitter of the &#039;&#039;rechenvorhaben (RV)&#039;&#039; will be assigned the role &#039;&#039;RV&#039;&#039; responsible.&lt;br /&gt;
&lt;br /&gt;
The web form consists of the following fields to be filled:&lt;br /&gt;
{| style=&amp;quot;width:100%;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:25%;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;| Field&lt;br /&gt;
! style=&amp;quot;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot; | Explanation&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV Title&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Define a short title of your planned compute activities, maximum of 255 characters including spaces.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV Description&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Write a short abstract about your planned compute activities (maximum of 2048 characters including spaces). Research groups that contributed to the DFG grant proposal (Art. 91b GG) of the corresponding bwForCluster only need to give reference to that particular proposal.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Scientific field&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Tick one or several scientific fields. Once all bwForClusters are up and running, the full list of scientific fields will be supported and hence applicable for a &#039;&#039;rechenvorhaben (RV)&#039;&#039;. If your &#039;&#039;RV&#039;&#039; does not match any given scientific field, please state your scientific field in the given text box. Grayed-out scientific fields indicate that the corresponding bwForCluster(s) is/are not operational yet.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Field of activity&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Define if your &#039;&#039;RV&#039;&#039; is primarily for research and/or teaching. If not applicable, use text box.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Parallel paradigm&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State what parallel paradigm your code or application uses. Multiple ticks are allowed. Further information can be provided via text box. If you are not sure about it, please state the software you are using.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Programming language&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the programming language(s) of your code or application.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Numerical methods&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the numerical or &amp;quot;calculation&amp;quot; methods your code or application utilises. If you do not know, write &amp;quot;unknown&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Software packages&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State all software packages you will use or want to use for your &#039;&#039;RV&#039;&#039;. Also include compilers, MPI, and numerical packages (e.g. MKL, FFTW).&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated requested computing resources&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Roughly estimate how many CPU hours are required to finish your &#039;&#039;RV&#039;&#039;. To calculate &amp;quot;CPU hours&amp;quot;, multiply the &amp;quot;elapsed parallel computing time&amp;quot; by the &amp;quot;number of CPU cores&amp;quot; per job. &lt;br /&gt;
&#039;&#039;Example: Your code uses 4 CPU cores and runs for a wall time of 1 hour. This makes 4 CPU hours.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Planned maximum number of parallel used CPU cores per job&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Give an upper limit on how many CPU cores per job are required. Please cross-check this with your statement about the parallel paradigm. &lt;br /&gt;
&#039;&#039;Obviously, a sequential code can only use 1 CPU core per job.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated maximum memory requirements per CPU core&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Give an upper limit on how much RAM per CPU core your jobs require. Please give the value in gigabytes (GB).&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated requested persistent disk space for the &#039;&#039;RV&#039;&#039;&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State how much disk space in total you need for your &#039;&#039;RV&#039;&#039;. Please give the value in gigabytes (GB). &lt;br /&gt;
&#039;&#039;Example: If your RV has 4 more coworkers and each of you produces 20 gigabytes of output by the end of the RV, your maximum disk space is 100 gigabytes.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated maximum temporary disk space per job&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State how much space your intermediate files require during a job run. Please give the value in gigabytes (GB). &lt;br /&gt;
&#039;&#039;Example: The final output of your job is 0.1 gigabytes, but during the job 10 temporary files of 1 gigabyte each are written to disk; hence the correct answer is 10 gigabytes.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Institute&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the full name of your institute.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Note that the fields name, first name, organization, mail, and EPPN are auto-filled and cannot be changed. These are your credentials as provided by your home organization.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:110%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt;&lt;br /&gt;
# After submitting you will receive an email from &#039;&#039;ZAS&#039;&#039; confirming your submission. This email contains a unique&amp;lt;br&amp;gt;* acronym&amp;lt;br&amp;gt;* password&amp;lt;br&amp;gt;Please keep the password safe. Only with both acronym and password can [[BwForCluster_User_Access#Register_for_an_RV_as_coworker | coworkers ]] be added to your &#039;&#039;rechenvorhaben&#039;&#039;.&lt;br /&gt;
# The cluster assignment team will be notified immediately about your submission.&lt;br /&gt;
# The cluster assignment team decides, based on your submitted data, which bwForCluster fits best and submits its decision to &#039;&#039;ZAS&#039;&#039;.&lt;br /&gt;
# &#039;&#039;ZAS&#039;&#039; notifies you in an email about your assigned bwForCluster and provides website details for [[BwForCluster_User_Access#Personal_registration_at_bwForCluster | step 3 ]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Register for an &#039;&#039;RV&#039;&#039; as coworker  ===&lt;br /&gt;
&lt;br /&gt;
Each coworker must add herself/himself to a &#039;&#039;rechenvorhaben (RV)&#039;&#039;.&lt;br /&gt;
To become a coworker of an &#039;&#039;RV&#039;&#039;, please log in at&lt;br /&gt;
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_collaboration.php&lt;br /&gt;
and provide the&lt;br /&gt;
* acronym&lt;br /&gt;
* password&lt;br /&gt;
of the &#039;&#039;RV&#039;&#039;. Your &#039;&#039;RV&#039;&#039; responsible will provide you with that information.&lt;br /&gt;
You will be assigned the role of &#039;&#039;RV&#039;&#039; coworker.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:110%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt;&lt;br /&gt;
# After submitting the request for participating in an &#039;&#039;RV&#039;&#039;, the coworker will receive an email from &#039;&#039;ZAS&#039;&#039; about the further steps (i.e. [[#Personal registration at bwForCluster | personal registration at assigned bwForCluster]]). The membership status in the &#039;&#039;RV&#039;&#039; is set active by default.&lt;br /&gt;
# Each person with role &#039;&#039;RV&#039;&#039; responsible or &#039;&#039;RV&#039;&#039; manager will be notified automatically. They can check the list of &#039;&#039;RV&#039;&#039; members at https://www.bwhpc-c5.de/en/ZAS/info_rv.php by clicking on the &#039;&#039;RV&#039;&#039; acronym.&lt;br /&gt;
# The &#039;&#039;RV&#039;&#039; responsible can set any &#039;&#039;RV&#039;&#039; coworker to &#039;&#039;RV&#039;&#039; manager and vice versa as well as deactivate/reactivate any &#039;&#039;RV&#039;&#039; coworker or &#039;&#039;RV&#039;&#039; manager.&lt;br /&gt;
# An &#039;&#039;RV&#039;&#039; manager can deactivate/reactivate any &#039;&#039;RV&#039;&#039; coworker.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== RV extension === &lt;br /&gt;
Any RV is restricted to 1 year duration and the given compute resources during registration. Only the RV responsible can apply for an extension of the RV. The extension can be the duration by another year or the increase of computational resources. In any case, the RV responsible must login at:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:115%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt; &lt;br /&gt;
# After submission the RV responsible will receive an email from ZAS--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Roles in an &#039;&#039;RV&#039;&#039; ==&lt;br /&gt;
{| style=&amp;quot;width:100%;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:25%;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;| Role&lt;br /&gt;
! style=&amp;quot;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot; | Explanation&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV responsible&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Registers the &#039;&#039;rechenvorhaben (RV)&#039;&#039;, i.e. the planned compute activities. Can deactivate but also reactivate &#039;&#039;RV&#039;&#039; managers and coworkers. Can promote coworkers to managers and vice versa.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV manager&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | &#039;&#039;RV&#039;&#039; coworker with elevated rights to deactivate and reactivate other &#039;&#039;RV&#039;&#039; coworkers.  &lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV coworker&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Typical user who has registered her/himself via &#039;&#039;RV&#039;&#039; acronym and password.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Issuing bwForCluster entitlement =&lt;br /&gt;
&lt;br /&gt;
Each university issues the bwForCluster entitlement&lt;br /&gt;
&amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/pre&amp;gt;&lt;br /&gt;
only to its own members. &lt;br /&gt;
&lt;br /&gt;
The bwForCluster entitlement issued by a university assures the operator of a bwForCluster that the compute activities of that university&#039;s members comply with the German Foreign Trade Act (Außenwirtschaftsgesetz - AWG) and the German Foreign Trade Regulations (Außenwirtschaftsverordnung - AWV).&lt;br /&gt;
&lt;br /&gt;
The following universities have already established a process to issue the bwForCluster entitlement:&lt;br /&gt;
&lt;br /&gt;
* [[BwForCluster User Access Members Hochschule Esslingen|Hochschule Esslingen]]&lt;br /&gt;
* [[BwCluster User Access Uni Freiburg|University of Freiburg]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Heidelberg|University of Heidelberg]]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Zugangsberechtigung_bwForCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwforcluster-access/ University of Stuttgart]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Tübingen|University of Tübingen]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Ulm|University of Ulm]]&lt;br /&gt;
&lt;br /&gt;
If you do not find your university in the list above, please contact your local service desk.&lt;br /&gt;
&lt;br /&gt;
= Personal registration at bwForCluster  = &lt;br /&gt;
&lt;br /&gt;
Once you have registered your own RV (&#039;&#039;rechenvorhaben&#039;&#039;)&lt;br /&gt;
or your membership in an RV, the cluster assignment team will provide&lt;br /&gt;
you with information about your designated cluster. You will receive&lt;br /&gt;
an email with a link to the website where you can create an account for yourself&lt;br /&gt;
on that cluster.&lt;br /&gt;
&lt;br /&gt;
Available bwForCluster registration servers (service providers):&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;72%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Registration server &lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Chemistry JUSTUS Ulm&lt;br /&gt;
| http://bwidm.rz.uni-ulm.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO (Production and Development)&lt;br /&gt;
| https://bwservices.uni-heidelberg.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster NEMO Freiburg&lt;br /&gt;
| https://bwservices.uni-freiburg.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster BinAC Tübingen&lt;br /&gt;
| https://bwservices.uni-tuebingen.de&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Please note, this step differs from your registration at ZAS:&lt;br /&gt;
here you register yourself as a person. A user account can only be&lt;br /&gt;
created based on your personal credentials. However, the resource requests of the RV in ZAS&lt;br /&gt;
will be cross-checked against the resources consumed by you and your RV coworkers&lt;br /&gt;
on this cluster.&lt;br /&gt;
&lt;br /&gt;
After steps 1 and 2 (RV approval and bwForCluster entitlement), please visit the&lt;br /&gt;
&lt;br /&gt;
* bwForCluster &#039;&#039;service provider&#039;&#039; registration website (see table above or the email after RV approval):&lt;br /&gt;
*# Select your home organization from the list of organizations and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;.&lt;br /&gt;
*# You will be redirected to the &#039;&#039;Identity Provider&#039;&#039; of your home organization.&lt;br /&gt;
*# Enter your home-organizational user ID (might be user name, email, ...) and password and click &#039;&#039;&#039;Login&#039;&#039;&#039; or &#039;&#039;&#039;Anmelden&#039;&#039;&#039;.&lt;br /&gt;
*# When doing this for the first time you need to accept that your personal data is transferred to the &#039;&#039;service provider&#039;&#039;.&lt;br /&gt;
*# You will be redirected back to the cluster registration website.&lt;br /&gt;
*# Select &#039;&#039;&#039;Service Description&#039;&#039;&#039; of the designated cluster.&lt;br /&gt;
*# Click on the &#039;&#039;&#039;Register&#039;&#039;&#039; link to register for this cluster.&lt;br /&gt;
*# Read and accept the terms and conditions of use and click the &#039;&#039;&#039;Register&#039;&#039;&#039; button.&lt;br /&gt;
*# Finally, you will receive an email with instructions on how to log in to the cluster (please wait ~15 minutes before trying).&lt;br /&gt;
*# Click on &#039;&#039;&#039;Set Service Password&#039;&#039;&#039; and set a password for the cluster.&lt;br /&gt;
*# &#039;&#039;&#039;Note: Users from Tübingen accessing the BinAC do not set a separate service password&#039;&#039;&#039; and have to use their local account credentials instead.&lt;br /&gt;
*# &#039;&#039;&#039;Note:&#039;&#039;&#039; You can return to the registration website at any time, in order to review your registration details, change/reset your service password or deregister from the service by yourself.&lt;br /&gt;
&lt;br /&gt;
= Login to bwForCluster  = &lt;br /&gt;
&lt;br /&gt;
Personalized details about how to log in to the cluster are included&lt;br /&gt;
in an email sent after registration at the bwForCluster service provider.&lt;br /&gt;
&lt;br /&gt;
General instructions for the bwForCluster login can be found here:&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;73%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Login instructions&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Chemistry JUSTUS Ulm&lt;br /&gt;
| [[BwForCluster_Chemistry_Login|bwForCluster Chemistry Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO Production&lt;br /&gt;
| [[BwForCluster_MLS&amp;amp;WISO_Production_Login|bwForCluster MLS&amp;amp;WISO Production Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO Development&lt;br /&gt;
| [[BwForCluster_MLS&amp;amp;WISO_Development_Login|bwForCluster MLS&amp;amp;WISO Development Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster NEMO Freiburg&lt;br /&gt;
| [[bwForCluster NEMO Login]] &lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster BinAC Tübingen&lt;br /&gt;
| [[bwForCluster BinAC Login]] &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
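&lt;br /&gt;
Regardless of the cluster, login is generally done via SSH. As a sketch, assuming a hypothetical user ID &#039;&#039;ab_cd12&#039;&#039; and login node &#039;&#039;login.cluster.example.com&#039;&#039; (your actual values are stated in the registration email and the cluster-specific login instructions):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh ab_cd12@login.cluster.example.com&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;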
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Access|bwForCluster]][[Category:bwForCluster_Chemistry]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]][[Category:bwForCluster_MLS&amp;amp;WISO_Development]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Software/Singularity_Containers&amp;diff=5795</id>
		<title>User:M Janczyk/Software/Singularity Containers</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Software/Singularity_Containers&amp;diff=5795"/>
		<updated>2019-07-31T08:26:43Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
Singularity is open-source container virtualization software. Because not every software configuration can be provided as [[Environment Modules|Modules]] on the clusters, containers offer a way to use pre-built scientific software in a closed and reproducible space, independent of the host environment. A Singularity container contains its own operating system, the intended software and all required dependencies. This also means that you can use software that isn&#039;t available for RHEL/CentOS but is offered for other Linux systems. Singularity containers are easily movable between systems and do not require root privileges for execution (unlike Docker). &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, a user may build a software package from a library or from source on their own computer, move it to the server (for example with scp) and execute it. This works as long as Singularity is installed on both systems, without having to deal with the environment on the cluster. &amp;lt;br&amp;gt;&lt;br /&gt;
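&lt;br /&gt;
As a sketch, transferring a locally built container (here in directory format, with hypothetical names) to a cluster might look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scp -r mycontainer/ username@login.cluster.example.com:/path/to/workspace/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;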
&lt;br /&gt;
The container generally works as its own closed-off environment. While you can access data and files stored on the host, you generally do not have access to software or modules installed there. This means that software that would otherwise be provided by a module usually has to be installed inside the container.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following tutorial gives a brief introduction to the program to create and run a container on a cluster. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Building containers can be tricky. If you need help, please don&#039;t hesitate to [https://wiki.bwhpc.de/e/Category:Support contact us].&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Requirements to build a Container ==&lt;br /&gt;
&lt;br /&gt;
Singularity requires a Linux system. If no Linux computer is available, a virtual machine with a Linux OS can be used. Singularity does &#039;&#039;not&#039;&#039; work on the Windows Subsystem for Linux (WSL). &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
First, install Singularity 3 from source by following the instructions on the [https://sylabs.io/guides/3.0/user-guide/installation.html# official page]. Singularity 3 also requires the Go programming language to be installed. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Singularity has to be installed on every system that uses the container; on the cluster it first has to be loaded via an appropriate module. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building a Container ==&lt;br /&gt;
&lt;br /&gt;
The command to build a new container is &#039;&#039;singularity build&#039;&#039;. A container that should remain writable after building can be created with &#039;&#039;singularity build --sandbox&#039;&#039;. A specific home directory can be defined with &#039;&#039;singularity build --home /your/home/path/&#039;&#039;. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most build operations require root privileges. If a container has to be built from scratch, this can be done manually in a writable shell or through a definition file. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Singularity 3 supports two container formats: a Singularity Image File (&#039;&#039;.sif&#039;&#039;) and a directory. The following examples use directories.&lt;br /&gt;
&lt;br /&gt;
=== Building an existing container ===&lt;br /&gt;
&lt;br /&gt;
An existing container from Docker (&#039;&#039;docker://&#039;&#039;) or a container library (&#039;&#039;library://&#039;&#039; or &#039;&#039;shub://&#039;&#039;) can simply be imported as is. A writable sandbox container can be created with the &#039;&#039;--sandbox&#039;&#039; flag: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo singularity build &amp;lt;containername&amp;gt; library://path/to/file&lt;br /&gt;
$ sudo singularity build --sandbox &amp;lt;containername&amp;gt; library://path/to/file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Building a new container with a definition file ===&lt;br /&gt;
&lt;br /&gt;
A definition file (called a recipe in older guides) provides the program with a script from which to build the container. The same definition file reliably produces the same container, so using definition files is encouraged for reproducibility whenever possible. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The file has to contain a &#039;&#039;&#039;header&#039;&#039;&#039;, specifying the operating system and its source. If one wants to use CentOS, for example: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Bootstrap: library&lt;br /&gt;
From: centos&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After this, various sections with specifications can follow in any order. The most important are &#039;&#039;&#039;%post&#039;&#039;&#039; and &#039;&#039;&#039;%environment&#039;&#039;&#039;. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;%post&#039;&#039;&#039; constitutes the main part of the file: a set of bash instructions executed in order to build the container. For example, the process might start with an update and the installation of standard tools required later on: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%post&lt;br /&gt;
	yum -y update&lt;br /&gt;
	yum -y install wget tar&lt;br /&gt;
	cd /&lt;br /&gt;
	mkdir example&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Keep in mind that the container contains almost no pre-installed packages. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;%environment&#039;&#039;&#039; can be used to set environment variables such as paths, for example: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%environment&lt;br /&gt;
	export PKG_CONFIG_PATH=/usr/bin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Entries in &#039;&#039;&#039;%environment&#039;&#039;&#039; are written to &#039;&#039;/.singularity.d/env/90-environment.sh&#039;&#039;, which is sourced at container runtime. In &#039;&#039;%post&#039;&#039;, commands to be run when the container starts can be appended to that file or to &#039;&#039;/.singularity.d/env/91-environment.sh&#039;&#039;. This can be useful if no &#039;~/.bashrc&#039; or similar is available. &amp;lt;br&amp;gt;&lt;br /&gt;
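&lt;br /&gt;
For example, a variable can be made available at container startup by appending to this file in &#039;&#039;%post&#039;&#039; (the variable name and value are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%post&lt;br /&gt;
	echo &#039;export EXAMPLE_HOME=/example&#039; &amp;gt;&amp;gt; /.singularity.d/env/91-environment.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;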
&lt;br /&gt;
Some other options for the definition file are &#039;&#039;&#039;%runscript&#039;&#039;&#039;, which contains a script executed by the &#039;&#039;run&#039;&#039; command, &#039;&#039;&#039;%files&#039;&#039;&#039;, which can be used to copy files from the host system into the container, and &#039;&#039;&#039;%labels&#039;&#039;&#039; and &#039;&#039;&#039;%help&#039;&#039;&#039; to provide the user of the container with authorship metadata and a help text, respectively. More options can be found on the [https://sylabs.io/guides/3.0/user-guide/definition_files.html Singularity homepage]. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
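Putting these sections together, a minimal complete definition file might look like this (the copied file and the label values are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Bootstrap: library&lt;br /&gt;
From: centos&lt;br /&gt;
&lt;br /&gt;
%files&lt;br /&gt;
	./mydata.txt /opt/mydata.txt&lt;br /&gt;
&lt;br /&gt;
%post&lt;br /&gt;
	yum -y update&lt;br /&gt;
	yum -y install wget tar&lt;br /&gt;
&lt;br /&gt;
%environment&lt;br /&gt;
	export PKG_CONFIG_PATH=/usr/bin&lt;br /&gt;
&lt;br /&gt;
%labels&lt;br /&gt;
	Author Jane Doe&lt;br /&gt;
&lt;br /&gt;
%help&lt;br /&gt;
	Example container for demonstration purposes.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;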
The build is then invoked with: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo singularity build &amp;lt;containername&amp;gt; definition-file.def&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Building a new container in the Shell ===&lt;br /&gt;
&lt;br /&gt;
If no definition file is available, a container can be built by manually executing in a shell the same commands that would otherwise go into the definition file. Effectively, this approach is the same as building an existing container, just with the intention of writing into it immediately. To do so, a container with the desired OS has to be built as a sandbox and opened as a writable shell: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo singularity build --sandbox &amp;lt;containername&amp;gt; library://centos&lt;br /&gt;
$ sudo singularity shell --writable &amp;lt;containername&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shell already behaves like its own OS. Note that a base container with only an OS contains almost no pre-installed packages, and that some paths (e.g. &#039;&#039;~/&#039;&#039;) may still reference the host system. In general, &#039;&#039;cd /&#039;&#039; takes you to the root directory of the container. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum -y update&lt;br /&gt;
$ yum -y install wget tar&lt;br /&gt;
$ cd /&lt;br /&gt;
$ mkdir example&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using a container ==&lt;br /&gt;
&lt;br /&gt;
To use a container on a cluster, an available Singularity-module has to be loaded. For example: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load your/singularity/version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Keep in mind that other modules you may have loaded (that aren&#039;t needed for running the container itself) will &#039;&#039;not&#039;&#039; be available inside the container. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are three main ways to interact with a regular container: the already familiar &#039;&#039;shell&#039;&#039;, as well as &#039;&#039;exec&#039;&#039; and &#039;&#039;run&#039;&#039;. By default, all Singularity containers are read-only. To make them writable, &#039;&#039;--writable&#039;&#039; has to be specified in the command. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, the container can be opened as a shell: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity shell &amp;lt;containername&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the cluster, the easiest way to run one-line commands (including commands to run other files) is passing them to the container through &#039;&#039;exec&#039;&#039;: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity exec --writable &amp;lt;containername&amp;gt; yum -y update&lt;br /&gt;
$ singularity exec &amp;lt;containername&amp;gt; script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Exec&#039;&#039; is generally the easiest way to execute scripts. Without further specification, it runs the container from the current directory, making it a good option for workspaces, which Singularity might otherwise struggle with when run from &#039;&#039;$HOME&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The third option, &#039;&#039;run&#039;&#039;, executes the &#039;&#039;&#039;%runscript&#039;&#039;&#039; that was provided in the definition file during the build process: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity run &amp;lt;containername&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is useful, for example, to start an installed program.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
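&lt;br /&gt;
As a sketch, a &#039;&#039;&#039;%runscript&#039;&#039;&#039; section in the definition file that starts a hypothetical installed program could look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%runscript&lt;br /&gt;
	echo &amp;quot;Starting example program...&amp;quot;&lt;br /&gt;
	exec /example/my_program &amp;quot;$@&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;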
&lt;br /&gt;
=== Containers and Batch Jobs ===&lt;br /&gt;
[[Batch Jobs]] utilizing Singularity containers are generally built the same way as all other batch jobs, where the job script contains a &#039;&#039;singularity exec&#039;&#039; command. For example: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:gpu&lt;br /&gt;
#MSUB -l walltime=3:00:00&lt;br /&gt;
#MSUB -l mem=5000mb&lt;br /&gt;
#MSUB -N Singularity&lt;br /&gt;
&lt;br /&gt;
module load your/singularity/version&lt;br /&gt;
cd your/workspace&lt;br /&gt;
singularity exec --nv &amp;lt;containername&amp;gt; script1.sh&lt;br /&gt;
singularity exec --nv &amp;lt;containername&amp;gt; python script2.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using GPUs ===&lt;br /&gt;
&lt;br /&gt;
Containers can run on GPUs as well. First, make sure that all necessary libraries (for example CUDA) are installed in the container itself and that the appropriate drivers are available on the host system. To interact with GPUs, both &#039;&#039;exec&#039;&#039; and &#039;&#039;shell&#039;&#039; require the additional flag &#039;&#039;--nv&#039;&#039; after the command. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity exec --nv &amp;lt;containername&amp;gt; script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Concluding notes ==&lt;br /&gt;
&lt;br /&gt;
* Keep track of the commands you enter when using a (writable) shell, since they are not recorded in a definition file.&lt;br /&gt;
* Modules loaded outside the container don&#039;t work on the inside.&lt;br /&gt;
* Use &#039;&#039;singularity exec&#039;&#039; for batch jobs.&lt;br /&gt;
* Use &#039;&#039;singularity exec --nv&#039;&#039; for GPUs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Scientific_Applications]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Software/Singularity_Containers&amp;diff=5793</id>
		<title>User:M Janczyk/Software/Singularity Containers</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Software/Singularity_Containers&amp;diff=5793"/>
		<updated>2019-07-31T08:21:41Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: some minor changes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
Singularity is an open-source software for container virtualization. Because not every software configuration can be provided as [[Environment Modules|Modules]] on the clusters, containers offer a way to use pre-built scientific software in a closed and reproducible space, independent of the environment. A Singularity container contains its own operating system, the intended software and all required dependencies. This also means that you can use software that isn&#039;t available for RHEL/CentOS, but is offered for other Linux systems. Singularity containers are easily movable between systems and, unlike Docker, do not require root privileges for execution. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, a user may build a software package from a library or from source on their own computer, move it to the server (for example with scp) and execute it. This works as long as Singularity is installed on both systems, without having to deal with the environment on the cluster. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The container generally works as its own closed-off environment. While you can access data and files stored on the host, you generally do not have access to software or modules running there. This means that software which would otherwise be provided by a module usually has to be installed inside the container.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following tutorial gives a brief introduction to creating and running a container on a cluster. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Requirements to build a Container ==&lt;br /&gt;
&lt;br /&gt;
Singularity requires a Linux system. If no Linux computer is available, a virtual machine with a Linux OS can be used. Singularity does &#039;&#039;not&#039;&#039; work on the Windows Subsystem for Linux (WSL). &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
First, install Singularity 3 from source by following the instructions on the [https://sylabs.io/guides/3.0/user-guide/installation.html# official page]. Singularity 3 also requires the installation of the programming language Go. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Singularity has to be installed on all systems that use the container; on the cluster it first has to be loaded with an appropriate module. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Building a Container ==&lt;br /&gt;
&lt;br /&gt;
The command to build a new container is &#039;&#039;singularity build&#039;&#039;. A container that should remain writable after building can be created with &#039;&#039;singularity build --sandbox&#039;&#039;. A specific home directory can be defined by using &#039;&#039;singularity build --home /your/home/path/&#039;&#039;.  &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most building operations require root privileges. If a container has to be built from scratch, this can be done manually in a writable shell or through a definition file. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Singularity 3 supports two container formats: the Singularity Image File (&#039;&#039;.sif&#039;&#039;) and the directory. The following examples use directories.&lt;br /&gt;
&lt;br /&gt;
=== Building an existing container ===&lt;br /&gt;
&lt;br /&gt;
An existing container from Docker (&#039;&#039;docker://&#039;&#039;) or a container library (&#039;&#039;library://&#039;&#039; or &#039;&#039;shub://&#039;&#039;) can simply be imported as is and built with Singularity. A writable sandbox container is created by adding the &#039;&#039;--sandbox&#039;&#039; flag: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo singularity build &amp;lt;containername&amp;gt; library://path/to/file&lt;br /&gt;
$ sudo singularity build --sandbox &amp;lt;containername&amp;gt; library://path/to/file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Building a new container with a definition file ===&lt;br /&gt;
&lt;br /&gt;
A definition file, called a recipe in older guides, provides the program with a script to build the container from. The same definition-file always reliably produces the same container, so definition-files should be used for reproducibility whenever possible. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The file has to contain a &#039;&#039;&#039;header&#039;&#039;&#039;, specifying the operating system and its source. If one wants to use CentOS, for example: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Bootstrap: library&lt;br /&gt;
From: centos&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After this, various sections with specifications can follow in any order. The most important are &#039;&#039;&#039;%post&#039;&#039;&#039; and &#039;&#039;&#039;%environment&#039;&#039;&#039;. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;%post&#039;&#039;&#039; constitutes the main part of the file, with a set of bash-instructions used to build the container in order. For example, the process might start with an update and the installation of standard tools required later on: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%post&lt;br /&gt;
	yum -y update&lt;br /&gt;
	yum -y install wget tar&lt;br /&gt;
	cd /&lt;br /&gt;
	mkdir example&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Keep in mind that the container contains almost no pre-installed packages. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;%environment&#039;&#039;&#039; can be used to set environment-variables such as paths, for example: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%environment&lt;br /&gt;
	export PKG_CONFIG_PATH=/usr/bin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Inputs into &#039;&#039;&#039;%environment&#039;&#039;&#039; from the definition-file are written to &#039;&#039;/.singularity.d/env/90-environment.sh&#039;&#039;, which is sourced at runtime. In &#039;&#039;%post&#039;&#039;, commands for starting the container can be written to that file or to &#039;&#039;/.singularity.d/env/91-environment.sh&#039;&#039;. This can be useful if no &#039;~/.bashrc&#039; or similar is available. &amp;lt;br&amp;gt;&lt;br /&gt;
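&lt;br /&gt;
For example, a minimal sketch (the variable name and path &#039;&#039;/opt/mytool&#039;&#039; are illustrative placeholders, not from an actual module): appending an export line to that file in &#039;&#039;%post&#039;&#039; makes the variable available every time the container starts. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%post&lt;br /&gt;
	echo &#039;export MY_TOOL_HOME=/opt/mytool&#039; &amp;gt;&amp;gt; /.singularity.d/env/91-environment.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;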
&lt;br /&gt;
Some other options for the definition-file are &#039;&#039;&#039;%runscript&#039;&#039;&#039;, which can include a script to be executed at the start of the &#039;&#039;run&#039;&#039; command, &#039;&#039;&#039;%files&#039;&#039;&#039;, which can be used to copy files from the host-system into the container, and &#039;&#039;&#039;%labels&#039;&#039;&#039; and &#039;&#039;&#039;%help&#039;&#039;&#039; to provide the user of the container with credentials and a metadata help-file, respectively. More options can be found on the [https://sylabs.io/guides/3.0/user-guide/definition_files.html Singularity homepage]. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The file is invoked with &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo singularity build &amp;lt;containername&amp;gt; definition-file.def&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Building a new container in the Shell ===&lt;br /&gt;
&lt;br /&gt;
If no definition-file is available, a container can be built by manually executing in a shell the same commands that would be put in the definition file. Effectively, this approach is the same as building an existing container, just with the intention to write into it immediately. To do so, a container with the desired OS has to be built as a sandbox and opened as a writable shell: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo singularity build --sandbox &amp;lt;containername&amp;gt; library://centos&lt;br /&gt;
$ sudo singularity shell --writable &amp;lt;containername&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shell already behaves like its own OS. Note that if you took a base-container with only an OS, it contains almost no pre-installed packages, and that some paths (e.g. &#039;&#039;~/&#039;&#039;) may still reference the host-system. In general, &#039;&#039;cd /&#039;&#039; takes you to the root directory of the container. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum -y update&lt;br /&gt;
$ yum -y install wget tar&lt;br /&gt;
$ cd /&lt;br /&gt;
$ mkdir example&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using a container ==&lt;br /&gt;
&lt;br /&gt;
To use a container on a cluster, an available Singularity-module has to be loaded. For example: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load your/singularity/version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Keep in mind that other modules you may have loaded (that aren&#039;t needed for running the container itself) will &#039;&#039;not&#039;&#039; be available inside the container. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are three key ways to run services on a regular container, the already known &#039;&#039;shell&#039;&#039;, as well as &#039;&#039;exec&#039;&#039; and &#039;&#039;run&#039;&#039;.  By default, all Singularity containers are read-only. To make them writable, &#039;&#039;--writable&#039;&#039; has to be specified in the command. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, the container can be opened as a shell: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity shell &amp;lt;containername&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the cluster, the easiest way to run one-line commands (including commands to run other files) is passing them to the container through &#039;&#039;exec&#039;&#039;: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity exec --writable &amp;lt;containername&amp;gt; yum -y update&lt;br /&gt;
$ singularity exec &amp;lt;containername&amp;gt; script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Exec&#039;&#039; is generally the easiest way to execute scripts. Without further specification, it runs the container in the current directory, which makes it a good option for workspaces; Singularity might otherwise struggle with these when run from &#039;&#039;$HOME&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The third option, &#039;&#039;run&#039;&#039;, executes a &#039;&#039;&#039;%runscript&#039;&#039;&#039; that was provided in the definition file during the build-process: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity run &amp;lt;containername&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is useful, for example, to start an installed program.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Containers and Batch Jobs ===&lt;br /&gt;
[[Batch Jobs]] utilizing Singularity containers are generally built the same way as all other batch jobs, where the job script contains a &#039;&#039;singularity exec&#039;&#039; command. For example: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:gpu&lt;br /&gt;
#MSUB -l walltime=3:00:00&lt;br /&gt;
#MSUB -l mem=5000mb&lt;br /&gt;
#MSUB -N Singularity&lt;br /&gt;
&lt;br /&gt;
module load your/singularity/version&lt;br /&gt;
cd your/workspace&lt;br /&gt;
singularity exec --nv &amp;lt;containername&amp;gt; script1.sh&lt;br /&gt;
singularity exec --nv &amp;lt;containername&amp;gt; python script2.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using GPUs ===&lt;br /&gt;
&lt;br /&gt;
Containers can run on GPUs as well. First, make sure that all necessary libraries (for example CUDA) are installed inside the container and that the appropriate drivers are available on the host system. To interact with GPUs, both &#039;&#039;exec&#039;&#039; and &#039;&#039;shell&#039;&#039; require the additional flag &#039;&#039;--nv&#039;&#039; directly after the command. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ singularity exec --nv &amp;lt;containername&amp;gt; script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Concluding notes ==&lt;br /&gt;
&lt;br /&gt;
* Keep track of your inputs when using a (writable) shell.&lt;br /&gt;
* Modules loaded outside the container don&#039;t work on the inside.&lt;br /&gt;
* Use &#039;&#039;singularity exec&#039;&#039; for batch jobs.&lt;br /&gt;
* Use &#039;&#039;singularity exec --nv&#039;&#039; for GPUs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Scientific_Applications]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Visualization&amp;diff=5242</id>
		<title>JUSTUS2/Visualization</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Visualization&amp;diff=5242"/>
		<updated>2017-11-29T08:19:41Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| vis/tigervnc&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]]&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| GPL&lt;br /&gt;
|-&lt;br /&gt;
| Citing &lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.tigervnc.org/ TigerVNC Homepage]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
= Introduction to TigerVNC =&lt;br /&gt;
&#039;&#039;&#039;TigerVNC&#039;&#039;&#039; is a high-performance implementation of VNC (Virtual Network Computing), a client/server application that allows &lt;br /&gt;
users to launch and interact with graphical applications on remote machines. It should be faster than standard X11 forwarding and can&lt;br /&gt;
thus be used if a graphical application feels slow or unresponsive.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/vis/tigervnc&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=180&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Start =&lt;br /&gt;
First, you need to start the VNC server. The exact startup mechanism depends on the cluster.&lt;br /&gt;
* [[Start VNC Server - bwUniCluster]]&lt;br /&gt;
* [[Start VNC Server - bwForCluster Chemistry]]&lt;br /&gt;
* [[Start VNC Server - bwForCluster Chemistry - 3D Acceleration]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Login =&lt;br /&gt;
The startup script of the VNC server should print detailed instructions on how to establish the connection to the VNC server from your local computer. They depend on whether you&lt;br /&gt;
use Windows or Linux and whether you work with the TurboVNC Java Viewer, a tool that simplifies the process somewhat but needs the Java Development Kit (JDK) to run. The next steps are therefore&lt;br /&gt;
divided into 3 cases. Each command should be issued on the local computer.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Login with TurboVNC Java Viewer&#039;&#039;&#039; &amp;lt;br /&amp;gt;Needed Software: [http://turbovnc.org/ TurboVNC 2.0], JDK&amp;lt;br /&amp;gt; Open the TurboVNC Java Viewer. Go to Options... -&amp;gt; Security -&amp;gt; Gateway and fill in the parameters provided by the &#039;&#039;run_vncserver&#039;&#039; script. You can save these settings in the &amp;quot;Global&amp;quot; tab if you want to. Now you can click &amp;quot;OK&amp;quot;, enter the VNC server and connect. You will be prompted for your SSH password and your VNC password, after which the connection is established.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Login without TurboVNC Java Viewer for Linux users&#039;&#039;&#039;&amp;lt;br /&amp;gt;Needed Software: A VNC viewer such as tigervnc or turbovnc&amp;lt;br /&amp;gt;A tunnel must be created with the ssh command given by the &#039;&#039;run_vncserver&#039;&#039; script. Open a new terminal, start a VNC viewer and connect to localhost:n, where n is the display number printed by &#039;&#039;run_vncserver&#039;&#039;, using a command like this:&amp;lt;br /&amp;gt;&amp;lt;pre&amp;gt;$ vncviewer localhost:1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Login without TurboVNC Java Viewer for Windows users&#039;&#039;&#039;&amp;lt;br /&amp;gt;Needed Software: [https://sourceforge.net/projects/tigervnc/files/tigervnc/1.3.0 tigervnc], [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&amp;lt;br /&amp;gt;Start Putty and go to Connection -&amp;gt; SSH -&amp;gt; Tunnels. Fill in the parameters provided by &#039;&#039;run_vncserver&#039;&#039;. After clicking &amp;quot;Add&amp;quot;, navigate to Session and connect to the bwUniCluster with your username and password. Once the connection is established, start the tigervnc client and connect to localhost:n, where n is the display number printed by &#039;&#039;run_vncserver&#039;&#039;.&lt;br /&gt;
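&lt;br /&gt;
As a sketch for the Linux case, the tunnel and viewer commands typically have the following form (hostname, port and display number here are placeholders; always use the exact values printed by &#039;&#039;run_vncserver&#039;&#039;): &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -L 5901:localhost:5901 username@cluster.example.com&lt;br /&gt;
$ vncviewer localhost:1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;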
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Shutdown =&lt;br /&gt;
To exit your VNC session it is not sufficient to only close the viewer window, because this will not terminate the VNC server. The server will keep running and you will run into problems&lt;br /&gt;
when you try to start a new VNC session later on. Please use the &amp;quot;log out&amp;quot; function of the desktop environment inside the VNC session; this terminates the server properly.&lt;br /&gt;
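&lt;br /&gt;
If a server was nevertheless left running, it can usually be terminated from a shell on the machine it runs on (the display number &#039;&#039;:1&#039;&#039; is an example; use the number printed at startup): &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vncserver -kill :1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;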
[[Category:visualization]][[Category:bwUniCluster]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Starting OpenGL programs =&lt;br /&gt;
Starting an OpenGL program in a 3D accelerated VNC session results in an error message. To fix this issue and redirect OpenGL commands to the graphics card on the cluster, use the vglrun command as in the following example:&lt;br /&gt;
 $ vglrun glxgears -info&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5241</id>
		<title>Data Transfer</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5241"/>
		<updated>2017-11-29T08:10:20Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;H1&amp;gt;Using SFTP from Unix client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; sftp  ka_xy1234@bwfilestorage.lsdf.kit.edu&lt;br /&gt;
Connecting to bwfilestorage.lsdf.kit.edu&amp;lt;br&amp;gt;&lt;br /&gt;
ka_xy1234@bwfilestorage.lsdf.kit.edu&#039;s password: &lt;br /&gt;
sftp&amp;gt; ls&lt;br /&gt;
snapshots&lt;br /&gt;
temp test&lt;br /&gt;
sftp&amp;gt; help&lt;br /&gt;
...&lt;br /&gt;
sftp&amp;gt; put myfile&lt;br /&gt;
sftp&amp;gt; get myfile&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H1&amp;gt;Using SFTP from Windows and Mac client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows clients do not have an SCP/SFTP client installed by default, so one needs to be installed before this protocol can be used. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example tools:&#039;&#039;&#039;&lt;br /&gt;
* [https://www.openssh.com/ OpenSSH] &lt;br /&gt;
*[https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty suite] (for Windows and Unix)&lt;br /&gt;
*[https://winscp.net/eng/download.php WinSCP] (for Windows)&lt;br /&gt;
*[https://filezilla-project.org/download.php?show_all=1 FileZilla] (for Windows, Mac and Linux)&lt;br /&gt;
*[https://cygwin.com/install.html Cygwin] (for Windows)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;network drive over SFTP:&#039;&#039;&#039;&lt;br /&gt;
*[https://www.southrivertechnologies.com/download/downloadwd.html WebDrive] (for Windows and Mac) &lt;br /&gt;
*[https://www.eldos.com/sftp-net-drive/comparison.php  SFTP Net Drive (ELDOS)] (for Windows)&lt;br /&gt;
*[https://www.netdrive.net/ NetDrive] (for Windows)&lt;br /&gt;
*[https://www.expandrive.com/expandrive ExpanDrive] (for Windows and Mac)&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Category:bwFileStorage|SFTP]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5240</id>
		<title>Data Transfer</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5240"/>
		<updated>2017-11-29T08:08:51Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;H1&amp;gt;Using SFTP from Unix client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; sftp  ka_xy1234@bwfilestorage.lsdf.kit.edu&lt;br /&gt;
Connecting to bwfilestorage.lsdf.kit.edu&amp;lt;br&amp;gt;&lt;br /&gt;
ka_xy1234@bwfilestorage.lsdf.kit.edu&#039;s password: &lt;br /&gt;
sftp&amp;gt; ls&lt;br /&gt;
snapshots&lt;br /&gt;
temp test&lt;br /&gt;
sftp&amp;gt; help&lt;br /&gt;
...&lt;br /&gt;
sftp&amp;gt; put myfile&lt;br /&gt;
sftp&amp;gt; get myfile&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H1&amp;gt;Using SFTP from Windows and Mac client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows clients do not have an SCP/SFTP client installed by default, so one needs to be installed before this protocol can be used. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example tools:&#039;&#039;&#039;&lt;br /&gt;
* [https://www.openssh.com/ OpenSSH] &lt;br /&gt;
*[https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty suite] (for Windows and Unix)&lt;br /&gt;
*[https://winscp.net/eng/download.php WinSCP] (for Windows)&lt;br /&gt;
*[https://filezilla-project.org/download.php?show_all=1 FileZilla] (for Windows, Mac and Linux)&lt;br /&gt;
*[http://cygwin.com/install.html Cygwin] (for Windows)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;network drive over SFTP:&#039;&#039;&#039;&lt;br /&gt;
*[https://www.southrivertechnologies.com/download/downloadwd.html WebDrive] (for Windows and Mac) &lt;br /&gt;
*[https://www.eldos.com/sftp-net-drive/comparison.php  SFTP Net Drive (ELDOS)] (for Windows)&lt;br /&gt;
*[https://www.netdrive.net/ NetDrive] (for Windows)&lt;br /&gt;
*[https://www.expandrive.com/expandrive ExpanDrive] (for Windows and Mac)&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Category:bwFileStorage|SFTP]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5239</id>
		<title>Data Transfer</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5239"/>
		<updated>2017-11-29T08:07:32Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;H1&amp;gt;Using SFTP from Unix client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; sftp  ka_xy1234@bwfilestorage.lsdf.kit.edu&lt;br /&gt;
Connecting to bwfilestorage.lsdf.kit.edu&amp;lt;br&amp;gt;&lt;br /&gt;
ka_xy1234@bwfilestorage.lsdf.kit.edu&#039;s password: &lt;br /&gt;
sftp&amp;gt; ls&lt;br /&gt;
snapshots&lt;br /&gt;
temp test&lt;br /&gt;
sftp&amp;gt; help&lt;br /&gt;
...&lt;br /&gt;
sftp&amp;gt; put myfile&lt;br /&gt;
sftp&amp;gt; get myfile&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H1&amp;gt;Using SFTP from Windows and Mac client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows clients do not have an SCP/SFTP client installed by default, so one needs to be installed before this protocol can be used. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example tools:&#039;&#039;&#039;&lt;br /&gt;
* [http://www.openssh.com/ OpenSSH] &lt;br /&gt;
*[http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty suite] (for Windows and Unix)&lt;br /&gt;
*[https://winscp.net/eng/download.php WinSCP] (for Windows)&lt;br /&gt;
*[https://filezilla-project.org/download.php?show_all=1 FileZilla] (for Windows, Mac and Linux)&lt;br /&gt;
*[http://cygwin.com/install.html Cygwin] (for Windows)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;network drive over SFTP:&#039;&#039;&#039;&lt;br /&gt;
*[https://www.southrivertechnologies.com/download/downloadwd.html WebDrive] (for Windows and Mac) &lt;br /&gt;
*[https://www.eldos.com/sftp-net-drive/comparison.php  SFTP Net Drive (ELDOS)] (for Windows)&lt;br /&gt;
*[http://www.netdrive.net/ NetDrive] (for Windows)&lt;br /&gt;
*[https://www.expandrive.com/expandrive ExpanDrive] (for Windows and Mac)&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Category:bwFileStorage|SFTP]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Software/Schrodinger&amp;diff=5235</id>
		<title>JUSTUS2/Software/Schrodinger</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Software/Schrodinger&amp;diff=5235"/>
		<updated>2017-11-29T08:03:54Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{|  align=&amp;quot;right&amp;quot; {{Table|width=600px}}   --&amp;gt;&lt;br /&gt;
&amp;lt;!--{|{{Softwarebox}}--&amp;gt;&lt;br /&gt;
{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot;  style=&amp;quot;text-align:center&amp;quot; | Schrödinger&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| chem/schrodinger&lt;br /&gt;
&amp;lt;!-- Neben CIS auch bereits über Kategorien  abgedeckt--&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD Tuebingen&lt;br /&gt;
&amp;lt;!-- &lt;br /&gt;
[[bwForCluster ENM]],  [[bwForCluster MLS/WISO]],  [[bwForCluster Theochem]],  [[bwForCluster BinAC]]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| commercial&lt;br /&gt;
|-&lt;br /&gt;
|Citing&lt;br /&gt;
| See Schrodinger manual&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.schrodinger.com Homepage] &amp;amp;#124; [https://www.schrodinger.com/supportdocs/18/  Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| Yes, Maestro, Desmond&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Description == &lt;br /&gt;
&#039;&#039;&#039;Schrödinger&#039;&#039;&#039; is a collection of software for chemical and biochemical use. It offers various tools, like Maestro, which acts as an interface to all other Schrödinger software and is used in materials science to investigate the structure, reactivity and properties of chemical systems. Maestro is part of the Materials Science suite.&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/chem/schrodinger&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=370&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
On the command line interface of any bwHPC cluster you&#039;ll get a list of available versions by using&lt;br /&gt;
the command&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;&#039;module avail chem/schrodinger&#039;&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
: UniCluster&lt;br /&gt;
$ module avail chem/schrodinger&lt;br /&gt;
------------------------ /opt/bwhpc/common/modulefiles -------------------------&lt;br /&gt;
chem/schrodinger/2014u1 chem/schrodinger/2015u3&lt;br /&gt;
chem/schrodinger/2014u2 chem/schrodinger/2015u4&lt;br /&gt;
: Justus&lt;br /&gt;
$ module avail chem/schrodinger&lt;br /&gt;
------------------------ /opt/bwhpc/common/modulefiles -------------------------&lt;br /&gt;
chem/schrodinger/2015u4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;The module is called &#039;&#039;&#039;&#039;schrodinger&#039;&#039;&#039;&#039; and &#039;&#039;&#039;not&#039;&#039;&#039; &#039;schoedinger&#039; or &#039;schrödinger&#039;!&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= License and Registering =&lt;br /&gt;
Schrödinger, especially Maestro, is available free of charge for academic use after registering. It can be downloaded here: [https://www.schroedinger.com/downloadcenter/ Download Center]&amp;lt;br&amp;gt;&lt;br /&gt;
Your request will be verified, after which you can download Maestro for academic use. This takes about 48 hours.&lt;br /&gt;
&lt;br /&gt;
== Prerequirements ==&lt;br /&gt;
=== Requesting the License Key ===&lt;br /&gt;
After installing Maestro you need a license key which you can request from the „Competence Center for Bioinformatics“. Please contact &lt;br /&gt;
&lt;br /&gt;
==== Set environment variable for the license server ====&lt;br /&gt;
Once you have the license server information, set it as an environment variable by editing $LICENSE_SERVER$ in the following command.&lt;br /&gt;
&amp;lt;pre&amp;gt;export LM_LICENSE_FILE=$LICENSE_SERVER$&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a Mac system you should additionally execute the following tasks:&lt;br /&gt;
&amp;lt;pre&amp;gt;sudo vi /etc/launchd.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Insert: &amp;lt;pre&amp;gt;setenv LM_LICENSE_FILE $LICENSE_SERVER$&amp;lt;/pre&amp;gt; and save the file.&lt;br /&gt;
&lt;br /&gt;
Execute the following command and start Maestro from Spotlight:&lt;br /&gt;
&amp;lt;pre&amp;gt;launchctl setenv LM_LICENSE_FILE $LICENSE_FILE$&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== bwGRiD/bwUni/JUSTUS specific installation ===&lt;br /&gt;
&lt;br /&gt;
Follow the instructions for your operating system.&lt;br /&gt;
&lt;br /&gt;
==== Linux/Mac ====&lt;br /&gt;
&lt;br /&gt;
Download [https://raw.githubusercontent.com/marekdynowski/bwSchrodingerHosts/master/post-install.sh this file] and open a terminal in the download folder. Make the bash script executable&lt;br /&gt;
&amp;lt;pre&amp;gt;chmod u+x post-install.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
and execute it.&lt;br /&gt;
&amp;lt;pre&amp;gt;./post-install.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
You will be asked for the license server, your usernames on the clusters, and the Schrödinger path/version.&lt;br /&gt;
&lt;br /&gt;
==== Windows ====&lt;br /&gt;
&lt;br /&gt;
$SCHRODINGER is the path to your Schrödinger installation.&lt;br /&gt;
Download [https://codeload.github.com/marekdynowski/MOAB-Schroedinger-Submitter/zip/master this zipfile], extract it and copy the MOAB folder to $SCHRODINGER/queues/.&lt;br /&gt;
&lt;br /&gt;
Download the [https://raw.githubusercontent.com/marekdynowski/bwSchrodingerHosts/master/schrodinger.hosts sample schrodinger.hosts]&lt;br /&gt;
and change the entries&lt;br /&gt;
* %LICENSE_SERVER% the address of the license server&lt;br /&gt;
* %USER_BWGRID% your username on the bwhpc cluster&lt;br /&gt;
* %USER_BWUNI% your username on the bwUni cluster&lt;br /&gt;
* %USER_BWJUSTUS% your username on the Justus cluster&lt;br /&gt;
* %SCHROD_VERSION% the schrodinger version (e.g. 2015u4)&lt;br /&gt;
You can delete the sections of schrodinger.hosts for clusters on which you do not have an account.&lt;br /&gt;
&lt;br /&gt;
Please execute the following commands in your Windows shell. (Close Schrödinger before executing them!)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd $SCHRODINGER/utilities&lt;br /&gt;
feature_flags.exe -d JOBCON_JSERVER_GO&lt;br /&gt;
jserver.exe -cleanall&lt;br /&gt;
jserver.exe -shutdown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&lt;br /&gt;
=== Copy your public key ===&lt;br /&gt;
Maestro needs passwordless login to submit jobs to the cluster. &lt;br /&gt;
Log in to the cluster and append the public key of your host system (from &amp;lt;tt&amp;gt;.ssh/id_rsa.pub&amp;lt;/tt&amp;gt;) to the &amp;lt;tt&amp;gt;.ssh/authorized_keys&amp;lt;/tt&amp;gt; file on the cluster.&lt;br /&gt;
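The append step can be sketched as follows. This is a self-contained demo using throwaway paths under /tmp; in practice the public key comes from your host's ~/.ssh/id_rsa.pub and the target is ~/.ssh/authorized_keys on the cluster:

```shell
# Generate a throwaway RSA key pair without a passphrase (demo only;
# in practice you reuse your existing ~/.ssh/id_rsa.pub).
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -N "" -f /tmp/demo_key

# Append the public key to an authorized_keys file; on the cluster the
# target would be ~/.ssh/authorized_keys in your home directory there.
rm -f /tmp/demo_ssh/authorized_keys
mkdir -p /tmp/demo_ssh
cat /tmp/demo_key.pub >> /tmp/demo_ssh/authorized_keys
chmod 600 /tmp/demo_ssh/authorized_keys
```

The same append can also be done in one step with the standard OpenSSH helper: ssh-copy-id &lt;username&gt;@&lt;cluster-login-node&gt;.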
&lt;br /&gt;
=== Using Maestro with the offered example ===&lt;br /&gt;
First start Maestro and choose the following menu items:&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Schrodinger-Usage01.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then load the molecularDynamicsExample.cms from your filesystem:&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Schrodinger-Usage02.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Choose your desired host and click &amp;quot;Run&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Schrodinger-Usage03.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Monitoring your Job ===&lt;br /&gt;
Maestro offers a job monitoring interface, which you can open in the left corner of the molecule window.&lt;br /&gt;
[[File:Schrodinger-Monitor01.jpg]]&lt;br /&gt;
Furthermore, you are able to manipulate your job on the cluster.&lt;br /&gt;
&lt;br /&gt;
[[Category:Chemistry software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5230</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5230"/>
		<updated>2017-11-28T13:05:33Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
After the module is loaded, activate the OpenFOAM applications by running:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
To speed up the solving process and reduce the probability of errors when running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data directly in a local folder on the node and run the job from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves the overall performance, provided you allocate enough wall time to decompose and rebuild your cases, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the node-local workspace. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel mode with OpenFOAM, it is necessary to decompose the geometry domain into segments, equal in number to the processors (or threads) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and exchanging data with all other processors in between. There is, of course, a mechanism that connects the calculations properly, so you don&#039;t lose data or generate wrong results. The decomposition and segment-building process is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into and which decomposition method to use. The automatic one is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it trims the mesh, collecting as many cells as possible per processor, and tries to avoid empty segments or segments with few cells. If you want your mesh to be divided in another way, for example by specifying the number of segments it should be cut into in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Further methods are documented in the OpenFOAM user guide. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
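As an illustration, a minimal ''system/decomposeParDict'' for an 8-processor run could look like the following sketch (the numbers are examples and must match the processor count of your job request):

```text
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains 8;    // number of segments = number of processors

method          scotch;  // automatic decomposition

// Alternative: cut the domain explicitly into 2 x 2 x 2 blocks
// method          simple;
// simpleCoeffs
// {
//     n       (2 2 2);
//     delta   0.001;
// }
```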
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module loads automatically the necessary &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module for parallel run, do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another version of mpi, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requiring 6000 MB of physical memory per processor and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar and the &#039;&amp;amp;&amp;amp;&#039; operator from the command above &lt;br /&gt;
# if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times, and you can use wildcards in the list of fields. For example, in order to write out all fields starting with &amp;quot;RR_&amp;quot; you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by writing &amp;quot;banana&amp;quot; in the field list: during the run of the solver all valid field names are then printed.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific times in the simulation, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use an OpenFOAM version before 4.0 (or v1606), the type of the function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM cases with ParaView, they have to be opened manually from within the separate ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software which requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In Paraview go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;Ok&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access_Members_Hochschule_Esslingen&amp;diff=5227</id>
		<title>BwForCluster User Access Members Hochschule Esslingen</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access_Members_Hochschule_Esslingen&amp;diff=5227"/>
		<updated>2017-11-28T12:57:20Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Access to bwForCluster for members of Hochschule Esslingen =&lt;br /&gt;
&lt;br /&gt;
== Login prerequisites == &lt;br /&gt;
&lt;br /&gt;
* A valid Hochschule Esslingen account is required for access to the bwForCluster. &lt;br /&gt;
&lt;br /&gt;
* As part of the login procedure, you will get a page that shows the data on your account that will be transferred from Hochschule Esslingen to the cluster. This page shows your entitlements. It should list the entitlement: &amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/pre&amp;gt;  If it does not, please contact [mailto:grid-support@hs-esslingen.de grid-support@hs-esslingen.de] for further information.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Terms of Use ==&lt;br /&gt;
&lt;br /&gt;
As with any other Hochschule Esslingen service, users of bwForCluster must strictly adhere to the [https://www.hs-esslingen.de/de/hochschule/service/rechenzentrum/organisatorisches/betriebsordnung-des-rechenzentrums.html Betriebsordnung des Rechenzentrums].&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration_Support_-_bwUniCluster&amp;diff=5226</id>
		<title>Registration Support - bwUniCluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration_Support_-_bwUniCluster&amp;diff=5226"/>
		<updated>2017-11-28T12:56:03Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Registration ==&lt;br /&gt;
If you have questions or problems concerning &#039;&#039;&#039;registration&#039;&#039;&#039;, please open a ticket via &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[http://www.support.bwhpc-c5.de bwSupport Portal]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
or contact your local university hotline:&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:2px solid #000000;&amp;quot;&lt;br /&gt;
! University !! Hotline&lt;br /&gt;
|-&lt;br /&gt;
| Albert Ludwig University of Freiburg  ||  hpc-support (ät) hpc.uni-freiburg.de or visit https://www.hpc.uni-freiburg.de/bwunicluster &lt;br /&gt;
|-&lt;br /&gt;
| Eberhard Karls University, Tübingen || hpcmaster (ät) uni-tuebingen.de&lt;br /&gt;
|-&lt;br /&gt;
| Karlsruhe Institute of Technology (KIT)  || bwunicluster-hotline (ät) lists.kit.edu &lt;br /&gt;
|-&lt;br /&gt;
| Ruprecht-Karls-Universität Heidelberg || hpc-support (ät) urz.uni-heidelberg.de&lt;br /&gt;
|-&lt;br /&gt;
| Ulm University || helpdesk (ät) uni-ulm.de &lt;br /&gt;
|-&lt;br /&gt;
| University of Hohenheim || kim-bw-projekt (ät) uni-hohenheim.de &lt;br /&gt;
|-&lt;br /&gt;
| University of Konstanz || support (ät) uni-konstanz.de &lt;br /&gt;
|-&lt;br /&gt;
| University of Mannheim ||hpc-support (ät) mailman.uni-mannheim.de &lt;br /&gt;
|-&lt;br /&gt;
| University of Stuttgart || bwunicluster (ät) hlrs.de &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Support]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5224</id>
		<title>Data Transfer</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5224"/>
		<updated>2017-11-28T12:38:29Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;H1&amp;gt;Using SFTP from Unix client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; sftp  ka_xy1234@bwfilestorage.lsdf.kit.edu&lt;br /&gt;
Connecting to bwfilestorage.lsdf.kit.edu&amp;lt;br&amp;gt;&lt;br /&gt;
ka_xy1234@bwfilestorage.lsdf.kit.edu&#039;s password: &lt;br /&gt;
sftp&amp;gt; ls&lt;br /&gt;
snapshots&lt;br /&gt;
temp test&lt;br /&gt;
sftp&amp;gt; help&lt;br /&gt;
...&lt;br /&gt;
sftp&amp;gt; put myfile&lt;br /&gt;
sftp&amp;gt; get myfile&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H1&amp;gt;Using SFTP from Windows and Mac client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows clients do not have an SCP/SFTP client installed by default, so one needs to be installed before this protocol can be used. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example tools:&#039;&#039;&#039;&lt;br /&gt;
* [http://www.openssh.com/ OpenSSH] &lt;br /&gt;
*[http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty suite] (for Windows and Unix)&lt;br /&gt;
*[https://winscp.net/eng/download.php WinSCP] (for Windows)&lt;br /&gt;
*[https://filezilla-project.org/download.php?show_all=1 FileZilla] (for Windows, Mac and Linux)&lt;br /&gt;
*[http://cygwin.com/install.html Cygwin] (for Windows)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;network drive over SFTP:&#039;&#039;&#039;&lt;br /&gt;
*[http://www.southrivertechnologies.com/download/downloadwd.html WebDrive] (for Windows and Mac) &lt;br /&gt;
*[https://www.eldos.com/sftp-net-drive/comparison.php  SFTP Net Drive (ELDOS)] (for Windows)&lt;br /&gt;
*[http://www.netdrive.net/ NetDrive] (for Windows)&lt;br /&gt;
*[https://www.expandrive.com/expandrive ExpanDrive] (for Windows and Mac)&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Category:bwFileStorage|SFTP]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Software/Molden&amp;diff=5221</id>
		<title>JUSTUS2/Software/Molden</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Software/Molden&amp;diff=5221"/>
		<updated>2017-11-28T12:33:41Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| chem/molden&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]]&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| Free. See: [http://www.cmbi.ru.nl/molden/CopyRight.html Copyright CMBI]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| [http://www.cmbi.ru.nl/molden/ref.html Molden-Reference]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.cmbi.ru.nl/molden/ Homepage] &amp;amp;#124; [http://www.cmbi.ru.nl/molden Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| [[#Graphical User Interface (GUI)|Yes]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
&#039;&#039;&#039;Molden&#039;&#039;&#039; is a software package for displaying molecular coordinates, molecular orbitals and electron densities from various quantum chemical software packages, and it supports contour and 3-D grid plots. Moreover, &#039;&#039;&#039;Molden&#039;&#039;&#039; supports many output formats, including PostScript, POV-Ray, OpenGL and HPGL.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/chem/molden&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=150&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
On the command line interface of any bwHPC cluster, a list of the available versions can be obtained using&lt;br /&gt;
&#039;module avail chem/molden&#039;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail chem/molden&lt;br /&gt;
------------------------ /opt/bwhpc/common/modulefiles -------------------------&lt;br /&gt;
chem/molden/5.2.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
== Loading the module ==&lt;br /&gt;
You can load the default version of &#039;&#039;Molden&#039;&#039; with the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load chem/molden&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wish to load a specific (older) version of &#039;&#039;Molden&#039;&#039;, you can do so using e.g. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load chem/molden/5.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to load version 5.1.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Software Binaries ==&lt;br /&gt;
=== Command-Line === &lt;br /&gt;
Once the module &#039;&#039;Molden&#039;&#039; is loaded, the binaries &#039;&#039;&#039;molden&#039;&#039;&#039; and &#039;&#039;&#039;gmolden&#039;&#039;&#039; can be directly executed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ molden&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Graphical User Interface (GUI) ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ gmolden &amp;amp;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:molden_gui.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Binary options ====&lt;br /&gt;
The available command line options can be listed via the binary&#039;s help flag:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ molden -h&lt;br /&gt;
 -a        no automatic cartesian -&amp;gt; zmat conversion&lt;br /&gt;
 -b        use orbitals of first point on opt. runs&lt;br /&gt;
 -c 0.5    change depth of shading, range [0.0-1.0]&lt;br /&gt;
 -e        DMAREL INPUT: Use est set of parameters&lt;br /&gt;
 -f        PDB: build connectivity from cartesian coordinates&lt;br /&gt;
 -g        PDB: always calculate Helix/Sheet information&lt;br /&gt;
 -geom XXXxYYY-xxx-yyy&lt;br /&gt;
           XXX and YYY are the size of the window&lt;br /&gt;
           xxx and yyy are the position of the window&lt;br /&gt;
 -h        print commandline flags&lt;br /&gt;
 -hoff     switch of hydrogen bonds&lt;br /&gt;
 -hdmin x  mininum hydrogen bond distance (Ang)&lt;br /&gt;
 -hdmax x  maximum hydrogen bond distance (Ang)&lt;br /&gt;
 -hamin x  mininum hydrogen bond angle (Degrees)&lt;br /&gt;
 -hamax x  maximum hydrogen bond angle (Degrees)&lt;br /&gt;
 -i opt    fdat files: &lt;br /&gt;
           opt=1 standardise H-C, H-N&lt;br /&gt;
           opt=2 1 + standardise phenyl rings&lt;br /&gt;
 -j num    maximum number of gifs to write&lt;br /&gt;
 -k num    select color of labels (0-15)&lt;br /&gt;
           (gmolden only).&lt;br /&gt;
 -l        dont display molden logo&lt;br /&gt;
 -n        dont add hydrogens to PDB file&lt;br /&gt;
 -m        turn off the beep sounds&lt;br /&gt;
 -o fname  plotfilename (default=plot)&lt;br /&gt;
 -p 2.0    change perspective value (def: 13.0)&lt;br /&gt;
 -r fname  read file with per line; &lt;br /&gt;
           atom color(1-15) VandeWaalsRadius, (- = skip)&lt;br /&gt;
           background color(1-15)&lt;br /&gt;
           palette red #CF54FD ...   (14 colors)&lt;br /&gt;
 -s 4.0    scale amplitude of normal vibrations&lt;br /&gt;
 -t        read ascii MOPAC93 Chem3D style&lt;br /&gt;
 -u        With GAMESS-US optimisation output, molden&lt;br /&gt;
           generates a z-matrix, (def: read from output)&lt;br /&gt;
 -v        print verbose information&lt;br /&gt;
 -w opt    write all points of a movie to a file:&lt;br /&gt;
           opt specifies format; xyz(=1) zmat(=2,mopac)&lt;br /&gt;
           VRML2.0(=3)&lt;br /&gt;
 -x file   read in file with spherical atomic densities&lt;br /&gt;
 -y 1.0    threshold for printing displacement vectors&lt;br /&gt;
           of normal modes to postscript file&lt;br /&gt;
 -z        create high quality opengl coils&lt;br /&gt;
 -A        Keep order of atoms when creating a Z-matrix&lt;br /&gt;
 -C        Color postscript (default=mono, except PDB)&lt;br /&gt;
 -D opt    DMA mode:&lt;br /&gt;
           0 = atomic sites only (default)&lt;br /&gt;
           1 = atomic+halfway-bond sites&lt;br /&gt;
           2 = no shift of overlap dens. of conn. atoms&lt;br /&gt;
 -E        DMAREL input: use coordinates from multipoles&lt;br /&gt;
 -F        gmolden: Use all opengl code, (line drawing)&lt;br /&gt;
 -G 0.6    Grid width colour coded ESP potential map&lt;br /&gt;
 -H        GAMESS-US: do normal modes when HSSEND=.TRUE.&lt;br /&gt;
 -I        dont use shaders, if available&lt;br /&gt;
 -J opt    Choose format of screen shot:&lt;br /&gt;
           1 = GIF (default)&lt;br /&gt;
           2 = RGB&lt;br /&gt;
           3 = BMP&lt;br /&gt;
 -L        display both neg. and pos. contour in space&lt;br /&gt;
           plot of the laplacian&lt;br /&gt;
 -1        use only the lower half of the cubic grid&lt;br /&gt;
           used for the space type plot&lt;br /&gt;
 -2        use only the upper half of the cubic grid&lt;br /&gt;
           used for the space type plot&lt;br /&gt;
 -M        MonoChrome postscript&lt;br /&gt;
 -N        Check for mpi, to run ambfor/ambmd &lt;br /&gt;
           in parallel &lt;br /&gt;
 -O        switch off multiple structures handling&lt;br /&gt;
 -P        PDB: treat all input files as PDB files&lt;br /&gt;
 -Q        support for older StarNet xwin32 (ver. 6)&lt;br /&gt;
 -R npts   adjust the gridsize in points&lt;br /&gt;
 -S        start with shade off&lt;br /&gt;
 -T        treat all input files as TINKER xyz files&lt;br /&gt;
 -U        do not use opengl shaders&lt;br /&gt;
 -X        use with XMOL cartesian format input&lt;br /&gt;
 -V fname  VRML density filename&lt;br /&gt;
 -W        Write VRML2.0 instead of VRML1.0&lt;br /&gt;
 -Z        Map the Z-matrix file mapfile onto crystal&lt;br /&gt;
           mapfile contains Z-matrix followed by keyword&lt;br /&gt;
           MAP and per line an integer that maps a&lt;br /&gt;
           Z-matrix line onto a cartesian line&lt;br /&gt;
 -=        Use gamess-us dialect of gaussian zmat writing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or from the [http://www.cmbi.ru.nl/molden/command.html website].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Molden-Specific Environments =&lt;br /&gt;
To see a list of all Molden environment variables set by the &#039;module load chem/molden/version&#039; command,&lt;br /&gt;
use &#039;env | grep MOLDEN&#039;&lt;br /&gt;
or try the command &#039;module show chem/molden/version&#039;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load chem/molden&lt;br /&gt;
$ env | grep MOLDEN&lt;br /&gt;
MOLDEN_VERSION=5.2.1&lt;br /&gt;
MOLDEN_AMB_DIR=/opt/bwhpc/common/chem/molden/5.2.1/bin/ambfor&lt;br /&gt;
MOLDEN_BIN_DIR=/opt/bwhpc/common/chem/molden/5.2.1/bin&lt;br /&gt;
MOLDEN_UTL_DIR=/opt/bwhpc/common/chem/molden/5.2.1/utils&lt;br /&gt;
MOLDEN_SRF_DIR=/opt/bwhpc/common/chem/molden/5.2.1/bin/surf&lt;br /&gt;
$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
To visualize the Cartesian coordinates of a molecule stored in the file &#039;&#039;molecule.xyz&#039;&#039; without automatic conversion to a Z-matrix, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ molden -a molecule.xyz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Version-Specific Information =&lt;br /&gt;
For specific information about a particular version, see the help available via the module system with the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module help chem/molden/version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Chemistry software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Acknowledgement&amp;diff=5219</id>
		<title>User:M Janczyk/Acknowledgement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Acknowledgement&amp;diff=5219"/>
		<updated>2017-11-28T12:31:46Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When preparing a publication describing work that involved the usage of a bwForCluster, e.g. the bwForCluster NEMO, please ensure that you reference the bwHPC initiative, the bwHPC-C5 project and – if appropriate – also the bwHPC facility itself. The following sample text is suggested as a starting point.&lt;br /&gt;
 &lt;br /&gt;
 Acknowledgement:&lt;br /&gt;
 The authors acknowledge support by the state of Baden-Württemberg through bwHPC&lt;br /&gt;
 and the German Research Foundation (DFG) through grant no INST 39/963-1 FUGG.&lt;br /&gt;
&lt;br /&gt;
In addition, we kindly ask you to notify us of any reports, conference papers, journal articles, theses, posters, or talks that contain results obtained on any bwHPC resource by sending an email to  &lt;br /&gt;
[mailto:publications@bwhpc-c5.de  publications@bwhpc-c5.de] stating:&lt;br /&gt;
* cluster facility (e.g. bwForCluster NEMO)&lt;br /&gt;
* RV acronym (e.g. bw16A000)&lt;br /&gt;
* author(s)&lt;br /&gt;
* title &#039;&#039;or&#039;&#039; booktitle&lt;br /&gt;
* journal, volume, pages &#039;&#039;or&#039;&#039; editors, address, publisher &lt;br /&gt;
* year.&lt;br /&gt;
&lt;br /&gt;
Such recognition is important for acquiring funding for the next generation hardware, support services, data storage and infrastructure.&lt;br /&gt;
&lt;br /&gt;
The publications will be referenced on the [https://www.bwhpc.de/en/user_publications.php bwHPC website].&lt;br /&gt;
&lt;br /&gt;
[[Category:BwForCluster NEMO]]&lt;br /&gt;
[[Category:Acknowledgement]]&lt;br /&gt;
[[Category:BwForCluster NEMO|Acknowledgement]]&lt;br /&gt;
[[Category:Acknowledgement|bwForCluster NEMO]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Software/Ansys&amp;diff=5211</id>
		<title>BwUniCluster2.0/Software/Ansys</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Software/Ansys&amp;diff=5211"/>
		<updated>2017-11-28T11:37:43Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/ansys&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| Academic. See: [http://www.ansys.com/Academic/educator-tools/Licensing+&amp;amp;+Terms+of+Use Licensing and Terms-of-Use].&lt;br /&gt;
|-&lt;br /&gt;
| Citing &lt;br /&gt;
| [http://www.ansys.com/academic/educator-tools/ Citations]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.ansys.com/ Ansys Homepage] &amp;amp;#124; [http://www.ansys.com/Academic/educator-tools/Support+Resources Support and Resources]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
&#039;&#039;&#039;ANSYS&#039;&#039;&#039; is a general-purpose software suite for simulating problems across all disciplines of physics: structural mechanics, fluid dynamics, heat transfer, electromagnetics, etc. For more information about ANSYS products, please visit [http://www.ansys.com/Industries/Academic/ http://www.ansys.com/Industries/Academic/]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/ansys&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=350&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
On the command-line interface of a particular bwHPC cluster, a list of all available ANSYS versions can be queried as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail cae/ansys&lt;br /&gt;
-------------------------- /opt/bwhpc/kit/modulefiles --------------------------&lt;br /&gt;
cae/ansys/15.0&lt;br /&gt;
&lt;br /&gt;
------------------------ /opt/bwhpc/common/modulefiles -------------------------&lt;br /&gt;
cae/ansys/15.0_bw&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
There are two licenses. The module &amp;lt;code&amp;gt;cae/ansys/15.0_bw&amp;lt;/code&amp;gt; uses the BW license server (25 academic research licenses, 69 parallel processes) and &amp;lt;code&amp;gt;cae/ansys/15.0&amp;lt;/code&amp;gt; uses the KIT license server (only members of the KIT can use it). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
== Loading the Module ==&lt;br /&gt;
If you wish to load a specific version of ANSYS you can do so by executing e.g.: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load cae/ansys/15.0_bw&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to load the version 15.0 with the BW license.&lt;br /&gt;
&lt;br /&gt;
You can load the default version of ANSYS with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load cae/ansys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Start commands ==&lt;br /&gt;
To start an ANSYS Mechanical session enter &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ansys150&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To launch an ANSYS FLUENT session enter&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ fluent&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To run the ANSYS Workbench, use the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ runwb2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Online documentation is available from the help menu or by using the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ anshelp150&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As with all processes that require more than a few minutes to run, non-trivial ANSYS solver jobs must be submitted to the cluster queueing system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
== ANSYS Mechanical batch jobs ==&lt;br /&gt;
&lt;br /&gt;
The following script could be submitted to the queueing system to run an ANSYS Mechanical job in parallel:&lt;br /&gt;
{{bwFrameA|&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load cae/ansys&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;-prot&amp;quot;&lt;br /&gt;
export MPI_USESRUN=1&lt;br /&gt;
cd working_dir&lt;br /&gt;
export MACHINES=`/software/bwhpc/common/cae/ansys_inc150/scc/machines.pl`&lt;br /&gt;
ansys150 -dis -b -j lal -machines $MACHINES &amp;lt; input.f18&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The working directory &#039;&#039;working_dir&#039;&#039; could be located under $HOME or $WORK.&lt;br /&gt;
&lt;br /&gt;
To submit the example script to the queueing system, execute the following (32 cores, 1 GB of memory per core, max. walltime 600 seconds):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
msub -l nodes=2:ppn=16,pmem=1000mb,walltime=600 Shell-Script&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== ANSYS Fluent batch jobs ==&lt;br /&gt;
The following script &amp;quot;run_fluent.sh&amp;quot; could be submitted to the queueing system to run an ANSYS Fluent job in parallel using 4 cores on a single node:&lt;br /&gt;
{{bwFrameA|&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
#MSUB -l nodes=1:ppn=4&lt;br /&gt;
#MSUB -l walltime=0:10:00&lt;br /&gt;
#MSUB -l mem=16000mb&lt;br /&gt;
&lt;br /&gt;
## setup environment&lt;br /&gt;
export MPI_USESRUN=1&lt;br /&gt;
&lt;br /&gt;
## generate hosts list&lt;br /&gt;
export run_nodes=$(srun hostname)&lt;br /&gt;
echo $run_nodes | sed &amp;quot;s/ /\n/g&amp;quot; &amp;gt; fluent.hosts&lt;br /&gt;
echo &amp;quot;&amp;quot; &amp;gt;&amp;gt; fluent.hosts&lt;br /&gt;
&lt;br /&gt;
## load ansys module&lt;br /&gt;
module load cae/ansys&lt;br /&gt;
&lt;br /&gt;
## start fluent job&lt;br /&gt;
time fluent 3d -rsh=ssh -g -pib -cnf=fluent.hosts -i test.inp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
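The hosts-file generation in the script above can be tried outside the cluster: the sed call simply turns the space-separated output of srun into one hostname per line. A minimal sketch, using an example string as a stand-in for $(srun hostname):

```shell
# Stand-in for the node list that 'srun hostname' would print on the cluster.
run_nodes="node01 node01 node02 node02"

# Replace every space with a newline, as in the job script above,
# then append the trailing empty line expected in fluent.hosts.
echo $run_nodes | sed "s/ /\n/g" > fluent.hosts
echo "" >> fluent.hosts

cat fluent.hosts
```

On the cluster, srun prints one hostname per allocated core, so the resulting file lists each node once per requested process.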
To submit the example script to the queueing system execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub run_fluent.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== ANSYS CFX batch jobs ==&lt;br /&gt;
With the script &amp;quot;run_cfx.sh&amp;quot; you can submit a CFX job to the queueing system to run in parallel using 8 cores on two nodes with the start method &#039;Platform MPI Parallel&#039;:&lt;br /&gt;
{{bwFrameA|&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
#MSUB -l nodes=2:ppn=4&lt;br /&gt;
#MSUB -l walltime=0:10:00&lt;br /&gt;
#MSUB -l mem=32000mb&lt;br /&gt;
&lt;br /&gt;
## setup environment&lt;br /&gt;
export MPI_USESRUN=1&lt;br /&gt;
&lt;br /&gt;
## load ansys module&lt;br /&gt;
module load cae/ansys&lt;br /&gt;
&lt;br /&gt;
## start job&lt;br /&gt;
cfx5solve -def  test.def -part 8 -start-method &#039;Platform MPI Parallel&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
To submit the example script to the queueing system execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub run_cfx.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5208</id>
		<title>Data Transfer</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Data_Transfer&amp;diff=5208"/>
		<updated>2017-11-28T11:29:53Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;H1&amp;gt;Using SFTP from Unix client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; sftp  ka_xy1234@bwfilestorage.lsdf.kit.edu&lt;br /&gt;
Connecting to bwfilestorage.lsdf.kit.edu&lt;br /&gt;
ka_xy1234@bwfilestorage.lsdf.kit.edu&#039;s password: &lt;br /&gt;
sftp&amp;gt; ls&lt;br /&gt;
snapshots&lt;br /&gt;
temp test&lt;br /&gt;
sftp&amp;gt; help&lt;br /&gt;
...&lt;br /&gt;
sftp&amp;gt; put myfile&lt;br /&gt;
sftp&amp;gt; get myfile&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
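For non-interactive transfers, the same copy can also be done with scp, which is part of the same OpenSSH suite as sftp. A sketch, reusing the example host and user name from above:

```shell
# Upload a local file to the home directory on bwFileStorage:
scp myfile ka_xy1234@bwfilestorage.lsdf.kit.edu:

# Download it back into the current local directory:
scp ka_xy1234@bwfilestorage.lsdf.kit.edu:myfile .
```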
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H1&amp;gt;Using SFTP from Windows and Mac client&amp;lt;/H1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Windows clients do not have an SCP/SFTP client installed by default, so one needs to be installed before this protocol can be used. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example tools:&#039;&#039;&#039;&lt;br /&gt;
* [http://www.openssh.com/ OpenSSH] &lt;br /&gt;
*[http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty suite] (for Windows and Unix)&lt;br /&gt;
*[https://winscp.net/eng/download.php WinSCP] (for Windows)&lt;br /&gt;
*[https://filezilla-project.org/download.php?show_all=1 FileZilla] (for Windows, Mac and Linux)&lt;br /&gt;
*[http://cygwin.com/install.html Cygwin] (for Windows)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Mounting a network drive over SFTP:&#039;&#039;&#039;&lt;br /&gt;
*[http://www.southrivertechnologies.com/download/downloadwd.html WebDrive] (for Windows and Mac) &lt;br /&gt;
*[https://www.eldos.com/sftp-net-drive/comparison.php  SFTP Net Drive (ELDOS)] (for Windows)&lt;br /&gt;
*[http://www.netdrive.net/ NetDrive] (for Windows)&lt;br /&gt;
*[http://www.expandrive.com/expandrive ExpanDrive] (for Windows and Mac)&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Category:bwFileStorage|SFTP]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Visualization&amp;diff=5203</id>
		<title>JUSTUS2/Visualization</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Visualization&amp;diff=5203"/>
		<updated>2017-11-28T11:18:48Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| vis/tigervnc&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]]&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| GPL&lt;br /&gt;
|-&lt;br /&gt;
| Citing &lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.tigervnc.org TigerVNC Homepage]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
= Introduction to TigerVNC =&lt;br /&gt;
&#039;&#039;&#039;TigerVNC&#039;&#039;&#039; is a high-performance implementation of VNC (Virtual Network Computing), a client/server application that allows &lt;br /&gt;
users to launch and interact with graphical applications on remote machines. It is usually faster than standard X11 forwarding and can therefore&lt;br /&gt;
be used if graphical software feels slow and unresponsive.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/vis/tigervnc&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=180&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Start =&lt;br /&gt;
First, you need to start the VNC server. The exact startup mechanism depends on the cluster.&lt;br /&gt;
* [[Start VNC Server - bwUniCluster]]&lt;br /&gt;
* [[Start VNC Server - bwForCluster Chemistry]]&lt;br /&gt;
* [[Start VNC Server - bwForCluster Chemistry - 3D Acceleration]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Login =&lt;br /&gt;
The startup script of the VNC server should print detailed instructions on how to establish the connection to the VNC server from your local computer. The instructions depend on whether you&lt;br /&gt;
use Windows or Linux and on whether you work with the TurboVNC Java Viewer, a tool that can simplify the process somewhat but needs the Java Development Kit (JDK) to run. The next steps are therefore&lt;br /&gt;
divided into three cases. Each command should be issued on the local computer.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Login with TurboVNC Java Viewer&#039;&#039;&#039; &amp;lt;br /&amp;gt;Needed Software: [http://turbovnc.org/ TurboVNC 2.0], JDK&amp;lt;br /&amp;gt; Open TurboVNC Java Viewer. Go to Options... -&amp;gt; Security -&amp;gt; Gateway and fill in the parameters provided by the &#039;&#039;run_vncserver&#039;&#039; script. You can save these settings in the &amp;quot;Global&amp;quot; tab if you want to. Now you can click &amp;quot;OK&amp;quot;, supply the VNC server and connect to the server. You should be prompted for your ssh password and your VNC password and after that the connection is established.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Login without TurboVNC Java Viewer for Linux users&#039;&#039;&#039;&amp;lt;br /&amp;gt;Needed Software: A VNC viewer such as tigervnc or turbovnc&amp;lt;br /&amp;gt;A tunnel must be created with the ssh command given by the &#039;&#039;run_vncserver&#039;&#039; script. Open a new terminal, start a VNC viewer and connect to localhost:n, where n is the display number printed by &#039;&#039;run_vncserver&#039;&#039;, using a command such like this&amp;lt;br /&amp;gt;&amp;lt;pre&amp;gt;$ vncviewer localhost:1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Login without TurboVNC Java Viewer for Windows users&#039;&#039;&#039;&amp;lt;br /&amp;gt;Needed Software: [https://sourceforge.net/projects/tigervnc/files/tigervnc/1.3.0 tigervnc], [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&amp;lt;br /&amp;gt;You need to start Putty and go to Connection -&amp;gt; SSH -&amp;gt; Tunnels. Fill in the parameters provided by &#039;&#039;run_vncserver&#039;&#039;. After you clicked &amp;quot;Add&amp;quot; you must navigate to Session and connect to the bwUniCluster with your username and password. Once the connection is established start the tigervnc client and connect to localhost:n where n is the display number printed by &#039;&#039;run_vncserver&#039;&#039;.&lt;br /&gt;
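For the Linux case, the ssh tunnel command printed by &#039;&#039;run_vncserver&#039;&#039; typically has the following shape. The user name, host name and port below are placeholders, not actual values; always use the exact command printed by the script:

```shell
# Forward local port 5901 to VNC display :1 (port 5901) on the login node.
# 'username' and 'cluster.example.org' are placeholders.
ssh -L 5901:localhost:5901 username@cluster.example.org

# Then, in a second terminal, connect the viewer to the local end of the tunnel:
vncviewer localhost:1
```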
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Shutdown =&lt;br /&gt;
To exit your VNC session it is not sufficient to close the viewer window, because this does not terminate the VNC server. The server will keep running and you will run into problems&lt;br /&gt;
when you try to start a new VNC session later on. Please use the &amp;quot;log out&amp;quot; function of the desktop environment inside the VNC session; this will terminate the server properly.&lt;br /&gt;
[[Category:visualization]][[Category:bwUniCluster]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Starting OpenGL programs =&lt;br /&gt;
Starting an OpenGL program in a 3D-accelerated VNC session results in an error message. To fix this and redirect OpenGL commands to the graphics card on the cluster, you have to use the vglrun command as in the following example:&lt;br /&gt;
 $ vglrun glxgears -info&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Environment_Modules&amp;diff=5201</id>
		<title>Environment Modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Environment_Modules&amp;diff=5201"/>
		<updated>2017-11-28T08:12:19Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=650px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| category/name &amp;amp;#124; category/name/version (optional)&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://modules.sourceforge.net/ Environment Modules Project] &amp;amp;#124; [https://sourceforge.net/projects/modules/ Environment Modules]&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [http://www.gnu.org/licenses/old-licenses/gpl-2.0.html GNU General Public License]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&#039;&#039;&#039;Environment Modules&#039;&#039;&#039;, or short &#039;&#039;&#039;Modules&#039;&#039;&#039; are the means by which most of the installed scientific software is provided on the bwHPC clusters.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The use of different compilers, libraries and software packages requires users to set up a specific session environment suited to the program they want to run. The bwHPC clusters provide users with the possibility to load and unload complete environments for compilers, libraries and software packages with a single command. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description = &lt;br /&gt;
The Environment &#039;&#039;Modules&#039;&#039; package enables dynamic modification of your environment by the &lt;br /&gt;
use of so-called &#039;&#039;modulefiles&#039;&#039;. A &#039;&#039;modulefile&#039;&#039; contains information to configure the shell &lt;br /&gt;
for a program/software. Typically, a modulefile contains instructions that alter or set shell &lt;br /&gt;
environment variables, such as PATH and MANPATH, to enable access to various installed &lt;br /&gt;
software. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
One of the key features of using the Environment &#039;&#039;Modules&#039;&#039; software is to allow multiple versions of the same software to be used in your environment in a controlled manner. &lt;br /&gt;
For example, two different versions of the Intel C compiler can be installed on the system at the same time - the version used is based upon which Intel C compiler modulefile is loaded.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The software stack of bwHPC clusters provides a number of modulefiles. You can also &lt;br /&gt;
create your own modulefiles. &#039;&#039;Modulefiles&#039;&#039; may be shared by many users on a system, and &lt;br /&gt;
users may have their own collection of modulefiles to supplement or replace the shared &lt;br /&gt;
modulefiles.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
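A directory with your own modulefiles can be made visible with the &#039;&#039;module use&#039;&#039; subcommand; the directory name here is only an example:

```shell
# Add a personal modulefile directory to the module search path,
# then check that its modules appear in the listing:
module use $HOME/modulefiles
module avail
```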
A modulefile does not provide configuration of your environment until it is explicitly loaded, &lt;br /&gt;
i.e., the specific modulefile for a software product or application must be loaded in your environment before the configuration information in the modulefile is effective.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For instance, to load the default Intel C and Fortran compilers, you must execute&lt;br /&gt;
&#039;&#039;&#039;&#039;module load compiler/intel&#039;&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load compiler/intel&lt;br /&gt;
$ module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) compiler/pgi/12.10(default)    2) compiler/intel/15.0(default)&lt;br /&gt;
$ : Display all Intel related environments now&lt;br /&gt;
$ env | grep INTEL&lt;br /&gt;
INTEL_LICENSE_FILE=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/licenses&lt;br /&gt;
INTEL_LIB_MICMPI=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/mpirt/lib/mic&lt;br /&gt;
INTEL_HOME=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187&lt;br /&gt;
INTEL_VERSION=15.0.3&lt;br /&gt;
INTEL_MAN_DIR=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/man/en_US&lt;br /&gt;
INTEL_INC_DIR=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include&lt;br /&gt;
INTEL_BIN_DIR=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/bin/intel64&lt;br /&gt;
INTEL_DOC_DIR=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/Documentation/en_US&lt;br /&gt;
INTEL_LIB_DIR=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/lib/intel64&lt;br /&gt;
INTEL_LIB_MIC=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/lib/mic&lt;br /&gt;
INTEL_PYTHONHOME=/opt/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/debugger/python/intel64&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
== Documentation ==&lt;br /&gt;
For help on how to use &#039;&#039;Modules&#039;&#039; software, i.e., the command &#039;&#039;&#039;module&#039;&#039;&#039;, &lt;br /&gt;
execute &#039;&#039;&#039;&#039;module help&#039;&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module help&lt;br /&gt;
Modules Release 3.2.10 2012-12-21 (Copyright GNU GPL v2 1991):&lt;br /&gt;
&lt;br /&gt;
  Usage: module [ switches ] [ subcommand ] [subcommand-args ]&lt;br /&gt;
&lt;br /&gt;
Switches:&lt;br /&gt;
	-H|--help		this usage info&lt;br /&gt;
	-V|--version		modules version &amp;amp; configuration options&lt;br /&gt;
	-f|--force		force active dependency resolution&lt;br /&gt;
	-t|--terse		terse    format avail and list format&lt;br /&gt;
	-l|--long		long     format avail and list format&lt;br /&gt;
	-h|--human		readable format avail and list format&lt;br /&gt;
	-v|--verbose		enable  verbose messages&lt;br /&gt;
	-s|--silent		disable verbose messages&lt;br /&gt;
	-c|--create		create caches for avail and apropos&lt;br /&gt;
	-i|--icase		case insensitive&lt;br /&gt;
	-u|--userlvl &amp;lt;lvl&amp;gt;	set user level to (nov[ice],exp[ert],adv[anced])&lt;br /&gt;
  Available SubCommands and Args:&lt;br /&gt;
	+ add|load		modulefile [modulefile ...]&lt;br /&gt;
	+ rm|unload		modulefile [modulefile ...]&lt;br /&gt;
	+ switch|swap		[modulefile1] modulefile2&lt;br /&gt;
	+ display|show		modulefile [modulefile ...]&lt;br /&gt;
	+ avail			[modulefile [modulefile ...]]&lt;br /&gt;
	+ use [-a|--append]	dir [dir ...]&lt;br /&gt;
	+ unuse			dir [dir ...]&lt;br /&gt;
	+ update&lt;br /&gt;
	+ refresh&lt;br /&gt;
	+ purge&lt;br /&gt;
	+ list&lt;br /&gt;
	+ clear&lt;br /&gt;
	+ help			[modulefile [modulefile ...]]&lt;br /&gt;
	+ whatis		[modulefile [modulefile ...]]&lt;br /&gt;
	+ apropos|keyword	string&lt;br /&gt;
	+ initadd		modulefile [modulefile ...]&lt;br /&gt;
	+ initprepend		modulefile [modulefile ...]&lt;br /&gt;
	+ initrm		modulefile [modulefile ...]&lt;br /&gt;
	+ initswitch		modulefile1 modulefile2&lt;br /&gt;
	+ initlist&lt;br /&gt;
	+ initclear&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or &#039;&#039;&#039;&#039;man module&#039;&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MODULE(1)                       Modules package                      MODULE(1)&lt;br /&gt;
&lt;br /&gt;
NAME&lt;br /&gt;
       module - command interface to the Modules package&lt;br /&gt;
&lt;br /&gt;
SYNOPSIS&lt;br /&gt;
       module [ switches ] [ sub-command ] [ sub-command-args ]&lt;br /&gt;
&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
       module is a user interface to the Modules package.  The Modules package&lt;br /&gt;
       provides for the dynamic modification of  the  user&#039;s  environment  via&lt;br /&gt;
       modulefiles.&lt;br /&gt;
&lt;br /&gt;
       Each  modulefile contains the information needed to configure the shel&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For help on a particular version of a &#039;&#039;Module&#039;&#039;, e.g. Intel compiler version X.Y, execute&lt;br /&gt;
&#039;&#039;&#039;&#039;module help compiler/intel&#039;&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module help compiler/intel&lt;br /&gt;
----------- Module Specific Help for &#039;compiler/intel/15.0&#039; --------&lt;br /&gt;
This module provides the Intel(R) compiler suite version 15.0.3 via&lt;br /&gt;
commands &#039;icc&#039;, &#039;icpc&#039; and &#039;ifort&#039; (version 15.0.3), the debugger &#039;gdb-ia&#039; (version&lt;br /&gt;
7.8.3) as well as the Intel(R) Threading Building Blocks TBB (version 4.3.5)&lt;br /&gt;
and the Integrated Performance Primitives IPP libraries (version 8.2.2)&lt;br /&gt;
(for details see also &#039;http://software.intel.com/en-us/intel-compilers/&#039;).&lt;br /&gt;
&lt;br /&gt;
The related Math Kernel Library MKL module is &#039;numlib/mkl/11.2.3&#039;.&lt;br /&gt;
The related Intel MPI module is &#039;mpi/impi/5.0.3-intel-15.0&#039;.&lt;br /&gt;
The Intel &#039;icpc&#039; should work well with GNU compiler version 4.4 to 4.8.&lt;br /&gt;
Before using TBB or IPP setup the corresponding environment, e.g. for 64bit+bash&lt;br /&gt;
  source $INTEL_HOME/tbb/bin/tbbvars.sh intel64&lt;br /&gt;
  source $INTEL_HOME/ipp/bin/ippvars.sh intel64&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
  icc           # Intel(R) C compiler&lt;br /&gt;
  icpc          # Intel(R) C++ compiler&lt;br /&gt;
  ifort         # Intel(R) Fortran compiler&lt;br /&gt;
  gdb-ia        # Intel version of GNU debugger&lt;br /&gt;
  # idb is not available anymore in Intel compiler suite 2015.&lt;br /&gt;
&lt;br /&gt;
Local documentation:&lt;br /&gt;
  Man pages: man icc; man icpc; man ifort; man gdb-ia&lt;br /&gt;
  firefox $INTEL_DOC_DIR/beginusing_lc.htm&lt;br /&gt;
  firefox $INTEL_DOC_DIR/beginusing_lf.htm&lt;br /&gt;
  The html-pages are very detailed and cover TBB and IPP as well as MKL.&lt;br /&gt;
&lt;br /&gt;
For some Intel(R) compiler option examples, hints on how to compile 32bit code&lt;br /&gt;
and solutions for less common problems see the tips and troubleshooting doc:&lt;br /&gt;
  $INTEL_DOC_DIR/intel-compiler-tips-and-troubleshooting.txt&lt;br /&gt;
&lt;br /&gt;
For details on library and include dirs please call&lt;br /&gt;
    module show compiler/intel/15.0&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Online Documentation ===&lt;br /&gt;
[https://sourceforge.net/p/modules/wiki/FAQ/ Frequently Asked Questions (FAQ)]&lt;br /&gt;
&lt;br /&gt;
== Display all available Modules ==&lt;br /&gt;
Available &#039;&#039;Modules&#039;&#039; are modulefiles that can be loaded by the user. A &#039;&#039;Module&#039;&#039; must be loaded before it provides changes to your environment, as described in the introduction to this &lt;br /&gt;
section. You can display all available &#039;&#039;Modules&#039;&#039; on the system by executing:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The short form of the command is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module av&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Available &#039;&#039;Modules&#039;&#039; can also be displayed in different modes, such as&lt;br /&gt;
* one &#039;&#039;Module&#039;&#039; per line&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module -t avail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* long format&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module -l avail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== bwHPC Cluster Information System (CIS) ==&lt;br /&gt;
A &#039;&#039;&#039;GUI version of all available and scheduled modules&#039;&#039;&#039; is available with our &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;CIS&#039;&#039;&#039; (Cluster Information System).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Software admins are also able to &#039;&#039;&#039;announce new modules&#039;&#039;&#039;, new versions and completely new software.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Academic users and other interested parties can get a &#039;&#039;&#039;summary of all installed modules and module help information&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de CIS: Cluster Information System]&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
[[File: cis.jpg]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Module categories, versions  and defaults ==&lt;br /&gt;
The bwHPC clusters (such as [[bwUniCluster]]) traditionally provide a large variety of &lt;br /&gt;
software and software versions. Therefore &#039;&#039;Modules&#039;&#039; are divided into category folders, which &lt;br /&gt;
contain software subfolders, which in turn contain the modulefile versions. A &#039;&#039;Module&#039;&#039; must be addressed&lt;br /&gt;
as follows:&lt;br /&gt;
 category/softwarename/version&lt;br /&gt;
For instance, the Intel compiler version X.Y belongs to the category of compilers; its  &lt;br /&gt;
modulefile &#039;&#039;X.Y&#039;&#039; is therefore placed under &#039;&#039;compiler/intel&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In case of multiple software versions, one version is always defined as the &#039;&#039;&#039;default&#039;&#039;&#039; &lt;br /&gt;
version. The default &#039;&#039;Module&#039;&#039; can be addressed by simply omitting the version number:&lt;br /&gt;
 category/softwarename&lt;br /&gt;
== Finding software Modules ==&lt;br /&gt;
Currently all bwHPC software packages are assigned to the following &#039;&#039;Module&#039;&#039; categories:&lt;br /&gt;
&amp;lt;!-- add wiki category for each of those, possibly just as a link --&amp;gt;&lt;br /&gt;
* [[:Category:Biology_software|bio]]&lt;br /&gt;
* [[:Category:Engineering_software|cae]]&lt;br /&gt;
* [[:Category:Chemistry_software|chem]]&lt;br /&gt;
* [[:Category:Compiler_software|compiler]]&lt;br /&gt;
* [[:Category:Debugger_software|devel]]&lt;br /&gt;
* [[BwHPC_BPG_for_Mathematics|math]]&lt;br /&gt;
* mpi&lt;br /&gt;
* [[:Category:Numerical libraries|numlib]]&lt;br /&gt;
* [[:Category:Physics software|phys]]&lt;br /&gt;
* [[:Category:System software|system]]&lt;br /&gt;
* [[:Category:Visualization|vis]]&lt;br /&gt;
You can selectively list the software of one of those categories, e.g. for the category &amp;quot;compiler&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail compiler/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Searches match a substring anchored at the beginning of the name, so the following would list all software in categories starting with a &amp;quot;c&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
while this would find nothing, since no category name starts with &amp;quot;hem&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail hem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Loading Modules ==&lt;br /&gt;
You can load a software &#039;&#039;Module&#039;&#039; into your environment, enabling easy access to software &lt;br /&gt;
that you want to use, by executing:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load category/softwarename/version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add category/softwarename/version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Loading a &#039;&#039;Module&#039;&#039; in this manner affects ONLY your environment for the current session.&lt;br /&gt;
=== Loading conflicts ===&lt;br /&gt;
By default you cannot load different versions of the same software &#039;&#039;Module&#039;&#039; in the same session. For example, loading Intel compiler version X while Intel compiler version Y is loaded results in an error message as follows:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Module &#039;compiler/intel/X&#039; conflicts with the currently loaded module(s) &#039;compiler/intel/Y&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The solution is [[#Unloading Modules|unloading]] or switching &#039;&#039;Modules&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
=== Showing the changes introduced by a Module ===&lt;br /&gt;
Loading a &#039;&#039;Module&#039;&#039; will change the environment of the current shell session. For instance the $PATH variable will be expanded by the software&#039;s binary directory. Other &#039;&#039;Module&#039;&#039; variables may even change the behavior of the current shell session or the software program(s) in a more drastic way. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Loaded &#039;&#039;Modules&#039;&#039; may also define an additional set of environment variables, which e.g. point to directories or destinations of documentation and examples. Their nomenclature is systematic: &lt;br /&gt;
{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Variable&lt;br /&gt;
! Pointing to&lt;br /&gt;
|-&lt;br /&gt;
| $SWN_HOME&lt;br /&gt;
| Root directory of the software package&lt;br /&gt;
|-&lt;br /&gt;
| $SWN_DOC_DIR&lt;br /&gt;
| Documentation&lt;br /&gt;
|-&lt;br /&gt;
| $SWN_EXA_DIR&lt;br /&gt;
| Examples&lt;br /&gt;
|-&lt;br /&gt;
| $SWN_BPR_URL&lt;br /&gt;
| URL of software&#039;s Wiki article&lt;br /&gt;
|-&lt;br /&gt;
| and many many more...&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|}&lt;br /&gt;
where SWN is a placeholder for the software &#039;&#039;Module&#039;&#039; name.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
All changes that loading a &#039;&#039;Module&#039;&#039; would make to the current shell session can be reviewed using &#039;&#039;&#039;&#039;module show category/softwarename/version&#039;&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Example (Intel compiler)&amp;lt;/u&amp;gt; &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module show compiler/intel/14.0&lt;br /&gt;
-------------------------------------------------------------------&lt;br /&gt;
/opt/bwhpc/common/modulefiles/compiler/intel/14.0:&lt;br /&gt;
module-whatis	 Intel(R) compiler suite (icc, icpc, ifort), debugger (idb), IPP and TBB ver 14.0.4 &lt;br /&gt;
setenv		 INTEL_VERSION 14.0.4 &lt;br /&gt;
setenv		 INTEL_HOME /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211 &lt;br /&gt;
setenv		 INTEL_BIN_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/bin/intel64 &lt;br /&gt;
setenv		 INTEL_LIB_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/lib/intel64 &lt;br /&gt;
setenv		 INTEL_LIB_MIC /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/lib/mic &lt;br /&gt;
setenv		 INTEL_INC_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/include &lt;br /&gt;
setenv		 INTEL_MAN_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/man/en_US &lt;br /&gt;
setenv		 INTEL_DOC_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/Documentation/en_US &lt;br /&gt;
setenv		 ICC_VERSION 14.0.4 &lt;br /&gt;
setenv		 ICC_HOME /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211 &lt;br /&gt;
setenv		 ICC_BIN_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/bin/intel64 &lt;br /&gt;
setenv		 ICC_LIB_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/lib/intel64 &lt;br /&gt;
setenv		 ICC_INC_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/include &lt;br /&gt;
setenv		 ICC_MAN_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/man/en_US &lt;br /&gt;
setenv		 ICC_DOC_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/Documentation/en_US &lt;br /&gt;
setenv		 IFORT_VERSION 14.0.4 &lt;br /&gt;
setenv		 IFORT_HOME /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211 &lt;br /&gt;
setenv		 IFORT_BIN_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/bin/intel64 &lt;br /&gt;
setenv		 IFORT_LIB_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/lib/intel64 &lt;br /&gt;
setenv		 IFORT_INC_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/include &lt;br /&gt;
setenv		 IFORT_MAN_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/man/en_US &lt;br /&gt;
setenv		 IFORT_DOC_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/Documentation/en_US &lt;br /&gt;
setenv		 IDB_VERSION 14.0.4 &lt;br /&gt;
setenv		 IDB_HOME /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211 &lt;br /&gt;
setenv		 IDB_LIB_DIR /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/debugger/lib/intel64 &lt;br /&gt;
setenv		 LANGUAGE_TERRITORY en_US &lt;br /&gt;
prepend-path	 PATH /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/bin/intel64 &lt;br /&gt;
prepend-path	 LD_LIBRARY_PATH /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/lib/intel64 &lt;br /&gt;
prepend-path	 LD_RUN_PATH /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/lib/intel64 &lt;br /&gt;
prepend-path	 MIC_LD_LIBRARY_PATH /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/lib/mic &lt;br /&gt;
prepend-path	 LIBRARY_PATH /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/lib/intel64 &lt;br /&gt;
prepend-path	 MANPATH /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/man/en_US &lt;br /&gt;
prepend-path	 NLSPATH /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/compiler/lib/intel64/locale/%l_%t/%N &lt;br /&gt;
prepend-path	 LD_LIBRARY_PATH /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/debugger/lib/intel64 &lt;br /&gt;
prepend-path	 NLSPATH /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/debugger/intel64/locale/%l_%t/%N &lt;br /&gt;
prepend-path	 INTEL_LICENSE_FILE /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composer_xe_2013_sp1.4.211/licenses &lt;br /&gt;
setenv		 IDB_JAVA_ARGUMENTS -Xms512m -Xmx1024m &lt;br /&gt;
setenv		 CC icc &lt;br /&gt;
setenv		 CXX icpc &lt;br /&gt;
setenv		 F77 ifort &lt;br /&gt;
setenv		 FC ifort &lt;br /&gt;
setenv		 F90 ifort &lt;br /&gt;
setenv		 TEST_MODULE_SCRIPT /opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/install-doc/test-compiler-intel.sh &lt;br /&gt;
setenv		 TEST_MODULE_NAME compiler/intel/14.0 &lt;br /&gt;
conflict	 compiler/intel &lt;br /&gt;
conflict	 compiler/gnu/4.9 &lt;br /&gt;
-------------------------------------------------------------------&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;&#039;module show&#039; does &#039;&#039;&#039;not&#039;&#039;&#039; load the &#039;&#039;Module&#039;&#039;!&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Modules depending on Modules ===&lt;br /&gt;
Some program &#039;&#039;Modules&#039;&#039; depend on libraries being present in the user environment. Therefore the&lt;br /&gt;
corresponding software &#039;&#039;Modules&#039;&#039; must be loaded together with the &#039;&#039;Modules&#039;&#039; of &lt;br /&gt;
the libraries. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
By default such software &#039;&#039;Modules&#039;&#039; try to load required &#039;&#039;Modules&#039;&#039; and corresponding versions automatically. However, automatic loading might fail if a different version of that required &#039;&#039;Module&#039;&#039; &lt;br /&gt;
is already loaded (cf. [[#Loading conflicts|Loading conflicts]]).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Unloading Modules ==&lt;br /&gt;
To unload or to remove a software &#039;&#039;Module&#039;&#039; execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module unload category/softwarename/version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module remove category/softwarename/version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Unloading all loaded modules ===&lt;br /&gt;
==== Purge ====&lt;br /&gt;
Unloading a &#039;&#039;Module&#039;&#039; that has been loaded by default makes it inactive for the current session only - it will be reloaded the next time you log in.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to remove all previously loaded software modules from your environment, issue the command &#039;module purge&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Example&amp;lt;/u&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) devel/gdb/7.7&lt;br /&gt;
  2) compiler/intel/14.0&lt;br /&gt;
  3) mpi/openmpi/1.8-intel-14.0(default)&lt;br /&gt;
$&lt;br /&gt;
$ module purge&lt;br /&gt;
$ module list&lt;br /&gt;
No Modulefiles Currently Loaded.&lt;br /&gt;
$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;Beware!&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;module purge&#039; works without any further confirmation.&lt;br /&gt;
&lt;br /&gt;
==== Clear ====&lt;br /&gt;
Use &#039;&#039;&#039;&#039;module clear&#039;&#039;&#039;&#039; and confirm with &amp;quot;&#039;&#039;&#039;y&#039;&#039;&#039;&amp;quot; to unload all loaded &#039;&#039;Modules&#039;&#039;, too.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Example&amp;lt;/u&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) compiler/intel/14.0&lt;br /&gt;
  2) mpi/openmpi/1.8-intel-14.0(default)&lt;br /&gt;
  3) devel/gdb/7.7&lt;br /&gt;
$ &lt;br /&gt;
$ module clear&lt;br /&gt;
Are you sure you want to clear all loaded modules!? [n] y&lt;br /&gt;
$&lt;br /&gt;
$ module list&lt;br /&gt;
No Modulefiles Currently Loaded.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Display your loaded Modules ==&lt;br /&gt;
All &#039;&#039;Modules&#039;&#039; that are currently loaded for you can be displayed by the&lt;br /&gt;
command &#039;&#039;&#039;&#039;module list&#039;&#039;&#039;&#039;. [[#Purge|See example above]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Note: You only have to load further &#039;&#039;Modules&#039;&#039; if you want to use additional software&lt;br /&gt;
packages or change the version of an already loaded software.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software job examples =&lt;br /&gt;
The &#039;&#039;Modules&#039;&#039; installed on bwHPC systems provide job examples to help you get started with the software or with submitting jobs that use it. Examples can be found via the convenient &lt;br /&gt;
variable $SWN_EXA_DIR (for a &#039;&#039;Module&#039;&#039; called &#039;&#039;&#039;SWN&#039;&#039;&#039;). It is advisable to copy the whole example folder to your $HOME directory, so you can edit those job examples. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To copy the entire job example folder of software &#039;&#039;&#039;swn&#039;&#039;&#039; to your working directory, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load category/softwarename&lt;br /&gt;
$ cp -R $SWN_EXA_DIR .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= How do Modules work? =&lt;br /&gt;
The default shell on the bwHPC clusters is bash, so explanations and examples will be shown for bash. In general, programs cannot modify the environment of the shell they are being run from, so how can the module command do exactly that?&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The module command is not a program, but a bash function.&lt;br /&gt;
You can view its definition using &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ type module&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and you will get a result like this: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ type module&lt;br /&gt;
module is a function&lt;br /&gt;
module () &lt;br /&gt;
{ &lt;br /&gt;
    eval `/usr/bin/modulecmd bash $*`&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this function, modulecmd is called. Its output to stdout is then executed inside your current shell using the bash-internal &#039;&#039;eval&#039;&#039; command. As a consequence, all output that you actually see from the module command is transmitted via stderr (file descriptor 2), since stdout (file descriptor 1) is consumed by &#039;&#039;eval&#039;&#039;.&lt;br /&gt;
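This mechanism can be sketched in plain bash without the real modulecmd (all names below are hypothetical, for illustration only): a helper prints shell code on stdout and user messages on stderr, and a wrapper function evaluates the stdout part in the current shell:&lt;br /&gt;

```shell
# Minimal sketch of the module mechanism (hypothetical helper, NOT the
# real modulecmd): shell code for the caller goes to stdout, messages
# for the user go to stderr.
emit_env_change() {
    echo "Loading demo environment ..." >&2   # user-visible message
    echo "export DEMO_HOME=/opt/demo"         # code to run in the caller
}

demo_module() {
    # eval executes the emitted code inside the *current* shell,
    # so the exported variable survives after the function returns.
    eval "$(emit_env_change)"
}

demo_module
echo "$DEMO_HOME"
```

Because the helper runs in a subshell, only the eval step can change the caller&#039;s environment; this is exactly why module must be a shell function rather than an external program.&lt;br /&gt;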
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:System software]][[Category:bwUniCluster|Environment Modules]][[Category:ForHLR Phase I|Environment Modules]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/MKL&amp;diff=5200</id>
		<title>Development/MKL</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/MKL&amp;diff=5200"/>
		<updated>2017-11-28T08:11:31Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| numlib/mkl&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| Commercial. See [https://software.intel.com/en-us/articles/end-user-license-agreement EULA].&lt;br /&gt;
|-&lt;br /&gt;
| Citing &lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://software.intel.com/en-us/intel-mkl Intel MKL Homepage] &amp;amp;#124; [https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation Online-Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&#039;&#039;&#039;Intel MKL (Math Kernel Library)&#039;&#039;&#039; is a library of optimized math routines for numerical computations such as linear algebra (using BLAS, LAPACK, ScaLAPACK) and discrete Fourier Transformation.&lt;br /&gt;
With its standard interface for matrix computation and its interface to the popular fast Fourier transform library FFTW, MKL can be used to replace other libraries with minimal code changes. In fact, a program which uses FFTW without MPI does not need to be changed at all; just recompile it with the MKL linker flags.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* [http://software.intel.com/en-us/articles/intel-math-kernel-library-documentation Online-Documentation]&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of currently available MKL modules can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;big&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS] &amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/numlib/mkl&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=700&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Show a list of available versions using &#039;module avail numlib/mkl&#039; on any HPC-C5 cluster.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
: EXAMPLE bwUniCluster&lt;br /&gt;
$ module avail numlib/mkl&lt;br /&gt;
------------------------------ /opt/bwhpc/common/modulefiles ------------------------------&lt;br /&gt;
numlib/mkl/10.3.12         numlib/mkl/11.1.4(default)&lt;br /&gt;
numlib/mkl/11.0.5          numlib/mkl/11.2.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Local documentation =&lt;br /&gt;
There is some information in the module help text, accessible via the &#039;module help numlib/mkl&#039;&lt;br /&gt;
command.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
: EXCERPT ONLY&lt;br /&gt;
$ module help numlib/mkl&lt;br /&gt;
----------- Module Specific Help for &#039;numlib/mkl/11.1.4&#039; ----------&lt;br /&gt;
This module provides the Intel(R) Math Kernel Library (MKL)&lt;br /&gt;
version 11.1.4, a fast and reliable implementation&lt;br /&gt;
of BLAS/LAPACK/FFTW (see also &#039;http://software.intel.com/en-us/intel-mkl/&#039;).&lt;br /&gt;
&lt;br /&gt;
The preferable compiler for this MKL version is &#039;compiler/intel/14.0&#039;. Linking&lt;br /&gt;
with other compilers like GNU, PGI and SUN is possible. The desired compiler&lt;br /&gt;
module (exception system GNU compiler) has to be loaded before using MKL.&lt;br /&gt;
&lt;br /&gt;
Local documentation:&lt;br /&gt;
&lt;br /&gt;
  Man pages in &#039;$MKL_MAN_DIR/man3&#039;, e.g. &#039;man dotc&#039;.&lt;br /&gt;
  firefox  $MKL_DOC_DIR/mkl_documentation.htm&lt;br /&gt;
  acroread $MKL_DOC_DIR/l_mkl_11.1.4.211.mklman.pdf&lt;br /&gt;
  acroread $MKL_DOC_DIR/l_mkl_11.1.4.211.mkl_11.1.4_lnx_userguide.pdf&lt;br /&gt;
&lt;br /&gt;
Linking examples (ifort compiler with support for blas and lapack):&lt;br /&gt;
&lt;br /&gt;
* Dynamic linking of myprog.f and parallel MKL supporting the LP64 interface:&lt;br /&gt;
&lt;br /&gt;
  ifort myprog.f -L${MKL_LIB_DIR} -I${MKL_INC_DIR}            \&lt;br /&gt;
        -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread&lt;br /&gt;
[... t.b.c. ...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After loading the module, the environment variable $MKL_DOC_DIR points to the local documentation folder. Various examples can be found in $MKLROOT/examples.&lt;br /&gt;
&lt;br /&gt;
= MKL-Specific Environments =&lt;br /&gt;
To see a list of all MKL environments set by the &#039;module load&#039;-command use &#039;env | grep MKL&#039;. Or use the command &#039;module display numlib/mkl/version&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Example &amp;lt;small&amp;gt;(bwUniCluster)&amp;lt;/small&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load numlib/mkl&lt;br /&gt;
$ env | grep MKL&lt;br /&gt;
MKLROOT=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl&lt;br /&gt;
MKL_LIB_MIC_COM=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/lib/mic&lt;br /&gt;
MKL_DOC_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composerxe/Documentation/en_US/mkl&lt;br /&gt;
MKL_NUM_THREADS=1&lt;br /&gt;
MKL_HOME=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl&lt;br /&gt;
MKL_LIB_MIC=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/lib/mic&lt;br /&gt;
MKL_MAN_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/man/en_US&lt;br /&gt;
MKL_EXA_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composerxe/Samples/en_US&lt;br /&gt;
MKL_STA_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/lib/intel64_static&lt;br /&gt;
MKL_INC_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/include&lt;br /&gt;
MKL_BIN_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/bin&lt;br /&gt;
MKL_LIB_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/lib/intel64&lt;br /&gt;
MKL_VERSION=11.1.4&lt;br /&gt;
MKL_LIB_COM=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/lib/intel64&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Compiling and linking =&lt;br /&gt;
Compilation is possible with both GCC and the Intel compilers, but it is easier with the Intel compilers, so this case is explained here.&lt;br /&gt;
After loading the compiler and the library module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load compiler/intel&lt;br /&gt;
$ module load numlib/mkl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you can include the MKL header file in your program:&lt;br /&gt;
&amp;lt;source lang=cpp&amp;gt;#include &amp;lt;mkl.h&amp;gt;&amp;lt;/source&amp;gt;&lt;br /&gt;
Compilation is simple:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ icpc -c example_mkl.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
When linking the program you have to tell the compiler to link against the mkl library:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ icpc example_mkl.o -mkl&amp;lt;/pre&amp;gt;&lt;br /&gt;
With the -mkl switch the Intel compiler automatically sets the correct linker flags, but you can also specify them explicitly, for example to enable static linking or when non-Intel compilers are used. Information about the different options can be found at https://software.intel.com/en-us/node/438568; especially helpful is the MKL link line advisor at https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor.&lt;br /&gt;
By default $MKL_NUM_THREADS is set to 1, so only one thread will be created; if you want to run the computation on more cores (after benchmarking), you can set $MKL_NUM_THREADS to a higher number.&lt;br /&gt;
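For example, to allow up to four MKL threads for programs started from the current shell session (a minimal sketch; MKL reads the variable at run time):&lt;br /&gt;

```shell
# Allow MKL to use up to 4 threads for subsequent program runs
# in this shell session, then confirm the setting.
export MKL_NUM_THREADS=4
echo "MKL_NUM_THREADS=$MKL_NUM_THREADS"
```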
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== FFTW Interface to Intel Math Kernel Library (MKL) ==&lt;br /&gt;
Sometimes [[FFTW|FFTW]] is not available on your cluster. In that case you can use the MKL library&lt;br /&gt;
instead, which also provides the FFTW functions.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Intel Math Kernel Library (MKL) offers FFTW2 and FFTW3 interfaces to Intel MKL Fast Fourier Transform and Trigonometric Transform functionality. The purpose of these interfaces is to enable applications using FFTW to gain performance with Intel MKL without changing the program source code.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Here is an excerpt from &#039;module help numlib/mkl&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Static FFTW2/3 C/Fortran interfaces can be found in dir&lt;br /&gt;
    ${MKL_HOME}/interfaces/&lt;br /&gt;
  Examples:&lt;br /&gt;
    Link to FFTW3 Fortran interface with GNU compiler and ilp64 support:&lt;br /&gt;
      ${MKL_HOME}/interfaces/fftw3xf/libfftw3xf_intel64_double_i8_gnu47.a&lt;br /&gt;
    Link to FFTW3 Fortran interface with Intel compiler and lp64 support:&lt;br /&gt;
      ${MKL_HOME}/interfaces/fftw3xf/libfftw3xf_intel64_double_i4_intel150.a&lt;br /&gt;
  The Intel FFTW interfaces requires the Intel MKL library (e.g. it does&lt;br /&gt;
  not work with ACML library). Usually it is not a problem to use a&lt;br /&gt;
  different compiler version, e.g. to use _gnu41.a with gnu 4.3 compiler.&lt;br /&gt;
  See dir ${MKL_HOME}/interfaces/ for other interfaces (fftw2/3 Fortran/C).&lt;br /&gt;
  Compiler option for include files: -I${MKL_INC_DIR}/fftw&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
See the corresponding webpages:&lt;br /&gt;
* [https://software.intel.com/en-us/node/471410 FFTW Interface to Intel Math Kernel Library]&lt;br /&gt;
* [https://software.intel.com/de-de/node/471414 FFTW2 Interface to Intel Math Kernel Library]&lt;br /&gt;
* [https://software.intel.com/en-us/node/471456 FFTW3 Interface to Intel Math Kernel Library]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
To help you get started, we provide two C++ examples. The first one computes the square of a 2x2 matrix:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;cpp&amp;quot;&amp;gt;&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;mkl.h&amp;gt;&lt;br /&gt;
using namespace std;&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
    double m[2][2] = {{2,1}, {0,2}};&lt;br /&gt;
    double c[2][2];&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; 2; ++i)&lt;br /&gt;
    {&lt;br /&gt;
        for(int j = 0; j &amp;lt; 2; ++j)&lt;br /&gt;
            cout &amp;lt;&amp;lt; m[i][j] &amp;lt;&amp;lt; &amp;quot; &amp;quot;;&lt;br /&gt;
&lt;br /&gt;
        cout &amp;lt;&amp;lt; endl;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, 2, 2, 2, 1.0, &amp;amp;m[0][0], 2, &amp;amp;m[0][0], 2, 0.0, &amp;amp;c[0][0], 2);&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; 2; ++i)&lt;br /&gt;
    {&lt;br /&gt;
        for(int j = 0; j &amp;lt; 2; ++j)&lt;br /&gt;
            cout &amp;lt;&amp;lt; c[i][j] &amp;lt;&amp;lt; &amp;quot; &amp;quot;;&lt;br /&gt;
&lt;br /&gt;
        cout &amp;lt;&amp;lt; endl;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And the second one does a fast Fourier transformation using the Intel MKL interface (DFTI):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;cpp&amp;quot;&amp;gt;&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;complex&amp;gt;&lt;br /&gt;
#include &amp;lt;cmath&amp;gt;&lt;br /&gt;
#include &amp;lt;mkl.h&amp;gt;&lt;br /&gt;
using namespace std;&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
    const int N = 3;&lt;br /&gt;
    complex&amp;lt;double&amp;gt; x[N] = {2, -1, 0.5};&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;Input: &amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; N; i++)&lt;br /&gt;
        cout &amp;lt;&amp;lt; x[i] &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    DFTI_DESCRIPTOR_HANDLE desc;&lt;br /&gt;
&lt;br /&gt;
    DftiCreateDescriptor(&amp;amp;desc, DFTI_DOUBLE, DFTI_COMPLEX, 1, N);&lt;br /&gt;
    DftiCommitDescriptor(desc);&lt;br /&gt;
    DftiComputeForward(desc, x);&lt;br /&gt;
    DftiFreeDescriptor(&amp;amp;desc);&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;\nOutput: &amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; N; i++)&lt;br /&gt;
        cout &amp;lt;&amp;lt; x[i] &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;\nTest the interpolation function f:&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; N; i++)&lt;br /&gt;
    {&lt;br /&gt;
        double t = i/(double)N;&lt;br /&gt;
        complex&amp;lt;double&amp;gt; u(0, 2*M_PI*t);&lt;br /&gt;
        complex&amp;lt;double&amp;gt; z = exp(u);&lt;br /&gt;
        complex&amp;lt;double&amp;gt; w = 1.0/N * (x[0] + x[1]*z + x[2]*z*z);&lt;br /&gt;
&lt;br /&gt;
        cout &amp;lt;&amp;lt; &amp;quot;f(&amp;quot; &amp;lt;&amp;lt; t &amp;lt;&amp;lt; &amp;quot;) = &amp;quot; &amp;lt;&amp;lt; w &amp;lt;&amp;lt; endl;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Numerical_libraries]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/MKL&amp;diff=5197</id>
		<title>Development/MKL</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/MKL&amp;diff=5197"/>
		<updated>2017-11-28T07:53:55Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| numlib/mkl&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| Commercial. See [https://software.intel.com/en-us/articles/end-user-license-agreement EULA].&lt;br /&gt;
|-&lt;br /&gt;
| Citing &lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://software.intel.com/en-us/intel-mkl Intel MKL Homepage] &amp;amp;#124; [https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation Online-Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&#039;&#039;&#039;Intel MKL (Math Kernel Library)&#039;&#039;&#039; is a library of optimized math routines for numerical computations such as linear algebra (using BLAS, LAPACK, ScaLAPACK) and discrete Fourier transforms.&lt;br /&gt;
Because MKL implements the standard BLAS/LAPACK interfaces as well as the interface of the popular fast Fourier transform library FFTW, it can replace other libraries with minimal code changes. In fact, a program that uses FFTW without MPI does not need to be changed at all: simply recompile it with the MKL linker flags.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* [http://software.intel.com/en-us/articles/intel-math-kernel-library-documentation Online-Documentation]&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of currently available MKL modules can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;big&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS] &amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/numlib/mkl&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=700&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A list of available versions can be shown with &#039;module avail numlib/mkl&#039; on any HPC-C5 cluster.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
: EXAMPLE bwUniCluster&lt;br /&gt;
$ module avail numlib/mkl&lt;br /&gt;
------------------------------ /opt/bwhpc/common/modulefiles ------------------------------&lt;br /&gt;
numlib/mkl/10.3.12         numlib/mkl/11.1.4(default)&lt;br /&gt;
numlib/mkl/11.0.5          numlib/mkl/11.2.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Local documentation =&lt;br /&gt;
Some information is available in the module help text, accessible via the &#039;module help numlib/mkl&#039; command.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
: EXCERPT ONLY&lt;br /&gt;
$ module help numlib/mkl&lt;br /&gt;
----------- Module Specific Help for &#039;numlib/mkl/11.1.4&#039; ----------&lt;br /&gt;
This module provides the Intel(R) Math Kernel Library (MKL)&lt;br /&gt;
version 11.1.4, a fast and reliable implementation&lt;br /&gt;
of BLAS/LAPACK/FFTW (see also &#039;http://software.intel.com/en-us/intel-mkl/&#039;).&lt;br /&gt;
&lt;br /&gt;
The preferable compiler for this MKL version is &#039;compiler/intel/14.0&#039;. Linking&lt;br /&gt;
with other compilers like GNU, PGI and SUN is possible. The desired compiler&lt;br /&gt;
module (exception system GNU compiler) has to be loaded before using MKL.&lt;br /&gt;
&lt;br /&gt;
Local documentation:&lt;br /&gt;
&lt;br /&gt;
  Man pages in &#039;$MKL_MAN_DIR/man3&#039;, e.g. &#039;man dotc&#039;.&lt;br /&gt;
  firefox  $MKL_DOC_DIR/mkl_documentation.htm&lt;br /&gt;
  acroread $MKL_DOC_DIR/l_mkl_11.1.4.211.mklman.pdf&lt;br /&gt;
  acroread $MKL_DOC_DIR/l_mkl_11.1.4.211.mkl_11.1.4_lnx_userguide.pdf&lt;br /&gt;
&lt;br /&gt;
Linking examples (ifort compiler with support for blas and lapack):&lt;br /&gt;
&lt;br /&gt;
* Dynamic linking of myprog.f and parallel MKL supporting the LP64 interface:&lt;br /&gt;
&lt;br /&gt;
  ifort myprog.f -L${MKL_LIB_DIR} -I${MKL_INC_DIR}            \&lt;br /&gt;
        -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread&lt;br /&gt;
[... t.b.c. ...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After loading the module, the environment variable $MKL_DOC_DIR points to the local documentation folder. Various examples can be found in $MKLROOT/examples.&lt;br /&gt;
&lt;br /&gt;
= MKL-Specific Environments =&lt;br /&gt;
To list all MKL environment variables set by the &#039;module load&#039; command, use &#039;env | grep MKL&#039; or &#039;module display numlib/mkl/version&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Example &amp;lt;small&amp;gt;(bwUniCluster)&amp;lt;/small&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load numlib/mkl&lt;br /&gt;
$ env | grep MKL&lt;br /&gt;
MKLROOT=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl&lt;br /&gt;
MKL_LIB_MIC_COM=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/lib/mic&lt;br /&gt;
MKL_DOC_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composerxe/Documentation/en_US/mkl&lt;br /&gt;
MKL_NUM_THREADS=1&lt;br /&gt;
MKL_HOME=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl&lt;br /&gt;
MKL_LIB_MIC=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/lib/mic&lt;br /&gt;
MKL_MAN_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/man/en_US&lt;br /&gt;
MKL_EXA_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composerxe/Samples/en_US&lt;br /&gt;
MKL_STA_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/lib/intel64_static&lt;br /&gt;
MKL_INC_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/include&lt;br /&gt;
MKL_BIN_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/bin&lt;br /&gt;
MKL_LIB_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/lib/intel64&lt;br /&gt;
MKL_VERSION=11.1.4&lt;br /&gt;
MKL_LIB_COM=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/lib/intel64&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Compiling and linking =&lt;br /&gt;
Compilation is possible with both the GCC and Intel compilers, but it is simpler with the Intel compilers, so that case is described here.&lt;br /&gt;
After loading the compiler and the library module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load compiler/intel&lt;br /&gt;
$ module load numlib/mkl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you can include the MKL header file in your program:&lt;br /&gt;
&amp;lt;source lang=cpp&amp;gt;#include &amp;lt;mkl.h&amp;gt;&amp;lt;/source&amp;gt;&lt;br /&gt;
Compilation is simple:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ icpc -c example_mkl.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
When linking the program, you have to tell the compiler to link against the MKL library:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ icpc example_mkl.o -mkl&amp;lt;/pre&amp;gt;&lt;br /&gt;
With the -mkl switch, the Intel compiler automatically sets the correct linker flags. You can also specify the flags explicitly, for example to enable static linking or when non-Intel compilers are used. Information about the different options can be found at http://software.intel.com/en-us/node/438568; especially helpful is the MKL link line advisor at https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor.&lt;br /&gt;
By default, $MKL_NUM_THREADS is set to 1, so only one thread will be created. If benchmarking shows that your computation benefits from more cores, you can set $MKL_NUM_THREADS to a higher number.&lt;br /&gt;
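As an illustration, a complete build-and-run sequence for an MKL program might look like the following sketch (the module names and the source file name are hypothetical and vary between clusters):

```shell
# Load a compiler and the MKL module (names vary per cluster)
module load compiler/intel
module load numlib/mkl

# Compile, then link with the Intel compiler's -mkl convenience switch
icpc -c example_mkl.cpp
icpc example_mkl.o -mkl -o example_mkl

# Allow MKL to use 4 threads for this run (the module default here is 1)
MKL_NUM_THREADS=4 ./example_mkl
```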
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== FFTW Interface to Intel Math Kernel Library (MKL) ==&lt;br /&gt;
If [[FFTW|FFTW]] is not available on your cluster, you can use the MKL library instead, which also provides the FFTW functions.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Intel Math Kernel Library (MKL) offers FFTW2 and FFTW3 interfaces to Intel MKL Fast Fourier Transform and Trigonometric Transform functionality. The purpose of these interfaces is to enable applications using FFTW to gain performance with Intel MKL without changing the program source code.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Here is an excerpt from &#039;module help numlib/mkl&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Static FFTW2/3 C/Fortran interfaces can be found in dir&lt;br /&gt;
    ${MKL_HOME}/interfaces/&lt;br /&gt;
  Examples:&lt;br /&gt;
    Link to FFTW3 Fortran interface with GNU compiler and ilp64 support:&lt;br /&gt;
      ${MKL_HOME}/interfaces/fftw3xf/libfftw3xf_intel64_double_i8_gnu47.a&lt;br /&gt;
    Link to FFTW3 Fortran interface with Intel compiler and lp64 support:&lt;br /&gt;
      ${MKL_HOME}/interfaces/fftw3xf/libfftw3xf_intel64_double_i4_intel150.a&lt;br /&gt;
  The Intel FFTW interfaces requires the Intel MKL library (e.g. it does&lt;br /&gt;
  not work with ACML library). Usually it is not a problem to use a&lt;br /&gt;
  different compiler version, e.g. to use _gnu41.a with gnu 4.3 compiler.&lt;br /&gt;
  See dir ${MKL_HOME}/interfaces/ for other interfaces (fftw2/3 Fortran/C).&lt;br /&gt;
  Compiler option for include files: -I${MKL_INC_DIR}/fftw&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
See the corresponding webpages:&lt;br /&gt;
* [https://software.intel.com/en-us/node/471410 FFTW Interface to Intel Math Kernel Library]&lt;br /&gt;
* [https://software.intel.com/de-de/node/471414 FFTW2 Interface to Intel Math Kernel Library]&lt;br /&gt;
* [https://software.intel.com/en-us/node/471456 FFTW3 Interface to Intel Math Kernel Library]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
To help you get started, we provide two C++ examples. The first computes the square of a 2x2 matrix:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;cpp&amp;quot;&amp;gt;&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;mkl.h&amp;gt;&lt;br /&gt;
using namespace std;&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
    double m[2][2] = {{2,1}, {0,2}};&lt;br /&gt;
    double c[2][2];&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; 2; ++i)&lt;br /&gt;
    {&lt;br /&gt;
        for(int j = 0; j &amp;lt; 2; ++j)&lt;br /&gt;
            cout &amp;lt;&amp;lt; m[i][j] &amp;lt;&amp;lt; &amp;quot; &amp;quot;;&lt;br /&gt;
&lt;br /&gt;
        cout &amp;lt;&amp;lt; endl;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, 2, 2, 2, 1.0, &amp;amp;m[0][0], 2, &amp;amp;m[0][0], 2, 0.0, &amp;amp;c[0][0], 2);&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; 2; ++i)&lt;br /&gt;
    {&lt;br /&gt;
        for(int j = 0; j &amp;lt; 2; ++j)&lt;br /&gt;
            cout &amp;lt;&amp;lt; c[i][j] &amp;lt;&amp;lt; &amp;quot; &amp;quot;;&lt;br /&gt;
&lt;br /&gt;
        cout &amp;lt;&amp;lt; endl;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The second performs a fast Fourier transform using the Intel MKL DFTI interface:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;cpp&amp;quot;&amp;gt;&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;complex&amp;gt;&lt;br /&gt;
#include &amp;lt;cmath&amp;gt;&lt;br /&gt;
#include &amp;lt;mkl.h&amp;gt;&lt;br /&gt;
using namespace std;&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
    const int N = 3;&lt;br /&gt;
    complex&amp;lt;double&amp;gt; x[N] = {2, -1, 0.5};&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;Input: &amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; N; i++)&lt;br /&gt;
        cout &amp;lt;&amp;lt; x[i] &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    DFTI_DESCRIPTOR_HANDLE desc;&lt;br /&gt;
&lt;br /&gt;
    DftiCreateDescriptor(&amp;amp;desc, DFTI_DOUBLE, DFTI_COMPLEX, 1, N);&lt;br /&gt;
    DftiCommitDescriptor(desc);&lt;br /&gt;
    DftiComputeForward(desc, x);&lt;br /&gt;
    DftiFreeDescriptor(&amp;amp;desc);&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;\nOutput: &amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; N; i++)&lt;br /&gt;
        cout &amp;lt;&amp;lt; x[i] &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;\nTest the interpolation function f:&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; N; i++)&lt;br /&gt;
    {&lt;br /&gt;
        double t = i/(double)N;&lt;br /&gt;
        complex&amp;lt;double&amp;gt; u(0, 2*M_PI*t);&lt;br /&gt;
        complex&amp;lt;double&amp;gt; z = exp(u);&lt;br /&gt;
        complex&amp;lt;double&amp;gt; w = 1.0/N * (x[0] + x[1]*z + x[2]*z*z);&lt;br /&gt;
&lt;br /&gt;
        cout &amp;lt;&amp;lt; &amp;quot;f(&amp;quot; &amp;lt;&amp;lt; t &amp;lt;&amp;lt; &amp;quot;) = &amp;quot; &amp;lt;&amp;lt; w &amp;lt;&amp;lt; endl;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Numerical_libraries]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/MKL&amp;diff=5196</id>
		<title>Development/MKL</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/MKL&amp;diff=5196"/>
		<updated>2017-11-28T07:51:05Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| numlib/mkl&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| Commercial. See [https://software.intel.com/en-us/articles/end-user-license-agreement EULA].&lt;br /&gt;
|-&lt;br /&gt;
| Citing &lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://software.intel.com/en-us/intel-mkl Intel MKL Homepage] &amp;amp;#124; [https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation Online-Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&#039;&#039;&#039;Intel MKL (Math Kernel Library)&#039;&#039;&#039; is a library of optimized math routines for numerical computations such as linear algebra (using BLAS, LAPACK, ScaLAPACK) and discrete Fourier transforms.&lt;br /&gt;
Because MKL implements the standard BLAS/LAPACK interfaces as well as the interface of the popular fast Fourier transform library FFTW, it can replace other libraries with minimal code changes. In fact, a program that uses FFTW without MPI does not need to be changed at all: simply recompile it with the MKL linker flags.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* [http://software.intel.com/en-us/articles/intel-math-kernel-library-documentation Online-Documentation]&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of currently available MKL modules can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;big&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS] &amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/numlib/mkl&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=700&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A list of available versions can be shown with &#039;module avail numlib/mkl&#039; on any HPC-C5 cluster.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
: EXAMPLE bwUniCluster&lt;br /&gt;
$ module avail numlib/mkl&lt;br /&gt;
------------------------------ /opt/bwhpc/common/modulefiles ------------------------------&lt;br /&gt;
numlib/mkl/10.3.12         numlib/mkl/11.1.4(default)&lt;br /&gt;
numlib/mkl/11.0.5          numlib/mkl/11.2.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Local documentation =&lt;br /&gt;
Some information is available in the module help text, accessible via the &#039;module help numlib/mkl&#039; command.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
: EXCERPT ONLY&lt;br /&gt;
$ module help numlib/mkl&lt;br /&gt;
----------- Module Specific Help for &#039;numlib/mkl/11.1.4&#039; ----------&lt;br /&gt;
This module provides the Intel(R) Math Kernel Library (MKL)&lt;br /&gt;
version 11.1.4, a fast and reliable implementation&lt;br /&gt;
of BLAS/LAPACK/FFTW (see also &#039;http://software.intel.com/en-us/intel-mkl/&#039;).&lt;br /&gt;
&lt;br /&gt;
The preferable compiler for this MKL version is &#039;compiler/intel/14.0&#039;. Linking&lt;br /&gt;
with other compilers like GNU, PGI and SUN is possible. The desired compiler&lt;br /&gt;
module (exception system GNU compiler) has to be loaded before using MKL.&lt;br /&gt;
&lt;br /&gt;
Local documentation:&lt;br /&gt;
&lt;br /&gt;
  Man pages in &#039;$MKL_MAN_DIR/man3&#039;, e.g. &#039;man dotc&#039;.&lt;br /&gt;
  firefox  $MKL_DOC_DIR/mkl_documentation.htm&lt;br /&gt;
  acroread $MKL_DOC_DIR/l_mkl_11.1.4.211.mklman.pdf&lt;br /&gt;
  acroread $MKL_DOC_DIR/l_mkl_11.1.4.211.mkl_11.1.4_lnx_userguide.pdf&lt;br /&gt;
&lt;br /&gt;
Linking examples (ifort compiler with support for blas and lapack):&lt;br /&gt;
&lt;br /&gt;
* Dynamic linking of myprog.f and parallel MKL supporting the LP64 interface:&lt;br /&gt;
&lt;br /&gt;
  ifort myprog.f -L${MKL_LIB_DIR} -I${MKL_INC_DIR}            \&lt;br /&gt;
        -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread&lt;br /&gt;
[... t.b.c. ...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After loading the module, the environment variable $MKL_DOC_DIR points to the local documentation folder. Various examples can be found in $MKLROOT/examples.&lt;br /&gt;
&lt;br /&gt;
= MKL-Specific Environments =&lt;br /&gt;
To list all MKL environment variables set by the &#039;module load&#039; command, use &#039;env | grep MKL&#039; or &#039;module display numlib/mkl/version&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Example &amp;lt;small&amp;gt;(bwUniCluster)&amp;lt;/small&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load numlib/mkl&lt;br /&gt;
$ env | grep MKL&lt;br /&gt;
MKLROOT=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl&lt;br /&gt;
MKL_LIB_MIC_COM=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/lib/mic&lt;br /&gt;
MKL_DOC_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composerxe/Documentation/en_US/mkl&lt;br /&gt;
MKL_NUM_THREADS=1&lt;br /&gt;
MKL_HOME=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl&lt;br /&gt;
MKL_LIB_MIC=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/lib/mic&lt;br /&gt;
MKL_MAN_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/man/en_US&lt;br /&gt;
MKL_EXA_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/composerxe/Samples/en_US&lt;br /&gt;
MKL_STA_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/lib/intel64_static&lt;br /&gt;
MKL_INC_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/include&lt;br /&gt;
MKL_BIN_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/bin&lt;br /&gt;
MKL_LIB_DIR=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/mkl/lib/intel64&lt;br /&gt;
MKL_VERSION=11.1.4&lt;br /&gt;
MKL_LIB_COM=/opt/bwhpc/common/compiler/intel/compxe.2013.sp1.4.211/lib/intel64&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Compiling and linking =&lt;br /&gt;
Compilation is possible with both the GCC and Intel compilers, but it is simpler with the Intel compilers, so that case is described here.&lt;br /&gt;
After loading the compiler and the library module with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load compiler/intel&lt;br /&gt;
$ module load numlib/mkl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you can include the MKL header file in your program:&lt;br /&gt;
&amp;lt;source lang=cpp&amp;gt;#include &amp;lt;mkl.h&amp;gt;&amp;lt;/source&amp;gt;&lt;br /&gt;
Compilation is simple:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ icpc -c example_mkl.c&amp;lt;/pre&amp;gt;&lt;br /&gt;
When linking the program, you have to tell the compiler to link against the MKL library:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ icpc example_mkl.o -mkl&amp;lt;/pre&amp;gt;&lt;br /&gt;
With the -mkl switch, the Intel compiler automatically sets the correct linker flags. You can also specify the flags explicitly, for example to enable static linking or when non-Intel compilers are used. Information about the different options can be found at http://software.intel.com/en-us/node/438568; especially helpful is the MKL link line advisor at http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor.&lt;br /&gt;
By default, $MKL_NUM_THREADS is set to 1, so only one thread will be created. If benchmarking shows that your computation benefits from more cores, you can set $MKL_NUM_THREADS to a higher number.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== FFTW Interface to Intel Math Kernel Library (MKL) ==&lt;br /&gt;
If [[FFTW|FFTW]] is not available on your cluster, you can use the MKL library instead, which also provides the FFTW functions.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Intel Math Kernel Library (MKL) offers FFTW2 and FFTW3 interfaces to Intel MKL Fast Fourier Transform and Trigonometric Transform functionality. The purpose of these interfaces is to enable applications using FFTW to gain performance with Intel MKL without changing the program source code.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Here is an excerpt from &#039;module help numlib/mkl&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Static FFTW2/3 C/Fortran interfaces can be found in dir&lt;br /&gt;
    ${MKL_HOME}/interfaces/&lt;br /&gt;
  Examples:&lt;br /&gt;
    Link to FFTW3 Fortran interface with GNU compiler and ilp64 support:&lt;br /&gt;
      ${MKL_HOME}/interfaces/fftw3xf/libfftw3xf_intel64_double_i8_gnu47.a&lt;br /&gt;
    Link to FFTW3 Fortran interface with Intel compiler and lp64 support:&lt;br /&gt;
      ${MKL_HOME}/interfaces/fftw3xf/libfftw3xf_intel64_double_i4_intel150.a&lt;br /&gt;
  The Intel FFTW interfaces requires the Intel MKL library (e.g. it does&lt;br /&gt;
  not work with ACML library). Usually it is not a problem to use a&lt;br /&gt;
  different compiler version, e.g. to use _gnu41.a with gnu 4.3 compiler.&lt;br /&gt;
  See dir ${MKL_HOME}/interfaces/ for other interfaces (fftw2/3 Fortran/C).&lt;br /&gt;
  Compiler option for include files: -I${MKL_INC_DIR}/fftw&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
See the corresponding webpages:&lt;br /&gt;
* [https://software.intel.com/en-us/node/471410 FFTW Interface to Intel Math Kernel Library]&lt;br /&gt;
* [https://software.intel.com/de-de/node/471414 FFTW2 Interface to Intel Math Kernel Library]&lt;br /&gt;
* [https://software.intel.com/en-us/node/471456 FFTW3 Interface to Intel Math Kernel Library]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
To help you get started, we provide two C++ examples. The first computes the square of a 2x2 matrix:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;cpp&amp;quot;&amp;gt;&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;mkl.h&amp;gt;&lt;br /&gt;
using namespace std;&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
    double m[2][2] = {{2,1}, {0,2}};&lt;br /&gt;
    double c[2][2];&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; 2; ++i)&lt;br /&gt;
    {&lt;br /&gt;
        for(int j = 0; j &amp;lt; 2; ++j)&lt;br /&gt;
            cout &amp;lt;&amp;lt; m[i][j] &amp;lt;&amp;lt; &amp;quot; &amp;quot;;&lt;br /&gt;
&lt;br /&gt;
        cout &amp;lt;&amp;lt; endl;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, 2, 2, 2, 1.0, &amp;amp;m[0][0], 2, &amp;amp;m[0][0], 2, 0.0, &amp;amp;c[0][0], 2);&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; 2; ++i)&lt;br /&gt;
    {&lt;br /&gt;
        for(int j = 0; j &amp;lt; 2; ++j)&lt;br /&gt;
            cout &amp;lt;&amp;lt; c[i][j] &amp;lt;&amp;lt; &amp;quot; &amp;quot;;&lt;br /&gt;
&lt;br /&gt;
        cout &amp;lt;&amp;lt; endl;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The second performs a fast Fourier transform using the Intel MKL DFTI interface:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;cpp&amp;quot;&amp;gt;&lt;br /&gt;
#include &amp;lt;iostream&amp;gt;&lt;br /&gt;
#include &amp;lt;complex&amp;gt;&lt;br /&gt;
#include &amp;lt;cmath&amp;gt;&lt;br /&gt;
#include &amp;lt;mkl.h&amp;gt;&lt;br /&gt;
using namespace std;&lt;br /&gt;
&lt;br /&gt;
int main()&lt;br /&gt;
{&lt;br /&gt;
    const int N = 3;&lt;br /&gt;
    complex&amp;lt;double&amp;gt; x[N] = {2, -1, 0.5};&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;Input: &amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; N; i++)&lt;br /&gt;
        cout &amp;lt;&amp;lt; x[i] &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    DFTI_DESCRIPTOR_HANDLE desc;&lt;br /&gt;
&lt;br /&gt;
    DftiCreateDescriptor(&amp;amp;desc, DFTI_DOUBLE, DFTI_COMPLEX, 1, N);&lt;br /&gt;
    DftiCommitDescriptor(desc);&lt;br /&gt;
    DftiComputeForward(desc, x);&lt;br /&gt;
    DftiFreeDescriptor(&amp;amp;desc);&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;\nOutput: &amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; N; i++)&lt;br /&gt;
        cout &amp;lt;&amp;lt; x[i] &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;\nTest the interpolation function f:&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
&lt;br /&gt;
    for(int i = 0; i &amp;lt; N; i++)&lt;br /&gt;
    {&lt;br /&gt;
        double t = i/(double)N;&lt;br /&gt;
        complex&amp;lt;double&amp;gt; u(0, 2*M_PI*t);&lt;br /&gt;
        complex&amp;lt;double&amp;gt; z = exp(u);&lt;br /&gt;
        complex&amp;lt;double&amp;gt; w = 1.0/N * (x[0] + x[1]*z + x[2]*z*z);&lt;br /&gt;
&lt;br /&gt;
        cout &amp;lt;&amp;lt; &amp;quot;f(&amp;quot; &amp;lt;&amp;lt; t &amp;lt;&amp;lt; &amp;quot;) = &amp;quot; &amp;lt;&amp;lt; w &amp;lt;&amp;lt; endl;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Numerical_libraries]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/Intel_Compiler&amp;diff=5195</id>
		<title>Development/Intel Compiler</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/Intel_Compiler&amp;diff=5195"/>
		<updated>2017-11-28T07:50:30Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| compiler/intel&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| Commercial. See $INTEL_HOME/install-doc/EULA.txt. &amp;amp;#124; [https://software.intel.com/en-us/faq/licensing Intel Product Licensing FAQ]&lt;br /&gt;
|-&lt;br /&gt;
|Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://software.intel.com/en-us/c-compilers Intel C-Compiler Homepage]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| [[#Debugger|Yes (Intel Debugger GUI version)]]&lt;br /&gt;
|-&lt;br /&gt;
| Included modules&lt;br /&gt;
|  icc &amp;amp;#124; icpc &amp;amp;#124; ifort &amp;amp;#124; idb &amp;amp;#124; gdb-ia&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
The &#039;&#039;&#039;Intel Compiler&#039;&#039;&#039; of the &#039;&#039;&#039;Intel Composer XE Suite&#039;&#039;&#039; consists of tools to compile and debug C, C++ and Fortran programs:&lt;br /&gt;
{| width=500px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| icc&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| Intel  C compiler&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| icpc&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| Intel C++ compiler&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| ifort&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| [https://software.intel.com/en-us/fortran-compilers Intel Fortran compiler]&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| [[#GUI|idb]]&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| Intel debugger in GUI mode (until version 14 only)&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| [[#Console Mode|gdb-ia]]&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| Intel version of GNU debugger in console mode (from version 15)&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| [[#Console Mode|idbc]]&lt;br /&gt;
|style=&amp;quot;padding:3px&amp;quot;| Intel debugger in console mode (until version 14 only)&lt;br /&gt;
|}&lt;br /&gt;
The Intel compiler suite also includes the TBB (Threading Building Blocks) and IPP (Integrated Performance Primitives) libraries.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
More information about the MPI versions of the Intel Compiler is available here:&lt;br /&gt;
* [[BwHPC_BPG_for_Parallel_Programming|Best Practices Guide for Parallel Programming]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all [[:Category:BwHPC Cluster|bwHPC Clusters]] can be obtained from the cluster information system:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System]&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/compiler/intel&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=700&lt;br /&gt;
}}&lt;br /&gt;
!--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the command line interface of any bwHPC cluster you&#039;ll get a list of available versions &lt;br /&gt;
by executing:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail compiler/intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Loading the module ==&lt;br /&gt;
=== Default Version ===&lt;br /&gt;
You can load the default version of the Intel Compiler with the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load compiler/intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If loading the module fails, check whether another compiler module is already loaded &lt;br /&gt;
with &#039;module list&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Specific (newer or older) Version ===&lt;br /&gt;
If you wish to load a specific (older or newer) version, if available, &lt;br /&gt;
add the version number of the Intel compiler, e.g. to load Intel compiler suite 17.0, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module unload compiler/intel&lt;br /&gt;
$ module   load compiler/intel/17.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note: Only one compiler can be loaded in your active session; hence, before loading a new Intel compiler version you must unload the currently loaded version. &lt;br /&gt;
&lt;br /&gt;
For unloading the Intel compiler the version number is not required:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module unload compiler/intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
unloads any currently loaded Intel compiler version.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Intel Compiler Specific Environment Variables =&lt;br /&gt;
To see a list of all Intel Compiler environment variables set by the &#039;module load&#039; command execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module show compiler/intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Documentation =&lt;br /&gt;
== Online documentation ==&lt;br /&gt;
* [https://software.intel.com/en-us/articles/intel-c-composer-xe-documentation Intel® C-Compiler Documentation]&lt;br /&gt;
* [https://software.intel.com/en-us/intel-software-technical-documentation Intel® Software Documentation Library]&lt;br /&gt;
&lt;br /&gt;
== Local documentation == &lt;br /&gt;
For version specific documentation see the help page of the module. For example&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module help compiler/intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
will show the information for the default version.&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module help compiler/intel&lt;br /&gt;
----------- Module Specific Help for &#039;compiler/intel/15.0&#039; --------&lt;br /&gt;
This module provides the Intel(R) compiler suite version 15.0.3 via&lt;br /&gt;
commands &#039;icc&#039;, &#039;icpc&#039; and &#039;ifort&#039; (version 15.0.3), the debugger &#039;gdb-ia&#039; (version&lt;br /&gt;
7.8.3) as well as the Intel(R) Threading Building Blocks TBB (version 4.3.5)&lt;br /&gt;
and the Integrated Performance Primitives IPP libraries (version 8.2.2)&lt;br /&gt;
(for details see also &#039;http://software.intel.com/en-us/intel-compilers/&#039;).&lt;br /&gt;
&lt;br /&gt;
The related Math Kernel Library MKL module is &#039;numlib/mkl/11.2.3&#039;.&lt;br /&gt;
The related Intel MPI module is &#039;mpi/impi/5.0.3-intel-15.0&#039;.&lt;br /&gt;
The Intel &#039;icpc&#039; should work well with GNU compiler version 4.4 to 4.8.&lt;br /&gt;
Before using TBB or IPP setup the corresponding environment, e.g. for 64bit+bash&lt;br /&gt;
  source $INTEL_HOME/tbb/bin/tbbvars.sh intel64&lt;br /&gt;
  source $INTEL_HOME/ipp/bin/ippvars.sh intel64&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
  icc           # Intel(R) C compiler&lt;br /&gt;
  icpc          # Intel(R) C++ compiler&lt;br /&gt;
  ifort         # Intel(R) Fortran compiler&lt;br /&gt;
  gdb-ia        # Intel version of GNU debugger&lt;br /&gt;
  # idb is not available anymore in Intel compiler suite 2015.&lt;br /&gt;
&lt;br /&gt;
Local documentation:&lt;br /&gt;
  Man pages: man icc; man icpc; man ifort; man gdb-ia&lt;br /&gt;
  firefox $INTEL_DOC_DIR/beginusing_lc.htm&lt;br /&gt;
  firefox $INTEL_DOC_DIR/beginusing_lf.htm&lt;br /&gt;
  The html-pages are very detailed and cover TBB and IPP as well as MKL.&lt;br /&gt;
&lt;br /&gt;
For some Intel(R) compiler option examples, hints on how to compile 32bit code&lt;br /&gt;
and solutions for less common problems see the tips and troubleshooting doc:&lt;br /&gt;
  $INTEL_DOC_DIR/intel-compiler-tips-and-troubleshooting.txt&lt;br /&gt;
&lt;br /&gt;
For details on library and include dirs please call&lt;br /&gt;
    module show compiler/intel/15.0&lt;br /&gt;
[...]&amp;lt;/pre&amp;gt;&lt;br /&gt;
!--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Manual Pages ==&lt;br /&gt;
For detailed lists of the different program options consult the particular man page&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ man icc&lt;br /&gt;
$ man icpc&lt;br /&gt;
$ man ifort&lt;br /&gt;
$ man idb&lt;br /&gt;
$ man gdb-ia&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Debugger =&lt;br /&gt;
Please use DDT. It is a parallel debugger with a graphical user interface and can also be used for debugging serial programs. The description of the debugger can be found on the website&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
http://www.bwhpc-c5.de/wiki/index.php/DDT&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Optimizations = &lt;br /&gt;
You can turn on various optimization options to enhance the performance of your program. Which options are the best depends on the specific program and can be determined by benchmarking your code. A command which gives good performance and a decent file size is&lt;br /&gt;
&#039;&#039;&#039;icc -xHost -O2 ex.c&#039;&#039;&#039;.&lt;br /&gt;
The option &#039;&#039;&#039;-xHost&#039;&#039;&#039; generates instructions for the highest instruction set available on the processor of the compilation host. If you want to generate optimal code on bwUniCluster for both nodes with Sandy Bridge architecture and nodes with Broadwell architecture, you must compile your code with the options &#039;&#039;&#039;-xAVX -axCORE-AVX2&#039;&#039;&#039; (instead of &#039;&#039;&#039;-xHost&#039;&#039;&#039;). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are more aggressive optimization flags and levels (e.g. -O3 or -fast and the options they imply), but the compiled programs can get quite large due to inlining. In addition, compilation will probably take longer, and the compiled program may even run slower -- or may require installation of additional statically-linked libraries. Such a command would be, for example:&lt;br /&gt;
&#039;&#039;&#039;icc -fast ex.c&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Profiling =&lt;br /&gt;
Profiling an application means augmenting the compiled binary with information on execution counts per source-line (and basic blocks) -- e.g. one may see how many times an if-statement has been evaluated to true. To do so, compile your code with the profile flag:&lt;br /&gt;
&#039;&#039;&#039;icc -p ex.c -o ex&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Using the gprof tool, one may manually inspect the execution count of each executed line of source code.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For profile-guided optimization, recompile your source using &lt;br /&gt;
&#039;&#039;&#039;icc -prof-gen ex.c -o ex&#039;&#039;&#039;&lt;br /&gt;
then execute the most common and typical use case of your application, and then recompile using the generated profile counts (and using optimization):&lt;br /&gt;
&#039;&#039;&#039;icc -prof-use -O2 ex.c -o ex&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Further literature ==&lt;br /&gt;
A tutorial on optimization can be found at [https://software.intel.com/sites/default/files/managed/c1/61/compiler-essentials.1.pdf Compiler-Essentials.pdf]&lt;br /&gt;
and to list the different optimization options, execute&lt;br /&gt;
&#039;&#039;&#039;icc -help opt&#039;&#039;&#039;&lt;br /&gt;
&#039;&#039;&#039;icc -help advanced&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
or the previously described catch-all option &#039;&#039;&#039;&#039;&#039;-v --help&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category:Compiler_software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Software/Mathematica&amp;diff=5194</id>
		<title>BwUniCluster2.0/Software/Mathematica</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Software/Mathematica&amp;diff=5194"/>
		<updated>2017-11-28T07:46:25Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| math/mathematica&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| Commercial. See [http://www.wolfram.com/mathematica/pricing/ Mathematica Pricing].&lt;br /&gt;
|-&lt;br /&gt;
| Citing &lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.wolfram.com/mathematica/ Homepage] &amp;amp;#124; [http://www.wolfram.com/support/?source=nav Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| Yes (See [[VNC]])&lt;br /&gt;
|-&lt;br /&gt;
| Comments&lt;br /&gt;
| Mathematica might not be available on all locations and clusters.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Description == &lt;br /&gt;
&#039;&#039;&#039;Mathematica&#039;&#039;&#039; is a software package from Wolfram for symbolic and numerical computation with many features, such as powerful visualization and application-specific functions.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Versions and Availability ==&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/math/mathematica&lt;br /&gt;
|width=90%&lt;br /&gt;
|height=330&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
= Loading =&lt;br /&gt;
To check if Mathematica is available execute&lt;br /&gt;
 $ module avail math/mathematica&lt;br /&gt;
&lt;br /&gt;
If Mathematica is available you can load a specific version or you can load the default version with&lt;br /&gt;
 $ module load math/mathematica&lt;br /&gt;
&lt;br /&gt;
= General Usage =&lt;br /&gt;
Mathematica can be used interactively on the command line or with a graphical front-end.&lt;br /&gt;
Alternatively, Mathematica can run a script in batch mode, which is useful when submitting batch jobs to the cluster.&lt;br /&gt;
After loading Mathematica the different modes can be used as follows.&lt;br /&gt;
&lt;br /&gt;
*Interactive with GUI (needs [[X11 forwarding]] or [[VNC]]):&amp;lt;br /&amp;gt;&amp;lt;pre&amp;gt;$ mathematica&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Interactive with command line:&amp;lt;br /&amp;gt;&amp;lt;pre&amp;gt;$ math&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Non-Interactive. Looks like interactive with command line but input is taken from script.m:&amp;lt;br /&amp;gt;&amp;lt;pre&amp;gt;$ math &amp;lt; script.m&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Non-Interactive. You have to explicitly specify what you want to print:&amp;lt;br /&amp;gt;&amp;lt;pre&amp;gt;$ math -script script.m&amp;lt;/pre&amp;gt;However, the output is in InputForm, which is suitable as input for other Mathematica calculations; if you want pretty output, you have to change the output format to OutputForm like this:&amp;lt;br /&amp;gt;&amp;lt;pre&amp;gt;SetOptions[$Output, FormatType -&amp;gt; OutputForm]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For an introduction to Mathematica we refer to the online documentation ([http://reference.wolfram.com/language/ Mathematica Documentation Center]). Specific information on the use in a compute cluster is in the next section.&lt;br /&gt;
&lt;br /&gt;
= Parallel Computation =&lt;br /&gt;
Obviously parallel computation can be useful to speed up time-consuming computations, but it should also be used for multiple computations with different input data files (e.g. for parametric studies). The reason for this&lt;br /&gt;
is the license model from Wolfram. There are two types of licenses. &lt;br /&gt;
* Each time an instance of Mathematica starts, a so-called MathKernel license is used up.&lt;br /&gt;
* Each time Mathematica spawns a subprocess, a license called SubMathKernel is used up.&lt;br /&gt;
Because there are usually many more SubMathKernel licenses than MathKernel licenses, it is recommended to start multiple subprocesses instead of submitting multiple jobs.&lt;br /&gt;
&lt;br /&gt;
Remember to request the correct number of processors in your [[Batch Jobs]] script, but note that Mathematica will not automatically use these processors.&lt;br /&gt;
In general you have to adjust your code to benefit from more cores.&lt;br /&gt;
To do this you first have to start a number of kernels which are then used by ParallelTable to run the computations in parallel.&lt;br /&gt;
This basic example computes the first eight square numbers in parallel.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LaunchKernels[8]&lt;br /&gt;
f[i_] := i^2&lt;br /&gt;
DistributeDefinitions[f]&lt;br /&gt;
ParallelTable[f[i], {i,0,7}]&lt;br /&gt;
CloseKernels[]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that the use of DistributeDefinitions is necessary because f is a user-defined function and its definition must be available to all kernels.&lt;br /&gt;
&lt;br /&gt;
The next example is the computation of a numerical solution for the following initial value problem&lt;br /&gt;
 x&#039;(t) = x(t)^2 - x(t)^3&lt;br /&gt;
 x(0) = d&lt;br /&gt;
It is difficult to solve this equation with high accuracy at the point 1/d.&lt;br /&gt;
We decrease the step size of the algorithm to see how the execution time and the relative error at the point 1/d change.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
d = 0.00001&lt;br /&gt;
&lt;br /&gt;
y[t_] := 1/(ProductLog[(1/d-1)*Exp[1/d-1-t]]+1)	(* analytical solution *)&lt;br /&gt;
relerr[t_, s_] := Abs[(y[t] - x[t]/.s)/y[t]]			(* relative error of solution s at time t *)&lt;br /&gt;
g[v_] := {v[[1]], v[[2]][[1]], relerr[1/d, v[[2]][[2]][[1]]]}	(* helper function *)&lt;br /&gt;
&lt;br /&gt;
(* compute numerical solutions for 6 different step sizes *)&lt;br /&gt;
LaunchKernels[6]&lt;br /&gt;
tbl = ParallelTable[{step, Timing[NDSolve[{x&#039;[t] == x[t]^2 - x[t]^3, x[0] == d}, x, {t, 0, 2/d}, MaxStepSize-&amp;gt;step, MaxSteps-&amp;gt;10000000]]}, {step,5,3,-0.4}]&lt;br /&gt;
CloseKernels[]&lt;br /&gt;
&lt;br /&gt;
SetOptions[$Output, FormatType -&amp;gt; OutputForm]    (* better output when called with -script *)&lt;br /&gt;
Print[ Grid[Join[{{&amp;quot;Stepsize&amp;quot;, &amp;quot;Time&amp;quot;, &amp;quot;Error at 1/d&amp;quot;}}, Map[g, tbl]]] ]    (* print the result *)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Mathematica features some further functions similar to ParallelTable which are documented on the [http://reference.wolfram.com/mathematica/guide/ParallelComputing.html Mathematica parallel computing webpage] from Wolfram. One such function is ParallelSubmit which can be used to start two (or more) completely unrelated functions in parallel. But bear in mind that the functions should have similar running times because the scheduler reserves the requested resources for the runtime of the whole job. The following code shows how to pack two 1-core jobs into one 2-core job.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
f[r_] := (&lt;br /&gt;
    d = 0.000001;&lt;br /&gt;
    {time, sol} = Timing[NDSolve[{x&#039;[t] == r*x[t]^2 - x[t]^3, x[0] == d}, x, {t, 0, 2/d}, MaxStepSize-&amp;gt;3, MaxSteps-&amp;gt;10000000]];&lt;br /&gt;
    {time, x[1/d]/.sol[[1]]}&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
g[r_] := (&lt;br /&gt;
    {time, sol} = Timing[NDSolve[{y&#039;&#039;[t] + (r+1)*y&#039;[t] + r*y[t] == 0, y[0] == 1, y&#039;[0] == 0},  y, {t, 0, 30}, MaxStepSize-&amp;gt;0.000002, MaxSteps-&amp;gt;10000000000]];&lt;br /&gt;
    {time, y[30]/.sol[[1]]}&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kernel1 := (&lt;br /&gt;
    file = OpenWrite[&amp;quot;out1.txt&amp;quot;, FormatType -&amp;gt; OutputForm];&lt;br /&gt;
    $Output = {file};&lt;br /&gt;
    Print[f[1.2]];&lt;br /&gt;
    Close[file];&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
kernel2 := (&lt;br /&gt;
    file = OpenWrite[&amp;quot;out2.txt&amp;quot;, FormatType -&amp;gt; OutputForm];&lt;br /&gt;
    $Output = {file};&lt;br /&gt;
    Print[g[1000]];&lt;br /&gt;
    Close[file];&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
LaunchKernels[2]&lt;br /&gt;
DistributeDefinitions[kernel1, kernel2]&lt;br /&gt;
e = {ParallelSubmit[kernel1], ParallelSubmit[kernel2]}&lt;br /&gt;
WaitAll[e]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The helper functions kernel1 and kernel2 are started in different threads, where they execute f and g. The output of these functions is redirected to separate files, so the results can be clearly distinguished and examined as soon as each function is done.&lt;br /&gt;
&lt;br /&gt;
[[Category:Mathematics software]] [[Category:bwUniCluster]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/Parallel_Programming&amp;diff=5192</id>
		<title>Development/Parallel Programming</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/Parallel_Programming&amp;diff=5192"/>
		<updated>2017-11-27T14:07:04Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| mpi/impi &amp;amp;#124; mpi/openmpi &lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://software.intel.com/en-us/intel-mpi-library Intel® MPI Library] &amp;amp;#124; [http://www.open-mpi.org/ Open MPI] &lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://software.intel.com/en-us/articles/intel-mpi-library-licensing-faq Intel MPI Library Licensing FAQ] &amp;lt;small&amp;gt;install-doc/EULA.txt&amp;lt;/small&amp;gt; &amp;amp;#124; [http://www.open-mpi.org/community/license.php Open MPI License]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
This page provides information regarding the supported parallel programming paradigms and specific hints on their usage.&lt;br /&gt;
Please refer to the [[BwUniCluster_Environment_Modules|Modules Documentation]] for how to set up your environment on bwUniCluster to load a specific software installation.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
More information about the compilers installed on our clusters is available here:&lt;br /&gt;
* [[Intel_Compiler|Intel Compiler Suite]]&lt;br /&gt;
* [[GCC|GNU Compiler (GCC)]]&lt;br /&gt;
* [[General_compiler_usage|General Compiler Usage (incl. PGI Compiler)]]&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
=== impi (Intel) ===&lt;br /&gt;
A list of currently available versions on the bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/mpi/impi&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=1200&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
=== openmpi ===&lt;br /&gt;
A list of currently available versions on the bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/mpi/openmpi&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=2100&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= OpenMP =&lt;br /&gt;
== General Information ==&lt;br /&gt;
OpenMP is a mature specification [http://www.openmp.org/specifications/] for easy, portable, and, most importantly, incremental node-level parallelisation of code.&lt;br /&gt;
Being a thread-based approach, OpenMP is aimed at more fine-grained parallelism than [[BwHPC_Best_Practices_Repository#MPI|MPI]].&lt;br /&gt;
Although there have been extensions to extend OpenMP for inter-node parallelisation, it is a node-level approach aimed to make best usage of a node&#039;s cores&amp;lt;!-- -- the section [[#Hybrid Parallelisation|Hybrid Parallelisation]] will explain how to parallelise utilizing MPI plus a thread-based parallelization paradigm like OpenMP--&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
With regard to ease-of-use, OpenMP is ahead of any other common approach: the source-code is annotated using &amp;lt;tt&amp;gt;#pragma omp&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;!$omp&amp;lt;/tt&amp;gt; statements, in C/C++ and Fortran respectively.&lt;br /&gt;
Whenever the compiler encounters a semantic block of code encapsulated in a parallel region, this block of code is transparently compiled into a function, which is passed to a so-called team-of-threads upon entering this semantic block. This fork-join model of execution eases a lot of the programmer&#039;s pain involved with threads.&lt;br /&gt;
Being a loop-centric approach, OpenMP is aimed at codes with long/time-consuming loops.&lt;br /&gt;
A single combined directive &amp;lt;tt&amp;gt;pragma omp parallel for&amp;lt;/tt&amp;gt; will tell the compiler to automatically parallelise the ensuing for-loop.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The following example is a bit more advanced in that even reductions of variables over multiple threads are easy to parallelise:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
   double norm2 = 0.0;&lt;br /&gt;
   for (int i=0; i &amp;lt; VECTOR_LEN; i++)&lt;br /&gt;
     norm2 += (v[i]*v[i]);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
is parallelized by just adding a single line as in:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
   double norm2 = 0.0;&lt;br /&gt;
#  pragma omp parallel for reduction(+:norm2)&lt;br /&gt;
   for (int i=0; i &amp;lt; VECTOR_LEN; i++)&lt;br /&gt;
     norm2 += (v[i]*v[i]);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
With &amp;lt;tt&amp;gt;VECTOR_LEN&amp;lt;/tt&amp;gt; being large enough, this piece of code compiled with OpenMP will run in parallel, exhibiting very nice speedup.&lt;br /&gt;
Compiled without, the code remains as is. Developers may therefore incrementally parallelize their application based on the profile derived from performance analysis tools, starting with the most time-consuming loops.&lt;br /&gt;
Using OpenMP&#039;s concise API, one may query the number of running threads, the number of processors, a wall-clock time to calculate the runtime, and even set parameters such as the number of threads to execute a parallel region.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The OpenMP-4.0 specification added support for the SIMD-directive to better utilize SIMD-vectorization, as well as directives to offload computation to accelerators using the &amp;lt;tt&amp;gt;target&amp;lt;/tt&amp;gt; directive: these are integrated into the Intel Compiler and are actively being worked on for the GNU compiler; some restrictions may apply.&lt;br /&gt;
== OpenMP Best Practice Guide ==&lt;br /&gt;
The following simple example to calculate the squared Euclidean norm shows some techniques:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;omp.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define VECTOR_LENGTH 5&lt;br /&gt;
&lt;br /&gt;
int main (int argc, char * argv[])&lt;br /&gt;
{&lt;br /&gt;
    int len = VECTOR_LENGTH;&lt;br /&gt;
    int i;&lt;br /&gt;
    double * v;&lt;br /&gt;
    double norm2 = 0.0;&lt;br /&gt;
    double t1, tdiff;&lt;br /&gt;
&lt;br /&gt;
    if (argc &amp;gt; 1)&lt;br /&gt;
        len = atoi (argv[1]);&lt;br /&gt;
    v = malloc (len * sizeof(double));&lt;br /&gt;
&lt;br /&gt;
    t1 = omp_get_wtime();&lt;br /&gt;
    // Initialization already with (the same number of) threads&lt;br /&gt;
#pragma omp parallel for&lt;br /&gt;
    for (i=0; i &amp;lt; len; i++) {&lt;br /&gt;
        v[i] = i;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Now aggregate the sum-of-squares by specifying a reduction&lt;br /&gt;
#pragma omp parallel for reduction(+:norm2)&lt;br /&gt;
    for(i=0; i &amp;lt; len; i++) {&lt;br /&gt;
        norm2 += (v[i]*v[i]);&lt;br /&gt;
    }&lt;br /&gt;
    tdiff = omp_get_wtime() - t1;&lt;br /&gt;
&lt;br /&gt;
    printf (&amp;quot;norm2: %f Time:%f\n&amp;quot;, norm2, tdiff);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;!-- Specific OpenMP hints: default(none), reproducibility, thread-safety --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Group independent parallel sections together: in the above example, You may combine the two sections into one larger parallel block. This enters the parallel region (in the fork-join model) only once instead of twice. Especially in inner loops, this will considerably decrease overhead.&lt;br /&gt;
* Compile with the Intel compiler&#039;s option &amp;lt;tt&amp;gt;-diag-enable sc-parallel3&amp;lt;/tt&amp;gt; to get further warnings on thread-safety, performance, etc. The following code with a loop-carried dependency will e.g. compile fine (i.e. without warning):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#pragma omp parallel for&lt;br /&gt;
    for(i=1; i &amp;lt; len-1; i++) {&lt;br /&gt;
        v[i] = v[i-1]+v[i+1];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
However the Intel compiler with &amp;lt;tt&amp;gt;-diag-enable sc-parallel3&amp;lt;/tt&amp;gt; will produce the following warning:&lt;br /&gt;
&amp;lt;tt&amp;gt;warning #12246: variable &amp;quot;v&amp;quot; has loop carried data dependency that may lead to incorrect program execution in parallel mode; see (file:omp_norm2.c line:32)&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Always specify &amp;lt;tt&amp;gt;default(none)&amp;lt;/tt&amp;gt; on larger parallel regions in order to specifically set the visibility of variables to either &amp;lt;tt&amp;gt;shared&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;private&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* Try to restructure code to allow for &amp;lt;tt&amp;gt;nowait&amp;lt;/tt&amp;gt;: OpenMP defines synchronization points (implied barriers) at the end of work-sharing constructs such as the &amp;lt;tt&amp;gt;pragma omp for&amp;lt;/tt&amp;gt; directive. If the ensuing section of code does not depend on data being generated inside the parallel section, adding the &amp;lt;tt&amp;gt;nowait&amp;lt;/tt&amp;gt; clause to the work-sharing directive allows the compiler to eliminate this synchronization point. This reduces overhead and allows for better overlap and better utilization of the processor&#039;s resources. This might however imply restructuring the code (moving portions of independent code in between dependent work-sharing constructs).&lt;br /&gt;
== Usage ==&lt;br /&gt;
OpenMP is supported by various compilers; here the usage for the two main compilers [[BwHPC_BPG_Compiler#GCC|GCC]] and [[BwHPC_BPG_Compiler#Intel Suite|Intel Suite]] is introduced.&lt;br /&gt;
For both compilers, You first need to turn on OpenMP support by specifying a parameter on the compiler&#039;s command-line.&lt;br /&gt;
In case You make function calls to OpenMP&#039;s API, You also need to include the header-file &amp;lt;tt&amp;gt;omp.h&amp;lt;/tt&amp;gt;.&lt;br /&gt;
OpenMP&#039;s API allows You to query or set the number of threads, query the number of processors, get a wall-clock time to measure execution times, etc.&lt;br /&gt;
=== OpenMP with GNU Compiler Collection ===&lt;br /&gt;
Starting with version 4.2 the gcc compiler supports OpenMP-2.5.&lt;br /&gt;
Since then the analysis capabilities of the GNU compiler have steadily improved.&lt;br /&gt;
The installed compilers support OpenMP-3.1.&lt;br /&gt;
&amp;lt;!-- Starting with gcc-4.9 OpenMP-4.0 is supported, however the &amp;lt;tt&amp;gt;target&amp;lt;/tt&amp;gt; directive will only offload to the host processor. --&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To use OpenMP with the gcc-compiler, pass &amp;lt;tt&amp;gt;-fopenmp&amp;lt;/tt&amp;gt; as parameter.&lt;br /&gt;
=== OpenMP with Intel Compiler ===&lt;br /&gt;
The Intel Compiler&#039;s support for OpenMP is more advanced than gcc&#039;s -- especially in terms of programmer support.&lt;br /&gt;
To use OpenMP with the Intel compiler, pass &amp;lt;tt&amp;gt;-openmp&amp;lt;/tt&amp;gt; as command-line parameter.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
One may get very insightful information about OpenMP when compiling with:&lt;br /&gt;
* &amp;lt;tt&amp;gt;-openmp-report2&amp;lt;/tt&amp;gt; to get information on which loops were parallelized and, if not, the reason why.&lt;br /&gt;
* &amp;lt;tt&amp;gt;-diag-enable sc-parallel3&amp;lt;/tt&amp;gt; to get errors and warnings about your source&#039;s weaknesses with regard to parallelization (see example above).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;!-- ---------------------------------------------------------------------------------------------- --&amp;gt;&lt;br /&gt;
= MPI =&lt;br /&gt;
In this section, You will find information regarding the supported installations of the Message-Passing Interface libraries and their usage.&amp;lt;br&amp;gt;&lt;br /&gt;
Due to the Fortran interface ABI, all MPI-libraries are normally bound to a specific compiler-vendor and even the specific compiler version.&lt;br /&gt;
Therefore, as listed in [[BwHPC_BPG_Compiler]] two compilers are supported on bwUniCluster: [[GCC|GCC]] and [[Intel_Compiler|Intel Compiler Suite]].&lt;br /&gt;
As both compilers are continuously improving, the communication libraries will be updated in lock-step.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A set of different implementations brings the problem of choice. These pages inform the user about the available communication libraries and which considerations should be made with regard to performance, maintainability and debugging -- in general, tool support -- of the various implementations.&lt;br /&gt;
== MPI Introduction ==&lt;br /&gt;
The Message-Passing Interface is a standard provided by the [http://www.mpi-forum.org MPI-Forum], which regularly convenes for the [http://mpi-forum.org/meetings/ MPI-Forum Meetings] to update this standard. The current version is MPI-3.0, available as [http://mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf PDF].&lt;br /&gt;
This document defines the API of over 300 functions for the C and Fortran languages -- however, you will certainly not need all of them to begin with.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Every MPI-conforming program needs to call &amp;lt;tt&amp;gt;MPI_Init()&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;MPI_Finalize()&amp;lt;/tt&amp;gt; upon start and shutdown -- or &amp;lt;tt&amp;gt;MPI_Abort()&amp;lt;/tt&amp;gt; in case of an abnormal termination.&lt;br /&gt;
After initialization the program may call any other MPI function, specifically the communication functions.&lt;br /&gt;
However, to do so, the program needs to find out how many processes it has been started with, using &amp;lt;tt&amp;gt;MPI_Comm_size()&amp;lt;/tt&amp;gt;, and which number (here called the &#039;&#039;rank&#039;&#039;) this particular process has, using &amp;lt;tt&amp;gt;MPI_Comm_rank()&amp;lt;/tt&amp;gt;.&lt;br /&gt;
Communication is always relative to a so-called communicator -- the default one after initialization being &amp;lt;tt&amp;gt;MPI_COMM_WORLD&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are basically three ways of communication:&lt;br /&gt;
* two-sided communication using point-to-point (often abbreviated P2P) functions, such as &amp;lt;tt&amp;gt;MPI_Send()&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;MPI_Recv()&amp;lt;/tt&amp;gt;, which always involve two participating processes,&lt;br /&gt;
* collective communication functions (often abbreviated as colls) involve multiple processes; examples are &amp;lt;tt&amp;gt;MPI_Bcast()&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;MPI_Reduce()&amp;lt;/tt&amp;gt;,&lt;br /&gt;
* one-sided communication, where communication between two processes is initiated by one process only. With proper RMA hardware support and careful programming, this may allow higher performance or scalability.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
All parts of the program which reference MPI functionality need to be compiled with the &#039;&#039;&#039;same&#039;&#039;&#039; compiler settings/include files and linked against the same MPI library. This is stressed here because, without taking precautions, a different MPI&#039;s header may be included, resulting in puzzling errors: consider that Intel MPI is derived from MPICH, with MPI datatypes being C &amp;lt;tt&amp;gt;int&amp;lt;/tt&amp;gt;s, while Open MPI uses pointers to structures (the former being 4, the latter 8 bytes on bwUniCluster).&lt;br /&gt;
To ease the programmer&#039;s life, MPI implementations offer compiler-wrappers, e.g. &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; for C and &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt; for Fortran90 for compilation and linking, taking care to include all required libraries.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
All programs must be started using the &amp;lt;tt&amp;gt;mpirun&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpiexec&amp;lt;/tt&amp;gt; command. Depending on the actual implementation, these accept different arguments; however, the following work with any MPI:&lt;br /&gt;
* &amp;lt;tt&amp;gt;mpirun -np 128 ./app&amp;lt;/tt&amp;gt; starts 128 processes (with ranks 0 to 127)&lt;br /&gt;
* &amp;lt;tt&amp;gt;mpiexec -n 128 -hostfile mynodes.txt ./app&amp;lt;/tt&amp;gt; starts 128 processes on only the nodes listed line-by-line in the provided text-file &amp;lt;tt&amp;gt;mynodes.txt&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* &amp;lt;tt&amp;gt;mpiexec -n 64 ./app1 : -n 64 ./app2&amp;lt;/tt&amp;gt; starts 128 processes, 64 of which execute &amp;lt;tt&amp;gt;app1&amp;lt;/tt&amp;gt;, while the other 64 execute &amp;lt;tt&amp;gt;app2&amp;lt;/tt&amp;gt;. All processes, however, participate in the same &amp;lt;tt&amp;gt;MPI_COMM_WORLD&amp;lt;/tt&amp;gt; and therefore must take their respective ranks into account.&lt;br /&gt;
Please note that process placement (e.g. a round-robin scheme), and specifically process binding to sockets, is MPI-implementation dependent.&lt;br /&gt;
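Since binding syntax is implementation-dependent, the following is only a sketch; check your MPI documentation, as flags and variable names may differ between versions:&lt;br /&gt;

```shell
# Open MPI: bind each process to a core (or use: --bind-to socket)
mpirun -np 16 --bind-to core ./app

# Intel MPI: pinning is controlled via environment variables
I_MPI_PIN=1 I_MPI_PIN_DOMAIN=socket mpirun -np 16 ./app
```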
== MPI Best Practice Guide ==&lt;br /&gt;
Specific performance considerations with regard to MPI (independent of the implementation):&lt;br /&gt;
* No communication at all is best: Only communicate between processes if at all necessary. Consider that file-access is &amp;quot;communication&amp;quot; as well.&lt;br /&gt;
* If communication involves multiple processes, try to involve as many of them as possible in just one call: MPI optimizes the communication pattern of so-called &amp;quot;collective communication&amp;quot; to take advantage of the underlying network (with regard to network topology, message sizes, queueing capabilities of the network interconnect, etc.). Therefore, always try to think in terms of collective communication if a communication pattern involves a group of processes.&lt;br /&gt;
* Try to group processes together: function calls like &amp;lt;tt&amp;gt;MPI_Cart_create()&amp;lt;/tt&amp;gt; come in handy for applications with cartesian domains, but general communicators derived from &amp;lt;tt&amp;gt;MPI_COMM_WORLD&amp;lt;/tt&amp;gt; using &amp;lt;tt&amp;gt;MPI_Comm_split()&amp;lt;/tt&amp;gt; may also benefit from MPI&#039;s knowledge of the underlying network topology. Use MPI-3&#039;s &amp;lt;tt&amp;gt;MPI_Comm_split_type()&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;MPI_COMM_TYPE_SHARED&amp;lt;/tt&amp;gt; to obtain a sub-communicator of processes having access to the same shared-memory region (on bwUniCluster, the same node).&lt;br /&gt;
* File accesses to load/store data &#039;&#039;&#039;must&#039;&#039;&#039; be done collectively: writing to storage, or even reading the initialization data -- all of which involves getting data from/to all MPI processes -- should use MPI&#039;s Parallel IO, which offers a rich API to read and distribute the data access in order to take advantage of parallel filesystems like Lustre. A many-fold performance improvement may be seen by writing data in large chunks in a collective fashion -- while at the same time being nice to other users and applications. &lt;br /&gt;
* Try to hide communication behind computation: try to hide (some of) the cost of point-to-point communication by using non-blocking / immediate P2P calls (&amp;lt;tt&amp;gt;MPI_Isend&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;MPI_Irecv&amp;lt;/tt&amp;gt; et al., followed by &amp;lt;tt&amp;gt;MPI_Wait&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;MPI_Test&amp;lt;/tt&amp;gt; et al.). This may allow the MPI implementation to initiate communication, or even offload it to the network interconnect, and resume executing your application while data is being transferred. MPI-3 adds non-blocking collectives, e.g. &amp;lt;tt&amp;gt;MPI_Ibcast()&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;MPI_Iallreduce()&amp;lt;/tt&amp;gt;. For extra credit, explain the use-cases of &amp;lt;tt&amp;gt;MPI_Ibarrier()&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* Every call to MPI may trigger an access to physical hardware -- limit it: When calling communication-related functions such as &amp;lt;tt&amp;gt;MPI_Test&amp;lt;/tt&amp;gt; to check whether a specific communication has finished, the queue of the network adapter may need to be queried. This memory access or even physical hardware access to query the state will cost cycles. Therefore, the programmer should combine multiple requests with functions such as &amp;lt;tt&amp;gt;MPI_Waitall()&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;MPI_Waitany()&amp;lt;/tt&amp;gt; or their Test*-counterparts.&lt;br /&gt;
* Make use of derived datatypes: instead of manually copying data into temporary, possibly newly allocated memory, describe the data layout to MPI -- and let the implementation, or even the network HCA&#039;s hardware, do the data fetching.&lt;br /&gt;
* Bind your processes to sockets: operating systems are good at making the best use of the available resources -- which sometimes involves moving tasks from one core to another, or even (though more unlikely, since the OS&#039;s heuristics try to avoid it) to another socket, with the obvious effects: caches are cold, and every access to memory allocated on the previous socket &amp;quot;has to travel the bus&amp;quot;. This happens particularly if you have multiple OpenMP parallel regions separated by code that does IO -- while threads are sleeping, the processes doing IO may wander to a different socket. Bind your processes to at least the socket. All major MPIs support this binding (see below).&lt;br /&gt;
* Do not use the C++ interface: First of all, it has been marked as deprecated in the MPI-3.0 standard, since it added little benefit to C++ programmers over the C-interface. Moreover, since MPI implementations are written in C, the interface adds another level of indirection and therefore a bit of overhead in terms of instructions and Cache misses.&lt;br /&gt;
== Open MPI ==&lt;br /&gt;
The [http://www.open-mpi.org Open MPI] library is an open, flexible and nevertheless performant implementation of MPI-2 and MPI-3. Licensed under BSD, it is being actively developed by an open community of industry and research institutions.&lt;br /&gt;
This flexibility comes in handy: using the concept of an [http://www.open-mpi.org/faq/?category=tuning#mca-def MCA] (aka a plugin), Open MPI supports many different network interconnects (InfiniBand, TCP, Cray, etc.); on the other hand, an installation may be tailored to suit a site, e.g. the network (InfiniBand with specific settings), the main startup mechanism, etc.&lt;br /&gt;
Furthermore, the [http://www.open-mpi.org/faq/ FAQ] offers hints on [http://www.open-mpi.org/faq/?category=tuning performance tuning].&lt;br /&gt;
=== Usage ===&lt;br /&gt;
Like other MPI implementations, after loading the [[BwUniCluster_Environment_Modules|module]], Open MPI provides the compiler wrappers &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;mpicxx&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;mpifort&amp;lt;/tt&amp;gt; (or, for&lt;br /&gt;
versions lower than 1.7, &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;) for the C, C++ and Fortran compilers, respectively. Albeit their usage is not required, these wrappers are handy: they spare you the command-line options for header and library directories, aka &amp;lt;tt&amp;gt;-I&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-L&amp;lt;/tt&amp;gt;, as well as the actually needed MPI libraries themselves.&lt;br /&gt;
=== Further information === &lt;br /&gt;
Open MPI also features a few specific functionalities that help users and developers alike:&lt;br /&gt;
* Open MPI&#039;s tool &amp;lt;tt&amp;gt;ompi_info&amp;lt;/tt&amp;gt; allows seeing all of Open MPI&#039;s installed MCA components and their specific options.&lt;br /&gt;
Without any option, the user gets a list of the flags the Open MPI installation was compiled with (compiler versions, specific configure flags, e.g. debugging or profiling options). Furthermore, using &amp;lt;tt&amp;gt;ompi_info --param all all&amp;lt;/tt&amp;gt; one may see all of the MCAs&#039; options, e.g. that the default PML MCA uses an initial free-list of 4 blocks (increased by 64 upon first encountering this limit):&lt;br /&gt;
&amp;lt;tt&amp;gt;ompi_info --param ob1 all&amp;lt;/tt&amp;gt; -- this value may be increased for applications that are certain to benefit from a larger value upon startup.&lt;br /&gt;
* Open MPI allows adapting MCA parameters on the command line, e.g. the above-mentioned parameter: &amp;lt;tt&amp;gt;mpirun -np 16 --mca pml_ob1_free_list_num 128 ./mpi_stub&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* Open MPI internally uses [http://www.open-mpi.org/projects/hwloc/ hwloc] for node-local processor information as well as process and memory affinity. hwloc is also a good tool to get information on a node&#039;s processor topology and cache hierarchy. This may be used to optimize and balance memory usage, or to choose a better ratio of MPI processes per node vs. OpenMP threads per core.&lt;br /&gt;
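For illustration, hwloc ships the &amp;lt;tt&amp;gt;lstopo&amp;lt;/tt&amp;gt; tool, which prints this topology (output options may vary with the installed hwloc version):&lt;br /&gt;

```shell
# Text rendering of the processor and cache topology
lstopo --of console

# Omit IO devices for a more compact view
lstopo --no-io
```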
== Intel (i)MPI ==&lt;br /&gt;
The Intel MPI Library is a multi-fabric message-passing library that implements the Message Passing Interface, v2.2 (MPI-2.2) specification. It provides a standard library across Intel&lt;br /&gt;
platforms that enables adoption of MPI-2.2 functions as needs dictate.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The Intel MPI Library enables developers to change or to upgrade processors and interconnects as new technology becomes available without changes to the software or to the operating environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The library is provided in the following kits:&lt;br /&gt;
* The Intel MPI Library Runtime Environment (RTO) has the tools you need to run programs, including Multipurpose Daemon (MPD), Hydra and supporting utilities, shared (.so) libraries, and documentation.&lt;br /&gt;
* The Intel MPI Library Development Kit (SDK) includes all of the Runtime Environment components plus compilation tools, including compiler commands such as &amp;lt;b&amp;gt;mpiicc&amp;lt;/b&amp;gt;, include files and modules, static (.a) libraries, debug libraries, trace libraries, and test codes.&lt;br /&gt;
=== General information  ===&lt;br /&gt;
All Intel MPI modules are called &#039;&#039;&#039;mpi/impi&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
These modules provide the Intel Message Passing Interface wrappers (mpicc, mpicxx, mpif77&lt;br /&gt;
and mpif90) for the Intel compiler suite (icc, icpc and ifort) &amp;lt;small&amp;gt;(see also [http://software.intel.com/en-us/intel-mpi-library Intel MPI Library])&amp;lt;/small&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The corresponding Intel compiler module is loaded automatically (if not already loaded). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;Compiler and MPI module must fit. Don&#039;t mix incongruous versions!&amp;lt;/font&amp;gt;&lt;br /&gt;
=== Usage ===&lt;br /&gt;
The following table lists available MPI compiler commands and the underlying compilers, compiler families, languages, and application binary interfaces (ABIs) that they support.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;The Intel MPI Library Compiler Drivers&amp;lt;/u&amp;gt;&lt;br /&gt;
{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Compiler Command !! Default Compiler !! Supported Language(s) !! Supported ABI&#039;s&lt;br /&gt;
|-&lt;br /&gt;
| colspan=4 style=&amp;quot;background-color:#DCDCDC;&amp;quot; | Generic Compilers &lt;br /&gt;
|-&lt;br /&gt;
| mpicc || gcc, cc  || C || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpicxx || g++ || C/C++ || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpifc || gfortran || Fortran77/Fortran 95 || 32/64 bit&lt;br /&gt;
|-&lt;br /&gt;
| colspan=4 style=&amp;quot;background-color:#DCDCDC;&amp;quot; | [[GCC|GNU Compiler]] Versions 3 and higher &lt;br /&gt;
|-&lt;br /&gt;
| mpigcc || gcc || C || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpigxx || g++  || C/C++ || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpif77 || g77 || Fortran 77 || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpif90 || gfortran || Fortran 95 || 32/64 bit&lt;br /&gt;
|-&lt;br /&gt;
| colspan=4 style=&amp;quot;background-color:#DCDCDC;&amp;quot; | [[Intel_Compiler|&#039;&#039;&#039;Intel Fortran, C++ Compilers&#039;&#039;&#039;]] Versions 13.1 through 14.0 and Higher&lt;br /&gt;
|-&lt;br /&gt;
| mpiicc || icc || C || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpiicpc || icpc || C++ || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpiifort || ifort || Fortran77/Fortran 95 || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
* Compiler commands are available only in the Intel MPI Library Development Kit.&lt;br /&gt;
* Compiler commands are in the &amp;lt;installdir&amp;gt;/&amp;lt;arch&amp;gt;/bin directory, where &amp;lt;installdir&amp;gt; refers to the Intel MPI Library installation directory (depending on the loaded mpi module) and &amp;lt;arch&amp;gt; is one of the following architectures:&lt;br /&gt;
** ia32 - IA-32 architecture&lt;br /&gt;
** intel64 - Intel 64 architecture&lt;br /&gt;
** mic - Intel Xeon Phi™ Coprocessor architecture&lt;br /&gt;
* Ensure that the corresponding underlying compilers (32-bit or 64-bit, as appropriate) are already in your PATH. This is normally done by the &#039;module load&#039; command. &lt;br /&gt;
* To port existing MPI-enabled applications to the Intel MPI Library, recompile all sources.&lt;br /&gt;
* To display mini-help of a compiler command, execute it without any parameters.&lt;br /&gt;
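Putting this together, a typical compile-and-run sequence with the Intel wrappers might look as follows (sketch; &amp;lt;tt&amp;gt;sample.c&amp;lt;/tt&amp;gt; is a placeholder file name):&lt;br /&gt;

```shell
# Load the MPI module; the matching Intel compiler module is pulled in automatically
module load mpi/impi

# Compile with the Intel-specific wrapper, then run with 4 processes
mpiicc -O2 -o sample sample.c
mpirun -np 4 ./sample
```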
&lt;br /&gt;
=== Further information ===&lt;br /&gt;
==== Intel MPI without multithreading on the bwUniCluster ====&lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features#Intel_MPI_without_Multithreading|Additional Infos about Intel MPI without Multithreading]]&lt;br /&gt;
==== Sample MPI program in &amp;quot;C&amp;quot; ====&lt;br /&gt;
The following example shows a sample MPI program written in C.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
/******************************************************************************&lt;br /&gt;
 * Content: (example version)&lt;br /&gt;
 *      Based on a Monte Carlo method, this MPI sample code uses volumes to&lt;br /&gt;
 *      estimate the value of pi.&lt;br /&gt;
 *****************************************************************************/&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
#include &amp;lt;time.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// here you&#039;ll include the main MPI library header&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#define MASTER 0&lt;br /&gt;
#define TAG_HELLO 4&lt;br /&gt;
#define TAG_TEST 5&lt;br /&gt;
#define TAG_TIME 6&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char *argv[]) {&lt;br /&gt;
  int i, id, remote_id, num_procs;&lt;br /&gt;
   &lt;br /&gt;
  MPI_Status stat;&lt;br /&gt;
  int namelen;&lt;br /&gt;
  char name[MPI_MAX_PROCESSOR_NAME];&lt;br /&gt;
&lt;br /&gt;
  // Start MPI.&lt;br /&gt;
  if (MPI_Init (&amp;amp;argc, &amp;amp;argv) != MPI_SUCCESS) {&lt;br /&gt;
      printf (&amp;quot;Failed to initialize MPI\n&amp;quot;);&lt;br /&gt;
      return (-1);&lt;br /&gt;
  }&lt;br /&gt;
  // Retrieve the number of processes in the communicator.&lt;br /&gt;
  MPI_Comm_size (MPI_COMM_WORLD, &amp;amp;num_procs);&lt;br /&gt;
&lt;br /&gt;
  // Determine the rank of the process.&lt;br /&gt;
  MPI_Comm_rank (MPI_COMM_WORLD, &amp;amp;id);&lt;br /&gt;
&lt;br /&gt;
  // Get machine name&lt;br /&gt;
  MPI_Get_processor_name (name, &amp;amp;namelen);&lt;br /&gt;
  &lt;br /&gt;
  if (id == MASTER) {&lt;br /&gt;
      printf (&amp;quot;Hello world: rank %d of %d running on %s\n&amp;quot;, id, num_procs, name);&lt;br /&gt;
&lt;br /&gt;
  for (i = 1; i&amp;lt;num_procs; i++) {	&lt;br /&gt;
      MPI_Recv (&amp;amp;remote_id, 1, MPI_INT, i, TAG_HELLO, MPI_COMM_WORLD, &amp;amp;stat);	&lt;br /&gt;
      MPI_Recv (&amp;amp;num_procs, 1, MPI_INT, i, TAG_HELLO, MPI_COMM_WORLD, &amp;amp;stat);  		&lt;br /&gt;
      MPI_Recv (&amp;amp;namelen, 1, MPI_INT, i, TAG_HELLO, MPI_COMM_WORLD, &amp;amp;stat);			&lt;br /&gt;
      MPI_Recv (name, namelen+1, MPI_CHAR, i, TAG_HELLO, MPI_COMM_WORLD, &amp;amp;stat);&lt;br /&gt;
			&lt;br /&gt;
      printf (&amp;quot;Hello world: rank %d of %d running on %s\n&amp;quot;, remote_id, num_procs, name);&lt;br /&gt;
      }&lt;br /&gt;
  }&lt;br /&gt;
  else {	    &lt;br /&gt;
      MPI_Send (&amp;amp;id, 1, MPI_INT, MASTER, TAG_HELLO, MPI_COMM_WORLD);&lt;br /&gt;
      MPI_Send (&amp;amp;num_procs, 1, MPI_INT, MASTER, TAG_HELLO, MPI_COMM_WORLD);&lt;br /&gt;
      MPI_Send (&amp;amp;namelen, 1, MPI_INT, MASTER, TAG_HELLO, MPI_COMM_WORLD);&lt;br /&gt;
      MPI_Send (name, namelen+1, MPI_CHAR, MASTER, TAG_HELLO, MPI_COMM_WORLD);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
   // Rank 0 distributes seeds randomly to all processes.&lt;br /&gt;
  double startprocess, endprocess;&lt;br /&gt;
&lt;br /&gt;
  int distributed_seed = 0;&lt;br /&gt;
  int *buff;&lt;br /&gt;
&lt;br /&gt;
  buff = (int *)malloc(num_procs * sizeof(int));&lt;br /&gt;
	&lt;br /&gt;
  unsigned int MAX_NUM_POINTS = pow (2,32) - 1;&lt;br /&gt;
  unsigned int num_local_points = MAX_NUM_POINTS / num_procs;&lt;br /&gt;
&lt;br /&gt;
  if (id == MASTER) {		  &lt;br /&gt;
      srand (time(NULL));&lt;br /&gt;
  &lt;br /&gt;
      for (i=0; i&amp;lt;num_procs; i++) {           &lt;br /&gt;
          distributed_seed = rand();&lt;br /&gt;
          buff[i] = distributed_seed;&lt;br /&gt;
     }  &lt;br /&gt;
  }&lt;br /&gt;
  // Broadcast the seed to all processes&lt;br /&gt;
  MPI_Bcast(buff, num_procs, MPI_INT, MASTER, MPI_COMM_WORLD);&lt;br /&gt;
&lt;br /&gt;
  // At this point, every process (including rank 0) has a different seed. Using its seed,&lt;br /&gt;
  // each process generates its points randomly within its slab of the unit cube.&lt;br /&gt;
  startprocess = MPI_Wtime();&lt;br /&gt;
&lt;br /&gt;
  srand (buff[id]);&lt;br /&gt;
&lt;br /&gt;
  unsigned int point = 0;&lt;br /&gt;
  unsigned int rand_MAX = 128000;&lt;br /&gt;
  float p_x, p_y, p_z;&lt;br /&gt;
  float temp, temp2, pi;&lt;br /&gt;
  double result;&lt;br /&gt;
  unsigned int inside = 0, total_inside = 0;&lt;br /&gt;
&lt;br /&gt;
  for (point=0; point&amp;lt;num_local_points; point++) {&lt;br /&gt;
      temp = (rand() % (rand_MAX+1));&lt;br /&gt;
      p_x = temp / rand_MAX;&lt;br /&gt;
      p_x = p_x / num_procs;&lt;br /&gt;
      &lt;br /&gt;
      temp2 = (float)id / num_procs;	// id belongs to 0, num_procs-1&lt;br /&gt;
      p_x += temp2;&lt;br /&gt;
      &lt;br /&gt;
      temp = (rand() % (rand_MAX+1));&lt;br /&gt;
      p_y = temp / rand_MAX;&lt;br /&gt;
      &lt;br /&gt;
      temp = (rand() % (rand_MAX+1));&lt;br /&gt;
      p_z = temp / rand_MAX;&lt;br /&gt;
&lt;br /&gt;
      // Count the points residing inside this octant of the unit sphere&lt;br /&gt;
      result = p_x * p_x + p_y * p_y + p_z * p_z;&lt;br /&gt;
&lt;br /&gt;
      if (result &amp;lt;= 1) inside++;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  double elapsed = MPI_Wtime() - startprocess;&lt;br /&gt;
&lt;br /&gt;
  MPI_Reduce (&amp;amp;inside, &amp;amp;total_inside, 1, MPI_UNSIGNED, MPI_SUM, MASTER, MPI_COMM_WORLD);&lt;br /&gt;
&lt;br /&gt;
#if DEBUG &lt;br /&gt;
  printf (&amp;quot;rank %d counts %u points inside the sphere\n&amp;quot;, id, inside);&lt;br /&gt;
#endif&lt;br /&gt;
  if (id == MASTER) {&lt;br /&gt;
      double timeprocess[num_procs];&lt;br /&gt;
&lt;br /&gt;
      timeprocess[MASTER] = elapsed;&lt;br /&gt;
      printf(&amp;quot;Elapsed time from rank %d: %10.2f (sec)\n&amp;quot;, MASTER, timeprocess[MASTER]);&lt;br /&gt;
      for (i=1; i&amp;lt;num_procs; i++) {&lt;br /&gt;
	  // Rank 0 waits for elapsed time value &lt;br /&gt;
	  MPI_Recv (&amp;amp;timeprocess[i], 1, MPI_DOUBLE, i, TAG_TIME, MPI_COMM_WORLD, &amp;amp;stat); &lt;br /&gt;
	  printf(&amp;quot;Elapsed time from rank %d: %10.2f (sec)\n&amp;quot;, i, timeprocess[i]);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      temp = 6 * (float)total_inside;&lt;br /&gt;
      pi = temp / MAX_NUM_POINTS;   &lt;br /&gt;
      printf ( &amp;quot;Out of %u points, there are %u points inside the sphere =&amp;gt; pi=%16.12f\n&amp;quot;,&lt;br /&gt;
          MAX_NUM_POINTS, total_inside, pi);&lt;br /&gt;
  }&lt;br /&gt;
  else {&lt;br /&gt;
      // Send back the processing time (in second)&lt;br /&gt;
      MPI_Send (&amp;amp;elapsed, 1, MPI_DOUBLE, MASTER, TAG_TIME, MPI_COMM_WORLD);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  free(buff);&lt;br /&gt;
&lt;br /&gt;
  // Terminate MPI.&lt;br /&gt;
  MPI_Finalize();&lt;br /&gt;
  &lt;br /&gt;
  return 0;&lt;br /&gt;
}&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
= Hybrid Parallelization = &lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/Parallel_Programming&amp;diff=5189</id>
		<title>Development/Parallel Programming</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/Parallel_Programming&amp;diff=5189"/>
		<updated>2017-11-27T13:56:05Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| mpi/impi &amp;amp;#124; mpi/openmpi &lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://software.intel.com/en-us/intel-mpi-library Intel® MPI Library] &amp;amp;#124; [http://www.open-mpi.org/ Open MPI] &lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://software.intel.com/en-us/articles/intel-mpi-library-licensing-faq Intel MPI Library Licensing FAQ] &amp;lt;small&amp;gt;install-doc/EULA.txt&amp;lt;/small&amp;gt; &amp;amp;#124; [http://www.open-mpi.org/community/license.php Open MPI License]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
This page provides information regarding the supported parallel programming paradigms and specific hints on their usage.&lt;br /&gt;
Please refer to the [[BwUniCluster_Environment_Modules|Modules Documentation]] for how to set up your environment on bwUniCluster to load a specific software installation.&lt;br /&gt;
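For illustration, loading one of the listed MPI modules might look like this (module names as given in the table above; availability may differ per cluster):&lt;br /&gt;

```shell
# Show available MPI modules, load one, and verify
module avail mpi
module load mpi/impi      # or: module load mpi/openmpi
module list
```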
&amp;lt;br&amp;gt;&lt;br /&gt;
More information about the compilers installed on our clusters is available here:&lt;br /&gt;
* [[Intel_Compiler|Intel Compiler Suite]]&lt;br /&gt;
* [[GCC|GNU Compiler (GCC)]]&lt;br /&gt;
* [[General_compiler_usage|General Compiler Usage (incl. PGI Compiler)]]&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
=== impi (Intel) ===&lt;br /&gt;
A list of the versions currently available on the bwHPC-C5 clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/mpi/impi&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=1200&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
=== openmpi ===&lt;br /&gt;
A list of the versions currently available on the bwHPC-C5 clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/mpi/openmpi&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=2100&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= OpenMP =&lt;br /&gt;
== General Information ==&lt;br /&gt;
OpenMP is a mature specification [http://openmp.org/wp/openmp-specifications/] to allow easy, portable, and most importantly incremental node-level parallelisation of code.&lt;br /&gt;
Being a thread-based approach, OpenMP is aimed at more fine-grained parallelism than [[BwHPC_Best_Practices_Repository#MPI|MPI]].&lt;br /&gt;
Although there have been extensions for inter-node parallelisation, OpenMP is a node-level approach aimed at making the best use of a node&#039;s cores&amp;lt;!-- -- the section [[#Hybrid Parallelisation|Hybrid Parallelisation]] will explain how to parallelise utilizing MPI plus a thread-based parallelization paradigm like OpenMP--&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
With regard to ease-of-use, OpenMP is ahead of any other common approach: the source-code is annotated using &amp;lt;tt&amp;gt;#pragma omp&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;!$omp&amp;lt;/tt&amp;gt; statements, in C/C++ and Fortran respectively.&lt;br /&gt;
Whenever the compiler encounters a semantic block of code encapsulated in a parallel region, this block of code is transparently compiled into a function, which is passed to a so-called team of threads upon entering this semantic block. This fork-join model of execution eases a lot of the programmer&#039;s pain involved with threads.&lt;br /&gt;
Being a loop-centric approach, OpenMP is aimed at codes with long or time-consuming loops.&lt;br /&gt;
A single combined directive &amp;lt;tt&amp;gt;#pragma omp parallel for&amp;lt;/tt&amp;gt; tells the compiler to automatically parallelize the ensuing for-loop.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The following example is a bit more advanced in that even reductions of variables over multiple threads are easily parallelized:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
   for (int i = 0; i &amp;lt; VECTOR_LEN; i++)&lt;br /&gt;
     norm2 += (v[i]*v[i]);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
is parallelized by just adding a single line as in:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#  pragma omp parallel for reduction(+:norm2)&lt;br /&gt;
   for (int i = 0; i &amp;lt; VECTOR_LEN; i++)&lt;br /&gt;
     norm2 += (v[i]*v[i]);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
With &amp;lt;tt&amp;gt;VECTOR_LEN&amp;lt;/tt&amp;gt; being large enough, this piece of code compiled with OpenMP will run in parallel, exhibiting very nice speedup.&lt;br /&gt;
Compiled without, the code remains as is. Developers may therefore incrementally parallelize their application based on the profile derived from performance analysis tools, starting with the most time-consuming loops.&lt;br /&gt;
Using OpenMP&#039;s concise API, one may query the number of running threads, the number of processors, a timer to measure runtime, and even set parameters such as the number of threads to execute a parallel region.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The OpenMP-4.0 specification added support for the SIMD-directive to better utilize SIMD-vectorization, as well as integrating directives to offload computation to accelerators using the &amp;lt;tt&amp;gt;target&amp;lt;/tt&amp;gt; directive: these are integrated into the Intel Compiler and are actively being worked on for the GNU compiler, some restrictions may apply.&lt;br /&gt;
== OpenMP Best Practice Guide ==&lt;br /&gt;
The following simple example, calculating the squared Euclidean norm, demonstrates some techniques:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;omp.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define VECTOR_LENGTH 5&lt;br /&gt;
&lt;br /&gt;
int main (int argc, char * argv[])&lt;br /&gt;
{&lt;br /&gt;
    int len = VECTOR_LENGTH;&lt;br /&gt;
    int i;&lt;br /&gt;
    double * v;&lt;br /&gt;
    double norm2 = 0.0;&lt;br /&gt;
    double t1, tdiff;&lt;br /&gt;
&lt;br /&gt;
    if (argc &amp;gt; 1)&lt;br /&gt;
        len = atoi (argv[1]);&lt;br /&gt;
    v = malloc (len * sizeof(double));&lt;br /&gt;
&lt;br /&gt;
    t1 = omp_get_wtime();&lt;br /&gt;
    // Initialization already with (the same number of) threads&lt;br /&gt;
#pragma omp parallel for&lt;br /&gt;
    for (i=0; i &amp;lt; len; i++) {&lt;br /&gt;
        v[i] = i;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Now aggregate the sum-of-squares by specifying a reduction&lt;br /&gt;
#pragma omp parallel for reduction(+:norm2)&lt;br /&gt;
    for(i=0; i &amp;lt; len; i++) {&lt;br /&gt;
        norm2 += (v[i]*v[i]);&lt;br /&gt;
    }&lt;br /&gt;
    tdiff = omp_get_wtime() - t1;&lt;br /&gt;
&lt;br /&gt;
    printf (&amp;quot;norm2: %f Time:%f\n&amp;quot;, norm2, tdiff);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;!-- Specific OpenMP hints: default(none), reproducibility, thread-safety --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Group independent parallel sections together: in the above example, you may combine the two loops into one larger parallel region. The parallel region (in the fork-join model) is then entered only once instead of twice. Especially in inner loops, this considerably decreases overhead.&lt;br /&gt;
* Compile with the Intel compiler&#039;s option &amp;lt;tt&amp;gt;-diag-enable sc-parallel3&amp;lt;/tt&amp;gt; to get further warnings on thread-safety, performance, etc. The following code with a loop-carried dependency will, for example, compile fine (i.e. without warning):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#pragma omp parallel for&lt;br /&gt;
    for(i=1; i &amp;lt; len-1; i++) {&lt;br /&gt;
        v[i] = v[i-1]+v[i+1];&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
However, the Intel compiler with &amp;lt;tt&amp;gt;-diag-enable sc-parallel3&amp;lt;/tt&amp;gt; will produce the following warning:&lt;br /&gt;
&amp;lt;tt&amp;gt;warning #12246: variable &amp;quot;v&amp;quot; has loop carried data dependency that may lead to incorrect program execution in parallel mode; see (file:omp_norm2.c line:32)&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Always specify &amp;lt;tt&amp;gt;default(none)&amp;lt;/tt&amp;gt; on larger parallel regions in order to explicitly set the visibility of each variable to either &amp;lt;tt&amp;gt;shared&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;private&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* Try to restructure code to allow for &amp;lt;tt&amp;gt;nowait&amp;lt;/tt&amp;gt;: OpenMP defines synchronization points (implied barriers) at the end of work-sharing constructs such as the &amp;lt;tt&amp;gt;pragma omp for&amp;lt;/tt&amp;gt; directive. If the ensuing section of code does not depend on data generated inside the parallel section, adding the &amp;lt;tt&amp;gt;nowait&amp;lt;/tt&amp;gt; clause to the work-sharing directive allows the compiler to eliminate this synchronization point. This reduces overhead and allows for better overlap and better utilization of the processor&#039;s resources. It may, however, require restructuring the code (moving portions of independent code in between dependent work-sharing constructs).&lt;br /&gt;
== Usage ==&lt;br /&gt;
OpenMP is supported by various compilers; here the usage for the two main compilers [[BwHPC_BPG_Compiler#GCC|GCC]] and [[BwHPC_BPG_Compiler#Intel Suite|Intel Suite]] is introduced.&lt;br /&gt;
For both compilers, you first need to turn on OpenMP support by specifying a parameter on the compiler&#039;s command-line.&lt;br /&gt;
In case you make function calls to OpenMP&#039;s API, you also need to include the header file &amp;lt;tt&amp;gt;omp.h&amp;lt;/tt&amp;gt;.&lt;br /&gt;
OpenMP&#039;s API allows you to query or set the number of threads, query the number of processors, measure wall-clock execution times, etc.&lt;br /&gt;
=== OpenMP with GNU Compiler Collection ===&lt;br /&gt;
Starting with version 4.2 the gcc compiler supports OpenMP-2.5.&lt;br /&gt;
Since then the analysis capabilities of the GNU compiler have steadily improved.&lt;br /&gt;
The installed compilers support OpenMP-3.1.&lt;br /&gt;
&amp;lt;!-- Starting with gcc-4.9 OpenMP-4.0 is supported, however the &amp;lt;tt&amp;gt;target&amp;lt;/tt&amp;gt; directive will only offload to the host processor. --&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To use OpenMP with the gcc-compiler, pass &amp;lt;tt&amp;gt;-fopenmp&amp;lt;/tt&amp;gt; as parameter.&lt;br /&gt;
=== OpenMP with Intel Compiler ===&lt;br /&gt;
The Intel Compiler&#039;s support for OpenMP is more advanced than gcc&#039;s -- especially in terms of programmer support.&lt;br /&gt;
To use OpenMP with the Intel compiler, pass &amp;lt;tt&amp;gt;-openmp&amp;lt;/tt&amp;gt; as command-line parameter.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
One may get very insightful information about OpenMP when compiling with the following options:&lt;br /&gt;
* &amp;lt;tt&amp;gt;-openmp-report2&amp;lt;/tt&amp;gt; reports which loops were parallelized and, for those that were not, the reason why.&lt;br /&gt;
* &amp;lt;tt&amp;gt;-diag-enable sc-parallel3&amp;lt;/tt&amp;gt; emits errors and warnings about your source&#039;s weaknesses with regard to parallelization (see the example in the best practice guide above).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;!-- ---------------------------------------------------------------------------------------------- --&amp;gt;&lt;br /&gt;
= MPI =&lt;br /&gt;
In this section, you will find information regarding the supported installations of the Message-Passing Interface libraries and their usage.&amp;lt;br&amp;gt;&lt;br /&gt;
Due to the Fortran interface ABI, all MPI libraries are normally bound to a specific compiler vendor and even to a specific compiler version.&lt;br /&gt;
Therefore, as listed in [[BwHPC_BPG_Compiler]], two compilers are supported on bwUniCluster: [[GCC|GCC]] and [[Intel_Compiler|Intel Compiler Suite]].&lt;br /&gt;
As both compilers are continuously improving, the communication libraries are updated in lock-step.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
With a set of different implementations comes the problem of choice. These pages inform the user of the communication libraries which considerations should be made with regard to performance, maintainability and debugging -- in general, tool support -- of the various implementations.&lt;br /&gt;
== MPI Introduction ==&lt;br /&gt;
The Message-Passing Interface is a standard provided by the [http://www.mpi-forum.org MPI-Forum] which regularly convenes for the [http://mpi-forum.org/meetings/ MPI-Forum Meetings] to update this standard. The current version is MPI-3.0 available as [http://mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf PDF].&lt;br /&gt;
This document defines the API of over 300 functions for the C and Fortran languages -- however, you will certainly not need all of them to begin with.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Every MPI-conforming program needs to call &amp;lt;tt&amp;gt;MPI_Init()&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;MPI_Finalize()&amp;lt;/tt&amp;gt; upon start and shutdown -- or &amp;lt;tt&amp;gt;MPI_Abort()&amp;lt;/tt&amp;gt; in case of an abnormal termination.&lt;br /&gt;
After initialization the program may call any other MPI function, specifically communication functions.&lt;br /&gt;
However, to do so, it needs to find out how many processes the program has been started with, using &amp;lt;tt&amp;gt;MPI_Comm_size()&amp;lt;/tt&amp;gt;, and which number (here called the &#039;&#039;rank&#039;&#039;) this particular process has, using &amp;lt;tt&amp;gt;MPI_Comm_rank()&amp;lt;/tt&amp;gt;.&lt;br /&gt;
Communication is always relative to a so-called communicator -- the default one after initialization being called &amp;lt;tt&amp;gt;MPI_COMM_WORLD&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are basically three ways of communication:&lt;br /&gt;
* two-sided communication using point-to-point (often abbreviated P2P) functions, such as &amp;lt;tt&amp;gt;MPI_Send()&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;MPI_Recv()&amp;lt;/tt&amp;gt;, which always involves two participating processes,&lt;br /&gt;
* collective communication functions (often abbreviated as colls) involve multiple processes; examples are &amp;lt;tt&amp;gt;MPI_Bcast()&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;MPI_Reduce()&amp;lt;/tt&amp;gt;,&lt;br /&gt;
* one-sided communication, where communication between two processes is initiated by one process only. With proper RMA hardware support and careful programming, this may allow higher performance or scalability.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
All parts of the program which reference MPI functionality need to be compiled with the &#039;&#039;&#039;same&#039;&#039;&#039; compiler settings/include files and linked to the same MPI library. This is stressed here since, without taking precautions, a different MPI&#039;s header may be included, resulting in strange errors: consider that Intel MPI is derived from MPICH, with MPI datatypes being C &amp;lt;tt&amp;gt;int&amp;lt;/tt&amp;gt;s, while Open MPI uses pointers to structures (the former being 4, the latter being 8 bytes on bwUniCluster).&lt;br /&gt;
To ease the programmer&#039;s life, MPI implementations offer compiler-wrappers, e.g. &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; for C and &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt; for Fortran90 for compilation and linking, taking care to include all required libraries.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
All programs must be started using the &amp;lt;tt&amp;gt;mpirun&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpiexec&amp;lt;/tt&amp;gt; command. Depending on the actual implementation, it accepts different arguments; however, the following works with any MPI:&lt;br /&gt;
* &amp;lt;tt&amp;gt;mpirun -np 128 ./app&amp;lt;/tt&amp;gt; starts 128 processes (with ranks 0 to 127)&lt;br /&gt;
* &amp;lt;tt&amp;gt;mpiexec -n 128 -hostfile mynodes.txt ./app&amp;lt;/tt&amp;gt; starts 128 processes on only the nodes listed line-by-line in the provided text-file &amp;lt;tt&amp;gt;mynodes.txt&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* &amp;lt;tt&amp;gt;mpiexec -n 64 ./app1 : -n 64 ./app2&amp;lt;/tt&amp;gt; starts 128 processes, 64 of which execute &amp;lt;tt&amp;gt;app1&amp;lt;/tt&amp;gt;, the other 64 execute &amp;lt;tt&amp;gt;app2&amp;lt;/tt&amp;gt;. All processes however participate in the same &amp;lt;tt&amp;gt;MPI_COMM_WORLD&amp;lt;/tt&amp;gt; and therefore must accordingly take care about their respective ranks.&lt;br /&gt;
Please note that process placement (e.g. a round-robin scheme), and specifically process binding to sockets, is MPI-implementation dependent.&lt;br /&gt;
== MPI Best Practice Guide ==&lt;br /&gt;
Specific performance considerations with regard to MPI (independent of the implementation):&lt;br /&gt;
* No communication at all is best: Only communicate between processes if at all necessary. Consider that file-access is &amp;quot;communication&amp;quot; as well.&lt;br /&gt;
* If communication is done with multiple processes, try to involve as many processes as possible in just one call: MPI optimizes the communication pattern of so-called &amp;quot;collective communication&amp;quot; to take advantage of the underlying network (with regard to network topology, message sizes, queueing capabilities of the network interconnect, etc.). Therefore, always try to think in terms of collective communication if a communication pattern involves a group of processes.&lt;br /&gt;
* Try to group processes together: function calls like &amp;lt;tt&amp;gt;MPI_Cart_create&amp;lt;/tt&amp;gt; will come in handy for applications with Cartesian domains, but general communicators derived from &amp;lt;tt&amp;gt;MPI_COMM_WORLD&amp;lt;/tt&amp;gt; using &amp;lt;tt&amp;gt;MPI_Comm_split()&amp;lt;/tt&amp;gt; may also benefit from MPI&#039;s knowledge of the underlying network topology. Use MPI-3&#039;s &amp;lt;tt&amp;gt;MPI_Comm_split_type()&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;MPI_COMM_TYPE_SHARED&amp;lt;/tt&amp;gt; for a sub-communicator of processes having access to the same shared memory region (on bwUniCluster, the same node).&lt;br /&gt;
* File accesses to load/store data &#039;&#039;&#039;must&#039;&#039;&#039; be done collectively: writing to storage, or even reading the initialization data -- all of which involves getting data from/to all MPI processes -- must be done collectively. MPI&#039;s parallel IO offers a rich API to read and distribute the data access -- in order to take advantage of parallel filesystems like Lustre. A many-fold performance improvement may be seen by writing data in large chunks in a collective fashion -- while at the same time being nice to other users and applications.&lt;br /&gt;
* Try to hide communication behind computation: try to hide some of the cost of point-to-point communication by using non-blocking/immediate P2P calls (&amp;lt;tt&amp;gt;MPI_Isend&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;MPI_Irecv&amp;lt;/tt&amp;gt; et al., followed by &amp;lt;tt&amp;gt;MPI_Wait&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;MPI_Test&amp;lt;/tt&amp;gt; et al.). This may allow the MPI implementation to initiate or even offload communication to the network interconnect and resume executing your application while data is being transferred. MPI-3 adds non-blocking collectives, e.g. &amp;lt;tt&amp;gt;MPI_Ibcast()&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;MPI_Iallreduce()&amp;lt;/tt&amp;gt;. For extra credit, explain the use-cases of &amp;lt;tt&amp;gt;MPI_Ibarrier()&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* Every call to MPI may trigger an access to physical hardware -- limit it: When calling communication-related functions such as &amp;lt;tt&amp;gt;MPI_Test&amp;lt;/tt&amp;gt; to check whether a specific communication has finished, the queue of the network adapter may need to be queried. This memory access or even physical hardware access to query the state will cost cycles. Therefore, the programmer should combine multiple requests with functions such as &amp;lt;tt&amp;gt;MPI_Waitall()&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;MPI_Waitany()&amp;lt;/tt&amp;gt; or their Test*-counterparts.&lt;br /&gt;
* Make use of derived datatypes: instead of manually copying data into temporary, possibly newly allocated memory, describe the data layout to MPI -- and let the implementation, or even the network HCA&#039;s hardware, do the data fetching.&lt;br /&gt;
* Bind your processes to sockets: operating systems are good at making the best use of resources -- which sometimes involves moving tasks from one core to another, or even (though more unlikely, since the OS&#039;s heuristics try to avoid it) to another socket, with the obvious effects: caches are cold, and every access to memory allocated on the previous socket &amp;quot;has to travel the bus&amp;quot;. This particularly happens if you have multiple OpenMP parallel regions separated by code that does IO -- while threads are sleeping, the processes doing IO may wander to a different socket. Bind your processes to at least the socket. All major MPIs support this binding (see below).&lt;br /&gt;
* Do not use the C++ interface: first of all, it has been marked as deprecated in the MPI-3.0 standard, since it added little benefit to C++ programmers over the C interface. Moreover, since MPI implementations are written in C, the interface adds another level of indirection and therefore a bit of overhead in terms of instructions and cache misses.&lt;br /&gt;
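The overlap technique from the list above can be sketched as follows (our own minimal example; the ring-exchange pattern and buffer size are illustrative assumptions, and the program must be compiled with an MPI wrapper such as mpicc and started with mpirun):&lt;br /&gt;

```c
#include <stdio.h>
#include <mpi.h>

#define N 1024

int main (int argc, char *argv[])
{
    int rank, size, left, right, i;
    double sendbuf[N], recvbuf[N], local = 0.0;
    MPI_Request reqs[2];

    MPI_Init (&argc, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);
    MPI_Comm_size (MPI_COMM_WORLD, &size);

    for (i = 0; i < N; i++)
        sendbuf[i] = rank;

    /* Ring exchange: initiate the communication immediately ... */
    right = (rank + 1) % size;
    left  = (rank + size - 1) % size;
    MPI_Irecv (recvbuf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend (sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... compute on data that does not depend on the transfer ... */
    for (i = 0; i < N; i++)
        local += sendbuf[i] * sendbuf[i];

    /* ... and complete both requests in one call, as recommended above. */
    MPI_Waitall (2, reqs, MPI_STATUSES_IGNORE);

    if (rank == 0)
        printf ("local sum %f, first received element %f\n", local, recvbuf[0]);

    MPI_Finalize ();
    return 0;
}
```

While the transfer is in flight, the compute loop proceeds; &amp;lt;tt&amp;gt;MPI_Waitall&amp;lt;/tt&amp;gt; then completes both requests in a single call.&lt;br /&gt;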
== Open MPI ==&lt;br /&gt;
The [http://www.open-mpi.org Open MPI] library is an open, flexible and nevertheless performant implementation of MPI-2 and MPI-3. Licensed under BSD, it is being actively developed by an open community of industry and research institutions.&lt;br /&gt;
The flexibility comes in handy: using the concept of an [http://www.open-mpi.org/faq/?category=tuning#mca-def MCA] (i.e. a plugin), Open MPI supports many different network interconnects (InfiniBand, TCP, Cray, etc.); on the other hand, an installation may be tailored to suit a site, e.g. the network (InfiniBand with specific settings), the main startup mechanism, etc.&lt;br /&gt;
Furthermore, the [http://www.open-mpi.org/faq/ FAQ] offers hints on [http://www.open-mpi.org/faq/?category=tuning performance tuning].&lt;br /&gt;
=== Usage ===&lt;br /&gt;
Like other MPI implementations, after loading the [[BwUniCluster_Environment_Modules|module]], Open MPI provides the compiler wrappers &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;mpicxx&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;mpifort&amp;lt;/tt&amp;gt; (or, for&lt;br /&gt;
versions lower than 1.7, &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;) for the C, C++ and Fortran compilers, respectively. Albeit their usage is not required, these wrappers are handy: one does not have to pass command-line options for header or library directories (&amp;lt;tt&amp;gt;-I&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-L&amp;lt;/tt&amp;gt;), nor the actually needed MPI libraries themselves.&lt;br /&gt;
=== Further information === &lt;br /&gt;
Open MPI also features a few specific functionalities that will help users and developers alike:&lt;br /&gt;
* Open MPI&#039;s tool &amp;lt;tt&amp;gt;ompi_info&amp;lt;/tt&amp;gt; allows seeing all of Open MPI&#039;s installed MCA components and their specific options.&lt;br /&gt;
Without any option, the user gets a list of the flags the Open MPI installation was compiled with (versions of compilers, specific configure flags, e.g. debugging or profiling options). Furthermore, using &amp;lt;tt&amp;gt;ompi_info --param all all&amp;lt;/tt&amp;gt; one may see all of the MCAs&#039; options, e.g. that the default PML MCA uses an initial free-list of 4 blocks (increased by 64 upon first encountering this limit):&lt;br /&gt;
&amp;lt;tt&amp;gt;ompi_info --param ob1 all&amp;lt;/tt&amp;gt; -- this value may be increased for applications that are certain to benefit from a larger value upon startup.&lt;br /&gt;
* Open MPI allows adapting MCA parameters on the command-line, e.g. for the above-mentioned parameter: &amp;lt;tt&amp;gt;mpirun -np 16 --mca pml_ob1_free_list_num 128 ./mpi_stub&amp;lt;/tt&amp;gt;.&lt;br /&gt;
* Open MPI internally uses the tool [http://www.open-mpi.org/projects/hwloc/ hwloc] for node-local processor information, as well as process and memory affinity. hwloc is also a good tool to get information on the node&#039;s processor topology and cache hierarchy. This may be used to optimize and balance memory usage, or for choosing a better ratio of MPI processes per node vs. OpenMP threads per core.&lt;br /&gt;
== Intel (i)MPI ==&lt;br /&gt;
The Intel MPI Library is a multi-fabric message passing library that implements the Message Passing Interface, v2.2 (MPI-2.2) specification. It provides a standard library across Intel&lt;br /&gt;
platforms that enables the adoption of MPI-2.2 functions as needs dictate.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The Intel MPI Library enables developers to change or to upgrade processors and interconnects as new technology becomes available without changes to the software or to the operating environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The library is provided in the following kits:&lt;br /&gt;
* The Intel MPI Library Runtime Environment (RTO) has the tools you need to run programs, including Multipurpose Daemon (MPD), Hydra and supporting utilities, shared (.so) libraries, and documentation.&lt;br /&gt;
* The Intel MPI Library Development Kit (SDK) includes all of the Runtime Environment components plus compilation tools, including compiler commands such as &amp;lt;b&amp;gt;mpiicc&amp;lt;/b&amp;gt;, include files and modules, static (.a) libraries, debug libraries, trace libraries, and test codes.&lt;br /&gt;
=== General information  ===&lt;br /&gt;
All Intel MPI modules are called &#039;&#039;&#039;mpi/impi&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
These modules provide the Intel MPI compiler wrappers (mpicc, mpicxx, mpif77&lt;br /&gt;
and mpif90) for the Intel compiler suite (icc, icpc and ifort) &amp;lt;small&amp;gt;(see also [http://software.intel.com/en-us/intel-mpi-library Intel MPI Library])&amp;lt;/small&amp;gt;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The corresponding Intel compiler module is loaded automatically (if not done before). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt;Compiler and MPI module must fit. Don&#039;t mix incongruous versions!&amp;lt;/font&amp;gt;&lt;br /&gt;
=== Usage ===&lt;br /&gt;
The following table lists available MPI compiler commands and the underlying compilers, compiler families, languages, and application binary interfaces (ABIs) that they support.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;The Intel MPI Library Compiler Drivers&amp;lt;/u&amp;gt;&lt;br /&gt;
{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Compiler Command !! Default Compiler !! Supported Language(s) !! Supported ABI&#039;s&lt;br /&gt;
|-&lt;br /&gt;
| colspan=4 style=&amp;quot;background-color:#DCDCDC;&amp;quot; | Generic Compilers &lt;br /&gt;
|-&lt;br /&gt;
| mpicc || gcc, cc  || C || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpicxx || g++ || C/C++ || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpifc || gfortran || Fortran77/Fortran 95 || 32/64 bit&lt;br /&gt;
|-&lt;br /&gt;
| colspan=4 style=&amp;quot;background-color:#DCDCDC;&amp;quot; | [[GCC|GNU Compiler]] Versions 3 and higher &lt;br /&gt;
|-&lt;br /&gt;
| mpigcc || gcc || C || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpigxx || g++  || C/C++ || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpif77 || g77 || Fortran 77 || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpif90 || gfortran || Fortran 95 || 32/64 bit&lt;br /&gt;
|-&lt;br /&gt;
| colspan=4 style=&amp;quot;background-color:#DCDCDC;&amp;quot; | [[Intel_Compiler|&#039;&#039;&#039;Intel Fortran, C++ Compilers&#039;&#039;&#039;]] Versions 13.1 through 14.0 and Higher&lt;br /&gt;
|-&lt;br /&gt;
| mpiicc || icc || C || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpiicpc || icpc || C++ || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
| mpiifort || ifort || Fortran77/Fortran 95 || 32/64 bit &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
* Compiler commands are available only in the Intel MPI Library Development Kit.&lt;br /&gt;
* Compiler commands are in the &amp;lt;installdir&amp;gt;/&amp;lt;arch&amp;gt;/bin directory, where &amp;lt;installdir&amp;gt; refers to the Intel MPI Library installation directory (depending on the loaded mpi module) and &amp;lt;arch&amp;gt; is one of the following architectures:&lt;br /&gt;
** ia32 - IA-32 architecture&lt;br /&gt;
** intel64 - Intel 64 architecture&lt;br /&gt;
** mic - Intel Xeon Phi™ Coprocessor architecture&lt;br /&gt;
* Ensure that the corresponding underlying compilers (32-bit or 64-bit, as appropriate) are already in your PATH. This is normally done by the &#039;module load&#039; command. &lt;br /&gt;
* To port existing MPI-enabled applications to the Intel MPI Library, recompile all sources.&lt;br /&gt;
* To display mini-help of a compiler command, execute it without any parameters.&lt;br /&gt;
&lt;br /&gt;
=== Further information ===&lt;br /&gt;
==== Intel MPI without multithreading on the bwUniCluster ====&lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features#Intel_MPI_without_Multithreading|Additional Infos about Intel MPI without Multithreading]]&lt;br /&gt;
==== Sample MPI program in &amp;quot;C&amp;quot; ====&lt;br /&gt;
The following example  code includes a sample MPI program written in C.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
/******************************************************************************&lt;br /&gt;
 * Content: (example version)&lt;br /&gt;
 *      Based on a Monte Carlo method, this MPI sample code uses volumes to&lt;br /&gt;
 *      estimate the value of pi.&lt;br /&gt;
 *****************************************************************************/&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
#include &amp;lt;time.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// here you&#039;ll include the MPI library&#039;s main header&lt;br /&gt;
#include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#define MASTER 0&lt;br /&gt;
#define TAG_HELLO 4&lt;br /&gt;
#define TAG_TEST 5&lt;br /&gt;
#define TAG_TIME 6&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char *argv[]) {&lt;br /&gt;
  int i, id, remote_id, num_procs;&lt;br /&gt;
   &lt;br /&gt;
  MPI_Status stat;&lt;br /&gt;
  int namelen;&lt;br /&gt;
  char name[MPI_MAX_PROCESSOR_NAME];&lt;br /&gt;
&lt;br /&gt;
  // Start MPI.&lt;br /&gt;
  if (MPI_Init (&amp;amp;argc, &amp;amp;argv) != MPI_SUCCESS) {&lt;br /&gt;
      printf (&amp;quot;Failed to initialize MPI\n&amp;quot;);&lt;br /&gt;
      return (-1);&lt;br /&gt;
  }&lt;br /&gt;
  // Create the communicator, and retrieve the number of processes.&lt;br /&gt;
  MPI_Comm_size (MPI_COMM_WORLD, &amp;amp;num_procs);&lt;br /&gt;
&lt;br /&gt;
  // Determine the rank of the process.&lt;br /&gt;
  MPI_Comm_rank (MPI_COMM_WORLD, &amp;amp;id);&lt;br /&gt;
&lt;br /&gt;
  // Get machine name&lt;br /&gt;
  MPI_Get_processor_name (name, &amp;amp;namelen);&lt;br /&gt;
  &lt;br /&gt;
  if (id == MASTER) {&lt;br /&gt;
      printf (&amp;quot;Hello world: rank %d of %d running on %s\n&amp;quot;, id, num_procs, name);&lt;br /&gt;
&lt;br /&gt;
  for (i = 1; i&amp;lt;num_procs; i++) {	&lt;br /&gt;
      MPI_Recv (&amp;amp;remote_id, 1, MPI_INT, i, TAG_HELLO, MPI_COMM_WORLD, &amp;amp;stat);	&lt;br /&gt;
      MPI_Recv (&amp;amp;num_procs, 1, MPI_INT, i, TAG_HELLO, MPI_COMM_WORLD, &amp;amp;stat);  		&lt;br /&gt;
      MPI_Recv (&amp;amp;namelen, 1, MPI_INT, i, TAG_HELLO, MPI_COMM_WORLD, &amp;amp;stat);			&lt;br /&gt;
      MPI_Recv (name, namelen+1, MPI_CHAR, i, TAG_HELLO, MPI_COMM_WORLD, &amp;amp;stat);&lt;br /&gt;
			&lt;br /&gt;
      printf (&amp;quot;Hello world: rank %d of %d running on %s\n&amp;quot;, remote_id, num_procs, name);&lt;br /&gt;
      }&lt;br /&gt;
  }&lt;br /&gt;
  else {	    &lt;br /&gt;
      MPI_Send (&amp;amp;id, 1, MPI_INT, MASTER, TAG_HELLO, MPI_COMM_WORLD);&lt;br /&gt;
      MPI_Send (&amp;amp;num_procs, 1, MPI_INT, MASTER, TAG_HELLO, MPI_COMM_WORLD);&lt;br /&gt;
      MPI_Send (&amp;amp;namelen, 1, MPI_INT, MASTER, TAG_HELLO, MPI_COMM_WORLD);&lt;br /&gt;
      MPI_Send (name, namelen+1, MPI_CHAR, MASTER, TAG_HELLO, MPI_COMM_WORLD);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
   // Rank 0 distributes seeds randomly to all processes.&lt;br /&gt;
  double startprocess, endprocess;&lt;br /&gt;
&lt;br /&gt;
  int distributed_seed = 0;&lt;br /&gt;
  int *buff;&lt;br /&gt;
&lt;br /&gt;
  buff = (int *)malloc(num_procs * sizeof(int));&lt;br /&gt;
	&lt;br /&gt;
  unsigned int MAX_NUM_POINTS = pow (2,32) - 1;&lt;br /&gt;
  unsigned int num_local_points = MAX_NUM_POINTS / num_procs;&lt;br /&gt;
&lt;br /&gt;
  if (id == MASTER) {		  &lt;br /&gt;
      srand (time(NULL));&lt;br /&gt;
  &lt;br /&gt;
      for (i=0; i&amp;lt;num_procs; i++) {           &lt;br /&gt;
          distributed_seed = rand();&lt;br /&gt;
          buff[i] = distributed_seed;&lt;br /&gt;
     }  &lt;br /&gt;
  }&lt;br /&gt;
  // Broadcast the seed to all processes&lt;br /&gt;
  MPI_Bcast(buff, num_procs, MPI_INT, MASTER, MPI_COMM_WORLD);&lt;br /&gt;
&lt;br /&gt;
  // At this point, every process (including rank 0) has a different seed. Using its seed,&lt;br /&gt;
  // each process generates its points randomly in a slab of x-width 1/num_procs of the unit cube&lt;br /&gt;
  startprocess = MPI_Wtime();&lt;br /&gt;
&lt;br /&gt;
  srand (buff[id]);&lt;br /&gt;
&lt;br /&gt;
  unsigned int point = 0;&lt;br /&gt;
  unsigned int rand_MAX = 128000;&lt;br /&gt;
  float p_x, p_y, p_z;&lt;br /&gt;
  float temp, temp2, pi;&lt;br /&gt;
  double result;&lt;br /&gt;
  unsigned int inside = 0, total_inside = 0;&lt;br /&gt;
&lt;br /&gt;
  for (point=0; point&amp;lt;num_local_points; point++) {&lt;br /&gt;
      temp = (rand() % (rand_MAX+1));&lt;br /&gt;
      p_x = temp / rand_MAX;&lt;br /&gt;
      p_x = p_x / num_procs;&lt;br /&gt;
      &lt;br /&gt;
      temp2 = (float)id / num_procs;	// id belongs to 0, num_procs-1&lt;br /&gt;
      p_x += temp2;&lt;br /&gt;
      &lt;br /&gt;
      temp = (rand() % (rand_MAX+1));&lt;br /&gt;
      p_y = temp / rand_MAX;&lt;br /&gt;
      &lt;br /&gt;
      temp = (rand() % (rand_MAX+1));&lt;br /&gt;
      p_z = temp / rand_MAX;&lt;br /&gt;
&lt;br /&gt;
      // Compute the number of points residing inside of the 1/8 of the sphere&lt;br /&gt;
      result = p_x * p_x + p_y * p_y + p_z * p_z;&lt;br /&gt;
&lt;br /&gt;
      if (result &amp;lt;= 1) inside++;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  double elapsed = MPI_Wtime() - startprocess;&lt;br /&gt;
&lt;br /&gt;
  MPI_Reduce (&amp;amp;inside, &amp;amp;total_inside, 1, MPI_UNSIGNED, MPI_SUM, MASTER, MPI_COMM_WORLD);&lt;br /&gt;
&lt;br /&gt;
#ifdef DEBUG&lt;br /&gt;
  printf (&amp;quot;rank %d counts %u points inside the sphere\n&amp;quot;, id, inside);&lt;br /&gt;
#endif&lt;br /&gt;
  if (id == MASTER) {&lt;br /&gt;
      double timeprocess[num_procs];&lt;br /&gt;
&lt;br /&gt;
      timeprocess[MASTER] = elapsed;&lt;br /&gt;
      printf(&amp;quot;Elapsed time from rank %d: %10.2f (sec)\n&amp;quot;, MASTER, timeprocess[MASTER]);&lt;br /&gt;
      for (i=1; i&amp;lt;num_procs; i++) {&lt;br /&gt;
	  // Rank 0 waits for elapsed time value &lt;br /&gt;
	  MPI_Recv (&amp;amp;timeprocess[i], 1, MPI_DOUBLE, i, TAG_TIME, MPI_COMM_WORLD, &amp;amp;stat); &lt;br /&gt;
	  printf(&amp;quot;Elapsed time from rank %d: %10.2f (sec)\n&amp;quot;, i, timeprocess[i]);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      temp = 6 * (float)total_inside;&lt;br /&gt;
      pi = temp / MAX_NUM_POINTS;   &lt;br /&gt;
      printf ( &amp;quot;Out of %u points, there are %u points inside the sphere =&amp;gt; pi=%16.12f\n&amp;quot;,&lt;br /&gt;
          MAX_NUM_POINTS, total_inside, pi);&lt;br /&gt;
  }&lt;br /&gt;
  else {&lt;br /&gt;
      // Send back the processing time (in second)&lt;br /&gt;
      MPI_Send (&amp;amp;elapsed, 1, MPI_DOUBLE, MASTER, TAG_TIME, MPI_COMM_WORLD);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  free(buff);&lt;br /&gt;
&lt;br /&gt;
  // Terminate MPI.&lt;br /&gt;
  MPI_Finalize();&lt;br /&gt;
  &lt;br /&gt;
  return 0;&lt;br /&gt;
}&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
= Hybrid Parallelization = &lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4916</id>
		<title>BWUniCluster User Access Members Uni Mannheim</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4916"/>
		<updated>2017-06-27T13:16:49Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Entitlement for bwUniCluster and bwFileStorage  @ Uni Mannheim =&lt;br /&gt;
&lt;br /&gt;
Members of the University of Mannheim need to apply with a valid account for an Entitlement to use bwHPC resources.&lt;br /&gt;
Therefore, please use this web registration service:&lt;br /&gt;
&lt;br /&gt;
[https://sp-grid-webregistration.uni-mannheim.de https://sp-grid-webregistration.uni-mannheim.de]&lt;br /&gt;
&lt;br /&gt;
The web registration service asks for a project description which is used for internal documentation only. Please also indicate that you want access to &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; or &#039;&#039;&#039;bwFileStorage&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
With the approval of your request for the HPC Entitlement you will be able to register for &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and &#039;&#039;&#039;bwFileStorage&#039;&#039;&#039; too.&lt;br /&gt;
&lt;br /&gt;
In case of questions contact [mailto:hpc-support@mailman.uni-mannheim.de hpc-support@mailman.uni-mannheim.de]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4915</id>
		<title>BWUniCluster User Access Members Uni Mannheim</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4915"/>
		<updated>2017-06-27T13:16:08Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Entitlement for bwUniCluster and bwFileStorage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Entitlement for bwUniCluster and bwFileStorage  @ Uni Mannheim =&lt;br /&gt;
&lt;br /&gt;
Members of the University of Mannheim need a valid university account to apply for an Entitlement to use bwHPC resources.&lt;br /&gt;
Therefore, please use this web registration service:&lt;br /&gt;
&lt;br /&gt;
[https://sp-grid-webregistration.uni-mannheim.de https://sp-grid-webregistration.uni-mannheim.de]&lt;br /&gt;
&lt;br /&gt;
The web registration service asks for a project description which is used for internal documentation only. Please also indicate that you want access to bwUniCluster.&lt;br /&gt;
&lt;br /&gt;
With the approval of your request for the HPC Entitlement you will be able to register for bwUniCluster and bwFileStorage too.&lt;br /&gt;
&lt;br /&gt;
In case of questions contact [mailto:hpc-support@mailman.uni-mannheim.de hpc-support@mailman.uni-mannheim.de]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4914</id>
		<title>BWUniCluster User Access Members Uni Mannheim</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4914"/>
		<updated>2017-06-27T13:15:27Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Entitlement for bwUniCluster and bwFileStorage =&lt;br /&gt;
&lt;br /&gt;
Members of the University of Mannheim need a valid university account to apply for an Entitlement to use bwHPC resources.&lt;br /&gt;
Therefore, please use this web registration service:&lt;br /&gt;
&lt;br /&gt;
[https://sp-grid-webregistration.uni-mannheim.de https://sp-grid-webregistration.uni-mannheim.de]&lt;br /&gt;
&lt;br /&gt;
The web registration service asks for a project description which is used for internal documentation only. Please also indicate that you want access to bwUniCluster.&lt;br /&gt;
&lt;br /&gt;
With the approval of your request for the HPC Entitlement you will be able to register for bwUniCluster and bwFileStorage too.&lt;br /&gt;
&lt;br /&gt;
In case of questions contact [mailto:hpc-support@mailman.uni-mannheim.de hpc-support@mailman.uni-mannheim.de]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwFileStorage_User_Access&amp;diff=4913</id>
		<title>BwFileStorage User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwFileStorage_User_Access&amp;diff=4913"/>
		<updated>2017-06-27T13:14:21Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Step A: bwLSDF-FileService entitlement for registration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;bwFileStorage&#039;&#039;&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwLSDF-FileService entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwLSDF-FileService entitlement for registration ==&lt;br /&gt;
Each university issues the bwLSDF-FileService entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for its own members. Details on the issuing process and/or the entitlement application forms are listed below:  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- * [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/zugang-zu-den-ressourcen.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[‎BWUniCluster_User_Access_Members_Uni_Heidelberg|Ruprecht-Karls-Universität Heidelberg]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* [http://www.kim.uni-hohenheim.de/bwhpc University of Hohenheim]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., after issuing the bwLSDF-FileService entitlement, please visit: &lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwFileStorage&#039;&#039;&#039;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* non-KIT members &#039;&#039;&#039;must&#039;&#039;&#039; set a service password for authentication on bwFileStorage&lt;br /&gt;
*#* KIT members &#039;&#039;&#039;may optionally&#039;&#039;&#039; set a service password for authentication on bwFileStorage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwFileStorager&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members the bwFileStorage &#039;&#039;&#039;password&#039;&#039;&#039; used to log on matches that of your KIT &lt;br /&gt;
account, while for non-KIT members it is the service password you set during the web registration (compare step 7 of chapter 1.2). &lt;br /&gt;
At any time, you can set a new bwFileStorage password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find on the left side &#039;&#039;&#039;bwFileStorage&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service password (i.e. your bwFileStorage password), repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# the page confirms with e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwFileStorage_User_Access&amp;diff=4912</id>
		<title>BwFileStorage User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwFileStorage_User_Access&amp;diff=4912"/>
		<updated>2017-06-27T13:09:53Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: Created page with &amp;quot;= Registration =  Granting access and issuing a user account for &amp;#039;&amp;#039;&amp;#039;&amp;#039;&amp;#039;bwFileStorage&amp;#039;&amp;#039;&amp;#039;&amp;#039;&amp;#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;bwFileStorage&#039;&#039;&#039;&#039;&#039; requires the registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwLSDF-FileService entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwLSDF-FileService entitlement for registration ==&lt;br /&gt;
Each university issues the bwLSDF-FileService entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for its own members. Details on the issuing process and/or the entitlement application forms are listed below:  &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/zugang-zu-den-ressourcen.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[‎BWUniCluster_User_Access_Members_Uni_Heidelberg|Ruprecht-Karls-Universität Heidelberg]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* [http://www.kim.uni-hohenheim.de/bwhpc University of Hohenheim]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., after issuing the bwLSDF-FileService entitlement, please visit: &lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwFileStorage&#039;&#039;&#039;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* non-KIT members &#039;&#039;&#039;must&#039;&#039;&#039; set a service password for authentication on bwFileStorage&lt;br /&gt;
*#* KIT members &#039;&#039;&#039;may optionally&#039;&#039;&#039; set a service password for authentication on bwFileStorage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwFileStorager&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members the bwFileStorage &#039;&#039;&#039;password&#039;&#039;&#039; used to log on matches that of your KIT &lt;br /&gt;
account, while for non-KIT members it is the service password you set during the web registration (compare step 7 of chapter 1.2). &lt;br /&gt;
At any time, you can set a new bwFileStorage password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find on the left side &#039;&#039;&#039;bwFileStorage&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service password (i.e. your bwFileStorage password), repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# the page confirms with e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;The password has been changed for this service&amp;quot;)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4910</id>
		<title>BWUniCluster User Access Members Uni Mannheim</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4910"/>
		<updated>2017-06-27T12:53:21Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Members of the University of Mannheim need a valid university account to apply for an Entitlement to use bwHPC resources.&lt;br /&gt;
Therefore, please use this web registration service:&lt;br /&gt;
&lt;br /&gt;
[https://sp-grid-webregistration.uni-mannheim.de https://sp-grid-webregistration.uni-mannheim.de]&lt;br /&gt;
&lt;br /&gt;
The web registration service asks for a project description which is used for internal documentation only. Please also indicate that you want access to bwUniCluster.&lt;br /&gt;
&lt;br /&gt;
With the approval of your request for the HPC Entitlement you will be able to register for bwUniCluster and bwFileStorage too.&lt;br /&gt;
&lt;br /&gt;
In case of questions contact [mailto:hpc-support@mailman.uni-mannheim.de hpc-support@mailman.uni-mannheim.de]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4763</id>
		<title>BWUniCluster User Access Members Uni Mannheim</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=4763"/>
		<updated>2017-02-22T14:15:47Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Members of the University of Mannheim need a valid university account to apply for an Entitlement to use bwHPC resources.&lt;br /&gt;
Therefore, please use this web registration service:&lt;br /&gt;
&lt;br /&gt;
[https://sp-grid-webregistration.uni-mannheim.de https://sp-grid-webregistration.uni-mannheim.de]&lt;br /&gt;
&lt;br /&gt;
The web registration service asks for a project description which is used for internal documentation only. Please also indicate that you want access to bwUniCluster.&lt;br /&gt;
&lt;br /&gt;
With the approval of your registration for bwGRiD Mannheim/Heidelberg you receive the entitlement to register for bwUniCluster, too.&lt;br /&gt;
&lt;br /&gt;
In case of questions contact [mailto:hpc-support@mailman.uni-mannheim.de hpc-support@mailman.uni-mannheim.de]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Workspace&amp;diff=4639</id>
		<title>Workspace</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Workspace&amp;diff=4639"/>
		<updated>2016-12-20T16:55:52Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Workspace tools&#039;&#039;&#039; provide temporary scratch space, so-called &#039;&#039;&#039;workspaces&#039;&#039;&#039;, for your calculations on a central file storage. They are meant to keep data for a limited time – usually longer than the run time of a single job. Workspaces are not meant for permanent storage; hence data in workspaces is not backed up and may be lost in case of problems on the storage system. Please copy/move important results to $HOME or to storage outside the cluster. &lt;br /&gt;
&lt;br /&gt;
== Create workspace ==&lt;br /&gt;
To create a workspace you need to state the &#039;&#039;name&#039;&#039; of your workspace and its &#039;&#039;lifetime&#039;&#039; in days. A maximum value for the &#039;&#039;lifetime&#039;&#039; and a maximum number of renewals are defined on each cluster.  Execution of:&lt;br /&gt;
&lt;br /&gt;
   $ ws_allocate blah 30&lt;br /&gt;
&lt;br /&gt;
e.g. returns:&lt;br /&gt;
 &lt;br /&gt;
   Workspace created. Duration is 720 hours. &lt;br /&gt;
   Further extensions available: 3&lt;br /&gt;
   /work/workspace/scratch/username-blah-0&lt;br /&gt;
&lt;br /&gt;
For more information read the program&#039;s help, i.e. &#039;&#039;$ ws_allocate -h&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== List all your workspaces ==&lt;br /&gt;
To list all your workspaces, execute:&lt;br /&gt;
&lt;br /&gt;
   $ ws_list&lt;br /&gt;
&lt;br /&gt;
which will return:&lt;br /&gt;
* Workspace ID&lt;br /&gt;
* Workspace location&lt;br /&gt;
* available extensions&lt;br /&gt;
* creation date and remaining time&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Find workspace location ==&lt;br /&gt;
&lt;br /&gt;
The location/path of any workspace can be queried by its &#039;&#039;ID&#039;&#039; using &#039;&#039;&#039;ws_find&#039;&#039;&#039;; for the workspace &#039;&#039;blah&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
   $ ws_find blah&lt;br /&gt;
&lt;br /&gt;
returns the one-liner:&lt;br /&gt;
&lt;br /&gt;
   /work/workspace/scratch/username-blah-0&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
== Extend lifetime of your workspace ==&lt;br /&gt;
&lt;br /&gt;
Any workspace&#039;s lifetime can only be extended a cluster-specific number of times. There are several commands to extend a workspace&#039;s lifetime:&lt;br /&gt;
#&amp;lt;pre&amp;gt;$ ws_extend blah 40&amp;lt;/pre&amp;gt; which extends workspace ID &#039;&#039;blah&#039;&#039; by &#039;&#039;40&#039;&#039; days from now,&lt;br /&gt;
#&amp;lt;pre&amp;gt;$ ws_extend blah&amp;lt;/pre&amp;gt; which extends workspace ID &#039;&#039;blah&#039;&#039; by the number of days used previously,&lt;br /&gt;
#&amp;lt;pre&amp;gt;$ ws_allocate -x blah 40&amp;lt;/pre&amp;gt; which extends workspace ID &#039;&#039;blah&#039;&#039; by &#039;&#039;40&#039;&#039; days from now.&lt;br /&gt;
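The commands above can be combined in a batch job script. A minimal sketch follows; the #MSUB directives, the workspace name ''blah'' and the result file name are illustrative assumptions and must be adapted to your cluster and job:

```shell
#!/bin/bash
#MSUB -l nodes=1:ppn=1
#MSUB -l walltime=24:00:00
# Create a workspace named "blah" with a lifetime of 30 days
ws_allocate blah 30
# Look up its path and change into it
WSDIR=$(ws_find blah)
cd "$WSDIR"
# ... run the actual computation here, writing output into the workspace ...
# Workspaces are not backed up: copy important results to permanent storage
cp results.dat "$HOME"/
# Release the workspace once the data has been saved
ws_release blah
```

Note that reusing an existing workspace name simply returns the existing path, so the script can be resubmitted safely.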
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Delete a workspace ==&lt;br /&gt;
&lt;br /&gt;
   $ ws_release blah # Manually erase your workspace blah&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Software/R&amp;diff=4606</id>
		<title>BwUniCluster2.0/Software/R</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Software/R&amp;diff=4606"/>
		<updated>2016-11-11T11:27:03Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Program Binaries */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| math/R&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| GPL&lt;br /&gt;
|-&lt;br /&gt;
| Citing &lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.r-project.org/ Homepage] &amp;amp;#124; [http://cran.r-project.org/manuals.html  Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|-&lt;br /&gt;
| Plugins&lt;br /&gt;
| User dependent&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&#039;&#039;&#039;R&#039;&#039;&#039; is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&amp;amp;T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R.&lt;br /&gt;
&lt;br /&gt;
R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, …) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity.&lt;br /&gt;
&lt;br /&gt;
One of R’s strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control.&lt;br /&gt;
&lt;br /&gt;
R is available as Free Software under the terms of the Free Software Foundation’s GNU General Public License in source code form. It compiles and runs on a wide variety of UNIX platforms and similar systems (including FreeBSD and Linux), Windows and MacOS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/math/R&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=280&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
On the command line interface of any bwHPC cluster, a list of the available R versions can be obtained using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail math/R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
The R installation also provides the standalone library libRmath. This library allows you to access R routines from your own C or C++ programs (see section 9 of the &#039;R Installation and Administration&#039; manual.)&lt;br /&gt;
&lt;br /&gt;
== Loading the module ==&lt;br /&gt;
You can load the default version of R with the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load math/R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The module will try to load the modules it needs to function (e.g. compiler/intel). If loading the module fails, check whether you have already loaded one of those modules in a version other than the one R requires.&lt;br /&gt;
&lt;br /&gt;
If you wish to load a specific (older) version, you can do so using e.g. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load math/R/3.1.2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to load the version 3.1.2.&lt;br /&gt;
&lt;br /&gt;
== Program Binaries ==&lt;br /&gt;
Standard usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Usage: R [options] [&amp;lt; infile] [&amp;gt; outfile]&lt;br /&gt;
    R CMD command [arguments]&lt;br /&gt;
  &lt;br /&gt;
Example: R CMD BATCH script.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Executing R in batch mode:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
R CMD BATCH --no-save --no-restore &amp;lt;INPUT_FILE&amp;gt;.R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For help run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
R --help&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For command help run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
R CMD command --help&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Further information and help&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Man pages:          man R               man Rscript&lt;br /&gt;
Info pages, e.g.:   info R-intro        info R-FAQ&lt;br /&gt;
Manuals:            $R_DOC_DIR/manual&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Multithreading in R ==&lt;br /&gt;
&lt;br /&gt;
An easy way to use multiple cores on a single node in R is to use the [https://cran.r-project.org/web/packages/doParallel/vignettes/gettingstartedParallel.pdf doParallel] package in combination with [https://cran.r-project.org/web/packages/foreach/vignettes/foreach.pdf foreach].&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
As with all processes that require more than a few minutes to run, non-trivial compute jobs must be submitted to the cluster queuing system.&lt;br /&gt;
&lt;br /&gt;
Example scripts are available in the directory $R_EXA_DIR:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module show math/R                      # show environment variables, which will be available after &#039;module load&#039;&lt;br /&gt;
$ module load math/R                      # load module&lt;br /&gt;
$ ls $R_EXA_DIR                           # show content of directory $R_EXA_DIR&lt;br /&gt;
$ cat $R_EXA_DIR/README                   # show examples README&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run a first simple example job&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load math/R                                       # load module&lt;br /&gt;
$ mkdir Rtest                                              # create test directory&lt;br /&gt;
$ cp $R_EXA_DIR/bwhpc-r.moab $R_EXA_DIR/fit.R Rtest/       # copy example files to test directory&lt;br /&gt;
$ cd Rtest/                                                # change to directory&lt;br /&gt;
$ nano bwhpc-r.moab                                        # change job options, quit with &#039;CTRL+X&#039;&lt;br /&gt;
$ msub bwhpc-r.moab                                        # submit job&lt;br /&gt;
$ checkjob -v &amp;lt;JOBID&amp;gt;                                      # check state of job&lt;br /&gt;
$ ls                                                       # when the job finishes, the results will be visible in this directory&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Installing R-Packages into your home folder =&lt;br /&gt;
Since we cannot provide a software module for every R package, we recommend installing additional R packages locally in your home folder. One way of doing this is shown below: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cp $HOME/.bashrc $HOME/.bashrc.backup                                    # Make a backup copy of your bashrc&lt;br /&gt;
echo &amp;quot;export R_LIBS=\&amp;quot;${HOME}/R_libs\&amp;quot;&amp;quot; &amp;gt;&amp;gt; $HOME/.bashrc                 # Setting the environment variable R_LIBS permanently in your bashrc&lt;br /&gt;
source $HOME/.bashrc                                                     # Sourcing bashrc to make R_LIBS available &lt;br /&gt;
mkdir $R_LIBS                                                            # Create the R_libs folder in your HOME directory  &lt;br /&gt;
module load math/R                                                       # Loading the R software module&lt;br /&gt;
R                                                                        # Starting R&lt;br /&gt;
install.packages(&#039;package_name&#039;, repos=&amp;quot;http://cran.r-project.org&amp;quot;)      # Installing your R package and its dependencies &lt;br /&gt;
library(package_name)                                                    # Loading the package into your R session&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The package is now installed permanently in your home folder and is available every time you start R. &lt;br /&gt;
&lt;br /&gt;
If something goes wrong, you can restore your old .bashrc with: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mv $HOME/.bashrc.backup $HOME/.bashrc      # Restoring the original bashrc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Installed packages can be removed by deleting the folder ${HOME}/R_libs.&lt;br /&gt;
&lt;br /&gt;
= Version-Specific Information =&lt;br /&gt;
&lt;br /&gt;
For information specific to a single version, see the information available via the module system with the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module help math/R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Installed R plugins =&lt;br /&gt;
* [[CummeRbund_(R-package)|Bioinformatics: cummeRbund]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:Mathematics software]][[Category:bwUniCluster]][[Category:BwForCluster_BinAC]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access_Members_Uni_Mannheim&amp;diff=3211</id>
		<title>BwForCluster User Access Members Uni Mannheim</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access_Members_Uni_Mannheim&amp;diff=3211"/>
		<updated>2015-12-03T13:15:00Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Terms of Use */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Access to bwForCluster for members of Mannheim University =&lt;br /&gt;
&lt;br /&gt;
== Login prerequisites == &lt;br /&gt;
&lt;br /&gt;
* A valid RUM account is required for access to the bwForCluster. &lt;br /&gt;
&lt;br /&gt;
* As part of the login procedure, you will get a page that shows the data on your account that will be transferred from Mannheim University to the cluster. This page shows your entitlements. It should list the entitlement: &amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster &amp;lt;/pre&amp;gt;  If it does not, please go to [http://www.uni-mannheim.de/rum/ueber_uns/arbeitsgruppen/zs/hpc/bwgrid_cluster/zugang/  HPC-Zugang] for further information.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Terms of Use ==&lt;br /&gt;
&lt;br /&gt;
As with any other RUM service, users of bwForCluster must strictly adhere to the [http://www.uni-mannheim.de/ionas/rum/ueber_uns/arbeitsgruppen/zs/webdienste/verwaltungs_und_benutzerordnung/ terms of use Mannheim University].&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=2451</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=2451"/>
		<updated>2015-07-30T18:09:10Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Interactiv Job Monitoring per Node */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster additional msub Options&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:15%;height=20px; text-align:left;padding:3px&amp;quot;|Command line &lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;|Script&lt;br /&gt;
! style=&amp;quot;width:65%;height=20px; text-align:left;padding:3px&amp;quot;|Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot; | -I&lt;br /&gt;
| &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot; | Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. The details are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;, style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=2:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
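The queue limits in the table above can be illustrated with a small helper script. This is only a sketch for this article, not a tool provided on the cluster: it converts a requested walltime of the form [d:]hh:mm:ss to seconds and picks the matching thin-node queue class (develop, singlenode, or verylong).

```shell
#!/bin/bash
# Illustrative only: map a requested walltime to a thin-node queue class,
# following the walltime limits in the queue table above.

# Convert [d:]hh:mm:ss to seconds; the 10# prefix avoids octal parsing
# of fields with leading zeros such as "08".
to_seconds() {
  local IFS=':'
  set -- $1
  if [ $# -eq 4 ]; then
    echo $(( (10#$1 * 24 + 10#$2) * 3600 + 10#$3 * 60 + 10#$4 ))
  else
    echo $(( 10#$1 * 3600 + 10#$2 * 60 + 10#$3 ))
  fi
}

# develop: up to 00:30:00; singlenode: up to 3 days; verylong: beyond.
pick_queue() {
  local s
  s=$(to_seconds "$1")
  if   [ "$s" -le "$(to_seconds 00:30:00)" ];   then echo develop
  elif [ "$s" -le "$(to_seconds 3:00:00:00)" ]; then echo singlenode
  else echo verylong
  fi
}

pick_queue 00:10:00     # prints: develop
pick_queue 2:00:00:00   # prints: singlenode
pick_queue 4:00:00:00   # prints: verylong
```

Remember that the queue class still has to be passed explicitly via msub -q, since requested resources are not always mapped automatically.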
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Environment Variables for Batch Jobs|common set of MOAB environment variables]] by the following variable(s):&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | MOAB_SUBMITDIR&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Since the work load manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following environment variables of SLURM are added to your environment once your job has started:&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Interactive Job Monitoring per Node ==&lt;br /&gt;
&lt;br /&gt;
By default, nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh. The ssh access is limited by the set walltime. To get the nodes of your job, read the environment variable SLURM_JOB_NODELIST during the runtime of the job. It contains all nodes in a shortened form, e.g. &#039;&#039;uc1n[344,386]&#039;&#039; or &#039;&#039;uc1n[344-345]&#039;&#039;. To expand this string to &#039;&#039;uc1n344 uc1n345&#039;&#039; you can use the command expandnodes like:&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
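On the cluster, the provided expandnodes command should be used for this. The expansion it performs can be sketched in plain bash; the function below is only an illustration for simple cases (a single bracket group with comma-separated entries and ranges), not a replacement for the real tool.

```shell
#!/bin/bash
# Illustrative sketch of what expandnodes does: expand a shortened SLURM
# node list such as uc1n[344,386] or uc1n[344-345] into host names.
expand_nodes() {
  local list=$1
  # No bracket group: the list is already a single host name.
  if [[ $list != *\[* ]]; then
    echo "$list"
    return
  fi
  local prefix=${list%%\[*}      # e.g. uc1n
  local body=${list#*\[}         # e.g. 344-345]
  body=${body%\]}                # e.g. 344-345
  local out=() parts part i
  IFS=',' read -ra parts <<< "$body"
  for part in "${parts[@]}"; do
    if [[ $part == *-* ]]; then
      local lo=${part%-*} hi=${part#*-}
      for (( i = 10#$lo; i <= 10#$hi; i++ )); do
        out+=( "$prefix$i" )
      done
    else
      out+=( "$prefix$part" )
    fi
  done
  echo "${out[*]}"
}

expand_nodes 'uc1n[344,386]'   # prints: uc1n344 uc1n386
expand_nodes 'uc1n[344-345]'   # prints: uc1n344 uc1n345
```

The resulting host names can then be used for ssh logins to the nodes of an exclusively running job.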
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI parallel programs run faster than serial programs on multi CPU and multi core systems. N-fold spawned processes of the MPI program, i.e., &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;,  run simultaneously and communicate via the Message Passing Interface (MPI) paradigm. MPI tasks do not share memory but can be spawned over different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 2px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add the mpirun option &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames. &lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To launch 32 Intel MPI tasks on 4 nodes, each task requiring 1000 MByte of memory, with a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Multithreaded + MPI parallel programs operate faster than serial programs on multiple CPUs with multiple cores. All threads of one process share resources such as memory. By contrast, MPI tasks do not share memory but can be spawned over different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a job script &#039;&#039;job_impi_omp.sh&#039;&#039; that runs the tenfold-threaded Intel MPI program &#039;&#039;impi_omp_program&#039;&#039; with 8 tasks, requiring 32000 MByte of total physical memory per task and a total wall clock time of 6 hours, looks like: &lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding &amp;quot;domain=omp&amp;quot; -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
When using the Intel compiler, the environment variable KMP_AFFINITY enables the binding of threads to specific cores. If you run only one MPI task per node, please set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
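The task count used by mpiexec.hydra in the script above follows from simple arithmetic on the core and thread counts. As a sketch for the example request of nodes=4:ppn=20 with OMP_NUM_THREADS=10 (in a real job, MOAB_PROCCOUNT is set by MOAB, not by hand):

```shell
#!/bin/bash
# Example values for a nodes=4:ppn=20 request; in a real batch job
# MOAB_PROCCOUNT is provided by MOAB.
MOAB_PROCCOUNT=80        # 4 nodes x 20 cores
OMP_NUM_THREADS=10

# One MPI task per group of OMP_NUM_THREADS cores.
TASK_COUNT=$(( MOAB_PROCCOUNT / OMP_NUM_THREADS ))
echo "$TASK_COUNT"       # prints: 8

# Tasks per node, matching the -ppn 2 option in MPIRUN_OPTIONS above.
TASKS_PER_NODE=$(( TASK_COUNT / 4 ))
echo "$TASKS_PER_NODE"   # prints: 2
```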
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; shows the bindings between MPI tasks and nodes (not very beneficial). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to a particular processor; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above examples (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. Consider a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours; execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub  -I  -V  -l nodes=1:ppn=1 -l pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V exports all environment variables to the compute node of the interactive session.&lt;br /&gt;
After execution of this command, &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will be automatically logged in to the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In those situations it is recommended to solve the problem with a job chain. A job chain is a sequence of jobs where each job automatically starts its successor. &lt;br /&gt;
&lt;br /&gt;
{{bwFrameA|&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple job run template for bwUniCluster     ##&lt;br /&gt;
## to run chain jobs with MOAB                  ##&lt;br /&gt;
##################################################&lt;br /&gt;
##&lt;br /&gt;
## usage :&lt;br /&gt;
##         msub -v myloop_counter=0 ./moab_chain_job.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MSUB -l nodes=1:ppn=1&lt;br /&gt;
#MSUB -l walltime=00:00:05&lt;br /&gt;
#MSUB -l pmem=50mb&lt;br /&gt;
#MSUB -q develop&lt;br /&gt;
#MSUB -N chain&lt;br /&gt;
&lt;br /&gt;
## Defaults&lt;br /&gt;
loop_max=10&lt;br /&gt;
cmd=&#039;sleep 2&#039;&lt;br /&gt;
&lt;br /&gt;
## Check if counter environment variable is set&lt;br /&gt;
if [ -z &amp;quot;${myloop_counter}&amp;quot; ] ; then&lt;br /&gt;
   echo &amp;quot;  ERROR: myloop_counter is undefined, stop chain job&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
## only continue if below loop_max&lt;br /&gt;
if [ ${myloop_counter} -lt ${loop_max} ] ; then&lt;br /&gt;
&lt;br /&gt;
   ## increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
&lt;br /&gt;
   ## print current Job number&lt;br /&gt;
   echo &amp;quot;  Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   ## Define your command&lt;br /&gt;
   cmd=&#039;sleep 2&#039;&lt;br /&gt;
   echo &amp;quot;  -&amp;gt; executing ${cmd}&amp;quot;&lt;br /&gt;
   ${cmd}&lt;br /&gt;
&lt;br /&gt;
   if [ $? -eq 0 ] ; then&lt;br /&gt;
      ## continue only if last command was successful&lt;br /&gt;
      msub -v myloop_counter=${myloop_counter} ./moab_chain_job.sh&lt;br /&gt;
   else&lt;br /&gt;
      ## Terminate chain&lt;br /&gt;
      echo &amp;quot;  ERROR: ${cmd} of chain job no. ${myloop_counter} terminated unexpectedly&amp;quot;&lt;br /&gt;
      exit 1&lt;br /&gt;
   fi&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=2450</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=2450"/>
		<updated>2015-07-30T14:16:12Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Node Monitoring */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster additional msub Options&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:15%;height=20px; text-align:left;padding:3px&amp;quot;|Command line &lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;|Script&lt;br /&gt;
! style=&amp;quot;width:65%;height=20px; text-align:left;padding:3px&amp;quot;|Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot; | -I&lt;br /&gt;
| &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot; | Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. The details are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;, style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;6&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;node access policy&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=00:30:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1, &#039;&#039;walltime&#039;&#039;=00:30:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=2:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| singlejob&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&#039;&#039;, walltime&#039;&#039;=3:00:00:01&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes=1:&#039;&#039;ppn&#039;&#039;=16, &#039;&#039;walltime&#039;&#039;=6:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;pmem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32, &#039;&#039;walltime&#039;&#039;=3:00:00:00&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| shared&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
Note that &#039;&#039;node access policy&#039;&#039;=singlejob means that, irrespective of the requested number of cores, node access is exclusive. &lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Environment Variables for Batch Jobs|common set of MOAB environment variables]] by the following variable(s):&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | MOAB_SUBMITDIR&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Since the work load manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following environment variables of SLURM are added to your environment once your job has started:&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
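As a quick sanity check, the variables from the table above can be printed at the start of a job script (a minimal sketch; the values are only defined inside a running SLURM-managed job, outside a job they are empty):&lt;br /&gt;

```shell
#!/bin/bash
# Print the SLURM variables set for the current job.
echo "Nodes:           ${SLURM_JOB_NUM_NODES}"
echo "Node list:       ${SLURM_JOB_NODELIST}"
echo "CPUs per node:   ${SLURM_JOB_CPUS_PER_NODE}"
echo "Memory per node: ${SLURM_MEM_PER_NODE}"
echo "Processes:       ${SLURM_NPROCS}"
```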
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Interactive Job Monitoring per Node ==&lt;br /&gt;
&lt;br /&gt;
By default, nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh. The ssh access is limited by the requested walltime. To determine the nodes of your job, read the environment variable SLURM_JOB_NODELIST. It contains the node list in compact form, e.g. uc1n[344,386]. To expand this string, use the command expandnodes.&lt;br /&gt;
&lt;br /&gt;
  expandnodes $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
&lt;br /&gt;
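The expanded node list can then be used to check all nodes of the job. The following is only a sketch: it assumes the file &#039;&#039;nodelist&#039;&#039; was created as shown above and contains one hostname per line, and that ssh to the job&#039;s nodes is permitted.&lt;br /&gt;

```shell
#!/bin/bash
# Sketch: run a monitoring command (here: uptime) on every node of the job.
# Assumes 'nodelist' was created with: expandnodes $SLURM_JOB_NODELIST > nodelist
while read -r node; do
    ssh "${node}" uptime
done < nodelist
```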
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Intel MPI parallel Programs =&lt;br /&gt;
== Intel MPI without Multithreading ==&lt;br /&gt;
MPI-parallel programs run faster than serial programs on multi-CPU and multi-core systems. The N-fold spawned processes of an MPI program, i.e. &#039;&#039;&#039;MPI tasks&#039;&#039;&#039;, run simultaneously and communicate via the Message Passing Interface (MPI). MPI tasks do not share memory but can be spawned across different nodes. &lt;br /&gt;
&lt;br /&gt;
Generate a wrapper script for &#039;&#039;&#039;Intel MPI&#039;&#039;&#039;, &#039;&#039;job_impi.sh&#039;&#039; containing the following lines:&lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 2px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load mpi/impi/&amp;lt;placeholder_for_version&amp;gt;&lt;br /&gt;
mpiexec.hydra -bootstrap slurm my_par_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add the mpirun option &#039;&#039;-n &amp;lt;number_of_processes&amp;gt;&#039;&#039; or any other option defining the number of processes or nodes, since MOAB instructs mpirun about the number of processes and the node hostnames. &lt;br /&gt;
Moreover, replace &amp;lt;placeholder_for_version&amp;gt; with the desired version of &#039;&#039;&#039;Intel MPI&#039;&#039;&#039; to enable the MPI environment.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To launch and run 64 Intel MPI tasks (4 nodes with 16 processes per node), each task requiring 1000 MByte of memory, with a walltime of 5 hours, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode -l nodes=4:ppn=16,pmem=1000mb,walltime=05:00:00 job_impi.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intel MPI with Multithreading ==&lt;br /&gt;
Programs that combine multithreading and MPI run faster than serial programs on multi-CPU, multi-core systems. All threads of one process share resources such as memory. In contrast, MPI tasks do not share memory but can be spawned across different nodes.  &lt;br /&gt;
&lt;br /&gt;
Multiple Intel MPI tasks must be launched by the MPI parallel program &#039;&#039;&#039;mpiexec.hydra&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;O&#039;&#039;&#039;pen &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For Intel MPI&#039;&#039;&#039;, a job script &#039;&#039;job_impi_omp.sh&#039;&#039; that runs an Intel MPI program &#039;&#039;impi_omp_program&#039;&#039; with 8 tasks and 10 threads per task, requiring 32000 MByte of physical memory per task and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=4:ppn=20&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=3200mb&lt;br /&gt;
#MSUB -v MPI_MODULE=mpi/impi&lt;br /&gt;
#MSUB -v OMP_NUM_THREADS=10&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;-binding &amp;quot;domain=omp&amp;quot; -print-rank-map -ppn 2 -envall&amp;quot;&lt;br /&gt;
#MSUB -v EXE=./impi_omp_program&lt;br /&gt;
#MSUB -N test_impi_omp&lt;br /&gt;
&lt;br /&gt;
#If using more than one MPI task per node please set&lt;br /&gt;
export KMP_AFFINITY=scatter&lt;br /&gt;
#export KMP_AFFINITY=verbose,scatter  prints messages concerning the supported affinity &lt;br /&gt;
#KMP_AFFINITY Description: https://software.intel.com/en-us/node/524790#KMP_AFFINITY_ENVIRONMENT_VARIABLE&lt;br /&gt;
 &lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
TASK_COUNT=$((${MOAB_PROCCOUNT}/${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXE} running on ${MOAB_PROCCOUNT} cores with ${TASK_COUNT} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpiexec.hydra -bootstrap slurm ${MPIRUN_OPTIONS} -n ${TASK_COUNT} ${EXE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
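The task count in the script follows directly from the requested resources: nodes=4:ppn=20 yields MOAB_PROCCOUNT=80, and dividing by OMP_NUM_THREADS=10 gives the 8 MPI tasks (2 per node) used above. The arithmetic can be checked by hand:&lt;br /&gt;

```shell
# Reproduce the TASK_COUNT arithmetic of the job script above.
MOAB_PROCCOUNT=$((4 * 20))   # nodes=4:ppn=20
OMP_NUM_THREADS=10
TASK_COUNT=$((MOAB_PROCCOUNT / OMP_NUM_THREADS))
echo "${TASK_COUNT} MPI tasks"   # prints: 8 MPI tasks
```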
When using the Intel compiler, the environment variable KMP_AFFINITY switches on the binding of threads to specific cores. If you run only one MPI task per node, set KMP_AFFINITY=compact,1,0.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_impi_omp.sh&#039;&#039;&#039; adding the queue class &#039;&#039;multinode&#039;&#039; to your msub command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -q multinode job_impi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The mpirun option &#039;&#039;-print-rank-map&#039;&#039; prints the mapping between MPI tasks and nodes (informational only). The option &#039;&#039;-binding&#039;&#039; binds MPI tasks (processes) to particular processors; &#039;&#039;domain=omp&#039;&#039; means that the domain size is determined by the number of threads. In the above example (2 MPI tasks per node) you could also choose &#039;&#039;-binding &amp;quot;cell=unit;map=bunch&amp;quot;&#039;&#039;; this binding maps one MPI process to each socket. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. For a serial application with a graphical frontend that requires 5000 MByte of memory, with the interactive run limited to 2 hours, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub -I -V -l nodes=1:ppn=1,pmem=5000mb -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V ensures that all environment variables are exported to the compute node of the interactive session.&lt;br /&gt;
After executing this command, &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system MOAB has granted you the requested resources on the compute system. Once they are granted, you will automatically be logged in on the dedicated resource. You then have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Chain Jobs ==&lt;br /&gt;
&lt;br /&gt;
The CPU time requirements of many applications exceed the limits of the job classes. In such situations it is recommended to solve the problem with a job chain. A job chain is a sequence of jobs in which each job automatically starts its successor. &lt;br /&gt;
&lt;br /&gt;
{{bwFrameA|&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
##################################################&lt;br /&gt;
## simple job run template for bwUniCluster     ##&lt;br /&gt;
## to run chain jobs with MOAB                  ##&lt;br /&gt;
##################################################&lt;br /&gt;
##&lt;br /&gt;
## usage :&lt;br /&gt;
##         msub -v myloop_counter=0 ./moab_chain_job.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#MSUB -l nodes=1:ppn=1&lt;br /&gt;
#MSUB -l walltime=00:00:05&lt;br /&gt;
#MSUB -l pmem=50mb&lt;br /&gt;
#MSUB -q develop&lt;br /&gt;
#MSUB -N chain&lt;br /&gt;
&lt;br /&gt;
## Defaults&lt;br /&gt;
loop_max=10&lt;br /&gt;
cmd=&#039;sleep 2&#039;&lt;br /&gt;
&lt;br /&gt;
## Check if counter environment variable is set&lt;br /&gt;
if [ -z &amp;quot;${myloop_counter}&amp;quot; ] ; then&lt;br /&gt;
   echo &amp;quot;  ERROR: myloop_counter is undefined, stop chain job&amp;quot;&lt;br /&gt;
   exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
## only continue if below loop_max&lt;br /&gt;
if [ ${myloop_counter} -lt ${loop_max} ] ; then&lt;br /&gt;
&lt;br /&gt;
   ## increase counter&lt;br /&gt;
   let myloop_counter+=1&lt;br /&gt;
&lt;br /&gt;
   ## print current Job number&lt;br /&gt;
   echo &amp;quot;  Chain job iteration = ${myloop_counter}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   ## Define your command&lt;br /&gt;
   cmd=&#039;sleep 2&#039;&lt;br /&gt;
   echo &amp;quot;  -&amp;gt; executing ${cmd}&amp;quot;&lt;br /&gt;
   ${cmd}&lt;br /&gt;
&lt;br /&gt;
   if [ $? -eq 0 ] ; then&lt;br /&gt;
      ## continue only if last command was successful&lt;br /&gt;
      msub -v myloop_counter=${myloop_counter} ./moab_chain_job.sh&lt;br /&gt;
   else&lt;br /&gt;
      ## Terminate chain&lt;br /&gt;
      echo &amp;quot;  ERROR: ${cmd} of chain job no. ${myloop_counter} terminated unexpectedly&amp;quot;&lt;br /&gt;
      exit 1&lt;br /&gt;
   fi&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
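The counter logic of the chain script can be exercised locally before submitting (a sketch; &#039;&#039;msub&#039;&#039; itself is only available on the cluster, so the resubmission step is replaced by a plain loop here):&lt;br /&gt;

```shell
#!/bin/bash
# Local simulation of the chain-job counter: instead of resubmitting
# via msub, the loop simply iterates until loop_max is reached.
loop_max=10
myloop_counter=0
while [ "${myloop_counter}" -lt "${loop_max}" ]; do
    myloop_counter=$((myloop_counter + 1))
    echo "Chain job iteration = ${myloop_counter}"
done
echo "Chain finished after ${myloop_counter} jobs"
```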
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access_Members_Uni_Mannheim&amp;diff=1982</id>
		<title>BwForCluster User Access Members Uni Mannheim</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access_Members_Uni_Mannheim&amp;diff=1982"/>
		<updated>2014-12-17T15:38:13Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: Created page with &amp;quot;= Access to bwForCluster for members of Mannheim University =  == Login prerequisites ==   * A valid RUM account is required for access to the bwForCluster.   * As part of the...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Access to bwForCluster for members of Mannheim University =&lt;br /&gt;
&lt;br /&gt;
== Login prerequisites == &lt;br /&gt;
&lt;br /&gt;
* A valid RUM account is required for access to the bwForCluster. &lt;br /&gt;
&lt;br /&gt;
* As part of the login procedure, you will get a page that shows the data on your account that will be transferred from Mannheim University to the cluster. This page shows your entitlements. It should list the entitlement: &amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster &amp;lt;/pre&amp;gt;  If it does not, please go to [http://www.uni-mannheim.de/rum/ueber_uns/arbeitsgruppen/zs/hpc/bwgrid_cluster/zugang/  HPC-Zugang] for further information.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Terms of Use ==&lt;br /&gt;
&lt;br /&gt;
As with any other RUM service, users of the bwForCluster must strictly adhere to the [http://www.uni-mannheim.de/rum/ivs/benutzerverwaltung/benutzerkennung/verwaltungs_und_benutzungsordnung.pdf terms of use of Mannheim University].&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=1979</id>
		<title>BwForCluster User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=1979"/>
		<updated>2014-12-17T15:29:55Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Issueing bwForCluster entitlement */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The usage of the bwForClusters is free of charge. bwForClusters are customized to the requirements of particular research areas. &lt;br /&gt;
Each bwForCluster is/will be financed by the DFG (German Research Foundation) and by the Ministry of Science, Research and Arts of Baden-Württemberg based on scientific grant proposals (cf. the proposal guidelines as per Art. 91b GG).&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |[[File:Bwforreg.svg|center|border|500px|]] &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;&amp;quot; |&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Granting user access to a bwForCluster requires 3 steps:&lt;br /&gt;
&lt;br /&gt;
# Registration of a new, or joining an already existing, &#039;&#039;Rechenvorhaben (RV)&#039;&#039; at [https://www.bwhpc-c5.de/en/ZAS/zas_overview.php &amp;quot;Zentrale Antragsseite (ZAS)&amp;quot;]. An &#039;&#039;RV&#039;&#039; defines the planned compute activities of a group of researchers. As a coworker of such a group, you only need to register your membership in the corresponding &#039;&#039;RV&#039;&#039;.&lt;br /&gt;
# Application for a [[#Issuing bwForCluster entitlement | bwForCluster entitlement]] issued by your university.&lt;br /&gt;
# [[#Personal registration at bwForCluster | Personal registration]] at assigned cluster site based on approved &#039;&#039;RV&#039;&#039; [[File:Zas assignment icon.svg|25px|]] and issued bwForCluster entitlement [[File:bwfor entitlement icon.svg|25px|]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Steps 1 and 2 are prerequisites for step 3 and can be done in parallel. The question at which bwForCluster site to do step 3 will be answered by the cluster assignment team (CAT) based on the data of step 1, i.e. the &#039;&#039;ZAS&#039;&#039; registration. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=  &#039;&#039;RV&#039;&#039; registration at &#039;&#039;ZAS&#039;&#039; =&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;Rechenvorhaben (RV)&#039;&#039; registration at &#039;&#039;ZAS&#039;&#039; does &#039;&#039;&#039;NOT&#039;&#039;&#039; correspond to the typical compute project proposals of other HPC clusters, since:&lt;br /&gt;
* there is no scientific reviewing process,&lt;br /&gt;
* the registration asks only for brief details,&lt;br /&gt;
* it covers a whole group of coworkers, and&lt;br /&gt;
* only the &#039;&#039;RV&#039;&#039; responsible must submit the &#039;&#039;RV&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to register? ==&lt;br /&gt;
=== Register a new &#039;&#039;RV&#039;&#039; ===&lt;br /&gt;
Typically only the leader of a scientific work group or the senior scientist of a research group/collaboration needs to log in at&lt;br /&gt;
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_rv_registration.php&lt;br /&gt;
and fill in all mandatory fields of the given web form. Please note that for your convenience you can also switch to the [https://www.bwhpc-c5.de/ZAS/bwforcluster_rechenvorhaben.php German version] of the web form. The submitter of the &#039;&#039;Rechenvorhaben (RV)&#039;&#039; will be assigned the role &#039;&#039;&#039;RV responsible&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The web form consists of the following fields to be filled in:&lt;br /&gt;
{| style=&amp;quot;width:100%;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:25%;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;| Field&lt;br /&gt;
! style=&amp;quot;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot; | Explanation&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV Title&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Define a short title of your planned compute activities, maximum of 255 characters including spaces.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV Description&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Write a short abstract about your planned compute activities (maximum of 2048 characters including spaces). Research groups that contributed to the DFG grant proposal (Art. 91b GG) of the corresponding bwForCluster only need to give reference to that particular proposal.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Scientific field&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Tick one or several scientific fields. Once all bwForClusters are up and running, the full list of scientific fields will be supported and hence applicable for a &#039;&#039;Rechenvorhaben (RV)&#039;&#039;. If your &#039;&#039;RV&#039;&#039; does not match any of the given scientific fields, please state your scientific field in the text box provided. Grayed-out scientific fields indicate that the corresponding bwForCluster(s) is/are not yet operational.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Field of activity&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Define if your &#039;&#039;RV&#039;&#039; is primarily for research and/or teaching. If not applicable, use text box.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Parallel paradigm&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State which parallel paradigm your code or application uses. Multiple ticks are allowed. Further information can be provided via the text box. If you are not sure, please state the software you are using.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Programming language&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the programming language(s) of your code or application.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Numerical methods&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the numerical or &amp;quot;calculation&amp;quot; methods your code or application utilises. If you do not know, write &amp;quot;unknown&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Software packages&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State all software packages you will or want to use for your &#039;&#039;RV&#039;&#039;. Also include compilers, MPI and numerical libraries (e.g. MKL, FFTW).&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated requested computing resources&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Roughly estimate how many CPU hours are required to finish your &#039;&#039;RV&#039;&#039;. To calculate &amp;quot;CPU hours&amp;quot;, multiply the &amp;quot;elapsed parallel computing time&amp;quot; by the &amp;quot;number of CPU cores&amp;quot; per job. &lt;br /&gt;
&#039;&#039;Example: Your code uses 4 CPU cores and has a wall time of 1h. This makes 4 CPU hours.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Planned maximum number of parallel used CPU cores per job&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Give an upper limit of how many CPU cores per job are required. Please cross-check with your statement about parallel paradigm. &lt;br /&gt;
&#039;&#039;Obviously, a sequential code can only use 1 CPU core per job.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated maximal memory requirements per CPU core&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Give an upper limit of how much RAM per CPU core your jobs require. Please give a value in GigaBytes.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated requested persistent disk space for the &#039;&#039;RV&#039;&#039;&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State how much disk space in total you need for your &#039;&#039;RV&#039;&#039;. Please give a value in GigaBytes. &lt;br /&gt;
&#039;&#039;Example: If your RV has 4 additional coworkers and each of you produces 20 GigaByte of output by the end of the RV, your maximum disk space is 100 GigaByte.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Estimated maximal temporary disk space per job&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State how much space your intermediate files require during a job run. Please give a value in GigaBytes. &lt;br /&gt;
&#039;&#039;Example: The final output of your job is 0.1 GigaByte, but during the job run 10 temporary files, each with a size of 1 GigaByte, are written to disk; hence the correct answer is 10 GigaByte.&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| Institute&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | State the full name of your Institute.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Note that the fields name, first name, organization, mail, and EPPN are auto-filled and cannot be changed. These are your credentials as given by your home organization.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:110%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt;&lt;br /&gt;
# After submitting you will receive an email from ZAS confirming your submission. With this email you are given a unique&amp;lt;br&amp;gt;* acronym&amp;lt;br&amp;gt;* password&amp;lt;br&amp;gt;Please keep the password confidential. Only with both acronym and password can [[BwForCluster_User_Access#Coworkers|coworkers]] be added to your &#039;&#039;Rechenvorhaben&#039;&#039;.&lt;br /&gt;
# The cluster assignment team will be notified immediately about your submission.&lt;br /&gt;
# The cluster assignment team decides within 2 working days, based on your submitted data, which bwForCluster fits best, and submits its decision to ZAS.&lt;br /&gt;
# ZAS notifies you in an email about your assigned bwForCluster and provides website details for [[BwForCluster_User_Access#Personal registration at bwForCluster|step 3]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Coworkers ===&lt;br /&gt;
Important: each coworker must add herself/himself to a &#039;&#039;Rechenvorhaben&#039;&#039;, since only via personal authentication can the correctness of the personal credentials (provided by your home organization) be guaranteed. To become a coworker of a &#039;&#039;Rechenvorhaben&#039;&#039;, please log in at&lt;br /&gt;
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_collaboration.php&lt;br /&gt;
and provide the correct&lt;br /&gt;
* password&lt;br /&gt;
* acronym &lt;br /&gt;
of the &#039;&#039;Rechenvorhaben&#039;&#039;. Your RV responsible will provide you with this information. The submitter will be assigned the role &#039;&#039;&#039;RV member&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:110%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt;&lt;br /&gt;
# After submitting the request to participate in an &#039;&#039;&#039;RV&#039;&#039;&#039;, the coworker will receive an email from ZAS about the process. The membership status in the &#039;&#039;&#039;RV&#039;&#039;&#039; is set to active by default.&lt;br /&gt;
# Each person with the role of &#039;&#039;&#039;RV responsible&#039;&#039;&#039; or &#039;&#039;&#039;RV manager&#039;&#039;&#039; will be notified. Any of these persons can cross-check the correctness of the process via https://www.bwhpc-c5.de/en/ZAS/info_compute_project.php by clicking on the corresponding RV acronym.&lt;br /&gt;
# The RV responsible can set any &#039;&#039;&#039;RV coworker&#039;&#039;&#039; to &#039;&#039;&#039;RV manager&#039;&#039;&#039; and vice versa as well as deactivate/reactivate any &#039;&#039;&#039;RV coworker&#039;&#039;&#039; or &#039;&#039;&#039;RV manager&#039;&#039;&#039;.&lt;br /&gt;
# Any RV manager can deactivate/reactivate any &#039;&#039;&#039;RV coworker&#039;&#039;&#039;.  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--=== RV extension === &lt;br /&gt;
Any RV is restricted to 1 year duration and the given compute resources during registration. Only the RV responsible can apply for an extension of the RV. The extension can be the duration by another year or the increase of computational resources. In any case, the RV responsible must login at:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;font-weight:bold;font-size:115%;&amp;quot;&amp;gt;Response&amp;lt;/span&amp;gt; &lt;br /&gt;
# After submission the RV responsible will receive an email from ZAS--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Roles in a RV ==&lt;br /&gt;
{| style=&amp;quot;width:100%;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:25%;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;| Role&lt;br /&gt;
! style=&amp;quot;padding:2px;margin:3px; background:#AAA; font-size:100%; font-weight:bold; border:1px solid #BBBBBB; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot; | Explanation&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV responsible&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Registers the &#039;&#039;Rechenvorhaben&#039;&#039; (RV), i.e. the planned compute activities. Can deactivate and reactivate RV managers and coworkers. Can promote coworkers to managers and vice versa.&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV manager&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | An RV coworker with the additional right to deactivate and reactivate RV coworkers. &lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; {{Lightgray_small}}| RV coworker&lt;br /&gt;
|scope=&amp;quot;column&amp;quot; {{Tab element}} | Registers her/his membership via RV acronym and password.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Issuing bwForCluster entitlement =&lt;br /&gt;
&lt;br /&gt;
Each university issues the bwForCluster entitlement&lt;br /&gt;
&amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/pre&amp;gt;&lt;br /&gt;
only to its own members. &lt;br /&gt;
&lt;br /&gt;
The bwForCluster entitlement issued by a university assures the operator of a bwForCluster that its university members&#039; compute activities comply with the German Foreign Trade Act (Außenwirtschaftsgesetz - AWG) and the German Foreign Trade Regulations (Außenwirtschaftsverordnung - AWV).&lt;br /&gt;
&lt;br /&gt;
The following universities have already established a process to issue the bwForCluster entitlement:&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Zugangsberechtigung_bwForCluster_v2.pdf KIT]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Ulm|Ulm University]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Tübingen|Tübingen University]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Mannheim|Mannheim University]]&lt;br /&gt;
If you do not find your university in the list above, please contact your local service desk.&lt;br /&gt;
&lt;br /&gt;
= Personal registration at bwForCluster  = &lt;br /&gt;
&lt;br /&gt;
Once you have registered your own RV (&#039;&#039;Rechenvorhaben&#039;&#039;) or your membership in an RV, the cluster assignment team will provide you with information about your designated cluster.&lt;br /&gt;
You will receive an email pointing to a website where you can create an account for yourself on that cluster.&lt;br /&gt;
&lt;br /&gt;
Available bwForCluster registration servers (service providers):&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;60%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Registration server &lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Chemistry JUSTUS Ulm&lt;br /&gt;
| http://bwidm.rz.uni-ulm.de/&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Please note that this step is different from your registration at ZAS, since here you register yourself as a person. A user account can be generated only on the basis of your personal credentials.&lt;br /&gt;
However, the resource requests of the RV in ZAS will be cross-checked against the resources actually consumed by you and your RV coworkers on this cluster.&lt;br /&gt;
&lt;br /&gt;
After steps 1 and 2 (RV approval and bwForCluster entitlement) please visit the&lt;br /&gt;
&lt;br /&gt;
* bwForCluster &#039;&#039;service provider&#039;&#039; registration website (see table above or email after RV approval):&lt;br /&gt;
*# Select your home organization from the list of organizations and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;.&lt;br /&gt;
*# You will be redirected to the &#039;&#039;Identity Provider&#039;&#039; of your home organization.&lt;br /&gt;
*# Enter the user ID of your home organization (this might be a user name, an email address, ...) and your password, and click &#039;&#039;&#039;Login&#039;&#039;&#039; or &#039;&#039;&#039;Anmelden&#039;&#039;&#039;.&lt;br /&gt;
*# When doing this for the first time you need to accept that your personal data is transferred to the &#039;&#039;service provider&#039;&#039;.&lt;br /&gt;
*# You will be redirected back to the cluster registration website.&lt;br /&gt;
*# Select &#039;&#039;&#039;Service Description&#039;&#039;&#039; of the designated cluster.&lt;br /&gt;
*# Click on the &#039;&#039;&#039;Register&#039;&#039;&#039; link to register for this cluster.&lt;br /&gt;
*# Read and accept the terms and conditions of use and click on button &#039;&#039;&#039;Register&#039;&#039;&#039;.&lt;br /&gt;
*# Finally, you will receive an email with instructions on how to log in to the cluster (please wait ~15 minutes before trying).&lt;br /&gt;
*# Optional, but strongly recommended: click on &#039;&#039;&#039;Set Service Password&#039;&#039;&#039; and set a password for the cluster.&lt;br /&gt;
&lt;br /&gt;
= Login to bwForCluster  = &lt;br /&gt;
&lt;br /&gt;
Personalized details about how to log in to the cluster are included&lt;br /&gt;
in an email sent after registration at the bwForCluster service provider.&lt;br /&gt;
&lt;br /&gt;
General instructions for the bwForCluster login can be found here:&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;60%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Login instructions&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Chemistry JUSTUS Ulm&lt;br /&gt;
| [[BwForCluster_Chemistry_Login|bwForCluster Chemistry Login]] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Access|bwForCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=1795</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=1795"/>
		<updated>2014-11-05T15:30:15Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* Environment Variables for Batch Jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster additional msub Options&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:15%;height=20px; text-align:left;padding:3px&amp;quot;|Command line &lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;|Script&lt;br /&gt;
! style=&amp;quot;width:65%;height=20px; text-align:left;padding:3px&amp;quot;|Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot; | -I&lt;br /&gt;
| &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot; | Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. Details are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;, style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:00,&#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:00,&#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=2:00:00:00,&#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=16&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=6:00:00:00,&#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:00,&#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
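The queue class and resource requests can also be collected in a job script instead of being typed on the command line; a minimal sketch for the fat queue (the script name and the commented-out program are hypothetical placeholders):

```shell
#!/bin/bash
# Hypothetical job script 'fat_job.sh' for the 'fat' queue class.
# The #MSUB lines are read by msub at submission time:
#   $ msub fat_job.sh
#MSUB -q fat
#MSUB -l nodes=1:ppn=32
#MSUB -l walltime=1:00:00:00
#MSUB -l mem=32000mb

echo "fat job started on $(hostname)"
# ./application    # placeholder for the real program
```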
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Environment Variables for Batch Jobs|common set of MOAB environment variables]] by the following variable(s):&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | MOAB_SUBMITDIR&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Since the work load manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following environment variables of SLURM are added to your environment once your job has started:&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
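Inside a job script these variables can be read like any other shell variables; a small sketch (the fallback to 0 is only for illustration, since the variables are unset outside a running job):

```shell
#!/bin/bash
# Sketch: report the resources dedicated to this job, using the SLURM
# variables from the table above. Outside a job they are unset, so we
# fall back to 0 for illustration.
nnodes=${SLURM_JOB_NUM_NODES:-0}
nprocs=${SLURM_NPROCS:-0}
mem=${SLURM_MEM_PER_NODE:-0}
echo "nodes=$nnodes total_procs=$nprocs mem_per_node=${mem}"
```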
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Node Monitoring ==&lt;br /&gt;
&lt;br /&gt;
By default nodes are not used exclusively unless they are requested with &#039;&#039;-l naccesspolicy=singlejob&#039;&#039; as described [[Batch_Jobs#msub_-l_resource_list|here]]. &amp;lt;br&amp;gt;&lt;br /&gt;
If a job runs exclusively on one node, you may log in to that node via ssh. To get the nodes of your job, read the environment variable SLURM_JOB_NODELIST, e.g.&lt;br /&gt;
&lt;br /&gt;
  echo $SLURM_JOB_NODELIST &amp;gt; nodelist&lt;br /&gt;
&lt;br /&gt;
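Building on this, a short sketch for reaching the first node of a job. It assumes SLURM_JOB_NODELIST holds a whitespace-separated list of host names; on some installations the list is compressed (e.g. uc1n[001-004]) and would have to be expanded first.

```shell
# Sketch: save the job's node list (one host per line) and determine the
# first node. Assumes SLURM_JOB_NODELIST is a whitespace-separated list
# of host names; a compressed list (e.g. "uc1n[001-004]") would need to
# be expanded first.
echo "$SLURM_JOB_NODELIST" | tr ' ' '\n' > nodelist
first_node=$(head -n 1 nodelist)
echo "first node of this job: $first_node"
# ssh "$first_node"   # only works while the job holds the node exclusively
```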
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. For a serial application with a graphical frontend that requires 5000 MByte of memory, limiting the interactive run to 2 hours, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub  -I  -V  -l nodes=1:ppn=1  -l mem=5000mb  -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V defines that all environment variables are exported to the compute node of the interactive session.&lt;br /&gt;
After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will be automatically logged on to the dedicated resource. You now have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=1794</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=1794"/>
		<updated>2014-11-05T15:09:35Z</updated>

		<summary type="html">&lt;p&gt;T Kienzle: /* msub -q queues */  gb to mb&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on features of the [[Batch_Jobs|batch job system]] only applicable on bwUniCluster.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== msub Command ==&lt;br /&gt;
The bwUniCluster supports the following additional msub option(s):&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster additional msub Options&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;width:15%;height=20px; text-align:left;padding:3px&amp;quot;|Command line &lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;|Script&lt;br /&gt;
! style=&amp;quot;width:65%;height=20px; text-align:left;padding:3px&amp;quot;|Purpose&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot; | -I&lt;br /&gt;
| &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot; | Declares the job is to be run interactively.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== msub -l &#039;&#039;resource_list&#039;&#039; ===&lt;br /&gt;
No deviation or additional features to general [[Batch_Jobs|batch job]] setting.&lt;br /&gt;
=== msub -q &#039;&#039;queues&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
Compute resources such as walltime, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, you must add the correct queue class to your msub command. Details are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;, style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| msub -q &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;width:10%;height=20px; text-align:left;&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:10%;padding:3px&amp;quot;| &#039;&#039;queue&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:5%;padding:3px&amp;quot;| &#039;&#039;node&#039;&#039;&lt;br /&gt;
! style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;default resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;minimum resources&#039;&#039;&lt;br /&gt;
! style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;maximum resources&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| develop&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;width:15%;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:00,&#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | singlenode &lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:30:01,&#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:00,&#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | multinode&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=2&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=2:00:00:00,&#039;&#039;nodes&#039;&#039;=16:&#039;&#039;ppn&#039;&#039;=16&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;vertical-align:top;height=20px; text-align:left;padding:3px&amp;quot; | verylong&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| thin&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=4000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:01,&#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=6:00:00:00,&#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=16&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; height=20px; text-align:left&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:10%;padding:3px&amp;quot; | fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| fat&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=00:10:00,&#039;&#039;procs&#039;&#039;=1, &#039;&#039;mem&#039;&#039;=32000mb&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;nodes&#039;&#039;=1&lt;br /&gt;
| style=&amp;quot;padding:3px&amp;quot;| &#039;&#039;walltime&#039;&#039;=3:00:00:00,&#039;&#039;nodes&#039;&#039;=1:&#039;&#039;ppn&#039;&#039;=32&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
The default resources of a queue class define walltime, processes and memory if these are not explicitly given with the msub command. The resource list acronyms &#039;&#039;walltime&#039;&#039;, &#039;&#039;procs&#039;&#039;, &#039;&#039;nodes&#039;&#039; and &#039;&#039;ppn&#039;&#039; are described [[Batch_Jobs#msub_-l_resource_list|here]].&lt;br /&gt;
&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
&lt;br /&gt;
* To run your batch job longer than 3 days, please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q verylong&amp;lt;/span&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* To run your batch job on one of the [[BwUniCluster_File_System#Components_of_bwUniCluster|fat nodes]], please use&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ msub -q fat&amp;lt;/span&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Environment Variables for Batch Jobs =&lt;br /&gt;
The bwUniCluster expands the [[Batch_Jobs#Environment Variables for Batch Jobs|common set of MOAB environment variables]] by the following variable(s):&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| bwUniCluster specific MOAB variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | MOAB_SUBMITDIR&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Directory of job submission&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Since the work load manager MOAB on [[bwUniCluster]] uses the resource manager SLURM, the following environment variables of SLURM are added to your environment once your job has started:&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;border:1px solid #000000;padding:1px&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; style=&amp;quot;background-color:#999999;padding:3px&amp;quot;| SLURM variables&lt;br /&gt;
|- style=&amp;quot;width:25%;height=20px; text-align:left;padding:3px&amp;quot;&lt;br /&gt;
! style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot;| Environment variables&lt;br /&gt;
! style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Description&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NODELIST &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_JOB_NUM_NODES &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_MEM_PER_NODE &lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| style=&amp;quot;width:20%;height=20px; text-align:left;padding:3px&amp;quot; | SLURM_NPROCS&lt;br /&gt;
| style=&amp;quot;height=20px; text-align:left;padding:3px&amp;quot;|  Total number of processes dedicated to the job &lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
Interactive jobs on bwUniCluster [[BwUniCluster_User_Access#Allowed_activities_on_login_nodes|must &#039;&#039;&#039;NOT&#039;&#039;&#039; run on the login nodes]]; however, resources for interactive jobs can be requested using msub. For a serial application with a graphical frontend that requires 5000 MByte of memory, limiting the interactive run to 2 hours, execute the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ msub  -I  -V  -l nodes=1:ppn=1  -l mem=5000mb  -l walltime=0:02:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The option -V defines that all environment variables are exported to the compute node of the interactive session.&lt;br /&gt;
After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system MOAB has granted you the requested resources on the compute system. Once granted, you will be automatically logged on to the dedicated resource. You now have an interactive session with 1 core and 5000 MByte of memory on the compute system for 2 hours. Now simply execute your application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd to_path&lt;br /&gt;
$ ./application&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, you will be automatically logged out of the compute system.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster|Batch Jobs - bwUniCluster features]]&lt;/div&gt;</summary>
		<author><name>T Kienzle</name></author>
	</entry>
</feed>