<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A+Flachmueller</id>
	<title>bwHPC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A+Flachmueller"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/e/Special:Contributions/A_Flachmueller"/>
	<updated>2026-04-29T02:20:18Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/bwForCluster/Entitlement&amp;diff=16026</id>
		<title>Registration/bwForCluster/Entitlement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/bwForCluster/Entitlement&amp;diff=16026"/>
		<updated>2026-04-28T15:18:31Z</updated>

		<summary type="html">&lt;p&gt;A Flachmueller: /* Request an Entitlement */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;border: 3px solid #e2e3e5; padding: 15px; background-color: #ffffff; margin: 10px 0;&amp;quot;&amp;gt;&lt;br /&gt;
The bwForCluster Entitlement is issued by the university you are a member of. It tells the cluster operator that you are currently a member of this university. It also means that the university has checked that your actions comply with the German Foreign Trade Act (Außenwirtschaftsgesetz - AWG) and the German Foreign Trade Regulations (Außenwirtschaftsverordnung - AWV) (see [https://www.bwidm.de/attribute.php#Berechtigung eduPersonEntitlement]).&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Step A: bwForCluster Entitlement =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;border: 3px solid #0066cc; padding: 15px; background-color: #e7f3ff; margin: 10px 0;&amp;quot;&amp;gt;&lt;br /&gt;
To register for a bwForCluster you need the  &#039;&#039;&#039;bwForCluster Entitlement&#039;&#039;&#039; issued by your university.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The full name of the entitlement is &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Note that this is the name of an identity management attribute and not a link. &lt;br /&gt;
&lt;br /&gt;
More information on entitlements and other attributes in general: [https://www.bwidm.de/dienste.php Entitlement (Dienste)]&lt;br /&gt;
&lt;br /&gt;
== Check your Entitlements ==&lt;br /&gt;
&lt;br /&gt;
First, check whether you already have the entitlement; some universities assign it automatically to some accounts:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[https://login.bwidm.de/user/index.xhtml?show=entitlement&amp;amp;highlight=bwForCluster Registration Server bwForCluster Entitlement Check]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- image can be deleted [[File:BwIDM-idp.png|center|600px|thumb|Verify Entitlement.]] --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Request an Entitlement == &lt;br /&gt;
&lt;br /&gt;
If you need the entitlement, please follow the link for your institution or contact your local service desk if no information is provided:&lt;br /&gt;
&amp;lt;!--* [https://www.hs-esslingen.de/informatik-und-informationstechnik/forschung-labore/projekte/forschungsprojekte/high-performance-computing/ Hochschule Esslingen]--&amp;gt;&lt;br /&gt;
* [[Registration/bwIDM-Entitlements-Uni-Freiburg|Universität Freiburg]]&lt;br /&gt;
* [https://heiservices.uni-heidelberg.de/entitlement Universität Heidelberg] (access only within Uni Heidelberg network)&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account/ Universität Hohenheim]&lt;br /&gt;
* [https://www.scc.kit.edu/downloads/ISM/SD-HPC-Formulare/Accessform_bwForCluster_v1_DE_EN_2026.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [https://www.kim.uni-konstanz.de/services/forschen-und-lehren/high-performance-computing/zugang-bwforcluster/ Universität Konstanz]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Mannheim|Universität Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/apply-for-computing-time/bwforcluster Universität Stuttgart]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Tübingen|Universität Tübingen]]&lt;br /&gt;
* [[BwForCluster User Access Members Uni Ulm|Universität Ulm]]&lt;br /&gt;
* [[Registration/HAW|HAW BW e.V.]]: Please contact your local service desk&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;border: 3px solid #ffc107; padding: 15px; background-color: #fff3cd; margin: 10px 0;&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Entitlement Synchronization:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
After your university assigns the entitlement, it takes some time for it to synchronize across the system.&lt;br /&gt;
* &#039;&#039;&#039;If the entitlement does not appear within 24 hours,&#039;&#039;&#039; contact your local service desk&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align:right;&amp;quot;&amp;gt;[[Registration/bwForCluster/RV|Go to step B]]&amp;lt;/p&amp;gt;&lt;/div&gt;</summary>
		<author><name>A Flachmueller</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Energy_Efficient_Cluster_Usage&amp;diff=15996</id>
		<title>Energy Efficient Cluster Usage</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Energy_Efficient_Cluster_Usage&amp;diff=15996"/>
		<updated>2026-04-22T10:35:53Z</updated>

		<summary type="html">&lt;p&gt;A Flachmueller: /* What do I want to do and why do I need an HPC Cluster for it? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Energy consumption of data centers has been increasing continuously throughout the last decade. In 2020, the energy consumption of all data centers in Germany amounted to around  [https://www.bundestag.de/resource/blob/863850/423c11968fcb5c9995e9ef9090edf9e6/WD-8-070-21-pdf-data.pdf 3 percent] of the total electricity produced. Accompanying this large energy consumption are large-scale emissions of CO2 to the atmosphere and thus significant contributions to climate change.&lt;br /&gt;
To illustrate this, an average compute job running on a single node for one day may easily consume 10 kWh or even more. That translates roughly to brewing 700 cups of coffee.&lt;br /&gt;
Assuming that a typical bwHPC cluster has a few hundred compute nodes, this amounts to the energy consumption of a village for each cluster. &lt;br /&gt;
&lt;br /&gt;
Although a large part of this energy consumption is an intrinsic requirement of running large HPC clusters (even when its processors are idle, a cluster uses a lot of energy), efficient use of the available resources is important. Using as many resources as possible does not make a power user. Using them wisely does.&lt;br /&gt;
In the following, a basic introduction to some of the most important aspects of energy-efficient HPC usage from a user perspective is given. &lt;br /&gt;
&lt;br /&gt;
We can generally distinguish three tasks when optimizing for running HPC jobs efficiently.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  What do I want to do and why do I need an HPC Cluster for it?&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  How many and which kind of hardware resources do I require for it?&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  How do I optimize my code to use these resources most efficiently?&lt;br /&gt;
&lt;br /&gt;
= What do I want to do and why do I need an HPC Cluster for it? =&lt;br /&gt;
&lt;br /&gt;
The bwHPC clusters are used to almost full capacity, and running a job on an HPC node consumes a lot of energy, as shown above. &lt;br /&gt;
Therefore, users are requested to run only necessary jobs.&lt;br /&gt;
&lt;br /&gt;
Please consider testing new setups and their output for validity prior to submitting jobs that require lots of resources. This also includes projects where a lot of (smaller) similar jobs are submitted. &lt;br /&gt;
&lt;br /&gt;
Make sure to double-check your jobs prior to the submission, as having to discard the output data of an HPC project due to faulty input files wastes a lot of computational resources.&lt;br /&gt;
&lt;br /&gt;
Finally, identifying the specific resource requirements for a given job is important to allocate the optimal amount of resources for your compute job, and to decide if an HPC cluster is needed at all.&lt;br /&gt;
&lt;br /&gt;
= How many and which kind of hardware resources do I require for it? =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Resource allocation is a crucial part of working on an HPC cluster, as the right allocation depends on both the job and the specific cluster hardware and architecture available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A small number of jobs and few resources&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Submit to the scheduler. No extended testing and resource scaling analysis are needed. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Medium-sized projects&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Run only necessary jobs: Please consider testing new setups and their output for validity prior to submitting a huge amount of similar jobs&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Start small: Run your problem on a small set of resources first.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Use the proper tools for development: If you develop your own code, please use the proper tools for debugging and parallel performance analysis. See: [[Development#Documentation_in_the_Wiki|Development]].&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  A look at the job feedback can help you determine if you are using the cluster efficiently&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Large projects&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Same approach as for medium-sized projects. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Run a scaling analysis for your project with regard to how many resources work best. See: [[Scaling]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Many short jobs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Handling each short job individually via the scheduler is inefficient. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Simple parallelization by hand is advisable. See: A basic introduction to [[Development/Parallel_Programming | Parallel Programming]].&lt;br /&gt;
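The many-short-jobs advice above can be sketched as a single job that runs its tasks in parallel itself. The `echo`, the task IDs 1..8, and the parallel width of 4 are placeholder assumptions, not a bwHPC-provided recipe:

```shell
#!/bin/sh
# Sketch: bundle many short tasks into ONE scheduler job and
# parallelize by hand. The echo stands in for a real short
# computation; IDs 1..8 and the width of 4 are examples.
seq 1 8 | xargs -P 4 -I {} sh -c 'echo "task {} done" > task_{}.out'
# Collect the per-task results afterwards:
cat task_*.out | sort
```

In a real job, a call to your own program (e.g. a hypothetical `./my_program {}`) would replace the `echo`; the scheduler then handles one job instead of eight.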
&lt;br /&gt;
= How do I optimize my code to use these resources most efficiently? =&lt;br /&gt;
&lt;br /&gt;
The above recommendations will help you use the cluster resources more efficiently.&lt;br /&gt;
Regarding software development, power efficiency obviously correlates strongly with &#039;&#039;&#039;computing performance&#039;&#039;&#039;, but also with memory usage, i.e. both the amount of memory used and the efficiency of memory access.&lt;br /&gt;
&lt;br /&gt;
Here, we have gathered a few results based on other research:&lt;br /&gt;
&amp;amp;rarr;  Use an efficient, compiled programming language such as Rust, C, or C++, and avoid interpreted languages like Perl or Python for compute-intensive code. Since machine learning is a hot topic, this deserves a few words: ML Python code using TensorFlow or other libraries makes heavy use of NumPy and other math packages, which rely on C-based implementations. Please make sure you use the provided Python modules, which are optimized to use Intel MKL and other mathematical libraries.&lt;br /&gt;
&lt;br /&gt;
Further reading:&lt;br /&gt;
Rui Pereira, et al: &amp;quot;&#039;&#039;Energy efficiency across programming languages: how do energy, time, and memory relate?&#039;&#039;&amp;quot;, SLE 2017: Proc. of the 10th ACM SIGPLAN Int. Conf. on SW Language Eng., Oct. 2017, pp. 256–267, [https://doi.org/10.1145/3136014.3136031 doi:10.1145/3136014.3136031]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Analyse memory access patterns&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  For small, tight spin-wait loops that poll for locks, use the &amp;lt;code&amp;gt;pause&amp;lt;/code&amp;gt; instruction (on x86 processors).&lt;br /&gt;
&lt;br /&gt;
= Summary: General Recommendations =&lt;br /&gt;
&lt;br /&gt;
* Choose the most &#039;&#039;&#039;efficient algorithms&#039;&#039;&#039; for the given problem&lt;br /&gt;
* Run only &#039;&#039;&#039;necessary&#039;&#039;&#039; jobs: Please consider testing new setups and their output for validity prior to submitting a huge amount of similar jobs&lt;br /&gt;
* Start &#039;&#039;&#039;small&#039;&#039;&#039;: Run your problem on a small number of parallel entities (be it processes or threads) first.&lt;br /&gt;
* &#039;&#039;&#039;Estimate&#039;&#039;&#039; the runtime of the parallel job as &#039;&#039;&#039;accurately&#039;&#039;&#039; as possible to increase the efficiency of the scheduling of the whole system&lt;br /&gt;
* Use the proper tools for development: If you develop your own code, please use the proper tools for debugging and parallel performance analysis. More information is available on the bwHPC Wiki.&lt;br /&gt;
* A look at the &#039;&#039;&#039;job feedback&#039;&#039;&#039; can help you determine if you are using the cluster efficiently&lt;/div&gt;</summary>
		<author><name>A Flachmueller</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Energy_Efficient_Cluster_Usage&amp;diff=15995</id>
		<title>Energy Efficient Cluster Usage</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Energy_Efficient_Cluster_Usage&amp;diff=15995"/>
		<updated>2026-04-22T10:30:03Z</updated>

		<summary type="html">&lt;p&gt;A Flachmueller: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Energy consumption of data centers has been increasing continuously throughout the last decade. In 2020, the energy consumption of all data centers in Germany amounted to around  [https://www.bundestag.de/resource/blob/863850/423c11968fcb5c9995e9ef9090edf9e6/WD-8-070-21-pdf-data.pdf 3 percent] of the total electricity produced. Accompanying this large energy consumption are large-scale emissions of CO2 to the atmosphere and thus significant contributions to climate change.&lt;br /&gt;
To illustrate this, an average compute job running on a single node for one day may easily consume 10 kWh or even more. That translates roughly to brewing 700 cups of coffee.&lt;br /&gt;
Assuming that a typical bwHPC cluster has a few hundred compute nodes, this amounts to the energy consumption of a village for each cluster. &lt;br /&gt;
&lt;br /&gt;
Although a large part of this energy consumption is an intrinsic requirement of running large HPC clusters (even when its processors are idle, a cluster uses a lot of energy), efficient use of the available resources is important. Using as many resources as possible does not make a power user. Using them wisely does.&lt;br /&gt;
In the following, a basic introduction to some of the most important aspects of energy-efficient HPC usage from a user perspective is given. &lt;br /&gt;
&lt;br /&gt;
We can generally distinguish three tasks when optimizing for running HPC jobs efficiently.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  What do I want to do and why do I need an HPC Cluster for it?&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  How many and which kind of hardware resources do I require for it?&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  How do I optimize my code to use these resources most efficiently?&lt;br /&gt;
&lt;br /&gt;
= What do I want to do and why do I need an HPC Cluster for it? =&lt;br /&gt;
&lt;br /&gt;
The bwHPC clusters are used to almost full capacity, and running a job on an HPC node consumes a lot of energy, as shown above. &lt;br /&gt;
Therefore, users are requested to run only necessary jobs.&lt;br /&gt;
&lt;br /&gt;
Please consider testing new setups and their output for validity prior to submitting jobs that require lots of resources. This also includes projects where a lot of (smaller) similar jobs are submitted. &lt;br /&gt;
&lt;br /&gt;
Make sure to double-check your jobs prior to the submission, as having to discard the output data of an HPC project due to faulty input files wastes a lot of computational resources.&lt;br /&gt;
&lt;br /&gt;
Finally, identifying the specific resource requirements for a given job is important to allocate the optimal amount of resources for your compute job, and to decide if an HPC cluster is needed at all. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= How many and which kind of hardware resources do I require for it? =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Resource allocation is a crucial part of working on an HPC cluster, as the right allocation depends on both the job and the specific cluster hardware and architecture available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A small number of jobs and few resources&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Submit to the scheduler. No extended testing and resource scaling analysis are needed. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Medium-sized projects&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Run only necessary jobs: Please consider testing new setups and their output for validity prior to submitting a huge amount of similar jobs&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Start small: Run your problem on a small set of resources first.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Use the proper tools for development: If you develop your own code, please use the proper tools for debugging and parallel performance analysis. See: [[Development#Documentation_in_the_Wiki|Development]].&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  A look at the job feedback can help you determine if you are using the cluster efficiently&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Large projects&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Same approach as for medium-sized projects. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Run a scaling analysis for your project with regard to how many resources work best. See: [[Scaling]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Many short jobs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Handling each short job individually via the scheduler is inefficient. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Simple parallelization by hand is advisable. See: A basic introduction to [[Development/Parallel_Programming | Parallel Programming]].&lt;br /&gt;
&lt;br /&gt;
= How do I optimize my code to use these resources most efficiently? =&lt;br /&gt;
&lt;br /&gt;
The above recommendations will help you use the cluster resources more efficiently.&lt;br /&gt;
Regarding software development, power efficiency obviously correlates strongly with &#039;&#039;&#039;computing performance&#039;&#039;&#039;, but also with memory usage, i.e. both the amount of memory used and the efficiency of memory access.&lt;br /&gt;
&lt;br /&gt;
Here, we have gathered a few results based on other research:&lt;br /&gt;
&amp;amp;rarr;  Use an efficient, compiled programming language such as Rust, C, or C++, and avoid interpreted languages like Perl or Python for compute-intensive code. Since machine learning is a hot topic, this deserves a few words: ML Python code using TensorFlow or other libraries makes heavy use of NumPy and other math packages, which rely on C-based implementations. Please make sure you use the provided Python modules, which are optimized to use Intel MKL and other mathematical libraries.&lt;br /&gt;
&lt;br /&gt;
Further reading:&lt;br /&gt;
Rui Pereira, et al: &amp;quot;&#039;&#039;Energy efficiency across programming languages: how do energy, time, and memory relate?&#039;&#039;&amp;quot;, SLE 2017: Proc. of the 10th ACM SIGPLAN Int. Conf. on SW Language Eng., Oct. 2017, pp. 256–267, [https://doi.org/10.1145/3136014.3136031 doi:10.1145/3136014.3136031]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  Analyse memory access patterns&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  For small, tight spin-wait loops that poll for locks, use the &amp;lt;code&amp;gt;pause&amp;lt;/code&amp;gt; instruction (on x86 processors).&lt;br /&gt;
&lt;br /&gt;
= Summary: General Recommendations =&lt;br /&gt;
&lt;br /&gt;
* Choose the most &#039;&#039;&#039;efficient algorithms&#039;&#039;&#039; for the given problem&lt;br /&gt;
* Run only &#039;&#039;&#039;necessary&#039;&#039;&#039; jobs: Please consider testing new setups and their output for validity prior to submitting a huge amount of similar jobs&lt;br /&gt;
* Start &#039;&#039;&#039;small&#039;&#039;&#039;: Run your problem on a small number of parallel entities (be it processes or threads) first.&lt;br /&gt;
* &#039;&#039;&#039;Estimate&#039;&#039;&#039; the runtime of the parallel job as &#039;&#039;&#039;accurately&#039;&#039;&#039; as possible to increase the efficiency of the scheduling of the whole system&lt;br /&gt;
* Use the proper tools for development: If you develop your own code, please use the proper tools for debugging and parallel performance analysis. More information is available on the bwHPC Wiki.&lt;br /&gt;
* A look at the &#039;&#039;&#039;job feedback&#039;&#039;&#039; can help you determine if you are using the cluster efficiently&lt;/div&gt;</summary>
		<author><name>A Flachmueller</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Running_Calculations&amp;diff=15797</id>
		<title>Running Calculations</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Running_Calculations&amp;diff=15797"/>
		<updated>2026-03-11T11:46:16Z</updated>

		<summary type="html">&lt;p&gt;A Flachmueller: /* Life Cycle of a Calculation (Job) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;← This page is used in the [[HPC Glossary]] to explain the term &amp;quot;Batch Scheduler&amp;quot; and &amp;quot;Batch System&amp;quot;&lt;br /&gt;
== Life Cycle of a Calculation (Job) ==&lt;br /&gt;
[[File:running_calculations_on_cluster.svg|thumb|upright=0.4]]&lt;br /&gt;
On your desktop computer you start your calculations and they start immediately, running until they are finished. Then your desktop does mostly nothing until you start another calculation. A compute cluster has several hundred, maybe a thousand computers (compute nodes); all of them are busy most of the time, and many people want to run a great number of calculations. So running your job has to include some extra steps:&lt;br /&gt;
&lt;br /&gt;
# Prepare a script (a set of commands to run - usually a shell script) with all the commands that are necessary to run your calculation from start to finish. In addition to the commands necessary to run the calculation, this &#039;&#039;[[batch script]]&#039;&#039; has a header section, in which you specify details like required compute cores (processing units within a computer), estimated runtime, memory requirements, disk space needed, etc.&lt;br /&gt;
# &#039;&#039;Submit&#039;&#039; the script into a queue, where your &#039;&#039;job&#039;&#039; (calculation) gets assigned an initial priority and waits in line with other compute jobs until the resources you requested in the header become available. (Requested time is also a resource!) &lt;br /&gt;
# Execution: Once a suitable resource slot is available and your job is in the front of the queue of suitable jobs (suitable with regards to the resource slot), your script is executed on (a) compute node(s). Your calculation runs on that/those node(s) until it is finished or reaches the specified time limit. &lt;br /&gt;
# Save results: Include commands to save the calculation results back to long-term storage (e.g. your home directory), at least at the end of your script. What you have not saved by the time the job finishes won&#039;t be saved!&lt;br /&gt;
# If your job reaches the specified time limit, all your running processes will be killed and the resources get cleared. So any data that has not been saved will be lost!&lt;br /&gt;
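To make these steps concrete, a minimal batch script might look like the following sketch; the #SBATCH values and file names are illustrative assumptions, not recommendations for any particular bwHPC cluster:

```shell
#!/bin/bash
# Header section: resource requests read by the batch system (Slurm).
#SBATCH --ntasks=1          # required compute cores
#SBATCH --time=00:30:00     # estimated runtime (requested time is also a resource)
#SBATCH --mem=2gb           # memory requirement
# Commands that run the calculation from start to finish;
# the echo stands in for the real computation:
echo "result of the calculation" > result.txt
# Save results back to long-term storage before the job ends
# (destination path is an example):
cp result.txt "$HOME/result_saved.txt"
```

Such a script would be handed to Slurm with `sbatch jobscript.sh`; to the shell the `#SBATCH` lines are plain comments, which is why the batch system can use them as the resource header.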
&lt;br /&gt;
The software that distributes jobs on compute nodes is called a &#039;&#039;&#039;[[batch system]]&#039;&#039;&#039; or &#039;&#039;&#039;batch scheduler&#039;&#039;&#039;. The software currently used as a [[batch system]] on bwHPC clusters is &amp;quot;Slurm&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Learn more about the functioning of job distribution in&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[batch system]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Example Jobs ==&lt;br /&gt;
&lt;br /&gt;
For most software packages installed on the cluster by the bwHPC project, we have prepared an example job script that runs an example calculation with that exact software.&lt;br /&gt;
&lt;br /&gt;
How to access these examples is described in the &amp;quot;Software job examples&amp;quot; section of the page&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[Environment Modules]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Link to Batch System per Cluster ==&lt;br /&gt;
&lt;br /&gt;
Because of differences in configuration (partly due to different available hardware), each cluster has its own batch system documentation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[BwUniCluster3.0/Running_Jobs|Slurm bwUniCluster 3.0]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[JUSTUS2/Jobscripts: Running Your Calculations | Slurm JUSTUS 2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[Helix/Slurm   | Slurm Helix]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[NEMO2/Slurm | Slurm NEMO2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[BinAC2/Slurm | Slurm BinAC2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== How to Use Computing Resources Efficiently ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you are running your calculations, you will have to decide how many compute cores your job will use simultaneously. &lt;br /&gt;
For this, your computational problem has to be divided into pieces, which always causes some overhead. &lt;br /&gt;
&lt;br /&gt;
Guidance on choosing a reasonable number of compute cores for your calculation can be found under&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[Scaling]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Running calculations on an HPC node consumes a lot of energy. To make the most of the available resources and keep cluster and energy use as efficient as possible, please also see our advice for &lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[Energy Efficient Cluster Usage]]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>A Flachmueller</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Running_Calculations&amp;diff=15796</id>
		<title>Running Calculations</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Running_Calculations&amp;diff=15796"/>
		<updated>2026-03-11T11:15:34Z</updated>

		<summary type="html">&lt;p&gt;A Flachmueller: /* Life Cycle of a Calculation (Job) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;← This page is used in the [[HPC Glossary]] to explain the term &amp;quot;Batch Scheduler&amp;quot; and &amp;quot;Batch System&amp;quot;&lt;br /&gt;
== Life Cycle of a Calculation (Job) ==&lt;br /&gt;
[[File:running_calculations_on_cluster.svg|thumb|upright=0.4]]&lt;br /&gt;
On your desktop computer you start your calculations and they start immediately, running until they are finished. Then your desktop does mostly nothing until you start another calculation. A compute cluster has several hundred, maybe a thousand computers (compute nodes); all of them are busy most of the time, and many people want to run a great number of calculations. So running your job has to include some extra steps:&lt;br /&gt;
&lt;br /&gt;
# Prepare a script (a set of commands to run - usually a shell script) with all the commands that are necessary to run your calculation from start to finish. In addition to the commands necessary to run the calculation, this &#039;&#039;[[batch script]]&#039;&#039; has a header section, in which you specify details like required compute cores (processing units within a computer), estimated runtime, memory requirements, disk space needed, etc.&lt;br /&gt;
# &#039;&#039;Submit&#039;&#039; the script into a queue, where your &#039;&#039;job&#039;&#039; (calculation) is queued and waits in line with other compute jobs until the resources you requested in the header become available. &lt;br /&gt;
# Execution: Once a suitable resource slot is available and your job is in the front of the queue of suitable jobs (suitable with regards to the resource slot), your script is executed on (a) compute node(s). Your calculation runs on that/those node(s) until it is finished or reaches the specified time limit. &lt;br /&gt;
# Save results: Include commands to save the calculation results back to long-term storage (e.g. your home directory), at least at the end of your script. What you have not saved by the time the job finishes won&#039;t be saved!&lt;br /&gt;
# If your job reaches the specified time limit, all your running processes will be killed and the resources get cleared. So any data that has not been saved will be lost!&lt;br /&gt;
&lt;br /&gt;
The software that distributes jobs on compute nodes is called a &#039;&#039;&#039;[[batch system]]&#039;&#039;&#039; or &#039;&#039;&#039;batch scheduler&#039;&#039;&#039;. The software currently used as a [[batch system]] on bwHPC clusters is &amp;quot;Slurm&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Learn more about the functioning of job distribution in&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[batch system]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Example Jobs ==&lt;br /&gt;
&lt;br /&gt;
For most software packages installed on the cluster by the bwHPC project, we have prepared an example job script that runs an example calculation with that exact software.&lt;br /&gt;
&lt;br /&gt;
How to access these examples is described in the &amp;quot;Software job examples&amp;quot; section of the page&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[Environment Modules]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Link to Batch System per Cluster ==&lt;br /&gt;
&lt;br /&gt;
Because of differences in configuration (partly due to different available hardware), each cluster has its own batch system documentation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[BwUniCluster3.0/Running_Jobs|Slurm bwUniCluster 3.0]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[JUSTUS2/Jobscripts: Running Your Calculations | Slurm JUSTUS 2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[Helix/Slurm   | Slurm Helix]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[NEMO2/Slurm | Slurm NEMO2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[BinAC2/Slurm | Slurm BinAC2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== How to Use Computing Resources Efficiently ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you are running your calculations, you will have to decide how many compute cores your job will use simultaneously. &lt;br /&gt;
For this, your computational problem has to be divided into pieces, which always causes some overhead. &lt;br /&gt;
&lt;br /&gt;
Guidance on choosing a reasonable number of compute cores for your calculation can be found under&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[Scaling]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Running calculations on an HPC node consumes a lot of energy. To make the most of the available resources and keep cluster and energy use as efficient as possible, please also see our advice for &lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[Energy Efficient Cluster Usage]]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>A Flachmueller</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Running_Calculations&amp;diff=15795</id>
		<title>Running Calculations</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Running_Calculations&amp;diff=15795"/>
		<updated>2026-03-10T16:34:04Z</updated>

		<summary type="html">&lt;p&gt;A Flachmueller: Execution step is expressed more broadly + minor corrections&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;← This page is used in the [[HPC Glossary]] to explain the term &amp;quot;Batch Scheduler&amp;quot; and &amp;quot;Batch System&amp;quot;&lt;br /&gt;
== Life Cycle of a Calculation (Job) ==&lt;br /&gt;
[[File:running_calculations_on_cluster.svg|thumb|upright=0.4]]&lt;br /&gt;
On your desktop computer you start your calculations and they start immediately, running until they are finished. Then your desktop does mostly nothing until you start another calculation. A compute cluster has several hundred, maybe a thousand computers (compute nodes); all of them are busy most of the time, and many people want to run a great number of calculations. So running your job has to include some extra steps:&lt;br /&gt;
&lt;br /&gt;
# Prepare a script (a set of commands to run - usually a shell script) with all the commands that are necessary to run your calculation from start to finish. In addition to the commands necessary to run the calculation, this &#039;&#039;[[batch script]]&#039;&#039; has a header section, in which you specify details like required compute cores (processing units within a computer), estimated runtime, memory requirements, disk space needed, etc.&lt;br /&gt;
# &#039;&#039;Submit&#039;&#039; the script to a queue.&lt;br /&gt;
# Queueing: your &#039;&#039;job&#039;&#039; (calculation) waits in line with other compute jobs until the resources you requested in the header become available.&lt;br /&gt;
# Execution: once a suitable resource slot is available and your job is at the front of the queue of suitable jobs (suitable with regard to the resource slot), your script is executed on one or more compute nodes. Your calculation runs on those nodes until it is finished or reaches the specified time limit.&lt;br /&gt;
# Save results: include commands to save the calculation results back to long-term storage (e.g. your home directory), at least at the end of your script.&lt;br /&gt;
&lt;br /&gt;
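The header directives from step 1 can be sketched as a minimal Slurm batch script; the job name, resource values, and file names below are illustrative placeholders, not bwHPC defaults:&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in the queue (placeholder)
#SBATCH --ntasks=1                # number of tasks (processes)
#SBATCH --cpus-per-task=4         # compute cores per task
#SBATCH --time=00:30:00           # estimated runtime (hh:mm:ss)
#SBATCH --mem=4gb                 # memory requirement

# commands that run the calculation from start to finish
MSG="running on $(hostname)"
echo "$MSG"

# step 5: save results back to long-term storage at the end
# cp result.dat "$HOME"/results/   # hypothetical result file
```
&lt;br /&gt;
Submitted with &#039;&#039;sbatch jobscript.sh&#039;&#039; (the file name is hypothetical), the #SBATCH lines are read by the scheduler; when the script later runs on a compute node, they are plain shell comments.&lt;br /&gt;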
The software that distributes jobs on compute nodes is called a &#039;&#039;&#039;[[batch system]]&#039;&#039;&#039; or &#039;&#039;&#039;batch scheduler&#039;&#039;&#039;. The software currently used as a [[batch system]] on bwHPC clusters is &amp;quot;Slurm&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Learn more about how job distribution works in&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[batch system]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Example Jobs ==&lt;br /&gt;
&lt;br /&gt;
For most software packages installed on the cluster by the bwHPC project, we have prepared an example job script that runs an example calculation with that exact software.&lt;br /&gt;
&lt;br /&gt;
How to access these examples is described in the &amp;quot;Software job examples&amp;quot; section of the page&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[Environment Modules]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Link to Batch System per Cluster ==&lt;br /&gt;
&lt;br /&gt;
Because of differences in configuration (partly due to different available hardware), each cluster has its own batch system documentation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[BwUniCluster3.0/Running_Jobs|Slurm bwUniCluster 3.0]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[JUSTUS2/Jobscripts: Running Your Calculations | Slurm JUSTUS 2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[Helix/Slurm   | Slurm Helix]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[NEMO2/Slurm | Slurm NEMO2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[BinAC2/Slurm | Slurm BinAC2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== How to Use Computing Resources Efficiently ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running your calculations, you have to decide on how many compute cores your job will run simultaneously.&lt;br /&gt;
For this, your computational problem has to be divided into pieces, which always causes some overhead.&lt;br /&gt;
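As a rough illustration of that overhead, Amdahl&#039;s law (a general model, not a bwHPC-specific figure) says that if a fraction s of the runtime stays serial, the best possible speedup on n cores is 1/(s + (1-s)/n):&lt;br /&gt;
&lt;br /&gt;
```shell
# Amdahl's law sketch: ideal speedup for serial fraction s on n cores
amdahl() { awk -v s="$1" -v n="$2" 'BEGIN { printf "%.1f", 1/(s + (1 - s)/n) }'; }

# With only 5% serial work, 64 cores yield roughly a 15x speedup, not 64x:
amdahl 0.05 64    # prints 15.4
```
&lt;br /&gt;
Real programs also pay communication costs on top of this, so measuring a few core counts yourself beats any formula.&lt;br /&gt;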
&lt;br /&gt;
How to find a reasonable number of compute cores for your calculation is described under&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;  &#039;&#039;&#039;[[Scaling]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Running calculations on an HPC node consumes a lot of energy. To make the most of the available resources and keep cluster and energy use as efficient as possible, please also see our advice on&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[Energy Efficient Cluster Usage]]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>A Flachmueller</name></author>
	</entry>
</feed>