<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A+Saramet</id>
	<title>bwHPC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A+Saramet"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/e/Special:Contributions/A_Saramet"/>
	<updated>2026-05-03T16:56:41Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=8965</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=8965"/>
		<updated>2021-07-29T09:22:21Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* OpenFOAM and ParaView on bwUniCluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster_2.0]] &amp;amp;#124; [[BwForCluster_JUSTUS_2]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen &amp;amp;#124; [[BwForCluster_MLS&amp;amp;WISO_Production]]&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenFOAM&#039;&#039;&#039; (Open-source Field Operation And Manipulation) is a free, open-source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.&lt;br /&gt;
&lt;br /&gt;
= Availability =&lt;br /&gt;
&lt;br /&gt;
OpenFOAM is available on selected bwHPC-Clusters. A complete list of versions currently installed on the bwHPC-Clusters can be obtained from the [https://www.bwhpc.de/software.html Cluster Information System (CIS)].&lt;br /&gt;
&lt;br /&gt;
= Adding OpenFOAM to Your Environment =&lt;br /&gt;
&lt;br /&gt;
In order to check which versions of OpenFOAM are installed on the compute cluster, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, several OpenFOAM versions should be returned.&lt;br /&gt;
The default version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other OpenFOAM versions (if available) can be loaded according to:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version.&lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, type&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
For better performance when running OpenFOAM jobs in parallel on bwUniCluster, it is recommended to keep the decomposed data in local folders on each node.&lt;br /&gt;
&lt;br /&gt;
Therefore you may use the *HPC wrapper scripts, which copy your data to the node-specific folders after running decomposePar, and copy it back to the case folder before running reconstructPar.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t forget to allocate enough wall time for the decomposition and reconstruction of your cases: the data is processed directly on the nodes and may be lost if the job is cancelled before it is copied back into the case folder.&lt;br /&gt;
&lt;br /&gt;
The following commands will do that for you:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ mpirun --bind-to core --map-by core --report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpirun --bind-to core --map-by core --report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For running jobs on multiple nodes, OpenFOAM needs passwordless communication between the nodes in order to copy data into the local folders.&lt;br /&gt;
&lt;br /&gt;
A small trick using ssh-keygen once will let your nodes communicate freely over ssh.&lt;br /&gt;
&lt;br /&gt;
Do this once (if you haven&#039;t already):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen&lt;br /&gt;
$ cat $HOME/.ssh/id_rsa.pub &amp;gt;&amp;gt; $HOME/.ssh/authorized_keys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
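The steps above can also be run non-interactively; a minimal sketch (the -q and -N &quot;&quot; flags are standard ssh-keygen options for suppressing prompts and setting an empty passphrase; this is an illustration, not a site-specific requirement):

```shell
# One-time setup: generate an ssh key with an empty passphrase and
# authorize it for logins to the same account (non-interactive variant
# of the commands above).
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
if [ ! -f "$HOME/.ssh/id_rsa" ]; then
    ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa"
fi
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```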
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
Before running OpenFOAM jobs in parallel, it is necessary to decompose the geometry domain into a number of segments equal to the number of processors (or threads) you intend to use.&lt;br /&gt;
&lt;br /&gt;
That means, for example, that to run a case on 8 processors you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039;, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors at the segment boundaries.&lt;br /&gt;
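In a batch job, the segment count should match the allocated tasks. A small sketch, assuming Slurm&#039;s standard SLURM_NNODES and SLURM_NTASKS_PER_NODE variables (the fallback defaults of 2 nodes with 40 tasks are only there so the snippet can be tried outside a job):

```shell
# Compute the number of subdomains from the Slurm allocation.
# SLURM_* variables are set automatically inside a job; the defaults
# below only make the snippet runnable interactively.
NODES="${SLURM_NNODES:-2}"
TASKS_PER_NODE="${SLURM_NTASKS_PER_NODE:-40}"
NSUB=$((NODES * TASKS_PER_NODE))
# This value must match numberOfSubdomains in system/decomposeParDict:
echo "numberOfSubdomains ${NSUB};"
```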
&lt;br /&gt;
There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results.&lt;br /&gt;
&lt;br /&gt;
The decomposition into segments is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility.&lt;br /&gt;
&lt;br /&gt;
The number of subdomains into which the geometry will be decomposed is specified in &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot;, together with the decomposition method to use.&lt;br /&gt;
&lt;br /&gt;
The automatic decomposition method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;. It partitions the mesh, distributing the cells as evenly as possible across the processors and avoiding empty segments or segments with too few cells. If you want your mesh divided in a particular way, for example by specifying the number of segments to cut in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods.&lt;br /&gt;
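For illustration, the body of a minimal system/decomposeParDict for 8 subdomains might look like this (the hierarchicalCoeffs values are an example, not a recommendation):

```text
numberOfSubdomains 8;

method          scotch;

// Alternative: cut the mesh 2 x 2 x 2 along x, y and z:
// method          hierarchical;
// hierarchicalCoeffs
// {
//     n       (2 2 2);
//     order   xyz;
//     delta   0.001;
// }
```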
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version.&lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 8 on 80 processors, on the &#039;&#039;multiple&#039;&#039; partition with a maximum wall clock time of 4 hours, looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Allocate nodes&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
# Number of tasks per node&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
# Queue class https://wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues&lt;br /&gt;
#SBATCH --partition=multiple&lt;br /&gt;
# Maximum job run time&lt;br /&gt;
#SBATCH --time=4:00:00&lt;br /&gt;
# Give the job a reasonable name&lt;br /&gt;
#SBATCH --job-name=openfoam&lt;br /&gt;
# File name for standard output (%j will be replaced by job id)&lt;br /&gt;
#SBATCH --output=logs-%j.out&lt;br /&gt;
# File name for error output&lt;br /&gt;
#SBATCH --error=logs-%j.err&lt;br /&gt;
&lt;br /&gt;
# User defined variables&lt;br /&gt;
FOAM_VERSION=&amp;quot;8&amp;quot;&lt;br /&gt;
EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core --report-bindings&amp;quot;&lt;br /&gt;
&lt;br /&gt;
module load cae/openfoam/${FOAM_VERSION}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposeParHPC &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable&lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
reconstructParHPC&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The script above will run a parallel OpenFOAM job with the pre-installed OpenMPI. If you are using an OpenFOAM version which comes with pre-installed Intel MPI (for example &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;cae/openfoam/v1712-impi&amp;lt;/span&amp;gt;), you will have to modify the batch script to take full advantage of Intel MPI for parallel calculations. For details see:&lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features|Batch Jobs Features]]&lt;br /&gt;
&lt;br /&gt;
= Controlling I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. Wildcards are allowed in the list of fields; for example, to write out all fields starting with &amp;quot;RR_&amp;quot; you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by writing &amp;quot;banana&amp;quot; (or any other invalid name) in the field list: during the solver run, all valid field names are printed.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific times in the simulation, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use an OpenFOAM version older than 4.0 (or v1606), the type of the function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use an OpenFOAM version older than 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case has to be opened manually from within the corresponding ParaView module.&lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/5.9&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
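The naming rule above can be automated; a minimal sketch to run inside the case folder (the folder name &quot;cavity&quot; in the comment is just an example):

```shell
# Create the dummy file named after the current case folder,
# e.g. inside a folder called "cavity" this creates "cavity.openfoam".
CASE_NAME="$(basename "$PWD")"
touch "${CASE_NAME}.openfoam"
```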
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
To run ParaView on bwUniCluster, a VNC session is required.&lt;br /&gt;
On the cluster run: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ start_vnc_desktop --hw-rendering &amp;lt;/pre&amp;gt;&lt;br /&gt;
Start your VNC client on your desktop PC.&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; Information on remote visualization on the KIT HPC systems is available at: https://wiki.scc.kit.edu/hpc/index.php/Remote_Visualization&lt;br /&gt;
&lt;br /&gt;
4. In Paraview go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;Ok&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]]&lt;br /&gt;
[[Category:BwUniCluster]]&lt;br /&gt;
[[Category:BwUniCluster_2.0]]&lt;br /&gt;
[[Category:BwForCluster_Chemistry]]&lt;br /&gt;
[[Category:BwForCluster_JUSTUS_2]]&lt;br /&gt;
[[Category:BwForCluster_MLS&amp;amp;WISO_Production]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=8964</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=8964"/>
		<updated>2021-07-29T09:19:42Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* OpenFOAM and ParaView on bwUniCluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster_2.0]] &amp;amp;#124; [[BwForCluster_JUSTUS_2]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen &amp;amp;#124; [[BwForCluster_MLS&amp;amp;WISO_Production]]&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenFOAM&#039;&#039;&#039; (Open-source Field Operation And Manipulation) is a free, open-source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.&lt;br /&gt;
&lt;br /&gt;
= Availability =&lt;br /&gt;
&lt;br /&gt;
OpenFOAM is available on selected bwHPC-Clusters. A complete list of versions currently installed on the bwHPC-Clusters can be obtained from the [https://www.bwhpc.de/software.html Cluster Information System (CIS)].&lt;br /&gt;
&lt;br /&gt;
= Adding OpenFOAM to Your Environment =&lt;br /&gt;
&lt;br /&gt;
In order to check which versions of OpenFOAM are installed on the compute cluster, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, several OpenFOAM versions should be returned.&lt;br /&gt;
The default version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other OpenFOAM versions (if available) can be loaded according to:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version.&lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, type&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
For a better performance on running OpenFOAM jobs in parallel on bwUniCluster, it is recommended to have the decomposed data in local folders on each node.  &lt;br /&gt;
&lt;br /&gt;
Therefore you may use *HPC scripts, wich will copy your data to the node specific folders after running the decomposePar, and copy it back to the local case folder before running reconstructPar.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t forget to allocate enough wall-time for decomposition and reconstruction of your cases. As the data will be processed directly on  the nodes, and may be lost if the job is cancelled before  the data is copied back into the case folder.&lt;br /&gt;
&lt;br /&gt;
Following commands will do that for you: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ recontructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For running jobs on multiple nodes, OpenFOAM needs passwordless communication between the nodes, to copy data into the local folders.&lt;br /&gt;
&lt;br /&gt;
A small trick using ssh-keygen once will let your nodes to communicate freely over rsh. &lt;br /&gt;
&lt;br /&gt;
Do it once (if you didn&#039;t do it already in the past):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen&lt;br /&gt;
$ cat $HOME/.ssh/id_rsa.pub &amp;gt;&amp;gt; $HOME/.ssh/authorized_keys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
Before running OpenFOAM jobs in parallel, it is necessary to decompose the geometry domain into segments, equal to the number of processors (or threads) you intend to use. &lt;br /&gt;
&lt;br /&gt;
That means, for example, if you want to run a case on 8 processors, you will have to decompose the mesh in 8 segments, first. Then, you start the solver in &#039;&#039;parallel&#039;&#039;, letting &#039;&#039;OpenFOAM&#039;&#039; to run calculations concurrently on these segments, one processor responding for one segment of the mesh, sharing the data with all other processors in between. &lt;br /&gt;
&lt;br /&gt;
There is, of course, a mechanism that connects properly the calculations, so you don&#039;t loose your data or generate wrong results. &lt;br /&gt;
&lt;br /&gt;
Decomposition and segments building process is handled by&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. &lt;br /&gt;
&lt;br /&gt;
The number of subdomains, in which the geometry will be decomposed, is specified in &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot;, as well as the decomposition method to use. &lt;br /&gt;
&lt;br /&gt;
The automatic decomposition method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;. It trims the mesh, collecting as many cells as possible per processor, trying to avoid having empty segments or segments with not enough cells. If you want your mesh to be divided in other way, specifying the number of segments it should be cut in x, y or z direction, for example, you can use &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module loads automatically the necessary &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module for parallel run, do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another version of mpi, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job-script to submit a batch job called &#039;&#039;job_openfoam.sh&#039;&#039; that runs &#039;&#039;icoFoam&#039;&#039; solver with OpenFoam version 8, on 80 processors, on a &#039;&#039;multiple&#039;&#039; partition with a total wall clock time of 6 hours looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Allocate nodes&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
# Number of tasks per node&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
# Queue class https://wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues&lt;br /&gt;
#SBATCH --partition=multiple&lt;br /&gt;
# Maximum job run time&lt;br /&gt;
#SBATCH --time=4:00:00&lt;br /&gt;
# Give the job a reasonable name&lt;br /&gt;
#SBATCH --job-name=openfoam&lt;br /&gt;
# File name for standard output (%j will be replaced by job id)&lt;br /&gt;
#SBATCH --output=logs-%j.out&lt;br /&gt;
# File name for error output&lt;br /&gt;
#SBATCH --error=logs-%j.err&lt;br /&gt;
&lt;br /&gt;
# User defined variables&lt;br /&gt;
FOAM_VERSION=&amp;quot;8&amp;quot;&lt;br /&gt;
EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core --report-bindings&amp;quot;&lt;br /&gt;
&lt;br /&gt;
module load ${FOAM_VERSION}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposeParHPC &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable&lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
reconstructParHPC&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The script above will run a parallel OpenFOAM Job with pre-installed OpenMPI. If you are using an OpenFOAM version wich comes with pre-installed Intel MPI (like, for example&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;cae/openfoam/v1712-impi&amp;lt;/span&amp;gt;) you will have to modify the batch script to use all the advantages of Intel MPI for parallel calculations. For details see:  &lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features|Batch Jobs Features]]&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. You can also use wildcards in the list of fields. For example, in order to write out all fields starting with &amp;quot;RR_&amp;quot;, you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by putting an invalid dummy entry such as &amp;quot;banana&amp;quot; in the field list: during the solver run, all valid field names are printed.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific times in the simulation, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
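As an illustration of multiple function objects, a second, independently named object inside the &amp;quot;functions&amp;quot; block could write a different subset of fields at its own interval (the field names and the interval here are only placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
writeFieldsCoarse // a second, independently named function object&lt;br /&gt;
{&lt;br /&gt;
    type writeObjects;&lt;br /&gt;
    libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
    objects ( p &amp;quot;RR_.*&amp;quot; ); // example subset, including a wildcard&lt;br /&gt;
&lt;br /&gt;
    writeControl timeStep;&lt;br /&gt;
    writeInterval 1000; // write every 1000th simulation time step&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;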
&lt;br /&gt;
If you use OpenFOAM before version 4.0 or 1606, the type of function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
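Putting these substitutions together, a sketch of the same function object for a pre-3.0 version might look like this (the interval entry is assumed to be renamed to &amp;quot;outputInterval&amp;quot; accordingly):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
writeFields&lt;br /&gt;
{&lt;br /&gt;
    type writeRegisteredObject;&lt;br /&gt;
    functionObjectLibs ( &amp;quot;libIOFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
    objects ( T U rho );&lt;br /&gt;
&lt;br /&gt;
    outputControl timeStep;&lt;br /&gt;
    outputInterval 100;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;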
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case has to be opened manually from within a separately loaded ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/5.9&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
On bwUniCluster, ParaView has to be run inside a VNC session.&lt;br /&gt;
Run &#039;start_vnc_desktop --hw-rendering&#039; on the cluster and connect with the VNC client on your desktop PC.&lt;br /&gt;
Information for remote visualization on KIT HPC system is available on:&lt;br /&gt;
https://wiki.scc.kit.edu/hpc/index.php/Remote_Visualization&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;Ok&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]]&lt;br /&gt;
[[Category:BwUniCluster]]&lt;br /&gt;
[[Category:BwUniCluster_2.0]]&lt;br /&gt;
[[Category:BwForCluster_Chemistry]]&lt;br /&gt;
[[Category:BwForCluster_JUSTUS_2]]&lt;br /&gt;
[[Category:BwForCluster_MLS&amp;amp;WISO_Production]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=8591</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=8591"/>
		<updated>2021-05-05T14:44:42Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Wrapper script generation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster_2.0]] &amp;amp;#124; [[BwForCluster_JUSTUS_2]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenFOAM&#039;&#039;&#039; (Open-source Field Operation And Manipulation) is a free, open-source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.&lt;br /&gt;
&lt;br /&gt;
= Availability =&lt;br /&gt;
&lt;br /&gt;
OpenFOAM is available on selected bwHPC-Clusters. A complete list of versions currently installed on the bwHPC-Clusters can be obtained from the [https://www.bwhpc.de/software.html Cluster Information System (CIS)].&lt;br /&gt;
&lt;br /&gt;
= Adding OpenFOAM to Your Environment =&lt;br /&gt;
&lt;br /&gt;
In order to check which versions of OpenFOAM are installed on the compute cluster, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, several OpenFOAM versions should be returned.&lt;br /&gt;
The default version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other OpenFOAM versions (if available) can be loaded according to:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version.&lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, type&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
For better performance when running OpenFOAM jobs in parallel on bwUniCluster, it is recommended to keep the decomposed data in local folders on each node.  &lt;br /&gt;
&lt;br /&gt;
Therefore you may use the *HPC wrapper scripts, which copy your data to the node-specific folders after running decomposePar, and copy it back to the local case folder before running reconstructPar.&lt;br /&gt;
&lt;br /&gt;
Do not forget to allocate enough wall time for the decomposition and reconstruction of your cases, as the data is processed directly on the nodes and may be lost if the job is cancelled before it is copied back into the case folder.&lt;br /&gt;
&lt;br /&gt;
The following commands will do that for you: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For jobs running on multiple nodes, OpenFOAM needs passwordless communication between the nodes in order to copy data into the local folders.&lt;br /&gt;
&lt;br /&gt;
A small trick using ssh-keygen once will let your nodes communicate freely over ssh. &lt;br /&gt;
&lt;br /&gt;
Do this once (if you have not done it already):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen&lt;br /&gt;
$ cat $HOME/.ssh/id_rsa.pub &amp;gt;&amp;gt; $HOME/.ssh/authorized_keys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
Before running OpenFOAM jobs in parallel, it is necessary to decompose the geometry domain into a number of segments equal to the number of processors (or threads) you intend to use. &lt;br /&gt;
&lt;br /&gt;
That means, for example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039;, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors in between. &lt;br /&gt;
&lt;br /&gt;
There is, of course, a mechanism that connects the calculations properly, so you do not lose data or generate wrong results. &lt;br /&gt;
&lt;br /&gt;
The decomposition into segments is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. &lt;br /&gt;
&lt;br /&gt;
The number of subdomains into which the geometry will be decomposed is specified in &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot;, along with the decomposition method to use. &lt;br /&gt;
&lt;br /&gt;
The automatic decomposition method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;. It partitions the mesh so that each processor gets as many cells as possible, avoiding empty segments or segments with too few cells. If you want the mesh to be divided in another way, for example by specifying the number of segments it should be cut into in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. &lt;br /&gt;
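A minimal &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; using the scotch method might look like this (the number of subdomains is only an example and must match the number of processors you request):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      decomposeParDict;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
numberOfSubdomains 8;&lt;br /&gt;
method             scotch;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;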
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module needed for a parallel run. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI module, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script, called &#039;&#039;job_openfoam.sh&#039;&#039;, that submits a batch job running the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 8 on 80 processors of the &#039;&#039;multiple&#039;&#039; partition with a total wall clock time of 4 hours looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Allocate nodes&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
# Number of tasks per node&lt;br /&gt;
#SBATCH --ntasks-per-node=40&lt;br /&gt;
# Queue class https://wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues&lt;br /&gt;
#SBATCH --partition=multiple&lt;br /&gt;
# Maximum job run time&lt;br /&gt;
#SBATCH --time=4:00:00&lt;br /&gt;
# Give the job a reasonable name&lt;br /&gt;
#SBATCH --job-name=openfoam&lt;br /&gt;
# File name for standard output (%j will be replaced by job id)&lt;br /&gt;
#SBATCH --output=logs-%j.out&lt;br /&gt;
# File name for error output&lt;br /&gt;
#SBATCH --error=logs-%j.err&lt;br /&gt;
&lt;br /&gt;
# User defined variables&lt;br /&gt;
FOAM_VERSION=&amp;quot;8&amp;quot;&lt;br /&gt;
EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core --report-bindings&amp;quot;&lt;br /&gt;
&lt;br /&gt;
module load cae/openfoam/${FOAM_VERSION}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposeParHPC &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable&lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
reconstructParHPC&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The script above will run a parallel OpenFOAM job with the pre-installed OpenMPI. If you are using an OpenFOAM version which comes with pre-installed Intel MPI (for example&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;cae/openfoam/v1712-impi&amp;lt;/span&amp;gt;), you will have to modify the batch script to take full advantage of Intel MPI for parallel calculations. For details see:  &lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features|Batch Jobs Features]]&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. You can also use wildcards in the list of fields. For example, in order to write out all fields starting with &amp;quot;RR_&amp;quot;, you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by putting an invalid dummy entry such as &amp;quot;banana&amp;quot; in the field list: during the solver run, all valid field names are printed.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific times in the simulation, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use OpenFOAM before version 4.0 or 1606, the type of function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case has to be opened manually from within a separately loaded ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires an X server to run.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;Ok&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]]&lt;br /&gt;
[[Category:BwUniCluster]]&lt;br /&gt;
[[Category:BwUniCluster_2.0]]&lt;br /&gt;
[[Category:BwForCluster_Chemistry]]&lt;br /&gt;
[[Category:BwForCluster_JUSTUS_2]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5495</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5495"/>
		<updated>2018-06-25T16:20:10Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Parallel run with OpenFOAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
For better performance when running OpenFOAM jobs in parallel on bwUniCluster, it is recommended to keep the decomposed data in local folders on each node.  &lt;br /&gt;
&lt;br /&gt;
Therefore you may use the *HPC wrapper scripts, which copy your data to the node-specific folders after running decomposePar, and copy it back to the local case folder before running reconstructPar.&lt;br /&gt;
&lt;br /&gt;
Do not forget to allocate enough wall time for the decomposition and reconstruction of your cases, as the data is processed directly on the nodes and may be lost if the job is cancelled before it is copied back into the case folder.&lt;br /&gt;
&lt;br /&gt;
The following commands will do that for you: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For jobs running on multiple nodes, OpenFOAM needs passwordless communication between the nodes in order to copy data into the local folders.&lt;br /&gt;
&lt;br /&gt;
A small trick using ssh-keygen once will let your nodes communicate freely over ssh. &lt;br /&gt;
&lt;br /&gt;
Do this once (if you have not done it already):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen&lt;br /&gt;
$ cat $HOME/.ssh/id_rsa.pub &amp;gt;&amp;gt; $HOME/.ssh/authorized_keys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
Before running OpenFOAM jobs in parallel, it is necessary to decompose the geometry domain into a number of segments equal to the number of processors (or threads) you intend to use. &lt;br /&gt;
&lt;br /&gt;
That means, for example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039;, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors in between. &lt;br /&gt;
&lt;br /&gt;
There is, of course, a mechanism that connects the calculations properly, so you do not lose data or generate wrong results. &lt;br /&gt;
&lt;br /&gt;
The decomposition into segments is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. &lt;br /&gt;
&lt;br /&gt;
The number of subdomains into which the geometry will be decomposed is specified in &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot;, along with the decomposition method to use. &lt;br /&gt;
&lt;br /&gt;
The automatic decomposition method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;. It partitions the mesh so that each processor gets as many cells as possible, avoiding empty segments or segments with too few cells. If you want the mesh to be divided in another way, for example by specifying the number of segments it should be cut into in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module needed for a parallel run. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI module, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script, called &#039;&#039;job_openfoam.sh&#039;&#039;, that submits a batch job running the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requiring 6000 MByte of physical memory per processor and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposeParHPC &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
reconstructParHPC&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The script above will run a parallel OpenFOAM job with the pre-installed OpenMPI. If you are using an OpenFOAM version which comes with pre-installed Intel MPI (for example, &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;cae/openfoam/v1712-impi&amp;lt;/span&amp;gt;), you will have to modify the batch script to take full advantage of Intel MPI for parallel calculations. For details see:  &lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features|Batch Jobs Features]]&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. You can also use wildcards in the list of fields. For example, in order to write out all fields starting with &amp;quot;RR_&amp;quot; you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by putting an invalid name such as &amp;quot;banana&amp;quot; in the field list; while the solver runs, all valid field names are printed.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific simulation times, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use OpenFOAM before version 4.0 or 1606, the type of function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and replace the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, cases have to be opened manually from within the ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software that requires an X server to run.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5494</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5494"/>
		<updated>2018-06-25T16:19:36Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Parallel run with OpenFOAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
For better performance when running OpenFOAM jobs in parallel on bwUniCluster, it is recommended to keep the decomposed data in local folders on each node.  &lt;br /&gt;
&lt;br /&gt;
For this you can use the *HPC scripts, which copy your data to the node-specific folders after running decomposePar, and copy it back to the local case folder before running reconstructPar.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t forget to allocate enough wall time for the decomposition and reconstruction of your cases: the data is processed directly on the nodes and may be lost if the job is cancelled before it is copied back into the case folder.&lt;br /&gt;
&lt;br /&gt;
The following commands will do that for you: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For running jobs on multiple nodes, OpenFOAM needs passwordless communication between the nodes to copy data into the local folders.&lt;br /&gt;
&lt;br /&gt;
Running ssh-keygen once will let your nodes communicate freely without password prompts. &lt;br /&gt;
&lt;br /&gt;
Do it once:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh-keygen&lt;br /&gt;
$ cat $HOME/.ssh/id_rsa.pub &amp;gt;&amp;gt; $HOME/.ssh/authorized_keys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
Before running OpenFOAM jobs in parallel, it is necessary to decompose the geometry domain into a number of segments equal to the number of processors (or threads) you intend to use. &lt;br /&gt;
&lt;br /&gt;
That means, for example, that if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039;, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and data shared between all processors. &lt;br /&gt;
&lt;br /&gt;
There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. &lt;br /&gt;
&lt;br /&gt;
The decomposition and segment-building process is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. &lt;br /&gt;
&lt;br /&gt;
The number of subdomains into which the geometry will be decomposed is specified in &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot;, as well as the decomposition method to use. &lt;br /&gt;
&lt;br /&gt;
The automatic decomposition method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;. It partitions the mesh automatically, balancing the number of cells per processor and avoiding empty segments or segments with too few cells. If you want your mesh to be divided in another way, for example by specifying the number of segments it should be cut into in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
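A minimal &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; for the 8-processor example above could look like this (a sketch only; the &amp;quot;simple&amp;quot; entries are shown for illustration and should be adapted to your own case):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
numberOfSubdomains 8;&lt;br /&gt;
method             scotch;&lt;br /&gt;
&lt;br /&gt;
// with method simple; you would instead specify the cuts per direction:&lt;br /&gt;
// simpleCoeffs { n (2 2 2); delta 0.001; } // 2 x 2 x 2 = 8 segments&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;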
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs; do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script that submits a batch job called &#039;&#039;job_openfoam.sh&#039;&#039;, running the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, with 6000 MB of physical memory per processor and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposeParHPC &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
reconstructParHPC&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The script above will run a parallel OpenFOAM job with the pre-installed OpenMPI. If you are using an OpenFOAM version which comes with pre-installed Intel MPI (for example, &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;cae/openfoam/v1712-impi&amp;lt;/span&amp;gt;), you will have to modify the batch script to take full advantage of Intel MPI for parallel calculations. For details see:  &lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features|Batch Jobs Features]]&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. You can also use wildcards in the list of fields. For example, in order to write out all fields starting with &amp;quot;RR_&amp;quot; you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by putting an invalid name such as &amp;quot;banana&amp;quot; in the field list; while the solver runs, all valid field names are printed.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific simulation times, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use OpenFOAM before version 4.0 or 1606, the type of function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and replace the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, cases have to be opened manually from within the ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software that requires an X server to run.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5493</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5493"/>
		<updated>2018-06-25T15:37:09Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* General information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
For better performance when running OpenFOAM jobs in parallel on bwUniCluster, it is recommended to keep the decomposed data in local folders on each node.  &lt;br /&gt;
&lt;br /&gt;
For this you can use the *HPC scripts, which copy your data to the node-specific folders after running decomposePar, and copy it back to the local case folder before running reconstructPar.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t forget to allocate enough wall time for the decomposition and reconstruction of your cases: the data is processed directly on the nodes and may be lost if the job is cancelled before it is copied back into the case folder.&lt;br /&gt;
&lt;br /&gt;
The following commands will do that for you: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
Before running OpenFOAM jobs in parallel, it is necessary to decompose the geometry domain into a number of segments equal to the number of processors (or threads) you intend to use. &lt;br /&gt;
&lt;br /&gt;
That means, for example, that if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039;, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and data shared between all processors. &lt;br /&gt;
&lt;br /&gt;
There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. &lt;br /&gt;
&lt;br /&gt;
The decomposition and segment-building process is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. &lt;br /&gt;
&lt;br /&gt;
The number of subdomains into which the geometry will be decomposed is specified in &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot;, as well as the decomposition method to use. &lt;br /&gt;
&lt;br /&gt;
The automatic decomposition method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;. It partitions the mesh automatically, balancing the number of cells per processor and avoiding empty segments or segments with too few cells. If you want your mesh to be divided in another way, for example by specifying the number of segments it should be cut into in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module needed for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requesting 6000 MB of physical memory per processor and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposeParHPC &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
reconstructParHPC&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The script above runs a parallel OpenFOAM job with the pre-installed OpenMPI. If you are using an OpenFOAM version which comes with pre-installed Intel MPI (for example&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;cae/openfoam/v1712-impi&amp;lt;/span&amp;gt;), you will have to modify the batch script to take full advantage of Intel MPI for parallel calculations. For details see:  &lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features|Batch Jobs Features]]&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. Wildcards are allowed in the list of fields as well. For example, to write out all fields starting with &amp;quot;RR_&amp;quot;, you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by putting an invalid name such as &amp;quot;banana&amp;quot; in the field list; during the solver run, all valid field names are then printed.&lt;br /&gt;
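The objects list with such a wildcard might then look like this sketch (the field names are examples only):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
objects&lt;br /&gt;
(&lt;br /&gt;
    T U        // explicit field names&lt;br /&gt;
    &amp;quot;RR_.*&amp;quot;    // regular expression: all fields starting with RR_&lt;br /&gt;
);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;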
The output time can also be changed. Instead of writing at specific simulation times, you can write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use OpenFOAM before version 4.0 or 1606, the type of function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
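Putting these version-specific substitutions together, the same function object for an OpenFOAM version before 3.0 might be sketched as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
writeFields&lt;br /&gt;
{&lt;br /&gt;
    type writeRegisteredObject;&lt;br /&gt;
    functionObjectLibs ( &amp;quot;libIOFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
    objects ( T U rho );&lt;br /&gt;
&lt;br /&gt;
    outputControl runTime;&lt;br /&gt;
    writeInterval 1e-5;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;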
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM cases with ParaView, they have to be opened manually within a ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should match the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039; and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM and press &#039;Ok&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5492</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5492"/>
		<updated>2018-06-25T15:16:47Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Wrapper script generation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
For better performance when running OpenFOAM jobs in parallel on bwUniCluster, it is recommended to keep the decomposed data in local folders on each node.  &lt;br /&gt;
&lt;br /&gt;
Therefore you may use the *HPC wrapper scripts, which copy your data to the node-specific folders after running decomposePar, and copy it back to the local case folder before running reconstructPar.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t forget to allocate enough wall time for decomposition and reconstruction of your cases, as the data will be processed directly on the nodes and may be lost if the job is cancelled before the data is copied back into the case folder.&lt;br /&gt;
&lt;br /&gt;
The following commands will do that for you: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel mode with OpenFOAM, it is necessary to decompose the geometry domain into as many segments as the number of processors (or threads) you intend to use. That means, for example, if you want to run a case on 8 processors, you must first decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. The decomposition into segments is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic one is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;. It partitions the mesh, assigning as many cells as possible to each processor while avoiding empty segments or segments with too few cells. If you want the mesh to be divided in another way, for example by specifying the number of segments in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Other methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module needed for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requesting 6000 MB of physical memory per processor and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposeParHPC &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
reconstructParHPC&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The script above runs a parallel OpenFOAM job with the pre-installed OpenMPI. If you are using an OpenFOAM version which comes with pre-installed Intel MPI (for example&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;cae/openfoam/v1712-impi&amp;lt;/span&amp;gt;), you will have to modify the batch script to take full advantage of Intel MPI for parallel calculations. For details see:  &lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features|Batch Jobs Features]]&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. Wildcards are allowed in the list of fields as well. For example, to write out all fields starting with &amp;quot;RR_&amp;quot;, you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by putting an invalid name such as &amp;quot;banana&amp;quot; in the field list; during the solver run, all valid field names are then printed.&lt;br /&gt;
The output time can also be changed. Instead of writing at specific simulation times, you can write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use OpenFOAM before version 4.0 or 1606, the type of function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM cases with ParaView, they have to be opened manually within a ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should match the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039; and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM and press &#039;Ok&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5491</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5491"/>
		<updated>2018-06-25T15:15:15Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Parallel run with OpenFOAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
For better performance when running OpenFOAM jobs in parallel on bwUniCluster, it is recommended to keep the decomposed data in local folders on each node.  &lt;br /&gt;
&lt;br /&gt;
Therefore you may use the *HPC wrapper scripts, which copy your data to the node-specific folders after running decomposePar, and copy it back to the local case folder before running reconstructPar.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t forget to allocate enough wall time for decomposition and reconstruction of your cases, as the data will be processed directly on the nodes and may be lost if the job is cancelled before the data is copied back into the case folder.&lt;br /&gt;
&lt;br /&gt;
The following commands will do that for you: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel mode with OpenFOAM, it is necessary to decompose the geometry domain into segments, equal in number to the processors (or threads) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. Decomposition and segment building are handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into, and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning as many cells as possible per processor while trying to avoid empty segments or segments with few cells. If you want your mesh to be divided differently, for example by specifying the number of segments to cut in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Other methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
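As a sketch only (the subdomain count and the coefficients below are illustrative, not prescriptive), a minimal system/decomposeParDict for an 8-processor run with the automatic method could look like this:

```
numberOfSubdomains 8;       // must match the number of MPI processes requested

method          scotch;     // automatic decomposition, no further settings needed

// Alternative: "simple", cutting the domain along the coordinate axes.
// The product of the entries in n must equal numberOfSubdomains.
// method       simple;
// simpleCoeffs
// {
//     n       (4 2 1);     // 4 cuts in x, 2 in y, 1 in z
//     delta   0.001;
// }
```

The same FoamFile header as in the controlDict example further below (with object decomposeParDict) precedes these entries.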
&amp;lt;br&amp;gt;&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script called &#039;&#039;job_openfoam.sh&#039;&#039; that submits a batch job running the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requiring 6000 MByte of physical memory per processor and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar and the &#039;&amp;amp;&amp;amp;&#039; operator from the command above &lt;br /&gt;
# if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The script above runs a parallel OpenFOAM job with the pre-installed OpenMPI. If you are using an OpenFOAM version which comes with pre-installed Intel MPI (for example &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;cae/openfoam/v1712-impi&amp;lt;/span&amp;gt;), you will have to modify the batch script to take full advantage of Intel MPI for parallel calculations. For details see:  &lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features|Batch Jobs Features]]&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like the one below. At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independently of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. Wildcards are allowed in the list of fields; for example, in order to write out all fields starting with &amp;quot;RR_&amp;quot; you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by writing &amp;quot;banana&amp;quot; in the field list. During the run of the solver all valid field names are printed.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific simulation times, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use an OpenFOAM version older than 4.0 (or v1606), the type of the function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
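Putting the compatibility notes above together, a pre-3.0-style function object could be sketched as follows (a sketch assembled from the substitutions listed above, not verified against an actual pre-3.0 installation; the fields and interval are illustrative):

```
functions
{
    writeFields
    {
        // pre-4.0 / pre-v1606 type name
        type writeRegisteredObject;

        // pre-3.0 library entry instead of libs ( "libutilityFunctionObjects.so" )
        functionObjectLibs ( "libIOFunctionObjects.so" );

        objects ( T U rho );

        // pre-3.0 keyword instead of writeControl
        outputControl timeStep;
        outputInterval 100;
    }
}
```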
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case has to be opened manually from within the corresponding ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5385</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5385"/>
		<updated>2018-03-19T14:16:46Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Wrapper script generation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [https://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
To improve the parallel solving process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data in a local folder directly on the nodes and run from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves the overall performance, provided you have allocated enough wall-time to decompose and rebuild your cases, as the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt; folders are moved into and out of the nodes&#039; local workspace. For this procedure it is necessary to use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel mode with OpenFOAM, it is necessary to decompose the geometry domain into segments, equal in number to the processors (or threads) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. Decomposition and segment building are handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into, and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning as many cells as possible per processor while trying to avoid empty segments or segments with few cells. If you want your mesh to be divided differently, for example by specifying the number of segments to cut in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Other methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script called &#039;&#039;job_openfoam.sh&#039;&#039; that submits a batch job running the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requiring 6000 MByte of physical memory per processor and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar and the &#039;&amp;amp;&amp;amp;&#039; operator from the command above &lt;br /&gt;
# if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The script above runs a parallel OpenFOAM job with the pre-installed OpenMPI. If you are using an OpenFOAM version which comes with pre-installed Intel MPI (for example &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;cae/openfoam/v1712-impi&amp;lt;/span&amp;gt;), you will have to modify the batch script to take full advantage of Intel MPI for parallel calculations. For details see:  &lt;br /&gt;
* [[Batch_Jobs_-_bwUniCluster_Features|Batch Jobs Features]]&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. Wildcards are allowed in the list of fields; for example, in order to write out all fields starting with &amp;quot;RR_&amp;quot; you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by writing &amp;quot;banana&amp;quot; in the field list. During the run of the solver all valid field names are printed.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific simulation times, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use an OpenFOAM version older than 4.0 (or v1606), the type of the function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case has to be opened manually from within the corresponding ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5175</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=5175"/>
		<updated>2017-11-03T10:18:53Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Wrapper script generation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [http://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [http://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
To speed up the solving process and reduce the chance of errors when running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data in a local folder directly on the node and run the job from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time to decompose and rebuild your cases, as the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the node&#039;s local work-space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel mode with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. Decomposition into segments is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, collecting as many cells as possible per processor and avoiding empty segments or segments with very few cells. If you want the mesh divided differently, for example by specifying the number of segments in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Further methods are described in the OpenFOAM documentation. &lt;br /&gt;
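As an illustrative sketch (the subdomain count and methods are placeholders, not cluster defaults), a minimal &#039;&#039;system/decomposeParDict&#039;&#039; for an 8-processor run using the automatic &amp;quot;scotch&amp;quot; method could look like this:

```text
/*--- system/decomposeParDict (minimal sketch) ---*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}

numberOfSubdomains 8;       // must match the number of processors requested

method             scotch;  // automatic decomposition, no extra coefficients needed

// Alternative: manual decomposition, e.g. 2 x 2 x 2 segments in x, y, z:
// method             hierarchical;
// hierarchicalCoeffs
// {
//     n       (2 2 2);
//     order   xyz;
// }
```

Whichever method is chosen, the total number of segments (here 2 x 2 x 2 = 8) must always equal numberOfSubdomains.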
&amp;lt;br&amp;gt;&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requesting 6000 MByte of physical memory per processor and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar and the &#039;&amp;amp;&amp;amp;&#039; operator from the command above &lt;br /&gt;
# if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. To control which fields are written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. Wildcards are allowed in the list of fields. For example, to write out all fields starting with &amp;quot;RR_&amp;quot;, you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. To get a list of valid field names, put a nonsense entry such as &amp;quot;banana&amp;quot; in the field list; the solver then prints all valid field names during its run.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific simulation times, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use OpenFOAM before version 4.0 or 1606, the type of function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case has to be opened manually from within the respective ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All Files (*)&#039; and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
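Condensed into one sequence (the module version and the bracketed names are placeholders; adjust them to your case and to the modules available on the cluster), the steps above look like:

```text
$ module load cae/paraview/4.3.1      # 1. load a ParaView module
$ cd <case_folder_path>               # 2. change into the OpenFOAM case folder
$ touch <case_name>.openfoam          #    create the dummy file, named after the case folder
$ paraview                            # 3. start ParaView (requires a running X server)
```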
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=4822</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=4822"/>
		<updated>2017-04-11T15:03:57Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Parallel run with OpenFOAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [http://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.openfoam.org/ OpenFOAM Homepage] &amp;amp;#124; [http://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
To speed up the solving process and reduce the chance of errors when running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data in a local folder directly on the node and run the job from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time to decompose and rebuild your cases, as the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the node&#039;s local work-space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel mode with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. Decomposition into segments is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, collecting as many cells as possible per processor and avoiding empty segments or segments with very few cells. If you want the mesh divided differently, for example by specifying the number of segments in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Further methods are described in the OpenFOAM documentation. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requesting 6000 MByte of physical memory per processor and a total wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar and the &#039;&amp;amp;&amp;amp;&#039; operator from the command above &lt;br /&gt;
# if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. To control which fields are written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. Wildcards are allowed in the list of fields. For example, to write out all fields starting with &amp;quot;RR_&amp;quot;, you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. To get a list of valid field names, put a nonsense entry such as &amp;quot;banana&amp;quot; in the field list; the solver then prints all valid field names during its run.&lt;br /&gt;
The output time can be changed too. Instead of writing at specific simulation times, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use OpenFOAM before version 4.0 or 1606, the type of function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case has to be opened manually from within the respective ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All Files (*)&#039; and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=4821</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=4821"/>
		<updated>2017-04-11T15:03:09Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Parallel run with OpenFOAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [http://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.openfoam.org/ OpenFOAM Homepage] &amp;amp;#124; [http://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To check which OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
To speed up the solving process and reduce the chance of errors when running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data directly in the node&#039;s local folder and run from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time to decompose and rebuild your case, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the node&#039;s local workspace. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors. A built-in mechanism keeps the calculations consistent, so you do not lose data or generate wrong results. The decomposition and segment-building process is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into, and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning as many cells as possible to each processor and avoiding empty segments or segments with few cells. If you want your mesh divided differently, for example by specifying the number of cuts in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Other methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
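To illustrate, a minimal decomposeParDict might look like the following (a sketch, not copied from the cluster installation; numberOfSubdomains must match the number of processors your job requests):

```text
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}

numberOfSubdomains 8;   // must equal the number of processors, e.g. ppn=8

method          scotch; // automatic partitioning

// Alternative: cut the domain explicitly along x, y and z
// method          simple;
// simpleCoeffs
// {
//     n           (2 2 2); // 2 x 2 x 2 = 8 segments
//     delta       0.001;
// }
```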
&amp;lt;br&amp;gt;&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module loads automatically the necessary &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module for parallel run, do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another version of mpi, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job-script to submit a batch job called &#039;&#039;job_openfoam.sh&#039;&#039; that runs &#039;&#039;icoFoam&#039;&#039; solver with OpenFoam version 2.4.0, on 8 processors, requiring 6000 MByte of total physical memory per processor and total wall clock time of 6 hours looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar and the &#039;&amp;amp;&amp;amp;&#039; operator from the command above &lt;br /&gt;
# if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can define multiple function objects in order to write different subsets of fields at different times. You can also use wildcards in the list of fields. For example, to write out all fields starting with &amp;quot;RR_&amp;quot; you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. To get a list of valid field names, write &amp;quot;banana&amp;quot; in the field list; during the solver run, all valid field names are printed.&lt;br /&gt;
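Such a wildcard is a regular expression. As a quick sanity check outside OpenFOAM, you can preview which names a pattern would match with grep (a sketch using made-up field names; the &quot;banana&quot; trick above prints the real list for your case):

```shell
# Made-up field names for illustration only
printf '%s\n' RR_H2O RR_CO2 T U rho | grep -E '^RR_.*$'
# prints:
# RR_H2O
# RR_CO2
```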
The output time can be changed too. Instead of writing at specific times in the simulation, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use an OpenFOAM version older than 4.0 (or 1606), the type of the function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM cases with ParaView, you have to open them manually from within the appropriate ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires an X server to run.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=4820</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=4820"/>
		<updated>2017-04-11T13:44:27Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* OpenFOAM data handling on bwUniCluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [http://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [http://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To check which OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Parallel run with OpenFOAM  =&lt;br /&gt;
To speed up the solving process and reduce the chance of errors when running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data directly in the node&#039;s local folder and run from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time to decompose and rebuild your case, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the node&#039;s local workspace. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with the other processors. A built-in mechanism keeps the calculations consistent, so you do not lose data or generate wrong results. The decomposition and segment-building process is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into, and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning as many cells as possible to each processor and avoiding empty segments or segments with few cells. If you want your mesh divided differently, for example by specifying the number of cuts in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Other methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
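To illustrate, a minimal decomposeParDict might look like the following (a sketch, not copied from the cluster installation; numberOfSubdomains must match the number of processors your job requests):

```text
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}

numberOfSubdomains 8;   // must equal the number of processors, e.g. ppn=8

method          scotch; // automatic partitioning

// Alternative: cut the domain explicitly along x, y and z
// method          simple;
// simpleCoeffs
// {
//     n           (2 2 2); // 2 x 2 x 2 = 8 segments
//     delta       0.001;
// }
```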
&amp;lt;br&amp;gt;&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module loads automatically the necessary &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module for parallel run, do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another version of mpi, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job-script to submit a batch job called &#039;&#039;job_openfoam.sh&#039;&#039; that runs &#039;&#039;icoFoam&#039;&#039; solver with OpenFoam version 2.4.0, on 8 processors, requiring 6000 MByte of total physical memory per processor and total wall clock time of 6 hours looks like: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar and the &#039;&amp;amp;&amp;amp;&#039; operator from the command above &lt;br /&gt;
# if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like this: At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Then, additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object, you can control the output of specific fields independent of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can define multiple function objects in order to write different subsets of fields at different times. You can also use wildcards in the list of fields. For example, to write out all fields starting with &amp;quot;RR_&amp;quot; you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. To get a list of valid field names, write &amp;quot;banana&amp;quot; in the field list; during the solver run, all valid field names are printed.&lt;br /&gt;
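Such a wildcard is a regular expression. As a quick sanity check outside OpenFOAM, you can preview which names a pattern would match with grep (a sketch using made-up field names; the &quot;banana&quot; trick above prints the real list for your case):

```shell
# Made-up field names for illustration only
printf '%s\n' RR_H2O RR_CO2 T U rho | grep -E '^RR_.*$'
# prints:
# RR_H2O
# RR_CO2
```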
The output time can be changed too. Instead of writing at specific times in the simulation, you can also write after a certain number of time steps or depending on the wall clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use an OpenFOAM version older than 4.0 (or 1606), the type of the function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use OpenFOAM before version 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and exchange the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM cases with ParaView, you have to open them manually from within the appropriate ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires an X server to run.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM, and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=4819</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=4819"/>
		<updated>2017-04-11T11:37:16Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Improving parallel run performance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]] &amp;amp;#124; [[BwForCluster_Chemistry]] &amp;amp;#124; bwGRiD-T&amp;amp;uuml;bingen&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [http://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [http://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=600&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM data handling on bwUniCluster   =&lt;br /&gt;
OpenFOAM cases may contain a huge amount of data. Therefore, it is recommended to run jobs in the $WORK folder or in a workspace, as described in [http://www.bwhpc-c5.de/wiki/index.php/BwUniCluster_Hardware_and_Architecture#File_Systems File Systems].&lt;br /&gt;
&lt;br /&gt;
Self-compiled solvers and applications, on the other hand, should be installed in the $HOME directory: this folder is backed up automatically on a regular basis, so your data is preserved and no future recompilation is required. &lt;br /&gt;
&lt;br /&gt;
Don&#039;t forget to load the OpenFOAM module before compiling your solver, as it automatically loads the compiler and MPI versions configured for that release.&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and data exchanged between all processors. A built-in mechanism couples the calculations correctly, so you do not lose data or generate wrong results. Decomposition into segments is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning as many cells as possible to each processor while avoiding empty segments or segments with only a few cells. If you want the mesh divided in another way, for example by specifying the number of cuts in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; method. Further methods exist and are covered in the OpenFOAM documentation. &lt;br /&gt;
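For illustration, a minimal &#039;&#039;system/decomposeParDict&#039;&#039; for an 8-way decomposition might look like this (a sketch of the standard dictionary syntax; the subdomain counts are illustrative):&lt;br /&gt;

```
// system/decomposeParDict -- minimal sketch for an 8-way decomposition
numberOfSubdomains 8;        // must match the number of processors requested

method          scotch;      // automatic decomposition

// alternative: manual decomposition along the coordinate directions
// method          simple;
// simpleCoeffs
// {
//     n           (4 2 1);  // 4 x 2 x 1 = 8 segments in x, y and z
//     delta       0.001;
// }
```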
&amp;lt;br&amp;gt;&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that submits a batch job running the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requiring 6000 MByte of physical memory per processor and a total wall-clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar and the &#039;&amp;amp;&amp;amp;&#039; operator from the command above &lt;br /&gt;
# if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Using I/O and reducing the amount of data and files =&lt;br /&gt;
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called &amp;quot;writeObjects&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
An example controlDict file may look like the one below. At the top of the file (entry &amp;quot;writeControl&amp;quot;) you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Additionally, at the bottom of the controlDict in the &amp;quot;functions&amp;quot; block, you can add a function object of type &amp;quot;writeObjects&amp;quot;. With this function object you can control the output of specific fields independently of the entry at the top of the file: &lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
/*--------------------------------*- C++ -*----------------------------------*\&lt;br /&gt;
| =========                 |                                                 |&lt;br /&gt;
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |&lt;br /&gt;
|  \\    /   O peration     | Version:  4.1.x                                 |&lt;br /&gt;
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |&lt;br /&gt;
|    \\/     M anipulation  |                                                 |&lt;br /&gt;
\*---------------------------------------------------------------------------*/&lt;br /&gt;
FoamFile&lt;br /&gt;
{&lt;br /&gt;
    version     2.0;&lt;br /&gt;
    format      ascii;&lt;br /&gt;
    class       dictionary;&lt;br /&gt;
    location    &amp;quot;system&amp;quot;;&lt;br /&gt;
    object      controlDict;&lt;br /&gt;
}&lt;br /&gt;
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //&lt;br /&gt;
&lt;br /&gt;
startFrom       latestTime;&lt;br /&gt;
startTime       0;&lt;br /&gt;
stopAt  	endTime;&lt;br /&gt;
endTime         1e2;&lt;br /&gt;
deltaT          1e-5;&lt;br /&gt;
&lt;br /&gt;
writeControl    clockTime;&lt;br /&gt;
writeInterval   43200; // write ALL fields necessary to restart your simulation &lt;br /&gt;
                       // every 43200 wall-clock seconds = 12 hours of real time&lt;br /&gt;
&lt;br /&gt;
purgeWrite      0;&lt;br /&gt;
writeFormat     binary;&lt;br /&gt;
writePrecision  10;&lt;br /&gt;
writeCompression off;&lt;br /&gt;
timeFormat      general;&lt;br /&gt;
timePrecision   10;&lt;br /&gt;
runTimeModifiable false;&lt;br /&gt;
&lt;br /&gt;
functions&lt;br /&gt;
{&lt;br /&gt;
    writeFields // name of the function object&lt;br /&gt;
    {&lt;br /&gt;
        type writeObjects;&lt;br /&gt;
        libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; );&lt;br /&gt;
&lt;br /&gt;
        objects&lt;br /&gt;
        (&lt;br /&gt;
	    T U rho // list of fields/variables to be written&lt;br /&gt;
        );&lt;br /&gt;
&lt;br /&gt;
        // E.g. write every 1e-5 seconds of simulation time only the specified fields&lt;br /&gt;
        writeControl runTime;&lt;br /&gt;
        writeInterval 1e-5; // write every 1e-5 seconds&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also define multiple function objects in order to write different subsets of fields at different times. Wildcards are allowed in the list of fields. For example, to write out all fields whose names start with &amp;quot;RR_&amp;quot;, you can add&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;RR_.*&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to the list of objects. You can get a list of valid field names by putting a dummy entry such as &amp;quot;banana&amp;quot; in the field list: during the solver run, all valid field names are then printed.&lt;br /&gt;
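Putting the wildcard to use, the objects list of such a function object might look like this (a sketch; the explicit field names are illustrative):

```
objects
(
    U p          // explicit field names
    "RR_.*"      // all fields whose names start with RR_
);
```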
The output time can be changed, too. Instead of writing at specific simulation times, you can also write after a certain number of time steps or depending on the wall-clock time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// write every 100th simulation time step&lt;br /&gt;
writeControl timeStep;&lt;br /&gt;
writeInterval 100;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// every 3600 seconds of real wall clock time&lt;br /&gt;
writeControl clockTime;&lt;br /&gt;
writeInterval 3600; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you use an OpenFOAM version older than 4.0 (or v1606), the type of the function object is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
type writeRegisteredObject; // (instead of type writeObjects) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you use an OpenFOAM version older than 3.0, you have to load the library with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functionObjectLibs (&amp;quot;libIOFunctionObjects.so&amp;quot;); // (instead of libs ( &amp;quot;libutilityFunctionObjects.so&amp;quot; )) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and replace the entry &amp;quot;writeControl&amp;quot; with &amp;quot;outputControl&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case has to be opened manually from within a loaded ParaView module. &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
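The dummy-file convention from step 2 can also be scripted. A minimal sketch (the case path is a placeholder used for illustration):&lt;br /&gt;

```shell
# Create the ParaView dummy file, named after the case folder.
# '/tmp/demo_cavity' is a placeholder path used for illustration.
case_dir="/tmp/demo_cavity"
mkdir -p "$case_dir"
case_name=$(basename "$case_dir")      # the case folder's name: demo_cavity
touch "$case_dir/$case_name.openfoam"  # dummy file that ParaView can open
ls "$case_dir"
```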
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=3495</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=3495"/>
		<updated>2015-12-17T13:10:25Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Wrapper script generation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| cae/openfoam&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| [[bwUniCluster]]&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [http://www.openfoam.org/licence.php GNU General Public Licence]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [http://www.openfoam.org/ Openfoam Homepage] &amp;amp;#124; [http://www.openfoam.org/docs/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Description =&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
A list of versions currently available on all bwHPC-C5-Clusters can be obtained from the&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://cis-hpc.uni-konstanz.de/prod.cis/ Cluster Information System CIS]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/big&amp;gt;&lt;br /&gt;
{{#widget:Iframe&lt;br /&gt;
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam&lt;br /&gt;
|width=99%&lt;br /&gt;
|height=450&lt;br /&gt;
|border=0&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Improving parallel run performance  =&lt;br /&gt;
To speed up the parallel solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly on the nodes, in the pre-allocated local work space, and to use it from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall time to decompose and rebuild your cases, as the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the nodes&#039; local work space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and data exchanged between all processors. A built-in mechanism couples the calculations correctly, so you do not lose data or generate wrong results. Decomposition into segments is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning as many cells as possible to each processor while avoiding empty segments or segments with only a few cells. If you want the mesh divided in another way, for example by specifying the number of cuts in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; method. Further methods exist and are covered in the OpenFOAM documentation. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that submits a batch job running the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requiring 6000 MByte of physical memory per processor and a total wall-clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, &lt;br /&gt;
# in the header &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar and the &#039;&amp;amp;&amp;amp;&#039; operator from the command above &lt;br /&gt;
# if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case has to be opened manually from within a loaded ParaView module. &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039;, and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2638</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2638"/>
		<updated>2015-10-26T16:05:25Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Wrapper script generation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly on the nodes, in the pre-allocated local work space, and to use it from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall time to decompose and rebuild your cases, as the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the nodes&#039; local work space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and data exchanged between all processors. A built-in mechanism couples the calculations correctly, so you do not lose data or generate wrong results. Decomposition into segments is handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning as many cells as possible to each processor while avoiding empty segments or segments with only a few cells. If you want the mesh divided in another way, for example by specifying the number of cuts in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; method. Further methods exist and are covered in the OpenFOAM documentation. &lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM 2.4.0 on 8 processors, requesting 6000 MByte of physical memory per processor and a wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# start the solver in parallel; the solver name is taken from the &amp;quot;EXECUTABLE&amp;quot; &lt;br /&gt;
# variable in the header. Do not use &amp;quot;exec&amp;quot; here, or reconstructPar below never runs. &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case must be opened manually after loading a ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should match the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039; and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2570</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2570"/>
		<updated>2015-09-09T15:16:13Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* OpenFOAM and ParaView */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the solving process and reduce the chance of errors when running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to store the decomposed data directly in the nodes&#039; pre-allocated local work-space and use it from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time to decompose and rebuild your cases, since the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt; folders are moved to and from the nodes&#039; local work-space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any OpenFOAM job in parallel, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, to run a case on 8 processors, you first decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and data exchanged between all processors. A built-in mechanism keeps the partial calculations consistent, so you don&#039;t lose data or generate wrong results. Decomposition into segments is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh so that each processor receives roughly the same number of cells, avoiding empty segments or segments with very few cells. If you want the mesh divided differently, for example by specifying how many segments to cut in the x, y or z direction, use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Further methods are described in the OpenFOAM documentation. &lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM 2.4.0 on 8 processors, requesting 6000 MByte of physical memory per processor and a wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# start the solver in parallel; the solver name is taken from the &amp;quot;EXECUTABLE&amp;quot; variable in the header. &lt;br /&gt;
# Do not use &amp;quot;exec&amp;quot; here, or reconstructPar below never runs. &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView on bwUniCluster=&lt;br /&gt;
&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case must be opened manually after loading a ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should match the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039; and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2569</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2569"/>
		<updated>2015-09-09T15:15:59Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* OpenFOAM and ParaView */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the solving process and reduce the chance of errors when running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to store the decomposed data directly in the nodes&#039; pre-allocated local work-space and use it from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time to decompose and rebuild your cases, since the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt; folders are moved to and from the nodes&#039; local work-space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any OpenFOAM job in parallel, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, to run a case on 8 processors, you first decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and data exchanged between all processors. A built-in mechanism keeps the partial calculations consistent, so you don&#039;t lose data or generate wrong results. Decomposition into segments is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh so that each processor receives roughly the same number of cells, avoiding empty segments or segments with very few cells. If you want the mesh divided differently, for example by specifying how many segments to cut in the x, y or z direction, use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Further methods are described in the OpenFOAM documentation. &lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM 2.4.0 on 8 processors, requesting 6000 MByte of physical memory per processor and a wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# start the solver in parallel; the solver name is taken from the &amp;quot;EXECUTABLE&amp;quot; variable in the header. &lt;br /&gt;
# Do not use &amp;quot;exec&amp;quot; here, or reconstructPar below never runs. &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView =&lt;br /&gt;
&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case must be opened manually after loading a ParaView module.  &lt;br /&gt;
&lt;br /&gt;
1. Load the ParaView module. For example: &lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
2. Create a dummy &#039;*.openfoam&#039; file in the OpenFOAM case folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ cd &amp;lt;case_folder_path&amp;gt;&lt;br /&gt;
$ touch &amp;lt;case_name&amp;gt;.openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; the name of the dummy file should match the name of the OpenFOAM case folder, with the &#039;.openfoam&#039; extension.&lt;br /&gt;
&lt;br /&gt;
3. Open ParaView:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ paraview&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;NOTICE:&#039;&#039;&#039; ParaView is visualization software and requires a running X server.&lt;br /&gt;
&lt;br /&gt;
4. In ParaView go to &#039;File&#039; -&amp;gt; &#039;Open&#039;, or press Ctrl+O. Choose to show &#039;All files (*)&#039; and open your &amp;lt;case_name&amp;gt;.openfoam file. In the pop-up window select OpenFOAM and press &#039;OK&#039;.&lt;br /&gt;
&lt;br /&gt;
5. That&#039;s it! Enjoy ParaView and OpenFOAM.&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2568</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2568"/>
		<updated>2015-09-09T15:02:21Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* OpenFOAM and ParaView */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the solving process and reduce the chance of errors when running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to store the decomposed data directly in the nodes&#039; pre-allocated local work-space and use it from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time to decompose and rebuild your cases, since the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt; folders are moved to and from the nodes&#039; local work-space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any OpenFOAM job in parallel, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, to run a case on 8 processors, you first decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and data exchanged between all processors. A built-in mechanism keeps the partial calculations consistent, so you don&#039;t lose data or generate wrong results. Decomposition into segments is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh so that each processor receives roughly the same number of cells, avoiding empty segments or segments with very few cells. If you want the mesh divided differently, for example by specifying how many segments to cut in the x, y or z direction, use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Further methods are described in the OpenFOAM documentation. &lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM 2.4.0 on 8 processors, requesting 6000 MByte of physical memory per processor and a wall clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# start the solver in parallel; the solver name is taken from the &amp;quot;EXECUTABLE&amp;quot; variable in the header. &lt;br /&gt;
# Do not use &amp;quot;exec&amp;quot; here, or reconstructPar below never runs. &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView =&lt;br /&gt;
&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the case must be opened manually after loading a ParaView module.  &lt;br /&gt;
&lt;br /&gt;
First load the ParaView module. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/paraview/4.3.1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2567</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2567"/>
		<updated>2015-09-09T15:00:00Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* OpenFOAM and ParaView */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly in the pre-allocated workspace on the nodes and use it from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall time to decompose and rebuild your case, since the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt; folders are moved into and out of the local workspace of the nodes. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into segments, one for each processor (or thread) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, each processor being responsible for one segment of the mesh and exchanging data with the other processors. There is, of course, a mechanism that couples the calculations correctly, so you do not lose data or generate wrong results. Decomposition into segments is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you can specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, collecting as many cells as possible per processor and avoiding empty segments or segments with only a few cells. If you want the mesh divided differently, for example by specifying the number of segments in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Other methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&lt;br /&gt;
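As a sketch, the key entries of &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; for an 8-processor run with the &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot; method look like this (illustrative only; the complete file also needs the standard FoamFile header, see the OpenFOAM User Guide):&lt;br /&gt;
&amp;lt;pre&amp;gt;numberOfSubdomains 8;&lt;br /&gt;
method             scotch;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;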
== Wrapper script generation == &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script to submit a batch job called &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requesting 6000 MB of physical memory per processor and a total wall-clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, in the header &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView =&lt;br /&gt;
&lt;br /&gt;
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, you have to load the specific ParaView module manually.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2566</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2566"/>
		<updated>2015-09-09T14:55:04Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly in the pre-allocated workspace on the nodes and use it from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall time to decompose and rebuild your case, since the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt; folders are moved into and out of the local workspace of the nodes. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into segments, one for each processor (or thread) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, each processor being responsible for one segment of the mesh and exchanging data with the other processors. There is, of course, a mechanism that couples the calculations correctly, so you do not lose data or generate wrong results. Decomposition into segments is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you can specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, collecting as many cells as possible per processor and avoiding empty segments or segments with only a few cells. If you want the mesh divided differently, for example by specifying the number of segments in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Other methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&lt;br /&gt;
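As a sketch, the key entries of &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; for an 8-processor run with the &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot; method look like this (illustrative only; the complete file also needs the standard FoamFile header, see the OpenFOAM User Guide):&lt;br /&gt;
&amp;lt;pre&amp;gt;numberOfSubdomains 8;&lt;br /&gt;
method             scotch;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;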
== Wrapper script generation == &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script to submit a batch job called &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requesting 6000 MB of physical memory per processor and a total wall-clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, in the header &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= OpenFOAM and ParaView =&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2486</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2486"/>
		<updated>2015-08-18T14:04:05Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Wrapper script generation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly in the pre-allocated workspace on the nodes and use it from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall time to decompose and rebuild your case, since the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt; folders are moved into and out of the local workspace of the nodes. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into segments, one for each processor (or thread) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, each processor being responsible for one segment of the mesh and exchanging data with the other processors. There is, of course, a mechanism that couples the calculations correctly, so you do not lose data or generate wrong results. Decomposition into segments is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you can specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, collecting as many cells as possible per processor and avoiding empty segments or segments with only a few cells. If you want the mesh divided differently, for example by specifying the number of segments in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Other methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&lt;br /&gt;
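As a sketch, the key entries of &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; for an 8-processor run with the &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot; method look like this (illustrative only; the complete file also needs the standard FoamFile header, see the OpenFOAM User Guide):&lt;br /&gt;
&amp;lt;pre&amp;gt;numberOfSubdomains 8;&lt;br /&gt;
method             scotch;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;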
== Wrapper script generation == &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script to submit a batch job called &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requesting 6000 MB of physical memory per processor and a total wall-clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v FOAM_MODULE=&amp;quot;cae/openfoam/2.4.0&amp;quot;&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load ${FOAM_MODULE}&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# starting the solver in parallel. Name of the solver is given in the &amp;quot;EXECUTABLE&amp;quot; variable, in the header &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2485</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2485"/>
		<updated>2015-08-18T14:01:33Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Wrapper script generation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly in the pre-allocated workspace on the nodes and use it from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall time to decompose and rebuild your case, since the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt; folders are moved into and out of the local workspace of the nodes. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt; in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into segments, one for each processor (or thread) you intend to use. For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, each processor being responsible for one segment of the mesh and exchanging data with the other processors. There is, of course, a mechanism that couples the calculations correctly, so you do not lose data or generate wrong results. Decomposition into segments is handled by the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt; utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you can specify how many &amp;quot;segments&amp;quot; the geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, collecting as many cells as possible per processor and avoiding empty segments or segments with only a few cells. If you want the mesh divided differently, for example by specifying the number of segments in the x, y or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; methods. Other methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&lt;br /&gt;
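As a sketch, the key entries of &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; for an 8-processor run with the &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot; method look like this (illustrative only; the complete file also needs the standard FoamFile header, see the OpenFOAM User Guide):&lt;br /&gt;
&amp;lt;pre&amp;gt;numberOfSubdomains 8;&lt;br /&gt;
method             scotch;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;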
== Wrapper script generation == &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
&lt;br /&gt;
A job script &#039;&#039;job_openfoam.sh&#039;&#039; that runs the &#039;&#039;icoFoam&#039;&#039; solver with OpenFOAM version 2.4.0 on 8 processors, requesting 6000 MB of physical memory per processor and a total wall-clock time of 6 hours, looks like this: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--b)--&amp;gt; &lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #d0cfcc; background:#f2f7ff;border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#MSUB -l nodes=1:ppn=8&lt;br /&gt;
#MSUB -l walltime=06:00:00&lt;br /&gt;
#MSUB -l pmem=6000mb&lt;br /&gt;
#MSUB -v MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by core -report-bindings&amp;quot;&lt;br /&gt;
#MSUB -v EXECUTABLE=&amp;quot;icoFoam&amp;quot;&lt;br /&gt;
#MSUB -N test_icoFoam&lt;br /&gt;
#MSUB -o icoFoam.log&lt;br /&gt;
#MSUB -j oe&lt;br /&gt;
&lt;br /&gt;
startexe=&amp;quot;mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9&lt;br /&gt;
module load cae/openfoam/2.4.0&lt;br /&gt;
foamInit&lt;br /&gt;
&lt;br /&gt;
# remove decomposePar if you already decomposed your case beforehand &lt;br /&gt;
decomposePar &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# start the solver in parallel; the solver name is taken from the &amp;quot;EXECUTABLE&amp;quot; variable set in the header &lt;br /&gt;
echo $startexe&lt;br /&gt;
$startexe &amp;amp;&amp;amp;&lt;br /&gt;
&lt;br /&gt;
# remove reconstructPar if you would like to reconstruct the case later&lt;br /&gt;
reconstructPar&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
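Assuming the MOAB batch system (as the &#039;&#039;#MSUB&#039;&#039; directives above suggest), the script could then be submitted as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ msub job_openfoam.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;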
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2484</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2484"/>
		<updated>2015-08-18T13:38:57Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* General information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly in the pre-allocated workspace on the nodes and use it from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time for decomposing and rebuilding your case, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the nodes&#039; local workspace. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, to run a case on 8 processors, you first decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with all other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. Decomposition and segment building are handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you can specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning roughly the same number of cells to each processor and avoiding empty segments or segments with only a few cells. If you want the mesh divided in another way, for example by specifying the number of cuts in the x, y, or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; method. Further methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&lt;br /&gt;
== Wrapper script generation == &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; module automatically loads the &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openmpi&amp;lt;/span&amp;gt; module required for parallel runs. Do &#039;&#039;&#039;NOT&#039;&#039;&#039; load another MPI version, as it may conflict with the loaded &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;openfoam&amp;lt;/span&amp;gt; version. &lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2483</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2483"/>
		<updated>2015-08-18T13:26:41Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* General information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly in the pre-allocated workspace on the nodes and use it from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time for decomposing and rebuilding your case, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the nodes&#039; local workspace. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, to run a case on 8 processors, you first decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with all other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. Decomposition and segment building are handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you can specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning roughly the same number of cells to each processor and avoiding empty segments or segments with only a few cells. If you want the mesh divided in another way, for example by specifying the number of cuts in the x, y, or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; method. Further methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2482</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2482"/>
		<updated>2015-08-18T13:25:49Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* General information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly in the pre-allocated workspace on the nodes and use it from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time for decomposing and rebuilding your case, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the nodes&#039; local workspace. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, to run a case on 8 processors, you first decompose the mesh into 8 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with all other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. Decomposition and segment building are handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you can specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning roughly the same number of cells to each processor and avoiding empty segments or segments with only a few cells. If you want the mesh divided in another way, for example by specifying the number of cuts in the x, y, or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; method. Further methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2481</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2481"/>
		<updated>2015-08-18T13:22:35Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* General information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly in the pre-allocated workspace on the nodes and use it from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time for decomposing and rebuilding your case, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the nodes&#039; local workspace. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, to run a case on 10 processors, you first decompose the mesh into 10 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with all other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. Decomposition and segment building are handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you can specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning roughly the same number of cells to each processor and avoiding empty segments or segments with only a few cells. If you want the mesh divided in another way, for example by specifying the number of cuts in the x, y, or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; method. Further methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2480</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2480"/>
		<updated>2015-08-18T12:37:07Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Building an OpenFOAM batch file for parallel processing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solution process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case has been decomposed, it is recommended to store the decomposed data directly in the pre-allocated workspace on the nodes and use it from there. When the calculations are finished, the data is moved back to the case folder and reconstructed. This improves overall performance, provided you allocate enough wall-time for decomposing and rebuilding your case, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the nodes&#039; local workspace. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
== General information == &lt;br /&gt;
&lt;br /&gt;
To run any job in parallel with OpenFOAM, the geometry domain must first be decomposed into as many segments as the number of processors (or threads) you intend to use. For example, if you have a case whose mesh consists of 100 000 cells and want to run your solver in parallel on 10 processors, you first decompose the mesh into 10 segments. Then you start the solver in &#039;&#039;parallel&#039;&#039; mode, letting &#039;&#039;OpenFOAM&#039;&#039; (through &#039;&#039;MPI&#039;&#039;) run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and exchanging data with all other processors. There is, of course, a mechanism that properly couples the calculations, so you don&#039;t lose data or generate wrong results. Decomposition and segment building are handled by the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;decomposePar&amp;lt;/span&amp;gt;utility. In &amp;quot;&#039;&#039;system/decomposeParDict&#039;&#039;&amp;quot; you can specify how many &amp;quot;segments&amp;quot; your geometry domain should be divided into and which decomposition method to use. The automatic method is &amp;quot;&#039;&#039;scotch&#039;&#039;&amp;quot;: it partitions the mesh, assigning roughly the same number of cells to each processor and avoiding empty segments or segments with only a few cells. If you want the mesh divided in another way, for example by specifying the number of cuts in the x, y, or z direction, you can use the &amp;quot;simple&amp;quot; or &amp;quot;hierarchical&amp;quot; method. Further methods exist as well; see the OpenFOAM documentation for details. &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2479</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2479"/>
		<updated>2015-08-18T11:52:51Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Submitting an OpenFOAM batch job */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solving process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data directly on the node in the pre-allocated work-space and use it from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided that you allocate enough wall-time to decompose and rebuild your cases, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the node&#039;s local work-space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Building an OpenFOAM batch file for parallel processing =&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2478</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2478"/>
		<updated>2015-08-18T11:51:58Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Improving parallel run performance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solving process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data directly on the node in the pre-allocated work-space and use it from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided that you allocate enough wall-time to decompose and rebuild your cases, since the&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;processor*&amp;lt;/span&amp;gt;folders are moved into and out of the node&#039;s local work-space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Submitting an OpenFOAM batch job =&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2477</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2477"/>
		<updated>2015-08-18T11:50:04Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Parallel Computing with OpenFOAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Improving parallel run performance  ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solving process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data directly on the node in the pre-allocated work-space and use it from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves overall performance, provided that you allocate enough wall-time to decompose and rebuild your cases, since the &amp;quot;&#039;&#039;processor*&#039;&#039;&amp;quot; folders are moved into and out of the node&#039;s local work-space. For this procedure, use the following commands for decomposition and reconstruction of the geometry domain:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Submitting an OpenFOAM batch job =&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2476</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2476"/>
		<updated>2015-08-18T11:38:03Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Parallel Computing with OpenFOAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Parallel Computing with OpenFOAM ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solving process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data in a pre-allocated work-space and run the parallel solver from there. When the parallel run is over, the data must be copied back to the case folder and reconstructed. Therefore, for decomposition and reconstruction of the case, please use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Submitting an OpenFOAM batch job =&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2475</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2475"/>
		<updated>2015-08-18T11:32:50Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* Parallel Computing with OpenFOAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Parallel Computing with OpenFOAM ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solving process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data in a pre-allocated work-space and run the parallel solver from there. When the parallel run is over, the data must be copied back to the case folder and reconstructed. Therefore, for decomposition and reconstruction of the case, please use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Parallel Computing with OpenFOAM ==&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2474</id>
		<title>BwUniCluster3.0/Software/OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/OpenFoam&amp;diff=2474"/>
		<updated>2015-08-18T11:05:26Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: /* OpenFOAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= OpenFOAM =&lt;br /&gt;
[[File:OpenfoamLogo.png]]&lt;br /&gt;
&lt;br /&gt;
 Module:        cae/openfoam/&#039;version&#039;&lt;br /&gt;
 Target System: Red-Hat-Enterprise-Linux-Server-release-6.4-Santiago &lt;br /&gt;
 Main Location: /opt/bwhpc/common&lt;br /&gt;
 Priority:      mandatory&lt;br /&gt;
 License:       GPL&lt;br /&gt;
 Homepage:      http://www.openfoam.org/&lt;br /&gt;
&lt;br /&gt;
== Accessing and basic usage ==&lt;br /&gt;
The OpenFOAM®  (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. &lt;br /&gt;
&lt;br /&gt;
In order to check what OpenFOAM versions are installed on the system, run the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module avail cae/openfoam&amp;lt;/pre&amp;gt;&lt;br /&gt;
Typically, several OpenFOAM versions might be available.&lt;br /&gt;
&lt;br /&gt;
Any available version can be accessed by loading the appropriate module:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ module load cae/openfoam/&amp;lt;version&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
with &amp;lt;version&amp;gt; specifying the desired version. &lt;br /&gt;
&lt;br /&gt;
To activate the OpenFOAM applications, after the module is loaded, run the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ source $FOAM_INIT&amp;lt;/pre&amp;gt;&lt;br /&gt;
or simply:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ foamInit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Parallel Computing with OpenFOAM ==&lt;br /&gt;
&lt;br /&gt;
To speed up the parallel solving process and reduce the probability of errors while running an OpenFOAM job in parallel (&#039;&#039;on a single node&#039;&#039;), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data in a pre-allocated work-space and run the parallel solver from there. When the parallel run is over, the data must be copied back to the case folder and reconstructed. Therefore, for decomposition and reconstruction of the case, please use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParHPC&lt;br /&gt;
$ reconstructParHPC&lt;br /&gt;
$ reconstructParMeshHPC&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ reconstructPar&lt;br /&gt;
$ reconstructParMesh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if you want to run&amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;snappyHexMesh&amp;lt;/span&amp;gt;in parallel, you may use the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposeParMeshHPC&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMeshHPC -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
instead of:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ decomposePar&lt;br /&gt;
$ mpiexec snappyHexMesh -overwrite&lt;br /&gt;
$ reconstructParMesh -constant&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Engineering software]][[Category:bwUniCluster]]&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:OpenfoamLogo.png&amp;diff=787</id>
		<title>File:OpenfoamLogo.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:OpenfoamLogo.png&amp;diff=787"/>
		<updated>2014-03-04T09:21:50Z</updated>

		<summary type="html">&lt;p&gt;A Saramet: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>A Saramet</name></author>
	</entry>
</feed>