= OpenFOAM =
[[File:OpenfoamLogo.png]]
{| class="wikitable"
|-
! Description !! Content
|-
| module load
| cae/openfoam
|-
| Availability
| [[bwUniCluster]]
|-
| License
| [http://www.openfoam.org/licence.php GNU General Public Licence]
|-
| Citing
| n/a
|-
| Links
| [http://www.openfoam.org/ Openfoam Homepage] | [http://www.openfoam.org/docs/ Documentation]
|-
| Graphical Interface
| No
|}
= Description =

The OpenFOAM® (Open Field Operation and Manipulation) CFD Toolbox is a free, open source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.

= Versions and Availability =

A list of versions currently available on all bwHPC-C5 clusters can be obtained from the Cluster Information System (CIS):

{{#widget:Iframe |url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/openfoam |width=99% |height=450 |border=1 }}

Open the link above with the right mouse button and select "open in a new window" or "open in a new tab".


In order to check which OpenFOAM versions are installed on the system, run the following command:
<pre>$ module avail cae/openfoam</pre>

Typically, several OpenFOAM versions might be available.
Any available version can be accessed by loading the appropriate module:
<pre>$ module load cae/openfoam/<version></pre>
with <version> specifying the desired version.
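For example, to load version 2.4.0 (the version used in the batch script example below; check the module avail output for the versions actually installed):
<pre>$ module load cae/openfoam/2.4.0</pre>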

To activate the OpenFOAM applications after the module is loaded, run the following:
<pre>$ source $FOAM_INIT</pre>
or simply:
<pre>$ foamInit</pre>


= Improving parallel run performance =

To speed up the solution process and to decrease the probability of errors while running an OpenFOAM job in parallel (''on a single node''), some modifications have been introduced. Specifically, after the case decomposition is done, it is recommended to save the decomposed data directly on the nodes in the pre-allocated workspace and to use it from there. When the calculations are over, the data is moved back to the case folder and reconstructed. This improves the overall performance, provided you have allocated enough wall-time to decompose and rebuild your cases, as the <span style="background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080">processor*</span> folders are moved to and from the nodes' local workspace. For this procedure it is necessary to use the following commands for decomposition and reconstruction of the geometry domain:

<pre>$ decomposeParHPC
$ reconstructParHPC
$ reconstructParMeshHPC</pre>
instead of:
<pre>$ decomposePar
$ reconstructPar
$ reconstructParMesh</pre>

For example, if you want to run <span style="background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080">snappyHexMesh</span> in parallel, you may use the following commands:
<pre>$ decomposeParHPC
$ mpiexec snappyHexMesh -overwrite
$ reconstructParMeshHPC -constant</pre>
instead of:
<pre>$ decomposePar
$ mpiexec snappyHexMesh -overwrite
$ reconstructParMesh -constant</pre>


= Building an OpenFOAM batch file for parallel processing =

== General information ==

To run any job in parallel mode with OpenFOAM, it is necessary to decompose the geometry domain into segments, equal in number to the processors (or threads) you intend to use. That means, for example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in ''parallel'' mode, letting ''OpenFOAM'' run the calculations concurrently on these segments, with each processor responsible for one segment of the mesh and sharing data with all other processors in between. There is, of course, a mechanism that properly connects the calculations, so you don't lose data or generate wrong results. The decomposition and segment-building process is handled by the <span style="background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080">decomposePar</span> utility. In "''system/decomposeParDict''" you can specify how many "segments" your geometry domain should be divided into, and which decomposition method to use. The automatic method is "''scotch''". It trims the mesh, collecting as many cells as possible per processor and trying to avoid empty segments or segments with only a few cells. If you want your mesh to be divided in another way, for example by specifying the number of segments it should be cut into along the x, y or z direction, you can use the "simple" or "hierarchical" methods. Further methods exist as well, with more documentation available online. A minimal example dictionary is sketched below.
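As an illustration only (a sketch, not a file shipped with the module), a minimal ''system/decomposeParDict'' for an 8-processor run using the "scotch" method could look as follows; the commented block shows how the "simple" method would cut the domain into 2 x 2 x 2 segments instead:
<source lang="cpp">
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

// number of segments = number of processors requested in the batch job
numberOfSubdomains 8;

// automatic decomposition
method          scotch;

// alternative: manual decomposition, cutting the domain 2 x 2 x 2
//method          simple;
//simpleCoeffs
//{
//    n       (2 2 2);    // number of segments in x, y and z direction
//    delta   0.001;      // cell skew factor
//}
</source>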

== Wrapper script generation ==

'''Attention:''' the <span style="background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080">openfoam</span> module automatically loads the <span style="background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080">openmpi</span> module required for parallel runs. Do '''NOT''' load another MPI version, as it may conflict with the loaded <span style="background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080">openfoam</span> version.

A job script job_openfoam.sh that submits a batch job running the icoFoam solver with OpenFOAM version 2.4.0 on 8 processors, requesting 6000 MB of physical memory per process and a total wall clock time of 6 hours, looks like:

<source lang="bash">
#!/bin/bash
#MSUB -l nodes=1:ppn=8
#MSUB -l walltime=06:00:00
#MSUB -l pmem=6000mb
#MSUB -v FOAM_MODULE="cae/openfoam/2.4.0"
#MSUB -v MPIRUN_OPTIONS="--bind-to core --map-by core --report-bindings"
#MSUB -v EXECUTABLE="icoFoam"
#MSUB -N test_icoFoam
#MSUB -o icoFoam.log
#MSUB -j oe

startexe="mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel"

# openfoam-2.4.0 automatically loads mpi/openmpi/1.8-gnu-4.9
module load ${FOAM_MODULE}
foamInit

# remove decomposePar if you already decomposed your case beforehand
decomposePar &&

# start the solver in parallel; the name of the solver is given in the
# "EXECUTABLE" variable in the header
echo $startexe
$startexe &&

# remove reconstructPar if you would like to reconstruct the case later
reconstructPar
</source>
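Assuming the MOAB batch system, which the #MSUB directives above are written for, the finished script is then submitted with msub:
<pre>$ msub job_openfoam.sh</pre>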


= OpenFOAM and ParaView on bwUniCluster =

ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM results with ParaView, the cases have to be opened manually from a separately loaded ParaView module.

1. Load the ParaView module. For example:
<pre>$ module load cae/paraview/4.3.1</pre>

2. Create a dummy '*.openfoam' file in the OpenFOAM case folder:
<pre>$ cd <case_folder_path>
$ touch <case_name>.openfoam</pre>

NOTICE: the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the '.openfoam' extension.
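For instance, for a case located in a folder named ''cavity'' (a hypothetical case name, used here only for illustration):
<pre>$ cd cavity              # hypothetical case folder
$ touch cavity.openfoam</pre>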

3. Open ParaView:
<pre>$ paraview</pre>

NOTICE: ParaView is visualization software which requires an X server to run.

4. In ParaView go to 'File' -> 'Open', or press Ctrl+O. Choose to show 'All files (*)', and open your <case_name>.openfoam file. In the pop-up window select 'OpenFOAM' and press 'OK'.

5. That's it! Enjoy ParaView and OpenFOAM.
----
[[Category:Engineering software]][[Category:bwUniCluster]]