BwUniCluster2.0/Software/OpenFoam
The main documentation is available via <code>module help cae/openfoam</code> on the cluster.
{| width=600px class="wikitable"
|-
! Description !! Content
|-
| module load
| cae/openfoam
|-
| License
| [https://www.openfoam.org/licence.php GNU General Public Licence]
|-
| Citing
| n/a
|-
| Links
| [https://www.openfoam.org/ Homepage] [https://www.openfoam.org/docs/ Documentation]
|-
| Graphical Interface
| No
|}
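To see which OpenFOAM versions are installed and to read the module's own documentation, a minimal sketch using the standard module commands (the exact version names on the cluster may differ) is:
<pre>$ module avail cae/openfoam      # list the installed OpenFOAM modules
$ module help cae/openfoam       # show the cluster-specific documentation</pre>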
= Description =
OpenFOAM (Open-source Field Operation And Manipulation) is a free, open-source CFD software package with an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
= Adding OpenFOAM to Your Environment =
After loading the desired module, activate the OpenFOAM applications by running:
<pre>$ source $FOAM_INIT</pre>
or simply
<pre>$ foamInit</pre>
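Putting the steps together, a minimal sketch looks like this; the version string is a placeholder for one of the modules listed by <code>module avail cae/openfoam</code>:
<pre>$ module load cae/openfoam/<version>   # load one of the installed versions
$ foamInit                             # activate the OpenFOAM applications
$ icoFoam -help                        # quick check that the solvers are now in your PATH</pre>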
= Parallel run with OpenFOAM =
For better performance when running OpenFOAM jobs in parallel on bwUniCluster, it is recommended to keep the decomposed data in local folders on each node.
Therefore you may use the *HPC wrapper scripts, which copy your data to the node-specific folders after running decomposePar and copy it back to the case folder before running reconstructPar.
Don't forget to allocate enough wall time for decomposition and reconstruction of your cases: the data is processed directly on the nodes and may be lost if the job is cancelled before it is copied back into the case folder.
The following commands will do that for you:
<pre>$ decomposeParHPC
$ reconstructParHPC
$ reconstructParMeshHPC</pre>
instead of:
<pre>$ decomposePar
$ reconstructPar
$ reconstructParMesh</pre>
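As an illustration, a complete single-node workflow with the wrapper commands might look like the following sketch; the solver name ''simpleFoam'' is only a placeholder for the solver of your own case:
<pre>$ decomposeParHPC       # decompose and move the processor* folders to the node-local workspace
$ mpirun --bind-to core --map-by core -report-bindings simpleFoam -parallel
$ reconstructParHPC     # copy the data back to the case folder and reconstruct it</pre>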
For example, if you want to run snappyHexMesh in parallel, you may use the following commands:
<pre>$ decomposeParMeshHPC
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel
$ reconstructParMeshHPC -constant</pre>
instead of:
<pre>$ decomposePar
$ mpirun --bind-to core --map-by core -report-bindings snappyHexMesh -overwrite -parallel
$ reconstructParMesh -constant</pre>
For running jobs on multiple nodes, OpenFOAM needs passwordless communication between the nodes in order to copy data into the local folders.
Running ssh-keygen once will let your nodes communicate with each other without a password.
Do this once (if you haven't done it already):
<pre>$ ssh-keygen
$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys</pre>
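As a quick sanity check (a sketch, assuming you are on a login or compute node), an ssh connection to the local host should now succeed without asking for a password:
<pre>$ ssh $(hostname) hostname</pre>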
= Building an OpenFOAM batch file for parallel processing =
== General information ==
Before running OpenFOAM jobs in parallel, it is necessary to decompose the geometry domain into segments equal to the number of processors (or threads) you intend to use.
For example, if you want to run a case on 8 processors, you first have to decompose the mesh into 8 segments. Then you start the solver in ''parallel'' mode, letting OpenFOAM run the calculations concurrently on these segments, with one processor responsible for one segment of the mesh and exchanging data with the other processors.
There is, of course, a mechanism that properly connects the calculations, so you don't lose data or generate wrong results.
The decomposition and segment-building process is handled by the decomposePar utility.
The number of subdomains into which the geometry will be decomposed is specified in "''system/decomposeParDict''", as well as the decomposition method to use.
The automatic decomposition method is "''scotch''". It partitions the mesh, collecting as many cells as possible per processor and trying to avoid empty segments or segments with too few cells. If you want your mesh to be divided in another way, for example by specifying the number of segments it should be cut into in the x, y or z direction, you can use the "simple" or "hierarchical" methods.
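For orientation, a minimal ''system/decomposeParDict'' for 8 subdomains might look like the following sketch; the numbers are examples only, and the hierarchical block is shown commented out as an alternative:
<source lang="text">
numberOfSubdomains  8;

method              scotch;         // automatic decomposition

// Alternative: manual decomposition into 2 x 2 x 2 blocks
// method              hierarchical;
// hierarchicalCoeffs
// {
//     n               (2 2 2);
//     order           xyz;
// }
</source>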
== Wrapper script generation ==
'''Attention:''' the openfoam module automatically loads the openmpi module required for parallel runs. Do '''NOT''' load another MPI version, as it may conflict with the loaded openfoam version.
A job script called ''job_openfoam.sh'' that runs the ''icoFoam'' solver with OpenFOAM version 8 on 80 processors (2 nodes with 40 tasks each) on the ''multiple'' partition, with a total wall clock time of 4 hours, looks like this:
<source lang="bash">
#!/bin/bash
# Allocate nodes
#SBATCH --nodes=2
# Number of tasks per node
#SBATCH --ntasks-per-node=40
# Queue class https://wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues
#SBATCH --partition=multiple
# Maximum job run time
#SBATCH --time=4:00:00
# Give the job a reasonable name
#SBATCH --job-name=openfoam
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=logs-%j.out
# File name for error output
#SBATCH --error=logs-%j.err

# User defined variables
FOAM_VERSION="8"
EXECUTABLE="icoFoam"
MPIRUN_OPTIONS="--bind-to core --map-by core --report-bindings"

module load cae/openfoam/${FOAM_VERSION}
foamInit

# remove decomposePar if you already decomposed your case beforehand
decomposeParHPC &&

# starting the solver in parallel. Name of the solver is given in the "EXECUTABLE" variable
mpirun ${MPIRUN_OPTIONS} ${EXECUTABLE} -parallel &&

reconstructParHPC
</source>
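The script can then be submitted from the case directory with the usual Slurm commands; a short usage sketch, assuming the script above is saved as ''job_openfoam.sh'':
<pre>$ cd <case_folder_path>
$ sbatch job_openfoam.sh
$ squeue -u $USER        # check the status of your job</pre>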
'''Attention:''' the script above will run a parallel OpenFOAM job with the pre-installed OpenMPI. If you are using an OpenFOAM version which comes with pre-installed Intel MPI (for example cae/openfoam/v1712-impi), you will have to modify the batch script to use all the advantages of Intel MPI for parallel calculations. For details see the [https://wiki.bwhpc.de/e/BwUniCluster_2.0_Batch_Queues bwUniCluster 2.0 batch documentation].
= Using I/O and reducing the amount of data and files =
In OpenFOAM, you can control which variables or fields are written at specific times. For example, for post-processing purposes, you might need only a subset of variables. In order to control which files will be written, there is a function object called "writeObjects".
An example controlDict file may look like this: at the top of the file (entry "writeControl") you specify that ALL fields (variables) required for restarting are saved every 12 wall-clock hours. Additionally, at the bottom of the controlDict in the "functions" block, you can add a function object of type "writeObjects". With this function object you can control the output of specific fields independently of the entry at the top of the file:
<source lang="text">
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  4.1.x                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      controlDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

startFrom           latestTime;
startTime           0;
stopAt              endTime;
endTime             1e2;
deltaT              1e-5;

writeControl        clockTime;
writeInterval       43200;      // write ALL fields necessary to restart your simulation
                                // every 43200 wall-clock seconds = 12 hours of real time
purgeWrite          0;
writeFormat         binary;
writePrecision      10;
writeCompression    off;
timeFormat          general;
timePrecision       10;
runTimeModifiable   false;

functions
{
    writeFields                 // name of the function object
    {
        type            writeObjects;
        libs            ( "libutilityFunctionObjects.so" );
        objects
        (
            T U rho             // list of fields/variables to be written
        );
        // E.g. write every 1e-5 seconds of simulation time only the specified fields
        writeControl    runTime;
        writeInterval   1e-5;   // write every 1e-5 seconds
    }
}
</source>
You can also define multiple function objects in order to write different subsets of fields at different times. Wildcards can be used in the list of fields; for example, in order to write out all fields starting with "RR_" you can add
"RR_.*"
to the list of objects. You can get a list of valid field names by writing "banana" in the field list; during the run of the solver, all valid field names are printed. The output time can be changed too: instead of writing at specific simulation times, you can also write after a certain number of time steps or depending on the wall clock time:
<pre>// write every 100th simulation time step
writeControl    timeStep;
writeInterval   100;</pre>
<pre>// every 3600 seconds of real wall clock time
writeControl    clockTime;
writeInterval   3600;</pre>
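As a sketch, the wildcard selection and the time-step based output can be combined in a single function object; the name ''writeReactionRates'' is only an example:
<source lang="text">
writeReactionRates                  // example name of the function object
{
    type            writeObjects;
    libs            ( "libutilityFunctionObjects.so" );
    objects         ( "RR_.*" );    // all fields whose names start with RR_
    writeControl    timeStep;
    writeInterval   100;            // every 100th simulation time step
}
</source>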
If you use OpenFOAM before version 4.0 or 1606, the type of function object is:
<pre>type writeRegisteredObject;   // (instead of type writeObjects)</pre>
If you use OpenFOAM before version 3.0, you have to load the library with
functionObjectLibs ("libIOFunctionObjects.so"); // (instead of libs ( "libutilityFunctionObjects.so" ))
and exchange the entry "writeControl" with "outputControl".
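For orientation, the ''writeFields'' function object from the controlDict above would then read roughly as follows in such older versions (a sketch only, not tested against a specific release):
<source lang="text">
writeFields
{
    type                writeRegisteredObject;          // instead of writeObjects
    functionObjectLibs  ( "libIOFunctionObjects.so" );  // instead of libs (...)
    objects             ( T U rho );
    outputControl       timeStep;                       // instead of writeControl
    outputInterval      100;                            // instead of writeInterval
}
</source>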
= OpenFOAM and ParaView on bwUniCluster =
ParaView is not directly linked to the OpenFOAM installation on the cluster. Therefore, to visualize OpenFOAM cases with ParaView, they have to be opened manually from within a separate ParaView module.
1. Load the ParaView module. For example:
<pre>$ module load cae/paraview/5.9</pre>
2. Create a dummy '*.openfoam' file in the OpenFOAM case folder:
<pre>$ cd <case_folder_path>
$ touch <case_name>.openfoam</pre>
'''NOTICE:''' the name of the dummy file should be the same as the name of the OpenFOAM case folder, with the '.openfoam' extension.
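For example, for a (hypothetical) case directory named ''cavity'':
<pre>$ cd $HOME/OpenFOAM/run/cavity
$ touch cavity.openfoam</pre>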
3. Open ParaView. Running ParaView requires a VNC session on bwUniCluster. On the cluster, run:
<pre>$ start_vnc_desktop --hw-rendering</pre>
Then start your VNC client on your desktop PC. '''NOTICE:''' information on remote visualization on the KIT HPC systems is available at https://wiki.bwhpc.de/e/BwUniCluster2.0/Software/Start_vnc_desktop
4. In ParaView go to 'File' -> 'Open', or press Ctrl+O. Choose to show 'All files (*)' and open your <case_name>.openfoam file. In the pop-up window select 'OpenFOAM' and press 'OK'.
5. That's it! Enjoy ParaView and OpenFOAM.