BwUniCluster2.0/Software/Ansys
{{Softwarepage|cae/ansys}} |

{| class="wikitable"
|-
! Description !! Content
|-
| module load
| cae/ansys
|-
| License
| Academic. See: [http://www.ansys.com/Academic/educator-tools/Licensing+&+Terms+of+Use Licensing and Terms-of-Use].
|-
| Citing
| [http://www.ansys.com/academic/educator-tools/ Citations]
|-
| Links
| [http://www.ansys.com/ Ansys Homepage], [http://www.ansys.com/Academic/educator-tools/Support+Resources Support and Resources]
|-
| Graphical Interface
| Yes
|}

= Description =
ANSYS is a general-purpose software suite for simulating interactions across all disciplines of physics: structural mechanics, fluid dynamics, heat transfer, electromagnetics, etc. For more information about ANSYS products please visit http://www.ansys.com/Industries/Academic/

= Versions and Availability =
The cae/ansys modules use the KIT license server and are reserved for members of KIT only.
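
Which ANSYS versions are installed on the cluster, and which one is loaded by default, can be checked with the usual module commands, for example:
<pre>
module avail cae/ansys        # list all installed cae/ansys versions
module load cae/ansys         # load the default version
module load cae/ansys/2025R2  # load a specific version
</pre>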

= Usage =

== ANSYS Fluent batch jobs ==
The execution of FLUENT can be carried out in batch mode using a shell script. Below is an example of a shell script named <code>run_fluent.sh</code> that starts a FLUENT calculation in parallel on two nodes of the bwUniCluster. The <code>-t</code> option of the <code>fluent</code> command sets the total number of FLUENT processes and should be consistent with the number of tasks requested from Slurm.

1. Using IntelMPI:
<pre>
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --time=2:00:00
#SBATCH --mem=90gb
#SBATCH --partition=cpu

module load cae/ansys/2025R2
source fluentinit

# Write the names of the allocated nodes into the host file used by FLUENT
scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts

# -t sets the total number of FLUENT processes, -cnf the host file, -i the journal/input file
time fluent 3d -mpi=intel -pib -g -t80 -cnf=fluent.hosts -i test.inp
</pre>
||
There are two licenses. The module <pre>cae/ansys/15.0_bw</pre> uses the BW license server (25 academic research, 69 parallel processes) and <pre>cae/ansys/15.0</pre> uses the KIT license server (only members of the KIT can use it). |
|||
<br> |
|||
2. Using OpenMPI: |
|||
= Usage = |
|||
== Loading the Module == |
|||
If you wish to load a specific version of ANSYS you can do so by executing e.g.: |
|||
<pre> |
<pre> |
||
#!/bin/bash |
|||
$ module load cae/ansys/15.0_bw |
|||
#SBATCH --nodes=2 |
|||
#SBATCH --ntasks-per-node=96 |
|||
#SBATCH --time=2:00:00 |
|||
#SBATCH --mem=90gb |
|||
#SBATCH --partition=cpu |
|||
module load cae/ansys/2025R2 |
|||
source fluentinit |
|||
scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts |
|||
time fluent 3d -mpi=openmpi -g -t80 -cnf=fluent.hosts -i test.inp |
|||
</pre> |
</pre> |
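
Both variants pass a journal file to FLUENT via <code>-i test.inp</code>; this file contains the text commands FLUENT executes in batch mode. The exact commands depend on the case and the FLUENT version; a minimal sketch, with a hypothetical case file name, could be created from the job script like this:
<pre>
# Hypothetical example: write a minimal FLUENT journal file (adapt to your own case)
cat > test.inp << 'EOF'
/file/read-case example.cas.h5
/solve/iterate 500
/file/write-case-data example_solved.cas.h5
exit
yes
EOF
</pre>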

To submit the script to the job management system, run:
<pre>
sbatch run_fluent.sh
</pre>
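
After submission, the job can be monitored with the usual Slurm commands, for example:
<pre>
squeue -u $USER              # list your pending and running jobs
scontrol show job <jobid>    # detailed information on a specific job
tail -f slurm-<jobid>.out    # follow the solver output (default Slurm output file)
</pre>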

== ANSYS CFX batch jobs ==
The execution of CFX can also be carried out using a script. Below is an example script <code>run_cfx.sh</code> that starts CFX with the start method 'Intel MPI Distributed Parallel':
<pre>
#!/bin/sh
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --partition=cpu
#SBATCH --time=0:30:00
#SBATCH --mem=90gb

module load cae/ansys/2025R2
source cfxinit

cfx5solve -def test.def -par-dist $hostlist -start-method 'Intel MPI Distributed Parallel'
</pre>
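
The <code>-par-dist</code> option expects a comma-separated list of the hosts (and partitions per host) on which the distributed run is started; the example above assumes that <code>source cfxinit</code> provides the <code>$hostlist</code> variable. If it does not on your system, a sketch of building such a list directly from the Slurm allocation could look like this:
<pre>
# Sketch: build a host list of the form "node1*96,node2*96" from the Slurm allocation
# (assumes one CFX partition per Slurm task, i.e. --ntasks-per-node partitions per host)
hostlist=$(scontrol show hostname ${SLURM_JOB_NODELIST} | \
           awk -v n=${SLURM_NTASKS_PER_NODE} '{printf "%s%s*%s", sep, $1, n; sep=","}')
cfx5solve -def test.def -par-dist "${hostlist}" -start-method 'Intel MPI Distributed Parallel'
</pre>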

To submit the script to the job management system, run:
<pre>
sbatch run_cfx.sh
</pre>

== ANSYS Rocky batch jobs ==
The execution of Rocky DEM can also be carried out using a script. Below is an example script <code>run_rocky.sh</code> that runs a Rocky DEM simulation on a GPU node:
<pre>
#!/bin/bash
#SBATCH --output=rocky_job_%j.out
#SBATCH --job-name=rocky_job
#SBATCH --partition=gpu_a100_short   # GPU partition
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1
#SBATCH --mem=30000                  # Total memory (MB)
#SBATCH --time=00:30:00              # Time limit (hh:mm:ss)

module load cae/ansys/2025R2

# Run Rocky DEM
Rocky --simulate "/path/to/Test.rocky"
</pre>
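
To verify that the job actually received a GPU, a short sanity check can be added to the script before the <code>Rocky</code> call, for example with the standard <code>nvidia-smi</code> tool:
<pre>
# Optional sanity check: report the GPU visible to this job
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
echo "CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"
</pre>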

To submit the script to the job management system, run:
<pre>
sbatch run_rocky.sh
</pre>

----
[[Category:Engineering software]][[Category:bwUniCluster]]