{{Softwarepage|cae/ansys}}

The main documentation is available on the cluster via <code>module help cae/ansys</code>. Most software modules for applications provide working example batch scripts.

{| width=600px class="wikitable"
|-
! Description
! Content
|-
| module load
| cae/ansys
|-
| License
| Academic. See: Licensing and Terms-of-Use.
|-
| Citing
| Citations
|-
| Links
| [http://www.ansys.com Ansys Homepage], Support and Resources
|-
| Graphical Interface
| Yes
|}


= Description =
ANSYS is a general-purpose software suite for simulating the interactions of all disciplines of physics: structural mechanics, fluid dynamics, heat transfer, electromagnetics, etc. For more information about ANSYS products please visit http://www.ansys.com/Industries/Academic/

= Versions and Availability =
On the command line interface of a particular bwHPC cluster, a list of all available ANSYS versions can be obtained as follows:
<pre>
$ module avail cae/ansys
</pre>
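As mentioned in the introduction, the main documentation and pointers to the example batch scripts shipped with the module can be displayed with the module's help text, for example:
<pre>
$ module help cae/ansys
</pre>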
The cae/ansys modules use the KIT license server and are reserved for members of KIT only.
<br>


= Usage =
== Loading the Module ==
You can load the default version of ANSYS with the command:
<pre>
$ module load cae/ansys
</pre>
If you wish to load a specific version, give the full module name, e.g.:
<pre>
$ module load cae/ansys/2025R2
</pre>

== Start commands ==
To launch an interactive ANSYS FLUENT session enter
<pre>
$ fluent
</pre>
To run the ANSYS Workbench enter
<pre>
$ runwb2
</pre>
Online documentation is available from the help menu. As with all processes that require more than a few minutes to run, non-trivial ANSYS solver jobs must be submitted to the cluster queueing system.
<br>

== ANSYS Fluent batch jobs ==
The execution of FLUENT can also be carried out using a shell script. Below is an example of a shell script named <code>run_fluent.sh</code> that starts a FLUENT calculation in parallel on two nodes with 40 processors each on the bwUniCluster:

1. Using IntelMPI:
<pre>
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --time=2:00:00
#SBATCH --mem=90gb
#SBATCH --partition=cpu

module load cae/ansys/2025R2

source fluentinit

scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts
time fluent 3d -mpi=intel -pib -g -t80 -cnf=fluent.hosts -i test.inp
</pre>
2. Using OpenMPI:
<pre>
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --time=2:00:00
#SBATCH --mem=90gb
#SBATCH --partition=cpu

module load cae/ansys/2025R2

source fluentinit

scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts
time fluent 3d -mpi=openmpi -g -t80 -cnf=fluent.hosts -i test.inp
</pre>

To submit the script to the job management system, run:
<pre>
sbatch run_fluent.sh
</pre>
<br>
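After submission, the job can be monitored with the usual Slurm commands (the job ID is printed by <code>sbatch</code>); for example:
<pre>
$ squeue -u $USER             # list your queued and running jobs
$ scontrol show job <jobid>   # show details of a single job
</pre>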


== ANSYS CFX batch jobs ==
The execution of CFX can also be carried out using a script. Below is an example script <code>run_cfx.sh</code> that starts CFX with the start method 'Intel MPI Distributed Parallel':

<pre>
#!/bin/sh
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --partition=cpu
#SBATCH --time=0:30:00
#SBATCH --mem=90gb

module load cae/ansys/2025R2

source cfxinit

cfx5solve -def test.def -par-dist $hostlist -start-method 'Intel MPI Distributed Parallel'
</pre>

To submit the script to the job management system, run:
<pre>
sbatch run_cfx.sh
</pre>
<br>
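Note that <code>$hostlist</code> in the script above is presumably set by the environment prepared via <code>source cfxinit</code>. Should that not be the case on your system, a comma-separated host list for <code>-par-dist</code> can be built by hand from the Slurm node list; a minimal sketch, analogous to the <code>fluent.hosts</code> file used above (only needed if <code>cfxinit</code> does not already provide the variable):
<pre>
# join the allocated node names into a comma-separated list
hostlist=$(scontrol show hostname ${SLURM_JOB_NODELIST} | paste -sd, -)
</pre>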


== ANSYS Rocky batch jobs ==
The execution of Rocky DEM can also be carried out using a script. Below is an example script <code>run_rocky.sh</code> to run a Rocky DEM simulation on a GPU node:

<pre>
#!/bin/bash
#SBATCH --output=rocky_job_%j.out
#SBATCH --job-name=rocky_job
#SBATCH --partition=gpu_a100_short         # GPU partition
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1
#SBATCH --mem=30000 # Total memory (MB)
#SBATCH --time=00:30:00 # Time limit (hh:mm:ss)

module load cae/ansys/2025R2

# Run Rocky DEM
Rocky --simulate "/path/to/Test.rocky"
</pre>

To submit the script to the job management system, run:

<pre>
sbatch run_rocky.sh
</pre>
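For short interactive tests (for example, to check that a GPU is visible before submitting a long run), an interactive allocation on the same partition can be requested; a minimal sketch reusing the resource values from the batch script above:
<pre>
$ salloc --partition=gpu_a100_short --nodes=1 --gpus-per-node=1 --mem=30000 --time=00:30:00
$ module load cae/ansys/2025R2
$ nvidia-smi    # the GPU assigned by Slurm should be listed here
</pre>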

----
[[Category:Engineering software]][[Category:bwUniCluster]]