BwUniCluster2.0/Software/Ansys
{{Softwarepage|cae/ansys}}

{|{{Softwarebox}}
|-
! Description !! Content
|-
| module load
| cae/ansys
|-
| License
| Academic. See: [http://www.ansys.com/Academic/educator-tools/Licensing+&+Terms+of+Use Licensing and Terms-of-Use].
|-
| Citing
| [http://www.ansys.com/academic/educator-tools/ Citations]
|-
| Links
| [http://www.ansys.com/ Ansys Homepage] | [http://www.ansys.com/Academic/educator-tools/Support+Resources Support and Resources]
|-
| Graphical Interface
| Yes
|}
= Description =
'''ANSYS''' is a general-purpose software suite for simulating interactions across all disciplines of physics: structural mechanics, fluid dynamics, heat transfer, electromagnetics, etc. For more information about ANSYS products, please visit [http://www.ansys.com/Industries/Academic/ http://www.ansys.com/Industries/Academic/]
<br>
<br>
= Versions and Availability =
The cae/ansys modules use the KIT license server and are reserved for members of the KIT only.
<br>
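The ANSYS versions that are currently installed can be listed and loaded with the module system. A minimal sketch (the version string <code>2025R2</code> is taken from the job scripts below; check <code>module avail</code> for the versions actually present on the cluster):
<pre>
$ module avail cae/ansys
$ module load cae/ansys/2025R2
</pre>
Loading a module only prepares the environment of the current shell; batch jobs must load it again inside the job script, as done in the examples below.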
= Usage =
== ANSYS Fluent batch jobs ==
The execution of FLUENT can also be carried out using a shell script. Below is an example of a shell script named <code>run_fluent.sh</code> that starts a FLUENT calculation in parallel on two nodes of the bwUniCluster. The number of FLUENT processes is set with the <code>-t</code> option and should match the total number of tasks requested from Slurm (see the sketch after the two examples).

1. Using IntelMPI:
<pre>
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --time=2:00:00
#SBATCH --mem=90gb
#SBATCH --partition=cpu

module load cae/ansys/2025R2
source fluentinit

# write one hostname per allocated node into the host file
scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts

# -t<N> sets the number of parallel FLUENT processes
time fluent 3d -mpi=intel -pib -g -t80 -cnf=fluent.hosts -i test.inp
</pre>
2. Using OpenMPI:
<pre>
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --time=2:00:00
#SBATCH --mem=90gb
#SBATCH --partition=cpu

module load cae/ansys/2025R2
source fluentinit

# write one hostname per allocated node into the host file
scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts

# -t<N> sets the number of parallel FLUENT processes
time fluent 3d -mpi=openmpi -g -t80 -cnf=fluent.hosts -i test.inp
</pre>
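Instead of hard-coding the process count, the value passed to <code>-t</code> can be derived from the Slurm allocation so that it always matches the requested resources. A minimal sketch, assuming Slurm exports <code>SLURM_NTASKS</code> for the job (if it is not set, the count can still be written explicitly); the same replacement works for the OpenMPI variant:
<pre>
# use every task granted by Slurm as a FLUENT process
time fluent 3d -mpi=intel -pib -g -t${SLURM_NTASKS} -cnf=fluent.hosts -i test.inp
</pre>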
To submit the script to the job management system, run:
<pre>
sbatch run_fluent.sh
</pre>
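After submission, the job can be followed with the usual Slurm commands, for example (the job ID is printed by <code>sbatch</code>):
<pre>
$ squeue -u $USER              # list your pending and running jobs
$ scontrol show job <jobid>    # detailed information on one job
</pre>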
<br>

== ANSYS CFX batch jobs ==
The execution of CFX can also be carried out using a script. Below is an example script <code>run_cfx.sh</code> that starts CFX with the start method 'Intel MPI Distributed Parallel':
<pre>
#!/bin/sh
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --partition=cpu
#SBATCH --time=0:30:00
#SBATCH --mem=90gb

module load cae/ansys/2025R2
source cfxinit

# one possible way to build the host list expected by -par-dist:
# comma-separated "hostname*tasks" entries, one per allocated node
hostlist=$(scontrol show hostname ${SLURM_JOB_NODELIST} | \
           sed "s/$/*${SLURM_NTASKS_PER_NODE}/" | paste -sd, -)

cfx5solve -def test.def -par-dist "$hostlist" -start-method 'Intel MPI Distributed Parallel'
</pre>
To submit the script to the job management system, run:
<pre>
sbatch run_cfx.sh
</pre>
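For runs that fit on a single node, CFX can also be started with a local parallel start method instead of the distributed one; no host list is needed in that case. A minimal sketch, assuming 8 partitions and that the start method name 'Intel MPI Local Parallel' (and the memory value chosen here) suit the installed CFX version:
<pre>
#!/bin/sh
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --partition=cpu
#SBATCH --time=0:30:00
#SBATCH --mem=20gb

module load cae/ansys/2025R2
source cfxinit

# local (single-node) parallel run on 8 partitions
cfx5solve -def test.def -part 8 -start-method 'Intel MPI Local Parallel'
</pre>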
<br>

== ANSYS Rocky batch jobs ==
The execution of Rocky DEM can also be carried out using a script. Below is an example script <code>run_rocky.sh</code> to run a Rocky DEM simulation on a GPU node:
<pre>
#!/bin/bash
#SBATCH --output=rocky_job_%j.out
#SBATCH --job-name=rocky_job
#SBATCH --partition=gpu_a100_short   # GPU partition
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1
#SBATCH --mem=30000                  # Total memory (MB)
#SBATCH --time=00:30:00              # Time limit (hh:mm:ss)

module load cae/ansys/2025R2

# Run Rocky DEM in batch mode on the given project file
Rocky --simulate "/path/to/Test.rocky"
</pre>
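To verify that the GPU was actually assigned to the job, a quick check can be added to the script before the solver is started (a sketch; <code>nvidia-smi</code> is assumed to be available on the GPU nodes):
<pre>
# show the GPU(s) visible to this job
nvidia-smi
echo "CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"
</pre>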
To submit the script to the job management system, run:
<pre>
sbatch run_rocky.sh
</pre>
<br>
----
[[Category:Engineering software]][[Category:bwUniCluster]]