BwUniCluster3.0/Software/Ansys

From bwHPC Wiki
Revision as of 16:20, 20 August 2025 by B Jetty (talk | contribs)

The main documentation is available on the cluster via module help cae/ansys. Most software modules for applications provide working example batch scripts.


Description         Content
module load         cae/ansys
License             Academic. See: Licensing and Terms-of-Use.
Citing              Citations
Links               Ansys Homepage | Support and Resources
Graphical Interface Yes

Description

ANSYS is a general-purpose software suite for simulating interactions across all disciplines of physics: structural mechanics, fluid dynamics, heat transfer, electromagnetics, etc. For more information about ANSYS products, please visit http://www.ansys.com/Industries/Academic/

Versions and Availability

The cae/ansys modules use the KIT license server and are reserved for members of KIT only.

Usage

ANSYS Fluent batch jobs

The execution of FLUENT can also be carried out using a shell script. Below is an example of a shell script named run_fluent.sh that starts a FLUENT calculation in parallel on two nodes with 48 processors each (96 in total) on the bwUniCluster:

1. Using IntelMPI:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --time=2:00:00
#SBATCH --mem=90gb
#SBATCH --partition=cpu

module load cae/ansys/2025R2

source fluentinit    # set up the Fluent environment

# Write one hostname per line; Fluent reads this host file via -cnf
scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts
time fluent 3d -mpi=intel -pib -g -t96 -cnf=fluent.hosts -i test.inp
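The -t value passed to Fluent must match the total MPI task count requested from Slurm. A minimal sketch of that arithmetic, with the node and task counts taken from the #SBATCH lines above:

```shell
# Total Fluent processes = nodes * tasks per node
# (values from the #SBATCH lines of the example script)
NODES=2
NTASKS_PER_NODE=48
TOTAL=$((NODES * NTASKS_PER_NODE))
echo "-t${TOTAL}"    # -t96
```

If you change --nodes or --ntasks-per-node, adjust the -t argument accordingly.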

2. Using OpenMPI:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --time=2:00:00
#SBATCH --mem=90gb
#SBATCH --partition=cpu

module load cae/ansys/2025R2

source fluentinit    # set up the Fluent environment

# Write one hostname per line; Fluent reads this host file via -cnf
scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts
time fluent 3d -mpi=openmpi -g -t96 -cnf=fluent.hosts -i test.inp
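scontrol show hostname expands Slurm's compressed node list into one hostname per line, which is the format Fluent expects in the -cnf host file. A hedged emulation of that expansion, using a made-up node list (uc3n[001-002] is a hypothetical value of $SLURM_JOB_NODELIST, not a real node name):

```shell
# Emulate `scontrol show hostname` for a hypothetical compressed nodelist.
NODELIST="uc3n[001-002]"                       # made-up example value
prefix="${NODELIST%%\[*}"                      # -> uc3n
range="${NODELIST#*\[}"; range="${range%\]}"   # -> 001-002
for i in $(seq -w "${range%%-*}" "${range##*-}"); do
    echo "${prefix}${i}"                       # one hostname per line, as in fluent.hosts
done
```

On the cluster itself there is no need for this: the scontrol line in the scripts above produces the file directly.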

To submit the script to the job management system, run:

sbatch run_fluent.sh


ANSYS CFX batch jobs

The execution of CFX can also be carried out using a script. Below is an example script run_cfx.sh to start CFX with the start method 'Intel MPI Distributed Parallel':

#!/bin/sh
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --partition=cpu
#SBATCH --time=0:30:00
#SBATCH --mem=90gb

module load cae/ansys/2025R2

source cfxinit    # set up the CFX environment; expected to define $hostlist

cfx5solve -def test.def -par-dist $hostlist -start-method 'Intel MPI Distributed Parallel'
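The -par-dist option of cfx5solve takes a comma-separated list of the form host*nprocs. The script relies on cfxinit to provide $hostlist; a hedged sketch of how such a list could be assembled (the hostnames uc3n001 and uc3n002 are made up for illustration):

```shell
# Hypothetical sketch: build a CFX -par-dist host list of the form
# "host1*n,host2*n". cfxinit is assumed to construct $hostlist similarly.
NTASKS_PER_NODE=48
hostlist=""
for h in uc3n001 uc3n002; do                   # made-up hostnames
    hostlist="${hostlist:+${hostlist},}${h}*${NTASKS_PER_NODE}"
done
echo "$hostlist"    # uc3n001*48,uc3n002*48
```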

To submit the script to the job management system, run:

sbatch run_cfx.sh

ANSYS Rocky batch jobs

The execution of Rocky DEM can also be carried out using a script. Below is an example script run_rocky.sh to run a Rocky DEM simulation on a GPU node:

#!/bin/bash
#SBATCH --output=rocky_job_%j.out          
#SBATCH --job-name=rocky_job               
#SBATCH --partition=gpu_a100_short          # GPU partition
#SBATCH --nodes=1                           
#SBATCH --gpus-per-node=1                   
#SBATCH --mem=30000                         # Total memory (MB)
#SBATCH --time=00:30:00                     # Time limit (hh:mm:ss)

module load cae/ansys/2025R2

# Run Rocky DEM
Rocky --simulate "/path/to/Test.rocky"

To submit the script to the job management system, run:

sbatch run_rocky.sh