BwUniCluster2.0/Software/Ansys

The main documentation is available on the cluster via the module system.

{| class="wikitable"
! Description !! Content
|-
| module load || cae/ansys
|-
| License || Academic. See: Licensing and Terms-of-Use.
|-
| Citing || Citations
|-
| Links || Ansys Homepage, Support and Resources
|-
| Graphical Interface || Yes
|}

= Description =

ANSYS is a general-purpose simulation software covering all disciplines of physics, including structural mechanics, fluid dynamics, heat transfer and electromagnetics. For more information about ANSYS products please visit http://www.ansys.com/Industries/Academic/

= Versions and Availability =

The cae/ansys modules use the KIT license server and are reserved for members of KIT only.
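The installed versions can be listed and loaded with the standard module commands; as a sketch, the version <code>cae/ansys/2025R2</code> shown below is simply the one used in the example scripts on this page and may differ on your cluster:

<pre>
# list all installed ANSYS module versions
module avail cae/ansys

# load a specific version (2025R2 is used in the examples below)
module load cae/ansys/2025R2

# show the help text provided by the module
module help cae/ansys/2025R2
</pre>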
= Usage =

== ANSYS Fluent batch jobs ==
The execution of FLUENT can also be carried out using a shell script. Below is an example of a shell script named <code>run_fluent.sh</code> that starts a parallel FLUENT calculation on two nodes of the bwUniCluster; the total number of Fluent processes is set with the <code>-t</code> option and should match the resources requested from Slurm.

See also http://www.scc.kit.edu/produkte/6724.php > "Kurzanleitung" (quick guide) > "#FLUENT-Aufruf_auf_den_Linux-Clustern_des_SCC".

1. Using IntelMPI:
<pre>
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --time=2:00:00
#SBATCH --mem=90gb
#SBATCH --partition=cpu

module load cae/ansys/2025R2
source fluentinit

# write the list of allocated nodes to a host file for Fluent
scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts

# -g: run without GUI, -t: total number of Fluent processes (adjust to --nodes x --ntasks-per-node),
# -cnf: host file, -i: journal file with the solver commands, -pib: use the InfiniBand interconnect
time fluent 3d -mpi=intel -pib -g -t80 -cnf=fluent.hosts -i test.inp
</pre>
2. Using OpenMPI:
<pre>
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --time=2:00:00
#SBATCH --mem=90gb
#SBATCH --partition=cpu

module load cae/ansys/2025R2
source fluentinit

# write the list of allocated nodes to a host file for Fluent
scontrol show hostname ${SLURM_JOB_NODELIST} > fluent.hosts

# same as above, but start Fluent with Open MPI instead of Intel MPI
time fluent 3d -mpi=openmpi -g -t80 -cnf=fluent.hosts -i test.inp
</pre>
To submit the script to the job management system, run:

<pre>
sbatch run_fluent.sh
</pre>
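Both scripts read the solver commands from the journal file <code>test.inp</code> passed via <code>-i</code>. A minimal sketch of how such a journal file could be created is shown below; the case file name <code>test.cas</code>, the result name and the iteration count are placeholders and must be adapted to your own setup:

<pre>
# create a minimal Fluent journal file (sketch; file names and iteration count are placeholders)
cat > test.inp << 'EOF'
/file/read-case test.cas
/solve/initialize/initialize-flow
/solve/iterate 500
/file/write-case-data test_result
exit
yes
EOF
</pre>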
== ANSYS CFX batch jobs ==
The execution of CFX can also be carried out using a script. Below is an example script <code>run_cfx.sh</code> that starts CFX with the start method 'Intel MPI Distributed Parallel':

See also http://www.scc.kit.edu/produkte/3852.php > "Kurzanleitung" (quick guide) > "#CFX_auf_den_Linux-Clustern_des_SCC".
<pre>
#!/bin/sh
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --partition=cpu
#SBATCH --time=0:30:00
#SBATCH --mem=90gb

module load cae/ansys/2025R2
source cfxinit

# build the host list for -par-dist, e.g. "node1*96,node2*96" (one entry per node with its number of processes)
hostlist=$(scontrol show hostname ${SLURM_JOB_NODELIST} | sed "s/$/*${SLURM_NTASKS_PER_NODE}/" | paste -sd, -)

# -def: CFX solver definition file, -par-dist: hosts to distribute the parallel run over
cfx5solve -def test.def -par-dist $hostlist -start-method 'Intel MPI Distributed Parallel'
</pre>
To submit the script to the job management system, run:

<pre>
sbatch run_cfx.sh
</pre>
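After submitting a script with <code>sbatch</code>, the state of the job can be checked with the standard Slurm commands, for example:

<pre>
# list your own pending and running jobs
squeue -u $USER

# show details of a specific job (replace <jobid> with the ID printed by sbatch)
scontrol show job <jobid>
</pre>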
== ANSYS Rocky batch jobs ==

The execution of Rocky DEM can also be carried out using a script. Below is an example script <code>run_rocky.sh</code> to run a Rocky DEM simulation on a GPU node:
<pre>
#!/bin/bash
#SBATCH --output=rocky_job_%j.out       # Output file (%j is replaced by the job ID)
#SBATCH --job-name=rocky_job
#SBATCH --partition=gpu_a100_short      # GPU partition
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1
#SBATCH --mem=30000                     # Total memory (MB)
#SBATCH --time=00:30:00                 # Time limit (hh:mm:ss)

module load cae/ansys/2025R2

# Run Rocky DEM on the project file (adjust the path to your own .rocky project)
Rocky --simulate "/path/to/Test.rocky"
</pre>
To submit the script to the job management system, run:

<pre>
sbatch run_rocky.sh
</pre>
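While the job is running, its progress can be followed in the output file defined by <code>--output</code>, for example:

<pre>
# follow the solver output live (replace <jobid> with the actual job ID)
tail -f rocky_job_<jobid>.out
</pre>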
[[Category:Engineering software]][[Category:bwUniCluster]]