BwUniCluster2.0/Software/Ansys
| Description | Content |
|---|---|
| module load | cae/ansys |
| Availability | bwUniCluster, bwGRiD-Tübingen |
| License | Academic. See: Licensing and Terms-of-Use. |
| Citing | Citations |
| Links | Ansys Homepage, Support and Resources |
| Graphical Interface | Yes |
Description
ANSYS is a general-purpose software suite for simulating interactions across all disciplines of physics: structural mechanics, fluid dynamics, heat transfer, electromagnetics, etc. For more information about ANSYS products please visit http://www.ansys.com/Industries/Academic/
Versions and Availability
A list of the versions currently available on all bwHPC-C5 clusters can be obtained from the Cluster Information System (CIS):
{{#widget:Iframe
|url=https://cis-hpc.uni-konstanz.de/prod.cis/bwUniCluster/cae/ansys
|width=99%
|height=350
|border=0
}}
On the command line of a particular bwHPC cluster, the list of all available ANSYS versions can be queried as follows:
$ module avail cae/ansys
-------------------------- /opt/bwhpc/kit/modulefiles --------------------------
cae/ansys/15.0    cae/ansys/16.2    cae/ansys/17.2    cae/ansys/18.2
The cae/ansys modules use the KIT license server and are reserved for members of KIT only.
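To see which environment variables (for example the license server settings) a particular module defines before loading it, the module system's inspection commands can be used; a minimal sketch, using version 18.2 from the listing above:
$ module show cae/ansys/18.2    # list the environment changes the module makes
$ module help cae/ansys/18.2    # print the module's help text, if provided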
Usage
Loading the Module
If you wish to load a specific version of ANSYS, you can do so by executing, e.g.:
$ module load cae/ansys/15.0
to load version 15.0.
You can load the default version of ANSYS with the command:
$ module load cae/ansys
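To verify that the module has been loaded and that the solver executables are on the PATH, a quick check (ansys150 is the ANSYS Mechanical start command introduced below):
$ module list        # show the currently loaded modules
$ which ansys150     # confirm the ANSYS Mechanical executable is found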
Start commands
To start an ANSYS Mechanical session enter
$ ansys150
To launch an ANSYS FLUENT session enter
$ fluent
To run the ANSYS Workbench enter
$ runwb2
Online documentation is available from the help menu or by using the command
$ anshelp150
As with all processes that require more than a few minutes to run, non-trivial ANSYS solver jobs must be submitted to the cluster queueing system.
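Jobs are submitted with msub and can then be monitored with the Moab commands; a minimal sketch (the job ID is the value printed by msub, shown here as a hypothetical 1234567):
$ msub run_fluent.sh      # submit a job script (see the examples below); prints the job ID
$ showq -u $USER          # list your queued and running jobs
$ checkjob 1234567        # show the detailed state of a single job
$ canceljob 1234567       # cancel the job if necessary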
Examples
ANSYS Mechanical batch jobs
The following script could be submitted to the queueing system to run an ANSYS Mechanical job in parallel:
#!/bin/bash
## setup environment
module load cae/ansys
export MPIRUN_OPTIONS="-prot"
export MPI_USESRUN=1
## change into the working directory
cd working_dir
## generate the machines list for the allocated nodes
export MACHINES=`/software/bwhpc/common/cae/ansys_inc150/scc/machines.pl`
## start the distributed ANSYS Mechanical solver in batch mode
ansys150 -dis -b -j lal -machines $MACHINES < input.f18
Here working_dir is your working directory; its path could start with $HOME or $WORK.
To submit the example script to the queueing system execute the following (32 cores, 1 GB of memory per core, max. walltime 600 seconds):
$ msub -l nodes=2:ppn=16,pmem=1000mb,walltime=600 <shell-script>
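Alternatively, the resource requests can be embedded in the job script itself as #MSUB directives, as done in the Fluent and CFX examples below; a sketch equivalent to the command line above (the rest of the script stays unchanged):
#!/bin/bash
#MSUB -l nodes=2:ppn=16
#MSUB -l pmem=1000mb
#MSUB -l walltime=600
## ... continue with the ANSYS Mechanical commands shown above ...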
ANSYS Fluent batch jobs
The following script "run_fluent.sh" could be submitted to the queueing system to run an ANSYS Fluent job in parallel using 56 cores on two nodes:
#!/bin/sh
#MSUB -l nodes=2:ppn=28
#MSUB -l walltime=0:60:00
#MSUB -l mem=120000mb
## setup environment
module load cae/ansys
module load system/ssh_wrapper/0.1
export MPI_USESRUN=1
export FLUENT_SSH=/opt/bwhpc/common/system/ssh_wrapper/0.1/ssh
export FLUENT_SKIP_SSH_CHECK=1
## let Intel MPI use the SLURM allocation for process startup
export I_MPI_HYDRA_BOOTSTRAP="slurm"
export I_MPI_HYDRA_RMK="slurm"
export I_MPI_HYDRA_BRANCH_COUNT="-1"
## build the host file expected by -cnf= (one allocated hostname per line)
export run_nodes=`srun hostname -s`
echo $run_nodes | sed "s/ /\n/g" > fluent.hosts
echo "" >> fluent.hosts
## start fluent job
time fluent 3d -mpi=intel -g -t56 -pib -cnf=fluent.hosts -i test.inp
To submit the example script to the queueing system execute the following:
$ msub run_fluent.sh
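Note that the -t argument in the script is hard-coded to 56 (2 nodes x 28 cores per node). If the node request is changed, the core count can instead be derived from the SLURM environment; a sketch, assuming SLURM_NNODES is set inside the job and 28 cores per node:
## compute the total core count from the allocation instead of hard-coding -t56
NCORES=$((SLURM_NNODES * 28))
time fluent 3d -mpi=intel -g -t${NCORES} -pib -cnf=fluent.hosts -i test.inp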
ANSYS CFX batch jobs
See also http://www.scc.kit.edu/produkte/3852.php > "Kurzanleitung" > "#CFX_auf_den_Linux-Clustern_des_SCC"
The following script "run_cfx.sh" could be submitted to the queueing system to run an ANSYS CFX job in parallel on two nodes using the start method 'Intel MPI Distributed Parallel':
#!/bin/sh
#MSUB -l nodes=2:ppn=28
#MSUB -l walltime=0:60:00
#MSUB -l mem=120000mb
## setup environment
module load cae/ansys
export CFX5RSH=ssh
export MPI_USESRUN=1
export I_MPI_HYDRA_BOOTSTRAP="slurm"
export I_MPI_HYDRA_RMK="slurm"
export I_MPI_HYDRA_BRANCH_COUNT="-1"
## build a comma-separated host list from the allocated nodes
hostlist=$(srun hostname -s)
hostlist=`echo $hostlist | sed 's/ /,/g'`
## start cfx job
cfx5solve -def test.def -par-dist $hostlist -start-method 'Intel MPI Distributed Parallel'
To submit the example script to the queueing system execute:
$ msub run_cfx.sh
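For a run on a single node no distributed host list is required; a hedged sketch using cfx5solve's -part option with a local parallel start method (the exact start-method string is an assumption and depends on the installed ANSYS version, so verify the start methods available in your installation):
## single-node run: partition the case across 28 local processes
## 'Intel MPI Local Parallel' is an assumed start-method name; verify it for the installed version
cfx5solve -def test.def -part 28 -start-method 'Intel MPI Local Parallel'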