BwUniCluster2.0/FAQ - broadwell partition

FAQs concerning best practice on the [[BwUniCluster_2.0_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka "extension" partition).

__TOC__

= Login =
== Are there separate login nodes for the bwUniCluster broadwell partition? ==
* Yes, but primarily to be used for compiling code.
== How to login to broadwell login nodes? ==
* You can log in directly to the broadwell partition login nodes using:
<pre>
$ ssh username@uc1e.scc.kit.edu
</pre>
* If you compile code on the broadwell login nodes, it will not run optimally on the new "Cascade Lake" nodes.
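
The ssh command above can be shortened with a client-side configuration entry. This is only a convenience sketch, assuming a local OpenSSH client; "username" is a placeholder for your own account name.
<pre>
# Optional entry for your local ~/.ssh/config (assumption: OpenSSH client)
Host uc1e
    HostName uc1e.scc.kit.edu
    User username

# afterwards the login shortens to:
$ ssh uc1e
</pre>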
<br>
= Compilation =
== How to compile code on broadwell (extension) nodes? ==
To build code to be used only on the partition multiple_e:
<pre>
$ icc/ifort -xHost [-further_options]
</pre>
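
As an illustration only, a full compile step on a uc1e login node might look like the following. The module name and the source file my_app.f90 are assumptions and not part of this FAQ; check "module avail" for the exact compiler module on the cluster.
<pre>
# Hypothetical example: load an Intel compiler module (exact name may differ),
# then build with host-specific optimisation for the broadwell nodes.
$ module load compiler/intel
$ ifort -xHost -O2 -o my_app my_app.f90
</pre>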
== How to compile code to be used on ALL partitions? ==
On uc1e (= extension) login nodes:
<pre>
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]
</pre>
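
A hedged example with a C source (the file name my_app.c is a placeholder): -xCORE-AVX2 generates a baseline code path that runs on the broadwell nodes, while -axCORE-AVX512 adds an alternative code path that is selected automatically on the newer AVX-512 capable nodes.
<pre>
# Baseline AVX2 path (broadwell) plus an auto-dispatched AVX-512 path (newer nodes)
$ icc -xCORE-AVX2 -axCORE-AVX512 -O2 -o my_app my_app.c
</pre>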
<br>
= Job execution =
== How to submit jobs to the broadwell (= extension) partition? ==
If the partition is specified correctly, the submitted job will be placed on the broadwell nodes, i.e.:
<pre>
$ sbatch -p multiple_e
</pre>
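
The partition can also be set inside the batch script. The following is a minimal sketch; node count, walltime, and the "mpirun ./my_app" line are example values only and not part of this FAQ.
<pre>
#!/bin/bash
#SBATCH --partition=multiple_e       # broadwell (= extension) partition
#SBATCH --nodes=2                    # example value
#SBATCH --ntasks-per-node=28         # broadwell nodes provide 28 cores per node
#SBATCH --time=00:30:00              # example value

# Placeholder workload: load your own modules and start your own binary here
mpirun ./my_app
</pre>
Submit it as usual, e.g. with "sbatch my_job.sh" (the file name is arbitrary).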
== Can I use my old multinode job script for the new broadwell partition? ==
Yes, but please note that all broadwell nodes have '''28 cores per node'''.
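
If the old script was written for nodes with a different core count, the per-node requests are the lines to check; a sketch of the relevant directives (values are examples only):
<pre>
#SBATCH --partition=multiple_e
#SBATCH --ntasks-per-node=28    # adjust to the 28 cores per broadwell node
</pre>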
[[Category:bwUniCluster]]