BwUniCluster2.0/FAQ - broadwell partition

FAQs concerning best practice of [[BwUniCluster_2.0_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka "extension" partition).


__TOC__
= Login =
== Are there separate login nodes for the bwUniCluster broadwell partition? ==
* Yes, but primarily to be used for compiling code.

== How to log in to the broadwell login nodes? ==
* You can log in directly to the broadwell partition login nodes using the command below; an optional SSH config shortcut is sketched after this answer.
<pre>
$ ssh username@uc1e.scc.kit.edu
</pre>

* If you are compiling code on the broadwell login nodes, your code will not run optimally on the new "Cascade Lake" nodes.
<br>
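If you log in frequently, a host alias in ~/.ssh/config saves typing. This is only an optional sketch; the alias name and the username are placeholders, not site defaults:
<pre>
# ~/.ssh/config -- optional shortcut (alias and username are examples)
Host uc1e
    HostName uc1e.scc.kit.edu
    User username

# afterwards, logging in is simply:
$ ssh uc1e
</pre>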


= Compilation =
== How to compile code on broadwell (extension) nodes? ==
To use the code only on the partition multiple_e:
<pre>
$ icc/ifort -xHost [-further_options]
</pre>
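As an illustration, a complete compile command on uc1e might look like the following; the source file name and optimization level are placeholders:
<pre>
# -xHost on a broadwell (uc1e) login node targets the AVX2 instruction set
# of the extension nodes; file name and -O2 are examples only
$ icc -xHost -O2 -o my_app my_app.c
</pre>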


== How to compile code to be used on ALL partitions? ==
On uc1e (= extension) login nodes:
<pre>
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]
</pre>
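As a sketch, the same kind of build with ifort could look like this (the file name is an example): -x sets the baseline code path (AVX2, which runs on broadwell), and -ax adds an additional AVX-512 code path that is selected automatically at run time on newer nodes.
<pre>
# baseline AVX2 code path plus an auto-dispatched AVX-512 code path;
# file name is an example only
$ ifort -xCORE-AVX2 -axCORE-AVX512 -O2 -o my_app my_app.f90
</pre>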
<br>



= Job execution =
== How to submit jobs to the broadwell (= extension) partition ==
The job will be dispatched to the broadwell nodes regardless of which login node you submit from, as long as the partition is specified correctly, i.e.:
<pre>
$ sbatch -p multiple_e
</pre>
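A complete submission command typically adds the resource requests; the node count, walltime, and script name below are example values only:
<pre>
# request 2 broadwell nodes with all 28 cores each for 30 minutes
$ sbatch -p multiple_e -N 2 --ntasks-per-node=28 -t 00:30:00 my_job.sh
</pre>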


== Can I use my old multinode job script for the new broadwell partition? ==
Yes, but please note that all broadwell nodes have '''28 cores per node'''.
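A minimal batch script header for the broadwell partition could look like the following sketch; the resource values and the program name are placeholders, not recommendations:
<pre>
#!/bin/bash
#SBATCH --partition=multiple_e
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28   # broadwell nodes provide 28 cores per node
#SBATCH --time=01:00:00        # example walltime

# program name is a placeholder
mpirun ./my_app
</pre>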
[[Category:bwUniCluster]]
