BwUniCluster2.0/FAQ - broadwell partition


FAQs concerning best practice for the bwUniCluster broadwell partition (aka "extension" partition).

1 Login

1.1 Are there separate login nodes for the bwUniCluster broadwell partition?

  • Yes, but they are primarily intended for compiling code.

1.2 How to log in to the broadwell login nodes?

  • You can log in directly to the broadwell partition login nodes using:
$ ssh username@uc1e.scc.kit.edu
  • Note that code compiled on the broadwell login nodes will not run optimally on the new "Cascade Lake" nodes.
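
If you want to verify which architecture the login node you are on provides before compiling, one quick check (just a sketch using standard Linux tools) is to look at the CPU model:

# show the CPU model of the current login node
# (the uc1e login nodes report a Broadwell-generation Xeon)
$ lscpu | grep 'Model name'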


2 Compilation

2.1 How to compile code on broadwell (extension) nodes?

To build code that will be used only on the multiple_e partition, compile on the uc1e (extension) login nodes with:

$ icc/ifort -xHost [-further_options]
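
For illustration, a complete compile line on a uc1e login node might look like the following sketch; the module name, source file, and binary name are placeholders, so check module avail for the Intel compiler actually installed:

# load an Intel compiler (exact module name/version may differ)
$ module load compiler/intel
# optimize for the CPU of the uc1e login node (Broadwell, AVX2)
$ icc -xHost -O2 -o myprog myprog.c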

2.2 How to compile code to be used on ALL partitions?

On uc1e (= extension) login nodes:

$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]
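
Here -xCORE-AVX2 sets the baseline instruction set that all partitions support, while -axCORE-AVX512 adds a second, AVX-512 code path that the compiler's runtime dispatcher selects automatically on the Cascade Lake nodes. A sketch with placeholder file names:

# AVX2 baseline runs on every partition; the AVX-512 path is
# chosen automatically at run time on the Cascade Lake nodes
$ ifort -xCORE-AVX2 -axCORE-AVX512 -O2 -o myprog myprog.f90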


3 Job execution

3.1 How to submit jobs to the broadwell (= extension) partition?

Submitted jobs are placed on the broadwell nodes if the partition is specified correctly, i.e.:

$ sbatch -p multiple_e
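
In practice the partition option is combined with a job script, for example (the script name is a placeholder):

# submit an existing job script to the broadwell (extension) nodes
$ sbatch -p multiple_e myjob.sh

Alternatively, the partition can be requested with an #SBATCH --partition=multiple_e line inside the script itself, as in the sketch after the next answer.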

3.2 Can I use my old multinode job script for the new broadwell partition?

Yes, but please note that all broadwell nodes have 28 cores per node.
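
A minimal sketch of a multinode job script for the broadwell partition, assuming a pure MPI job; node count, walltime, and the executable are placeholders to adapt:

#!/bin/bash
#SBATCH --partition=multiple_e      # broadwell (extension) nodes
#SBATCH --nodes=2                   # example: two broadwell nodes
#SBATCH --ntasks-per-node=28        # broadwell nodes have 28 cores per node
#SBATCH --time=00:30:00             # example walltime

# load the modules your program needs (e.g. an MPI module), then run it;
# the executable name below is a placeholder
mpirun ./myprog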