BwUniCluster2.0/FAQ - broadwell partition

FAQs concerning best practices for the [[BwUniCluster_2.0_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka "extension" partition).

__TOC__

= Login =

== Are there separate login nodes for the bwUniCluster broadwell partition? ==

* Yes, but they are primarily intended for compiling code.
 
== How to log in to the broadwell login nodes? ==

* You can log in directly to the broadwell partition login nodes using
<pre>
$ ssh username@uc1e.scc.kit.edu
</pre>
* If you compile code on the broadwell login nodes, it will not run optimally on the new "Cascade Lake" nodes.
<br>
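To see which CPU generation the node you are logged in to provides, a minimal check (assuming the standard Linux tool lscpu is available) is:
<pre>
$ lscpu | grep "Model name"    # Broadwell CPUs are reported as Xeon "... v4" models
</pre>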
   
 
= Compilation =

== How to compile code on broadwell (extension) nodes? ==

To build code that will be used only on the multiple_e partition:
<pre>
$ icc/ifort -xHost [-further_options]
</pre>
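For example, a complete compile line could look as follows; the module name, source file and optimization level are placeholders and may differ on your system:
<pre>
$ module load compiler/intel          # load an Intel compiler module (exact module name/version may differ)
$ ifort -xHost -O2 -o my_program my_program.f90
</pre>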
   
== How to compile code to be used on ALL partitions? ==

On uc1e (= extension) login nodes:
<pre>
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]
</pre>
<br>
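Here -xCORE-AVX2 generates the baseline code path that the broadwell nodes can execute, while -axCORE-AVX512 adds an additional AVX-512 code path that is selected automatically at run time on the newer nodes. A complete invocation could look like this (file names are placeholders):
<pre>
$ icc -xCORE-AVX2 -axCORE-AVX512 -O2 -o my_program my_program.c
</pre>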
 
 
= Job execution =

== How to submit jobs to the broadwell (= extension) partition? ==

The submitted job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:
<pre>
$ sbatch -p multiple_e
</pre>
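To verify that a job has indeed been routed to the broadwell partition, you can inspect the queue with standard Slurm tools (the output format string is only an example):
<pre>
$ squeue -u $USER -o "%.10i %.12P %.20j %.8T"    # the PARTITION column should show multiple_e
</pre>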

== Can I use my old multinode job script for the new broadwell partition? ==

Yes, but please note that all broadwell nodes have '''28 cores per node'''.
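A minimal sketch of a batch script for this partition could look as follows; the node count, walltime and executable name are placeholders, not recommendations:
<pre>
#!/bin/bash
#SBATCH --partition=multiple_e      # broadwell (extension) partition
#SBATCH --nodes=2                   # example: two nodes
#SBATCH --ntasks-per-node=28        # broadwell nodes provide 28 cores per node
#SBATCH --time=00:30:00             # example walltime

mpirun ./my_program                 # placeholder executable
</pre>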
   
 
----
