BwUniCluster2.0/FAQ - broadwell partition

FAQs concerning best practices for the [[BwUniCluster_2.0_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka "extension" partition).

__TOC__

= Login =
 
== Are there separate login nodes for the bwUniCluster broadwell partition? ==

* Yes, but they are primarily intended for compiling code.

== How to log in to broadwell login nodes? ==

* You can log in directly to the broadwell partition login nodes using:
<pre>
$ ssh username@uc1e.scc.kit.edu
</pre>
* Note that code compiled on the broadwell login nodes will not run optimally on the newer "Cascade Lake" nodes; see the Compilation section below.
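
For frequent logins, an entry in ~/.ssh/config saves typing; a minimal sketch, where the alias uc1e and the username are placeholders:
<pre>
# ~/.ssh/config -- hypothetical entry, adjust the username
Host uc1e
    HostName uc1e.scc.kit.edu
    User username
</pre>
After that, "ssh uc1e" suffices.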
<br>
   
 
= Compilation =

== How to compile code on broadwell (extension) nodes? ==
 
To build code that will run only on the multiple_e partition (broadwell nodes):
<pre>
$ icc/ifort -xHost [-further_options]
</pre>
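
For illustration, a complete compile line for a small C program; file and binary names are placeholders:
<pre>
$ icc -xHost -O2 -o myprog myprog.c
</pre>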
   
== How to compile code to be used on ALL partitions? ==

On uc1e (= extension) login nodes:
<pre>
$ icc/ifort -xCORE-AVX2 -axCORE-AVX512 [-further_options]
</pre>
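
Here -xCORE-AVX2 sets an AVX2 baseline that the broadwell nodes support, while -axCORE-AVX512 adds an alternative AVX-512 code path that is selected at run time on the Cascade Lake nodes. For illustration, a complete compile line; file and binary names are placeholders:
<pre>
$ ifort -xCORE-AVX2 -axCORE-AVX512 -O2 -o myprog myprog.f90
</pre>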
<br>
   
 
= Job execution =

== How to submit jobs to the broadwell (= extension) partition? ==
 
Your job will be dispatched to the broadwell nodes if the partition is specified correctly, i.e.:
<pre>
$ sbatch -p multiple_e
</pre>
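
The partition can also be set inside the job script instead of on the command line. A minimal sketch of a multinode batch script; the node count, walltime, and the executable ./myprog are placeholders:
<pre>
#!/bin/bash
#SBATCH --partition=multiple_e   # broadwell (extension) partition
#SBATCH --nodes=2                # placeholder node count
#SBATCH --ntasks-per-node=28     # broadwell nodes have 28 cores each
#SBATCH --time=00:30:00          # placeholder walltime

mpirun ./myprog                  # placeholder MPI executable
</pre>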

== Can I use my old multinode job script for the new broadwell partition? ==

Yes, but please note that all broadwell nodes have '''28 cores per node'''.
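
If the old script targeted nodes with a different core count, the per-node task request should be updated to match, e.g.:
<pre>
#SBATCH --ntasks-per-node=28   # match the 28 cores of a broadwell node
</pre>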
   
 
----