FAQs concerning best practice of [[BwUniCluster_Hardware_and_Architecture#Components_of_bwUniCluster|bwUniCluster broadwell partition]] (aka "extension" partition).
   
 
__TOC__

= Login =

== Are there separate login nodes for the bwUniCluster broadwell partition? ==

* Yes, but primarily to be used for compiling code.

== How to log in to the broadwell login nodes? ==

* You can directly log in to the broadwell partition login nodes using:
<pre>
$ ssh username@uc1e.scc.kit.edu
</pre>
* If you log in to the ''old'' uc1 login nodes, you can still use the broadwell nodes via the same job submission procedure as for the other compute nodes (for example ''msub -q singlenode'' or ''msub -q multinode''); the submitted job will then also be distributed to the broadwell nodes.
* But for the compilation you have to use the appropriate flags (see the Compilation section below).
 
   
 
= Compilation =

== How to compile code working on broadwell (= extension) nodes? ==
 
On uc1 (= old) login nodes:
<pre>
icc/ifort -axCORE-AVX2
</pre>

On uc1e (= extension) login nodes:
<pre>
icc/ifort -xHost
</pre>
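
For illustration only, a complete compile command on the uc1e login nodes might look like the following sketch; the module name, source file and binary name are placeholders and not prescribed by this FAQ:
<pre>
# hypothetical example on the uc1e (extension) login nodes
$ module load compiler/intel          # placeholder: choose an Intel compiler module installed on the cluster
$ icc -O2 -xHost -o my_app my_app.c   # -xHost generates code for the broadwell CPUs of uc1e
</pre>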
  +
== How to compile code working on the old and extension partition? ==

On uc1e (= extension) login nodes:
<pre>
icc/ifort -xAVX -axCORE-AVX2
</pre>
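
As a sketch (file names are placeholders), a binary built this way contains an AVX baseline code path for the old nodes plus an alternative AVX2 code path that the Intel runtime selects automatically on the broadwell nodes:
<pre>
# hypothetical example on the uc1e login nodes: one binary for both partitions
$ ifort -O2 -xAVX -axCORE-AVX2 -o my_solver my_solver.f90
</pre>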
  +
== What happens with code compiled for the old partition running on the extension partition? ==

Code will run, but significantly slower, since AVX2 will not be used. Please recompile your code accordingly.
   
 
= Job execution =

== How to submit jobs to the broadwell (= extension) partition? ==
 
Whether it is submitted from the uc1 or the uc1e login nodes, the job will also be distributed to the broadwell nodes if the queue is specified correctly, i.e.:
<pre>
msub -q multinode
</pre>
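
A minimal Moab job script for the broadwell nodes could look like the following sketch. The queue is taken from above; the node count, processes per node, walltime, module name and executable are placeholders only and must be adapted to your job and to the current queue limits:
<pre>
#!/bin/bash
#MSUB -q multinode              # queue reaching the broadwell (extension) nodes
#MSUB -l nodes=2:ppn=28         # placeholder values: adapt to the broadwell node size
#MSUB -l walltime=00:30:00      # placeholder walltime
#MSUB -N broadwell_test

module load mpi/openmpi         # placeholder: load the MPI/compiler stack used at build time
mpirun ./my_mpi_app             # placeholder: MPI binary built with -axCORE-AVX2 or -xHost
</pre>
The script (here a hypothetical ''jobscript.sh'') is then submitted as usual with ''msub jobscript.sh''.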
 
----

[[Category:bwUniCluster]]
