BwUniCluster2.0/FAQ - broadwell partition


FAQs concerning best practices for the bwUniCluster broadwell partition (aka "extension" partition).

1 Login

1.1 Are there separate login nodes for the bwUniCluster broadwell partition?

  • Yes, but they are primarily intended for compiling code.

1.2 How to log in to the broadwell login nodes?

  • You can log in directly to the broadwell partition login nodes using
$ ssh username@uc1e.scc.kit.edu
  • Even if you log in to the old uc1 login nodes, you can still use the broadwell nodes as compute nodes via the usual job submission procedure (see section 3).
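
To verify that you have actually landed on an extension (broadwell) login node, a quick, unofficial sanity check of the CPU features can help:

$ hostname
$ grep -m1 -o avx2 /proc/cpuinfo    # prints "avx2" only if the CPU supports AVX2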


2 Compilation

2.1 How to compile code for the broadwell (extension) nodes?

On uc1 (old) login nodes:

$ icc/ifort -axCORE-AVX2 [-further_options]

On uc1e (extension) login nodes:

$ icc/ifort -xHost [-further_options]
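
As a concrete sketch (the source file hello.c and the -O2 level are illustrative assumptions, not site defaults), a full compile on the uc1e login nodes could look like this:

$ icc -xHost -O2 -o hello hello.c    # -xHost targets the login node's own CPU, i.e. broadwell/AVX2

and the equivalent on the old uc1 login nodes:

$ icc -axCORE-AVX2 -O2 -o hello hello.c    # adds an AVX2 code path for the broadwell nodes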

2.2 How to compile the same code for both the old and the extension partition?

On uc1e (= extension) login nodes:

$ icc/ifort -xAVX -axCORE-AVX2 [-further_options]
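
Here -xAVX sets an AVX baseline that still runs on the old nodes, while -axCORE-AVX2 adds an alternative AVX2 code path that the Intel runtime dispatcher selects automatically on the broadwell nodes. A sketch with an assumed Fortran source myprog.f90:

$ ifort -xAVX -axCORE-AVX2 -O2 -o myprog myprog.f90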

2.3 What happens with code compiled for the old partition when it runs on the extension partition?

The code will run, but significantly slower, since AVX2 instructions will not be used. Please recompile your code accordingly.
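
One rough way to see whether a binary contains AVX2/FMA code paths at all is to scan its disassembly for typical mnemonics; this is only a heuristic, and the binary name myprog is an assumption:

$ objdump -d myprog | grep -c -E 'vfmadd|vpgather'    # 0 suggests no AVX2/FMA instructions were generated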


3 Job execution

3.1 How to submit jobs to the broadwell (= extension) partition?

Regardless of which login node you submit from, the job will be placed on the broadwell nodes if the queue is specified correctly, i.e.:

$ msub -q multinode
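
A more complete submission with explicit resources might look like the following; the script name myjob.sh and the resource values are illustrative assumptions:

$ msub -q multinode -l nodes=2:ppn=28,walltime=00:30:00 myjob.sh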

3.2 Can I use my old multinode job script for the new broadwell partition?

Yes, but please note that all broadwell nodes have 28 cores per node; adjust your per-node core request accordingly (see the sketch below).
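
As a sketch of what an adapted job script header could look like (all names and values below are illustrative assumptions, not site defaults):

#!/bin/bash
#MSUB -q multinode
#MSUB -l nodes=2:ppn=28          # broadwell nodes provide 28 cores each, so adjust ppn from the old value
#MSUB -l walltime=00:30:00
#MSUB -N example_job             # assumed job name
mpirun ./myprog                  # assumed MPI executable; required module loads are omitted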