{|style="background:#deffee; width:100%;" |
|||
<font color=green size=+2>'''This article contains information on features of the batch job system only applicable on the "bwForCluster NEMO" in Freiburg.'''</font> |
|||
|style="padding:5px; background:#cef2e0; text-align:left"| |
|||
[[Image:Attention.svg|center|25px]] |
|||
|style="padding:5px; background:#cef2e0; text-align:left"| |
|||
A general description of Moab job scheduling and submitting options can be found in the general '''[[NEMO/Moab/General|Moab documentation]]''' wiki. |
|||
|} |
= Submitting Jobs on the bwForCluster NEMO =
This page describes the details of the queuing system specific to the bwForCluster NEMO.
'''Currently all worker nodes have 20 physical cores. Do not request more than 20 processes per node with the ppn flag (e.g. ppn=20).'''
If the requested ppn count exceeds this limit, the job will remain in the idle state, i.e. it will not start running. Unfortunately, msub does not warn you about invalid resource definitions; use <code>checkjob -v</code> to check whether your job definition is correct. Please choose a number of cores that divides 20 evenly (ppn=1, 2, 4, 5, 10 or 20), so that the resources can be used optimally.
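For example, after submitting a job you can verify the resource definition with <code>checkjob -v</code>. The script name and job ID in this sketch are hypothetical placeholders:

<pre>
# 2 nodes with 10 processes each; 10 divides 20 evenly
$ msub -l nodes=2:ppn=10,walltime=12:00:00 myjob.moab
12345

# check the resource definition and see why the job is (still) idle
$ checkjob -v 12345
</pre>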
Jobs run in a '''user only''' mode. This means that more than one job per user can run on the same worker node, if there are spare resources.
== Limits and Queues ==
<font color=green>'''Walltime and cores correlate: MAXPS limits the product of cores and walltime (cores x time). If you request more cores, you must request less walltime to run the same number of jobs.'''</font>
On the bwForCluster NEMO the standard queue '''should not be explicitly specified'''.
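In practice this simply means submitting without a <code>-q</code> option; the job is then routed to the default worker queue. The script name below is a hypothetical example:

<pre>
$ msub -l nodes=1:ppn=20,walltime=24:00:00 myjob.moab
</pre>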
* The maximum walltime for a job is 96 hours (4 days):

walltime=4:00:00:00 or walltime=96:00:00

* All nodes have 20 cores and 128 GB of RAM, so each core can use roughly 6 GB of RAM (pmem=6gb):

nodes=1:ppn=20

pmem=6gb

Four nodes have 256 GB and four nodes have 512 GB of RAM.

* The maximum number of cores in use at any time is 6000. We use MAXPE, which also takes memory into account, see [http://docs.adaptivecomputing.com/9-0-3/MWM/help.htm#topics/moabWorkloadManager/topics/schedBasics/environment.html#PEoverview processor equivalent]:

MAXPE: 4000 (soft limit), 6000 (hard limit)

* <font color=green>IMPORTANT MAXPS:</font> The maximum number of processor seconds that can be in use at any time is 456192000. Your usage increases with every running job and decreases as walltime passes, see [http://docs.adaptivecomputing.com/9-0-3/MWM/help.htm#topics/moabWorkloadManager/topics/fairness/throttlingpolicies.html#basic basic fairness policies]:

<font color=green>MAXPS:</font> 304128000 (soft limit), 456192000 (hard limit)

(calculation: 4 OPA islands * 44 nodes * 20 cores * 60 sec * 60 min * 24 h)

MAXPS = (# of cores) * walltime

Example: If a job uses 3520 cores for 24 hours, it reserves 3520 * 86400 = 304128000 processor seconds, which is the soft limit for the cluster. A quick way to estimate this for your own jobs is shown below.

showq -b -v   # shows when your jobs hit the limit
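The processor seconds a job reserves can be estimated directly in the shell; the job size in this sketch is hypothetical:

<pre>
# processor seconds = cores * walltime in seconds
# e.g. a job with 50 nodes * 20 cores and 48 hours of walltime:
$ echo $((50 * 20 * 48 * 3600))
172800000
</pre>

This stays below the 304128000 soft limit, so such a job would not be blocked by MAXPS on its own.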
{| width=750px class="wikitable"
! colspan="6" style="background-color:#999999;padding:3px"| Queues
|- style="text-align:left;"
! style="width:10%;padding:3px"| ''queue''
! style="width:5%;padding:3px"| ''node''
! style="width:15%;padding:3px"| ''default resources''
! style="padding:3px"| ''minimum resources''
! style="padding:3px"| ''theoretical maximum resources''
! style="padding:3px"| ''node access policy''
|- style="vertical-align:top; text-align:left"
| style="padding:3px"| '''do not specify'''
| style="padding:3px"| worker
| style="padding:3px"| ''nodes''=1:''ppn''=1, ''walltime''=01:00:00, ''pmem''=1000mb
| style="padding:3px"| ''nodes''=1:''ppn''=1
| style="padding:3px"| ''nodes''=300:''ppn''=20, ''walltime''=4:00:00:00, ''pmem''=6GB / ''pmem''=12GB (256 GB nodes) / ''pmem''=24GB (512 GB nodes)
| style="padding:3px"| single user
|- style="vertical-align:top; text-align:left"
| style="padding:3px"| '''express'''
| style="padding:3px"| worker / interactive
| style="padding:3px"| ''nodes''=1:''ppn''=1, ''walltime''=15:00, ''pmem''=1000mb
| style="padding:3px"| ''nodes''=1:''ppn''=1
| style="padding:3px"| ''nodes''=44:''ppn''=20, ''walltime''=15:00, ''pmem''=6GB
| style="padding:3px"| single user
|- style="vertical-align:top; text-align:left"
| style="padding:3px"| '''gpu'''
| style="padding:3px"| gpu
| style="padding:3px"| ''nodes''=1:''ppn''=1, ''walltime''=15:00, ''pmem''=1000mb
| style="padding:3px"| ''nodes''=1:''ppn''=1:''gpus''=1
| style="padding:3px"| ''nodes''=1:''ppn''=64:''gpus''=8, ''walltime''=4:00:00:00, ''pmem''=4GB
| style="padding:3px"| '''SHARED''' (SMT enabled)
|}
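For example, to use one of the large-memory nodes you request more memory per core than the standard 6 GB. The table's maximum values suggest that a request like the following (hypothetical script name) would have to be placed on a 256 GB node, assuming memory-based placement by the scheduler:

<pre>
# 20 cores * 12 GB = 240 GB, which only fits on a 256 GB (or larger) node
$ msub -l nodes=1:ppn=20,pmem=12gb,walltime=2:00:00:00 bigmem.moab
</pre>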
== Interactive Jobs ==
Interactive jobs must NOT run on the login nodes. Instead, request resources for an interactive job with msub. The following example starts an interactive session on one compute node with one core for one hour:
<pre>
$ msub -l nodes=1:ppn=1 -l walltime=1:00:00 -I
</pre>
The option "-I" means "interactive job". After execution of this command wait until the queuing system has granted you the requested resources. Once granted you will be automatically logged on the allocated compute node.
If you use applications or tools which provide a GUI, enable X-forwarding for your interactive session with:
<pre>
# use -Y for ssh X-forwarding
$ ssh -l <uid> -Y login.nemo.uni-freiburg.de

# use -X for X-forwarding
$ msub -l nodes=1:ppn=1,walltime=1:00:00 -I -X
</pre>
Once the walltime limit has been reached you will be automatically logged out from the compute node.
The option "-V" exports all environment variables to the compute node of the interactive session, but if you want to test your jobs, please aviod using "-V" since this alters your job environment.
=== Interactive GPU Jobs ===

If you add the '':gpus'' flag to your interactive jobs, you will get a node with a GPU:

<pre>
$ msub -l nodes=1:ppn=1:gpus=1,feature=t4 -I   # for the Nvidia T4 interactive node
$ msub -l nodes=1:ppn=1:gpus=1 -I              # for the Nvidia V100 node; shouldn't be used for interactive jobs (just for short testing of the V100)
</pre>
== Express Jobs ==
You can use the express queue to test batch jobs:
<pre>
$ msub -q express -l nodes=1:ppn=20 test.moab
</pre>

For defaults and maximum usage see the table in [[NEMO/Moab#Limits_and_Queues|1.1 Limits and Queues]].
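A minimal test.moab for such an express test could look like the following sketch; the echo commands are only placeholders for whatever you actually want to verify:

<pre>
#!/bin/bash
#MOAB -N EXPRESSTEST

# print the allocated node and visible core count
echo "Running on $(hostname)"
echo "Cores: $(nproc)"
</pre>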
== GPU Jobs ==
You can use the gpu queue if your jobs need graphics cards. The node has 32 cores with simultaneous multithreading (SMT) enabled, so up to 64 processes can be used, and eight Nvidia V100 GPUs. This node can be used by multiple users simultaneously (SHARED mode).
<pre>
$ msub -q gpu -l nodes=1:ppn=1:gpus=1 gpu.moab    # minimal job description
$ msub -q gpu -l nodes=1:ppn=64:gpus=8 gpu.moab   # maximum resources
</pre>
For defaults and maximum usage see the table in [[NEMO/Moab#Limits_and_Queues|1.1 Limits and Queues]].
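A minimal gpu.moab sketch that only verifies the GPU allocation before you add your real application:

<pre>
#!/bin/bash
#MOAB -N GPUTEST

# list the GPU(s) visible to this job
nvidia-smi
</pre>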
== NEW AMD ROME Nodes ==

There are four nodes with AMD Rome processors available on NEMO for evaluation purposes, each with 128 physical cores, 512 GiB of RAM and one Nvidia T4 GPU. SMT is currently disabled. To schedule your jobs on these machines, please add <code>-l feature=amd</code> or <code>-l feature=t4</code>; to use the Tesla card, additionally request <code>-l nodes=1:ppn=X:gpus=1</code>.

<pre>
$ msub -l nodes=1:ppn=1,feature=amd amd.moab                    # minimal job description
$ msub -l nodes=1:ppn=128:gpus=1,pmem=4G,feature=t4 amd.moab    # maximum resources
</pre>
== Monitor Running Jobs ==
Once your jobs are running, you can log in to the nodes on which they were scheduled. The allocated nodes are listed by checkjob.
<pre>
$ checkjob 12345
...
Allocated Nodes:
[n3101.nemo.privat:20][n3102.nemo.privat:20]
...
</pre>
You can then ssh into these nodes; the short host name is sufficient. Please log out once you are finished. You can also use the program pdsh to monitor your jobs non-interactively: pdsh determines where your job is running and runs a command on all nodes belonging to that job.
Interactive:
<pre>
$ ssh n3101
</pre>
Non-interactive with ssh:
<pre>
# run 'ps aux | grep <myjob>' on node n3101
$ ssh n3101 'ps aux | grep <myjob>'
</pre>
Non-interactive with pdsh:
<pre>
# run 'ps aux | grep <myjob>' on all nodes corresponding to jobid '12345'
$ pdsh -j 12345 'ps aux | grep <myjob>'
n3101: fr_uid 125068 101 0.0 39040 1684 ? Sl 12:15 0:25 <myjob>
n3102: fr_uid 125068 101 0.0 39040 1684 ? Sl 12:15 0:25 <myjob>

# run 'killall <myjob>' on all nodes corresponding to jobid '12345'
$ pdsh -j 12345 killall <myjob>

# works with array jobs as well
$ pdsh -j 12346[1] 'ps aux | grep <myjob>'
n3103: fr_uid 125068 101 0.0 39040 1684 ? Sl 12:15 0:25 <myjob>
</pre>
= Simple parallel jobs with job arrays =
A typical method to create "embarrassingly parallel" compute tasks is to slice a large data set into equally sized partitions and create jobs that work on their respective partition.
The manual management of hundreds of jobs becomes difficult, though. Therefore, using the job array feature is recommended.
== Job array example ==
Create the directory $HOME/arrayjob and, in it, the job description file $HOME/arrayjob/arrayjob.moab shown below.
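The directory can be created on a login node with, for example:

<pre>
$ mkdir -p $HOME/arrayjob
</pre>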
$HOME/arrayjob/arrayjob.moab:

<pre>
#!/bin/bash
#MOAB -N ARRAYJOB
#
# This is a workaround for a known bug:
# array jobs need to be given the output directory explicitly.
cd $HOME/arrayjob

# Now call the program which does the work, depending on the array index
python divide-and-conquer.py $MOAB_JOBARRAYINDEX
</pre>
Create the worker program (python in this example):
<pre>
#!/usr/bin/python
import sys

def main(argv):
    print("Executing work according to array index", argv[1])

if __name__ == "__main__":
    main(sys.argv)
</pre>
You can now submit the job for single array indices or index ranges:
<pre>
msub -t 11 arrayjob.moab
msub -t 23-42 arrayjob.moab
</pre>

[[Category:BwForCluster NEMO]]
[[Category:BwForCluster_NEMO_Specific_Batch_Features]]