BinAC/Quickstart Guide
Revision as of 16:17, 3 November 2020
File System Basics
The details of the file systems are explained here.
Home File System
Home directories are meant for permanent storage of files that are in continuous use, such as source code, configuration files, executable programs, conda environments, etc. The home file system is backed up daily and has a quota.
Work File System
Use the work file system and not your home directory for your calculations. Create a working directory using your username.
cd /beegfs/work/
mkdir <username>
cd <username>
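The same steps can be written without typing the name by using the shell's $USER variable; a small sketch (the `id -un` fallback is just a safety net for environments where $USER is unset):

```shell
# $USER expands to your username on the login node.
WORK="/beegfs/work/${USER:-$(id -un)}"   # fall back to id -un if USER is unset
echo "$WORK"
mkdir -p "$WORK" 2>/dev/null || true     # only succeeds on BinAC itself
```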
Do not use the login nodes to carry out any calculations or heavy load file transfers.
Temporary Data
If your job creates temporary data, you can use the fast SSD with a capacity of 211 GB on the compute nodes. The temporary directory for your job is available with the $TMPDIR environment variable.
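A minimal sketch of this staging pattern (the subdirectory and file names are illustrative assumptions, and the fallbacks let the snippet run outside a job): create a scratch directory under $TMPDIR, compute there, and copy results back to the work file system before the job ends, because data on $TMPDIR does not survive the job.

```shell
# Illustrative sketch: stage files through the node-local SSD.
# Outside of a job, $TMPDIR and $PBS_O_WORKDIR may be unset, hence the fallbacks.
SCRATCH="${TMPDIR:-/tmp}/myjob"      # "myjob" is a made-up subdirectory name
WORKDIR="${PBS_O_WORKDIR:-$PWD}"     # submission directory inside a job

mkdir -p "$SCRATCH"
echo "temporary data" > "$SCRATCH/intermediate.dat"   # stand-in for real computation
# ... run the actual computation against $SCRATCH here ...
cp "$SCRATCH/intermediate.dat" "$WORKDIR/"   # copy results back before the job ends
rm -rf "$SCRATCH"                            # clean up the node-local SSD
```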
Queue Basics
The most recent queue settings are displayed at login as the message of the day on the terminal.
Get an overview of the number of running and queued jobs:
$ qstat -q
Queue Memory CPU Time Walltime Node Run Que Lm State
---------------- ------ -------- -------- ---- --- --- -- -----
tiny -- -- -- -- 0 0 -- E R
long -- -- -- -- 850 0 -- E R
gpu -- -- -- -- 66 0 -- E R
smp -- -- -- -- 4 1 -- E R
short -- -- -- -- 131 90 -- E R
----- -----
1051 91
To check all running and queued jobs.
qstat
Just your own jobs.
qstat -u <username>
Interactive Jobs
Interactive jobs are a good method for testing if/how software works with your data.
To start a 1-core job on a compute node providing a remote shell.
qsub -q short -l nodes=1:ppn=1 -I
The same but requesting the whole node.
qsub -q short -l nodes=1:ppn=28 -I
Standard Unix commands are directly available, for everything else use the modules.
module avail
Just an example
module load chem/gromacs/4.6.7-gnu-4.9
g_luck
Be aware that we allow node sharing. Do not disturb the calculations of other users.
Simple Script Job
Use your favourite text editor to create a script called 'script.sh'.
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:05:00
#PBS -l mem=1gb
#PBS -S /bin/bash
#PBS -N Simple_Script_Job
#PBS -j oe
#PBS -o LOG
cd $PBS_O_WORKDIR
echo "my Username is:"
whoami
echo "My job is running on node:"
uname -a
Submit the job using
qsub -q tiny script.sh
Take a note of your jobID. The scheduler will reserve one core and 1 gigabyte of memory for 5 minutes on a compute node for your job. If the tiny queue is empty, the job should be scheduled within a minute and will write your username and the execution node into the output file.
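qsub prints the full job identifier on submission; the leading numeric part is what qstat and qdel need. A small sketch of splitting it off (the server-name suffix shown is a made-up example):

```shell
# Torque job IDs have the form "<number>.<server>"; strip the suffix
# with shell parameter expansion instead of calling cut or awk.
full_id="123456.mgmt01"    # stand-in for: full_id=$(qsub -q tiny script.sh)
jobid="${full_id%%.*}"
echo "$jobid"              # prints: 123456
```

The short form can then be passed to qstat or qdel to watch or remove the job.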
There are tons of options, details, and caveats. If anything is not working as you like, send an email to hpcmaster@uni-tuebingen.de.
Best practices for job scripts
Killing a Job
Let's assume you build a Homer and want to stop/kill/remove a running job.
qdel <jobID>