BinAC/Quickstart Guide
File System Basics
This section explains the basics of the available file systems.
Home File System
Home directories are meant for the permanent storage of files in continuous use, such as source code, configuration files, executable programs, conda environments, etc. The home file system is backed up daily and has a quota.
Work File System
Use the work file system, not your home directory, for your calculations. Create a working directory named after your username:
cd /beegfs/work/
mkdir <username>
cd <username>
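Alternatively, since the $USER environment variable already holds your username, a single command achieves the same (mkdir -p also avoids an error if the directory already exists):
mkdir -p /beegfs/work/$USER
cd /beegfs/work/$USER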
Do not use the login nodes to carry out any calculations or heavy load file transfers.
Temporary Data
If your job creates temporary data, you can use the fast local SSD (211 GB capacity) on the compute nodes. The temporary directory for your job is available via the $TMPDIR environment variable.
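A common pattern is to copy input data to $TMPDIR at the start of the job, run the computation on the fast local disk, and copy the results back to the work file system before the job ends. A minimal sketch for the body of a job script (the program and file names are placeholders, not actual BinAC paths):
cp /beegfs/work/<username>/input.dat $TMPDIR/
cd $TMPDIR
./my_program input.dat > output.dat   # placeholder for your actual program
cp $TMPDIR/output.dat /beegfs/work/<username>/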
Queue Basics
The most recent queue settings are displayed at login as the message of the day in the terminal.
Get an overview of the number of running and queued jobs:
$ qstat -q
Queue Memory CPU Time Walltime Node Run Que Lm State
---------------- ------ -------- -------- ---- --- --- -- -----
tiny -- -- -- -- 0 0 -- E R
long -- -- -- -- 850 0 -- E R
gpu -- -- -- -- 66 0 -- E R
smp -- -- -- -- 4 1 -- E R
short -- -- -- -- 131 90 -- E R
----- -----
1051 91
To check all running and queued jobs:
qstat
To list just your own jobs:
qstat -u <username>
Interactive Jobs
Interactive jobs are a good way to test whether and how your software works with your data.
To start a 1-core job on a compute node providing a remote shell:
qsub -q short -l nodes=1:ppn=1 -I
The same, but requesting the whole node:
qsub -q short -l nodes=1:ppn=28 -I
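Once the prompt appears, you are on the compute node itself. You can check the allocation, e.g. by counting the entries in $PBS_NODEFILE (one line per reserved core, so 28 for a full node), and leave the job with exit:
wc -l $PBS_NODEFILE
exit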
Standard Unix commands are directly available; for everything else, use the module system:
module avail
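For example, to load a module into your environment and list what is currently loaded (the module name is a placeholder; pick one from the module avail output):
module load <module_name>
module list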
Be aware that we allow node sharing. Do not disturb the calculations of other users.
Simple Script Job
Use your favourite text editor to create a script called 'script.sh':
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:05:00
#PBS -l mem=1gb
#PBS -S /bin/bash
#PBS -N Simple_Script_Job
#PBS -j oe
#PBS -o LOG
cd $PBS_O_WORKDIR
echo "my Username is:"
whoami
echo "My job is running on node:"
uname -a
Submit the job using:
qsub -q tiny script.sh
Take note of your jobID. The scheduler will reserve one core and 1 gigabyte of memory for 5 minutes on a compute node for your job. If the tiny queue is empty, the job should be scheduled within a minute and will write your username and the name of the execution node into the output file.
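You can check the state of the job with qstat and, after it has finished, inspect the output file (named LOG because of the #PBS -o directive) in the directory you submitted from:
qstat <jobID>
cat LOG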
There are tons of options, details and caveats. Most of the options are explained on this page, but be aware that there are some differences on BinAC.
If anything is not working as you would like, send an email to hpcmaster@uni-tuebingen.de.
Best practices for job scripts
The scheduler reserves computational resources (nodes, cores, GPUs, memory) for you for a specified period. By following some best practices, you can avoid many common problems in advance.
Use the reserved resources
Reserved resources (nodes, cores, GPUs, memory) are not available to other users and their jobs. It is your responsibility to ensure that your programs actually utilize the reserved resources.
An extreme example: you request a whole node (nodes=1:ppn=28), but your job uses just one core. The other 27 cores are idle. This is bad practice, so make sure that the programs you run really use the requested resources.
Another example is tools that do not benefit from an increasing number of cores. Please check the documentation of your tools and also the feedback files that report the CPU efficiency of your job:
[...]
CPU efficiency, 0-100% | 25.00
[...]
This job, for example, used only 25% of the available CPU resources.
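For multithreaded programs, one way to avoid such waste is to derive the thread count from the actually reserved cores instead of hard-coding it. A sketch for the body of a Torque job script (my_threaded_program is a placeholder):
# the node file contains one line per reserved core
NCORES=$(wc -l < $PBS_NODEFILE)
export OMP_NUM_THREADS=$NCORES   # assuming an OpenMP program
./my_threaded_program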
Specify memory for your job
We often get tickets with questions like "Why did the system kill my job?". Most often, the user did not specify the required memory resources for the job. Then the following happens:
The job is started on a compute node, where it shares resources with other jobs. Let us assume the other jobs already occupy 100 gigabytes of memory. Now your job tries to allocate 40 gigabytes of memory. But as the compute node has only 128 gigabytes in total, your job crashes because it cannot allocate that much memory.
You can make your life easier by specifying the required memory in your job script with:
#PBS -l mem=xxgb
Then your job is guaranteed to be able to allocate xx gigabytes of memory.
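For the 40 gigabyte scenario above, the request would look like this (in practice, add a small safety margin on top of the expected peak usage):
#PBS -l mem=40gb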
If you do not know how much memory your job will need, look into the documentation of the tools you use or ask us.
Killing a Job
Let's assume you built a Homer and want to stop/kill/remove a running job:
qdel <jobID>
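The jobID is printed by qsub when you submit the job and is also shown in the first column of the qstat output. For example (the ID is illustrative):
qdel 123456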