BinAC/Quickstart Guide
Purpose and Goals
The Getting Started guide is designed for users who are new to HPC systems in general and to BinAC specifically. After reading this guide, you should have a basic understanding of how to use BinAC for your research.
Please note that this guide does not cover basic Linux command line skills. If you're unfamiliar with commands such as listing directory contents or using a text editor, we recommend first exploring the Linux module on the bwHPC training platform (https://training.bwhpc.de/).
This guide also doesn't cover every feature of the system but aims to provide a broad overview. For more detailed information about specific features, please refer to the dedicated Wiki pages on topics like the batch system, storage, and more.
Some terms in this guide may be unfamiliar. You can look them up in the HPC Glossary.
General Workflow of Running a Calculation
On an HPC Cluster, you do not simply log in and run your software. Instead, you write a Batch Script that contains all the commands needed to run and process your job and submit it to a waiting queue, to be run on one of several hundred Compute Nodes.
Get Access to the Cluster
Follow the registration process for the bwForCluster. → How to Register for a bwForCluster
Login to the Cluster
Set up your service password and 2FA token, then log in to the cluster. → Login BinAC
Using the Linux Commandline
HPC Wiki (external site) → Introduction to Linux Commandline
Training course → Linux course on training.bwhpc.de
Also see: .bashrc Do's and Don'ts
File System Basics
The details of the file systems are explained on the BinAC hardware and architecture page (https://wiki.bwhpc.de/e/BwForCluster_BinAC_Hardware_and_Architecture#Storage_Architecture).
Home File System
Home directories are meant for the permanent storage of files that are used again and again, such as source code, configuration files, executable programs, conda environments, etc. The home file system is backed up daily and has a quota. If that quota is reached, you will usually experience problems when working on the cluster.
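The exact quota limits and the cluster's quota-reporting command are not covered in this quickstart; as a rough, file-system-independent check of how much space your home directory currently occupies, you can use the standard du command:
# summarize the size of your home directory (may take a while if you have many files)
du -sh $HOME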
Work File System
Use the work file system, not your home directory, for your calculations and data. Create a work directory; users usually name it after their username:
cd /beegfs/work/
# create a directory named after your username
mkdir $USER
# restrict access to yourself (you can open it up for your project group later, see below)
chmod 700 $USER
cd $USER
Do not use the login nodes to carry out any calculations or heavy file transfers.
How to share data with coworkers?
You can share data with coworkers who are members of the same compute project by modifying the file ownership and permissions of a directory in the work file system. Every compute project has its own group on BinAC, named after the project's acronym.
In this example, the user tu_iioba01 creates a directory and wants to share it with members of the compute project bw16f003.
$ id tu_iioba01
uid=900102(tu_iioba01) gid=500001(tu_tu) groups=500001(tu_tu),500002(bw16f003)
To share data with coworkers in the same compute project, change the group of the directory you want to share to the project acronym. You can also set the so-called SGID bit; new files and subdirectories will then automatically belong to this group.
# Change ownership. This command changes the ownership of ALL files and subdirectories.
$ chown -R <username>:<acronym> /beegfs/work/<your directory>
# Set SGID-Bit
$ chmod g+s /beegfs/work/<your directory>
# Show the directory's permissions and owner
$ ls -ld /beegfs/work/<your directory>
drwxr-sr-x 3 tu_iioba01 bw16f003 1 Apr 16 15:06 /beegfs/work/<your directory>
The group of the directory is now bw16f003. The permissions for the group are 'r-s'. This means members of the group can access the directory's content, and newly created subdirectories/files automatically belong to the group bw16f003.
Now you can set the file permissions for your coworkers in this directory as needed by granting read, write, and execute permissions to files and subdirectories.
# Some examples
# Coworkers can read, write, delete, and execute files in the directory
$ chmod 770 /beegfs/work/<your directory>
# Coworkers can read, owner still can write/delete files
$ chmod 740 /beegfs/work/<your directory>/<important_dataset>
For a more detailed explanation of file permissions, please refer to this tutorial: https://www.linuxfoundation.org/blog/blog/classic-sysadmin-understanding-linux-file-permissions
Temporary Data
If your job creates temporary data, you can use the fast SSD with a capacity of 211 GB on the compute nodes. The temporary directory for your job is available via the $TMPDIR environment variable.
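A minimal sketch of how $TMPDIR can be used in a job script: copy the input to the node-local SSD, compute there, and copy the results back to the work file system before the job ends. The input/output names and my_tool are placeholders, and $TMPDIR is typically cleaned up when the job finishes.
cd $PBS_O_WORKDIR
# copy input data to the fast node-local SSD
cp input.dat $TMPDIR/
cd $TMPDIR
# run the computation on the local disk (my_tool is a placeholder for your program)
my_tool input.dat > result.out
# copy the results back to the work file system before the job ends
cp result.out /beegfs/work/$USER/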
Batch System Basics
On cluster systems like BinAC, you do not run your analysis by hand on the login node. Instead, you write a script and submit it to the batch system; this is called a job. The batch system then tries to schedule the jobs on the available compute nodes.
Queue/Job Basics
The cluster consists of compute nodes with different hardware features (see https://wiki.bwhpc.de/e/BwForCluster_BinAC_Hardware_and_Architecture). These hardware features (e.g. high-mem or GPUs) are only available when submitting jobs to the corresponding queue (see https://wiki.bwhpc.de/e/Queues). Each queue also has different settings regarding the maximum walltime. The most recent queue settings are displayed at login as the message of the day in the terminal.
Get an overview of the number of running and queued jobs:
$ qstat -q
Queue Memory CPU Time Walltime Node Run Que Lm State
---------------- ------ -------- -------- ---- --- --- -- -----
tiny -- -- -- -- 0 0 -- E R
long -- -- -- -- 850 0 -- E R
gpu -- -- -- -- 66 0 -- E R
smp -- -- -- -- 4 1 -- E R
short -- -- -- -- 131 90 -- E R
----- -----
1051 91
To check all running and queued jobs:
qstat
To check just your own jobs:
qstat -u <username>
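To see the full details of a single job (requested resources, state, assigned node), you can also pass the job ID to qstat; this is standard Torque usage and should work on BinAC as well:
qstat -f <jobID>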
Interactive Jobs
Interactive jobs are a good method for testing if/how software works with your data.
To start a 1-core job on a compute node providing a remote shell:
qsub -q short -l nodes=1:ppn=1 -I
The same, but requesting the whole node:
qsub -q short -l nodes=1:ppn=28 -I
Standard Unix commands are directly available; for everything else, use the modules.
module avail
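For example, to load one of the listed modules in your interactive session (the Bowtie 2 module from the Software section below is used here; the exact version string available on BinAC may differ):
module load bio/bowtie2/2.4.1
bowtie2 --version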
Be aware that we allow node sharing. Do not disturb the calculations of other users.
Simple Script Job
Use your favourite text editor to create a script called 'script.sh'.
Please note that there are differences between Windows and Linux line endings.
Make sure that your editor uses Linux line endings when you are using Windows.
You can check your line endings with vim -b <your script>; Windows line endings will be displayed as ^M.
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:05:00
#PBS -l mem=1gb
#PBS -S /bin/bash
#PBS -N Simple_Script_Job
#PBS -j oe
#PBS -o LOG
cd $PBS_O_WORKDIR
echo "my Username is:"
whoami
echo "My job is running on node:"
uname -a
Submit the job using
qsub -q tiny script.sh
Take note of your job ID. The scheduler will reserve one core and 1 gigabyte of memory for 5 minutes on a compute node for your job. If the tiny queue is empty, the job should be scheduled within a minute and will write your username and the execution node into the output file.
There are tons of options, details, and caveats. Most of the options are explained at https://wiki.bwhpc.de/e/Batch_Jobs, but be aware that there are some differences on BinAC (https://wiki.bwhpc.de/e/BwForCluster_BinAC_Specific_Batch_Features).
If your job needs GPUs, you have to specify how many GPUs you want. Just submitting the job to the GPU queue does not work:
#PBS -l nodes=1:ppn=1:gpus=1
#PBS -q gpu
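A minimal GPU job script could look like the following sketch; the walltime, memory, and the nvidia-smi call are only illustrative:
#PBS -q gpu
#PBS -l nodes=1:ppn=1:gpus=1
#PBS -l walltime=01:00:00
#PBS -l mem=8gb
#PBS -S /bin/bash
#PBS -N GPU_Test_Job
#PBS -j oe
#PBS -o LOG

cd $PBS_O_WORKDIR
# print the GPU(s) assigned to this job
nvidia-smi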
If you encounter any problems, just send a mail to hpcmaster@uni-tuebingen.de.
Killing a Job
Let's assume you made a mistake and want to stop/kill/remove a running job.
qdel <jobID>
Best Practices
The scheduler will reserve computational resources (nodes, cores, GPUs, memory) for you for a specified period. By following some best practices, you can avoid common problems.
Specify memory for your job
We often get tickets with questions like "Why did the system kill my job?". Most often, the user did not specify the required memory resources for the job. Then the following happens:
The job is started on a compute node, where it shares resources with other jobs. Let us assume that the other jobs on this node already occupy 100 gigabytes of memory. Now your job tries to allocate 40 gigabytes of memory. As the compute node has only 128 gigabytes, your job crashes because it cannot allocate that much memory.
You can make your life easier by specifying the required memory in your job script with:
#PBS -l mem=xxgb
You then have the guarantee that your job can allocate xx gigabytes of memory.
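For the 40-gigabyte scenario described above, the request could look like this (the values are only an example; adjust them to your job):
#PBS -l nodes=1:ppn=4
#PBS -l mem=40gb
#PBS -l walltime=02:00:00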
If you do not know how much memory your job will need, look into the documentation of the tools you use or ask us. We have also started a wiki page (https://wiki.bwhpc.de/e/Memory_Usage) on which we will document guidelines and pitfalls for specific tools.
Use the reserved resources
Reserved resources (nodes, cores, GPUs, memory) are not available to other users and their jobs. It is your responsibility to ensure that your programs actually utilize the reserved resources.
An extreme example: you request a whole node (nodes=1:ppn=28), but your job uses just one core. The other 27 cores are idling. This is bad practice, so make sure the programs you run really use the requested resources.
Another example is tools that do not benefit from an increasing number of cores. Please check the documentation of your tools and also check the feedback files that report the CPU efficiency of your job.
[...]
CPU efficiency, 0-100% | 25.00
[...]
This job, for example, used only 25% of the available CPU resources.
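One way to keep the thread count in sync with your request is to read the core count from the batch system instead of hard-coding it. A sketch, assuming the Torque environment variable $PBS_NUM_PPN is set on the compute node (check with env in a test job); my_tool and its --threads option are placeholders for your program:
#PBS -l nodes=1:ppn=8
#PBS -l walltime=01:00:00
#PBS -l mem=16gb

cd $PBS_O_WORKDIR
# use as many threads as cores were requested (PBS_NUM_PPN assumed to be set by Torque)
my_tool --threads $PBS_NUM_PPN input.dat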
Software
There are several mechanisms by which software can be installed on BinAC. If you need software that is not installed on BinAC, you can open a ticket and we will find a way to provide the software on the cluster.
Environment Modules
Environment Modules are the 'classic' way of providing software on clusters. A module provides a specific software version and can be loaded on demand. The module system then manipulates PATH and other environment variables so that the software can be used.
# Show available modules
$ module avail
# Load a module
$ module load bio/bowtie2/2.4.1
# Show the module's help
$ module help bio/bowtie2/2.4.1
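Two further standard module commands that are often useful, e.g. at the top of a job script:
# Show the modules currently loaded in your session
$ module list
# Unload all currently loaded modules
$ module purge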
A more detailed description of environment modules can be found at https://wiki.bwhpc.de/e/Environment_Modules.
Sometimes software packages have so many dependencies, or you need a particular combination of tools, that environment modules cannot be used in a meaningful way. In such cases, other solutions like conda environments or Singularity containers (see below) can be used.
Conda Environments
Conda environments are a convenient way of creating custom software environments on the cluster, as the majority of scientific software is nowadays available as conda packages. First, you have to install Miniconda in your home directory.
# Download installer
$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ sh Miniconda3-latest-Linux-x86_64.sh
$ source ~/.bashrc
Then you can create your first environment and install software into it:
# Create an environment
$ conda create --name my_first_conda_environment
# Activate this environment
$ conda activate my_first_conda_environment
# Install software into this environment
$ conda install scipy=1.5.2
You will need to add these lines to your job scripts so that the environments are available on the compute nodes:
source $HOME/miniconda3/etc/profile.d/conda.sh
conda activate <env_name>
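Put together, a minimal job script that uses a conda environment could look like this sketch (the resources, the environment name, and analysis.py are placeholders):
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:30:00
#PBS -l mem=4gb
#PBS -S /bin/bash
#PBS -N Conda_Job
#PBS -j oe
#PBS -o LOG

cd $PBS_O_WORKDIR
source $HOME/miniconda3/etc/profile.d/conda.sh
conda activate my_first_conda_environment
# run a script (placeholder) that uses the packages installed in the environment
python analysis.py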
When installing software, conda resolves dependencies on the fly. However, it is not guaranteed that conda will use the exact same package versions in the future. For the sake of reproducibility, you can write a file containing all conda packages together with their versions:
# Export packages installed in the active environment
$ conda list --explicit > spec-file.txt
# Create a new environment with the exact same conda packages
$ conda create --name myenv --file spec-file.txt
Singularity Container
Sometimes software is also available in a software container format. Singularity is installed on all BinAC nodes. You can pull Singularity or Docker containers from registries onto BinAC and use them. You can also build new Singularity containers on your own machine and copy them to BinAC.
Please note that Singularity containers should be stored in the work file system; we have configured Singularity so that containers stored in your home directory do not work.
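As a quick sketch of the typical workflow (the Ubuntu image is just an example; any registry image works the same way):
cd /beegfs/work/$USER
# pull a Docker image from a registry and convert it to a Singularity image file
singularity pull docker://ubuntu:22.04
# run a command inside the container
singularity exec ubuntu_22.04.sif cat /etc/os-release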