JUSTUS2/Software/Singularity

From bwHPC Wiki
Latest revision as of 18:09, 28 November 2025

Description            Content
-------------------    --------------------------------------------------------------------------
License                Open-source software, distributed under the 3-clause BSD License. More...
Citing                 ---
Links                  Homepage | Documentation
Graphical Interface    No

Description

Singularity is a container platform.

License

Singularity is free, open-source software released under the 3-clause BSD license. Please read the license for additional information about Singularity.

Usage

Availability

Singularity is directly available on all compute nodes (but not on the login nodes). You do not have to load a module, but you must use a compute node, e.g. via an interactive job.
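For example, an interactive session on a compute node can be requested with srun; once on the node, singularity can be invoked directly (the resource and time values below are placeholders to adapt to your needs):

# request an interactive shell on a compute node
$ srun --nodes=1 --ntasks=1 --time=00:30:00 --pty bash

# singularity is available without loading a module
$ singularity --version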

Program Binaries

The binary singularity is the main program of the container platform.

To get help using Singularity execute the following command:

$ singularity --help

Furthermore, a man page is available and can be accessed by typing:

$ man singularity

For additional information about how to use Singularity, please consult the documentation.

Batch jobs with containers

Batch jobs utilizing Singularity containers are built the same way as other batch jobs; the job script simply contains the singularity commands to be executed. For example:

#!/bin/bash
# Allocate one node
#SBATCH --nodes=1
# Number of program instances to be executed
#SBATCH --tasks-per-node=4
# 16 GB memory required per node
#SBATCH --mem=16G
# Maximum run time of job
#SBATCH --time=1:00:00
# Give job a reasonable name
#SBATCH --job-name=Singularity
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=singularity_job-%j.out
# File name for error output
#SBATCH --error=singularity_job-%j.err

cd your/workspace

# Run container (two options to start a container)
singularity run [options] <container>
singularity exec [options] <container> <command>
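When several program instances are requested (as with --tasks-per-node above), the container command can be launched once per Slurm task through srun. This is a hedged sketch of the usual pattern, assuming an MPI-enabled application inside the container:

# start one container instance per Slurm task
srun singularity exec [options] <container> <mpi_application>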

Keep in mind that other modules you may have loaded will not be available inside the container.
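Environment variables can still be passed from the host into the container by prefixing them with SINGULARITYENV_, and host directories can be made visible with the --bind option; both are standard Singularity mechanisms, shown here with placeholder names:

# FOO will be visible inside the container as $FOO
$ export SINGULARITYENV_FOO=bar

# make a host directory available inside the container
$ singularity exec --bind /path/on/host:/path/in/container <container> <command>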

Using GPUs

#!/bin/bash
[…]
# Allocate one GPU per node
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
[…]

cd your/workspace

# Run container (two options to start a container)
singularity run --nv [options] <container>
singularity exec --nv [options] <container> <command>

Using the --nv flag is advisable, but it may be omitted if the correct GPU and driver APIs are already available inside the container.
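Whether the GPUs are actually visible inside the container can be checked by running nvidia-smi through singularity exec:

$ singularity exec --nv <container> nvidia-smi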

Examples

Run your first container on JUSTUS 2

Build a TensorFlow container with Singularity and execute a Python command:

# request interactive node with GPUs
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash

# create workspace and navigate into it
$ WORKSPACE=`ws_allocate tensorflow 3`
$ cd $WORKSPACE

# build container
$ singularity build tensorflow-20.11-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:20.11-tf2-py3

# execute Python command
$ singularity exec --nv tensorflow-20.11-tf2-py3.sif python -c 'import tensorflow as tf; \
  print("Num GPUs Available: ",len(tf.config.experimental.list_physical_devices("GPU")))'

Note: Ready-to-use containers can be pulled from the NVIDIA GPU CLOUD (NGC) catalog.
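Metadata of a pulled image, such as its labels and run-script, can be displayed with singularity inspect:

$ singularity inspect tensorflow-20.11-tf2-py3.sif
$ singularity inspect --runscript tensorflow-20.11-tf2-py3.sif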


Batch jobs with containers on JUSTUS 2

Run a GROMACS container with Singularity as a batch job:

$ salloc --ntasks=1                                                         # obtain compute node
$ WORKSPACE=`ws_allocate gromacs 3`                                         # allocate workspace
$ cd $WORKSPACE                                                             # change to workspace
$ singularity pull gromacs-2020_2.sif docker://nvcr.io/hpc/gromacs:2020.2   # pull container from NGC
$ cp -r /opt/bwhpc/common/chem/ngc/gromacs/ ./bwhpc-examples/               # copy example to workspace
$ cd ./bwhpc-examples                                                       # change to example directory
$ sbatch gromacs-2020.2_gpu.slurm                                           # submit job
$ squeue                                                                    # obtain JOBID
$ scontrol show job <JOBID>                                                 # check state of job

More batch job examples are located at /opt/bwhpc/common/chem/ngc.

Useful links

* Wikipedia article (english): https://en.wikipedia.org/wiki/Singularity_(software)