Helix/Software/Singularity

The main documentation is available via module help system/singularity on the cluster. Most software modules for applications provide working example batch scripts.
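For example, the module can be loaded and the installation checked like this (a minimal sketch; the reported version depends on the cluster installation):

$ module load system/singularity
$ singularity --version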


module load: system/singularity
License: Open-source software, distributed under the 3-clause BSD License
Citing: ---
Links: Homepage | Documentation
Graphical Interface: No

Description

Singularity is a container platform.

Because not every software configuration can be provided as Modules on the clusters, containers offer a way to use pre-built scientific software in a closed and reproducible environment, independent of the host system. A Singularity container contains its own operating system, the intended software and all required dependencies, except for kernel components (e.g. drivers). This also means that you can use software that is not available for RHEL/CentOS but is offered for other Linux distributions. Singularity containers are easily moved between systems and do not require root privileges for execution (unlike Docker).
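For illustration, a container image built for another Linux distribution can be pulled from a public registry and used directly, for example from Docker Hub (the image ubuntu:22.04 and the file name ubuntu.sif below are placeholder examples):

# Pull an image from Docker Hub and run a command inside it
$ singularity pull ubuntu.sif docker://ubuntu:22.04
$ singularity exec ubuntu.sif cat /etc/os-release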

License

Singularity is free, open-source software released under the 3-clause BSD license. Please read the license for additional information about Singularity.

Usage

Program Binaries

The binary singularity is the main program of the container platform.

To get help using Singularity execute the following command:

$ singularity --help

Furthermore, a man page is available and can be accessed by typing:

$ man singularity

For additional information about how to use Singularity, please consult the documentation.

Running containers

On the cluster, the easiest way to run one-line commands (including commands that execute scripts or other files) is to pass them to the container through exec:

$ singularity exec <containername> script.sh


exec is generally the easiest way to use Singularity in a script. Without further options, the command runs inside the container in the current working directory.
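A minimal sketch of a Slurm batch script that runs a command inside a container with exec (the container name my_container.sif, the script name and the resource values are placeholders, not taken from this page):

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --mem=4gb

# Load the Singularity module and run the script inside the container
module load system/singularity
singularity exec my_container.sif ./script.sh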

An alternative option, run, executes the %runscript that was provided in the definition file during the build process:

$ singularity run <containername>

This is useful, for example, to start an installed program.
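For reference, a minimal sketch of a definition file with a %runscript (the base image and the installed program are placeholder assumptions; building a container usually requires root privileges and is therefore typically done outside the cluster):

Bootstrap: docker
From: ubuntu:22.04

%post
    # Runs at build time: install the intended software
    apt-get update && apt-get install -y python3

%runscript
    # Executed by "singularity run <containername>"
    exec python3 --version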

Using NVIDIA GPUs

Singularity can interact with NVIDIA GPUs by setting the (experimental) --nv flag:

# Run container (two options to start a container)
singularity run --nv [options] <container>
singularity exec --nv [options] <container> <command>

Using the flag is advisable, but it may be omitted if the correct GPU and driver APIs are already available inside the container.
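A simple way to check that the GPU is visible inside the container is to run nvidia-smi through the container (my_container.sif is a placeholder; the command must be run on a GPU node):

$ singularity exec --nv my_container.sif nvidia-smi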


Useful links