JUSTUS2/Software/Singularity

Description          Content
module load          system/singularity
Availability         BwForCluster JUSTUS 2
License              Open-source software, distributed under the 3-clause BSD License
Citing               ---
Links                Homepage | Documentation
Graphical Interface  No

1 Description

Singularity is a container platform.

2 License

Singularity is free, open-source software released under the 3-clause BSD license. Please read the license for additional information about Singularity.

3 Usage

3.1 Loading the module

You can load the default version of Singularity with the following command:

$ module load system/singularity

If you wish to load another (older) version of Singularity, you can do so using

$ module load system/singularity/<version>

with <version> specifying the desired version.
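To see which versions are installed, the corresponding modules can be listed first. The following is a short sketch; the version number below is only a placeholder, use one reported by module avail:

$ module avail system/singularity          # list all installed Singularity modules
$ module load system/singularity/3.5       # placeholder version, replace with a listed one
$ singularity --version                    # verify which version is now active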

Important: On JUSTUS 2, Singularity is available on all compute nodes. You do not have to load a module.

3.2 Program Binaries

The binary singularity is the main program of the container platform.

To get help using Singularity execute the following command:

$ singularity --help
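
Help is also available for individual subcommands and their subcommands, for example:

$ singularity help <command> [<subcommand>]
$ singularity help build                   # help on building container images
$ singularity help instance start          # help on starting container instances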

Furthermore, a man page is available and can be accessed by typing:

$ man singularity

For additional information about how to use Singularity, please consult the documentation.

Important: On JUSTUS 2, Singularity is only available on compute nodes. You must switch to such a node to invoke any singularity commands.
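
An interactive shell on a compute node can be obtained with srun. The following is a minimal sketch using standard Slurm options; the examples further down use the same mechanism with GPU options:

$ srun --nodes=1 --ntasks=1 --pty bash     # request an interactive shell on a compute node
$ singularity --help                       # singularity is now available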

3.3 Batch jobs with containers

Batch jobs that use Singularity containers are built in the same way as all other batch jobs; the job script simply contains the singularity commands. For example:

#!/bin/bash
# Allocate one node
#SBATCH --nodes=1
# Number of program instances to be executed
#SBATCH --tasks-per-node=4
# 16 GB memory required per node
#SBATCH --mem=16G
# Maximum run time of job
#SBATCH --time=1:00:00
# Give job a reasonable name
#SBATCH --job-name=Singularity
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=singularity_job-%j.out
# File name for error output
#SBATCH --error=singularity_job-%j.err

module load your/singularity/version  # (not needed on JUSTUS 2, but may be necessary on other systems)

cd your/workspace

# Run container (two options to start a container)
singularity run [options] <container>
singularity exec [options] <container> <command>

Keep in mind that other modules you may have loaded will not be available inside the container.
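
Assuming the script above has been saved as singularity_job.slurm (the file name is only an example), it is submitted and monitored like any other Slurm job:

$ sbatch singularity_job.slurm             # submit the job script (example file name)
$ squeue                                   # obtain JOBID and check the state of the job
$ scontrol show job <JOBID>                # show detailed information about the job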

3.3.1 Using GPUs

To use GPUs inside the container, request GPU resources in the job script and pass the --nv flag to singularity run or singularity exec:

#!/bin/bash
[…]
# Allocate one GPU per node
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
[…]

module load your/singularity/version  # (not needed on JUSTUS 2, but may be necessary on other systems)

cd your/workspace

# Run container (two options to start a container)
singularity run --nv [options] <container>
singularity exec --nv [options] <container> <command>

Using the --nv flag is advisable, but it may be omitted if the required GPU and driver APIs are already available inside the container.
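
A quick way to check that the GPUs are actually visible inside the container is to run nvidia-smi through it (this assumes nvidia-smi is present in the container image):

# check GPU visibility from inside the container (nvidia-smi must exist in the image)
singularity exec --nv <container> nvidia-smi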

4 Examples

4.1 Run your first container on JUSTUS 2

Build a TensorFlow container with Singularity and execute a Python command inside it:

# request interactive node with GPUs
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash

# create workspace and navigate into it
$ WORKSPACE=`ws_allocate tensorflow 3`
$ cd $WORKSPACE

# build container
$ singularity build tensorflow-20.11-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:20.11-tf2-py3

# execute Python command
$ singularity exec --nv tensorflow-20.11-tf2-py3.sif python -c 'import tensorflow as tf; \
  print("Num GPUs Available: ",len(tf.config.experimental.list_physical_devices("GPU")))'

Note: Ready-to-use containers can be pulled from the NVIDIA GPU CLOUD (NGC) catalog.
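
Instead of building the image, the same container can also be pulled directly from the NGC registry; singularity pull is the same mechanism used in the GROMACS batch job example below:

# pull the TensorFlow image from NGC instead of building it locally
$ singularity pull tensorflow-20.11-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:20.11-tf2-py3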

4.2 NGC environment modules on JUSTUS 2

1) Prepare a workspace to store the container

$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash
$ WORKSPACE=`ws_allocate ngc 3`
$ cd $WORKSPACE
$ export NGC_IMAGE_DIR=$(pwd)

Important: Containers can only run in workspaces.
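
Workspaces are managed with the usual workspace tools. As a sketch, assuming the standard ws_list and ws_extend commands of the bwHPC workspace mechanism, existing workspaces can be listed and their lifetime extended:

$ ws_list                                  # list your workspaces and their expiration dates
$ ws_extend ngc 7                          # extend the lifetime of workspace "ngc" (extensions are limited)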

2) PyTorch container

$ module load ngc/numlib
$ module load 20.12-torch-py3
$ python3
>>> import torch
>>> x = torch.randn(2,3)
>>> print(x)
>>> quit()
$ module unload 20.12-torch-py3
$ module unload ngc/numlib

Note: Use the container in the same manner as an interactive shell.
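
The same check can also be run non-interactively, which is convenient inside batch scripts; torch.cuda.is_available() additionally reports whether a GPU can be used:

$ python3 -c 'import torch; print(torch.randn(2,3)); print("CUDA available:", torch.cuda.is_available())'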

3) LAMMPS container

$ module load ngc/chem
$ module avail
$ module load 29Oct2020-lammps
$ wget https://lammps.sandia.gov/inputs/in.lj.txt
$ export SINGULARITY_BINDPATH=$(pwd)
$ mpirun -n 2 lmp -in in.lj.txt -var x 8 -var y 8 -var z 8 -k on g 2 -sf kk -pk kokkos cuda/aware on neigh full \
  comm device binsize 2.8
$ module unload 29Oct2020-lammps
$ module unload ngc/chem

Note: Use SINGULARITY_BINDPATH=<PATH> to mount the directory with the input file.
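
SINGULARITY_BINDPATH accepts a comma-separated list, so several host directories can be made visible inside the container at once (the second path below is only a placeholder):

# bind the current directory plus an additional (hypothetical) data directory
$ export SINGULARITY_BINDPATH=$(pwd),/path/to/your/data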


Currently, two NGC module categories are available: module load ngc/numlib for numeric libraries and module load ngc/chem for chemistry programs.
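
After loading one of the two categories, module avail shows which container modules that category provides (the LAMMPS example above does exactly this for ngc/chem):

$ module load ngc/numlib                   # or: module load ngc/chem
$ module avail                             # list the container modules of the loaded category
$ module unload ngc/numlib                 # unload the category again when done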

4.3 Batch jobs with containers on JUSTUS 2

Run a GROMACS container with Singularity as a batch job:

$ salloc --ntasks=1                                                         # obtain compute node
$ WORKSPACE=`ws_allocate gromacs 3`                                         # allocate workspace
$ cd $WORKSPACE                                                             # change to workspace
$ singularity pull gromacs-2020_2.sif docker://nvcr.io/hpc/gromacs:2020.2   # pull container from NGC
$ cp -r /opt/bwhpc/common/chem/ngc/gromacs/ ./bwhpc-examples/               # copy example to workspace
$ cd ./bwhpc-examples                                                       # change to example directory
$ sbatch gromacs-2020.2_gpu.slurm                                           # submit job
$ squeue                                                                    # obtain JOBID
$ scontrol show job <JOBID>                                                 # check state of job

More batch job examples are located at /opt/bwhpc/common/chem/ngc.
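
To check what a pulled image contains, its metadata can be displayed with the inspect subcommand (one of the subcommands listed by singularity --help); for example:

# show metadata (labels, build information) of the pulled GROMACS image
$ singularity inspect gromacs-2020_2.sif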

5 Useful links