JUSTUS2/Software/Singularity
Description | Content
--- | ---
module load | ---
Availability | BwForCluster JUSTUS 2
License | Open-source software, distributed under the 3-clause BSD License. More...
Citing | ---
Links | Homepage, Documentation
Graphical Interface | No
Description
Singularity is a container platform.
License
Singularity is free, open-source software released under the 3-clause BSD license. Please read the license for additional information about Singularity.
Usage
Availability
Singularity is directly available on all compute nodes (but not on the login nodes). You do not have to load a module, but you must work on a compute node, e.g. via an interactive job.
The binary singularity is the main program of the container platform.
To get help using Singularity, execute the following command:
$ singularity --help
Furthermore, a man page is available and can be accessed by typing:
$ man singularity
For additional information about how to use Singularity, please consult the documentation.
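As a quick first test, a small public image can be pulled from Docker Hub and run on a compute node inside a workspace (the image and file name below are only illustrative and not specific to JUSTUS 2):

$ singularity pull alpine.sif docker://alpine
$ singularity exec alpine.sif cat /etc/os-release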
Batch jobs with containers
Batch jobs utilizing Singularity containers are generally built the same way as all other batch jobs, where the job script contains singularity commands. For example:
#!/bin/bash
# Allocate one node
#SBATCH --nodes=1
# Number of program instances to be executed
#SBATCH --tasks-per-node=4
# 16 GB memory required per node
#SBATCH --mem=16G
# Maximum run time of job
#SBATCH --time=1:00:00
# Give job a reasonable name
#SBATCH --job-name=Singularity
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=singularity_job-%j.out
# File name for error output
#SBATCH --error=singularity_job-%j.err

cd your/workspace

# Run container (two options to start a container)
singularity run [options] <container>
singularity exec [options] <container> <command>
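Assuming the script above is saved as singularity_job.slurm (the file name is arbitrary), it is submitted and monitored with the usual Slurm commands:

$ sbatch singularity_job.slurm
$ squeue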
Keep in mind that other modules you may have loaded will not be available inside the container.
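Because host modules are not visible inside the container, any settings the containerized application needs must be passed in explicitly. A minimal sketch, using the SINGULARITYENV_ prefix documented by Singularity (the container and variable names are only illustrative):

# variables prefixed with SINGULARITYENV_ appear inside the container without the prefix
export SINGULARITYENV_OMP_NUM_THREADS=4
singularity exec my_container.sif env | grep OMP_NUM_THREADS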
Using GPUs
#!/bin/bash
[…]
# Allocate one GPU per node
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
[…]

cd your/workspace

# Run container (two options to start a container)
singularity run --nv [options] <container>
singularity exec --nv [options] <container> <command>
Using the --nv flag is advisable, but it may be omitted if the correct GPU and driver APIs are already available inside the container.
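To verify that the allocated GPUs are actually visible inside the container, a quick check can be run first (assuming --nv correctly binds the host's NVIDIA driver libraries and tools into the container):

singularity exec --nv <container> nvidia-smi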
Examples
Run your first container on JUSTUS 2
Build a TensorFlow container with Singularity and execute a Python command:
# request interactive node with GPUs
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash

# create workspace and navigate into it
$ WORKSPACE=`ws_allocate tensorflow 3`
$ cd $WORKSPACE

# build container
$ singularity build tensorflow-20.11-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:20.11-tf2-py3

# execute Python command
$ singularity exec --nv tensorflow-20.11-tf2-py3.sif python -c 'import tensorflow as tf; \
  print("Num GPUs Available: ",len(tf.config.experimental.list_physical_devices("GPU")))'
Note: Ready-to-use containers can be pulled from the NVIDIA GPU CLOUD (NGC) catalog.
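For example, an image can be pulled from the NGC registry directly into the current workspace with singularity pull (the PyTorch tag below is only illustrative; check the NGC catalog for current versions):

$ singularity pull pytorch-21.03-py3.sif docker://nvcr.io/nvidia/pytorch:21.03-py3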
NGC environment modules on JUSTUS 2
1) Prepare a workspace to store the container
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash
$ WORKSPACE=`ws_allocate ngc 3`
$ cd $WORKSPACE
$ export NGC_IMAGE_DIR=$(pwd)
Important: Containers can only run in workspaces.
2) PyTorch container
$ module load ngc/numlib
$ module load 20.12-torch-py3
$ python3
>>> import torch
>>> x = torch.randn(2,3)
>>> print(x)
>>> quit()
$ module unload 20.12-torch-py3
$ module unload ngc/numlib
Note: Use the container in the same manner as an interactive shell.
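The wrapped commands can also be used non-interactively, for example to run a short GPU check in one line (a sketch using standard PyTorch calls, assuming the python3 command provided by the module runs inside the container as in the session above):

$ module load ngc/numlib
$ module load 20.12-torch-py3
$ python3 -c 'import torch; print("CUDA available:", torch.cuda.is_available())'
$ module unload 20.12-torch-py3
$ module unload ngc/numlib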
3) LAMMPS container
$ module load ngc/chem
$ module avail
$ module load 29Oct2020-lammps
$ wget https://lammps.sandia.gov/inputs/in.lj.txt
$ export SINGULARITY_BINDPATH=$(pwd)
$ mpirun -n 2 lmp -in in.lj.txt -var x 8 -var y 8 -var z 8 -k on g 2 -sf kk -pk kokkos cuda/aware on neigh full \
  comm device binsize 2.8
$ module unload 29Oct2020-lammps
$ module unload ngc/chem
Note: Use SINGULARITY_BINDPATH=<PATH> to mount the directory with the input file.
Currently, two module categories are available: module load ngc/numlib for numeric libraries and module load ngc/chem for chemistry programs.
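SINGULARITY_BINDPATH accepts a comma-separated list, so several host directories can be made visible inside the container at once (the second path is purely illustrative):

$ export SINGULARITY_BINDPATH=$(pwd),/path/to/additional/data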
Batch jobs with containers on JUSTUS 2
Run a GROMACS container with Singularity as a batch job:
$ salloc --ntasks=1                                                         # obtain compute node
$ WORKSPACE=`ws_allocate gromacs 3`                                         # allocate workspace
$ cd $WORKSPACE                                                             # change to workspace
$ singularity pull gromacs-2020_2.sif docker://nvcr.io/hpc/gromacs:2020.2   # pull container from NGC
$ cp -r /opt/bwhpc/common/chem/ngc/gromacs/ ./bwhpc-examples/               # copy example to workspace
$ cd ./bwhpc-examples                                                       # change to example directory
$ sbatch gromacs-2020.2_gpu.slurm                                           # submit job
$ squeue                                                                    # obtain JOBID
$ scontrol show job <JOBID>                                                 # check state of job
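The example job script copied above already contains everything needed. For orientation only, a batch script driving such a container typically combines the Slurm header from the template above with a container invocation, roughly like this (a sketch, not the contents of the provided gromacs-2020.2_gpu.slurm; the mdrun input name is a placeholder):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00

# make the current directory (with the input files) visible inside the container
export SINGULARITY_BINDPATH=$(pwd)

# run GROMACS from the container on the allocated GPU
# "-deffnm benchmark" stands for your own input file prefix
singularity exec --nv gromacs-2020_2.sif gmx mdrun -deffnm benchmark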
More batch job examples are located at /opt/bwhpc/common/chem/ngc.