JUSTUS2/Software/Singularity
Description | Content |
---|---|
module load | --- |
Availability | BwForCluster_JUSTUS_2 |
License | Open-source software, distributed under the 3-clause BSD License. |
Citing | --- |
Links | Homepage, Documentation |
Graphical Interface | No |
Description
Singularity is a container platform designed for scientific and high-performance computing environments.
License
Singularity is free, open-source software released under the 3-clause BSD license. Please read the license for additional information.
Usage
Loading the module
You can load the default version of Singularity with the following command:
$ module load devel/singularity
If you wish to load another (older) version of Singularity, you can do so using
$ module load devel/singularity/<version>
with <version> specifying the desired version.
Important: On JUSTUS 2, Singularity is available on all compute nodes. You do not have to load a module.
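A quick way to confirm this is to query the Singularity version directly on a compute node, e.g. inside an interactive job:

$ singularity --version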
Batch jobs with containers
Batch jobs that use Singularity containers are built in the same way as any other batch job; the job script simply contains the singularity commands. For example:
#!/bin/bash
# Allocate one node
#SBATCH --nodes=1
# Number of program instances to be executed
#SBATCH --tasks-per-node=4
# 16 GB memory required per node
#SBATCH --mem=16G
# Maximum run time of job
#SBATCH --time=1:00:00
# Give job a reasonable name
#SBATCH --job-name=Singularity
# File name for standard output (%j will be replaced by job id)
#SBATCH --output=singularity_job-%j.out
# File name for error output
#SBATCH --error=singularity_job-%j.err

module load your/singularity/version  # (not needed on JUSTUS 2, but may be necessary on other systems)
cd your/workspace

# Run container (two options to start a container)
singularity run [options] <container>
singularity exec [options] <container> <command>
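Assuming the script above is saved as singularity_job.sbatch (the file name is only an example), it is submitted and monitored with the usual Slurm commands:

$ sbatch singularity_job.sbatch   # submit job
$ squeue                          # check job queue / obtain JOBID
$ scontrol show job <JOBID>       # check state of job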
Using GPUs
#!/bin/bash
[…]
# Allocate one GPU per node
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
[…]

module load your/singularity/version  # (not needed on JUSTUS 2, but may be necessary on other systems)
cd your/workspace

# Run container (two options to start a container)
singularity run --nv [options] <container>
singularity exec --nv [options] <container> <command>
Using the --nv flag is advisable, but it may be omitted if the required GPU and driver APIs are already available inside the container.
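A quick sanity check for GPU visibility inside a container is to run nvidia-smi through Singularity, for example:

$ singularity exec --nv <container> nvidia-smi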
Examples
Run your first container on JUSTUS 2
Build a TensorFlow container with Singularity and execute a Python command:
# request interactive node with GPUs
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash

# create workspace and navigate into it
$ WORKSPACE=`ws_allocate tensorflow 3`
$ cd $WORKSPACE

# build container
$ singularity build tensorflow-20.11-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:20.11-tf2-py3

# execute Python command
$ singularity exec --nv tensorflow-20.11-tf2-py3.sif python -c 'import tensorflow as tf; \
  print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices("GPU")))'
Note: Ready-to-use containers can be pulled from the NVIDIA GPU CLOUD (NGC) catalog.
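For example, instead of building the image as shown above, the same TensorFlow container could also be pulled directly from NGC (the target file name is arbitrary):

$ singularity pull tensorflow-20.11-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:20.11-tf2-py3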
NGC environment modules on JUSTUS 2
1) Prepare a workspace to store the container
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash
$ WORKSPACE=`ws_allocate npc 3`
$ cd $WORKSPACE
$ export NGC_IMAGE_DIR=$(pwd)
Important: Containers can only run in workspaces.
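Workspaces are handled with the workspace tools used on the bwHPC clusters; a few typical commands (shown here only as examples) are:

$ ws_list                     # list your existing workspaces
$ ws_allocate <name> <days>   # allocate a new workspace
$ ws_extend <name> <days>     # extend the lifetime of a workspace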
2) PyTorch container
$ module load ngc/.numlib
$ module load 20.12-py3
$ python3
>>> import torch
>>> x = torch.randn(2,3)
>>> print(x)
>>> quit()
$ module unload 20.12-py3
$ module unload ngc/.numlib
Note: Use the container in the same manner as an interactive shell.
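For instance, a minimal check that PyTorch inside the container actually sees the allocated GPUs could look like this (assuming the 20.12-py3 module is still loaded):

$ python3 -c 'import torch; print("CUDA available:", torch.cuda.is_available())'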
3) LAMMPS container
$ module load ngc/.chem
$ module load 29Oct2020
$ wget https://lammps.sandia.gov/inputs/in.lj.txt
$ export SINGULARITY_BINDPATH=$(pwd)
$ mpirun -n 2 lmp -in in.lj.txt -var x 8 -var y 8 -var z 8 -k on g 2 -sf kk -pk kokkos cuda/aware on neigh full \
  comm device binsize 2.8
$ module unload 29Oct2020
$ module unload ngc/.chem
Note: Use SINGULARITY_BINDPATH=<PATH> to mount the directory with the input file.
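If the input files are located somewhere other than the current directory, the bind path can point there instead; the path below is only a placeholder:

$ export SINGULARITY_BINDPATH=/path/to/your/input/files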
Currently, two NGC module categories are available: module load ngc/.numlib for numerical libraries and module load ngc/.chem for chemistry programs.
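After loading one of these meta-modules, the container modules they provide (such as 20.12-py3 or 29Oct2020 above) should appear in the module list; assuming a hierarchical module setup, they can be listed with:

$ module load ngc/.numlib
$ module avail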
Batch jobs with containers on JUSTUS 2
Run a GROMACS container with Singularity as a batch job:
$ WORKSPACE=`ws_allocate gromacs 3`                                          # allocate workspace
$ cd $WORKSPACE                                                              # change to workspace
$ singularity pull gromacs-2020_2.sif docker://nvcr.io/hpc/gromacs:2020.2    # pull container from NGC
$ cp -r /opt/bwhpc/common/chem/ngc/gromacs/ ./bwhpc-examples/                # copy example to workspace
$ cd ./bwhpc-examples                                                        # change to example directory
$ sbatch gromacs-2020.2_gpu.slurm                                            # submit job
$ squeue                                                                     # obtain JOBID
$ scontrol show job <JOBID>                                                  # check state of job
More batch job examples are located at /opt/bwhpc/common/chem/ngc.
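The applications for which ready-made job scripts exist can be inspected by listing that directory, e.g.:

$ ls /opt/bwhpc/common/chem/ngc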