JUSTUS2/Software/Singularity
Description | Content
---|---
module load | ---
Availability | BwForCluster_JUSTUS_2
License | Open-source software, distributed under the 3-clause BSD License. More...
Citing | ---
Links | Homepage, Documentation
Graphical Interface | No
Description
Singularity is a container platform.
License
Singularity is free, open-source software released under the 3-clause BSD license. Please read the license for additional information about Singularity.
Usage
<pre>
Usage:
  singularity [global options...]

Options:
  -c, --config string   specify a configuration file (for root or
                        unprivileged installation only)
                        (default "/etc/singularity/singularity.conf")
  -d, --debug           print debugging information (highest verbosity)
  -h, --help            help for singularity
      --nocolor         print without color output (default False)
  -q, --quiet           suppress normal output
  -s, --silent          only print errors
  -v, --verbose         print additional information
      --version         version for singularity

Available Commands:
  build       Build a Singularity image
  cache       Manage the local cache
  capability  Manage Linux capabilities for users and groups
  config      Manage various singularity configuration (root user only)
  delete      Deletes requested image from the library
  exec        Run a command within a container
  help        Help about any command
  inspect     Show metadata for an image
  instance    Manage containers running as services
  key         Manage OpenPGP keys
  oci         Manage OCI containers
  plugin      Manage Singularity plugins
  pull        Pull an image from a URI
  push        Upload image to the provided URI
  remote      Manage singularity remote endpoints, keyservers and OCI/Docker registry credentials
  run         Run the user-defined default command within a container
  run-help    Show the user-defined help for an image
  search      Search a Container Library for images
  shell       Run a shell within a container
  sif         siftool is a program for Singularity Image Format (SIF) file manipulation
  sign        Attach digital signature(s) to an image
  test        Run the user-defined tests within a container
  verify      Verify cryptographic signatures attached to an image
  version     Show the version for Singularity

Examples:
  $ singularity help <command> [<subcommand>]
  $ singularity help build
  $ singularity help instance start
</pre>
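For example, to check the installed version and get detailed help on a single subcommand:

<pre>
# show the installed Singularity version
$ singularity version

# show detailed help for the build subcommand
$ singularity help build
</pre>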
Examples
Run your first container on JUSTUS 2
Build a TensorFlow container with Singularity and execute a Python command inside the container:
<pre>
# request interactive node with GPUs
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash

# create workspace and navigate into it
$ WORKSPACE=`ws_allocate tensorflow 3`
$ cd $WORKSPACE

# build container
$ singularity build tensorflow-20.11-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:20.11-tf2-py3

# execute Python command
$ singularity exec --nv tensorflow-20.11-tf2-py3.sif python -c 'import tensorflow as tf; \
  print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices("GPU")))'
</pre>
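Instead of exec, you can also open an interactive shell inside the container. A minimal sketch using the same image built above:

<pre>
# open an interactive shell inside the container with GPU support
$ singularity shell --nv tensorflow-20.11-tf2-py3.sif
Singularity> python -c 'import tensorflow as tf; print(tf.__version__)'
Singularity> exit
</pre>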
NGC environment modules on JUSTUS 2
1) Prepare a workspace to store the container
<pre>
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash
$ WORKSPACE=`ws_allocate ngc 3`
$ cd $WORKSPACE
$ export NGC_IMAGE_DIR=$(pwd)
</pre>
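The workspace tools can also show and extend existing workspaces. A short sketch, assuming the standard HPC workspace tools ws_list and ws_extend are available on JUSTUS 2:

<pre>
# list all of your workspaces with their remaining lifetimes
$ ws_list

# extend the workspace by another 3 days (if extensions are permitted)
$ ws_extend ngc 3
</pre>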
Important: Containers can only run in workspaces.
2) TensorFlow container
<pre>
$ module load ngc/.numlib
$ module load 20.12-tf2-py3
$ python3
>>> import tensorflow as tf
>>> print("Num GPUs Available:", len(tf.config.experimental.list_physical_devices("GPU")))
>>> quit()
$ module unload 20.12-tf2-py3
$ module unload ngc/.numlib
</pre>
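The same GPU check from the example above can also be run non-interactively, e.g. from within a job script, by passing the command directly to python3:

<pre>
# same check as a non-interactive one-liner
$ python3 -c 'import tensorflow as tf; \
  print("Num GPUs Available:", len(tf.config.experimental.list_physical_devices("GPU")))'
</pre>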
Note: Once the module is loaded, the container can be used in the same manner as an interactive shell.
3) LAMMPS container
<pre>
$ module load ngc/.chem
$ module load 29Oct2020
$ wget https://lammps.sandia.gov/inputs/in.lj.txt
$ export SINGULARITY_BINDPATH=$(pwd)
$ mpirun -n 2 lmp -in in.lj.txt -var x 8 -var y 8 -var z 8 -k on g 2 -sf kk -pk kokkos cuda/aware on neigh full \
  comm device binsize 2.8
$ module unload 29Oct2020
$ module unload ngc/.chem
</pre>
Note: Use SINGULARITY_BINDPATH=<PATH> to mount the directory with the input file.
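For example, to make an input directory other than the current working directory visible inside the container (the path below is purely illustrative):

<pre>
# hypothetical example: mount a custom input directory into the container
$ export SINGULARITY_BINDPATH=$HOME/lammps-inputs
</pre>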
Currently, two module categories can be selected: module load ngc/.numlib for numeric libraries and module load ngc/.chem for chemistry programs.
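To see which container modules a category provides, load the category and list the available modules. A sketch using the standard module avail command:

<pre>
# list the container modules provided by a category
$ module load ngc/.chem
$ module avail
</pre>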
Batch jobs with containers on JUSTUS 2
Run a GROMACS container with Singularity as a batch job:
<pre>
$ WORKSPACE=`ws_allocate gromacs 3`                                         # allocate workspace
$ cd $WORKSPACE                                                             # change to workspace
$ singularity pull gromacs-2020_2.sif docker://nvcr.io/hpc/gromacs:2020.2   # pull container from NGC
$ cp -r /opt/bwhpc/common/chem/ngc/gromacs/ ./bwhpc-examples/               # copy example to workspace
$ cd ./bwhpc-examples                                                       # change to example directory
$ sbatch gromacs-2020.2_gpu.slurm                                           # submit job
$ squeue                                                                    # obtain JOBID
$ scontrol show job <JOBID>                                                 # check state of job
</pre>
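For orientation, the following is a minimal sketch of what a Slurm batch script for a GROMACS container could look like. It is not the content of the provided gromacs-2020.2_gpu.slurm; the resource values and input file name below are illustrative assumptions, so consult the provided example for working settings.

<pre>
#!/bin/bash
#SBATCH --nodes=1                 # illustrative resource request
#SBATCH --gres=gpu:2              # request two GPUs (assumption)
#SBATCH --time=01:00:00           # wall time limit (assumption)

# run mdrun inside the container; benchmark.tpr is a hypothetical input file
singularity exec --nv gromacs-2020_2.sif gmx mdrun -deffnm benchmark
</pre>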
More batch job examples are located at /opt/bwhpc/common/chem/ngc.