JUSTUS2/Software/Singularity
Description | Content
---|---
module load | ---
Availability | BwForCluster_JUSTUS_2
License | Open-source software, distributed under the 3-clause BSD License. More...
Citing | ---
Links | Homepage, Documentation
Graphical Interface | No
Description
Singularity is a container platform. It allows users to build and run software in containers, including images pulled from Docker or NGC registries, without requiring root privileges on the cluster.
License
Singularity is free, open-source software released under the 3-clause BSD license. Please read the license for the full terms and conditions.
Usage
<pre>
Usage:
  singularity [global options...]

Options:
  -c, --config string   specify a configuration file (for root or unprivileged
                        installation only) (default "/etc/singularity/singularity.conf")
  -d, --debug           print debugging information (highest verbosity)
  -h, --help            help for singularity
      --nocolor         print without color output (default False)
  -q, --quiet           suppress normal output
  -s, --silent          only print errors
  -v, --verbose         print additional information
      --version         version for singularity

Available Commands:
  build       Build a Singularity image
  cache       Manage the local cache
  capability  Manage Linux capabilities for users and groups
  config      Manage various singularity configuration (root user only)
  delete      Deletes requested image from the library
  exec        Run a command within a container
  help        Help about any command
  inspect     Show metadata for an image
  instance    Manage containers running as services
  key         Manage OpenPGP keys
  oci         Manage OCI containers
  plugin      Manage Singularity plugins
  pull        Pull an image from a URI
  push        Upload image to the provided URI
  remote      Manage singularity remote endpoints, keyservers and OCI/Docker registry credentials
  run         Run the user-defined default command within a container
  run-help    Show the user-defined help for an image
  search      Search a Container Library for images
  shell       Run a shell within a container
  sif         siftool is a program for Singularity Image Format (SIF) file manipulation
  sign        Attach digital signature(s) to an image
  test        Run the user-defined tests within a container
  verify      Verify cryptographic signatures attached to an image
  version     Show the version for Singularity

Examples:
  $ singularity help <command> [<subcommand>]
  $ singularity help build
  $ singularity help instance start
</pre>
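For example, a typical first step is to pull an image and open an interactive shell inside it. A minimal sketch (the Ubuntu image is only an illustration; any docker:// or other supported URI works the same way):

<pre>
# pull an image from Docker Hub and convert it to a local SIF file
$ singularity pull ubuntu_20.04.sif docker://ubuntu:20.04

# open an interactive shell inside the container
$ singularity shell ubuntu_20.04.sif
Singularity> cat /etc/os-release
Singularity> exit
</pre>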
Examples
Build a TensorFlow container with Singularity and execute a Python command inside the container:
<pre>
# request an interactive node with GPUs
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash

# build container
$ singularity build tensorflow-20.11-tf2-py3.sif docker://nvcr.io/nvidia/tensorflow:20.11-tf2-py3

# execute Python command
$ singularity exec --nv tensorflow-20.11-tf2-py3.sif python -c 'import tensorflow as tf; \
print("Num GPUs Available: ",len(tf.config.experimental.list_physical_devices("GPU")))'
</pre>
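The same container can also be used from a regular batch job instead of an interactive session. A minimal sketch of a Slurm script (the file name, resource requests, and walltime are illustrative assumptions, not site defaults):

<pre>
#!/bin/bash
# tensorflow-gpu-check.slurm -- hypothetical batch script, adjust resources to your project
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --time=00:10:00

# run the GPU check from above inside the previously built container
singularity exec --nv tensorflow-20.11-tf2-py3.sif python -c 'import tensorflow as tf; \
print("Num GPUs Available: ",len(tf.config.experimental.list_physical_devices("GPU")))'
</pre>

Submit the script with sbatch tensorflow-gpu-check.slurm; the output appears in the Slurm output file of the job.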
Use the NGC environment modules:
<pre>
$ srun --nodes=1 --exclusive --gres=gpu:2 --pty bash
$ WORKSPACE=`ws_allocate npc 3`
$ cd $WORKSPACE
$ export NGC_IMAGE_DIR=$(pwd)
$ module load ngc/.numlib
$ module load 20.12-tf2-py3
$ python3
>>> import tensorflow as tf
>>> print("Num GPUs Available:",len(tf.config.experimental.list_physical_devices("GPU")))
>>> quit()
$ module unload 20.12-tf2-py3
$ module unload ngc/.numlib
</pre>
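To see which NGC container modules are installed before loading one, list the module catalog; the exact module names in the output depend on the installed catalog:

<pre>
$ module load ngc/.numlib    # make the NGC module catalog visible
$ module avail               # list available modules, e.g. 20.12-tf2-py3
</pre>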
Run a GROMACS container with Singularity:
<pre>
$ WORKSPACE=`ws_allocate gromacs 3`                                          # allocate workspace
$ cd $WORKSPACE                                                              # change to workspace
$ singularity pull gromacs-2020_2.sif docker://nvcr.io/hpc/gromacs:2020.2    # pull container from NGC
$ cp -r /opt/bwhpc/common/chem/ngc/gromacs/ ./bwhpc-examples/                # copy example to workspace
$ cd ./bwhpc-examples                                                        # change to example directory
$ sbatch gromacs-2020.2_gpu.slurm                                            # submit job
$ squeue                                                                     # obtain JOBID
$ scontrol show job <JOBID>                                                  # check state of job
</pre>
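Before submitting the batch job, it can be useful to verify that the pulled image works. A quick sketch, assuming the NGC image provides the gmx binary on its default PATH:

<pre>
# print the GROMACS version from inside the container
$ singularity exec --nv gromacs-2020_2.sif gmx --version
</pre>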
More examples are located at /opt/bwhpc/common/chem/ngc on JUSTUS 2.
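To browse them directly:

<pre>
$ ls /opt/bwhpc/common/chem/ngc   # list the provided NGC example directories
</pre>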