NEMO2/Containers/Apptainer
Latest revision as of 14:51, 11 May 2026
Apptainer (formerly Singularity) is a container runtime designed for HPC systems. It is installed system-wide on NEMO and available without loading a module.
Key properties
- Runs as the invoking user — no root required and no privilege escalation
- Uses SIF (Singularity Image Format), a single portable file
- Can pull Docker/OCI images directly from registries without a local Docker installation
- No native Slurm plugin — call apptainer exec/run inside your batch script
Image sources
All major OCI registries work out of the box via docker:// URIs.
| Registry | URI prefix | Example |
|---|---|---|
| Docker Hub | docker:// | docker://ubuntu:24.04 |
| NVIDIA NGC | docker://nvcr.io/ | docker://nvcr.io/nvidia/pytorch:24.01-py3 |
| quay.io | docker://quay.io/ | docker://quay.io/rockylinux/rockylinux:9 |
| GitHub Container Registry | docker://ghcr.io/ | docker://ghcr.io/containerd/alpine:latest |
| Local .sif file | — | copy to a workspace and use directly |
library:// URIs (Sylabs Cloud Library) are not available by default — the default Apptainer installation has no Library API client configured. See Apptainer docs if you need to set up a library remote.
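If you do need library:// URIs, a remote can be configured per user. A minimal sketch following the Apptainer documentation (the endpoint name SylabsCloud is just a label you choose):

```shell
# one-time, per-user setup of a Library API remote (optional)
apptainer remote add --no-login SylabsCloud cloud.sylabs.io
apptainer remote use SylabsCloud
```

After this, library:// URIs resolve against the configured endpoint; anonymous pulls of public images work without logging in.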
Image storage
SIF files are plain files — store them wherever you like. To avoid filling your home quota, keep images in a workspace and use a symlink:
```
# create a workspace (100 days)
ws_allocate apptainer 100
# symlink ~/images to the workspace
ln -s $(ws_find apptainer) ~/images
```
In batch scripts, reference images via the symlink:
```
IMAGE=$HOME/images/pytorch.sif
```
Default mounts
The following paths are automatically mounted into every container (configured via bind path in /etc/apptainer/apptainer.conf):
| Host path | Notes |
|---|---|
| /home | home directory of the invoking user, read-write |
| /work | all workspace filesystems, read-write |
The paths inside the container are identical to the paths on the host system.
Note that ws_* tools (e.g. ws_find, ws_list) are not available inside the container. Determine your workspace path on the login node before running the container.
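One way to handle this: resolve the path with ws_find on the login node and hand it to the container as an environment variable (the workspace name apptainer follows the earlier example; this is a sketch, not the only option, since /work is mounted you can also hard-code the resolved path):

```shell
# on the login node: resolve the workspace path (ws_find is host-only)
WS=$(ws_find apptainer)
# pass it into the container; /work is auto-mounted, so the path is valid inside
apptainer exec --env WS="$WS" ~/images/ubuntu.sif ls "$WS"
```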
Basic commands
Pull an image
Always pull images to a local SIF file first — repeatedly using docker:// URIs at runtime can hit Docker Hub rate limits.
```
# from Docker Hub
apptainer pull ~/images/ubuntu.sif docker://ubuntu:24.04
# from NVIDIA NGC
apptainer pull ~/images/pytorch.sif docker://nvcr.io/nvidia/pytorch:24.01-py3
# from quay.io
apptainer pull ~/images/rockylinux.sif docker://quay.io/rockylinux/rockylinux:9
# from GitHub Container Registry
apptainer pull ~/images/alpine.sif docker://ghcr.io/containerd/alpine:latest
```
Run a command inside a container
```
# exec: run a specific command
apptainer exec ~/images/ubuntu.sif python3 script.py
# run: execute the container's default runscript
apptainer run ~/images/ubuntu.sif
# shell: interactive shell
apptainer shell ~/images/ubuntu.sif
```
Additional mounts
To mount directories beyond the defaults:
```
apptainer exec --bind /tmp/mydata:/data ~/images/ubuntu.sif bash
```
GPU access
Pass the --nv flag to enable NVIDIA GPU passthrough:
```
apptainer exec --nv ~/images/pytorch.sif python3 train.py
```
Unlike Enroot/Pyxis, GPU passthrough is not automatic — --nv must be specified explicitly.
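A quick sanity check that --nv is working, using the pytorch.sif image from the examples above:

```shell
# nvidia-smi should list the GPUs allocated to your job;
# without --nv the same command fails inside the container
apptainer exec --nv ~/images/pytorch.sif nvidia-smi
```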
Slurm batch job
There is no Slurm plugin; call apptainer directly from your batch script:
CPU job:
```
#!/bin/bash
#SBATCH -p cpu
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

IMAGE=$HOME/images/ubuntu.sif
apptainer exec "$IMAGE" python3 /work/classic/myWs/train.py
```
GPU job:
```
#!/bin/bash
#SBATCH -p l40s
#SBATCH --gres=gpu:1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

IMAGE=$HOME/images/pytorch.sif
apptainer exec --nv "$IMAGE" python3 /work/classic/myWs/train.py
```
Building images
Building a new SIF image from a definition file requires root (or fakeroot/user namespaces). On NEMO you cannot build images directly on the compute nodes.
Recommended workflow:
- Build on a machine where you have root (local workstation, VM, GitHub Actions, …)
- Transfer the .sif file to NEMO (e.g. scp, rsync)
- Run on NEMO as usual
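The workflow above might look like this on your local machine (image and file names are hypothetical, and the scp target assumes an SSH alias for NEMO):

```shell
# write a minimal definition file
cat > myimage.def <<'EOF'
Bootstrap: docker
From: ubuntu:24.04

%post
    apt-get update && apt-get install -y python3
EOF

# build requires root (or use: apptainer build --fakeroot ...)
sudo apptainer build myimage.sif myimage.def

# transfer the finished SIF file to NEMO
scp myimage.sif nemo:images/
```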
Store images in a workspace (symlinked to ~/images/) — not in $TMPDIR, which is job-local and deleted after the job.
Tips
- Use apptainer exec --writable-tmpfs if the container tries to write to its own filesystem (without persisting changes).
- The --containall flag gives a fully isolated environment (no automatic bind-mounts); useful for reproducibility tests.
- To check what Apptainer version is installed: apptainer --version
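For example, combining the flags from the tips above (paths are illustrative):

```shell
# fully isolated run: no automatic /home or /work binds,
# only the explicitly requested mount is visible
apptainer exec --containall --bind "$PWD:/data" ~/images/ubuntu.sif ls /data

# allow in-container writes without persisting them to the image
apptainer exec --writable-tmpfs ~/images/ubuntu.sif touch /opt/scratch-file
```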