NEMO2/Containers/Apptainer

'''Apptainer''' (formerly Singularity) is a container runtime designed for HPC systems. It is installed system-wide on NEMO and available without loading a module.

== Key properties ==

* Runs as the invoking user — no root required and no privilege escalation
* Uses SIF (Singularity Image Format), a single portable file
* Can convert Docker images to SIF at build time
* No native Slurm plugin — call <tt>apptainer exec/run</tt> inside your batch script
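
Because containers run as the invoking user, files created inside a container are owned by you on the host, and your identity is unchanged. A quick sanity check (this assumes an <tt>ubuntu.sif</tt> image pulled as shown further below):

<pre>
# same uid/gid on the host ...
id

# ... and inside the container
apptainer exec ubuntu.sif id
</pre>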

== Image sources ==

All major OCI registries work out of the box via <tt>docker://</tt> URIs — no Docker installation needed.

{| class="wikitable"
|-
! Registry !! URI prefix !! Example
|-
| Docker Hub || <tt>docker://</tt> || <tt>docker://ubuntu:24.04</tt>
|-
| NVIDIA NGC || <tt>docker://nvcr.io/</tt> || <tt>docker://nvcr.io/nvidia/pytorch:24.01-py3</tt>
|-
| quay.io || <tt>docker://quay.io/</tt> || <tt>docker://quay.io/rockylinux/rockylinux:9</tt>
|-
| GitHub Container Registry || <tt>docker://ghcr.io/</tt> || <tt>docker://ghcr.io/containerd/alpine:latest</tt>
|-
| Local <tt>.sif</tt> file || || copy to a workspace and use directly
|}

<tt>library://</tt> URIs (Sylabs Cloud Library) are '''not''' available by default — the default Apptainer installation has no Library API client configured. See the [https://apptainer.org/docs/user/latest/endpoint.html#no-default-remote Apptainer docs] if you need to set up a library remote.
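
If you do need <tt>library://</tt>, an endpoint can be configured per user via <tt>apptainer remote</tt>. A minimal sketch, assuming the Sylabs Cloud Library is the remote you want (the endpoint name <tt>SylabsCloud</tt> is arbitrary, and pulls from private collections additionally require a login token):

<pre>
# register the endpoint and make it the default for library:// URIs
apptainer remote add --no-login SylabsCloud cloud.sylabs.io
apptainer remote use SylabsCloud

# verify the active endpoint
apptainer remote list
</pre>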

== Image storage ==

SIF files are plain files — store them wherever you like. To avoid filling your home quota, keep images in a workspace and use a symlink so your scripts don't need to change:

<pre>
# create a workspace (100 days)
ws_allocate apptainer 100

# symlink the workspace into your home as ~/images
ln -s $(ws_find apptainer) ~/images
</pre>

Pull images directly into the workspace:

<pre>
apptainer pull ~/images/ubuntu.sif docker://ubuntu:24.04
apptainer pull ~/images/pytorch.sif docker://nvcr.io/nvidia/pytorch:24.01-py3
</pre>

In batch scripts, reference images via the symlink:

<pre>
IMAGE=$HOME/images/pytorch.sif
</pre>

== Basic commands ==

=== Pull an image ===

Always pull images to a local SIF file first — using <tt>docker://</tt> URIs at runtime repeatedly can hit Docker Hub rate limits.

<pre>
# from Docker Hub
apptainer pull ~/images/ubuntu.sif docker://ubuntu:24.04

# from NVIDIA NGC
apptainer pull ~/images/pytorch.sif docker://nvcr.io/nvidia/pytorch:24.01-py3

# from quay.io
apptainer pull ~/images/rockylinux.sif docker://quay.io/rockylinux/rockylinux:9

# from GitHub Container Registry
apptainer pull ~/images/alpine.sif docker://ghcr.io/containerd/alpine:latest
</pre>
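
To check what actually landed in a SIF file, <tt>apptainer inspect</tt> prints the image's labels and metadata:

<pre>
# show labels and metadata of a pulled image
apptainer inspect ~/images/ubuntu.sif

# show the default runscript
apptainer inspect --runscript ~/images/ubuntu.sif
</pre>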

=== Run a command inside a container ===

<pre>
# exec: run a specific command
apptainer exec ubuntu.sif python3 script.py

# run: execute the container's default runscript
apptainer run ubuntu.sif

# shell: interactive shell
apptainer shell ubuntu.sif
</pre>

=== GPU access ===

Pass the <tt>--nv</tt> flag to enable NVIDIA GPU passthrough:

<pre>
apptainer exec --nv pytorch.sif python3 train.py
</pre>
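
To verify that the allocated GPUs are visible inside the container, run <tt>nvidia-smi</tt> through it (<tt>--nv</tt> binds the host's NVIDIA tools and driver libraries into the container):

<pre>
# list the GPUs visible inside the container
apptainer exec --nv pytorch.sif nvidia-smi
</pre>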

=== Bind-mount paths ===

<tt>/home</tt> and <tt>/work</tt> are available inside the container by default (Apptainer automatically binds a set of paths; check <tt>apptainer config</tt> for the system defaults).

To mount additional directories explicitly:

<pre>
apptainer exec --bind /tmp/mydata:/data ubuntu.sif bash
</pre>
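
<tt>--bind</tt> also accepts a comma-separated list of bind specs, and a mount can be made read-only with the <tt>:ro</tt> option. A sketch with placeholder paths:

<pre>
# bind two directories at once, the reference data read-only
apptainer exec --bind /tmp/refdata:/ref:ro,/tmp/scratch:/scratch ubuntu.sif bash
</pre>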

== Using Apptainer in a Slurm batch job ==

Unlike Enroot/Pyxis there is no Slurm plugin; simply call <tt>apptainer</tt> from your batch script.

CPU job:

<pre>
#!/bin/bash
#SBATCH -p cpu
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

IMAGE=$HOME/images/ubuntu.sif

apptainer exec "$IMAGE" python3 /work/classic/myWs/train.py
</pre>

GPU job (use <tt>--nv</tt> to pass through the allocated GPUs):

<pre>
#!/bin/bash
#SBATCH -p l40s
#SBATCH --gres=gpu:1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

IMAGE=$HOME/images/pytorch.sif

apptainer exec --nv "$IMAGE" python3 /work/classic/myWs/train.py
</pre>

== Building images ==

Building a new SIF image from a definition file requires root (or a system that supports fakeroot/user namespaces). On NEMO you cannot build images directly on the compute nodes.

Recommended workflow:

# Build on a machine where you have root (local workstation, VM, GitHub Actions, …)
# Transfer the <tt>.sif</tt> file to NEMO (e.g. <tt>scp</tt>, <tt>rsync</tt>)
# Run on NEMO as usual

Store images in a workspace (symlinked to <tt>~/images/</tt> as described above) — not in <tt>$TMPDIR</tt>, which is job-local and deleted after the job.
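
For reference, a minimal definition file plus the matching build command. This is a generic sketch (base image, package list and file names are placeholders), to be run on a machine where you have root:

<pre>
# myimage.def: minimal definition file
Bootstrap: docker
From: ubuntu:24.04

%post
    apt-get update && apt-get install -y python3

%runscript
    exec python3 "$@"
</pre>

<pre>
# build locally, then transfer the resulting SIF to NEMO
sudo apptainer build myimage.sif myimage.def
</pre>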

== Tips ==

* Use <tt>apptainer exec --writable-tmpfs</tt> if the container tries to write to its own filesystem (without persisting changes).
* The <tt>--containall</tt> flag gives a fully isolated environment (no automatic bind-mounts); useful for reproducibility tests.
* To check which Apptainer version is installed: <tt>apptainer --version</tt>