<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/index.php?action=history&amp;feed=atom&amp;title=NEMO2%2FContainers%2FEnroot</id>
	<title>NEMO2/Containers/Enroot - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/index.php?action=history&amp;feed=atom&amp;title=NEMO2%2FContainers%2FEnroot"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=NEMO2/Containers/Enroot&amp;action=history"/>
	<updated>2026-05-08T13:16:00Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=NEMO2/Containers/Enroot&amp;diff=16050&amp;oldid=prev</id>
		<title>M Janczyk: Created page with &quot;&#039;&#039;&#039;Enroot&#039;&#039;&#039; is a container runtime developed by NVIDIA that runs OCI/Docker containers without root privileges. On NEMO it is the &#039;&#039;&#039;recommended&#039;&#039;&#039; container solution and is integrated with Slurm via the &#039;&#039;&#039;Pyxis&#039;&#039;&#039; SPANK plugin.  == How it works ==  Enroot converts a Docker image into a SquashFS file (&lt;tt&gt;.sqsh&lt;/tt&gt;), unpacks it on demand into an overlay filesystem, and runs your workload inside that environment. The Pyxis plugin adds &lt;tt&gt;--container-*&lt;/tt&gt; options to...&quot;</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=NEMO2/Containers/Enroot&amp;diff=16050&amp;oldid=prev"/>
		<updated>2026-05-07T17:05:11Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;Enroot&amp;#039;&amp;#039;&amp;#039; is a container runtime developed by NVIDIA that runs OCI/Docker containers without root privileges. On NEMO it is the &amp;#039;&amp;#039;&amp;#039;recommended&amp;#039;&amp;#039;&amp;#039; container solution and is integrated with Slurm via the &amp;#039;&amp;#039;&amp;#039;Pyxis&amp;#039;&amp;#039;&amp;#039; SPANK plugin.  == How it works ==  Enroot converts a Docker image into a SquashFS file (&amp;lt;tt&amp;gt;.sqsh&amp;lt;/tt&amp;gt;), unpacks it on demand into an overlay filesystem, and runs your workload inside that environment. The Pyxis plugin adds &amp;lt;tt&amp;gt;--container-*&amp;lt;/tt&amp;gt; options to...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Enroot&amp;#039;&amp;#039;&amp;#039; is a container runtime developed by NVIDIA that runs OCI/Docker containers without root privileges.&lt;br /&gt;
On NEMO it is the &amp;#039;&amp;#039;&amp;#039;recommended&amp;#039;&amp;#039;&amp;#039; container solution and is integrated with Slurm via the &amp;#039;&amp;#039;&amp;#039;Pyxis&amp;#039;&amp;#039;&amp;#039; SPANK plugin.&lt;br /&gt;
&lt;br /&gt;
== How it works ==&lt;br /&gt;
&lt;br /&gt;
Enroot converts a Docker image into a SquashFS file (&amp;lt;tt&amp;gt;.sqsh&amp;lt;/tt&amp;gt;), unpacks it on demand into an overlay filesystem, and runs your workload inside that environment.&lt;br /&gt;
The Pyxis plugin adds &amp;lt;tt&amp;gt;--container-*&amp;lt;/tt&amp;gt; options to &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;salloc&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; so containers become first-class Slurm jobs.&lt;br /&gt;
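&lt;br /&gt;
For example, a single &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt; call can pull an image and run a command inside it; a minimal sketch (the partition name is only an illustration, adjust it to your allocation):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# pull ubuntu:24.04 and run one command inside the container&lt;br /&gt;
srun -p cpu --container-image=ubuntu:24.04 cat /etc/os-release&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;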
&lt;br /&gt;
== Default mounts ==&lt;br /&gt;
&lt;br /&gt;
The following paths are &amp;#039;&amp;#039;&amp;#039;automatically mounted&amp;#039;&amp;#039;&amp;#039; into every container when launched via Slurm/Pyxis:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Host path !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;/home&amp;lt;/tt&amp;gt; || all home directories, read-write&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;/work&amp;lt;/tt&amp;gt; || all workspace filesystems, read-write&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;/etc/slurm&amp;lt;/tt&amp;gt; || Slurm configuration (read-only)&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;/usr/lib64/slurm&amp;lt;/tt&amp;gt; || Slurm plug-in libraries (read-only)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You do &amp;#039;&amp;#039;&amp;#039;not&amp;#039;&amp;#039;&amp;#039; need to pass &amp;lt;tt&amp;gt;--container-mount-home&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;--container-mounts=/work&amp;lt;/tt&amp;gt; manually; these paths are already mounted.&lt;br /&gt;
The paths inside the container are &amp;#039;&amp;#039;&amp;#039;identical&amp;#039;&amp;#039;&amp;#039; to the paths on the host system, so scripts and config files referencing &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt; or workspace paths work without modification.&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;tt&amp;gt;ws_*&amp;lt;/tt&amp;gt; tools (e.g. &amp;lt;tt&amp;gt;ws_find&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;ws_list&amp;lt;/tt&amp;gt;) are &amp;#039;&amp;#039;&amp;#039;not available inside the container&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
Determine your workspace path on the login node before submitting the job and pass it as an environment variable or hard-code it in your script.&lt;br /&gt;
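&lt;br /&gt;
A minimal sketch of passing the workspace path into a batch job via the environment (the workspace name &amp;lt;tt&amp;gt;mydata&amp;lt;/tt&amp;gt; and the script name &amp;lt;tt&amp;gt;job.sh&amp;lt;/tt&amp;gt; are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# on the login node: resolve the path, then export it into the job environment&lt;br /&gt;
WS_DIR=$(ws_find mydata)&lt;br /&gt;
sbatch --export=ALL,WS_DIR=&amp;quot;$WS_DIR&amp;quot; job.sh&lt;br /&gt;
&lt;br /&gt;
# inside job.sh, use $WS_DIR instead of calling ws_find&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;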
&lt;br /&gt;
== Container image storage ==&lt;br /&gt;
&lt;br /&gt;
Unpacked container images are stored in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
~/.local/share/enroot/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Images are SquashFS files and can be several GB.&lt;br /&gt;
To avoid filling your home quota, store images in a [[Workspaces|workspace]] and symlink the default path:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# create a workspace (100 days)&lt;br /&gt;
ws_allocate enroot 100&lt;br /&gt;
&lt;br /&gt;
# replace the default enroot directory with a symlink to the workspace&lt;br /&gt;
# (move or remove an existing ~/.local/share/enroot first, otherwise&lt;br /&gt;
#  the link is created inside that directory instead of replacing it)&lt;br /&gt;
mkdir -p ~/.local/share&lt;br /&gt;
ln -s &amp;quot;$(ws_find enroot)&amp;quot; ~/.local/share/enroot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enroot (and Pyxis) will now transparently use the workspace path for all image storage.&lt;br /&gt;
&lt;br /&gt;
== Usage without Slurm (interactive) ==&lt;br /&gt;
&lt;br /&gt;
=== Import an image ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# from Docker Hub&lt;br /&gt;
enroot import docker://ubuntu:24.04&lt;br /&gt;
&lt;br /&gt;
# from quay.io&lt;br /&gt;
enroot import docker://quay.io#rockylinux/rockylinux:9&lt;br /&gt;
&lt;br /&gt;
# from NVIDIA NGC&lt;br /&gt;
enroot import docker://nvcr.io#nvidia/pytorch:24.01-py3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a &amp;lt;tt&amp;gt;.sqsh&amp;lt;/tt&amp;gt; file in the current directory.&lt;br /&gt;
&lt;br /&gt;
=== Create a container ===&lt;br /&gt;
&lt;br /&gt;
The container name must be prefixed with &amp;lt;tt&amp;gt;pyxis_&amp;lt;/tt&amp;gt; for Slurm/Pyxis to find it later (omit the prefix when passing &amp;lt;tt&amp;gt;--container-name&amp;lt;/tt&amp;gt; to Slurm).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
enroot create --name pyxis_ubuntu ubuntu+24.04.sqsh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Start a container ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# interactive shell, read-write&lt;br /&gt;
enroot start --rw pyxis_ubuntu bash&lt;br /&gt;
&lt;br /&gt;
# get root inside the container (to install packages)&lt;br /&gt;
enroot start --root --rw pyxis_ubuntu bash&lt;br /&gt;
&lt;br /&gt;
# mount an extra directory (e.g. from outside /home and /work)&lt;br /&gt;
enroot start --rw -m /tmp/mydata:/data pyxis_ubuntu bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== List and remove containers ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
enroot list --fancy&lt;br /&gt;
enroot remove pyxis_ubuntu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage via Slurm / Pyxis ==&lt;br /&gt;
&lt;br /&gt;
=== Interactive allocation ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# use an already-created container&lt;br /&gt;
salloc -p cpu --container-name=ubuntu&lt;br /&gt;
&lt;br /&gt;
# pull, create and start in one step (container is created under ~/.local/share/enroot/)&lt;br /&gt;
salloc -p cpu --container-image=ubuntu:24.04 --container-name=ubuntu&lt;br /&gt;
salloc -p cpu --container-image=&amp;quot;quay.io#rockylinux/rockylinux:9&amp;quot; --container-name=rocky&lt;br /&gt;
salloc -p l40s --gres=gpu:1 --container-image=&amp;quot;nvcr.io#nvidia/pytorch:24.01-py3&amp;quot; --container-name=pytorch&lt;br /&gt;
&lt;br /&gt;
# start with a specific working directory inside the container&lt;br /&gt;
# $(ws_find enroot) is evaluated on the login node before the job starts&lt;br /&gt;
salloc -p cpu --container-name=ubuntu --container-workdir=$(ws_find enroot)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Batch job ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -p cpu&lt;br /&gt;
#SBATCH --container-name=ubuntu&lt;br /&gt;
&lt;br /&gt;
python3 /work/classic/myWs/train.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Useful Pyxis options ===&lt;br /&gt;
&lt;br /&gt;
All options are listed in &amp;lt;tt&amp;gt;srun --help&amp;lt;/tt&amp;gt; under &amp;#039;&amp;#039;Options provided by plugins&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Option !! Effect&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;--container-name=NAME&amp;lt;/tt&amp;gt; || Use existing enroot container (omit the &amp;lt;tt&amp;gt;pyxis_&amp;lt;/tt&amp;gt; prefix)&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;--container-image=IMAGE&amp;lt;/tt&amp;gt; || Pull image from registry and create container on the fly&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;--container-mount-home&amp;lt;/tt&amp;gt; || Mount &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt; into container (already in defaults, but explicit flag also works)&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;--container-mounts=SRC:DST[,…]&amp;lt;/tt&amp;gt; || Bind-mount additional paths&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;--container-writable&amp;lt;/tt&amp;gt; || Make the container overlay writable&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;--container-remap-root&amp;lt;/tt&amp;gt; || Become root inside the container (no real root on host)&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;tt&amp;gt;--container-workdir=PATH&amp;lt;/tt&amp;gt; || Set the working directory inside the container&lt;br /&gt;
|}&lt;br /&gt;
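&lt;br /&gt;
These options can be combined in one command line; a sketch combining several of them (the partition, container name, and mount paths are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# run as in-container root with a writable overlay and one extra bind mount&lt;br /&gt;
srun -p cpu --container-name=ubuntu \&lt;br /&gt;
     --container-mounts=/tmp/mydata:/data \&lt;br /&gt;
     --container-remap-root --container-writable \&lt;br /&gt;
     apt-get update&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;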
&lt;br /&gt;
== GPU access ==&lt;br /&gt;
&lt;br /&gt;
GPU passthrough is &amp;#039;&amp;#039;&amp;#039;automatic&amp;#039;&amp;#039;&amp;#039;; no extra flags are needed.&lt;br /&gt;
Enroot/Pyxis detect the allocated GPUs via Slurm&amp;#039;s GRES mechanism and make them available inside the container.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salloc -p l40s --gres=gpu:1 --container-name=pytorch&lt;br /&gt;
# nvidia-smi works inside the container out of the box&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tips ==&lt;br /&gt;
&lt;br /&gt;
* Install extra packages interactively with &amp;lt;tt&amp;gt;enroot start --root --rw&amp;lt;/tt&amp;gt; before submitting batch jobs.&lt;br /&gt;
* Use &amp;lt;tt&amp;gt;--container-writable&amp;lt;/tt&amp;gt; in batch jobs only if your script modifies the container filesystem; otherwise the default overlay is discarded after the job anyway.&lt;br /&gt;
* Store large datasets in workspaces (&amp;lt;tt&amp;gt;/work&amp;lt;/tt&amp;gt;), not in &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt;, to avoid filling your home quota with data files.&lt;/div&gt;</summary>
		<author><name>M Janczyk</name></author>
	</entry>
</feed>