BwUniCluster2.0/Containers
To date, only a few container runtime environments integrate well with HPC environments, due to security concerns and differing assumptions in some areas.

For example, native Docker environments require elevated privileges, which is not an option on shared HPC resources. Docker's "rootless mode" is currently not supported on our HPC systems either, because it does not support necessary features such as cgroups resource controls, security profiles and overlay networks; furthermore, GPU passthrough is difficult. The necessary subuid (newuidmap) and subgid (newgidmap) settings may also pose security issues.

On bwUniCluster, Enroot and Singularity are supported.

Further rootless container runtime environments (Podman, …) might be supported in the future, depending on how support for e.g. network interconnects, security features and HPC file systems develops.
= ENROOT =
Enroot enables you to run Docker containers on HPC systems. It is developed by NVIDIA. It is the recommended tool for using containers on bwUniCluster and integrates well with GPU usage.

Enroot is available to all users by default.
== Usage ==

Excellent documentation is provided on NVIDIA's GitHub page. This page therefore confines itself to simple examples that introduce the essential functionality.

Using Docker containers with Enroot requires three steps:

* Importing an image
* Creating a container
* Starting a container

Optionally, containers can also be exported and transferred.
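Taken together, the three steps form a short pipeline. A minimal sketch using the alpine image from the import example below:

 enroot import docker://alpine             # step 1: pull the image, producing alpine.sqsh
 enroot create --name alpine alpine.sqsh   # step 2: unpack the image into a container
 enroot start alpine sh                    # step 3: run a shell inside the container (alpine ships sh, not bash)

The individual steps are explained in the following sections.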
=== Importing a container image ===

 enroot import docker://alpine

This pulls the latest alpine image from Docker Hub (the default registry). You will obtain the file alpine.sqsh.

 enroot import docker://nvcr.io#nvidia/pytorch:21.04-py3

This pulls the pytorch image with tag 21.04-py3 from NVIDIA's NGC registry. You will obtain the file nvidia+pytorch+21.04-py3.sqsh.

 enroot import docker://registry.scc.kit.edu#myProject/myImage:latest

This pulls the latest version of your image from the KIT registry. You will obtain the file myImage.sqsh.
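Private registries (and NGC images that require an API key) need credentials. Enroot reads them from a netrc-style credentials file; a minimal sketch, assuming the default location <code>$HOME/.config/enroot/.credentials</code> and placeholder tokens you must replace:

 # $HOME/.config/enroot/.credentials -- consulted by enroot import
 machine nvcr.io login $oauthtoken password <myApiKey>
 machine registry.scc.kit.edu login <myUser> password <myToken>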
=== Creating a container ===

 enroot create --name nvidia+pytorch+21.04-py3 nvidia+pytorch+21.04-py3.sqsh

This creates a container named nvidia+pytorch+21.04-py3 by unpacking the .sqsh file.

"Creating" a container means that the squashed container image is unpacked inside <code>$ENROOT_DATA_PATH/</code>. By default this variable points to <code>$HOME/.local/share/enroot/</code>.
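Home directories on HPC systems are usually quota-limited, and unpacked containers can be large. If this becomes a problem, you can redirect <code>$ENROOT_DATA_PATH</code> before creating containers; a sketch, where the target directory is a hypothetical placeholder:

 # Unpack containers into a roomier location instead of $HOME
 export ENROOT_DATA_PATH=<myWorkspace>/enroot/data
 enroot create --name nvidia+pytorch+21.04-py3 nvidia+pytorch+21.04-py3.sqsh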
=== Starting a container ===

 enroot start --rw nvidia+pytorch+21.04-py3 bash

This starts the container nvidia+pytorch+21.04-py3 in read-write mode (--rw) and runs bash inside the container.

 enroot start --root --rw nvidia+pytorch+21.04-py3 bash

This starts the container in read-write mode and gives you root access (--root) inside the container. You can now install software with root privileges, depending on the containerized Linux distribution, e.g. with <code>apt-get install …</code>, <code>apk add …</code>, <code>yum install …</code> or <code>pacman -S …</code>.

 enroot start -m <localDir>:/work --rw nvidia+pytorch+21.04-py3 bash

This starts the container and mounts (-m) a local directory to /work inside the container.

 enroot start -m <localDir>:/work --rw nvidia+pytorch+21.04-py3 jupyter lab

This starts the container, mounts a directory and starts the application JupyterLab.
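On the compute nodes you will typically start a container from inside a batch job rather than interactively. A minimal Slurm job script sketch; the partition name, resource requests and <code>train.py</code> are hypothetical placeholders you must adapt:

 #!/bin/bash
 #SBATCH --partition=gpu_4
 #SBATCH --gres=gpu:1
 #SBATCH --time=01:00:00
 
 # Run a training script inside the PyTorch container;
 # <localDir> is mounted to /work inside the container.
 enroot start -m <localDir>:/work nvidia+pytorch+21.04-py3 python /work/train.py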
=== Exporting and transferring containers ===

If you intend to use Docker images which you built e.g. on your local desktop and want to transfer them somewhere else, there are several possibilities to do so:

 enroot import --output myImage.sqsh dockerd://myImage

Build an image via docker build and a Dockerfile, then import this image from the Docker daemon. Copy the .sqsh file to bwUniCluster and unpack it with <code>enroot create</code>.

 enroot export --output myImage.sqsh myImage

Export an existing Enroot container to a .sqsh file. Copy the .sqsh file to bwUniCluster and unpack it with <code>enroot create</code>.

 enroot bundle --output myImage.run myImage.sqsh

Create a self-extracting bundle from a container image. Copy the .run file to bwUniCluster. You can run the self-extracting image via ./myImage.run even if Enroot is not installed!
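Put together, the first workflow looks like this; the ssh host alias <code>bwunicluster</code> is a hypothetical placeholder for your login-node address:

 # On your local desktop:
 docker build -t myImage .                 # build from a Dockerfile in the current directory
 enroot import --output myImage.sqsh dockerd://myImage
 scp myImage.sqsh bwunicluster:            # copy the squashed image to the cluster
 
 # On bwUniCluster:
 enroot create --name myImage myImage.sqsh
 enroot start --rw myImage bash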
=== Container management ===

You can list all containers on the system with the <code>enroot list</code> command; the --fancy parameter prints additional information.

Unpacked containers can be removed with the <code>enroot remove</code> command.
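For example, using the container created above:

 enroot list --fancy                        # list containers with additional information
 enroot remove nvidia+pytorch+21.04-py3     # delete the unpacked container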
= Singularity =