<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=F+Bartusch</id>
	<title>bwHPC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=F+Bartusch"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/e/Special:Contributions/F_Bartusch"/>
	<updated>2026-04-11T12:34:09Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=NEMO2/Workspaces&amp;diff=15730</id>
		<title>NEMO2/Workspaces</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=NEMO2/Workspaces&amp;diff=15730"/>
		<updated>2026-02-11T08:58:49Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: Group workspaces works with BinAC 2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;border: 3px solid #ffc107; padding: 15px; background-color: #fff3cd; margin: 10px 0;&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; This is the updated Workspaces guide. The old version is still available at [[Workspace]].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Workspace tools&#039;&#039;&#039; provide temporary storage spaces called &#039;&#039;&#039;workspaces&#039;&#039;&#039; for your calculations. They are meant for data that needs to persist longer than a single job, but not permanently.&lt;br /&gt;
&lt;br /&gt;
== What are Workspaces? ==&lt;br /&gt;
&lt;br /&gt;
Workspaces give you access to the cluster&#039;s fast parallel filesystems (like Lustre or Weka). &#039;&#039;&#039;You cannot write to arbitrary locations on these parallel filesystems&#039;&#039;&#039; - workspaces provide your designated area.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Use workspaces for:&#039;&#039;&#039;&lt;br /&gt;
* Jobs generating intermediate data&lt;br /&gt;
* Data shared between multiple compute nodes&lt;br /&gt;
* Multi-step workflows&lt;br /&gt;
* Temporary scratch space during calculations&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Don&#039;t use workspaces for:&#039;&#039;&#039;&lt;br /&gt;
* Permanent storage (use HOME or project directories)&lt;br /&gt;
* Single-node temporary files (use &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; instead)&lt;br /&gt;
&lt;br /&gt;
== Important - Read First ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;No Backup:&#039;&#039;&#039; Data is &#039;&#039;&#039;not backed up&#039;&#039;&#039; and will be &#039;&#039;&#039;automatically deleted&#039;&#039;&#039; after expiration&lt;br /&gt;
* &#039;&#039;&#039;Time-limited:&#039;&#039;&#039; Lifetime typically 30-100 days (cluster-specific). See [[Workspaces/Advanced_Features/Quotas#Cluster-Specific_Workspace_Limits|Cluster Limits]]&lt;br /&gt;
* &#039;&#039;&#039;Automatic Reminders:&#039;&#039;&#039; Email notifications before expiration&lt;br /&gt;
* &#039;&#039;&#039;Backup Important Data:&#039;&#039;&#039; Copy results to permanent storage before expiration&lt;br /&gt;
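The backup step can be scripted with standard tools. A minimal sketch, assuming a workspace named myWs and a results directory under $HOME (both placeholder names), and using the &lt;tt&gt;ws_find&lt;/tt&gt; command described below:

```shell
# Sketch: generate a helper script that copies a workspace to permanent
# storage before it expires. "myWs" and the destination are placeholders;
# ws_find is the workspace path lookup tool from this guide.
cat > backup_ws.sh <<'EOF'
#!/bin/bash
set -eu
WS_PATH=$(ws_find myWs)                      # resolve the workspace path
mkdir -p "$HOME/results/myWs"                # permanent target directory
rsync -av "$WS_PATH/" "$HOME/results/myWs/"  # copy everything out
EOF
chmod +x backup_ws.sh
```

Running such a script on a login node before the expiration date preserves the results; the workspace itself can then be left to expire or be released.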
&lt;br /&gt;
&#039;&#039;&#039;For advanced features and detailed options:&#039;&#039;&#039; [[Workspaces/Advanced Features]]&lt;br /&gt;
&lt;br /&gt;
== Command Overview ==&lt;br /&gt;
&lt;br /&gt;
Main commands:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;tt&amp;gt;ws_allocate&amp;lt;/tt&amp;gt; - Create or extend workspace&lt;br /&gt;
* &amp;lt;tt&amp;gt;ws_list&amp;lt;/tt&amp;gt; - List your workspaces&lt;br /&gt;
* &amp;lt;tt&amp;gt;ws_find&amp;lt;/tt&amp;gt; - Find workspace path (for scripts)&lt;br /&gt;
* &amp;lt;tt&amp;gt;ws_extend&amp;lt;/tt&amp;gt; - Extend workspace lifetime&lt;br /&gt;
* &amp;lt;tt&amp;gt;ws_release&amp;lt;/tt&amp;gt; - Release (delete) workspace&lt;br /&gt;
* &amp;lt;tt&amp;gt;ws_restore&amp;lt;/tt&amp;gt; - Restore expired/released workspace&lt;br /&gt;
* &amp;lt;tt&amp;gt;ws_register&amp;lt;/tt&amp;gt; - Create symbolic links&lt;br /&gt;
&lt;br /&gt;
All commands support &amp;lt;tt&amp;gt;-h&amp;lt;/tt&amp;gt; for help.&lt;br /&gt;
&lt;br /&gt;
== Quick Start ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:40%&amp;quot; | Task&lt;br /&gt;
!style=&amp;quot;width:60%&amp;quot; | Command&lt;br /&gt;
|-&lt;br /&gt;
|Create workspace (30 days)&lt;br /&gt;
|&amp;lt;tt&amp;gt;ws_allocate myWs 30&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|Create group workspace&lt;br /&gt;
|&amp;lt;tt&amp;gt;ws_allocate -G groupname myWs 30&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|List all workspaces&lt;br /&gt;
|&amp;lt;tt&amp;gt;ws_list&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|See what expires soon&lt;br /&gt;
|&amp;lt;tt&amp;gt;ws_list -Rr&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;-R&amp;lt;/tt&amp;gt;=by time, &amp;lt;tt&amp;gt;-r&amp;lt;/tt&amp;gt;=reverse)&lt;br /&gt;
|-&lt;br /&gt;
|Find path (for scripts)&lt;br /&gt;
|&amp;lt;tt&amp;gt;ws_find myWs&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|Extend by 30 days&lt;br /&gt;
|&amp;lt;tt&amp;gt;ws_extend myWs 30&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|Delete workspace&lt;br /&gt;
|&amp;lt;tt&amp;gt;ws_release myWs&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|Restore workspace&lt;br /&gt;
|&amp;lt;tt&amp;gt;ws_restore -l&amp;lt;/tt&amp;gt; then &amp;lt;tt&amp;gt;ws_restore oldname newname&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Create Workspace ==&lt;br /&gt;
&lt;br /&gt;
Create a workspace with a &#039;&#039;&#039;name&#039;&#039;&#039; and &#039;&#039;&#039;lifetime&#039;&#039;&#039; in days:&lt;br /&gt;
&lt;br /&gt;
   $ ws_allocate myWs 30                    # Create for 30 days&lt;br /&gt;
&lt;br /&gt;
Returns:&lt;br /&gt;
 &lt;br /&gt;
   Workspace created. Duration is 720 hours. &lt;br /&gt;
   Further extensions available: 3&lt;br /&gt;
   /work/workspace/scratch/username-myWs-0&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Common options:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
   $ ws_allocate -G groupname myWs 30       # Group-writable (for teams)&lt;br /&gt;
   $ ws_allocate -g myWs 30                 # Group-readable&lt;br /&gt;
   $ ws_allocate -F ffuc myWs 30            # bwUniCluster 3.0: Flash filesystem&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capture path in variable:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
   $ WORKSPACE=$(ws_allocate myWs 30)&lt;br /&gt;
   $ cd &amp;quot;$WORKSPACE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Running the same command again is safe: it returns the path of the existing workspace.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Details:&#039;&#039;&#039; [[Workspaces/Advanced_Features/ws_allocate|Advanced Features guide]]&lt;br /&gt;
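Because re-running &lt;tt&gt;ws_allocate&lt;/tt&gt; is safe, a batch job can allocate (or reuse) its workspace unconditionally at startup. A minimal Slurm job-script sketch; the workspace name, lifetime, resources, and program are placeholder assumptions:

```shell
# Sketch: generate a Slurm job script that allocates or reuses a workspace
# and runs inside it. All names and resource values are placeholders.
cat > ws_job.sh <<'EOF'
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=02:00:00
WORKSPACE=$(ws_allocate myWs 30)   # returns the existing path on re-runs
cd "$WORKSPACE"
./my_simulation > result.log       # placeholder for the actual program
EOF
```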
&lt;br /&gt;
== List Your Workspaces ==&lt;br /&gt;
&lt;br /&gt;
   $ ws_list                                # List all workspaces&lt;br /&gt;
&lt;br /&gt;
Shows workspace ID, location, extensions, creation date, and remaining time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Common options:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
   $ ws_list -Rr                            # Sort by remaining time, reverse (last to expire first)&lt;br /&gt;
   $ ws_list -s                             # Short format (names only, for scripts)&lt;br /&gt;
   $ ws_list -g                             # Show group workspaces&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; To list expired workspaces for restore, use &amp;lt;tt&amp;gt;ws_restore -l&amp;lt;/tt&amp;gt;. See [[#Restore_Workspace|Restore]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Details:&#039;&#039;&#039; [[Workspaces/Advanced_Features/ws_list|Advanced Features guide]]&lt;br /&gt;
&lt;br /&gt;
== Extend Workspace Lifetime ==&lt;br /&gt;
&lt;br /&gt;
Extend before workspace expires:&lt;br /&gt;
&lt;br /&gt;
   $ ws_extend myWs 30                      # Extend by 30 days from now&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Alternative:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
   $ ws_allocate -x myWs 30                 # Same result&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Each extension consumes one available extension. See [[Workspaces/Advanced_Features/Quotas#Cluster-Specific_Workspace_Limits|Cluster Limits]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Group workspaces:&#039;&#039;&#039; See [[#Extend_Group_Workspace|Extend Group Workspace]] for extending workspaces created by colleagues.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Details:&#039;&#039;&#039; [[Workspaces/Advanced_Features/ws_extend|Advanced Features guide]]&lt;br /&gt;
&lt;br /&gt;
== Release (Delete) Workspace ==&lt;br /&gt;
&lt;br /&gt;
When no longer needed:&lt;br /&gt;
&lt;br /&gt;
   $ ws_release myWs                        # Release workspace&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What happens:&#039;&#039;&#039;&lt;br /&gt;
* Workspace becomes inaccessible immediately&lt;br /&gt;
* Data kept for short grace period (typically until next cleanup)&lt;br /&gt;
* Can be restored with &amp;lt;tt&amp;gt;ws_restore&amp;lt;/tt&amp;gt; during grace period&lt;br /&gt;
* May still count toward quota until final deletion&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Details:&#039;&#039;&#039; [[Workspaces/Advanced_Features/ws_release|Advanced Features guide]]&lt;br /&gt;
&lt;br /&gt;
== Restore Workspace ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:40%&amp;quot; | Works on cluster&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | bwUC 3.0&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | BinAC2&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | Helix&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | JUSTUS 2&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | NEMO2&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | DACHS&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;tt&amp;gt;ws_restore&amp;lt;/tt&amp;gt;&lt;br /&gt;
|style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
|style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
|style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
|style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
|style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
|style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Recover expired or released workspaces within grace period:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Restore procedure:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
   $ ws_restore -l                          # (1) List restorable workspaces&lt;br /&gt;
   $ ws_allocate restored 60                # (2) Create target workspace&lt;br /&gt;
   $ ws_restore username-myWs-0 restored    # (3) Restore&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Use the &#039;&#039;&#039;full name&#039;&#039;&#039; from &amp;lt;tt&amp;gt;ws_restore -l&amp;lt;/tt&amp;gt; (with username and timestamp), not the short name.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Details:&#039;&#039;&#039; [[Workspaces/Advanced_Features/ws_restore|Advanced Features guide]]&lt;br /&gt;
&lt;br /&gt;
== Work with Groups (Share Workspaces) ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:40%&amp;quot; | Works on cluster&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | bwUC 3.0&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | BinAC2&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | Helix&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | JUSTUS 2&lt;br /&gt;
!style=&amp;quot;width:10%&amp;quot; | NEMO2&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt; (group-readable)&lt;br /&gt;
| style=&amp;quot;text-align:center;&amp;quot; | &lt;br /&gt;
| style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
| style=&amp;quot;text-align:center;&amp;quot; | &lt;br /&gt;
| style=&amp;quot;text-align:center;&amp;quot; | &lt;br /&gt;
|style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;tt&amp;gt;-G&amp;lt;/tt&amp;gt; (group-writable)&lt;br /&gt;
| style=&amp;quot;text-align:center;&amp;quot; | &lt;br /&gt;
| style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
| style=&amp;quot;text-align:center;&amp;quot; | &lt;br /&gt;
| style=&amp;quot;text-align:center;&amp;quot; | &lt;br /&gt;
|style=&amp;quot;background-color:#90EE90; text-align:center;&amp;quot; | ✓&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Simple team collaboration with group workspaces:&lt;br /&gt;
&lt;br /&gt;
=== Create Group Workspace ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Group-readable (read-only):&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
   $ ws_allocate -g myWs 30&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Group-writable (recommended):&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
   $ ws_allocate -G projectgroup myWs 30    # Replace &#039;projectgroup&#039; with your group&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tip:&#039;&#039;&#039; Set default in &amp;lt;tt&amp;gt;~/.ws_user.conf&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
groupname: projectgroup&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then simply: &amp;lt;tt&amp;gt;ws_allocate myWs 30&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== List Group Workspaces ===&lt;br /&gt;
&lt;br /&gt;
   $ ws_list -g                             # Show all group workspaces&lt;br /&gt;
&lt;br /&gt;
=== Extend Group Workspace ===&lt;br /&gt;
&lt;br /&gt;
Anyone in the group can extend group-writable workspaces (&amp;lt;tt&amp;gt;-G&amp;lt;/tt&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
   $ ws_extend myWs 30                      # If you created it&lt;br /&gt;
   $ ws_allocate -x -u alice myWs 30        # If colleague created it&lt;br /&gt;
&lt;br /&gt;
=== Manage Reminders ===&lt;br /&gt;
&lt;br /&gt;
Take over reminder responsibility:&lt;br /&gt;
&lt;br /&gt;
   $ ws_allocate -r 7 -u alice -x myWs 0    # Update timing and take over reminders&lt;br /&gt;
&lt;br /&gt;
Changes reminder to 7 days before expiration and redirects emails to you.&lt;br /&gt;
&lt;br /&gt;
=== Why Use Group Workspaces? ===&lt;br /&gt;
&lt;br /&gt;
* Simple collaboration - everyone accesses same data&lt;br /&gt;
* No permission problems - automatic group permissions&lt;br /&gt;
* Independent extensions - team can extend without creator&lt;br /&gt;
* Easy discovery - &amp;lt;tt&amp;gt;ws_list -g&amp;lt;/tt&amp;gt; shows all team workspaces&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Advanced sharing:&#039;&#039;&#039; [[Workspaces/Advanced_Features/Sharing|Sharing guide]] for ACLs and ws_share&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15729</id>
		<title>BinAC2/Hardware and Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15729"/>
		<updated>2026-02-11T08:57:04Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: Change link to new workspace documentation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Hardware and Architecture =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 supports researchers from the broader fields of Bioinformatics, Medical Informatics, Astrophysics, Geosciences and Pharmacy.&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2 schema.png|600px|thumb|center|Overview on the BinAC 2 hardware architecture.]]&lt;br /&gt;
&lt;br /&gt;
== Operating System and Software ==&lt;br /&gt;
&lt;br /&gt;
* Operating System: Rocky Linux 9.6&lt;br /&gt;
* Queuing System: [https://slurm.schedmd.com/documentation.html Slurm] (see [[BinAC2/Slurm]] for help)&lt;br /&gt;
* (Scientific) Libraries and Software: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 offers compute nodes, high-mem nodes, and three types of GPU nodes.&lt;br /&gt;
* 180 compute nodes&lt;br /&gt;
* 16 SMP nodes&lt;br /&gt;
* 32 GPU nodes (2xA30)&lt;br /&gt;
* 8 GPU nodes (4xA100)&lt;br /&gt;
* 4 GPU nodes (4xH200)&lt;br /&gt;
* plus several special purpose nodes for login, interactive jobs, etc.&lt;br /&gt;
&lt;br /&gt;
Compute node specification:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Standard&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| High-Mem&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A30)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A100)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (H200)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot;| Quantity&lt;br /&gt;
| 168 / 12 &lt;br /&gt;
| 14 / 2&lt;br /&gt;
| 32&lt;br /&gt;
| 8&lt;br /&gt;
| 4&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processors&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7443.html AMD EPYC Milan 7443] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/9005-series/amd-epyc-9555.html AMD EPYC Turin 9555]&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processor Base Frequency (GHz)&lt;br /&gt;
| 2.80 / 2.95&lt;br /&gt;
| 2.85 / 2.95&lt;br /&gt;
| 2.80&lt;br /&gt;
| 2.80&lt;br /&gt;
| 3.20&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Number of Physical Cores / Hyperthreads&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 48 / 96 // 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 128 / 256&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Working Memory (GB)&lt;br /&gt;
| 512&lt;br /&gt;
| 2048&lt;br /&gt;
| 512&lt;br /&gt;
| 512&lt;br /&gt;
| 1536&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Local Disk (GiB)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 28000 (NVMe-SSD)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Interconnect&lt;br /&gt;
| HDR 100 IB (84 nodes) / 100GbE (96 nodes)&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| HDR 200 IB + 100GbE&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Coprocessors&lt;br /&gt;
| -&lt;br /&gt;
| -&lt;br /&gt;
| 2 x [https://www.nvidia.com/de-de/data-center/products/a30-gpu/ NVIDIA A30 (24 GB ECC HBM2, NVLink)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/a100/ NVIDIA A100 (80 GB ECC HBM2e)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/h200/ NVIDIA H200 NVL  (141 GB ECC HBM3e, NVLink)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Network =&lt;br /&gt;
&lt;br /&gt;
The compute nodes and the parallel file system are connected via 100 GbE.&amp;lt;br /&amp;gt;&lt;br /&gt;
In contrast to BinAC 1, not all compute nodes are connected via InfiniBand; 84 standard compute nodes are additionally connected via HDR100 InfiniBand (100 Gbit/s). To get your jobs onto the InfiniBand nodes, submit them with &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt;.&lt;br /&gt;
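For example, a multi-node MPI job can request the InfiniBand nodes in its batch header; in this sketch the node count, task count, and program name are placeholder assumptions:

```shell
# Sketch: generate a Slurm job script that targets the InfiniBand-connected
# nodes via the "ib" constraint. Resource values are placeholders.
cat > ib_job.sh <<'EOF'
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
#SBATCH --constraint=ib     # schedule onto the HDR InfiniBand nodes
mpirun ./my_mpi_program     # placeholder MPI program
EOF
```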
&lt;br /&gt;
&#039;&#039;&#039;Question:&#039;&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
OpenMPI throws the following warning:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
No OpenFabrics connection schemes reported that they were able to be&lt;br /&gt;
used on a specific port.  As such, the openib BTL (OpenFabrics&lt;br /&gt;
support) will be disabled for this port.&lt;br /&gt;
  Local host:           node1-083&lt;br /&gt;
  Local device:         mlx5_0&lt;br /&gt;
  Local port:           1&lt;br /&gt;
  CPCs attempted:       rdmacm, udcm&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
[node1-083:2137377] 3 more processes have sent help message help-mpi-btl-openib-cpc-base.txt / no cpcs for port&lt;br /&gt;
[node1-083:2137377] Set MCA parameter &amp;quot;orte_base_help_aggregate&amp;quot; to 0 to see all help / error messages&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
What should I do?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Answer:&#039;&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
BinAC 2 has two (almost) separate networks, a 100 GbE network and an InfiniBand network, each connecting a subset of the nodes. The two networks require different cables and switches.&lt;br /&gt;
The nodes, however, are equipped with VPI network cards, which can be configured to work in either mode (https://docs.nvidia.com/networking/display/connectx6vpi/specifications#src-2487215234_Specifications-MCX653105A-ECATSpecifications).&lt;br /&gt;
OpenMPI can use a number of layers for transferring data and messages between processes. When it starts up, it tests all means of communication that were configured during compilation and then tries to figure out the fastest path between all processes.&lt;br /&gt;
If OpenMPI encounters such a VPI card, it will first try to establish a Remote Direct Memory Access (RDMA) communication channel using the OpenFabrics (OFI) layer.&lt;br /&gt;
On nodes with 100 Gb Ethernet this fails, as no RDMA protocol is configured there. OpenMPI falls back to TCP transport, but not without complaints.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Workaround:&#039;&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
For single-node jobs, or for jobs on regular compute nodes and on the A30 and A100 GPU nodes, add the lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export OMPI_MCA_btl=&amp;quot;^ofi,openib&amp;quot;&lt;br /&gt;
export OMPI_MCA_mtl=&amp;quot;^ofi&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to your job script to disable the OFI transport layer. If you need high-bandwidth, low-latency transport between all processes on all nodes, switch to the InfiniBand partition (&amp;lt;code&amp;gt;#SBATCH --constraint=ib&amp;lt;/code&amp;gt;). &#039;&#039;Do not turn off the OFI layer on InfiniBand nodes, as it is the best choice between nodes!&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= File Systems =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 consists of two separate storage systems: one for the users&#039; home directories ($HOME) and one serving as project/work space.&lt;br /&gt;
The home directory is limited in space and parallel access but offers snapshots of your files and backup.&lt;br /&gt;
&lt;br /&gt;
The project/work storage is a parallel file system (PFS) which offers fast, parallel file access and a larger capacity than the home directory. It is mounted at &amp;lt;code&amp;gt;/pfs/10&amp;lt;/code&amp;gt; on the login and compute nodes. This storage is based on Lustre and can be accessed in parallel from many nodes. The PFS contains the project and the work directory. Each compute project has its own directory at &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; that is accessible to all members of the compute project.&lt;br /&gt;
Each user can create workspaces under &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; using the workspace tools. These directories are only accessible for the user who created the workspace.&lt;br /&gt;
&lt;br /&gt;
Additionally, each compute node provides high-speed temporary storage on a node-local solid state disk (SSD), available via the $TMPDIR environment variable.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| project&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| work&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Visibility&lt;br /&gt;
| global &lt;br /&gt;
| global&lt;br /&gt;
| global&lt;br /&gt;
| node local&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Lifetime&lt;br /&gt;
| permanent&lt;br /&gt;
| permanent&lt;br /&gt;
| work space lifetime (max. 30 days, max. 5 extensions)&lt;br /&gt;
| batch job walltime&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Capacity&lt;br /&gt;
| -&lt;br /&gt;
| 8.1 PB&lt;br /&gt;
| 1000 TB&lt;br /&gt;
| 480 GB (compute nodes); 7.7 TB (GPU-A30 nodes); 16 TB (GPU-A100 and SMP nodes); 31 TB (GPU-H200 nodes)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | File System Type&lt;br /&gt;
| NFS&lt;br /&gt;
| Lustre&lt;br /&gt;
| Lustre&lt;br /&gt;
| XFS&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Speed (read)&lt;br /&gt;
| ≈ 1 GB/s, shared by all nodes&lt;br /&gt;
| max. 12 GB/s&lt;br /&gt;
| ≈ 145 GB/s peak, aggregated over 56 nodes, ideal striping&lt;br /&gt;
| ≈ 3 GB/s (compute) / ≈ 5 GB/s (GPU-A30) / ≈ 26 GB/s (GPU-A100 + SMP) / ≈ 42 GB/s (GPU-H200) per node&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | [https://en.wikipedia.org/wiki/Disk_quota#Quotas Quotas]&lt;br /&gt;
| 40 GB per user&lt;br /&gt;
| not yet, maybe in the future&lt;br /&gt;
| none&lt;br /&gt;
| none&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Backup&lt;br /&gt;
| yes (nightly)&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
  global             : all nodes access the same file system&lt;br /&gt;
  local              : each node has its own file system&lt;br /&gt;
  permanent          : files are stored permanently&lt;br /&gt;
  batch job walltime : files are removed at end of the batch job&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;color:red; background-color:#ffffcc;&amp;quot; cellpadding=&amp;quot;10&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
Please note that due to the large capacity of &#039;&#039;&#039;work&#039;&#039;&#039; and &#039;&#039;&#039;project&#039;&#039;&#039; and due to frequent file changes on these file systems, no backup can be provided.&amp;lt;br /&amp;gt;&lt;br /&gt;
Backing up these file systems would require a redundant storage facility with multiple times the capacity of &#039;&#039;&#039;project&#039;&#039;&#039;. Furthermore, regular backups would significantly degrade the performance.&amp;lt;br /&amp;gt;&lt;br /&gt;
Data is stored redundantly, i.e. it is immune to disk failures, but not to catastrophic incidents like cyber attacks or a fire in the server room.&amp;lt;br /&amp;gt;&lt;br /&gt;
Please consider using one of the remote storage facilities like [https://wiki.bwhpc.de/e/SDS@hd SDS@hd], [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS], [https://www.scc.kit.edu/en/services/lsdf.php LSDF Online Storage] or the [https://www.rda.kit.edu/english/ bwDataArchive] to back up your valuable data.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Home ===&lt;br /&gt;
&lt;br /&gt;
Home directories are meant for permanent storage of files that are used repeatedly, such as source code, configuration files, and executable programs; the content of home directories is backed up on a regular basis.&lt;br /&gt;
Because the backup space is limited, we enforce a quota of 40 GB on the home directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;&lt;br /&gt;
Compute jobs on the nodes must not write temporary data to $HOME.&lt;br /&gt;
Instead, they should use the node-local $TMPDIR directory for I/O-heavy use cases&lt;br /&gt;
and workspaces for less I/O-intensive multi-node jobs.&lt;br /&gt;
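This rule translates into a stage-in/compute/stage-out pattern; a sketch with placeholder file names and a placeholder workspace called myWs:

```shell
# Sketch: generate a job script that keeps heavy I/O on the node-local
# $TMPDIR and copies only results to a workspace. Names are placeholders.
cat > tmpdir_job.sh <<'EOF'
#!/bin/bash
#SBATCH --ntasks=1
cp "$HOME/input.dat" "$TMPDIR/"      # stage input onto the fast local disk
cd "$TMPDIR"
./my_program input.dat > output.dat  # heavy I/O stays node-local
cp output.dat "$(ws_find myWs)/"     # keep only the results
EOF
```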
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Current disk usage on home directory and quota status can be checked with the &#039;&#039;&#039;diskusage&#039;&#039;&#039; command: &lt;br /&gt;
 $ diskusage&lt;br /&gt;
 &lt;br /&gt;
 User           	   Used (GB)	  Quota (GB)	Used (%)&lt;br /&gt;
 ------------------------------------------------------------------------&lt;br /&gt;
 &amp;lt;username&amp;gt;                4.38               100.00             4.38&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
=== Project ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Data in the project directory is stored on HDDs. The primary focus of &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; is pure capacity, not speed.&lt;br /&gt;
&lt;br /&gt;
Every project gets a dedicated directory located at:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
/pfs/10/project/&amp;lt;project_id&amp;gt;/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can check the project(s) you are a member of via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ id $USER | grep -o &#039;bw[^)]*&#039;&lt;br /&gt;
bw16f003&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, your project directory would be:&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
/pfs/10/project/bw16f003/&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
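The two steps above can be combined into one snippet; the "bw" group-name pattern follows the example output and is an assumption, so your actual project id may look different:

```shell
# Derive the project directory from the first "bw"-prefixed group reported
# by `id`, mirroring the example above. If no such group exists, PROJECT
# stays empty and the printed path is incomplete.
PROJECT=$(id "${USER:-$(whoami)}" | grep -o 'bw[^)]*' | head -n1)
echo "/pfs/10/project/${PROJECT}/"
```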
&lt;br /&gt;
Check our [[BinAC2/Project_Data_Organization | data organization guide ]] for methods to organize data inside the project directory.&lt;br /&gt;
&lt;br /&gt;
=== Workspaces ===&lt;br /&gt;
&lt;br /&gt;
Data on the fast storage pool at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is stored on SSDs.&lt;br /&gt;
The primary focus of this filesystem is speed, not capacity.&amp;lt;br /&amp;gt;&lt;br /&gt;
Data on &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is organized into so-called &#039;&#039;workspaces&#039;&#039;.&lt;br /&gt;
A workspace is a directory that you create and manage using the workspace tools.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Please always remember that workspaces are intended solely for temporary work data, and there is no backup of data in the workspaces.&amp;lt;br /&amp;gt;&lt;br /&gt;
If you don&#039;t extend your workspaces, they will be deleted at some point.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Workspace tools documentation ====&lt;br /&gt;
&lt;br /&gt;
You can find more information about the workspace tools in the general documentation:&lt;br /&gt;
:: &amp;amp;rarr; &#039;&#039;&#039;[[Workspaces | General workspace tools documentation]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== BinAC 2 workspace policies and limits ====&lt;br /&gt;
&lt;br /&gt;
Some values and behaviors described in the general workspace documentation do not apply to BinAC 2.&amp;lt;br /&amp;gt;&lt;br /&gt;
On BinAC 2, workspaces:&lt;br /&gt;
&lt;br /&gt;
* have a &#039;&#039;&#039;lifetime of 30 days&#039;&#039;&#039;&lt;br /&gt;
* can be extended up to &#039;&#039;&#039;five times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the capacity of &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is limited, workspace expiration is enforced.&lt;br /&gt;
You will receive automated email reminders one week before a workspace expires.&lt;br /&gt;
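&lt;br /&gt;
A typical session with the standard workspace tools (described in the general workspace documentation linked above) might look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Create a workspace named "test" with a lifetime of 30 days&lt;br /&gt;
$ ws_allocate test 30&lt;br /&gt;
&lt;br /&gt;
# List your workspaces with their remaining lifetimes and extensions&lt;br /&gt;
$ ws_list&lt;br /&gt;
&lt;br /&gt;
# Extend the workspace "test" by another 30 days (uses one of the five extensions)&lt;br /&gt;
$ ws_extend test 30&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;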
&lt;br /&gt;
If a workspace’s lifetime expires (i.e. is not extended), it is moved to the special directory&lt;br /&gt;
&amp;lt;code&amp;gt;/pfs/10/work/.removed&amp;lt;/code&amp;gt;.&lt;br /&gt;
Expired workspaces can be restored for a keep time of 14 days.&lt;br /&gt;
After this period, expired workspaces are moved to a special location in the project filesystem.&lt;br /&gt;
After at least six months, a workspace is considered abandoned and is permanently deleted.&lt;br /&gt;
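&lt;br /&gt;
Within the 14-day keep time, an expired workspace can be recovered with the &amp;lt;code&amp;gt;ws_restore&amp;lt;/code&amp;gt; tool. The following is a sketch; the exact invocation and the workspace ID format are shown by &amp;lt;code&amp;gt;ws_restore -h&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;ws_restore -l&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# List your expired workspaces that can still be restored&lt;br /&gt;
$ ws_restore -l&lt;br /&gt;
&lt;br /&gt;
# Restore an expired workspace into an existing workspace named "recovered"&lt;br /&gt;
$ ws_restore &amp;lt;expired-workspace-id&amp;gt; recovered&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;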
&lt;br /&gt;
In addition to the standard workspace tools, we provide the script&lt;br /&gt;
&amp;lt;syntaxhighlight inline&amp;gt;ws_shelve&amp;lt;/syntaxhighlight&amp;gt; for copying a workspace to a project directory.&lt;br /&gt;
&amp;lt;syntaxhighlight inline&amp;gt;ws_shelve&amp;lt;/syntaxhighlight&amp;gt; takes two arguments:&lt;br /&gt;
&lt;br /&gt;
* the name of the workspace to copy&lt;br /&gt;
* the acronym of your project&lt;br /&gt;
&lt;br /&gt;
This command creates and submits a Slurm job that uses &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; to copy the workspace&lt;br /&gt;
from &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;.&lt;br /&gt;
A new directory is created with a timestamp suffix to prevent name collisions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Copy the workspace /pfs/10/work/tu_xyz01-test to /pfs/10/project/bw10a001&lt;br /&gt;
&lt;br /&gt;
$ ws_shelve test bw10a001&lt;br /&gt;
Workspace shelving submitted.&lt;br /&gt;
Job ID: 2058525&lt;br /&gt;
Logs will be written to:&lt;br /&gt;
  /pfs/10/project/bw10a001/.ws_shelve/tu_xyz01/logs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the workspace is copied to&lt;br /&gt;
&amp;lt;syntaxhighlight inline&amp;gt;/pfs/10/project/bw10a001/tu_xyz01/test-&amp;lt;timestamp&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SDS@hd ===&lt;br /&gt;
&lt;br /&gt;
SDS@hd is mounted via NFS on login and compute nodes at &amp;lt;syntaxhighlight inline&amp;gt;/mnt/sds-hd&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To access your Speichervorhaben, the export to BinAC 2 must first be enabled by the SDS@hd-Team. Please contact [mailto:sds-hd-support@urz.uni-heidelberg.de SDS@hd support] and provide the acronym of your Speichervorhaben, along with a request to enable the export to BinAC 2.&lt;br /&gt;
&lt;br /&gt;
Once this has been done, you can access your Speichervorhaben as described in the [https://wiki.bwhpc.de/e/SDS@hd/Access/NFS#Access_your_data SDS documentation].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ kinit $USER&lt;br /&gt;
Password for &amp;lt;user&amp;gt;@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Kerberos ticket store is shared across all nodes. Creating a single ticket is sufficient to access your Speichervorhaben on all nodes.&lt;br /&gt;
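&lt;br /&gt;
If access does not work as expected, you can check whether a valid ticket exists using the standard Kerberos tools:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Show the current ticket cache; the output should list a ticket for the&lt;br /&gt;
# BWSERVICES.UNI-HEIDELBERG.DE realm together with its expiry date&lt;br /&gt;
$ klist&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;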
&lt;br /&gt;
=== More Details on the Lustre File System ===&lt;br /&gt;
[https://www.lustre.org/ Lustre] is a distributed parallel file system.&lt;br /&gt;
* The entire logical volume as presented to the user is formed by multiple physical or local drives. Data is distributed over more than one physical or logical volume/hard drive; single files can therefore be larger than the capacity of a single hard drive.&lt;br /&gt;
* The file system can be mounted from all nodes (&amp;quot;clients&amp;quot;) in parallel at the same time for reading and writing. &amp;lt;i&amp;gt;This also means that technically you can write to the same file from two different compute nodes! Usually, this will create an unpredictable mess! Never ever do this unless you know &amp;lt;b&amp;gt;exactly&amp;lt;/b&amp;gt; what you are doing!&amp;lt;/i&amp;gt;&lt;br /&gt;
* On a single server or client, the bandwidth of multiple network interfaces can be aggregated to increase the throughput (&amp;quot;multi-rail&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Lustre works by chopping files into many small parts (&amp;quot;stripes&amp;quot;, file objects) which are then stored on the object storage servers. The information about which part of a file is stored on which object storage server, when it was last changed, etc., together with the entire directory structure, is stored on the metadata servers. Think of the entries on the metadata server as pointers to the actual file objects on the object storage servers.&lt;br /&gt;
A Lustre file system can consist of many metadata servers (MDS) and object storage servers (OSS).&lt;br /&gt;
Each MDS or OSS can again hold one or more so-called object storage targets (OST) or metadata targets (MDT) which can e.g. be simply multiple hard drives.&lt;br /&gt;
The capacity of a Lustre file system can hence be easily scaled by adding more servers.&lt;br /&gt;
&lt;br /&gt;
==== Useful Lustre Commands ====&lt;br /&gt;
Commands specific to the Lustre file system are divided into user commands (&amp;lt;code&amp;gt;lfs ...&amp;lt;/code&amp;gt;) and administrative commands (&amp;lt;code&amp;gt;lctl ...&amp;lt;/code&amp;gt;). On BinAC2, users may execute only the user commands, and not even all of those.&lt;br /&gt;
* &amp;lt;code&amp;gt;lfs help &amp;lt;command&amp;gt;&amp;lt;/code&amp;gt;: Print built-in help for command; Alternative: &amp;lt;code&amp;gt;man lfs &amp;lt;command&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;lfs find&amp;lt;/code&amp;gt;: Drop-in replacement for the &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt; command, much faster on Lustre filesystems as it talks directly to the metadata server&lt;br /&gt;
* &amp;lt;code&amp;gt;lfs --list-commands&amp;lt;/code&amp;gt;: Print a list of available commands&lt;br /&gt;
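&lt;br /&gt;
For example, &amp;lt;code&amp;gt;lfs df&amp;lt;/code&amp;gt; shows the current usage of the individual MDTs and OSTs (the &amp;lt;code&amp;gt;--pool&amp;lt;/code&amp;gt; filter requires a reasonably recent Lustre client):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; lfs df -h               # human-readable usage of all MDTs and OSTs&lt;br /&gt;
$&amp;gt; lfs df -h --pool work   # usage of the fast pool only&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;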
&lt;br /&gt;
==== Moving data between WORK and PROJECT ====&lt;br /&gt;
&amp;lt;b&amp;gt;!! IMPORTANT !!&amp;lt;/b&amp;gt; Calling &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; on files will &amp;lt;i&amp;gt;not&amp;lt;/i&amp;gt; physically move them between the fast and the slow pool of the file system. Only the file metadata, i.e. the path of the file in the directory tree (data stored on the MDS), is modified. The stripes of the file on the OSS remain exactly where they were. Within the same file system, &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; merely renames the file and makes it available under a different path; the pointers to the file objects on the OSS stay identical. The confusing result is that you now have metadata entries under &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; that still point to WORK OSTs. This only changes if you either create a copy of the file at a different path (with &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;, for example) or explicitly instruct Lustre to move the actual file objects to another storage location, e.g. another pool of the same file system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proper ways of moving data between the pools&amp;lt;/b&amp;gt;&amp;lt;/br&amp;gt;&lt;br /&gt;
* Copy the data - which will create new files -, then delete the old files. Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; cp -r /pfs/10/work/tu_abcde01-my-precious-ws/* /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; rm -rf /pfs/10/work/tu_abcde01-my-precious-ws/*&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Alternative to copy: use &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; to copy data between the workspace and the project directories. Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; rsync -av /pfs/10/work/tu_abcde01-my-precious-ws/simulation/output /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/simulation25/&lt;br /&gt;
$&amp;gt; rm -rf /pfs/10/work/tu_abcde01-my-precious-ws/*&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If there are many subfolders with similar size, you can use &amp;lt;code&amp;gt;xargs&amp;lt;/code&amp;gt; to copy them in parallel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; find . -maxdepth 1 -mindepth 1 -type d -print | xargs -P4 -I{} rsync -aHAXW --inplace --update {} /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/simulation25/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will launch up to four parallel &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; processes at a time, each copying one of the subdirectories.&lt;br /&gt;
* First move the metadata with &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;, then use &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt; or the wrapper &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt; to actually migrate the file stripes. This is also a possible resolution if you already &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;ed data from &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; or vice versa.&lt;br /&gt;
** &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt; is the raw lustre command. It can only operate on one file at a time, but offers access to all options.&lt;br /&gt;
** &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt; is a versatile wrapper script that can work on single files or recursively on entire directories. If available, it will try to use &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt;, otherwise it will fall back to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; (see &amp;lt;code&amp;gt;lfs_migrate --help&amp;lt;/code&amp;gt; for all options.)&amp;lt;/br&amp;gt;&lt;br /&gt;
Example with &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; mv /pfs/10/work/tu_abcde01-my-precious-ws/* /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; cd /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; lfs find . -type f --pool work -0 | xargs -0 lfs migrate --pool project # find all files whose file objects are on the work pool and migrate the objects to the project pool&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Example with &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; mv /pfs/10/work/tu_abcde01-my-precious-ws/* /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; cd /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; lfs_migrate --yes -q -p project * # migrate all file objects in the current directory to the project pool, be quiet (-q) and do not ask for confirmation (--yes)&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both migration commands can also be combined with options to restripe the files during migration, i.e. you can also change the number of OSTs a file is striped over, the size of a single stripe, etc.&amp;lt;/br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Attention!&amp;lt;/b&amp;gt; Both &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt; will &amp;lt;i&amp;gt;not&amp;lt;/i&amp;gt; change the path of the file(s), you must also &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; them! If used without &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;, the files will still belong to the workspace although their file object stripes are now on the &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; pool and a subsequent &amp;lt;code&amp;gt;rm&amp;lt;/code&amp;gt; in the workspace will wipe them. &lt;br /&gt;
&lt;br /&gt;
All of the above procedures may take a considerable amount of time depending on the amount of data, so it might be advisable to execute them in a terminal multiplexer like &amp;lt;code&amp;gt;screen&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;tmux&amp;lt;/code&amp;gt; or wrap them into small SLURM jobs with &amp;lt;code&amp;gt;sbatch --wrap=&amp;quot;&amp;lt;command&amp;gt;&amp;quot;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Question&amp;lt;/b&amp;gt;:&amp;lt;/br&amp;gt; I have completely lost track. How do I find out where my files are located?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Answer&amp;lt;/b&amp;gt;:&amp;lt;/br&amp;gt;&lt;br /&gt;
* Use &amp;lt;code&amp;gt;lfs find&amp;lt;/code&amp;gt; to find files on a specific pool. Example: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; lfs find . --pool project # recursively find all files in the current directory whose file objects are on the &amp;quot;project&amp;quot; pool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Use &amp;lt;code&amp;gt;lfs getstripe&amp;lt;/code&amp;gt; to query the striping pattern and the pool (also works recursively if called with a directory). Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; lfs getstripe parameter.h &lt;br /&gt;
parameter.h&lt;br /&gt;
lmm_stripe_count:  1&lt;br /&gt;
lmm_stripe_size:   1048576&lt;br /&gt;
lmm_pattern:       raid0&lt;br /&gt;
lmm_layout_gen:    1&lt;br /&gt;
lmm_stripe_offset: 44&lt;br /&gt;
lmm_pool:          project&lt;br /&gt;
        obdidx           objid           objid           group&lt;br /&gt;
            44         7991938       0x79f282      0xd80000400&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows that the file is striped over OST 44 (obdidx) which belongs to pool project (lmm_pool).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Why paths and storage pools should match:&amp;lt;/b&amp;gt;&amp;lt;/br&amp;gt;&lt;br /&gt;
There are four different possible scenarios with two subdirectories and two pools:&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;good.&amp;lt;/b&amp;gt;&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;good.&amp;lt;/b&amp;gt;&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;bad&amp;lt;/b&amp;gt;. This will &amp;quot;leak&amp;quot; storage from the fast pool, making it unavailable for workspaces.&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;bad&amp;lt;/b&amp;gt;. Access will be slow, and when the (volatile) workspace is purged, the data residing on the &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; pool will be deleted with it, whether intended or not.&lt;br /&gt;
The latter two situations may arise from &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;ing data between workspaces and project folders.&lt;br /&gt;
&lt;br /&gt;
==== More on data striping and how to influence it ====&lt;br /&gt;
&amp;lt;b&amp;gt;!! The default striping patterns on BinAC2 are set for good reasons and should not be changed lightly!&amp;lt;/br&amp;gt; Doing so wrongly will in the best case only hurt your performance.&amp;lt;/br&amp;gt; In the worst case, it will also hurt all other users and endanger the stability of the cluster.&amp;lt;/br&amp;gt; Please talk to the admins first if you think that you need a non-default pattern.&amp;lt;/b&amp;gt;&lt;br /&gt;
* Reading striping patterns with &amp;lt;code&amp;gt;lfs getstripe&amp;lt;/code&amp;gt;&lt;br /&gt;
* Setting striping patterns with &amp;lt;code&amp;gt;lfs setstripe&amp;lt;/code&amp;gt; for new files and directories&lt;br /&gt;
* Restriping files with &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt;&lt;br /&gt;
* Progressive File Layout&lt;br /&gt;
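&lt;br /&gt;
For illustration only (remember the warning above and talk to the admins before changing anything), the three commands can be used like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; lfs getstripe -d .                # show the striping pattern new files in this directory inherit&lt;br /&gt;
$&amp;gt; lfs setstripe -c 4 large-output/  # new files in large-output/ will be striped over 4 OSTs&lt;br /&gt;
$&amp;gt; lfs migrate -c 4 big-file.dat     # restripe an existing file over 4 OSTs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;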
&lt;br /&gt;
==== Architecture of BinAC2&#039;s Lustre File System ====&lt;br /&gt;
&amp;lt;b&amp;gt;Metadata Servers:&amp;lt;/b&amp;gt;&lt;br /&gt;
* 2 metadata servers&lt;br /&gt;
* 1 MDT per server&lt;br /&gt;
* MDT Capacity: 31TB, hardware RAID6 on NVMe drives (flash memory/SSD)&lt;br /&gt;
* Networking: 2x 100 GbE, 2x HDR-100 InfiniBand&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Object storage servers:&amp;lt;/b&amp;gt;&lt;br /&gt;
* 8 object storage servers&lt;br /&gt;
* 2 fast OSTs per server&lt;br /&gt;
** 70 TB per OST, software RAID (raid-z2, 10+2 redundancy)&lt;br /&gt;
** NVMe drives, directly attached to the PCIe bus&lt;br /&gt;
* 8 slow OSTs per server&lt;br /&gt;
** 143 TB per OST, hardware RAID (RAID6, 8+2 redundancy)&lt;br /&gt;
** externally attached via SAS&lt;br /&gt;
* Networking: 2x 100 GbE, 2x HDR-100 InfiniBand&lt;br /&gt;
&lt;br /&gt;
* All fast OSTs are assigned to the pool &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt;&lt;br /&gt;
* All slow OSTs are assigned to the pool &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt;&lt;br /&gt;
* All files that are created under &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; are by default stored on the fast pool&lt;br /&gt;
* All files that are created under &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; are by default stored on the slow pool&lt;br /&gt;
* Metadata is distributed over both MDTs. All subdirectories of a directory (workspace or project folder) are typically on the same MDT. Directory striping/placement on MDTs can not be influenced by users.&lt;br /&gt;
* Default OST striping: Stripes have size 1 MiB. Files are striped over one OST if possible, i.e. all stripes of a file are on the same OST. New files are created on the most empty OST.&lt;br /&gt;
Internally, the slow and the fast pool belong to the same Lustre file system and namespace.&lt;br /&gt;
&lt;br /&gt;
More reading:&lt;br /&gt;
* [https://doc.lustre.org/lustre_manual.xhtml The Lustre 2.X Manual] ([http://doc.lustre.org/lustre_manual.pdf PDF])&lt;br /&gt;
* [https://wiki.lustre.org/Main_Page The Lustre Wiki]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15692</id>
		<title>BinAC2/Hardware and Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15692"/>
		<updated>2026-01-21T10:45:31Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Hardware and Architecture =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 supports researchers from the broader fields of Bioinformatics, Medical Informatics, Astrophysics, Geosciences and Pharmacy.&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2 schema.png|600px|thumb|center|Overview on the BinAC 2 hardware architecture.]]&lt;br /&gt;
&lt;br /&gt;
== Operating System and Software ==&lt;br /&gt;
&lt;br /&gt;
* Operating System: Rocky Linux 9.6&lt;br /&gt;
* Queuing System: [https://slurm.schedmd.com/documentation.html Slurm] (see [[BinAC2/Slurm]] for help)&lt;br /&gt;
* (Scientific) Libraries and Software: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 offers compute nodes, high-mem nodes, and three types of GPU nodes.&lt;br /&gt;
* 180 compute nodes&lt;br /&gt;
* 16 SMP nodes&lt;br /&gt;
* 32 GPU nodes (2xA30)&lt;br /&gt;
* 8 GPU nodes (4xA100)&lt;br /&gt;
* 4 GPU nodes (4xH200)&lt;br /&gt;
* plus several special purpose nodes for login, interactive jobs, etc.&lt;br /&gt;
&lt;br /&gt;
Compute node specification:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Standard&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| High-Mem&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A30)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A100)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (H200)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot;| Quantity&lt;br /&gt;
| 168 / 12 &lt;br /&gt;
| 14 / 2&lt;br /&gt;
| 32&lt;br /&gt;
| 8&lt;br /&gt;
| 4&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processors&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7443.html AMD EPYC Milan 7443] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/9005-series/amd-epyc-9555.html AMD EPYC Turin 9555]&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processor Base Frequency (GHz)&lt;br /&gt;
| 2.80 / 2.95&lt;br /&gt;
| 2.85 / 2.95&lt;br /&gt;
| 2.80&lt;br /&gt;
| 2.80&lt;br /&gt;
| 3.20&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Number of Physical Cores / Hyperthreads&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 48 / 96 // 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 128 / 256&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Working Memory (GB)&lt;br /&gt;
| 512&lt;br /&gt;
| 2048&lt;br /&gt;
| 512&lt;br /&gt;
| 512&lt;br /&gt;
| 1536&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Local Disk (GiB)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 28000 (NVMe-SSD)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Interconnect&lt;br /&gt;
| HDR 100 IB (84 nodes) / 100GbE (96 nodes)&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| HDR 200 IB + 100GbE&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Coprocessors&lt;br /&gt;
| -&lt;br /&gt;
| -&lt;br /&gt;
| 2 x [https://www.nvidia.com/de-de/data-center/products/a30-gpu/ NVIDIA A30 (24 GB ECC HBM2, NVLink)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/a100/ NVIDIA A100 (80 GB ECC HBM2e)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/h200/ NVIDIA H200 NVL  (141 GB ECC HBM3e, NVLink)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Network =&lt;br /&gt;
&lt;br /&gt;
The compute nodes and the parallel file system are connected via 100 Gbit/s Ethernet (100 GbE).&amp;lt;/br&amp;gt;&lt;br /&gt;
In contrast to BinAC 1, not all compute nodes are connected via InfiniBand; 84 standard compute nodes are connected via HDR-100 InfiniBand (100 Gbit/s). In order to get your jobs onto the InfiniBand nodes, submit your job with &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt;.&lt;br /&gt;
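&lt;br /&gt;
A minimal job script header requesting InfiniBand nodes could look like this (the node and task counts are placeholders for your actual requirements):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=64&lt;br /&gt;
#SBATCH --constraint=ib&lt;br /&gt;
&lt;br /&gt;
srun ./my_mpi_program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;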
&lt;br /&gt;
&#039;&#039;&#039;Question:&#039;&#039;&#039;&amp;lt;/br&amp;gt;&lt;br /&gt;
OpenMPI throws the following warning:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
No OpenFabrics connection schemes reported that they were able to be&lt;br /&gt;
used on a specific port.  As such, the openib BTL (OpenFabrics&lt;br /&gt;
support) will be disabled for this port.&lt;br /&gt;
  Local host:           node1-083&lt;br /&gt;
  Local device:         mlx5_0&lt;br /&gt;
  Local port:           1&lt;br /&gt;
  CPCs attempted:       rdmacm, udcm&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
[node1-083:2137377] 3 more processes have sent help message help-mpi-btl-openib-cpc-base.txt / no cpcs for port&lt;br /&gt;
[node1-083:2137377] Set MCA parameter &amp;quot;orte_base_help_aggregate&amp;quot; to 0 to see all help / error messages&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
What should I do?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Answer:&#039;&#039;&#039;&amp;lt;/br&amp;gt;&lt;br /&gt;
BinAC2 has two (almost) separate networks, a 100 GbE network and an InfiniBand network, each connecting a subset of the nodes. The two networks require different cables and switches.&lt;br /&gt;
The nodes, however, are equipped with VPI network cards, which can be configured to work in either mode (https://docs.nvidia.com/networking/display/connectx6vpi/specifications#src-2487215234_Specifications-MCX653105A-ECATSpecifications).&lt;br /&gt;
OpenMPI can use a number of layers for transferring data and messages between processes. When it ramps up, it will test all means of communication that were configured during compilation and then tries to figure out the fastest path between all processes. &lt;br /&gt;
If OpenMPI encounters such a VPI card, it will first try to establish a Remote Direct Memory Access (RDMA) communication channel using the OpenFabrics layer.&lt;br /&gt;
On nodes with 100 GbE, this fails because no RDMA protocol is configured. OpenMPI falls back to TCP transport, but not without complaints.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Workaround:&#039;&#039;&#039;&amp;lt;/br&amp;gt;  &lt;br /&gt;
For single-node jobs or on regular compute nodes, A30 and A100 GPU nodes: Add the lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export OMPI_MCA_btl=&amp;quot;^ofi,openib&amp;quot;&lt;br /&gt;
export OMPI_MCA_mtl=&amp;quot;^ofi&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to your job script to disable the OFI transport layer. If you need high-bandwidth, low-latency transport between all processes on all nodes, switch to the Infiniband partition (&amp;lt;code&amp;gt;#SBATCH --constraint=ib&amp;lt;/code&amp;gt;). &#039;&#039;Do not turn off the OFI layer on Infiniband nodes as this will be the best choice between nodes!&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= File Systems =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 consists of two separate storage systems, one for the user&#039;s home directory $HOME and one serving as a project/work space.&lt;br /&gt;
The home directory is limited in space and parallel access but offers snapshots of your files and backup.&lt;br /&gt;
&lt;br /&gt;
The project/work is a parallel file system (PFS) which offers fast and parallel file access and a bigger capacity than the home directory. It is mounted at &amp;lt;code&amp;gt;/pfs/10&amp;lt;/code&amp;gt; on the login and compute nodes. This storage is based on Lustre and can be accessed in parallel from many nodes. The PFS contains the project and the work directory. Each compute project has its own directory at &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; that is accessible for all members of the compute project.&lt;br /&gt;
Each user can create workspaces under &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; using the workspace tools. These directories are only accessible for the user who created the workspace.&lt;br /&gt;
&lt;br /&gt;
Additionally, each compute node provides high-speed temporary storage on a node-local solid state disk (SSD), accessible via the $TMPDIR environment variable.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| project&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| work&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Visibility&lt;br /&gt;
| global &lt;br /&gt;
| global&lt;br /&gt;
| global&lt;br /&gt;
| node local&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Lifetime&lt;br /&gt;
| permanent&lt;br /&gt;
| permanent&lt;br /&gt;
| work space lifetime (max. 30 days, max. 5 extensions)&lt;br /&gt;
| batch job walltime&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Capacity&lt;br /&gt;
| -&lt;br /&gt;
| 8.1 PB&lt;br /&gt;
| 1000 TB&lt;br /&gt;
| 480 GB (compute nodes); 7.7 TB (GPU-A30 nodes); 16 TB (GPU-A100 and SMP nodes); 31 TB (GPU-H200 nodes)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | File System Type&lt;br /&gt;
| NFS&lt;br /&gt;
| Lustre&lt;br /&gt;
| Lustre&lt;br /&gt;
| XFS&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Speed (read)&lt;br /&gt;
| ≈ 1 GB/s, shared by all nodes&lt;br /&gt;
| max. 12 GB/s&lt;br /&gt;
| ≈ 145 GB/s peak, aggregated over 56 nodes, ideal striping&lt;br /&gt;
| ≈ 3 GB/s (compute) / ≈ 5 GB/s (GPU-A30) / ≈ 26 GB/s (GPU-A100 + SMP) / ≈ 42 GB/s (GPU-H200) per node&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | [https://en.wikipedia.org/wiki/Disk_quota#Quotas Quotas]&lt;br /&gt;
| 40 GB per user&lt;br /&gt;
| not yet, maybe in the future&lt;br /&gt;
| none&lt;br /&gt;
| none&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Backup&lt;br /&gt;
| yes (nightly)&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
  global             : all nodes access the same file system&lt;br /&gt;
  local              : each node has its own file system&lt;br /&gt;
  permanent          : files are stored permanently&lt;br /&gt;
  batch job walltime : files are removed at end of the batch job&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;color:red; background-color:#ffffcc;&amp;quot; cellpadding=&amp;quot;10&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
Please note that due to the large capacity of &#039;&#039;&#039;work&#039;&#039;&#039; and &#039;&#039;&#039;project&#039;&#039;&#039; and due to frequent file changes on these file systems, no backup can be provided.&amp;lt;/br&amp;gt;&lt;br /&gt;
Backing up these file systems would require a redundant storage facility with multiple times the capacity of &#039;&#039;&#039;project&#039;&#039;&#039;. Furthermore, regular backups would significantly degrade the performance.&amp;lt;/br&amp;gt;&lt;br /&gt;
Data is stored redundantly, i.e. it is protected against disk failures, but not against catastrophic incidents like cyber attacks or a fire in the server room.&amp;lt;/br&amp;gt;&lt;br /&gt;
Please consider using one of the remote storage facilities like [https://wiki.bwhpc.de/e/SDS@hd SDS@hd], [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS], [https://www.scc.kit.edu/en/services/lsdf.php LSDF Online Storage] or the [https://www.rda.kit.edu/english/ bwDataArchive] to back up your valuable data.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Home ===&lt;br /&gt;
&lt;br /&gt;
Home directories are meant for permanent storage of files that are in continual use, such as source code, configuration files, and executable programs; the content of home directories is backed up on a regular basis.&lt;br /&gt;
Because the backup space is limited, we enforce a quota of 40 GB on the home directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;&lt;br /&gt;
Compute jobs on nodes must not write temporary data to $HOME.&lt;br /&gt;
Instead, they should use the node-local $TMPDIR directory for I/O-heavy use cases&lt;br /&gt;
and workspaces for less I/O-intensive multi-node jobs.&lt;br /&gt;
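&lt;br /&gt;
A common pattern for I/O-heavy single-node jobs is to stage data through $TMPDIR; the workspace paths and program name below are placeholders:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&lt;br /&gt;
# Stage input data to the fast node-local SSD&lt;br /&gt;
cp -r /pfs/10/work/&amp;lt;user&amp;gt;-&amp;lt;workspace&amp;gt;/input &amp;quot;$TMPDIR&amp;quot;&lt;br /&gt;
&lt;br /&gt;
cd &amp;quot;$TMPDIR&amp;quot;&lt;br /&gt;
./my_program input output&lt;br /&gt;
&lt;br /&gt;
# Copy the results back before the job ends; $TMPDIR is wiped afterwards&lt;br /&gt;
cp -r output /pfs/10/work/&amp;lt;user&amp;gt;-&amp;lt;workspace&amp;gt;/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;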
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Current disk usage on home directory and quota status can be checked with the &#039;&#039;&#039;diskusage&#039;&#039;&#039; command: &lt;br /&gt;
 $ diskusage&lt;br /&gt;
 &lt;br /&gt;
 User           	   Used (GB)	  Quota (GB)	Used (%)&lt;br /&gt;
 ------------------------------------------------------------------------&lt;br /&gt;
 &amp;lt;username&amp;gt;                4.38               100.00             4.38&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
=== Project ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Data under &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; is stored on HDDs. The primary focus of this filesystem is pure capacity, not speed.&lt;br /&gt;
&lt;br /&gt;
Every project gets a dedicated directory located at:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
/pfs/10/project/&amp;lt;project_id&amp;gt;/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can check the project(s) you are a member of via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ id $USER | grep -o &#039;bw[^)]*&#039;&lt;br /&gt;
bw16f003&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, your project directory would be:&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
/pfs/10/project/bw16f003/&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check our [[BinAC2/Project_Data_Organization | data organization guide ]] for methods to organize data inside the project directory.&lt;br /&gt;
&lt;br /&gt;
=== Workspaces ===&lt;br /&gt;
&lt;br /&gt;
Data on the fast storage pool at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is stored on SSDs.&lt;br /&gt;
The primary focus of this filesystem is speed, not capacity.&amp;lt;br /&amp;gt;&lt;br /&gt;
Data on &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is organized into so-called &#039;&#039;workspaces&#039;&#039;.&lt;br /&gt;
A workspace is a directory that you create and manage using the workspace tools.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Please always remember that workspaces are intended solely for temporary work data; there is no backup of data in workspaces.&amp;lt;br /&amp;gt;&lt;br /&gt;
If you do not extend your workspaces, they will be deleted at some point.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Workspace tools documentation ====&lt;br /&gt;
&lt;br /&gt;
You can find more information about the workspace tools in the general documentation:&lt;br /&gt;
:: &amp;amp;rarr; &#039;&#039;&#039;[[Workspace | General workspace tools documentation]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== BinAC 2 workspace policies and limits ====&lt;br /&gt;
&lt;br /&gt;
Some values and behaviors described in the general workspace documentation do not apply to BinAC 2.&amp;lt;br /&amp;gt;&lt;br /&gt;
On BinAC 2, workspaces:&lt;br /&gt;
&lt;br /&gt;
* have a &#039;&#039;&#039;lifetime of 30 days&#039;&#039;&#039;&lt;br /&gt;
* can be extended up to &#039;&#039;&#039;five times&#039;&#039;&#039;&lt;br /&gt;
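&lt;br /&gt;
A typical session with the standard workspace tools might look like this (the workspace name &amp;lt;code&amp;gt;myws&amp;lt;/code&amp;gt; is just an example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ws_allocate myws 30   # create a workspace with the maximum lifetime of 30 days&lt;br /&gt;
$ ws_list               # list your workspaces and their remaining lifetimes&lt;br /&gt;
$ ws_extend myws 30     # extend the workspace for another 30 days (possible up to five times)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;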
&lt;br /&gt;
Because the capacity of &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is limited, workspace expiration is enforced.&lt;br /&gt;
You will receive automated email reminders one week before a workspace expires.&lt;br /&gt;
&lt;br /&gt;
If a workspace’s lifetime expires (i.e. it is not extended), it is moved to the special directory&lt;br /&gt;
&amp;lt;code&amp;gt;/pfs/10/work/.removed&amp;lt;/code&amp;gt;.&lt;br /&gt;
Expired workspaces can be restored for a keep time of 14 days.&lt;br /&gt;
After this period, expired workspaces are moved to a special location in the project filesystem.&lt;br /&gt;
After at least six months, a workspace is considered abandoned and is permanently deleted.&lt;br /&gt;
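&lt;br /&gt;
Restoring an expired workspace within the keep time works with the standard &amp;lt;code&amp;gt;ws_restore&amp;lt;/code&amp;gt; tool. A sketch (the exact invocation may differ, see &amp;lt;code&amp;gt;ws_restore -h&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ws_restore -l                   # list your restorable (expired) workspaces&lt;br /&gt;
$ ws_allocate restore_target 30   # create a workspace to restore the data into&lt;br /&gt;
$ ws_restore &amp;lt;full_workspace_id&amp;gt; restore_target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;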
&lt;br /&gt;
In addition to the standard workspace tools, we provide the script&lt;br /&gt;
&amp;lt;syntaxhighlight inline&amp;gt;ws_shelve&amp;lt;/syntaxhighlight&amp;gt; for copying a workspace to a project directory.&lt;br /&gt;
&amp;lt;syntaxhighlight inline&amp;gt;ws_shelve&amp;lt;/syntaxhighlight&amp;gt; takes two arguments:&lt;br /&gt;
&lt;br /&gt;
* the name of the workspace to copy&lt;br /&gt;
* the acronym of your project&lt;br /&gt;
&lt;br /&gt;
This command creates and submits a Slurm job that uses &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; to copy the workspace&lt;br /&gt;
from &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;.&lt;br /&gt;
A new directory is created with a timestamp suffix to prevent name collisions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Copy the workspace /pfs/10/work/tu_xyz01-test to /pfs/10/project/bw10a001&lt;br /&gt;
&lt;br /&gt;
$ ws_shelve test bw10a001&lt;br /&gt;
Workspace shelving submitted.&lt;br /&gt;
Job ID: 2058525&lt;br /&gt;
Logs will be written to:&lt;br /&gt;
  /pfs/10/project/bw10a001/.ws_shelve/tu_xyz01/logs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the workspace is copied to&lt;br /&gt;
&amp;lt;syntaxhighlight inline&amp;gt;/pfs/10/project/bw10a001/tu_xyz01/test-&amp;lt;timestamp&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SDS@hd ===&lt;br /&gt;
&lt;br /&gt;
SDS@hd is mounted via NFS on login and compute nodes at &amp;lt;syntaxhighlight inline&amp;gt;/mnt/sds-hd&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To access your Speichervorhaben (storage project), the export to BinAC 2 must first be enabled by the SDS@hd team. Please contact [mailto:sds-hd-support@urz.uni-heidelberg.de SDS@hd support] and provide the acronym of your Speichervorhaben, along with a request to enable the export to BinAC 2.&lt;br /&gt;
&lt;br /&gt;
Once this has been done, you can access your Speichervorhaben as described in the [https://wiki.bwhpc.de/e/SDS@hd/Access/NFS#Access_your_data SDS documentation].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ kinit $USER&lt;br /&gt;
Password for &amp;lt;user&amp;gt;@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Kerberos ticket store is shared across all nodes. Creating a single ticket is sufficient to access your Speichervorhaben on all nodes.&lt;br /&gt;
&lt;br /&gt;
=== More Details on the Lustre File System ===&lt;br /&gt;
[https://www.lustre.org/ Lustre] is a distributed parallel file system.&lt;br /&gt;
* The entire logical volume presented to the user is formed from multiple physical or local drives. Because data is distributed over more than one physical or logical volume/hard drive, single files can be larger than the capacity of a single hard drive.&lt;br /&gt;
* The file system can be mounted from all nodes (&amp;quot;clients&amp;quot;) in parallel at the same time for reading and writing. &amp;lt;i&amp;gt;This also means that technically you can write to the same file from two different compute nodes! Usually, this will create an unpredictable mess! Never ever do this unless you know &amp;lt;b&amp;gt;exactly&amp;lt;/b&amp;gt; what you are doing!&amp;lt;/i&amp;gt;&lt;br /&gt;
* On a single server or client, the bandwidth of multiple network interfaces can be aggregated to increase the throughput (&amp;quot;multi-rail&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Lustre works by chopping files into many small parts (&amp;quot;stripes&amp;quot;, file objects) which are then stored on the object storage servers. The information about which part of a file is stored on which object storage server, when it was last changed, etc., together with the entire directory structure, is stored on the metadata servers. Think of the entries on the metadata server as pointers to the actual file objects on the object storage servers.&lt;br /&gt;
A Lustre file system can consist of many metadata servers (MDS) and object storage servers (OSS).&lt;br /&gt;
Each MDS or OSS can again hold one or more so-called object storage targets (OST) or metadata targets (MDT) which can e.g. be simply multiple hard drives.&lt;br /&gt;
The capacity of a Lustre file system can hence be easily scaled by adding more servers.&lt;br /&gt;
&lt;br /&gt;
==== Useful Lustre Commands ====&lt;br /&gt;
Commands specific to the Lustre file system are divided into user commands (&amp;lt;code&amp;gt;lfs ...&amp;lt;/code&amp;gt;) and administrative commands (&amp;lt;code&amp;gt;lctl ...&amp;lt;/code&amp;gt;). On BinAC2, users may only execute user commands, and not even all of those.&lt;br /&gt;
* &amp;lt;code&amp;gt;lfs help &amp;lt;command&amp;gt;&amp;lt;/code&amp;gt;: Print built-in help for command; Alternative: &amp;lt;code&amp;gt;man lfs &amp;lt;command&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;lfs find&amp;lt;/code&amp;gt;: Drop-in replacement for the &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt; command, much faster on Lustre filesystems as it talks directly to the metadata server&lt;br /&gt;
* &amp;lt;code&amp;gt;lfs --list-commands&amp;lt;/code&amp;gt;: Print a list of available commands&lt;br /&gt;
&lt;br /&gt;
==== Moving data between WORK and PROJECT ====&lt;br /&gt;
&amp;lt;b&amp;gt;!! IMPORTANT !!&amp;lt;/b&amp;gt; Calling &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; on files will &amp;lt;i&amp;gt;not&amp;lt;/i&amp;gt; physically move them between the fast and the slow pool of the file system. Only the file metadata, i.e. the path of the file in the directory tree (data stored on the MDS), is modified. The stripes of the file on the OSS remain exactly where they were. The result is the confusing situation that you now have metadata entries under &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; that still point to WORK OSTs. When using &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; within the same file system, Lustre only renames the files and makes them available under a different path; the pointers to the file objects on the OSS stay identical. This only changes if you either create a copy of the file at a different path (e.g. with &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;) or explicitly instruct Lustre to migrate the actual file objects to another storage location, e.g. another pool of the same file system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proper ways of moving data between the pools&amp;lt;/b&amp;gt;&amp;lt;/br&amp;gt;&lt;br /&gt;
* Copy the data - which will create new files -, then delete the old files. Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; cp -r /pfs/10/work/tu_abcde01-my-precious-ws/* /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; rm -rf /pfs/10/work/tu_abcde01-my-precious-ws/*&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Alternative to copy: use &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; to copy data between the workspace and the project directories. Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; rsync -av /pfs/10/work/tu_abcde01-my-precious-ws/simulation/output /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/simulation25/&lt;br /&gt;
$&amp;gt; rm -rf /pfs/10/work/tu_abcde01-my-precious-ws/*&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If there are many subfolders with similar size, you can use &amp;lt;code&amp;gt;xargs&amp;lt;/code&amp;gt; to copy them in parallel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; find . -maxdepth 1 -mindepth 1 -type d -print | xargs -P4 -I{} rsync -aHAXW --inplace --update {} /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/simulation25/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
will launch four parallel &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; processes at a time, each copying one of the subdirectories.&lt;br /&gt;
* First move the metadata with &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;, then use &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt; or the wrapper &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt; to actually migrate the file stripes. This is also a possible resolution if you already &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;ed data from &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; or vice versa.&lt;br /&gt;
** &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt; is the raw lustre command. It can only operate on one file at a time, but offers access to all options.&lt;br /&gt;
** &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt; is a versatile wrapper script that can work on single files or recursively on entire directories. If available, it will try to use &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt;, otherwise it will fall back to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; (see &amp;lt;code&amp;gt;lfs_migrate --help&amp;lt;/code&amp;gt; for all options.)&amp;lt;/br&amp;gt;&lt;br /&gt;
Example with &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; mv /pfs/10/work/tu_abcde01-my-precious-ws/* /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; cd /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; lfs find . -type f --pool work -0 | xargs -0 lfs migrate --pool project # find all files whose file objects are on the work pool and migrate the objects to the project pool&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Example with &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; mv /pfs/10/work/tu_abcde01-my-precious-ws/* /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; cd /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; lfs_migrate --yes -q -p project * # migrate all file objects in the current directory to the project pool, be quiet (-q) and do not ask for confirmation (--yes)&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both migration commands can also be combined with options to restripe the files during migration, i.e. you can also change the number of OSTs a file is striped over, the size of a single stripe, etc.&amp;lt;/br&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Attention!&amp;lt;/b&amp;gt; Both &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt; will &amp;lt;i&amp;gt;not&amp;lt;/i&amp;gt; change the path of the file(s), you must also &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; them! If used without &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;, the files will still belong to the workspace although their file object stripes are now on the &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; pool and a subsequent &amp;lt;code&amp;gt;rm&amp;lt;/code&amp;gt; in the workspace will wipe them. &lt;br /&gt;
&lt;br /&gt;
All of the above procedures may take a considerable amount of time depending on the amount of data, so it might be advisable to execute them in a terminal multiplexer like &amp;lt;code&amp;gt;screen&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;tmux&amp;lt;/code&amp;gt; or wrap them into small SLURM jobs with &amp;lt;code&amp;gt;sbatch --wrap=&amp;quot;&amp;lt;command&amp;gt;&amp;quot;&amp;lt;/code&amp;gt;.&lt;br /&gt;
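&lt;br /&gt;
For example, the &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; copy from above could be submitted as a small Slurm job (the paths are the same placeholders as in the previous examples):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; sbatch --time=12:00:00 --wrap=&amp;quot;rsync -av /pfs/10/work/tu_abcde01-my-precious-ws/ /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;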
&lt;br /&gt;
&amp;lt;b&amp;gt;Question&amp;lt;/b&amp;gt;:&amp;lt;/br&amp;gt; I have completely lost track. How do I find out where my files are located?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Answer&amp;lt;/b&amp;gt;:&amp;lt;/br&amp;gt;&lt;br /&gt;
* Use &amp;lt;code&amp;gt;lfs find&amp;lt;/code&amp;gt; to find files on a specific pool. Example: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; lfs find . --pool project # recursively find all files in the current directory whose file objects are on the &amp;quot;project&amp;quot; pool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Use &amp;lt;code&amp;gt;lfs getstripe&amp;lt;/code&amp;gt; to query the striping pattern and the pool (also works recursively if called with a directory). Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; lfs getstripe parameter.h &lt;br /&gt;
parameter.h&lt;br /&gt;
lmm_stripe_count:  1&lt;br /&gt;
lmm_stripe_size:   1048576&lt;br /&gt;
lmm_pattern:       raid0&lt;br /&gt;
lmm_layout_gen:    1&lt;br /&gt;
lmm_stripe_offset: 44&lt;br /&gt;
lmm_pool:          project&lt;br /&gt;
        obdidx           objid           objid           group&lt;br /&gt;
            44         7991938       0x79f282      0xd80000400&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows that the file is striped over OST 44 (obdidx) which belongs to pool project (lmm_pool).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Why paths and storage pools should match:&amp;lt;/b&amp;gt;&amp;lt;/br&amp;gt;&lt;br /&gt;
There are four different possible scenarios with two subdirectories and two pools:&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;good.&amp;lt;/b&amp;gt;&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;good.&amp;lt;/b&amp;gt;&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;bad&amp;lt;/b&amp;gt;. This will &amp;quot;leak&amp;quot; storage from the fast pool, making it unavailable for workspaces.&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;bad&amp;lt;/b&amp;gt;. Access will be slow, and when the (volatile) workspace is purged, the data residing on &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; will be deleted along with it.&lt;br /&gt;
The latter two situations may arise from &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;ing data between workspaces and project folders.&lt;br /&gt;
&lt;br /&gt;
==== More on data striping and how to influence it ====&lt;br /&gt;
&amp;lt;b&amp;gt;!! The default striping patterns on BinAC2 are set for good reasons and should not be changed light-heartedly!&amp;lt;/br&amp;gt; Done wrongly, this will in the best case only hurt your own performance.&amp;lt;/br&amp;gt; In the worst case, it will also hurt all other users and endanger the stability of the cluster.&amp;lt;/br&amp;gt; Please talk to the admins first if you think you need a non-default pattern.&amp;lt;/b&amp;gt;&lt;br /&gt;
* Reading striping patterns with &amp;lt;code&amp;gt;lfs getstripe&amp;lt;/code&amp;gt;&lt;br /&gt;
* Setting striping patterns with &amp;lt;code&amp;gt;lfs setstripe&amp;lt;/code&amp;gt; for new files and directories&lt;br /&gt;
* Restriping files with &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt;&lt;br /&gt;
* Progressive File Layout&lt;br /&gt;
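&lt;br /&gt;
As a sketch of what these commands look like (do not apply this blindly; the values are purely illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; lfs getstripe -d .                  # show the default striping pattern of the current directory&lt;br /&gt;
$&amp;gt; lfs setstripe -c 4 -S 4M bigfiles/  # stripe new files in bigfiles/ over 4 OSTs with 4 MiB stripes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;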
&lt;br /&gt;
==== Architecture of BinAC2&#039;s Lustre File System ====&lt;br /&gt;
&amp;lt;b&amp;gt;Metadata Servers:&amp;lt;/b&amp;gt;&lt;br /&gt;
* 2 metadata servers&lt;br /&gt;
* 1 MDT per server&lt;br /&gt;
* MDT Capacity: 31TB, hardware RAID6 on NVMe drives (flash memory/SSD)&lt;br /&gt;
* Networking: 2x 100 GbE, 2x HDR-100 InfiniBand&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Object storage servers:&amp;lt;/b&amp;gt;&lt;br /&gt;
* 8 object storage servers&lt;br /&gt;
* 2 fast OSTs per server&lt;br /&gt;
** 70 TB per OST, software RAID (raid-z2, 10+2 redundancy)&lt;br /&gt;
** NVMe drives, directly attached to the PCIe bus&lt;br /&gt;
* 8 slow OSTs per server&lt;br /&gt;
** 143 TB per OST, hardware RAID (RAID6, 8+2 redundancy)&lt;br /&gt;
** externally attached via SAS&lt;br /&gt;
* Networking: 2x 100 GbE, 2x HDR-100 InfiniBand&lt;br /&gt;
&lt;br /&gt;
* All fast OSTs are assigned to the pool &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt;&lt;br /&gt;
* All slow OSTs are assigned to the pool &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt;&lt;br /&gt;
* All files that are created under &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; are by default stored on the fast pool&lt;br /&gt;
* All files that are created under &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; are by default stored on the slow pool&lt;br /&gt;
* Metadata is distributed over both MDTs. All subdirectories of a directory (workspace or project folder) are typically on the same MDT. Directory striping/placement on MDTs can not be influenced by users.&lt;br /&gt;
* Default OST striping: stripes have a size of 1 MiB. Files are striped over one OST if possible, i.e. all stripes of a file reside on the same OST. New files are created on the OST with the most free space.&lt;br /&gt;
Internally, the slow and the fast pool belong to the same Lustre file system and namespace.&lt;br /&gt;
&lt;br /&gt;
More reading:&lt;br /&gt;
* [https://doc.lustre.org/lustre_manual.xhtml The Lustre 2.X Manual] ([http://doc.lustre.org/lustre_manual.pdf PDF])&lt;br /&gt;
* [https://wiki.lustre.org/Main_Page The Lustre Wiki]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15690</id>
		<title>BinAC2/Hardware and Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15690"/>
		<updated>2026-01-19T10:55:46Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Hardware and Architecture =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 supports researchers from the broader fields of Bioinformatics, Medical Informatics, Astrophysics, Geosciences and Pharmacy.&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2 schema.png|600px|thumb|center|Overview on the BinAC 2 hardware architecture.]]&lt;br /&gt;
&lt;br /&gt;
== Operating System and Software ==&lt;br /&gt;
&lt;br /&gt;
* Operating System: Rocky Linux 9.6&lt;br /&gt;
* Queuing System: [https://slurm.schedmd.com/documentation.html Slurm] (see [[BinAC2/Slurm]] for help)&lt;br /&gt;
* (Scientific) Libraries and Software: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 offers compute nodes, high-mem nodes, and three types of GPU nodes.&lt;br /&gt;
* 180 compute nodes&lt;br /&gt;
* 16 SMP (high-mem) nodes&lt;br /&gt;
* 32 GPU nodes (2xA30)&lt;br /&gt;
* 8 GPU nodes (4xA100)&lt;br /&gt;
* 4 GPU nodes (4xH200)&lt;br /&gt;
* plus several special purpose nodes for login, interactive jobs, etc.&lt;br /&gt;
&lt;br /&gt;
Compute node specification:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Standard&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| High-Mem&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A30)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A100)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (H200)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot;| Quantity&lt;br /&gt;
| 168 / 12 &lt;br /&gt;
| 14 / 2&lt;br /&gt;
| 32&lt;br /&gt;
| 8&lt;br /&gt;
| 4&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processors&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7443.html AMD EPYC Milan 7443] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/9005-series/amd-epyc-9555.html AMD EPYC Turin 9555]&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processor Base Frequency (GHz)&lt;br /&gt;
| 2.80 / 2.95&lt;br /&gt;
| 2.85 / 2.95&lt;br /&gt;
| 2.80&lt;br /&gt;
| 2.80&lt;br /&gt;
| 3.20&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Number of Physical Cores / Hyperthreads&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 48 / 96 // 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 128 / 256&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Working Memory (GB)&lt;br /&gt;
| 512&lt;br /&gt;
| 2048&lt;br /&gt;
| 512&lt;br /&gt;
| 512&lt;br /&gt;
| 1536&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Local Disk (GiB)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 28000 (NVMe-SSD)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Interconnect&lt;br /&gt;
| HDR 100 IB (84 nodes) / 100GbE (96 nodes)&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| HDR 200 IB + 100GbE&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Coprocessors&lt;br /&gt;
| -&lt;br /&gt;
| -&lt;br /&gt;
| 2 x [https://www.nvidia.com/de-de/data-center/products/a30-gpu/ NVIDIA A30 (24 GB ECC HBM2, NVLink)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/a100/ NVIDIA A100 (80 GB ECC HBM2e)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/h200/ NVIDIA H200 NVL  (141 GB ECC HBM3e, NVLink)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Network =&lt;br /&gt;
&lt;br /&gt;
The compute nodes and the parallel file system are connected via 100 GbE Ethernet.&amp;lt;/br&amp;gt;&lt;br /&gt;
In contrast to BinAC 1, not all compute nodes are connected via InfiniBand; 84 standard compute nodes are connected via HDR-100 InfiniBand (100 Gb/s). In order to get your jobs onto the InfiniBand nodes, submit your job with &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Question:&#039;&#039;&#039;&amp;lt;/br&amp;gt;&lt;br /&gt;
OpenMPI throws the following warning:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
No OpenFabrics connection schemes reported that they were able to be&lt;br /&gt;
used on a specific port.  As such, the openib BTL (OpenFabrics&lt;br /&gt;
support) will be disabled for this port.&lt;br /&gt;
  Local host:           node1-083&lt;br /&gt;
  Local device:         mlx5_0&lt;br /&gt;
  Local port:           1&lt;br /&gt;
  CPCs attempted:       rdmacm, udcm&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
[node1-083:2137377] 3 more processes have sent help message help-mpi-btl-openib-cpc-base.txt / no cpcs for port&lt;br /&gt;
[node1-083:2137377] Set MCA parameter &amp;quot;orte_base_help_aggregate&amp;quot; to 0 to see all help / error messages&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
What should I do?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Answer:&#039;&#039;&#039;&amp;lt;/br&amp;gt;&lt;br /&gt;
BinAC2 has two (almost) separate networks, a 100 GbE network and an InfiniBand network, each connecting a subset of the nodes. The two networks require different cables and switches.&lt;br /&gt;
Concerning the network cards for the nodes, however, there exist VPI network cards which can be configured to work in either mode (https://docs.nvidia.com/networking/display/connectx6vpi/specifications#src-2487215234_Specifications-MCX653105A-ECATSpecifications).&lt;br /&gt;
OpenMPI can use a number of layers for transferring data and messages between processes. When it ramps up, it will test all means of communication that were configured during compilation and then tries to figure out the fastest path between all processes. &lt;br /&gt;
If OpenMPI encounters such a VPI card, it will first try to establish a Remote Direct Memory Access communication (RDMA) channel using the OpenFabrics (OFI) layer.&lt;br /&gt;
On nodes with 100 GbE Ethernet this fails, as no RDMA protocol is configured there. OpenMPI falls back to TCP transport, but not without complaints.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Workaround:&#039;&#039;&#039;&amp;lt;/br&amp;gt;  &lt;br /&gt;
For single-node jobs or on regular compute nodes, A30 and A100 GPU nodes: Add the lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export OMPI_MCA_btl=&amp;quot;^ofi,openib&amp;quot;&lt;br /&gt;
export OMPI_MCA_mtl=&amp;quot;^ofi&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to your job script to disable the OFI transport layer. If you need high-bandwidth, low-latency transport between all processes on all nodes, switch to the Infiniband partition (&amp;lt;code&amp;gt;#SBATCH --constraint=ib&amp;lt;/code&amp;gt;). &#039;&#039;Do not turn off the OFI layer on Infiniband nodes as this will be the best choice between nodes!&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= File Systems =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 consists of two separate storage systems, one for the user&#039;s home directory $HOME and one serving as a project/work space.&lt;br /&gt;
The home directory is limited in space and parallel access but offers snapshots of your files and backup.&lt;br /&gt;
&lt;br /&gt;
The project/work storage is a parallel file system (PFS) which offers fast, parallel file access and a much larger capacity than the home directory. It is mounted at &amp;lt;code&amp;gt;/pfs/10&amp;lt;/code&amp;gt; on the login and compute nodes. This storage is based on Lustre and can be accessed in parallel from many nodes. The PFS contains the project and the work directory. Each compute project has its own directory at &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; that is accessible to all members of the compute project.&lt;br /&gt;
Each user can create workspaces under &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; using the workspace tools. These directories are only accessible for the user who created the workspace.&lt;br /&gt;
&lt;br /&gt;
Additionally, each compute node provides high-speed temporary storage on its node-local solid-state disk, available via the $TMPDIR environment variable.&lt;br /&gt;
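&lt;br /&gt;
A minimal job script sketch using &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; for I/O-heavy work (&amp;lt;code&amp;gt;my_program&amp;lt;/code&amp;gt; and all paths are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
cp -r &amp;quot;$HOME/input&amp;quot; &amp;quot;$TMPDIR/&amp;quot;         # stage input data to the node-local SSD&lt;br /&gt;
cd &amp;quot;$TMPDIR&amp;quot;&lt;br /&gt;
my_program input/ &amp;gt; result.out                # run the I/O-heavy computation locally&lt;br /&gt;
cp result.out /pfs/10/work/tu_abcde01-my-ws/  # copy results back before the job ends&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;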
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| project&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| work&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Visibility&lt;br /&gt;
| global &lt;br /&gt;
| global&lt;br /&gt;
| global&lt;br /&gt;
| node local&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Lifetime&lt;br /&gt;
| permanent&lt;br /&gt;
| permanent&lt;br /&gt;
| work space lifetime (max. 30 days, max. 5 extensions)&lt;br /&gt;
| batch job walltime&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Capacity&lt;br /&gt;
| -&lt;br /&gt;
| 8.1 PB&lt;br /&gt;
| 1000 TB&lt;br /&gt;
| 480 GB (compute nodes); 7.7 TB (GPU-A30 nodes); 16 TB (GPU-A100 and SMP nodes); 31 TB (GPU-H200 nodes)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | File System Type&lt;br /&gt;
| NFS&lt;br /&gt;
| Lustre&lt;br /&gt;
| Lustre&lt;br /&gt;
| XFS&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Speed (read)&lt;br /&gt;
| ≈ 1 GB/s, shared by all nodes&lt;br /&gt;
| max. 12 GB/s&lt;br /&gt;
| ≈ 145 GB/s peak, aggregated over 56 nodes, ideal striping&lt;br /&gt;
| ≈ 3 GB/s (compute) / ≈ 5 GB/s (GPU-A30) / ≈ 26 GB/s (GPU-A100 + SMP) / ≈ 42 GB/s (GPU-H200) per node&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | [https://en.wikipedia.org/wiki/Disk_quota#Quotas Quotas]&lt;br /&gt;
| 40 GB per user&lt;br /&gt;
| none yet (may be introduced in the future)&lt;br /&gt;
| none&lt;br /&gt;
| none&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Backup&lt;br /&gt;
| yes (nightly)&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
  global             : all nodes access the same file system&lt;br /&gt;
  local              : each node has its own file system&lt;br /&gt;
  permanent          : files are stored permanently&lt;br /&gt;
  batch job walltime : files are removed at end of the batch job&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;color:red; background-color:#ffffcc;&amp;quot; cellpadding=&amp;quot;10&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
Please note that due to the large capacity of &#039;&#039;&#039;work&#039;&#039;&#039; and &#039;&#039;&#039;project&#039;&#039;&#039; and the frequent file changes on these file systems, no backup can be provided.&amp;lt;br /&amp;gt;&lt;br /&gt;
Backing up these file systems would require a redundant storage facility with several times the capacity of &#039;&#039;&#039;project&#039;&#039;&#039;. Furthermore, regular backups would significantly degrade performance.&amp;lt;br /&amp;gt;&lt;br /&gt;
Data is stored redundantly, i.e. it is protected against disk failures, but not against catastrophic incidents like cyber attacks or a fire in the server room.&amp;lt;br /&amp;gt;&lt;br /&gt;
Please consider using one of the remote storage facilities like [https://wiki.bwhpc.de/e/SDS@hd SDS@hd], [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS], [https://www.scc.kit.edu/en/services/lsdf.php LSDF Online Storage] or the [https://www.rda.kit.edu/english/ bwDataArchive] to back up your valuable data.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Home ===&lt;br /&gt;
&lt;br /&gt;
Home directories are meant for permanent storage of files that are in continuous use, such as source code, configuration files, and executable programs; the content of home directories is backed up on a regular basis.&lt;br /&gt;
Because the backup space is limited, we enforce a quota of 40 GB on the home directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;&lt;br /&gt;
Compute jobs on nodes must not write temporary data to $HOME.&lt;br /&gt;
Instead, they should use the node-local $TMPDIR directory for I/O-heavy use cases&lt;br /&gt;
and workspaces for less I/O-intensive multi-node jobs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Current disk usage on home directory and quota status can be checked with the &#039;&#039;&#039;diskusage&#039;&#039;&#039; command: &lt;br /&gt;
 $ diskusage&lt;br /&gt;
 &lt;br /&gt;
 User           	   Used (GB)	  Quota (GB)	Used (%)&lt;br /&gt;
 ------------------------------------------------------------------------&lt;br /&gt;
 &amp;lt;username&amp;gt;                4.38               100.00             4.38&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
=== Project ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Data in the project directory is stored on HDDs. The primary focus of &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; is pure capacity, not speed.&lt;br /&gt;
&lt;br /&gt;
Every project gets a dedicated directory located at:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
/pfs/10/project/&amp;lt;project_id&amp;gt;/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can check the project(s) you are member of via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ id $USER | grep -o &#039;bw[^)]*&#039;&lt;br /&gt;
bw16f003&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, your project directory would be:&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
/pfs/10/project/bw16f003/&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check our [[BinAC2/Project_Data_Organization | data organization guide ]] for methods to organize data inside the project directory.&lt;br /&gt;
&lt;br /&gt;
=== Workspaces ===&lt;br /&gt;
&lt;br /&gt;
Data on the fast storage pool at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is stored on SSDs.&lt;br /&gt;
The primary focus of this filesystem is speed, not capacity.&amp;lt;br /&amp;gt;&lt;br /&gt;
Data on &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is organized into so-called &#039;&#039;workspaces&#039;&#039;.&lt;br /&gt;
A workspace is a directory that you create and manage using the workspace tools.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Please always remember, that workspaces are intended solely for temporary work data, and there is no backup of data in the workspaces.&amp;lt;br /&amp;gt;&lt;br /&gt;
If you don&#039;t extend your workspaces, they will be deleted at some point.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Workspace tools documentation ====&lt;br /&gt;
&lt;br /&gt;
You can find more information about the workspace tools in the general documentation:&lt;br /&gt;
:: &amp;amp;rarr; &#039;&#039;&#039;[[Workspace | General workspace tools documentation]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== BinAC 2 workspace policies and limits ====&lt;br /&gt;
&lt;br /&gt;
Some values and behaviors described in the general workspace documentation do not apply to BinAC 2.&amp;lt;br /&amp;gt;&lt;br /&gt;
On BinAC 2, workspaces:&lt;br /&gt;
&lt;br /&gt;
* have a &#039;&#039;&#039;lifetime of 30 days&#039;&#039;&#039;&lt;br /&gt;
* can be extended up to &#039;&#039;&#039;five times&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the capacity of &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is limited, workspace expiration is enforced.&lt;br /&gt;
You will receive automated email reminders one week before a workspace expires.&lt;br /&gt;
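To check the remaining lifetime of your workspaces and extend one, the standard workspace tools can be used (a sketch; the workspace name is an example):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ws_list                     # shows each workspace with its remaining lifetime and extensions left&lt;br /&gt;
$ ws_extend my_experiment 30  # reset the lifetime to 30 days, consuming one of the five extensions&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;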
&lt;br /&gt;
If a workspace’s lifetime expires (i.e. is not extended), it is moved to the special directory&lt;br /&gt;
&amp;lt;code&amp;gt;/pfs/10/work/.removed&amp;lt;/code&amp;gt;.&lt;br /&gt;
Expired workspaces can be restored within a grace period of 14 days.&lt;br /&gt;
After this period, expired workspaces are moved to a special location in the project filesystem.&lt;br /&gt;
After at least six months, a workspace is considered abandoned and is permanently deleted.&lt;br /&gt;
&lt;br /&gt;
In addition to the standard workspace tools, we provide the script&lt;br /&gt;
&amp;lt;syntaxhighlight inline&amp;gt;ws_shelve&amp;lt;/syntaxhighlight&amp;gt; for copying a workspace to a project directory.&lt;br /&gt;
&amp;lt;syntaxhighlight inline&amp;gt;ws_shelve&amp;lt;/syntaxhighlight&amp;gt; takes two arguments:&lt;br /&gt;
&lt;br /&gt;
* the name of the workspace to copy&lt;br /&gt;
* the acronym of your project&lt;br /&gt;
&lt;br /&gt;
This command creates and submits a Slurm job that uses &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; to copy the workspace&lt;br /&gt;
from &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;.&lt;br /&gt;
A new directory is created with a timestamp suffix to prevent name collisions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
# Copy the workspace /pfs/10/work/tu_xyz01-test to /pfs/10/project/bw10a001&lt;br /&gt;
&lt;br /&gt;
$ ws_shelve test bw10a001&lt;br /&gt;
Workspace shelving submitted.&lt;br /&gt;
Job ID: 2058525&lt;br /&gt;
Logs will be written to:&lt;br /&gt;
  /pfs/10/project/bw10a001/.ws_shelve/tu_xyz01/logs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the workspace is copied to&lt;br /&gt;
&amp;lt;syntaxhighlight inline&amp;gt;/pfs/10/project/bw10a001/tu_xyz01/test-&amp;lt;timestamp&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== SDS@hd ===&lt;br /&gt;
&lt;br /&gt;
SDS@hd is mounted via NFS on login and compute nodes at &amp;lt;syntaxhighlight inline&amp;gt;/mnt/sds-hd&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To access your Speichervorhaben (SDS@hd storage project), the export to BinAC 2 must first be enabled by the SDS@hd team. Please contact [mailto:sds-hd-support@urz.uni-heidelberg.de SDS@hd support] and provide the acronym of your Speichervorhaben, along with a request to enable the export to BinAC 2.&lt;br /&gt;
&lt;br /&gt;
Once this has been done, you can access your Speichervorhaben as described in the [https://wiki.bwhpc.de/e/SDS@hd/Access/NFS#Access_your_data SDS documentation].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ kinit $USER&lt;br /&gt;
Password for &amp;lt;user&amp;gt;@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Kerberos ticket store is shared across all nodes. Creating a single ticket is sufficient to access your Speichervorhaben on all nodes.&lt;br /&gt;
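You can verify that a valid ticket exists with &amp;lt;code&amp;gt;klist&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ klist   # list cached Kerberos tickets; the principal should be &amp;lt;user&amp;gt;@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;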
&lt;br /&gt;
=== More Details on the Lustre File System ===&lt;br /&gt;
[https://www.lustre.org/ Lustre] is a distributed parallel file system.&lt;br /&gt;
* The entire logical volume as presented to the user is formed by multiple physical or local drives. Data is distributed over more than one physical or logical volume/hard drive, single files can be larger than the capacity of a single hard drive.&lt;br /&gt;
* The file system can be mounted from all nodes (&amp;quot;clients&amp;quot;) in parallel at the same time for reading and writing. &amp;lt;i&amp;gt;This also means that technically you can write to the same file from two different compute nodes! Usually, this will create an unpredictable mess! Never ever do this unless you know &amp;lt;b&amp;gt;exactly&amp;lt;/b&amp;gt; what you are doing!&amp;lt;/i&amp;gt;&lt;br /&gt;
* On a single server or client, the bandwidth of multiple network interfaces can be aggregated to increase the throughput (&amp;quot;multi-rail&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Lustre works by chopping files into many small parts (&amp;quot;stripes&amp;quot;, file objects) which are then stored on the object storage servers. The information which part of the file is stored where on which object storage server, when it was changed last etc. and the entire directory structure is stored on the metadata servers. Think of the entries on the metadata server as being pointers pointing to the actual file objects on the object storage servers.&lt;br /&gt;
A Lustre file system can consist of many metadata servers (MDS) and object storage servers (OSS).&lt;br /&gt;
Each MDS or OSS can again hold one or more so-called object storage targets (OST) or metadata targets (MDT) which can e.g. be simply multiple hard drives.&lt;br /&gt;
The capacity of a Lustre file system can hence be easily scaled by adding more servers.&lt;br /&gt;
&lt;br /&gt;
==== Useful Lustre Commands ====&lt;br /&gt;
Commands specific to the Lustre file system are divided into user commands (&amp;lt;code&amp;gt;lfs ...&amp;lt;/code&amp;gt;) and administrative commands (&amp;lt;code&amp;gt;lctl ...&amp;lt;/code&amp;gt;). On BinAC 2, users may only execute user commands, and not even all of those.&lt;br /&gt;
* &amp;lt;code&amp;gt;lfs help &amp;lt;command&amp;gt;&amp;lt;/code&amp;gt;: Print built-in help for command; Alternative: &amp;lt;code&amp;gt;man lfs &amp;lt;command&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;lfs find&amp;lt;/code&amp;gt;: Drop-in replacement for the &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt; command, much faster on Lustre file systems as it talks directly to the metadata server&lt;br /&gt;
* &amp;lt;code&amp;gt;lfs --list-commands&amp;lt;/code&amp;gt;: Print a list of available commands&lt;br /&gt;
&lt;br /&gt;
==== Moving data between WORK and PROJECT ====&lt;br /&gt;
&amp;lt;b&amp;gt;!! IMPORTANT !!&amp;lt;/b&amp;gt; Calling &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; on files will &amp;lt;i&amp;gt;not&amp;lt;/i&amp;gt; physically move them between the fast and the slow pool of the file system. Only the file metadata, i.e. the path of the file in the directory tree (data stored on the MDS), is modified. The stripes of the file on the OSS remain exactly where they were. The result is the confusing situation that you now have metadata entries under &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; that still point to WORK OSTs. When &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; is used within the same file system, Lustre only renames the files and makes them available under a different path; the pointers to the file objects on the OSS stay identical. This only changes if you either create a copy of the file at a different path (e.g. with &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt;) or explicitly instruct Lustre to move the actual file objects to another storage location, e.g. another pool of the same file system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Proper ways of moving data between the pools&amp;lt;/b&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
* Copy the data (which creates new files), then delete the old files. Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; cp -ar /pfs/10/work/tu_abcde01-my-precious-ws/* /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; rm -rf /pfs/10/work/tu_abcde01-my-precious-ws/*&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Alternative to copy: use &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; to copy data between the workspace and the project directories. Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; rsync -av /pfs/10/work/tu_abcde01-my-precious-ws/simulation/output /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/simulation25/&lt;br /&gt;
$&amp;gt; rm -rf /pfs/10/work/tu_abcde01-my-precious-ws/*&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* If there are many subfolders with similar size, you can use &amp;lt;code&amp;gt;xargs&amp;lt;/code&amp;gt; to copy them in parallel:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; find . -maxdepth 1 -mindepth 1 -type d -print | xargs -P4 -I{} rsync -aHAXW --inplace --update {} /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/simulation25/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
will launch four parallel &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; processes at a time, each will copy one of the subdirectories.&lt;br /&gt;
* First move the metadata with &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;, then use &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt; or the wrapper &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt; to actually migrate the file stripes. This is also a possible resolution if you already &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;ed data from &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; or vice versa.&lt;br /&gt;
** &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt; is the raw lustre command. It can only operate on one file at a time, but offers access to all options.&lt;br /&gt;
** &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt; is a versatile wrapper script that can work on single files or recursively on entire directories. If available, it will try to use &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt;, otherwise it will fall back to &amp;lt;code&amp;gt;rsync&amp;lt;/code&amp;gt; (see &amp;lt;code&amp;gt;lfs_migrate --help&amp;lt;/code&amp;gt; for all options).&amp;lt;br /&amp;gt;&lt;br /&gt;
Example with &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; mv /pfs/10/work/tu_abcde01-my-precious-ws/* /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; cd /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; lfs find . -type f --pool work -0 | xargs -0 lfs migrate --pool project # find all files whose file objects are on the work pool and migrate the objects to the project pool&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Example with &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; mv /pfs/10/work/tu_abcde01-my-precious-ws/* /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; cd /pfs/10/project/bw10a001/tu_abcde01/my-precious-research/.&lt;br /&gt;
$&amp;gt; lfs_migrate --yes -q -p project * # migrate all file objects in the current directory to the project pool, be quiet (-q) and do not ask for confirmation (--yes)&lt;br /&gt;
$&amp;gt; ws_release --delete-data my-precious-ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Both migration commands can also be combined with options to restripe the files during migration, i.e. you can also change the number of OSTs a file is striped over, the size of a single stripe, etc.&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Attention!&amp;lt;/b&amp;gt; Both &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;lfs_migrate&amp;lt;/code&amp;gt; will &amp;lt;i&amp;gt;not&amp;lt;/i&amp;gt; change the path of the file(s), you must also &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; them! If used without &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;, the files will still belong to the workspace although their file object stripes are now on the &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; pool and a subsequent &amp;lt;code&amp;gt;rm&amp;lt;/code&amp;gt; in the workspace will wipe them. &lt;br /&gt;
&lt;br /&gt;
All of the above procedures may take a considerable amount of time depending on the amount of data, so it might be advisable to execute them in a terminal multiplexer like &amp;lt;code&amp;gt;screen&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;tmux&amp;lt;/code&amp;gt; or wrap them into small SLURM jobs with &amp;lt;code&amp;gt;sbatch --wrap=&amp;quot;&amp;lt;command&amp;gt;&amp;quot;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Question&amp;lt;/b&amp;gt;:&amp;lt;br /&amp;gt; I have completely lost track. How do I find out where my files are located?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Answer&amp;lt;/b&amp;gt;:&amp;lt;br /&amp;gt;&lt;br /&gt;
* Use &amp;lt;code&amp;gt;lfs find&amp;lt;/code&amp;gt; to find files on a specific pool. Example: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; lfs find . --pool project # recursively find all files in the current directory whose file objects are on the &amp;quot;project&amp;quot; pool&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Use &amp;lt;code&amp;gt;lfs getstripe&amp;lt;/code&amp;gt; to query the striping pattern and the pool (also works recursively if called with a directory). Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$&amp;gt; lfs getstripe parameter.h &lt;br /&gt;
parameter.h&lt;br /&gt;
lmm_stripe_count:  1&lt;br /&gt;
lmm_stripe_size:   1048576&lt;br /&gt;
lmm_pattern:       raid0&lt;br /&gt;
lmm_layout_gen:    1&lt;br /&gt;
lmm_stripe_offset: 44&lt;br /&gt;
lmm_pool:          project&lt;br /&gt;
        obdidx           objid           objid           group&lt;br /&gt;
            44         7991938       0x79f282      0xd80000400&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
shows that the file is striped over OST 44 (obdidx) which belongs to pool project (lmm_pool).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Why paths and storage pools should match:&amp;lt;/b&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
There are four different possible scenarios with two subdirectories and two pools:&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;good.&amp;lt;/b&amp;gt;&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;good.&amp;lt;/b&amp;gt;&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;bad&amp;lt;/b&amp;gt;. This will &amp;quot;leak&amp;quot; storage from the fast pool, making it unavailable for workspaces.&lt;br /&gt;
* File path in &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;, file objects on pool &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt;: &amp;lt;b&amp;gt;bad&amp;lt;/b&amp;gt;. Access will be slow, and when the (volatile) workspace is purged, the data residing on the &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; pool will be deleted along with it.&lt;br /&gt;
The latter two situations may arise from &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;ing data between workspaces and project folders.&lt;br /&gt;
&lt;br /&gt;
==== More on data striping and how to influence it ====&lt;br /&gt;
&amp;lt;b&amp;gt;!! The default striping patterns on BinAC2 are set for good reasons and should not be changed light-heartedly!&amp;lt;br /&amp;gt; Doing so wrongly will in the best case only hurt your own performance.&amp;lt;br /&amp;gt; In the worst case, it will also hurt all other users and endanger the stability of the cluster.&amp;lt;br /&amp;gt; Please talk to the admins first if you think that you need a non-default pattern.&amp;lt;/b&amp;gt;&lt;br /&gt;
* Reading striping patterns with &amp;lt;code&amp;gt;lfs getstripe&amp;lt;/code&amp;gt;&lt;br /&gt;
* Setting striping patterns with &amp;lt;code&amp;gt;lfs setstripe&amp;lt;/code&amp;gt; for new files and directories&lt;br /&gt;
* Restriping files with &amp;lt;code&amp;gt;lfs migrate&amp;lt;/code&amp;gt;&lt;br /&gt;
* Progressive File Layout&lt;br /&gt;
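In line with the warning above, inspecting a layout is always safe. To see (read-only) the default layout that new files in a directory will inherit, &amp;lt;code&amp;gt;lfs getstripe -d&amp;lt;/code&amp;gt; can be used:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ lfs getstripe -d /pfs/10/work   # print the directory&#039;s default striping pattern without changing anything&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;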
&lt;br /&gt;
==== Architecture of BinAC2&#039;s Lustre File System ====&lt;br /&gt;
&amp;lt;b&amp;gt;Metadata Servers:&amp;lt;/b&amp;gt;&lt;br /&gt;
* 2 metadata servers&lt;br /&gt;
* 1 MDT per server&lt;br /&gt;
* MDT Capacity: 31TB, hardware RAID6 on NVMe drives (flash memory/SSD)&lt;br /&gt;
* Networking: 2x 100 GbE, 2x HDR-100 InfiniBand&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Object storage servers:&amp;lt;/b&amp;gt;&lt;br /&gt;
* 8 object storage servers&lt;br /&gt;
* 2 fast OSTs per server&lt;br /&gt;
** 70 TB per OST, software RAID (raid-z2, 10+2 redundancy)&lt;br /&gt;
** NVMe drives, directly attached to the PCIe bus&lt;br /&gt;
* 8 slow OSTs per server&lt;br /&gt;
** 143 TB per OST, hardware RAID (RAID6, 8+2 redundancy)&lt;br /&gt;
** externally attached via SAS&lt;br /&gt;
* Networking: 2x 100 GbE, 2x HDR-100 InfiniBand&lt;br /&gt;
&lt;br /&gt;
* All fast OSTs are assigned to the pool &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt;&lt;br /&gt;
* All slow OSTs are assigned to the pool &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt;&lt;br /&gt;
* All files that are created under &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; are by default stored on the fast pool&lt;br /&gt;
* All files that are created under &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; are by default stored on the slow pool&lt;br /&gt;
* Metadata is distributed over both MDTs. All subdirectories of a directory (workspace or project folder) are typically on the same MDT. Directory striping/placement on MDTs can not be influenced by users.&lt;br /&gt;
* Default OST striping: Stripes have a size of 1 MiB. Files are striped over one OST if possible, i.e. all stripes of a file are on the same OST. New files are created on the OST with the most free space.&lt;br /&gt;
Internally, the slow and the fast pool belong to the same Lustre file system and namespace.&lt;br /&gt;
&lt;br /&gt;
More reading:&lt;br /&gt;
* [https://doc.lustre.org/lustre_manual.xhtml The Lustre 2.X Manual] ([http://doc.lustre.org/lustre_manual.pdf PDF])&lt;br /&gt;
* [https://wiki.lustre.org/Main_Page The Lustre Wiki]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15662</id>
		<title>BinAC2/Software</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15662"/>
		<updated>2025-12-19T13:49:16Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Environment Modules ==&lt;br /&gt;
Most software is provided as Modules.&lt;br /&gt;
&lt;br /&gt;
Required reading to use: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Available Software ==&lt;br /&gt;
&lt;br /&gt;
* Web: Visit [https://www.bwhpc.de/software.php https://www.bwhpc.de/software.php], select &amp;lt;code&amp;gt;Cluster → bwForCluster BinAC 2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* On the cluster: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Software in Containers ==&lt;br /&gt;
&lt;br /&gt;
Instructions for loading software in containers: [[NEMO/Software/Singularity_Containers|Singularity Containers]]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
Documentation for environment modules available on the cluster:  &lt;br /&gt;
&lt;br /&gt;
* with command &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt;&lt;br /&gt;
* examples in &amp;lt;code&amp;gt;$SOFTNAME_EXA_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation in the Wiki ==&lt;br /&gt;
&lt;br /&gt;
For some applications additional documentation is provided here.&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Software/Alphafold | Alphafold 3 ]] &lt;br /&gt;
* [[BinAC2/Software/AMDock | AMDock ]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/BLAST | BLAST]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Bowtie | Bowtie]] --&amp;gt;&lt;br /&gt;
* [[BinAC/Software/Cellranger | Cell Ranger]]&lt;br /&gt;
* [[Development/Conda | Conda / Mamba]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Gromacs | Gromacs]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/Jupyterlab | JupyterLab]]&lt;br /&gt;
* [[BinAC2/Software/Nextflow | Nextflow and nf-core]]&lt;br /&gt;
* [[BinAC2/Software/Snakemake | Snakemake]]&lt;br /&gt;
* [[BinAC2/Software/TigerVNC | TigerVNC: Remote visualization using VNC]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15661</id>
		<title>BinAC2/Software</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15661"/>
		<updated>2025-12-19T13:37:48Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Environment Modules ==&lt;br /&gt;
Most software is provided as Modules.&lt;br /&gt;
&lt;br /&gt;
Required reading to use: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Available Software ==&lt;br /&gt;
&lt;br /&gt;
* Web: Visit [https://www.bwhpc.de/software.php https://www.bwhpc.de/software.php], select &amp;lt;code&amp;gt;Cluster → bwForCluster BinAC 2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* On the cluster: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Software in Containers ==&lt;br /&gt;
&lt;br /&gt;
Instructions for loading software in containers: [[NEMO/Software/Singularity_Containers|Singularity Containers]]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
Documentation for environment modules available on the cluster:  &lt;br /&gt;
&lt;br /&gt;
* with command &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt;&lt;br /&gt;
* examples in &amp;lt;code&amp;gt;$SOFTNAME_EXA_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation in the Wiki ==&lt;br /&gt;
&lt;br /&gt;
For some applications additional documentation is provided here.&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Software/Alphafold | Alphafold 3 ]] &lt;br /&gt;
* [[BinAC2/Software/AMDock | AMDock ]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/BLAST | BLAST]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Bowtie | Bowtie]] --&amp;gt;&lt;br /&gt;
* [[BinAC/Software/Cellranger | Cell Ranger]]&lt;br /&gt;
* [[Development/Conda | Conda]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Gromacs | Gromacs]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/Jupyterlab | JupyterLab]]&lt;br /&gt;
* [[BinAC2/Software/Nextflow | Nextflow and nf-core]]&lt;br /&gt;
* [[BinAC2/Software/Snakemake | Snakemake]]&lt;br /&gt;
* [[BinAC2/Software/TigerVNC | TigerVNC: Remote visualization using VNC]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/Alphafold&amp;diff=15649</id>
		<title>BinAC2/Software/Alphafold</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/Alphafold&amp;diff=15649"/>
		<updated>2025-12-16T10:57:55Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: Created page with &amp;quot;{{Softwarepage|bio/alphafold}}  {| width=600px class=&amp;quot;wikitable&amp;quot; |- ! Description !! Content |- | module load | bio/alphafold |- | License | ACC BY-NC-SA 4.0 - see [https://github.com/google-deepmind/alphafold3/blob/main/LICENSE] |- | Citing | See [https://github.com/google-deepmind/alphafold3?tab=readme-ov-file#citing-this-work] |- | Links | DeepMind AlphaFold Website: [https://deepmind.google/technologies/alphafold/] &amp;lt;br&amp;gt; Alphafold 3 Repository: [https://github.com/goo...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Softwarepage|bio/alphafold}}&lt;br /&gt;
&lt;br /&gt;
{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| bio/alphafold&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| CC BY-NC-SA 4.0 - see [https://github.com/google-deepmind/alphafold3/blob/main/LICENSE]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| See [https://github.com/google-deepmind/alphafold3?tab=readme-ov-file#citing-this-work]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| DeepMind AlphaFold Website: [https://deepmind.google/technologies/alphafold/] &amp;lt;br&amp;gt;&lt;br /&gt;
Alphafold 3 Repository: [https://github.com/google-deepmind/alphafold3]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
AlphaFold 3, developed by DeepMind, predicts the 3D structure and interactions of biological molecules: proteins, DNA, RNA, ligands, and other small molecules.&lt;br /&gt;
&lt;br /&gt;
BinAC 2 provides almost everything you need for working with Alphafold 3:&lt;br /&gt;
* Alphafold 3 installed in an Apptainer image&lt;br /&gt;
* Alphafold 3 database&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;BUT&#039;&#039;&#039; you have to bring your own parameter file! Due to license issues we are not allowed to share them publicly. You can find information on how to obtain them [https://github.com/google-deepmind/alphafold3?tab=readme-ov-file#obtaining-model-parameters here].&lt;br /&gt;
&lt;br /&gt;
= Interpreting Predictions =&lt;br /&gt;
&lt;br /&gt;
When using AlphaFold 3, users should treat predicted structures as hypotheses rather than definitive representations of molecular reality.&lt;br /&gt;
While the method often produces highly accurate models, confidence can vary substantially across regions, especially for flexible loops, disordered segments, or novel interactions not well represented in training data. Users should therefore critically assess the confidence metrics provided by the model and, where possible, validate predictions against experimental data or independent computational approaches.&lt;br /&gt;
&lt;br /&gt;
Biological plausibility, consistency with known functional or biochemical evidence, and sensitivity to alternative inputs should also be considered. Overall, AlphaFold 3 outputs are most powerful when used as guidance for downstream analysis and experimental design, not as final ground truth.&lt;br /&gt;
&lt;br /&gt;
AlphaFold 3 supplies multiple confidence metrics to help you critically assess its predictions:&lt;br /&gt;
&lt;br /&gt;
* Predicted LDDT (pLDDT): predicted atomic coordinates are accompanied by pLDDT scores. These reflect AlphaFold 3’s local confidence in the prediction of the position of that particular atom.&lt;br /&gt;
* Predicted Aligned Error (PAE) scores and a PAE plot: an indication of AlphaFold’s confidence in the packing and relative positions of domains, molecular chains such as proteins and DNA, and other entities like ligands and ions.&lt;br /&gt;
* Predicted TM (pTM) score: a single-value metric reflecting the accuracy of the overall predicted structure.&lt;br /&gt;
* Interface-predicted TM (ipTM) score: measures the accuracy of predictions of one component of the complex relative to the other components of the complex.&lt;br /&gt;
* Per chain pTM and per-chain pair ipTM: confidence in individual chains or pairs of chains.&lt;br /&gt;
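These summary metrics are written to a JSON file alongside the predicted structure. A minimal sketch of reading them follows; the file name and the key names ("ptm", "iptm") are assumptions based on the metrics listed above, so check your actual output directory before relying on them.

```python
# Sketch: read overall confidence metrics from an AlphaFold 3 summary file.
# File name and keys ("ptm", "iptm") are assumptions - verify against your output.
import json

def read_summary(path):
    with open(path) as fh:
        summary = json.load(fh)
    return summary.get("ptm"), summary.get("iptm")
```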
&lt;br /&gt;
Further Reading:&lt;br /&gt;
&lt;br /&gt;
* [https://www.ebi.ac.uk/training/online/courses/alphafold/alphafold-3-and-alphafold-server/introducing-alphafold-3/how-have-alphafold-3s-predictions-been-validated/ AlphaFold3 validation overview]&lt;br /&gt;
&lt;br /&gt;
* [https://www.ebi.ac.uk/training/online/courses/alphafold/alphafold-3-and-alphafold-server/how-to-assess-the-quality-of-alphafold-3-predictions/ How to assess the quality of AlphaFold3 predictions]&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
&lt;br /&gt;
AlphaFold&#039;s algorithm can be divided into two steps:&lt;br /&gt;
&lt;br /&gt;
# CPU-only part: computation of several multiple sequence alignments (MSAs)&lt;br /&gt;
# GPU part: the MSAs are used as input to a neural network for inferring the structure&lt;br /&gt;
&lt;br /&gt;
This results in two different optimal resource profiles regarding the number of CPU cores and GPUs.&lt;br /&gt;
Therefore, we provide a template jobscript for each step at &amp;lt;code&amp;gt;/opt/bwhpc/common/bio/alphafold/3.0.1/bwhpc-examples&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Input == &lt;br /&gt;
&lt;br /&gt;
You can provide inputs to &amp;lt;code&amp;gt;run_alphafold.py&amp;lt;/code&amp;gt; in one of two ways:&lt;br /&gt;
&lt;br /&gt;
* Single input file: Use the &amp;lt;code&amp;gt;--json_path&amp;lt;/code&amp;gt; flag followed by the path to a single JSON file.&lt;br /&gt;
* Multiple input files: Use the &amp;lt;code&amp;gt;--input_dir&amp;lt;/code&amp;gt; flag followed by the path to a directory of JSON files.&lt;br /&gt;
&lt;br /&gt;
Please see the official [https://github.com/google-deepmind/alphafold3/blob/main/docs/input.md#top-level-structure AlphaFold 3 documentation] for more information regarding JSON file structure.&lt;br /&gt;
&lt;br /&gt;
The example jobscript uses a very simple input JSON:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;name&amp;quot;: &amp;quot;2PV7&amp;quot;,&lt;br /&gt;
  &amp;quot;sequences&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;protein&amp;quot;: {&lt;br /&gt;
        &amp;quot;id&amp;quot;: [&amp;quot;A&amp;quot;, &amp;quot;B&amp;quot;],&lt;br /&gt;
        &amp;quot;sequence&amp;quot;: &amp;quot;GMRESYANENQFGFKTINSDIHKIVIVGGYGKLGGLFARYLRASGYPISILDREDWAVAESILANADVVIVSVPINLTLETIERLKPYLTENMLLADLTSVKREPLAKMLEVHTGAVLGLHPMFGADIASMAKQVVVRCDGRFPERYEWLLEQIQIWGAKIYQTNATEHDHNMTYIQALRHFSTFANGLHLSKQPINLANLLALSSPIYRLELAMIGRLFAQDAELYADIIMDKSENLAVIETLKQTYDEALTFFENNDRQGFIDAFHKVRDWFGDYSEQFLKESRQLLQQANDLKQG&amp;quot;&lt;br /&gt;
      }&lt;br /&gt;
    }&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;modelSeeds&amp;quot;: [1],&lt;br /&gt;
  &amp;quot;dialect&amp;quot;: &amp;quot;alphafold3&amp;quot;,&lt;br /&gt;
  &amp;quot;version&amp;quot;: 1&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
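When you have many targets to fold, it can be convenient to generate one such input JSON per sequence and point the &amp;lt;code&amp;gt;--input_dir&amp;lt;/code&amp;gt; flag at the resulting directory. A minimal sketch, with placeholder sequences and a hypothetical directory name:

```python
# Sketch: generate one AlphaFold 3 input JSON per sequence, for use with
# --input_dir. Directory name and sequences are placeholders, not real data.
import json
from pathlib import Path

sequences = {
    "job_a": "GMRESYANENQFGFKT",   # placeholder sequences
    "job_b": "MKTAYIAKQRQISFVK",
}

input_dir = Path("af3-inputs")
input_dir.mkdir(exist_ok=True)

for name, seq in sequences.items():
    fold_input = {
        "name": name,
        "sequences": [{"protein": {"id": ["A"], "sequence": seq}}],
        "modelSeeds": [1],
        "dialect": "alphafold3",
        "version": 1,
    }
    (input_dir / f"{name}.json").write_text(json.dumps(fold_input, indent=2))
```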
&lt;br /&gt;
== AlphaFold 3 database ==&lt;br /&gt;
&lt;br /&gt;
Don&#039;t change &amp;lt;code&amp;gt;--db_dir=$ALPHAFOLD_DATABASES&amp;lt;/code&amp;gt; without a good reason. Upon loading the AlphaFold 3 module, the environment variable &amp;lt;code&amp;gt;$ALPHAFOLD_DATABASES&amp;lt;/code&amp;gt; is set to the centrally stored database at &amp;lt;code&amp;gt;/pfs/10/project/db/alphafold/3.0.1/databases/&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== AlphaFold 3 model parameters ==&lt;br /&gt;
&lt;br /&gt;
Set the path to the AlphaFold 3 model parameters you received from DeepMind after applying for them. If you place the file &amp;lt;code&amp;gt;af3.bin.zst&amp;lt;/code&amp;gt; in the directory &amp;lt;code&amp;gt;$HOME/af3-models&amp;lt;/code&amp;gt;, the example scripts will work out of the box.&lt;br /&gt;
&lt;br /&gt;
Please note: You can also store the parameters in your project directory or a workspace for slightly better performance. The template uses &amp;lt;code&amp;gt;$HOME/af3-models&amp;lt;/code&amp;gt; because this path works for everyone.&lt;br /&gt;
&lt;br /&gt;
== Output ==&lt;br /&gt;
&lt;br /&gt;
You can set the output directory via the &amp;lt;code&amp;gt;--output_dir=$ALPHAFOLD_RESULTS_DIR&amp;lt;/code&amp;gt; option. The template jobscript creates a workspace called &amp;lt;code&amp;gt;alphafold&amp;lt;/code&amp;gt; as output directory.&lt;br /&gt;
&lt;br /&gt;
For every input job, AlphaFold 3 writes all its outputs to a directory named after the sanitized version of the job name. E.g. for the job name &amp;quot;My first fold (TEST)&amp;quot;, AlphaFold 3 will write its outputs to a directory called &amp;lt;code&amp;gt;My_first_fold_TEST&amp;lt;/code&amp;gt; (case is preserved). If such a directory already exists, AlphaFold 3 will append a timestamp to the directory name to avoid overwriting existing data, unless &amp;lt;code&amp;gt;--force_output_dir&amp;lt;/code&amp;gt; is passed.&lt;br /&gt;
&lt;br /&gt;
Please see the official [https://github.com/google-deepmind/alphafold3/blob/main/docs/output.md AlphaFold 3 documentation] for more information regarding output directory structure.&lt;br /&gt;
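As a rough illustration, the job-name sanitization described above can be approximated like this. The exact rules are defined by AlphaFold 3 itself, so treat this as an assumption rather than the authoritative behaviour:

```python
# Approximation of AlphaFold 3's job-name sanitization (assumed behaviour:
# runs of disallowed characters become single underscores, case is preserved).
import re

def sanitize_job_name(name: str) -> str:
    return re.sub(r"[^A-Za-z0-9_-]+", "_", name).strip("_")

print(sanitize_job_name("My first fold (TEST)"))  # My_first_fold_TEST
```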
&lt;br /&gt;
= CPU-only: Multiple Sequence Alignment =&lt;br /&gt;
&lt;br /&gt;
At the beginning, AlphaFold 3 computes three multiple sequence alignments (MSAs).&lt;br /&gt;
These MSAs are computed sequentially on the CPU, and the number of threads is hard-coded:&lt;br /&gt;
&lt;br /&gt;
* jackhmmer on UniRef90 using 8 threads&lt;br /&gt;
* jackhmmer on MGnify using 8 threads&lt;br /&gt;
* HHblits on BFD + Uniclust30 using 4 threads&lt;br /&gt;
&lt;br /&gt;
We provide a template jobscript at &amp;lt;code&amp;gt;/opt/bwhpc/common/bio/alphafold/3.0.1/bwhpc-examples/binac2-af3-alignment.slurm&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can adjust &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt;, although the values below are sensible defaults.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=compute&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&lt;br /&gt;
# Alphafold creates alignments sequentially, using at max 8 cores.&lt;br /&gt;
#SBATCH --ntasks-per-node=8&lt;br /&gt;
&lt;br /&gt;
#SBATCH --time=10:00:00&lt;br /&gt;
#SBATCH --mem=100gb&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MSAs are stored in the directory specified by &amp;lt;code&amp;gt;--output_dir&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= GPU-part: Model Inference =&lt;br /&gt;
&lt;br /&gt;
After computing the MSAs, AlphaFold 3 performs model inference on the GPU. Only one GPU is used.&lt;br /&gt;
This step has the following optimal resource profile:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=8&lt;br /&gt;
#SBATCH --gres=gpu:a100:1&lt;br /&gt;
#SBATCH --time=06:00:00&lt;br /&gt;
#SBATCH --mem=100gb&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The template jobscript differs from the CPU-only jobscript only in the option &amp;lt;code&amp;gt;--norun_data_pipeline&amp;lt;/code&amp;gt;. This means the MSAs are not recomputed but taken from the previous CPU-only job.&lt;br /&gt;
&lt;br /&gt;
= Chaining CPU- and GPU-Jobs = &lt;br /&gt;
&lt;br /&gt;
To run the data pipeline first and then start the inference job as soon as the first one is finished, you can chain them like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
JOBID=$(sbatch --parsable binac2-af3-alignment.slurm)&lt;br /&gt;
sbatch --dependency=afterok:$JOBID binac2-af3-inference.slurm&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
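A hypothetical batch variant of this chaining submits one alignment+inference pair per input JSON. It assumes the template jobscripts accept the input file as their first argument, which may not match the installed templates, so adapt it before use:

```shell
# Hypothetical batch submission: one alignment+inference chain per input JSON.
# ASSUMPTION: the template jobscripts take the input file as their first
# argument - check the real templates before using this.
submit_chain() {
    local input=$1
    local jobid
    jobid=$(sbatch --parsable binac2-af3-alignment.slurm "$input") || return 1
    sbatch --dependency=afterok:"$jobid" binac2-af3-inference.slurm "$input"
}

if command -v sbatch >/dev/null; then   # only meaningful on the cluster
    for f in "$HOME"/af3-inputs/*.json; do
        submit_chain "$f"
    done
fi
```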
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- This is a comment&lt;br /&gt;
= Benchmark on BinAC 2 =&lt;br /&gt;
&lt;br /&gt;
We ran some CASP14 targets with the &amp;lt;code&amp;gt;--benchmark=true&amp;lt;/code&amp;gt; on BinAC. The following table gives you some guidance for choosing meaningful memory and walltime values. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Benchmark results on BinAC (work in progress)&lt;br /&gt;
|-&lt;br /&gt;
! Target !! #Residues !! jackhmmer UniRef90 [s] !! jackhmmer MGnify [s] !! HHblits on BFD [s] !! Inference [s] !! Memory Usage [GB]&lt;br /&gt;
|-&lt;br /&gt;
| ... || ... || ... || ... || ... || ... || ...&lt;br /&gt;
|-&lt;br /&gt;
| ... || ... || ... || ... || ... || ... || ...&lt;br /&gt;
|-&lt;br /&gt;
| ... || ... || ... || ... || ... || ... || ...&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Acknowledgement&amp;diff=15634</id>
		<title>BinAC2/Acknowledgement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Acknowledgement&amp;diff=15634"/>
		<updated>2025-12-05T11:30:59Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When preparing a publication describing work that involved the usage of a bwForCluster, e.g. BinAC 2, please ensure that you reference the bwHPC initiative, the bwHPC-S5 project and – if appropriate – also the bwHPC facility itself. The following sample text is suggested as a starting point.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Acknowledgement:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;The authors acknowledge support by the High Performance and Cloud Computing Group at the Zentrum für Datenverarbeitung of the University of Tübingen, the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) for funding under [https://gepris.dfg.de/gepris/projekt/455787709 &amp;quot;Project number 455787709&amp;quot;] (bwForCluster BinAC 2)&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, we kindly ask you to notify us of any reports, conference papers, journal articles, theses, posters, or talks that contain results obtained on any bwHPC resource by sending an email to &lt;br /&gt;
[mailto:publications@bwhpc.de  publications@bwhpc.de] stating:&lt;br /&gt;
* cluster facility (e.g. bwForCluster BinAC 2)&lt;br /&gt;
* RV acronym (e.g. bw16A000)&lt;br /&gt;
* author(s)&lt;br /&gt;
* title &#039;&#039;or&#039;&#039; booktitle&lt;br /&gt;
* journal, volume, pages &#039;&#039;or&#039;&#039; editors, address, publisher &lt;br /&gt;
* year.&lt;br /&gt;
&lt;br /&gt;
Such recognition is highly important for acquiring funding for the next generation of hardware, support services, data storage and infrastructure.&lt;br /&gt;
&lt;br /&gt;
The publications will be referenced on the bwHPC website:&lt;br /&gt;
 https://www.bwhpc.de/user_publications.html&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Acknowledgement&amp;diff=15633</id>
		<title>BinAC2/Acknowledgement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Acknowledgement&amp;diff=15633"/>
		<updated>2025-12-05T11:29:29Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: Use project number in acknowledgement&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When preparing a publication describing work that involved the usage of a bwForCluster, e.g. BinAC 2, please ensure that you reference the bwHPC initiative, the bwHPC-S5 project and – if appropriate – also the bwHPC facility itself. The following sample text is suggested as a starting point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
The authors acknowledge support by the High Performance and Cloud Computing Group at the Zentrum für Datenverarbeitung of the University of Tübingen, the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) for funding under [https://gepris.dfg.de/gepris/projekt/455787709 &amp;quot;Project number 455787709&amp;quot;] (bwForCluster BinAC 2)&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, we kindly ask you to notify us of any reports, conference papers, journal articles, theses, posters, or talks that contain results obtained on any bwHPC resource by sending an email to &lt;br /&gt;
[mailto:publications@bwhpc.de  publications@bwhpc.de] stating:&lt;br /&gt;
* cluster facility (e.g. bwForCluster BinAC 2)&lt;br /&gt;
* RV acronym (e.g. bw16A000)&lt;br /&gt;
* author(s)&lt;br /&gt;
* title &#039;&#039;or&#039;&#039; booktitle&lt;br /&gt;
* journal, volume, pages &#039;&#039;or&#039;&#039; editors, address, publisher &lt;br /&gt;
* year.&lt;br /&gt;
&lt;br /&gt;
Such recognition is highly important for acquiring funding for the next generation of hardware, support services, data storage and infrastructure.&lt;br /&gt;
&lt;br /&gt;
The publications will be referenced on the bwHPC website:&lt;br /&gt;
 https://www.bwhpc.de/user_publications.html&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Username&amp;diff=15559</id>
		<title>Registration/Login/Username</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Username&amp;diff=15559"/>
		<updated>2025-12-02T08:46:44Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: BinAC -&amp;gt; BinAC 2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
= Login Username =&lt;br /&gt;
&lt;br /&gt;
All members of universities and colleges in Baden-Württemberg can use the bwHPC resources.&lt;br /&gt;
Prefixes are used to ensure that the usernames assigned by the home organization are unique within the state.&lt;br /&gt;
The username for the bwHPC clusters is the same as the one assigned by the university or college, but it is prefixed with two letters for the home institution.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Starting with &#039;&#039;&#039;bwUniCluster 3.0&#039;&#039;&#039;, KIT users have the &#039;&#039;&#039;prefix ka&#039;&#039;&#039; on all systems. &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There are two ways to find out your username for the bwForCluster and the bwUniCluster:&lt;br /&gt;
&lt;br /&gt;
* [[#Get_your_Username_by_adding_a_Prefix_to_your_local_Account_Name|Get your Username by adding a Prefix to your local Account Name]]&lt;br /&gt;
* [[#Find_out_your_Username_by_visiting_the_Registration_Service|Find out your Username by visiting the Registration Service]]&lt;br /&gt;
&lt;br /&gt;
== Get your Username by adding a Prefix to your local Account Name ==&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
If your university is not listed in the [https://www.bwidm.de/hochschulen.php &#039;&#039;&#039;bwIDM Membership Table&#039;&#039;&#039;], please ask your university to join bwIDM or update the table.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
If you want to use a &#039;&#039;&#039;bwForCluster&#039;&#039;&#039; or the &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; you need to add a prefix to your local username.&lt;br /&gt;
The username is constructed as follows (see [[#Examples|Examples]] below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;prefix&amp;gt;_&amp;lt;local username&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Prefix for Universities of Applied Sciences ===&lt;br /&gt;
&lt;br /&gt;
For universities, see the table below. Users from the [[Registration/HAW|HAW BW e.V.]] or a university of applied sciences should use this table, updated by bwIDM, as a guideline:&lt;br /&gt;
* [https://www.bwidm.de/hochschulen.php &#039;&#039;&#039;bwIDM Membership Table&#039;&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
=== Prefix for Universities ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:280px&amp;quot; | User from University&lt;br /&gt;
!style=&amp;quot;width:60px&amp;quot;  | Prefix&lt;br /&gt;
!style=&amp;quot;width:430px&amp;quot; | Username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg&lt;br /&gt;
| fr&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| fr_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg&lt;br /&gt;
| hd&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| hd_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim&lt;br /&gt;
| ho&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| ho_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
| ka&lt;br /&gt;
| align=&amp;quot;center&amp;quot;| ka_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz&lt;br /&gt;
| kn&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| kn_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim&lt;br /&gt;
| ma&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| ma_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart&lt;br /&gt;
| st&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| st_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen&lt;br /&gt;
| tu&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| tu_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm&lt;br /&gt;
| ul&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| ul_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&lt;br /&gt;
* If you are a member of the University of Konstanz and your local username is &amp;lt;code&amp;gt;ab1234&amp;lt;/code&amp;gt;, your username on any bwHPC cluster is &amp;lt;code&amp;gt;kn_ab1234&amp;lt;/code&amp;gt;.&lt;br /&gt;
* If your local username for the university is &amp;lt;code&amp;gt;vwxyz1234&amp;lt;/code&amp;gt; and you are a user of the University of Freiburg, your username on any bwHPC cluster is &amp;lt;code&amp;gt;fr_vwxyz1234&amp;lt;/code&amp;gt;.&lt;br /&gt;
* If you are from Aalen University and your username is &amp;lt;code&amp;gt;xyzs12342&amp;lt;/code&amp;gt;, your username for any bwHPC cluster is &amp;lt;code&amp;gt;aa_xyzs12342&amp;lt;/code&amp;gt;.&lt;br /&gt;
* If your KIT username is &amp;lt;code&amp;gt;ab1234&amp;lt;/code&amp;gt;, your username on any bwHPC cluster is &amp;lt;code&amp;gt;ka_ab1234&amp;lt;/code&amp;gt;.&lt;br /&gt;
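The construction rule from the examples is purely mechanical and can be sketched as follows (the prefix table here is abbreviated; the full mapping is in the table above):

```python
# Sketch of the bwHPC username rule: prefix + underscore + local username.
# The prefix table is abbreviated - see the full table on this page.
PREFIXES = {
    "Universität Freiburg": "fr",
    "Universität Konstanz": "kn",
    "Karlsruhe Institute of Technology (KIT)": "ka",
}

def cluster_username(university: str, local_username: str) -> str:
    return f"{PREFIXES[university]}_{local_username}"

print(cluster_username("Universität Konstanz", "ab1234"))  # kn_ab1234
```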
&lt;br /&gt;
== Find out your Username by visiting the Registration Service ==&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can log in to the registration service and verify your username online.&lt;br /&gt;
To do this, follow the next steps:&lt;br /&gt;
&lt;br /&gt;
1. Select the cluster you want to know your username for: &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;bwUniCluster 3.0&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://bwservices.uni-tuebingen.de &#039;&#039;&#039;BinAC 2&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;JUSTUS 2&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://bwservices.uni-heidelberg.de &#039;&#039;&#039;Helix&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;NEMO2&#039;&#039;&#039;]&lt;br /&gt;
 &lt;br /&gt;
2. Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
3. Enter your home-organisational user ID / username and your home-organisational password and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
4. You will be redirected back to the registration website.&lt;br /&gt;
&lt;br /&gt;
5. Find the cluster entry and select &#039;&#039;&#039;Registry Info&#039;&#039;&#039;.&lt;br /&gt;
[[File:BwIDM-pw.png|center|frame|Check Registry Info.]]&lt;br /&gt;
&lt;br /&gt;
6. Your cluster username is shown after the string &#039;&#039;&#039;Username for login&#039;&#039;&#039;.&lt;br /&gt;
{| style=&amp;quot;width:50%;&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:50%;&amp;quot; align=&amp;quot;center&amp;quot;|[[File:BwIDM-user.png|center|thumb|300px|Username for login (newer registration services).]]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Account&amp;diff=15558</id>
		<title>Registration/Account</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Account&amp;diff=15558"/>
		<updated>2025-12-02T08:45:13Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: BinAC -&amp;gt; BinAC 2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you have trouble logging in to your account on the cluster, you can verify your account as follows:&lt;br /&gt;
&lt;br /&gt;
1. Select the cluster for which you want to check your account and select your home organization: &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;bwUniCluster 3.0&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://bwservices.uni-tuebingen.de &#039;&#039;&#039;BinAC 2&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;JUSTUS 2&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://bwservices.uni-heidelberg.de &#039;&#039;&#039;Helix&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;NEMO2&#039;&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
2. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
[[File:BwIDM-login.png|center|600px|thumb|Select your home organization]]&lt;br /&gt;
&lt;br /&gt;
3. Authenticate yourself via the user id / username and password provided by your home institution.&lt;br /&gt;
&lt;br /&gt;
4. You will be redirected back to the registration website.&lt;br /&gt;
&lt;br /&gt;
5. Find the cluster entry and select &#039;&#039;&#039;Registry info&#039;&#039;&#039;&lt;br /&gt;
[[File:Reg_info.png|center|frame|Registry info.]]&lt;br /&gt;
&lt;br /&gt;
6. Verify that your account is &#039;&#039;&#039;ACTIVE&#039;&#039;&#039;.&lt;br /&gt;
[[File:Reg_check_account.png|center|800px|thumb|Registry info.]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Deregistration&amp;diff=15557</id>
		<title>Registration/Deregistration</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Deregistration&amp;diff=15557"/>
		<updated>2025-12-02T08:44:41Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: BinAC -&amp;gt; BinAC 2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Note that de-registering automatically unsubscribes you from the cluster mailing list.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
If you plan to permanently leave a bwUniCluster/bwForCluster, follow the de-register checklist:&lt;br /&gt;
&lt;br /&gt;
1. Transfer all your data in $HOME and your workspaces to your local computer/storage, and then delete all your data from the cluster.&lt;br /&gt;
&lt;br /&gt;
2. Select the cluster you want to de-register: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;bwUniCluster 3.0&#039;&#039;&#039;] &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;!--&amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;bwUniCluster 3.0&#039;&#039;&#039;] &amp;lt;br&amp;gt;--&amp;gt;&lt;br /&gt;
&amp;amp;rarr; [https://bwservices.uni-tuebingen.de &#039;&#039;&#039;BinAC 2&#039;&#039;&#039;] &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;JUSTUS 2&#039;&#039;&#039;] &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;rarr; [https://bwservices.uni-heidelberg.de &#039;&#039;&#039;Helix&#039;&#039;&#039;] &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;NEMO2&#039;&#039;&#039;]&lt;br /&gt;
 &lt;br /&gt;
3. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
[[File:BwIDM-login.png|center|600px|thumb|Select your home organization]]&lt;br /&gt;
&lt;br /&gt;
4. Authenticate yourself via the user id / username and password provided by your home institution.&lt;br /&gt;
&lt;br /&gt;
5. You will be redirected back to the registration website.&lt;br /&gt;
&lt;br /&gt;
6. Find the cluster entry and select &#039;&#039;&#039;Registry Info&#039;&#039;&#039;.&lt;br /&gt;
[[File:BwIDM-pw.png|center|frame|De-register from a bwForCluster.]]&lt;br /&gt;
&lt;br /&gt;
7. Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
8. You will be redirected to a confirmation page. Click &#039;&#039;&#039;DEREGISTER&#039;&#039;&#039;.&lt;br /&gt;
[[File:BwIDM-dereg.png|center|400px|thumb|Confirm de-registration.]]&lt;br /&gt;
&lt;br /&gt;
9. If the de-registration was successful, you will be redirected back to the main page.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Password&amp;diff=15556</id>
		<title>Registration/Password</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Password&amp;diff=15556"/>
		<updated>2025-12-02T08:43:58Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: BinAC -&amp;gt; BinAC 2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Your bwUniCluster/bwForCluster &#039;&#039;&#039;password&#039;&#039;&#039; is the service password you set during the registration for [[Registration/bwUniCluster/Service|bwUniCluster]] or [[Registration/bwForCluster/Service|bwForCluster]].&lt;br /&gt;
At any time, you can set a new bwUniCluster/bwForCluster password via the registration websites by carrying out the following steps:&lt;br /&gt;
&lt;br /&gt;
1. Select the cluster for which you want to change your password and select your home organization: &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;bwUniCluster 3.0&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://bwservices.uni-tuebingen.de &#039;&#039;&#039;BinAC 2&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;JUSTUS 2&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://bwservices.uni-heidelberg.de &#039;&#039;&#039;Helix&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;NEMO2&#039;&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
2. Select your home organization from the list on the main page and click &#039;&#039;&#039;Proceed&#039;&#039;&#039; or &#039;&#039;&#039;Fortfahren&#039;&#039;&#039;.&lt;br /&gt;
[[File:BwIDM-login.png|center|600px|thumb|Select your home organization]]&lt;br /&gt;
&lt;br /&gt;
3. Authenticate yourself via the user id / username and password provided by your home institution.&lt;br /&gt;
&lt;br /&gt;
4. You will be redirected back to the registration website.&lt;br /&gt;
&lt;br /&gt;
5. Find the cluster entry and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
[[File:BwIDM-pw.png|center|frame|Set password for desired cluster.]]&lt;br /&gt;
&lt;br /&gt;
6. Enter the new password, repeat it, and click the &#039;&#039;&#039;SAVE&#039;&#039;&#039; button.&lt;br /&gt;
Be sure to use a secure password that is different from any other passwords you currently use or have used on other systems.&lt;br /&gt;
[[File:BwIDM-passwd.png|center|600px|thumb|Set service password]]&lt;br /&gt;
&lt;br /&gt;
7. If the change was successful, the message &amp;quot;Das Passwort wurde bei dem Dienst geändert/Password has been changed&amp;quot; will be shown.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access&amp;diff=15555</id>
		<title>SDS@hd/Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access&amp;diff=15555"/>
		<updated>2025-12-02T08:41:34Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: BinAC -&amp;gt; BinAC 2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an overview on how to access data served by SDS@hd. To get an introduction to data transfer in general, see [[Data_Transfer|data transfer]].&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* You need to be [[SDS@hd/Registration|registered]].&lt;br /&gt;
* You need to be in the BelWü network. This means you have to use the VPN service of your home organization if you want to access SDS@hd from outside the bwHPC clusters (e.g. via eduroam or from your personal notebook).&lt;br /&gt;
&lt;br /&gt;
== Needed Information, independent of the chosen tool ==&lt;br /&gt;
&lt;br /&gt;
* [[Registration/Login/Username| Username]]: Same as for the bwHPC Clusters&lt;br /&gt;
* Password: The Service Password that you set at bwServices in the [[SDS@hd/Registration|registration step]].&lt;br /&gt;
* SV-Acronym: Use the lower-case version of the acronym for all access options.&lt;br /&gt;
* Hostname: The hostname depends on the chosen network protocol:&lt;br /&gt;
** For [[Data_Transfer/SSHFS|SSHFS]] and [[Data_Transfer/SFTP|SFTP]]: &#039;&#039;lsdf02-sshfs.urz.uni-heidelberg.de&#039;&#039;&lt;br /&gt;
** For [[SDS@hd/Access/SMB|SMB]] and [[SDS@hd/Access/NFS|NFS]]: &#039;&#039;lsdf02.urz.uni-heidelberg.de&#039;&#039;&lt;br /&gt;
** For [[Data_Transfer/WebDAV|WebDAV]] the URL is: &#039;&#039;https://lsdf02-webdav.urz.uni-heidelberg.de&#039;&#039;&lt;br /&gt;
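As an illustration, the pieces above combine into a connection like the following sketch (hd_ab123 and sv_example are placeholder username and SV acronym, not real values):

```shell
# Sketch: open an interactive SFTP session to SDS@hd.
# hd_ab123 is a placeholder username; replace it with your own.
sftp hd_ab123@lsdf02-sshfs.urz.uni-heidelberg.de
# At the sftp prompt, change into your Speichervorhaben directory
# (sv_example is a placeholder SV acronym) and transfer files:
#   cd sv_example
#   put results.tar.gz
```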
&lt;br /&gt;
== Recommended Setup ==&lt;br /&gt;
The following graphic shows the recommended ways to access SDS@hd from Windows/Mac/Linux. The table provides an overview of the most important access options and links to the related pages.&amp;lt;br /&amp;gt;&lt;br /&gt;
If you have various use cases, it is recommended to use [[Data_Transfer/Rclone|Rclone]]: you can copy, sync, and mount with it. Thanks to its multithreading capability, Rclone is a good fit for transferring large amounts of data.&amp;lt;br /&amp;gt;&lt;br /&gt;
For an overview of all connection possibilities, please have a look at [[Data_Transfer/All_Data_Transfer_Routes|all data transfer routes]].&lt;br /&gt;
&lt;br /&gt;
[[File:Data_transfer_diagram_simple.jpg|center|500px]]&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center; font-size: small; margin-top: 10px&amp;quot;&amp;gt;Figure 1: SDS@hd main transfer routes&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|- style=&amp;quot;font-weight:bold; text-align:center; vertical-align:middle;&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Use Case&lt;br /&gt;
! Windows&lt;br /&gt;
! Mac&lt;br /&gt;
! Linux&lt;br /&gt;
! Possible Bandwidth&lt;br /&gt;
! Firewall Ports&lt;br /&gt;
|-&lt;br /&gt;
| [[Data_Transfer/Rclone|Rclone]] + &amp;lt;protocol&amp;gt;&lt;br /&gt;
| copy, sync and mount, multithreading&lt;br /&gt;
| ✓&lt;br /&gt;
| ✓&lt;br /&gt;
| ✓&lt;br /&gt;
| depends on used protocol&lt;br /&gt;
| depends on used protocol&lt;br /&gt;
|-&lt;br /&gt;
| [[SDS@hd/Access/SMB|SMB]]&lt;br /&gt;
| mount as network drive in file explorer or usage via Rclone&lt;br /&gt;
| [[SDS@hd/Access/SMB#Windows|✓]]&lt;br /&gt;
| [[SDS@hd/Access/SMB#Mac|✓]]&lt;br /&gt;
| [[SDS@hd/Access/SMB#Linux|✓]]&lt;br /&gt;
| up to 40 Gbit/sec&lt;br /&gt;
| 139 (netbios), 135 (rpc), 445 (smb)&lt;br /&gt;
|-&lt;br /&gt;
| [[Data_Transfer/WebDAV|WebDAV]]&lt;br /&gt;
| go-to solution for restricted networks&lt;br /&gt;
| ✓&lt;br /&gt;
| ✓&lt;br /&gt;
| ✓&lt;br /&gt;
| up to 100 Gbit/sec&lt;br /&gt;
| 80 (http), 443 (https)&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[Data_Transfer/Graphical_Clients#MobaXterm|MobaXterm]]&lt;br /&gt;
| Graphical User Interface (GUI)&lt;br /&gt;
| [[Data_Transfer/Graphical_Clients#MobaXterm|✓]]&lt;br /&gt;
| ☓&lt;br /&gt;
| ☓&lt;br /&gt;
| see sftp&lt;br /&gt;
| see sftp&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[SDS@hd/Access/NFS|NFS]]&lt;br /&gt;
| mount for multi-user environments&lt;br /&gt;
| ☓&lt;br /&gt;
| ☓&lt;br /&gt;
| [[SDS@hd/Access/NFS|✓]]&lt;br /&gt;
| up to 40 Gbit/sec&lt;br /&gt;
| -&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[Data_Transfer/SSHFS|SSHFS]]&lt;br /&gt;
| mount, needs stable internet connection&lt;br /&gt;
| ☓&lt;br /&gt;
| [[Data_Transfer/SSHFS#MacOS_&amp;amp;_Linux|✓]]&lt;br /&gt;
| [[Data_Transfer/SSHFS#MacOS_&amp;amp;_Linux|✓]]&lt;br /&gt;
| see sftp&lt;br /&gt;
| see sftp&lt;br /&gt;
|- style=&amp;quot;vertical-align:middle;&amp;quot;&lt;br /&gt;
| [[Data_Transfer/SFTP|SFTP]]&lt;br /&gt;
| interactive shell, better usability when used together with Rclone&lt;br /&gt;
| [[Data_Transfer/SFTP#Windows|✓]]&lt;br /&gt;
| [[Data_Transfer/SFTP#MacOS_&amp;amp;_Linux|✓]]&lt;br /&gt;
| [[Data_Transfer/SFTP#MacOS_&amp;amp;_Linux|✓]]&lt;br /&gt;
| up to 40 Gbit/sec&lt;br /&gt;
| 22 (ssh)&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align: center; font-size: small; margin-top: 10px&amp;quot;&amp;gt;Table 1: SDS@hd transfer routes&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Access from a bwHPC Cluster ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
You can&#039;t mount to your $HOME directory but you can create a mount under $TMPDIR by following the instructions for [[Data_Transfer/Rclone#Usage_Rclone_Mount | Rclone mount]]. It is advised to wait a couple of seconds (&amp;lt;code&amp;gt;sleep 5&amp;lt;/code&amp;gt;) before trying to use the mounted directory. &lt;br /&gt;
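As a sketch, the mount described above could look like this inside a batch job (assumptions: an Rclone remote named sds is already configured, and sv_example is a placeholder SV acronym):

```shell
# Create a mount point under the job-local $TMPDIR and mount via Rclone.
mkdir -p "$TMPDIR/sds"
rclone mount sds:sv_example "$TMPDIR/sds" --daemon
sleep 5                       # wait briefly until the mount is usable
ls "$TMPDIR/sds"              # the SV contents should now be visible
# ... run your computation on the mounted data ...
fusermount -u "$TMPDIR/sds"   # unmount before the job ends
```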
&lt;br /&gt;
&#039;&#039;&#039;bwForCluster Helix&#039;&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
You can directly access your storage space under &#039;&#039;/mnt/sds-hd/&#039;&#039; on all login and compute nodes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;bwForCluster BinAC 2&#039;&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
You can directly access your storage space under &#039;&#039;/mnt/sds-hd/&#039;&#039; on all login and compute nodes. The prerequisites are: &lt;br /&gt;
* The person responsible for the SV has enabled it on BinAC 2 once by writing to [mailto:sds-hd-support@urz.uni-heidelberg.de sds-hd-support@urz.uni-heidelberg.de]&lt;br /&gt;
* You have a valid Kerberos ticket, which can be fetched with &amp;lt;code&amp;gt;kinit &amp;lt;userID&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
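The two prerequisites above can be checked in a session like the following sketch (sv_example is a placeholder SV acronym):

```shell
# Fetch a Kerberos ticket; you will be prompted for your service password.
kinit
klist                        # verify that a valid ticket was granted
ls /mnt/sds-hd/sv_example    # your SV directory should now be accessible
```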
&lt;br /&gt;
&#039;&#039;&#039;Other&#039;&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
You can mount your SDS@hd SV on the cluster yourself by using [[Data_Transfer/Rclone#Usage_Rclone_Mount | Rclone mount]]. As transfer protocol you can use WebDAV or sftp. For a full overview please have a look at [[Data_Transfer/All_Data_Transfer_Routes | All Data Transfer Routes]].&lt;br /&gt;
&lt;br /&gt;
=== Access via Webbrowser (read-only) ===&lt;br /&gt;
&lt;br /&gt;
Visit [https://lsdf02-webdav.urz.uni-heidelberg.de/ lsdf02-webdav.urz.uni-heidelberg.de] and login with your SDS@hd username and service password. Here you can get an overview of the data in your &amp;amp;quot;Speichervorhaben&amp;amp;quot; and download single files. To be able to do more, like moving data, uploading new files, or downloading complete folders, a suitable client is needed as described above.&lt;br /&gt;
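Beyond the browser, a command-line client can speak WebDAV directly. A minimal sketch (assuming files are reachable under the SV acronym path; hd_ab123 and sv_example/data.txt are placeholders):

```shell
# Download a single file via WebDAV; curl prompts for the service password.
curl -u hd_ab123 -O https://lsdf02-webdav.urz.uni-heidelberg.de/sv_example/data.txt
```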
&lt;br /&gt;
== Best Practices ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Managing access rights with ACLs&#039;&#039;&#039; &amp;lt;br /&amp;gt; -&amp;gt; Please set ACLs either via the [https://www.urz.uni-heidelberg.de/de/service-katalog/desktop-und-arbeitsplatz/windows-terminalserver Windows terminal server] or via bwForCluster Helix. ACL changes won&#039;t work when used locally on a mounted directory.&lt;br /&gt;
* &#039;&#039;&#039;Multiuser environment&#039;&#039;&#039; &amp;lt;br /&amp;gt; -&amp;gt; Use [[SDS@hd/Access/NFS|NFS]]&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Issue:&#039;&#039;&#039; The credentials aren&#039;t accepted &lt;br /&gt;
*: &amp;amp;rarr; &#039;&#039;&#039;[[Registration/Login/Username| Check your Username]]&#039;&#039;&#039;&lt;br /&gt;
*: &amp;amp;rarr; &#039;&#039;&#039;[[Registration/bwForCluster/Helix#Troubleshooting_with_the_Help_of_bwServices | Troubleshooting with bwServices]]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/bwForCluster/Service&amp;diff=15553</id>
		<title>Registration/bwForCluster/Service</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/bwForCluster/Service&amp;diff=15553"/>
		<updated>2025-12-02T08:38:42Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: Remove BinAC 1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Prerequisites for Step C =&lt;br /&gt;
&lt;br /&gt;
Prerequisites for successful account creation:&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/bwForCluster/Entitlement|Step A: bwForCluster Entitlement]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/bwForCluster/RV|Step B: Apply for a Rechenvorhaben/project]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Once you have registered your own RV (&#039;&#039;Rechenvorhaben&#039;&#039;) or a membership in an RV, you will receive an email with a website to create an account for yourself on that cluster.&lt;br /&gt;
&lt;br /&gt;
= Step C: bwForCluster Registration (Account Creation) =&lt;br /&gt;
&lt;br /&gt;
To finish the registration procedure select the cluster your RV was assigned to:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/bwForCluster/BinAC2|bwForCluster BinAC 2]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/bwForCluster/JUSTUS2|bwForCluster JUSTUS 2]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/bwForCluster/Helix|bwForCluster Helix]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/bwForCluster/NEMO2|bwForCluster NEMO 2]]&#039;&#039;&#039;&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align:right;&amp;quot;&amp;gt;[[Registration/bwForCluster| Go back to bwForCluster Registration Home]]&amp;lt;/p&amp;gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Hostname&amp;diff=15550</id>
		<title>Registration/Login/Hostname</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Hostname&amp;diff=15550"/>
		<updated>2025-12-02T08:36:44Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: Remove BinAC 1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Login Hostnames of the bwHPC Clusters =&lt;br /&gt;
&lt;br /&gt;
From outside the clusters, users can only use the login nodes to submit jobs.&lt;br /&gt;
&lt;br /&gt;
Please go to the section of the cluster you want to login:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#bwUniCluster_3.0_Hostnames|bwUniCluster 3.0 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#BINAC2 Hostnames|BINAC2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#JUSTUS2 Hostnames|JUSTUS2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#Helix Hostnames|Helix Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#NEMO2 Hostnames|NEMO2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== bwUniCluster 3.0 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to bwUniCluster 3.0:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3.scc.kit.edu&#039;&#039;&#039;          || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
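Logging in with one of these hostnames is a plain SSH connection, for example (kit_ab1234 is a placeholder username):

```shell
# Connect to one of the two login nodes via the automatic selection hostname.
ssh kit_ab1234@uc3.scc.kit.edu
```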
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the two login nodes.&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 3.0 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 3.0 second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== BINAC2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 has one login hostname serving as a load balancer. We use DNS round-robin scheduling to distribute incoming connections between the three actual login nodes. If you are logging in multiple times, different sessions might run on different login nodes, and hence programs started in one session might not be visible in another session.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Destination&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login.binac2.uni-tuebingen.de&#039;&#039;&#039; || one of the three login nodes &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== JUSTUS2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has four login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to JUSTUS2:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2.uni-ulm.de&#039;&#039;&#039; || login to one of the four login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the four login nodes.&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login01.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login02.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login03.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login04.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 fourth login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Helix Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to Helix:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;helix.bwservices.uni-heidelberg.de&#039;&#039;&#039; || login to one of two login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the login nodes.&lt;br /&gt;
&lt;br /&gt;
== NEMO2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
You can use the common hostname or select a login node yourself.&lt;br /&gt;
&lt;br /&gt;
Login to NEMO2:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 first or second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login1.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login2.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login&amp;diff=15546</id>
		<title>Registration/Login</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login&amp;diff=15546"/>
		<updated>2025-12-02T08:33:35Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Login to the Clusters =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Access to the clusters (bwUniCluster/bwForCluster) is restricted to IP addresses of universities/colleges from Baden-Württemberg [https://bgpview.io/asn/553#prefixes-v4 (BelWü network)].&lt;br /&gt;
If you are outside the BelWü network (e.g. at home), you must first establish a VPN connection to your home university or a connection to an SSH jump host at your home university.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== SSH Clients ==&lt;br /&gt;
&lt;br /&gt;
After completing the [[Registration|web registration]] and [[Registration/Password|setting a service password]] the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039; based login.&lt;br /&gt;
* [[Registration/Login/Client| What Client to Use]]&lt;br /&gt;
&lt;br /&gt;
== Cluster Specific Information ==&lt;br /&gt;
&lt;br /&gt;
Every cluster has its own documentation for login.&lt;br /&gt;
* If you want to &#039;&#039;&#039;login&#039;&#039;&#039; to the bwUniCluster, please refer to &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[bwUniCluster3.0/Login|bwUniCluster 3.0]]&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
* If you want to &#039;&#039;&#039;login&#039;&#039;&#039; to one of the bwForClusters, please refer to &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[BinAC2/Login|BinAC 2]]&#039;&#039;&#039; &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[JUSTUS2/Login|JUSTUS 2]]&#039;&#039;&#039; &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[Helix/Login|Helix]]&#039;&#039;&#039; &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[NEMO2/Login|NEMO2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Allowed Activities on Login Nodes =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
To guarantee usability for all users of the clusters, you must not run your compute jobs on the login nodes.&lt;br /&gt;
Compute jobs must be submitted to the queuing system.&lt;br /&gt;
Any compute job running on the login nodes will be terminated without any notice.&lt;br /&gt;
Any long-running compilation or any long-running pre- or post-processing of batch jobs must also be submitted to the queuing system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The login nodes of the bwHPC clusters are the access point to the compute system, your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; directory and your workspaces.&lt;br /&gt;
These nodes are shared by all users; therefore, your activities on the login nodes are primarily limited to setting up your batch jobs.&lt;br /&gt;
Your activities may also be:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and post-processing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
= Additional Login Information =&lt;br /&gt;
&lt;br /&gt;
Following sub-pages go into deeper detail about the following topics:&lt;br /&gt;
* [[Registration/Login/Username|How do I find out my cluster username?]]&lt;br /&gt;
* [[Registration/Login/Hostname|What are the hostnames of the login nodes of a cluster?]]&lt;br /&gt;
* [[Registration/Account|Is my account on the cluster still valid?]]&lt;br /&gt;
* Configuring your shell: [[.bashrc Do&#039;s and Don&#039;ts]]&lt;br /&gt;
These pages are also referenced in the cluster-specific login documentations.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=15530</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=15530"/>
		<updated>2025-12-01T11:06:02Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;font-size:140%;&amp;quot;&amp;gt;&#039;&#039;&#039;Welcome to the bwHPC Wiki.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
bwHPC represents services and resources in the State of &#039;&#039;&#039;B&#039;&#039;&#039;aden-&#039;&#039;&#039;W&#039;&#039;&#039;ürttemberg, Germany, for High Performance Computing (&#039;&#039;&#039;HPC&#039;&#039;&#039;), Data Intensive Computing (&#039;&#039;&#039;DIC&#039;&#039;&#039;) and Large Scale Scientific Data Management (&#039;&#039;&#039;LS2DM&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The main bwHPC web page is at &#039;&#039;&#039;[https://www.bwhpc.de/ https://www.bwhpc.de/]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Many topics depend on the cluster system you use. &lt;br /&gt;
First choose the cluster you use, then select the correct topic.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- bwHPC STATUS START --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- OK/green --&amp;gt;&lt;br /&gt;
{| style=&amp;quot;  background:#B8FFB8; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#8AFF8A; font-size:120%; text-align:left&amp;quot; | [[Status|&#039;&#039;&#039;Status:&#039;&#039;&#039; Maintenances [0] &amp;amp; Outages [0]]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- warning/yellow --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FFD28A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFC05C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 1)&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- alert/red --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FF8A8A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FF5C5C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 1)&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- bwHPC STATUS END --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot; background:#eeeefe; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Courses / eLearning&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [https://training.bwhpc.de/ eLearning and Online Courses]&lt;br /&gt;
* [https://hpc-wiki.info/hpc/Introduction_to_Linux_in_HPC Introduction to Linux in HPC (external resource)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Need Access to a Cluster?&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[When to use a HPC Cluster]]&lt;br /&gt;
* [[Running Calculations]]&lt;br /&gt;
* [[Registration]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | HPC System Specific Documentation&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
bwHPC encompasses several HPC compute clusters at different universities in Baden-Württemberg. Each cluster is dedicated to [https://www.bwhpc.de/bwhpc-domains.php specific research domains]. &lt;br /&gt;
 &lt;br /&gt;
Documentation differs between compute clusters, please see cluster specific overview pages:&lt;br /&gt;
{|&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BwUniCluster3.0|bwUniCluster 3.0]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | General Purpose, Teaching&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[:JUSTUS2| bwForCluster JUSTUS 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Theoretical Chemistry, Condensed Matter Physics, and Quantum Sciences&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot; | [[Helix|bwForCluster Helix]]&lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  |   Structural and Systems Biology, Medical Science, Soft Matter, Computational Humanities, and Mathematics and Computer Science&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[NEMO2|bwForCluster NEMO 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Neurosciences, Particle Physics, Materials Science, and Microsystems Engineering&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BinAC2|bwForCluster BinAC 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Bioinformatics, Astrophysics, Geosciences, Pharmacy, and Medical Informatics&lt;br /&gt;
|}&lt;br /&gt;
|-&lt;br /&gt;
|bwHPC Clusters: [https://www.bwhpc.de/cluster.php operational status] &lt;br /&gt;
Further Compute Clusters in Baden-Württemberg (separate access policies):&lt;br /&gt;
* Datenanalyse Cluster der Hochschulen (DACHS): [[DACHS | Datenanalyse Cluster der Hochschulen (DACHS)]]&lt;br /&gt;
* bwHPC tier 1: [https://kb.hlrs.de/platforms/index.php/Hunter_(HPE) Hunter] ([https://www.hlrs.de/apply-for-computing-time getting access])&lt;br /&gt;
* bwHPC tier 2: [https://www.nhr.kit.edu/userdocs/horeka HoreKa] ([https://www.nhr.kit.edu/userdocs/horeka/projects/ getting access])&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Documentation valid for all Clusters&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Environment Modules| Software Modules]] and software documentation explained&lt;br /&gt;
* [https://www.bwhpc.de/software.html List of Software] on all clusters&lt;br /&gt;
* [[Development| Software Development and Parallel Programming]]&lt;br /&gt;
* [[Energy Efficient Cluster Usage]]&lt;br /&gt;
* [[HPC Glossary]]&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;height:100%; background:#ffeaef; width:100%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot;   | Scientific Data Storage&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
For user guides of the scientific data storage services:&lt;br /&gt;
* [[SDS@hd]]: Everyone can join existing storage projects, entitlement needed for creating your own, [https://sds-hd.urz.uni-heidelberg.de/management/shib/sds_costs.php cost sheet]&lt;br /&gt;
* [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS]&lt;br /&gt;
Associated, but local scientific storage services are:&lt;br /&gt;
* [https://wiki.scc.kit.edu/lsdf/index.php/Category:LSDF_Online_Storage LSDF Online Storage] (only for KIT and KIT partners)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;height:100%; background:#ffeaef; width:100%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot;   | Scientific Data Archiving&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
For user guides of the scientific data archiving services:&lt;br /&gt;
* [https://www.rda.kit.edu/english bwDataArchive]: Open to all scientists at KIT and institutions in Baden-Württemberg that have signed a service agreement with KIT&lt;br /&gt;
Associated, but local scientific storage services are:&lt;br /&gt;
* [https://www.urz.uni-heidelberg.de/de/service-katalog/speicher/heiarchive heiArchive] (only for members of the University of Heidelberg)&lt;br /&gt;
Instructions on how to move the data, depending on the offered connections, can be found under [[Data Transfer | Data Transfer]].&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;height:100%; background:#ffeaef; width:100%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot;   | Data Management&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Data Transfer|Data Transfer]]&lt;br /&gt;
* [https://www.forschungsdaten.org/index.php/FDM-Kontakte#Deutschland Research Data Management (RDM)] contact persons&lt;br /&gt;
* [https://www.forschungsdaten.info Portal for Research Data Management] (Forschungsdaten.info)&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#eeeefe; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Support&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [[BwSupportPortal|Submit a Ticket in our Support Portal]]&lt;br /&gt;
Support is provided by the [https://www.bwhpc.de/teams.php bwHPC Competence Centers]:&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#e6e9eb; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#d1dadf; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Acknowledgement&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* Please [[Acknowledgement|acknowledge]] our resources in your publications.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15392</id>
		<title>BinAC2/Software</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15392"/>
		<updated>2025-11-12T13:41:43Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Environment Modules ==&lt;br /&gt;
Most software is provided as environment modules.&lt;br /&gt;
&lt;br /&gt;
Required reading before use: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Available Software ==&lt;br /&gt;
&lt;br /&gt;
* Web: Visit [https://www.bwhpc.de/software.php https://www.bwhpc.de/software.php], select &amp;lt;code&amp;gt;Cluster → bwForCluster BinAC 2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* On the cluster: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Software in Containers ==&lt;br /&gt;
&lt;br /&gt;
Instructions for loading software in containers: [[NEMO/Software/Singularity_Containers|Singularity Containers]]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
Documentation for environment modules available on the cluster:  &lt;br /&gt;
&lt;br /&gt;
* with command &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt;&lt;br /&gt;
* examples in &amp;lt;code&amp;gt;$SOFTNAME_EXA_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
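A typical workflow with these commands might look like the following sketch (bio/samtools is a placeholder module name; check module avail for the actual names on BinAC 2):

```shell
module avail                 # list all modules available on the cluster
module load bio/samtools     # load a module into the current environment
module help bio/samtools     # show the module's built-in documentation
module list                  # show which modules are currently loaded
```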
&lt;br /&gt;
== Documentation in the Wiki ==&lt;br /&gt;
&lt;br /&gt;
For some applications additional documentation is provided here.&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Software/Alphafold | Alphafold 3 ]] &lt;br /&gt;
* [[BinAC2/Software/AMDock | AMDock ]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/BLAST | BLAST]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Bowtie | Bowtie]] --&amp;gt;&lt;br /&gt;
* [[BinAC/Software/Cellranger | Cell Ranger]]&lt;br /&gt;
* [[Development/Conda | Conda]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Gromacs | Gromacs]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/Jupyterlab | JupyterLab]]&lt;br /&gt;
* [[BinAC2/Software/Nextflow | Nextflow and nf-core]]&lt;br /&gt;
* [[BinAC2/Software/TigerVNC | TigerVNC: Remote visualization using VNC]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2&amp;diff=15391</id>
		<title>BinAC2</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2&amp;diff=15391"/>
		<updated>2025-11-12T13:41:00Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |&lt;br /&gt;
[[File:BinAC2_Logo_RGB_subtitel.svg|center|500px||]] &lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;bwForCluster BinAC 2&#039;&#039;&#039; supports researchers from the broader fields of Bioinformatics, Astrophysics, Geosciences, Pharmacy, and Medical Informatics.&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#FEF4AB; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFE856; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | News and Events&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
&amp;lt;!--* [https://uni-tuebingen.de/de/274923 11th bwHPC Symposium - September 23rd, Tübingen]--&amp;gt;&lt;br /&gt;
&amp;lt;!--TODO* [http://vis01.binac.uni-tuebingen.de/ Cluster Status and Usage]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#eeeefe; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Training &amp;amp; Support&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [[BinAC2/Getting_Started|Getting Started]]&lt;br /&gt;
* [https://training.bwhpc.de E-Learning Courses]&lt;br /&gt;
* [[BinAC2/Support|Contact and Support]]&lt;br /&gt;
* Send [[Feedback|Feedback]] about Wiki pages&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | User Documentation&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Registration/bwForCluster|Registration]]&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Login|Login]]&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Hardware_and_Architecture|Hardware and Architecture]]&lt;br /&gt;
** [[BinAC2/Hardware_and_Architecture#Compute_Nodes|Node Specifications]]&lt;br /&gt;
** [[BinAC2/Hardware_and_Architecture#File_Systems|File Systems and Workspaces]]&lt;br /&gt;
* Usage of [[BinAC2/Software|Software]] on BinAC 2&lt;br /&gt;
** For available Software Modules see [https://www.bwhpc.de/software.php bwhpc.de Software Search] (select bwForCluster BinAC 2)&lt;br /&gt;
** Create Software Environments with [[Development/Conda|Conda]]&lt;br /&gt;
** Use  [[BinAC2/Software/Nextflow|nf-core Nextflow pipelines]]&lt;br /&gt;
** See [[Development]] for info about compiler and parallelization&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Slurm|Batch System (SLURM)]]&lt;br /&gt;
** [[BinAC2/SLURM_Partitions|SLURM Partitions]]&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#e6e9eb; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#d1dadf; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Cluster Funding&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* Please [[BinAC2/Acknowledgement|acknowledge]] the cluster in your publications.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15390</id>
		<title>BinAC2/Software</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15390"/>
		<updated>2025-11-12T13:09:27Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Environment Modules ==&lt;br /&gt;
Most software is provided as Modules.&lt;br /&gt;
&lt;br /&gt;
Required reading before using modules: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Available Software ==&lt;br /&gt;
&lt;br /&gt;
* Web: Visit [https://www.bwhpc.de/software.php https://www.bwhpc.de/software.php], select &amp;lt;code&amp;gt;Cluster → bwForCluster BinAC 2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* On the cluster: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Software in Containers ==&lt;br /&gt;
&lt;br /&gt;
Instructions for loading software in containers: [[NEMO/Software/Singularity_Containers|Singularity Containers]]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
Documentation for environment modules available on the cluster:  &lt;br /&gt;
&lt;br /&gt;
* with command &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt;&lt;br /&gt;
* examples in &amp;lt;code&amp;gt;$SOFTNAME_EXA_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation in the Wiki ==&lt;br /&gt;
&lt;br /&gt;
For some applications additional documentation is provided here.&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Software/Alphafold | Alphafold ]] &lt;br /&gt;
* [[BinAC2/Software/AMDock | AMDock ]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/BLAST | BLAST]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Bowtie | Bowtie]] --&amp;gt;&lt;br /&gt;
* [[BinAC/Software/Cellranger | Cell Ranger]]&lt;br /&gt;
* [[Development/Conda | Conda]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Gromacs | Gromacs]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/Jupyterlab | JupyterLab]]&lt;br /&gt;
* [[BinAC2/Software/Nextflow | Nextflow and nf-core]]&lt;br /&gt;
* [[BinAC2/Software/TigerVNC | TigerVNC: Remote visualization using VNC]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Login&amp;diff=15359</id>
		<title>BinAC2/Login</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Login&amp;diff=15359"/>
		<updated>2025-10-22T09:16:01Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Access to bwForCluster BinAC 2 is only possible from IP addresses within the [https://www.belwue.de BelWü] network which connects universities and other scientific institutions in Baden-Württemberg.&lt;br /&gt;
If your computer is in your University network (e.g. at your office), you should be able to connect to bwForCluster BinAC 2 without restrictions.&lt;br /&gt;
If you are outside the BelWü network (e.g. at home), a VPN (virtual private network) connection to your University network must be established first. Please consult the VPN documentation of your University.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Prerequisites for successful login:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You need to have&lt;br /&gt;
* completed the 3-step [[registration/bwForCluster|bwForCluster registration]] procedure.&lt;br /&gt;
* [[Registration/Password|set a service password]] for bwForCluster BinAC 2.&lt;br /&gt;
&amp;lt;!--* Setup the [[BinAC2/Login#TOTP_Second_Factor|two factor authentication (2FA)]].--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login to bwForCluster BinAC 2 =&lt;br /&gt;
&lt;br /&gt;
Login to bwForCluster BinAC 2 is only possible with a Secure Shell (SSH) client for which you must know your [[BinAC2/Login#Username|username]] on the cluster and the [[BinAC2/Login#Hostname|hostname]] of the BinAC 2 login node.&lt;br /&gt;
&lt;br /&gt;
For more general information on SSH clients, visit the [[Registration/Login/Client|SSH clients Guide]].&lt;br /&gt;
&lt;br /&gt;
== TOTP Second Factor ==&lt;br /&gt;
&lt;br /&gt;
At the moment no second factor is needed. We are currently implementing a new TOTP procedure.&lt;br /&gt;
&lt;br /&gt;
== Username ==&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;&amp;lt;username&amp;gt;&amp;lt;/code&amp;gt; on BinAC 2 consists of a prefix and your local username.&lt;br /&gt;
For prefixes please refer to the [[Registration/Login/Username|Username Guide]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Example&amp;lt;/b&amp;gt;: If your local username at your University is &amp;lt;code&amp;gt;ab123&amp;lt;/code&amp;gt; and you are a user from Tübingen University, your username on the cluster is: &amp;lt;code&amp;gt;tu_ab123&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Hostnames ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 has a single login address that acts as a load balancer. We use DNS round-robin scheduling to distribute incoming connections across the three actual login nodes. If you log in multiple times, different sessions may run on different login nodes, so programs started in one session might not be visible in another session. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Destination&lt;br /&gt;
|-&lt;br /&gt;
| login.binac2.uni-tuebingen.de || one of the three login nodes &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You can choose a specific login node by using specific ports on the load balancer. Please only do this if there is a real reason (e.g. reconnecting to a running tmux/screen session).&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Port !! Destination&lt;br /&gt;
|-&lt;br /&gt;
| 2221 || login01&lt;br /&gt;
|-&lt;br /&gt;
| 2222 || login02&lt;br /&gt;
|-&lt;br /&gt;
| 2223 || login03&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Usage: &amp;lt;code&amp;gt;ssh -p &amp;lt;port&amp;gt; [other options] &amp;lt;username&amp;gt;@login.binac2.uni-tuebingen.de&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Login with SSH command (Linux, Mac, Windows) ==&lt;br /&gt;
&lt;br /&gt;
Most Unix and Unix-like operating systems like Linux or MacOS come with a built-in SSH client provided by the OpenSSH project.&lt;br /&gt;
Recent versions of Windows (Windows 10 and later) also come with a built-in OpenSSH client. &lt;br /&gt;
&lt;br /&gt;
For login use one of the following ssh commands:&lt;br /&gt;
&lt;br /&gt;
 ssh &amp;lt;username&amp;gt;@login.binac2.uni-tuebingen.de&lt;br /&gt;
&lt;br /&gt;
To run graphical applications on the cluster, you need to enable X11 forwarding with the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; flag:&lt;br /&gt;
&lt;br /&gt;
 ssh -X &amp;lt;username&amp;gt;@login.binac2.uni-tuebingen.de&lt;br /&gt;
&lt;br /&gt;
For login to a specific login node (here: login03):&lt;br /&gt;
&lt;br /&gt;
 ssh -p 2223 &amp;lt;username&amp;gt;@login.binac2.uni-tuebingen.de&lt;br /&gt;
&lt;br /&gt;
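If you always want to reach the same login node, you can (assuming an OpenSSH client) add a host entry to &amp;lt;code&amp;gt;~/.ssh/config&amp;lt;/code&amp;gt;; the alias name below is only an example:&lt;br /&gt;
&lt;br /&gt;
 # ~/.ssh/config -- pin sessions to login03 via the load balancer port&lt;br /&gt;
 Host binac2-login03&lt;br /&gt;
     HostName login.binac2.uni-tuebingen.de&lt;br /&gt;
     Port 2223&lt;br /&gt;
     User &amp;lt;username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards, &amp;lt;code&amp;gt;ssh binac2-login03&amp;lt;/code&amp;gt; is equivalent to the command above.&lt;br /&gt;
&lt;br /&gt;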
== Login with graphical SSH client (Windows) ==&lt;br /&gt;
&lt;br /&gt;
For Windows we suggest using MobaXterm for login and file transfer.&lt;br /&gt;
 &lt;br /&gt;
Start MobaXterm and fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : login.binac2.uni-tuebingen.de&lt;br /&gt;
Specify user name        : &amp;lt;username&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, click &#039;OK&#039;. A terminal will open where you can enter your credentials.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
== Login Example ==&lt;br /&gt;
&lt;br /&gt;
To login to bwForCluster BinAC, proceed as follows:&lt;br /&gt;
# Login with SSH command or MobaXterm as shown above.&lt;br /&gt;
# The system will ask for a one-time password &amp;lt;code&amp;gt;One-time password (OATH) for &amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;. Please enter your OTP and confirm it with Enter/Return. The OTP is not displayed when typing. If you do not have a second factor yet, please create one (see [[BinAC/Login#TOTP_Second_Factor]]).&lt;br /&gt;
# The system will ask you for your service password &amp;lt;code&amp;gt;Password:&amp;lt;/code&amp;gt;. Please enter it and confirm it with Enter/Return. The password is not displayed when typing. If you do not have a service password yet or have forgotten it, please create one (see [[Registration/Password]]).&lt;br /&gt;
# You will be greeted by the cluster, followed by a shell.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh tu_ab123@login01.binac.uni-tuebingen.de&lt;br /&gt;
One-time password (OATH) for tu_ab123:&lt;br /&gt;
Password: &lt;br /&gt;
&lt;br /&gt;
Last login: ...&lt;br /&gt;
&lt;br /&gt;
          bwFOR Cluster BinAC, Bioinformatics and Astrophysics &lt;br /&gt;
&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
please submit jobs solely with the &#039;qsub&#039; command. Available queues are&lt;br /&gt;
 &lt;br /&gt;
    tiny  - 20min     - fast queue for testing (four GPU cores available) &lt;br /&gt;
    short - 48hrs     - serial/parallel jobs&lt;br /&gt;
    long  - 7days     - serial/parallel jobs&lt;br /&gt;
    gpu   - 30days    - GPU-only jobs  &lt;br /&gt;
    smp   - 7days     - large SMP jobs ( memory &amp;gt; 128GB/node ) &lt;br /&gt;
&lt;br /&gt;
The COMPUTE and GPU nodes provide 28 cores and 128GB of RAM each. In &lt;br /&gt;
addition, every GPU node is equipped with 2 Nvidia K80 accelerator cards, &lt;br /&gt;
totalling in 4 GPUs per node. The SMP machines provide 40 cores per node &lt;br /&gt;
and 1 TB of RAM.&lt;br /&gt;
&lt;br /&gt;
A local SCRATCH directory (/scratch) is available on each node. A fast, &lt;br /&gt;
parallel WORK file system is mounted on /beegfs/work. Please also use the &lt;br /&gt;
workspace tools.&lt;br /&gt;
&lt;br /&gt;
Register to our BinAC mailing list via &lt;br /&gt;
https://listserv.uni-tuebingen.de/mailman/listinfo/binac_announce&lt;br /&gt;
&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
   Please do not keep data on WORK for a prolonged time. If rarely needed and&lt;br /&gt;
   while working on a project, please compress files to an archive. &lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[tu_ab123@login01 ~]$ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Allowed Activities on Login Nodes =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
To guarantee usability for all users you must not run your compute jobs on the login nodes.&lt;br /&gt;
Compute jobs must be submitted as batch jobs.&lt;br /&gt;
Any compute job running on the login nodes will be terminated without notice.&lt;br /&gt;
Long-running compilation or long-running pre- or post-processing tasks must also be submitted as batch jobs.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The login nodes are the access points to the compute system, your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; directory and your workspaces.&lt;br /&gt;
These nodes are shared with all users. Hence, your activities on the login nodes are primarily limited to setting up your batch jobs.&lt;br /&gt;
Your activities may also be:&lt;br /&gt;
* quick compilation of program code or&lt;br /&gt;
* quick pre- and post-processing of results from batch jobs.&lt;br /&gt;
&lt;br /&gt;
We advise to use interactive batch jobs for compute and memory intensive compilation and pre- and post-processing tasks.&lt;br /&gt;
&lt;br /&gt;
= Related Information =&lt;br /&gt;
&lt;br /&gt;
* If you want to reset your service password, consult the [[Registration/Password|Password Guide]].&lt;br /&gt;
* If you want to register a new token for the two factor authentication (2FA), consult [[BinAC/Login#TOTP_Second_Factor|this section]].&lt;br /&gt;
* If you want to de-register, consult the [[Registration/Deregistration|De-registration Guide]].&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC/Acknowledgement&amp;diff=15332</id>
		<title>BinAC/Acknowledgement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC/Acknowledgement&amp;diff=15332"/>
		<updated>2025-10-10T09:23:08Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When preparing a publication describing work that involved the usage of a bwForCluster, e.g. BinAC, please ensure that you reference the bwHPC initiative, the bwHPC-S5 project and – if appropriate – also the bwHPC facility itself. The following sample text is suggested as a starting point.&lt;br /&gt;
 &lt;br /&gt;
 Acknowledgement:&lt;br /&gt;
 The authors acknowledge support by the High Performance and Cloud Computing Group at the Zentrum für Datenverarbeitung of the University of Tübingen, the state of Baden-Württemberg through bwHPC&lt;br /&gt;
 and the German Research Foundation (DFG) through grant no INST 37/935-1 FUGG.&lt;br /&gt;
&lt;br /&gt;
In addition, we kindly ask you to notify us of any reports, conference papers, journal articles, theses, posters, talks which contain results obtained on any bwHPC resource by sending an email to  &lt;br /&gt;
[mailto:publications@bwhpc.de  publications@bwhpc.de] stating:&lt;br /&gt;
* cluster facility (e.g. bwForCluster BinAC)&lt;br /&gt;
* RV acronym (e.g. bw16A000)&lt;br /&gt;
* author(s)&lt;br /&gt;
* title &#039;&#039;or&#039;&#039; booktitle&lt;br /&gt;
* journal, volume, pages &#039;&#039;or&#039;&#039; editors, address, publisher &lt;br /&gt;
* year.&lt;br /&gt;
&lt;br /&gt;
Such recognition is highly important for acquiring funding for the next generation of hardware, support services, data storage and infrastructure.&lt;br /&gt;
&lt;br /&gt;
The publications will be referenced on the bwHPC website:&lt;br /&gt;
 https://www.bwhpc.de/user_publications.html&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Acknowledgement&amp;diff=15331</id>
		<title>BinAC2/Acknowledgement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Acknowledgement&amp;diff=15331"/>
		<updated>2025-10-10T09:22:54Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When preparing a publication describing work that involved the usage of a bwForCluster, e.g. BinAC2, please ensure that you reference the bwHPC initiative, the bwHPC-S5 project and – if appropriate – also the bwHPC facility itself. The following sample text is suggested as a starting point.&lt;br /&gt;
 &lt;br /&gt;
 Acknowledgement:&lt;br /&gt;
 The authors acknowledge support by the High Performance and Cloud Computing Group at the Zentrum für Datenverarbeitung of the University of Tübingen, the state of Baden-Württemberg through bwHPC&lt;br /&gt;
 and the German Research Foundation (DFG) through grant no INST 37/1159-1 FUGG.&lt;br /&gt;
&lt;br /&gt;
In addition, we kindly ask you to notify us of any reports, conference papers, journal articles, theses, posters, talks which contain results obtained on any bwHPC resource by sending an email to  &lt;br /&gt;
[mailto:publications@bwhpc.de  publications@bwhpc.de] stating:&lt;br /&gt;
* cluster facility (e.g. bwForCluster BinAC2)&lt;br /&gt;
* RV acronym (e.g. bw16A000)&lt;br /&gt;
* author(s)&lt;br /&gt;
* title &#039;&#039;or&#039;&#039; booktitle&lt;br /&gt;
* journal, volume, pages &#039;&#039;or&#039;&#039; editors, address, publisher &lt;br /&gt;
* year.&lt;br /&gt;
&lt;br /&gt;
Such recognition is highly important for acquiring funding for the next generation of hardware, support services, data storage and infrastructure.&lt;br /&gt;
&lt;br /&gt;
The publications will be referenced on the bwHPC website:&lt;br /&gt;
 https://www.bwhpc.de/user_publications.html&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Getting_Started&amp;diff=15294</id>
		<title>BinAC2/Getting Started</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Getting_Started&amp;diff=15294"/>
		<updated>2025-09-17T07:19:23Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Purpose and Goals  ==&lt;br /&gt;
&lt;br /&gt;
The Getting Started guide is designed for users who are new to HPC systems in general and to BinAC 2 specifically. After reading this guide, you should have a basic understanding of how to use BinAC 2 for your research.&lt;br /&gt;
&lt;br /&gt;
Please note that this guide does not cover basic Linux command-line skills. If you&#039;re unfamiliar with commands such as listing directory contents or using a text editor, we recommend first exploring the Linux module on the [https://training.bwhpc.de bwHPC training platform].&lt;br /&gt;
&lt;br /&gt;
This guide also doesn&#039;t cover every feature of the system but aims to provide a broad overview. For more detailed information about specific features, please refer to the dedicated Wiki pages on topics like the batch system, storage, and more.&lt;br /&gt;
&lt;br /&gt;
Some terms in this guide may be unfamiliar. You can look them up in the [[HPC_Glossary|HPC Glossary]].&lt;br /&gt;
&lt;br /&gt;
== General Workflow of Running a Calculation ==&lt;br /&gt;
&lt;br /&gt;
On an &#039;&#039;&#039;HPC Cluster&#039;&#039;&#039;, you do not simply log in and run your software. Instead, you write a &#039;&#039;&#039;Batch Script&#039;&#039;&#039; that contains all the commands needed to run and process your job, then submit it to a waiting queue to be executed on one of several hundred &#039;&#039;&#039;Compute Nodes&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
== Get Access to the Cluster ==&lt;br /&gt;
&lt;br /&gt;
Follow the registration process for the bwForCluster. &amp;amp;rarr; [[Registration/bwForCluster|How to Register for a bwForCluster]]&lt;br /&gt;
&lt;br /&gt;
== Login to the Cluster ==&lt;br /&gt;
&lt;br /&gt;
Set up your service password and 2FA token, then log in to BinAC 2. &amp;amp;rarr; [[BinAC2/Login|BinAC 2 Login]]&lt;br /&gt;
&lt;br /&gt;
== Using the Linux command line ==&lt;br /&gt;
&lt;br /&gt;
It is expected that you have at least basic Linux and command-line knowledge before using bwForCluster BinAC 2.&lt;br /&gt;
There are numerous resources available online for learning fundamental concepts and commands.&lt;br /&gt;
Here are two:&lt;br /&gt;
&lt;br /&gt;
* bwHPC Linux Training course &amp;amp;rarr; [https://training.bwhpc.de/ Linux course on training.bwhpc.de]&lt;br /&gt;
* HPC Wiki (external site) &amp;amp;rarr;  [https://hpc-wiki.info/hpc/Introduction_to_Linux_in_HPC/The_Command_Line Introduction to the Linux command line]&lt;br /&gt;
&lt;br /&gt;
Also see: [[.bashrc Do&#039;s and Don&#039;ts]]&lt;br /&gt;
&lt;br /&gt;
= File System Basics =&lt;br /&gt;
&lt;br /&gt;
BinAC 2 offers several file systems for your data, each serving different needs.&lt;br /&gt;
They are explained here in short, simple form. For more detailed documentation, see the [https://wiki.bwhpc.de/e/BwForCluster_BinAC_Hardware_and_Architecture#Storage_Architecture storage architecture page].&lt;br /&gt;
&lt;br /&gt;
== Home File System ==&lt;br /&gt;
&lt;br /&gt;
Home directories are intended for the permanent storage of frequently used files, such as source code, configuration files, executable programs, and conda environments.&lt;br /&gt;
The home file system is backed up daily and has a quota.&lt;br /&gt;
If that quota is reached, you may experience issues when working with BinAC 2.&lt;br /&gt;
&lt;br /&gt;
Here are some useful command line and bash tips for accessing the Home File system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# For changing to your home directory, simply run:&lt;br /&gt;
cd&lt;br /&gt;
&lt;br /&gt;
# To access files in your home directory within your job script, you can use one of these:&lt;br /&gt;
~/myFile   # or&lt;br /&gt;
$HOME/myFile&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Project File System ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 has a &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; file system intended for data that:&lt;br /&gt;
* is shared between members of a compute project&lt;br /&gt;
* is not actively used for computations in the near future&lt;br /&gt;
&lt;br /&gt;
The data is stored on HDDs. The primary focus of &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; is pure capacity, not speed.&lt;br /&gt;
&lt;br /&gt;
Every project gets a dedicated directory located at:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
/pfs/10/project/&amp;lt;project_id&amp;gt;/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can check which project you are a member of:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ id $USER | grep -o &#039;bw[^)]*&#039;&lt;br /&gt;
bw16f003&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, your project directory would be:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
/pfs/10/project/bw16f003/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check our [[BinAC2/Project_Data_Organization | data organization guide ]] for methods to organize data inside the project directory.&lt;br /&gt;
&lt;br /&gt;
== Work File System ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 has a &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; file system on SSDs intended for data that is actively used and produced by compute jobs.&lt;br /&gt;
Each user creates workspaces on their own via the [[BinAC2/Hardware_and_Architecture#Work | workspace tools]].&lt;br /&gt;
&lt;br /&gt;
The work file system is available at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ll /pfs/10/work/&lt;br /&gt;
total 1822&lt;br /&gt;
drwxr-xr-x.  3 root        root          33280 Feb 12 14:56 db&lt;br /&gt;
drwx------.  5 tu_iioba01  tu_tu         25600 Jan  8 14:42 tu_iioba01-alphafold3&lt;br /&gt;
[..]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see from the file permissions, the resulting workspace can only be accessed by you, not by other group members or other users.&lt;br /&gt;
&lt;br /&gt;
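A typical workflow with the workspace tools might look like the following sketch (assuming the common HLRS-style workspace commands; see the linked workspace tools page for the exact options on BinAC 2):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Allocate a workspace named &#039;myproject&#039; for 30 days; prints its path.&lt;br /&gt;
ws_allocate myproject 30&lt;br /&gt;
&lt;br /&gt;
# List your workspaces together with their remaining lifetimes.&lt;br /&gt;
ws_list&lt;br /&gt;
&lt;br /&gt;
# Release a workspace you no longer need.&lt;br /&gt;
ws_release myproject&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;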
== Scratch ==&lt;br /&gt;
&lt;br /&gt;
Each compute node provides local storage, which is much faster than accessing the &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;work&amp;lt;/code&amp;gt; file systems.&amp;lt;br /&amp;gt;&lt;br /&gt;
When you execute a job, a dedicated temporary directory is assigned to it on the compute node. This is often referred to as the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory.&amp;lt;br /&amp;gt;&lt;br /&gt;
Programs frequently generate temporary data that is only needed during execution. If the program you are using offers an option for setting a temporary directory,&lt;br /&gt;
please configure it to use the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory.&amp;lt;br /&amp;gt;&lt;br /&gt;
You can use the environment variable &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt;, which points to your job&#039;s &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; directory.&lt;br /&gt;
&lt;br /&gt;
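A job step that keeps its temporary files on scratch could look like the following sketch (&amp;lt;code&amp;gt;mytool&amp;lt;/code&amp;gt; and its &amp;lt;code&amp;gt;--tmp-dir&amp;lt;/code&amp;gt; option are placeholders for your application):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Run the tool with its temporary files on node-local scratch.&lt;br /&gt;
mytool --input $HOME/data/input.txt --tmp-dir $TMPDIR --output $TMPDIR/result.txt&lt;br /&gt;
&lt;br /&gt;
# Copy results you want to keep before the job ends;&lt;br /&gt;
# the scratch directory is cleaned up when the job finishes.&lt;br /&gt;
cp $TMPDIR/result.txt $HOME/results/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;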
= Batch System Basics =&lt;br /&gt;
&lt;br /&gt;
On HPC clusters like BinAC 2, you don&#039;t run analyses directly on the login node.&lt;br /&gt;
Instead, you write a script and submit it as a job to the batch system.&lt;br /&gt;
BinAC 2 uses SLURM as its batch system.&lt;br /&gt;
The system then schedules the job to run on one of the available compute nodes, where the actual computation takes place.&lt;br /&gt;
&lt;br /&gt;
The cluster consists of compute nodes with different [[BinAC2/Hardware_and_Architecture#Compute_Nodes | hardware features]].&amp;lt;br /&amp;gt;&lt;br /&gt;
These hardware features are only available when submitting jobs to the correct [[BinAC2/SLURM_Partitions | partitions]].&lt;br /&gt;
&lt;br /&gt;
This Getting Started guide only provides very basic SLURM information.&amp;lt;br /&amp;gt;&lt;br /&gt;
Please read the extensive [[BinAC2/Slurm | SLURM documentation]].&lt;br /&gt;
&lt;br /&gt;
== Simple Script Job ==&lt;br /&gt;
&lt;br /&gt;
You will have to write job scripts in order to run your computations on BinAC 2.&lt;br /&gt;
Use your favourite text editor to create a simple job script called &#039;myjob.sh&#039;.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Please note that there are differences between Windows and Linux line endings.&lt;br /&gt;
Make sure that your editor uses Linux line endings when you are using Windows.&lt;br /&gt;
You can check your line endings with &amp;lt;code&amp;gt;vim -b &amp;lt;your script&amp;gt;&amp;lt;/code&amp;gt;. Windows line endings will be displayed as &amp;lt;code&amp;gt;^M&amp;lt;/code&amp;gt;.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=10:00&lt;br /&gt;
#SBATCH --mem=5000m&lt;br /&gt;
#SBATCH --job-name=simple&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Scratch directory: $TMPDIR&amp;quot;&lt;br /&gt;
echo &amp;quot;Date:&amp;quot;&lt;br /&gt;
date&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;My job is running on node:&amp;quot;&lt;br /&gt;
hostname&lt;br /&gt;
uname -a&lt;br /&gt;
&lt;br /&gt;
sleep 240&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Basic SLURM commands == &lt;br /&gt;
&lt;br /&gt;
Submit the job script you wrote with &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sbatch myjob.sh&lt;br /&gt;
Submitted batch job 75441&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take a note of your &amp;lt;code&amp;gt;jobID&amp;lt;/code&amp;gt;. The scheduler will reserve one core and 5000 MB of memory for 10 minutes on a compute node for your job.&amp;lt;br /&amp;gt;&lt;br /&gt;
The job should be scheduled within seconds if BinAC 2 is not fully busy.&lt;br /&gt;
The output will be stored in a file called &amp;lt;code&amp;gt;slurm-&amp;lt;JobID&amp;gt;.out&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ cat slurm-75441.out &lt;br /&gt;
Scratch directory: /scratch/75441&lt;br /&gt;
Date:&lt;br /&gt;
Thu Feb 13 09:56:41 AM CET 2025&lt;br /&gt;
My job is running on node:&lt;br /&gt;
node1-083&lt;br /&gt;
Linux node1-083 5.14.0-503.14.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 15 12:04:32 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
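If you prefer custom output file names, you can set them in the job script with SLURM&#039;s &amp;lt;code&amp;gt;--output&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--error&amp;lt;/code&amp;gt; directives, where &amp;lt;code&amp;gt;%j&amp;lt;/code&amp;gt; is replaced by the job ID:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --output=simple-%j.out   # standard output of the job&lt;br /&gt;
#SBATCH --error=simple-%j.err    # standard error of the job&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;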
There are many options, details, and caveats for SLURM job scripts.&lt;br /&gt;
Most of them are explained in the [[BinAC2/Slurm | SLURM documentation]].&lt;br /&gt;
If you encounter any problems, just send a mail to hpcmaster@uni-tuebingen.de.&lt;br /&gt;
&lt;br /&gt;
You can get an overview of your queued and running jobs with &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[tu_iioba01@login01 ~]$ squeue --user=$USER&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
             75441   compute   simple tu_iioba  R       0:03      1 node1-083&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s assume you pulled a Homer and want to stop/kill/remove a running job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
scancel &amp;lt;JobID&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Best Practices == &lt;br /&gt;
&lt;br /&gt;
The scheduler will reserve computational resources (nodes, cores, GPUs, memory) for a specified period for you. By following some best practices, you can avoid common problems beforehand.&lt;br /&gt;
&lt;br /&gt;
=== Specify memory for your job ===&lt;br /&gt;
&lt;br /&gt;
Often we get tickets with questions like &amp;quot;Why did the system kill my job?&amp;quot;.&lt;br /&gt;
Most often, the user did not specify the required memory resources for the job. Then the following happens:&lt;br /&gt;
&lt;br /&gt;
The job is started on a compute node, where it shares resources with other jobs. Let us assume that the other jobs on this node already occupy 100 gigabytes of memory. Now your job tries to allocate 40 gigabytes of memory. As the compute node has only 128 gigabytes, your job crashes because it cannot allocate that much memory.&lt;br /&gt;
&lt;br /&gt;
You can make your life easier by specifying the required memory in your job script with:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mem=xxG&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Then you have the guarantee that your job can allocate xx gigabytes of memory.&lt;br /&gt;
&lt;br /&gt;
If you do not know how much memory your job will need, look into the documentation of the tools you use or ask us.&lt;br /&gt;
We also started [https://wiki.bwhpc.de/e/Memory_Usage a wiki page] on which we will document some guidelines and pitfalls for specific tools.&lt;br /&gt;
&lt;br /&gt;
=== Use the reserved resources ===&lt;br /&gt;
&lt;br /&gt;
Reserved resources (nodes, cores, gpus, memory) are not available to other users and their jobs.&lt;br /&gt;
You have the responsibility that your programs utilize the reserved resources.&lt;br /&gt;
&lt;br /&gt;
An extreme example: You request a whole node, but your job uses just one core while all remaining cores are idling. This is bad practice, so take care that the programs you use really use the requested resources.&lt;br /&gt;
&lt;br /&gt;
Other examples are tools that do not benefit from an increasing number of cores.&lt;br /&gt;
Please check the documentation of your tools and also check the feedback files that report the CPU efficiency of your job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
CPU efficiency, 0-100%                      | 25.00&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This job for example used only 25% of the available CPU resources.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software =&lt;br /&gt;
&lt;br /&gt;
There are several mechanisms by which software can be installed on BinAC 2.&lt;br /&gt;
If you need software that is not installed on BinAC 2, open a ticket and we can find a way to provide the software on the cluster.&lt;br /&gt;
&lt;br /&gt;
== Environment Modules ==&lt;br /&gt;
&lt;br /&gt;
Environment modules are the &#039;classic&#039; way of providing software on clusters.&lt;br /&gt;
A module provides a specific software version and can be loaded.&lt;br /&gt;
The module system then manipulates the PATH and other environment variables such that the software can be used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Show available modules&lt;br /&gt;
$ module avail&lt;br /&gt;
&lt;br /&gt;
# Load a module&lt;br /&gt;
$ module load bio/samtools/1.21&lt;br /&gt;
&lt;br /&gt;
# Show the module&#039;s help&lt;br /&gt;
$ module help bio/samtools/1.21&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A more detailed description of environment modules can be found [https://wiki.bwhpc.de/e/Environment_Modules on this wiki page].&lt;br /&gt;
&lt;br /&gt;
Sometimes software packages have so many dependencies, or the user needs a specific combination of tools, that environment modules cannot be used in a meaningful way.&lt;br /&gt;
In such cases, other solutions like Conda environments or Apptainer containers (see below) can be used.&lt;br /&gt;
&lt;br /&gt;
== Conda Environments ==&lt;br /&gt;
&lt;br /&gt;
Conda environments are a convenient way to create custom software environments on the cluster, as most scientific software is now available as conda packages.&lt;br /&gt;
BinAC 2 already provides Conda via Miniforge.&lt;br /&gt;
You can find general documentation for using Conda [[Development/Conda | on this wiki page]].&lt;br /&gt;
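&lt;br /&gt;
For example, creating and using a Conda environment could look like this (the environment name and package are only illustrations):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create a new environment containing a specific tool&lt;br /&gt;
$ conda create -n myenv samtools&lt;br /&gt;
&lt;br /&gt;
# Activate the environment before running your analysis&lt;br /&gt;
$ conda activate myenv&lt;br /&gt;
&lt;br /&gt;
# Deactivate the environment when done&lt;br /&gt;
$ conda deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;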
&lt;br /&gt;
== Apptainer (formerly Singularity) ==&lt;br /&gt;
&lt;br /&gt;
Sometimes software is also available in a software container format.&lt;br /&gt;
Apptainer (formerly called Singularity) is installed on all BinAC 2 nodes. You can pull Apptainer containers or Docker images from registries onto BinAC 2 and use them.&lt;br /&gt;
You can also build new Apptainer containers on your own machine and copy them to BinAC 2.&lt;br /&gt;
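&lt;br /&gt;
For example, pulling a Docker image and running a command inside the resulting container could look like this (the image is only an illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Pull a Docker image and convert it to an Apptainer container file (.sif)&lt;br /&gt;
$ apptainer pull docker://ubuntu:22.04&lt;br /&gt;
&lt;br /&gt;
# Run a command inside the container&lt;br /&gt;
$ apptainer exec ubuntu_22.04.sif cat /etc/os-release&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;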
&lt;br /&gt;
Please note that Apptainer containers should be stored in the &amp;lt;code&amp;gt;project&amp;lt;/code&amp;gt; file system.&lt;br /&gt;
We have configured Apptainer such that containers stored in your home directory cannot be executed.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Project_Data_Organization&amp;diff=15293</id>
		<title>BinAC2/Project Data Organization</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Project_Data_Organization&amp;diff=15293"/>
		<updated>2025-09-17T07:16:39Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: Created page with &amp;quot;=  Data Organization on BinAC 2 project directories =  This guide explains how to organize and manage your data in the project directory at &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; BinAC 2. Each project has its own dedicated directory with flexible permission systems to help you control access to your files and collaborate effectively with team members.  &amp;#039;&amp;#039;&amp;#039;Important&amp;#039;&amp;#039;&amp;#039; Use workspaces for actual computations. Data in workspaces are stored on &amp;#039;&amp;#039;&amp;#039;much&amp;#039;&amp;#039;&amp;#039; faster storage!  == Project Di...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Data Organization on BinAC 2 project directories =&lt;br /&gt;
&lt;br /&gt;
This guide explains how to organize and manage your data in the project directory at &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; on BinAC 2.&lt;br /&gt;
Each project has its own dedicated directory with flexible permission systems to help you control access to your files and collaborate effectively with team members.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Use workspaces for actual computations. Data in workspaces is stored on &#039;&#039;&#039;much&#039;&#039;&#039; faster storage!&lt;br /&gt;
&lt;br /&gt;
== Project Directory Structure ==&lt;br /&gt;
&lt;br /&gt;
Every project gets a dedicated directory located at:&lt;br /&gt;
&amp;lt;pre&amp;gt;/pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if your project ID is &amp;lt;code&amp;gt;bw16f003&amp;lt;/code&amp;gt;, your project directory would be:&lt;br /&gt;
&amp;lt;pre&amp;gt;/pfs/10/project/bw16f003/&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Understanding Your Project Directory ===&lt;br /&gt;
&lt;br /&gt;
When you list your project directory, you&#039;ll see something like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ls -ld /pfs/10/project/bw16f003/&lt;br /&gt;
drwxrwx---. 5 root bw16f003 33280 Jul 25 14:13 /pfs/10/project/bw16f003/&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s break down what this means:&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;d&amp;lt;/code&amp;gt;&#039;&#039;&#039;: This is a directory (not a file)&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;rwxrwx---&amp;lt;/code&amp;gt;&#039;&#039;&#039;: The permissions (explained in detail below)&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;root&amp;lt;/code&amp;gt;&#039;&#039;&#039;: The owner of the directory&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;bw16f003&amp;lt;/code&amp;gt;&#039;&#039;&#039;: The group that owns the directory (your project group)&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;33280&amp;lt;/code&amp;gt;&#039;&#039;&#039;: The size in bytes&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;Jul 25 14:13&amp;lt;/code&amp;gt;&#039;&#039;&#039;: Last modification date&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;/pfs/10/project/bw16f003/&amp;lt;/code&amp;gt;&#039;&#039;&#039;: The full path&lt;br /&gt;
&lt;br /&gt;
== Understanding Unix Permissions ==&lt;br /&gt;
&lt;br /&gt;
Unix permissions control who can read, write, or execute files and directories. They are displayed as a 10-character string like &amp;lt;code&amp;gt;drwxrwx---&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Permission Characters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;&#039;&#039;&#039; (read): Can view file contents or list directory contents&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt;&#039;&#039;&#039; (write): Can modify files or create/delete files in directories  &lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt;&#039;&#039;&#039; (execute): Can run files as programs or enter directories&lt;br /&gt;
&lt;br /&gt;
=== Permission Groups ===&lt;br /&gt;
&lt;br /&gt;
Permissions are shown for three groups of users:&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Owner&#039;&#039;&#039; (positions 2-4): The user who owns the file/directory&lt;br /&gt;
# &#039;&#039;&#039;Group&#039;&#039;&#039; (positions 5-7): Users who belong to the file&#039;s group&lt;br /&gt;
# &#039;&#039;&#039;Others&#039;&#039;&#039; (positions 8-10): Everyone else&lt;br /&gt;
&lt;br /&gt;
=== Example Breakdown ===&lt;br /&gt;
&lt;br /&gt;
For &amp;lt;code&amp;gt;drwxrwx---&amp;lt;/code&amp;gt;:&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;d&amp;lt;/code&amp;gt;&#039;&#039;&#039;: Directory&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;rwx&amp;lt;/code&amp;gt;&#039;&#039;&#039; (owner): Owner can read, write, and execute&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;rwx&amp;lt;/code&amp;gt;&#039;&#039;&#039; (group): Group members can read, write, and execute  &lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;code&amp;gt;---&amp;lt;/code&amp;gt;&#039;&#039;&#039; (others): Others have no permissions&lt;br /&gt;
&lt;br /&gt;
=== Numeric Permission Values ===&lt;br /&gt;
&lt;br /&gt;
Permissions on files and directories are changed with the tool &amp;lt;code&amp;gt;chmod&amp;lt;/code&amp;gt;, which uses numerical values for describing permissions.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Number !! Binary !! Permissions !! Description&lt;br /&gt;
|-&lt;br /&gt;
| 0 || 000 || --- || No permissions&lt;br /&gt;
|-&lt;br /&gt;
| 1 || 001 || --x || Execute only&lt;br /&gt;
|-&lt;br /&gt;
| 2 || 010 || -w- || Write only&lt;br /&gt;
|-&lt;br /&gt;
| 3 || 011 || -wx || Write and execute&lt;br /&gt;
|-&lt;br /&gt;
| 4 || 100 || r-- || Read only&lt;br /&gt;
|-&lt;br /&gt;
| 5 || 101 || r-x || Read and execute&lt;br /&gt;
|-&lt;br /&gt;
| 6 || 110 || rw- || Read and write&lt;br /&gt;
|-&lt;br /&gt;
| 7 || 111 || rwx || Read, write, and execute&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Common Permission Patterns ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Pattern !! Use Case !! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;700&amp;lt;/code&amp;gt; || Private files/directories || Only owner can access&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;750&amp;lt;/code&amp;gt; || Shared read-only || Owner full access, group read-only&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;770&amp;lt;/code&amp;gt; || Shared read-write || Owner and group full access&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;644&amp;lt;/code&amp;gt; || Public readable files || Owner can edit, others can read&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;755&amp;lt;/code&amp;gt; || Public executable directories || Owner can edit, others can read/execute&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Data Organization Strategies ==&lt;br /&gt;
&lt;br /&gt;
=== Private Directories ===&lt;br /&gt;
&lt;br /&gt;
Create directories that &#039;&#039;&#039;only you&#039;&#039;&#039; can access:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Create a private directory&lt;br /&gt;
mkdir /pfs/10/project/&amp;lt;project_id&amp;gt;/$USER&lt;br /&gt;
&lt;br /&gt;
# Set permissions so only you can access it&lt;br /&gt;
chmod 700 /pfs/10/project/&amp;lt;project_id&amp;gt;/$USER&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Permission breakdown for &amp;lt;code&amp;gt;700&amp;lt;/code&amp;gt;:&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Owner&#039;&#039;&#039;: &amp;lt;code&amp;gt;rwx&amp;lt;/code&amp;gt; (7 = 4+2+1) - full access&lt;br /&gt;
* &#039;&#039;&#039;Group&#039;&#039;&#039;: &amp;lt;code&amp;gt;---&amp;lt;/code&amp;gt; (0) - no access&lt;br /&gt;
* &#039;&#039;&#039;Others&#039;&#039;&#039;: &amp;lt;code&amp;gt;---&amp;lt;/code&amp;gt; (0) - no access&lt;br /&gt;
&lt;br /&gt;
=== Shared Project Directories ===&lt;br /&gt;
&lt;br /&gt;
Create directories that &#039;&#039;&#039;all&#039;&#039;&#039; project members can access:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Create a shared directory&lt;br /&gt;
mkdir /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Set permissions for group access&lt;br /&gt;
chmod 770 /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Permission breakdown for &amp;lt;code&amp;gt;770&amp;lt;/code&amp;gt;:&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Owner&#039;&#039;&#039;: &amp;lt;code&amp;gt;rwx&amp;lt;/code&amp;gt; (7) - full access&lt;br /&gt;
* &#039;&#039;&#039;Group&#039;&#039;&#039;: &amp;lt;code&amp;gt;rwx&amp;lt;/code&amp;gt; (7) - full access for project members&lt;br /&gt;
* &#039;&#039;&#039;Others&#039;&#039;&#039;: &amp;lt;code&amp;gt;---&amp;lt;/code&amp;gt; (0) - no access&lt;br /&gt;
&lt;br /&gt;
=== Read-Only Shared Directories ===&lt;br /&gt;
&lt;br /&gt;
Create directories where project members can read but not modify:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Create a read-only shared directory&lt;br /&gt;
mkdir /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Set read-only permissions for group&lt;br /&gt;
chmod 750 /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Permission breakdown for &amp;lt;code&amp;gt;750&amp;lt;/code&amp;gt;:&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Owner&#039;&#039;&#039;: &amp;lt;code&amp;gt;rwx&amp;lt;/code&amp;gt; (7) - full access&lt;br /&gt;
* &#039;&#039;&#039;Group&#039;&#039;&#039;: &amp;lt;code&amp;gt;r-x&amp;lt;/code&amp;gt; (5 = 4+1) - can read and enter, but not write&lt;br /&gt;
* &#039;&#039;&#039;Others&#039;&#039;&#039;: &amp;lt;code&amp;gt;---&amp;lt;/code&amp;gt; (0) - no access&lt;br /&gt;
&lt;br /&gt;
== Advanced Access Control with ACLs ==&lt;br /&gt;
&lt;br /&gt;
For more fine-grained control beyond basic Unix permissions, you can use Access Control Lists (ACLs). ACLs allow you to set permissions for specific users.&lt;br /&gt;
&lt;br /&gt;
=== Checking Current ACLs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Check if a directory has ACLs&lt;br /&gt;
getfacl /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Setting User-Specific Permissions ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Give user &#039;alice&#039; read and execute permissions&lt;br /&gt;
setfacl -m u:alice:rx /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Give user &#039;bob&#039; full permissions&lt;br /&gt;
setfacl -m u:bob:rwx /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Removing ACLs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Remove ACL for specific user&lt;br /&gt;
setfacl -x u:alice /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Remove all ACLs and revert to basic permissions&lt;br /&gt;
setfacl -b /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Default ACLs for New Files ===&lt;br /&gt;
&lt;br /&gt;
Set default ACLs that will be applied to new files created in a directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Set default ACL for new files in directory&lt;br /&gt;
setfacl -d -m u:alice:rw /pfs/10/project/&amp;lt;project_id&amp;gt;/&amp;lt;your directory name&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Practical Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Example 1: Personal Workspace with Shared Results ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Create your personal workspace&lt;br /&gt;
mkdir /pfs/10/project/&amp;lt;project_id&amp;gt;/user_alice&lt;br /&gt;
chmod 700 /pfs/10/project/&amp;lt;project_id&amp;gt;/user_alice&lt;br /&gt;
&lt;br /&gt;
# Create a results directory that others can read&lt;br /&gt;
mkdir /pfs/10/project/&amp;lt;project_id&amp;gt;/user_alice/results&lt;br /&gt;
chmod 755 /pfs/10/project/&amp;lt;project_id&amp;gt;/user_alice/results&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example 2: Collaborative Analysis Directory ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Create collaborative space&lt;br /&gt;
mkdir /pfs/10/project/&amp;lt;project_id&amp;gt;/collaborative_analysis&lt;br /&gt;
chmod 770 /pfs/10/project/&amp;lt;project_id&amp;gt;/collaborative_analysis&lt;br /&gt;
&lt;br /&gt;
# Create subdirectories for different types of work&lt;br /&gt;
mkdir /pfs/10/project/&amp;lt;project_id&amp;gt;/collaborative_analysis/{scripts,data,results}&lt;br /&gt;
chmod 770 /pfs/10/project/&amp;lt;project_id&amp;gt;/collaborative_analysis/*&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example 3: Mixed Access with ACLs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Create a directory with complex permissions&lt;br /&gt;
mkdir /pfs/10/project/&amp;lt;project_id&amp;gt;/complex_project&lt;br /&gt;
chmod 750 /pfs/10/project/&amp;lt;project_id&amp;gt;/complex_project&lt;br /&gt;
&lt;br /&gt;
# Give specific users different levels of access&lt;br /&gt;
setfacl -m u:alice:rwx /pfs/10/project/&amp;lt;project_id&amp;gt;/complex_project&lt;br /&gt;
setfacl -m u:bob:rx /pfs/10/project/&amp;lt;project_id&amp;gt;/complex_project&lt;br /&gt;
setfacl -m g:external_collaborators:r /pfs/10/project/&amp;lt;project_id&amp;gt;/complex_project&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Best Practices ==&lt;br /&gt;
&lt;br /&gt;
=== Plan Your Directory Structure ===&lt;br /&gt;
&lt;br /&gt;
Before creating directories, plan your organization:&lt;br /&gt;
* &#039;&#039;&#039;Personal directories&#039;&#039;&#039;: For work-in-progress and private files&lt;br /&gt;
* &#039;&#039;&#039;Shared directories&#039;&#039;&#039;: For collaboration and shared resources&lt;br /&gt;
&lt;br /&gt;
=== Use Descriptive Names ===&lt;br /&gt;
&lt;br /&gt;
Choose clear, descriptive directory names:&lt;br /&gt;
* &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;✓&amp;lt;/span&amp;gt; &amp;lt;code&amp;gt;protein_folding_analysis&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;✓&amp;lt;/span&amp;gt; &amp;lt;code&amp;gt;shared_reference_genomes&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;✓&amp;lt;/span&amp;gt; &amp;lt;code&amp;gt;alice_optimization_scripts&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;✗&amp;lt;/span&amp;gt; &amp;lt;code&amp;gt;stuff&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;✗&amp;lt;/span&amp;gt; &amp;lt;code&amp;gt;temp&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;✗&amp;lt;/span&amp;gt; &amp;lt;code&amp;gt;dir1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Set Permissions Appropriately ===&lt;br /&gt;
&lt;br /&gt;
* Start with restrictive permissions and open up as needed&lt;br /&gt;
* Use &amp;lt;code&amp;gt;750&amp;lt;/code&amp;gt; for directories you want to share read-only&lt;br /&gt;
* Use &amp;lt;code&amp;gt;770&amp;lt;/code&amp;gt; for full collaboration&lt;br /&gt;
* Use ACLs for complex permission requirements&lt;br /&gt;
&lt;br /&gt;
=== Regular Backups ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; The cluster does not provide automatic backups for project directories. You are responsible for backing up your important data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Why Backups Matter:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Hardware failures can result in permanent data loss&lt;br /&gt;
* Human errors (accidental deletion, wrong commands) happen&lt;br /&gt;
* Failed jobs or system issues can corrupt files&lt;br /&gt;
* Once data is lost from the cluster, no recovery is possible&lt;br /&gt;
&lt;br /&gt;
=== Document Your Organization ===&lt;br /&gt;
&lt;br /&gt;
Create a README file in your project directory explaining the structure:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Create documentation&lt;br /&gt;
cat &amp;gt; /pfs/10/project/&amp;lt;project_id&amp;gt;/README.md &amp;lt;&amp;lt; &#039;EOF&#039;&lt;br /&gt;
# Project &amp;lt;project_id&amp;gt; Directory Structure&lt;br /&gt;
&lt;br /&gt;
## Overview&lt;br /&gt;
This project focuses on protein folding simulation and analysis.&lt;br /&gt;
&lt;br /&gt;
## Directory Structure&lt;br /&gt;
- `shared_data/`: Reference datasets (read-only for group)&lt;br /&gt;
- `collaborative_analysis/`: Shared analysis scripts and results&lt;br /&gt;
- `user_alice/`: Alice&#039;s personal workspace&lt;br /&gt;
- `user_bob/`: Bob&#039;s personal workspace&lt;br /&gt;
- `archive/`: Completed analysis and backups&lt;br /&gt;
&lt;br /&gt;
## Permissions&lt;br /&gt;
- Group members have read access to shared_data/&lt;br /&gt;
- All group members can contribute to collaborative_analysis/&lt;br /&gt;
- Personal directories are private to each user&lt;br /&gt;
EOF&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== Permission Denied Errors ===&lt;br /&gt;
&lt;br /&gt;
If you get &amp;quot;Permission denied&amp;quot; errors:&lt;br /&gt;
&lt;br /&gt;
# Check the permissions:&lt;br /&gt;
   &amp;lt;pre&amp;gt;ls -la /pfs/10/project/&amp;lt;project_id&amp;gt;/problematic_directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Check if you&#039;re in the correct group:&lt;br /&gt;
   &amp;lt;pre&amp;gt;groups&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Check ACLs if basic permissions look correct:&lt;br /&gt;
   &amp;lt;pre&amp;gt;getfacl /pfs/10/project/&amp;lt;project_id&amp;gt;/problematic_directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cannot Create Files in Directory ===&lt;br /&gt;
&lt;br /&gt;
This usually means the directory lacks write permissions for your user/group:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Check directory permissions&lt;br /&gt;
ls -ld /pfs/10/project/&amp;lt;project_id&amp;gt;/target_directory&lt;br /&gt;
&lt;br /&gt;
# Fix if you own the directory&lt;br /&gt;
chmod g+w /pfs/10/project/&amp;lt;project_id&amp;gt;/target_directory&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Accidentally Locked Yourself Out ===&lt;br /&gt;
&lt;br /&gt;
If you accidentally removed your own permissions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# As the owner, you can always restore permissions&lt;br /&gt;
chmod 755 /pfs/10/project/&amp;lt;project_id&amp;gt;/locked_directory&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15292</id>
		<title>BinAC2/Hardware and Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15292"/>
		<updated>2025-09-17T07:16:26Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Hardware and Architecture =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 supports researchers from the broader fields of Bioinformatics, Medical Informatics, Astrophysics, Geosciences and Pharmacy.&lt;br /&gt;
&lt;br /&gt;
== Operating System and Software ==&lt;br /&gt;
&lt;br /&gt;
* Operating System: Rocky Linux 9.5&lt;br /&gt;
* Queuing System: [https://slurm.schedmd.com/documentation.html Slurm] (see [[BinAC2/Slurm]] for help)&lt;br /&gt;
* (Scientific) Libraries and Software: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 offers compute nodes, high-mem nodes, and three types of GPU nodes.&lt;br /&gt;
* 180 compute nodes&lt;br /&gt;
* 16 SMP nodes&lt;br /&gt;
* 32 GPU nodes (2xA30)&lt;br /&gt;
* 8 GPU nodes (4xA100)&lt;br /&gt;
* 4 GPU nodes (4xH200)&lt;br /&gt;
* plus several special purpose nodes for login, interactive jobs, etc.&lt;br /&gt;
&lt;br /&gt;
Compute node specification:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Standard&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| High-Mem&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A30)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A100)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (H200)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot;| Quantity&lt;br /&gt;
| 168 / 12 &lt;br /&gt;
| 14 / 2&lt;br /&gt;
| 32&lt;br /&gt;
| 8&lt;br /&gt;
| 4&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processors&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7443.html AMD EPYC Milan 7443] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/9005-series/amd-epyc-9555.html AMD EPYC 9555]&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processor Base Frequency (GHz)&lt;br /&gt;
| 2.80 / 2.95&lt;br /&gt;
| 2.85 / 2.95&lt;br /&gt;
| 2.80&lt;br /&gt;
| 2.80&lt;br /&gt;
| 3.20&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Number of Physical Cores / Hyperthreads&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 48 / 96 // 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 128 / 256&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Working Memory (GB)&lt;br /&gt;
| 512&lt;br /&gt;
| 2048&lt;br /&gt;
| 512&lt;br /&gt;
| 512&lt;br /&gt;
| 1536&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Local Disk (GiB)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 28000 (NVMe-SSD)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Interconnect&lt;br /&gt;
| HDR 100 IB (84 nodes) / 100GbE (96 nodes)&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| HDR 200 IB + 100GbE&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Coprocessors&lt;br /&gt;
| -&lt;br /&gt;
| -&lt;br /&gt;
| 2 x [https://www.nvidia.com/de-de/data-center/products/a30-gpu/ NVIDIA A30 (24 GB ECC HBM2, NVLink)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/a100/ NVIDIA A100 (80 GB ECC HBM2e)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/h200/ NVIDIA H200 NVL  (141 GB ECC HBM3e, NVLink)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Network =&lt;br /&gt;
&lt;br /&gt;
The compute nodes and the parallel file system are connected via 100 Gbit/s Ethernet (100GbE).&amp;lt;br /&amp;gt;&lt;br /&gt;
In contrast to BinAC 1, not all compute nodes are connected via InfiniBand; 84 standard compute nodes are connected via HDR100 InfiniBand (100 Gbit/s). To get your jobs onto the InfiniBand nodes, submit them with &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt;.&lt;br /&gt;
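&lt;br /&gt;
For example, the constraint can be set in the job script or at submission time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# In the job script:&lt;br /&gt;
#SBATCH --constraint=ib&lt;br /&gt;
&lt;br /&gt;
# Or when submitting:&lt;br /&gt;
$ sbatch --constraint=ib myjob.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;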
&lt;br /&gt;
= File Systems =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 consists of two separate storage systems, one for the user&#039;s home directory $HOME and one serving as a project/work space.&lt;br /&gt;
The home directory is limited in space and parallel access but offers snapshots of your files and backup.&lt;br /&gt;
&lt;br /&gt;
The project/work storage is a parallel file system (PFS) which offers fast, parallel file access and a larger capacity than the home directory. It is mounted at &amp;lt;code&amp;gt;/pfs/10&amp;lt;/code&amp;gt; on the login and compute nodes. This storage is based on Lustre and can be accessed in parallel from many nodes. The PFS contains the project and work directories. Each compute project has its own directory at &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; that is accessible to all members of the compute project.&lt;br /&gt;
Each user can create workspaces under &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; using the workspace tools. These directories are only accessible to the user who created the workspace.&lt;br /&gt;
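&lt;br /&gt;
For example, using the workspace tools (the workspace name is only an illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Allocate a workspace named &#039;myrun&#039; for 30 days&lt;br /&gt;
$ ws_allocate myrun 30&lt;br /&gt;
&lt;br /&gt;
# List your workspaces&lt;br /&gt;
$ ws_list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;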
&lt;br /&gt;
Additionally, each compute node provides high-speed temporary storage (SSD) on the node-local solid state disk via the $TMPDIR environment variable. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| project&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| work&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Visibility&lt;br /&gt;
| global &lt;br /&gt;
| global&lt;br /&gt;
| global&lt;br /&gt;
| node local&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Lifetime&lt;br /&gt;
| permanent&lt;br /&gt;
| permanent&lt;br /&gt;
| work space lifetime (max. 30 days, max. 5 extensions)&lt;br /&gt;
| batch job walltime&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Capacity&lt;br /&gt;
| -&lt;br /&gt;
| 8.1 PB&lt;br /&gt;
| 1000 TB&lt;br /&gt;
| 480 GB (compute nodes); 7.7 TB (GPU-A30 nodes); 16 TB (GPU-A100 and SMP nodes); 31 TB (GPU-H200 nodes)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Speed (read)&lt;br /&gt;
| ≈ 1 GB/s, shared by all nodes&lt;br /&gt;
| max. 12 GB/s&lt;br /&gt;
| ≈ 145 GB/s peak, aggregated over 56 nodes, ideal striping&lt;br /&gt;
| ≈ 3 GB/s (compute) / ≈ 5 GB/s (GPU-A30) / ≈ 26 GB/s (GPU-A100 + SMP) / ≈ 42 GB/s (GPU-H200) per node&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | [https://en.wikipedia.org/wiki/Disk_quota#Quotas Quotas]&lt;br /&gt;
| 40 GB per user&lt;br /&gt;
| not yet, maybe in the future&lt;br /&gt;
| none&lt;br /&gt;
| none&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Backup&lt;br /&gt;
| yes (nightly)&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
  global             : all nodes access the same file system&lt;br /&gt;
  local              : each node has its own file system&lt;br /&gt;
  permanent          : files are stored permanently&lt;br /&gt;
  batch job walltime : files are removed at end of the batch job&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;color:red; background-color:#ffffcc;&amp;quot; cellpadding=&amp;quot;10&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
Please note that due to the large capacity of &#039;&#039;&#039;work&#039;&#039;&#039; and &#039;&#039;&#039;project&#039;&#039;&#039; and due to frequent file changes on these file systems, no backup can be provided.&amp;lt;/br&amp;gt;&lt;br /&gt;
Backing up these file systems would require a redundant storage facility with multiple times the capacity of &#039;&#039;&#039;project&#039;&#039;&#039;. Furthermore, regular backups would significantly degrade the performance.&amp;lt;/br&amp;gt;&lt;br /&gt;
Data is stored redundantly, i.e. immune against disk failures but not immune against catastrophic incidents like cyber attacks or a fire in the server room.&amp;lt;/br&amp;gt;&lt;br /&gt;
Please consider using one of the remote storage facilities, such as [https://wiki.bwhpc.de/e/SDS@hd SDS@hd], [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS], [https://www.scc.kit.edu/en/services/lsdf.php LSDF Online Storage] or the [https://www.rda.kit.edu/english/ bwDataArchive], to back up your valuable data.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Home ===&lt;br /&gt;
&lt;br /&gt;
Home directories are meant for permanent storage of files that are in continual use, such as source code, configuration files, and executable programs; the content of home directories is backed up regularly.&lt;br /&gt;
Because backup space is limited, we enforce a quota of 40 GB on home directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;&lt;br /&gt;
Compute jobs must not write temporary data to $HOME.&lt;br /&gt;
Instead, use the node-local $TMPDIR directory for I/O-heavy workloads&lt;br /&gt;
and workspaces for less I/O-intensive multi-node jobs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Current disk usage on home directory and quota status can be checked with the &#039;&#039;&#039;diskusage&#039;&#039;&#039; command: &lt;br /&gt;
 $ diskusage&lt;br /&gt;
 &lt;br /&gt;
 User           	   Used (GB)	  Quota (GB)	Used (%)&lt;br /&gt;
 ------------------------------------------------------------------------&lt;br /&gt;
 &amp;lt;username&amp;gt;                4.38               100.00             4.38&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
=== Project ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; is stored on HDDs. The primary focus is pure capacity, not speed.&lt;br /&gt;
&lt;br /&gt;
Every project gets a dedicated directory located at:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
/pfs/10/project/&amp;lt;project_id&amp;gt;/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can check which compute projects you are a member of:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ id $USER | grep -o &#039;bw[^)]*&#039;&lt;br /&gt;
bw16f003&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, your project directory would be:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
/pfs/10/project/bw16f003/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check our [[BinAC2/Project_Data_Organization | data organization guide ]] for methods to organize data inside the project directory.&lt;br /&gt;
&lt;br /&gt;
=== Workspaces ===&lt;br /&gt;
&lt;br /&gt;
Data on the fast storage pool at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is stored on SSDs.&lt;br /&gt;
The primary focus is speed, not capacity.&lt;br /&gt;
&lt;br /&gt;
In contrast to BinAC 1, workspace lifetimes will be enforced, as the capacity is limited.&lt;br /&gt;
We ask you to only store data you actively use for computations on &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;.&lt;br /&gt;
Please move data to &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; when you don&#039;t need it on the fast storage any more.&lt;br /&gt;
&lt;br /&gt;
Each user should create workspaces at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; through the workspace tools.&lt;br /&gt;
&lt;br /&gt;
You can find more info on workspace tools on our general page:&lt;br /&gt;
&lt;br /&gt;
:: &amp;amp;rarr; &#039;&#039;&#039;[[Workspace]]s&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To create a work space you&#039;ll need to supply a name for your work space area and a lifetime in days.&lt;br /&gt;
For more information read the corresponding help, e.g. &amp;lt;code&amp;gt;ws_allocate -h&amp;lt;/code&amp;gt;.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:30%&amp;quot; | Command&lt;br /&gt;
!style=&amp;quot;width:70%&amp;quot; | Action&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_allocate mywork 30&amp;lt;/code&amp;gt;&lt;br /&gt;
|Allocate a work space named &amp;quot;mywork&amp;quot; for 30 days.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_allocate myotherwork&amp;lt;/code&amp;gt;&lt;br /&gt;
|Allocate a work space named &amp;quot;myotherwork&amp;quot; with maximum lifetime.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_list -a&amp;lt;/code&amp;gt;&lt;br /&gt;
|List all your work spaces.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_find mywork&amp;lt;/code&amp;gt;&lt;br /&gt;
|Get absolute path of work space &amp;quot;mywork&amp;quot;.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_extend mywork 30&amp;lt;/code&amp;gt;&lt;br /&gt;
|Extend the lifetime of work space &amp;quot;mywork&amp;quot; by 30 days from now.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_release mywork&amp;lt;/code&amp;gt;&lt;br /&gt;
|Manually erase your work space &amp;quot;mywork&amp;quot;. Please remove directory content first.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Scratch ===&lt;br /&gt;
&lt;br /&gt;
Please use the fast local scratch space for storing temporary data during your jobs.&lt;br /&gt;
&lt;br /&gt;
For each job a scratch directory will be created on the compute nodes. It is available via the environment variable &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt;, which points to &amp;lt;code&amp;gt;/scratch/&amp;lt;jobID&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The SMP nodes and the GPU nodes in particular are equipped with large, fast local disks that should be used for temporary data, scratch data, or data staging for ML model training.&lt;br /&gt;
The Lustre file system (&amp;lt;code&amp;gt;WORK&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PROJECT&amp;lt;/code&amp;gt;) is unsuited for repetitive random I/O, I/O sizes smaller than the Lustre and ZFS block size (1M) or I/O patterns where files are opened and closed in rapid succession. The XFS file system of the local scratch drives is better suited for typical scratch workloads and access patterns. Moreover, the local scratch drives offer a lower latency and a higher bandwidth than &amp;lt;code&amp;gt;WORK&amp;lt;/code&amp;gt;.&lt;br /&gt;
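The staging pattern described above can be sketched as a minimal job-script fragment. This is a hedged illustration, not BinAC 2 boilerplate: the workspace path, input file, and the &amp;lt;code&amp;gt;tr&amp;lt;/code&amp;gt; "compute" step are stand-ins, and the &amp;lt;code&amp;gt;mktemp&amp;lt;/code&amp;gt; fallbacks only exist so the sketch is self-contained outside the cluster (on compute nodes, Slurm sets &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; for you):

```shell
#!/bin/bash
#SBATCH --ntasks=1   # Slurm directives are illustrative only

# Hypothetical workspace path; on BinAC 2 you would use e.g. WORKSPACE=$(ws_find mywork)
WORKSPACE="${WORKSPACE:-$(mktemp -d)}"
# On compute nodes $TMPDIR points to /scratch/<jobID>; mktemp is a local fallback
TMPDIR="${TMPDIR:-$(mktemp -d)}"

echo "input data" > "$WORKSPACE/input.dat"               # stand-in for real input

cp "$WORKSPACE/input.dat" "$TMPDIR/"                     # stage in to local scratch
( cd "$TMPDIR" && tr 'a-z' 'A-Z' < input.dat > output.dat )  # stand-in compute step
cp "$TMPDIR/output.dat" "$WORKSPACE/"                    # stage results back
```

Staging this way keeps repetitive small I/O on the local XFS scratch disk and touches the Lustre file system only twice, once per direction.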
&lt;br /&gt;
&lt;br /&gt;
=== SDS@hd ===&lt;br /&gt;
&lt;br /&gt;
SDS@hd is mounted via NFS on login and compute nodes at &amp;lt;syntaxhighlight inline&amp;gt;/mnt/sds-hd&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To access your Speichervorhaben (storage project), the export to BinAC 2 must first be enabled by the SDS@hd team. Please contact [mailto:sds-hd-support@urz.uni-heidelberg.de SDS@hd support] and provide the acronym of your Speichervorhaben, along with a request to enable the export to BinAC 2.&lt;br /&gt;
&lt;br /&gt;
Once this has been done, you can access your Speichervorhaben as described in the [https://wiki.bwhpc.de/e/SDS@hd/Access/NFS#Access_your_data SDS documentation].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ kinit $USER&lt;br /&gt;
Password for &amp;lt;user&amp;gt;@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Kerberos ticket store is shared across all nodes. Creating a single ticket is sufficient to access your Speichervorhaben on all nodes.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15291</id>
		<title>BinAC2/Hardware and Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15291"/>
		<updated>2025-09-17T06:27:45Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Hardware and Architecture =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 supports researchers from the broader fields of Bioinformatics, Medical Informatics, Astrophysics, Geosciences and Pharmacy.&lt;br /&gt;
&lt;br /&gt;
== Operating System and Software ==&lt;br /&gt;
&lt;br /&gt;
* Operating System: Rocky Linux 9.5&lt;br /&gt;
* Queuing System: [https://slurm.schedmd.com/documentation.html Slurm] (see [[BinAC2/Slurm]] for help)&lt;br /&gt;
* (Scientific) Libraries and Software: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 offers compute nodes, high-mem nodes, and three types of GPU nodes.&lt;br /&gt;
* 180 compute nodes&lt;br /&gt;
* 16 SMP nodes&lt;br /&gt;
* 32 GPU nodes (2xA30)&lt;br /&gt;
* 8 GPU nodes (4xA100)&lt;br /&gt;
* 4 GPU nodes (4xH200)&lt;br /&gt;
* plus several special purpose nodes for login, interactive jobs, etc.&lt;br /&gt;
&lt;br /&gt;
Compute node specification:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Standard&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| High-Mem&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A30)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A100)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (H200)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot;| Quantity&lt;br /&gt;
| 168 / 12 &lt;br /&gt;
| 14 / 2&lt;br /&gt;
| 32&lt;br /&gt;
| 8&lt;br /&gt;
| 4&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processors&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7443.html AMD EPYC Milan 7443] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/9005-series/amd-epyc-9555.html AMD EPYC Turin 9555]&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processor Base Frequency (GHz)&lt;br /&gt;
| 2.80 / 2.95&lt;br /&gt;
| 2.85 / 2.95&lt;br /&gt;
| 2.80&lt;br /&gt;
| 2.80&lt;br /&gt;
| 3.20&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Number of Physical Cores / Hyperthreads&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 48 / 96 // 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 128 / 256&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Working Memory (GB)&lt;br /&gt;
| 512&lt;br /&gt;
| 2048&lt;br /&gt;
| 512&lt;br /&gt;
| 512&lt;br /&gt;
| 1536&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Local Disk (GiB)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 28000 (NVMe-SSD)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Interconnect&lt;br /&gt;
| HDR 100 IB (84 nodes) / 100GbE (96 nodes)&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| HDR 200 IB + 100GbE&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Coprocessors&lt;br /&gt;
| -&lt;br /&gt;
| -&lt;br /&gt;
| 2 x [https://www.nvidia.com/de-de/data-center/products/a30-gpu/ NVIDIA A30 (24 GB ECC HBM2, NVLink)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/a100/ NVIDIA A100 (80 GB ECC HBM2e)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/h200/ NVIDIA H200 NVL  (141 GB ECC HBM3e, NVLink)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Network =&lt;br /&gt;
&lt;br /&gt;
The compute nodes and the parallel file system are connected via 100 GbE Ethernet.&amp;lt;/br&amp;gt;&lt;br /&gt;
In contrast to BinAC 1, not all compute nodes are connected via InfiniBand; 84 standard compute nodes are connected via HDR100 InfiniBand (100 Gbit/s). To place your jobs on the InfiniBand nodes, submit them with &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= File Systems =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 consists of two separate storage systems, one for the user&#039;s home directory $HOME and one serving as a project/work space.&lt;br /&gt;
The home directory is limited in space and parallel access but offers snapshots of your files and backup.&lt;br /&gt;
&lt;br /&gt;
The project/work storage is a parallel file system (PFS) that offers fast, parallel file access and a larger capacity than the home directory. It is mounted at &amp;lt;code&amp;gt;/pfs/10&amp;lt;/code&amp;gt; on the login and compute nodes. The storage is based on Lustre and can be accessed in parallel from many nodes. The PFS contains the project and the work directory. Each compute project has its own directory at &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; that is accessible to all members of the compute project.&lt;br /&gt;
Each user can create workspaces under &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; using the workspace tools. These directories are only accessible for the user who created the workspace.&lt;br /&gt;
&lt;br /&gt;
Additionally, each compute node provides fast temporary storage on a node-local solid-state disk (SSD), available via the $TMPDIR environment variable.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| project&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| work&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Visibility&lt;br /&gt;
| global &lt;br /&gt;
| global&lt;br /&gt;
| global&lt;br /&gt;
| node local&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Lifetime&lt;br /&gt;
| permanent&lt;br /&gt;
| permanent&lt;br /&gt;
| work space lifetime (max. 30 days, max. 5 extensions)&lt;br /&gt;
| batch job walltime&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Capacity&lt;br /&gt;
| -&lt;br /&gt;
| 8.1 PB&lt;br /&gt;
| 1000 TB&lt;br /&gt;
| 480 GB (compute nodes); 7.7 TB (GPU-A30 nodes); 16 TB (GPU-A100 and SMP nodes); 31 TB (GPU-H200 nodes)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Speed (read)&lt;br /&gt;
| ≈ 1 GB/s, shared by all nodes&lt;br /&gt;
| max. 12 GB/s&lt;br /&gt;
| ≈ 145 GB/s peak, aggregated over 56 nodes, ideal striping&lt;br /&gt;
| ≈ 3 GB/s (compute) / ≈ 5 GB/s (GPU-A30) / ≈ 26 GB/s (GPU-A100 + SMP) / ≈ 42 GB/s (GPU-H200) per node&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | [https://en.wikipedia.org/wiki/Disk_quota#Quotas Quotas]&lt;br /&gt;
| 40 GB per user&lt;br /&gt;
| not yet, maybe in the future&lt;br /&gt;
| none&lt;br /&gt;
| none&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Backup&lt;br /&gt;
| yes (nightly)&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
  global             : all nodes access the same file system&lt;br /&gt;
  local              : each node has its own file system&lt;br /&gt;
  permanent          : files are stored permanently&lt;br /&gt;
  batch job walltime : files are removed at end of the batch job&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;color:red; background-color:#ffffcc;&amp;quot; cellpadding=&amp;quot;10&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
Please note that due to the large capacity of &#039;&#039;&#039;work&#039;&#039;&#039; and &#039;&#039;&#039;project&#039;&#039;&#039; and due to frequent file changes on these file systems, no backup can be provided.&amp;lt;/br&amp;gt;&lt;br /&gt;
Backing up these file systems would require a redundant storage facility with multiple times the capacity of &#039;&#039;&#039;project&#039;&#039;&#039;. Furthermore, regular backups would significantly degrade the performance.&amp;lt;/br&amp;gt;&lt;br /&gt;
Data is stored redundantly, i.e. immune against disk failures but not immune against catastrophic incidents like cyber attacks or a fire in the server room.&amp;lt;/br&amp;gt;&lt;br /&gt;
Please consider using one of the remote storage facilities, such as [https://wiki.bwhpc.de/e/SDS@hd SDS@hd], [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS], [https://www.scc.kit.edu/en/services/lsdf.php LSDF Online Storage] or the [https://www.rda.kit.edu/english/ bwDataArchive], to back up your valuable data.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Home ===&lt;br /&gt;
&lt;br /&gt;
Home directories are meant for permanent storage of files that are in continual use, such as source code, configuration files, and executable programs; the content of home directories is backed up regularly.&lt;br /&gt;
Because backup space is limited, we enforce a quota of 40 GB on home directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;&lt;br /&gt;
Compute jobs must not write temporary data to $HOME.&lt;br /&gt;
Instead, use the node-local $TMPDIR directory for I/O-heavy workloads&lt;br /&gt;
and workspaces for less I/O-intensive multi-node jobs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Current disk usage on home directory and quota status can be checked with the &#039;&#039;&#039;diskusage&#039;&#039;&#039; command: &lt;br /&gt;
 $ diskusage&lt;br /&gt;
 &lt;br /&gt;
 User           	   Used (GB)	  Quota (GB)	Used (%)&lt;br /&gt;
 ------------------------------------------------------------------------&lt;br /&gt;
 &amp;lt;username&amp;gt;                4.38               100.00             4.38&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
=== Project ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Data in &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; is stored on HDDs. The primary focus is pure capacity, not speed.&lt;br /&gt;
&lt;br /&gt;
Every project gets a dedicated directory located at:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
/pfs/10/project/&amp;lt;project_id&amp;gt;/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can check which compute projects you are a member of:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ id $USER | grep -o &#039;bw[^)]*&#039;&lt;br /&gt;
bw16f003&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, your project directory would be:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
/pfs/10/project/bw16f003/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check our [[BinAC2/Data_Organization | data organization guide ]] for methods to organize data inside the project directory.&lt;br /&gt;
&lt;br /&gt;
=== Workspaces ===&lt;br /&gt;
&lt;br /&gt;
Data on the fast storage pool at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is stored on SSDs.&lt;br /&gt;
The primary focus is speed, not capacity.&lt;br /&gt;
&lt;br /&gt;
In contrast to BinAC 1, workspace lifetimes will be enforced, as the capacity is limited.&lt;br /&gt;
We ask you to only store data you actively use for computations on &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;.&lt;br /&gt;
Please move data to &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; when you don&#039;t need it on the fast storage any more.&lt;br /&gt;
&lt;br /&gt;
Each user should create workspaces at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; through the workspace tools.&lt;br /&gt;
&lt;br /&gt;
You can find more info on workspace tools on our general page:&lt;br /&gt;
&lt;br /&gt;
:: &amp;amp;rarr; &#039;&#039;&#039;[[Workspace]]s&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To create a work space you&#039;ll need to supply a name for your work space area and a lifetime in days.&lt;br /&gt;
For more information read the corresponding help, e.g. &amp;lt;code&amp;gt;ws_allocate -h&amp;lt;/code&amp;gt;.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:30%&amp;quot; | Command&lt;br /&gt;
!style=&amp;quot;width:70%&amp;quot; | Action&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_allocate mywork 30&amp;lt;/code&amp;gt;&lt;br /&gt;
|Allocate a work space named &amp;quot;mywork&amp;quot; for 30 days.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_allocate myotherwork&amp;lt;/code&amp;gt;&lt;br /&gt;
|Allocate a work space named &amp;quot;myotherwork&amp;quot; with maximum lifetime.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_list -a&amp;lt;/code&amp;gt;&lt;br /&gt;
|List all your work spaces.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_find mywork&amp;lt;/code&amp;gt;&lt;br /&gt;
|Get absolute path of work space &amp;quot;mywork&amp;quot;.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_extend mywork 30&amp;lt;/code&amp;gt;&lt;br /&gt;
|Extend the lifetime of work space &amp;quot;mywork&amp;quot; by 30 days from now.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_release mywork&amp;lt;/code&amp;gt;&lt;br /&gt;
|Manually erase your work space &amp;quot;mywork&amp;quot;. Please remove directory content first.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Scratch ===&lt;br /&gt;
&lt;br /&gt;
Please use the fast local scratch space for storing temporary data during your jobs.&lt;br /&gt;
&lt;br /&gt;
For each job a scratch directory will be created on the compute nodes. It is available via the environment variable &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt;, which points to &amp;lt;code&amp;gt;/scratch/&amp;lt;jobID&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The SMP nodes and the GPU nodes in particular are equipped with large, fast local disks that should be used for temporary data, scratch data, or data staging for ML model training.&lt;br /&gt;
The Lustre file system (&amp;lt;code&amp;gt;WORK&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PROJECT&amp;lt;/code&amp;gt;) is unsuited for repetitive random I/O, I/O sizes smaller than the Lustre and ZFS block size (1M) or I/O patterns where files are opened and closed in rapid succession. The XFS file system of the local scratch drives is better suited for typical scratch workloads and access patterns. Moreover, the local scratch drives offer a lower latency and a higher bandwidth than &amp;lt;code&amp;gt;WORK&amp;lt;/code&amp;gt;.&lt;br /&gt;
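The staging pattern described above can be sketched as a minimal job-script fragment. This is a hedged illustration, not BinAC 2 boilerplate: the workspace path, input file, and the &amp;lt;code&amp;gt;tr&amp;lt;/code&amp;gt; "compute" step are stand-ins, and the &amp;lt;code&amp;gt;mktemp&amp;lt;/code&amp;gt; fallbacks only exist so the sketch is self-contained outside the cluster (on compute nodes, Slurm sets &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; for you):

```shell
#!/bin/bash
#SBATCH --ntasks=1   # Slurm directives are illustrative only

# Hypothetical workspace path; on BinAC 2 you would use e.g. WORKSPACE=$(ws_find mywork)
WORKSPACE="${WORKSPACE:-$(mktemp -d)}"
# On compute nodes $TMPDIR points to /scratch/<jobID>; mktemp is a local fallback
TMPDIR="${TMPDIR:-$(mktemp -d)}"

echo "input data" > "$WORKSPACE/input.dat"               # stand-in for real input

cp "$WORKSPACE/input.dat" "$TMPDIR/"                     # stage in to local scratch
( cd "$TMPDIR" && tr 'a-z' 'A-Z' < input.dat > output.dat )  # stand-in compute step
cp "$TMPDIR/output.dat" "$WORKSPACE/"                    # stage results back
```

Staging this way keeps repetitive small I/O on the local XFS scratch disk and touches the Lustre file system only twice, once per direction.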
&lt;br /&gt;
&lt;br /&gt;
=== SDS@hd ===&lt;br /&gt;
&lt;br /&gt;
SDS@hd is mounted via NFS on login and compute nodes at &amp;lt;syntaxhighlight inline&amp;gt;/mnt/sds-hd&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To access your Speichervorhaben (storage project), the export to BinAC 2 must first be enabled by the SDS@hd team. Please contact [mailto:sds-hd-support@urz.uni-heidelberg.de SDS@hd support] and provide the acronym of your Speichervorhaben, along with a request to enable the export to BinAC 2.&lt;br /&gt;
&lt;br /&gt;
Once this has been done, you can access your Speichervorhaben as described in the [https://wiki.bwhpc.de/e/SDS@hd/Access/NFS#Access_your_data SDS documentation].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ kinit $USER&lt;br /&gt;
Password for &amp;lt;user&amp;gt;@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Kerberos ticket store is shared across all nodes. Creating a single ticket is sufficient to access your Speichervorhaben on all nodes.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15280</id>
		<title>BinAC2/Hardware and Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Hardware_and_Architecture&amp;diff=15280"/>
		<updated>2025-09-09T09:22:47Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Hardware and Architecture =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 supports researchers from the broader fields of Bioinformatics, Medical Informatics, Astrophysics, Geosciences and Pharmacy.&lt;br /&gt;
&lt;br /&gt;
== Operating System and Software ==&lt;br /&gt;
&lt;br /&gt;
* Operating System: Rocky Linux 9.5&lt;br /&gt;
* Queuing System: [https://slurm.schedmd.com/documentation.html Slurm] (see [[BinAC2/Slurm]] for help)&lt;br /&gt;
* (Scientific) Libraries and Software: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Compute Nodes ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 offers compute nodes, high-mem nodes, and three types of GPU nodes.&lt;br /&gt;
* 180 compute nodes&lt;br /&gt;
* 16 SMP nodes&lt;br /&gt;
* 32 GPU nodes (2xA30)&lt;br /&gt;
* 8 GPU nodes (4xA100)&lt;br /&gt;
* 4 GPU nodes (4xH200)&lt;br /&gt;
* plus several special purpose nodes for login, interactive jobs, etc.&lt;br /&gt;
&lt;br /&gt;
Compute node specification:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Standard&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| High-Mem&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A30)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (A100)&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| GPU (H200)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot;| Quantity&lt;br /&gt;
| 180 &lt;br /&gt;
| 14 / 2&lt;br /&gt;
| 32&lt;br /&gt;
| 8&lt;br /&gt;
| 4&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processors&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7443.html AMD EPYC Milan 7443] / 2 x [https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-75f3.html AMD EPYC Milan 75F3]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/7003-series/amd-epyc-7543.html AMD EPYC Milan 7543]&lt;br /&gt;
| 2 x [https://www.amd.com/de/products/processors/server/epyc/9005-series/amd-epyc-9555.html AMD EPYC Turin 9555]&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Processor Base Frequency (GHz)&lt;br /&gt;
| 2.80&lt;br /&gt;
| 2.85 / 2.95&lt;br /&gt;
| 2.80&lt;br /&gt;
| 2.80&lt;br /&gt;
| 3.20&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Number of Physical Cores / Hyperthreads&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 48 / 96 // 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 64 / 128&lt;br /&gt;
| 128 / 256&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Working Memory (GB)&lt;br /&gt;
| 512&lt;br /&gt;
| 2048&lt;br /&gt;
| 512&lt;br /&gt;
| 512&lt;br /&gt;
| 1536&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Local Disk (GiB)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 450 (NVMe-SSD)&lt;br /&gt;
| 14000 (NVMe-SSD)&lt;br /&gt;
| 28000 (NVMe-SSD)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Interconnect&lt;br /&gt;
| HDR 100 IB (84 nodes) / 100GbE (96 nodes)&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| 100GbE&lt;br /&gt;
| HDR 200 IB + 100GbE&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Coprocessors&lt;br /&gt;
| -&lt;br /&gt;
| -&lt;br /&gt;
| 2 x [https://www.nvidia.com/de-de/data-center/products/a30-gpu/ NVIDIA A30 (24 GB ECC HBM2, NVLink)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/a100/ NVIDIA A100 (80 GB ECC HBM2e)]&lt;br /&gt;
| 4 x [https://www.nvidia.com/de-de/data-center/h200/ NVIDIA H200 NVL  (141 GB ECC HBM3e, NVLink)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Network =&lt;br /&gt;
&lt;br /&gt;
The compute nodes and the parallel file system are connected via 100 GbE Ethernet.&amp;lt;/br&amp;gt;&lt;br /&gt;
In contrast to BinAC 1, not all compute nodes are connected via InfiniBand; 84 standard compute nodes are connected via HDR100 InfiniBand (100 Gbit/s). To place your jobs on the InfiniBand nodes, submit them with &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= File Systems =&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 consists of two separate storage systems, one for the user&#039;s home directory $HOME and one serving as a project/work space.&lt;br /&gt;
The home directory is limited in space and parallel access but offers snapshots of your files and backup.&lt;br /&gt;
&lt;br /&gt;
The project/work storage is a parallel file system (PFS) that offers fast, parallel file access and a larger capacity than the home directory. It is mounted at &amp;lt;code&amp;gt;/pfs/10&amp;lt;/code&amp;gt; on the login and compute nodes. The storage is based on Lustre and can be accessed in parallel from many nodes. The PFS contains the project and the work directory. Each compute project has its own directory at &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; that is accessible to all members of the compute project.&lt;br /&gt;
Each user can create workspaces under &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; using the workspace tools. These directories are only accessible for the user who created the workspace.&lt;br /&gt;
&lt;br /&gt;
Additionally, each compute node provides high-speed temporary storage (SSD) on the node-local solid state disk via the $TMPDIR environment variable. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;|&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| project&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| work&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Visibility&lt;br /&gt;
| global &lt;br /&gt;
| global&lt;br /&gt;
| global&lt;br /&gt;
| node local&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Lifetime&lt;br /&gt;
| permanent&lt;br /&gt;
| permanent&lt;br /&gt;
| work space lifetime (max. 30 days, max. 5 extensions)&lt;br /&gt;
| batch job walltime&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Capacity&lt;br /&gt;
| -&lt;br /&gt;
| 8.1 PB&lt;br /&gt;
| 1000 TB&lt;br /&gt;
| 480 GB (compute nodes); 7.7 TB (GPU-A30 nodes); 16 TB (GPU-A100 and SMP nodes); 31 TB (GPU-H200 nodes)&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Speed (read)&lt;br /&gt;
| ≈ 1 GB/s, shared by all nodes&lt;br /&gt;
| max. 12 GB/s&lt;br /&gt;
| ≈ 145 GB/s peak, aggregated over 56 nodes, ideal striping&lt;br /&gt;
| ≈ 3 GB/s (compute) / ≈ 5 GB/s (GPU-A30) / ≈ 26 GB/s (GPU-A100 + SMP) / ≈ 42 GB/s (GPU-H200) per node&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | [https://en.wikipedia.org/wiki/Disk_quota#Quotas Quotas]&lt;br /&gt;
| 40 GB per user&lt;br /&gt;
| not yet, maybe in the future&lt;br /&gt;
| none&lt;br /&gt;
| none&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Backup&lt;br /&gt;
| yes (nightly)&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
  global             : all nodes access the same file system&lt;br /&gt;
  local              : each node has its own file system&lt;br /&gt;
  permanent          : files are stored permanently&lt;br /&gt;
  batch job walltime : files are removed at end of the batch job&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;color:red; background-color:#ffffcc;&amp;quot; cellpadding=&amp;quot;10&amp;quot;&lt;br /&gt;
|&lt;br /&gt;
Please note that due to the large capacity of &#039;&#039;&#039;work&#039;&#039;&#039; and &#039;&#039;&#039;project&#039;&#039;&#039; and due to frequent file changes on these file systems, no backup can be provided.&amp;lt;/br&amp;gt;&lt;br /&gt;
Backing up these file systems would require a redundant storage facility with multiple times the capacity of &#039;&#039;&#039;project&#039;&#039;&#039;. Furthermore, regular backups would significantly degrade the performance.&amp;lt;/br&amp;gt;&lt;br /&gt;
Data is stored redundantly, i.e. it is protected against disk failures, but not against catastrophic incidents like cyber attacks or a fire in the server room.&amp;lt;/br&amp;gt;&lt;br /&gt;
Please consider using one of the remote storage facilities like [https://wiki.bwhpc.de/e/SDS@hd SDS@hd], [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS], [https://www.scc.kit.edu/en/services/lsdf.php LSDF Online Storage] or the [https://www.rda.kit.edu/english/ bwDataArchive] to back up your valuable data.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Home ===&lt;br /&gt;
&lt;br /&gt;
Home directories are meant for permanent storage of files in ongoing use, such as source code, configuration files, and executable programs; the content of home directories is backed up regularly.&lt;br /&gt;
Because backup space is limited, we enforce a quota of 40 GB on home directories.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039;&lt;br /&gt;
Compute jobs on nodes must not write temporary data to $HOME.&lt;br /&gt;
Instead they should use the local $TMPDIR directory for I/O-heavy use cases&lt;br /&gt;
and work spaces for less I/O intense multinode-jobs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Current disk usage on home directory and quota status can be checked with the &#039;&#039;&#039;diskusage&#039;&#039;&#039; command: &lt;br /&gt;
 $ diskusage&lt;br /&gt;
 &lt;br /&gt;
 User           	   Used (GB)	  Quota (GB)	Used (%)&lt;br /&gt;
 ------------------------------------------------------------------------&lt;br /&gt;
 &amp;lt;username&amp;gt;                4.38               100.00             4.38&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
=== Project ===&lt;br /&gt;
&lt;br /&gt;
Each compute project has its own project directory at &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -lh /pfs/10/project/&lt;br /&gt;
drwxrwx---. 2 root bw16f003 33K Dec 12 16:46 bw16f003&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the directory is owned by a group representing your compute project (here bw16f003) and is accessible to all group members. It is up to your group to decide how to use the space inside this directory: shared data folders, personal directories for each project member, software containers, etc.&lt;br /&gt;
&lt;br /&gt;
The data is stored on HDDs. The primary focus of &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; is pure capacity, not speed.&lt;br /&gt;
&lt;br /&gt;
=== Workspaces ===&lt;br /&gt;
&lt;br /&gt;
Data on the fast storage pool at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; is stored on SSDs.&lt;br /&gt;
The primary focus is speed, not capacity.&lt;br /&gt;
&lt;br /&gt;
In contrast to BinAC 1, workspace lifetimes will be enforced, as the capacity is limited.&lt;br /&gt;
We ask you to store only data you actively use for computations on &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt;.&lt;br /&gt;
Please move data to &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; when you no longer need it on the fast storage.&lt;br /&gt;
&lt;br /&gt;
Each user should create workspaces at &amp;lt;code&amp;gt;/pfs/10/work&amp;lt;/code&amp;gt; through the workspace tools.&lt;br /&gt;
&lt;br /&gt;
You can find more info on workspace tools on our general page:&lt;br /&gt;
&lt;br /&gt;
:: &amp;amp;rarr; &#039;&#039;&#039;[[Workspace]]s&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To create a work space you need to supply a name for your work space area and a lifetime in days.&lt;br /&gt;
For more information read the corresponding help, e.g. &amp;lt;code&amp;gt;ws_allocate -h&amp;lt;/code&amp;gt;.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:30%&amp;quot; | Command&lt;br /&gt;
!style=&amp;quot;width:70%&amp;quot; | Action&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_allocate mywork 30&amp;lt;/code&amp;gt;&lt;br /&gt;
|Allocate a work space named &amp;quot;mywork&amp;quot; for 30 days.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_allocate myotherwork&amp;lt;/code&amp;gt;&lt;br /&gt;
|Allocate a work space named &amp;quot;myotherwork&amp;quot; with maximum lifetime.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_list -a&amp;lt;/code&amp;gt;&lt;br /&gt;
|List all your work spaces.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_find mywork&amp;lt;/code&amp;gt;&lt;br /&gt;
|Get absolute path of work space &amp;quot;mywork&amp;quot;.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_extend mywork 30&amp;lt;/code&amp;gt;&lt;br /&gt;
|Extend lifetime of work space &amp;quot;mywork&amp;quot; by 30 days from now.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;ws_release mywork&amp;lt;/code&amp;gt;&lt;br /&gt;
|Manually erase your work space &amp;quot;mywork&amp;quot;. Please remove directory content first.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
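A typical job script combines these commands. This is a minimal sketch with assumed names: the workspace name &amp;quot;myrun&amp;quot; and the target folder under &amp;lt;code&amp;gt;/pfs/10/project&amp;lt;/code&amp;gt; are placeholders.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Allocate a workspace for 30 days and resolve its absolute path&lt;br /&gt;
ws_allocate myrun 30&lt;br /&gt;
WORKDIR=$(ws_find myrun)&lt;br /&gt;
cd &amp;quot;$WORKDIR&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# ... run your computation here ...&lt;br /&gt;
&lt;br /&gt;
# Move results off the fast storage, then release the workspace&lt;br /&gt;
mv results /pfs/10/project/&amp;lt;your-project&amp;gt;/&lt;br /&gt;
ws_release myrun&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;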
&lt;br /&gt;
=== Scratch ===&lt;br /&gt;
&lt;br /&gt;
Please use the fast local scratch space for storing temporary data during your jobs.&lt;br /&gt;
&lt;br /&gt;
For each job a scratch directory will be created on the compute nodes. It is available via the environment variable &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt;, which points to &amp;lt;code&amp;gt;/scratch/&amp;lt;jobID&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The SMP and GPU nodes in particular are equipped with large and fast local disks that should be used for temporary data, scratch data, or data staging for ML model training.&lt;br /&gt;
The Lustre file system (&amp;lt;code&amp;gt;WORK&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PROJECT&amp;lt;/code&amp;gt;) is unsuited for repetitive random I/O, I/O sizes smaller than the Lustre and ZFS block size (1 MB), or I/O patterns where files are opened and closed in rapid succession. The XFS file system of the local scratch drives is better suited for typical scratch workloads and access patterns. Moreover, the local scratch drives offer lower latency and higher bandwidth than &amp;lt;code&amp;gt;WORK&amp;lt;/code&amp;gt;.&lt;br /&gt;
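This pattern can be sketched in a job script: stage input onto the node-local SSD, compute there, and copy results back before the job ends. This is a minimal sketch; the file names are placeholders.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=2:00:00&lt;br /&gt;
#SBATCH --mem=10g&lt;br /&gt;
&lt;br /&gt;
# Stage input data onto the fast node-local scratch disk&lt;br /&gt;
cp input.dat &amp;quot;$TMPDIR&amp;quot;&lt;br /&gt;
cd &amp;quot;$TMPDIR&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# ... run your I/O-heavy computation here ...&lt;br /&gt;
&lt;br /&gt;
# Copy results back; $TMPDIR is removed when the job ends&lt;br /&gt;
cp results.dat &amp;quot;$SLURM_SUBMIT_DIR&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;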
&lt;br /&gt;
&lt;br /&gt;
=== SDS@hd ===&lt;br /&gt;
&lt;br /&gt;
SDS@hd is mounted via NFS on login and compute nodes at &amp;lt;syntaxhighlight inline&amp;gt;/mnt/sds-hd&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To access your Speichervorhaben, the export to BinAC 2 must first be enabled by the SDS@hd-Team. Please contact [mailto:sds-hd-support@urz.uni-heidelberg.de SDS@hd support] and provide the acronym of your Speichervorhaben, along with a request to enable the export to BinAC 2.&lt;br /&gt;
&lt;br /&gt;
Once this has been done, you can access your Speichervorhaben as described in the [https://wiki.bwhpc.de/e/SDS@hd/Access/NFS#Access_your_data SDS documentation].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ kinit $USER&lt;br /&gt;
Password for &amp;lt;user&amp;gt;@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Kerberos ticket store is shared across all nodes. Creating a single ticket is sufficient to access your Speichervorhaben on all nodes.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2&amp;diff=15279</id>
		<title>BinAC2</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2&amp;diff=15279"/>
		<updated>2025-09-09T09:17:41Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |&lt;br /&gt;
[[File:BinAC2_Logo_RGB_subtitel.svg|center|500px||]] &lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;bwForCluster BinAC 2&#039;&#039;&#039; supports researchers from the broader fields of Bioinformatics, Astrophysics, Geosciences, Pharmacy, and Medical Informatics.&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#ffa833; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f80; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | BinAC 1 -&amp;gt; BinAC 2 Migration&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* Still using BinAC 1: &#039;&#039;&#039;[[BinAC|Go To BinAC 1 Legacy Wiki]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Users of BinAC 1 must&#039;&#039;&#039; [[Registration/bwForCluster/BinAC2|&#039;&#039;&#039;re-register for BinAC 2&#039;&#039;&#039;]] (only step C necessary). You keep your existing projects (Rechenvorhaben).&lt;br /&gt;
* &#039;&#039;&#039;[[BinAC2/Migrate BinAC 1 workspaces to BinAC 2 workspaces|Migrate BinAC 1 workspaces to BinAC 2 workspaces]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[BinAC2/Migrate Moab to Slurm jobs|Migrate Moab/Torque to Slurm jobs]]&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#FEF4AB; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFE856; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | News and Events&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [https://uni-tuebingen.de/de/274923 11th bwHPC Symposium - September 23rd, Tübingen]&lt;br /&gt;
&amp;lt;!--TODO* [http://vis01.binac.uni-tuebingen.de/ Cluster Status and Usage]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#eeeefe; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Training &amp;amp; Support&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [[BinAC2/Getting_Started|Getting Started]]&lt;br /&gt;
* [https://training.bwhpc.de E-Learning Courses]&lt;br /&gt;
* [[BinAC2/Support|Contact and Support]]&lt;br /&gt;
* Send [[Feedback|Feedback]] about Wiki pages&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | User Documentation&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Registration/bwForCluster|Registration]]&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Login|Login]]&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Hardware_and_Architecture|Hardware and Architecture]]&lt;br /&gt;
** [[BinAC2/Hardware_and_Architecture#Compute_Nodes|Node Specifications]]&lt;br /&gt;
** [[BinAC2/Hardware_and_Architecture#File_Systems|File Systems and Workspaces]]&lt;br /&gt;
* Usage of [[BinAC2/Software|Software]] on BinAC 2&lt;br /&gt;
** For available Software Modules see [https://www.bwhpc.de/software.php bwhpc.de Software Search] (select bwForCluster BinAC 2)&lt;br /&gt;
** Create Software Environments with [[Development/Conda|Conda]]&lt;br /&gt;
** Use  [[BinAC2/Software/Nextflow|nf-core Nextflow pipelines]]&lt;br /&gt;
** See [[Development]] for info about compiler and parallelization&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Slurm|Batch System (SLURM)]]&lt;br /&gt;
** [[BinAC2/SLURM_Partitions|SLURM Partitions]]&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#e6e9eb; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#d1dadf; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Cluster Funding&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* Please [[BinAC2/Acknowledgement|acknowledge]] the cluster in your publications.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Slurm&amp;diff=15246</id>
		<title>BinAC2/Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Slurm&amp;diff=15246"/>
		<updated>2025-08-26T12:40:05Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: /* Interactive Jobs : srun / salloc */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= General information about Slurm =&lt;br /&gt;
&lt;br /&gt;
Any calculation on the compute nodes of bwForCluster BinAC 2 requires the user to define the calculation as a sequence of commands or a single command, together with the required run time, number of CPU cores, and main memory, and to submit it all, i.e. the &#039;&#039;&#039;batch job&#039;&#039;&#039;, to a resource and workload managing software. BinAC 2 uses the workload manager Slurm, so any job submission is executed through commands of the Slurm software. Slurm queues and runs user jobs based on fair-share policies.&lt;br /&gt;
&lt;br /&gt;
= External Slurm documentation =&lt;br /&gt;
&lt;br /&gt;
You can find the official Slurm documentation and some other material here:&lt;br /&gt;
&lt;br /&gt;
* Slurm documentation: https://slurm.schedmd.com/documentation.html&lt;br /&gt;
* Slurm cheat sheet: https://slurm.schedmd.com/pdfs/summary.pdf&lt;br /&gt;
* Slurm tutorials: https://slurm.schedmd.com/tutorials.html&lt;br /&gt;
&lt;br /&gt;
= SLURM terminology = &lt;br /&gt;
&lt;br /&gt;
SLURM knows and mirrors the division of the cluster into &#039;&#039;&#039;nodes&#039;&#039;&#039; with several &#039;&#039;&#039;cores&#039;&#039;&#039;. When queuing &#039;&#039;&#039;jobs&#039;&#039;&#039;, there are several ways of requesting resources and it is important to know which term means what in SLURM. Here are some basic SLURM terms:&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Job&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Job&lt;br /&gt;
: A job is a self-contained computation that may encompass multiple tasks and is given specific resources like individual CPUs/GPUs, a specific amount of RAM or entire nodes. These resources are said to have been allocated for the job.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Task&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Task&lt;br /&gt;
: A task is a single run of a single process. By default, one task is run per node and one CPU is assigned per task.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Partition&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Partition    &lt;br /&gt;
: A partition (usually called queue outside SLURM) is a waiting line in which jobs are put by users.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Socket&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Socket    &lt;br /&gt;
: Receptacle on the motherboard for one physically packaged processor (each of which can contain one or more cores).&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Core&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Core    &lt;br /&gt;
: A complete private set of registers, execution units, and retirement queues needed to execute programs.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Thread&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Thread    &lt;br /&gt;
: One or more hardware contexts within a single core. Each thread has the attributes of one core and is managed &amp;amp; scheduled as a single logical processor by the OS.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;CPU&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;CPU&lt;br /&gt;
: A &#039;&#039;&#039;CPU&#039;&#039;&#039; in Slurm means a &#039;&#039;&#039;single core&#039;&#039;&#039;. This is different from the more common terminology, where a CPU (a microprocessor chip) consists of multiple cores. Slurm uses the term &#039;&#039;&#039;sockets&#039;&#039;&#039; when talking about CPU chips. Depending on the system configuration, a CPU can be either a &#039;&#039;&#039;core&#039;&#039;&#039; or a &#039;&#039;&#039;thread&#039;&#039;&#039;. On &#039;&#039;&#039;BinAC 2 Hyperthreading is activated on every machine&#039;&#039;&#039;. This means that the operating system and Slurm see each physical core as two logical cores.&lt;br /&gt;
&lt;br /&gt;
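Because Slurm counts logical CPUs when hyperthreading is active, you can ask the scheduler to place only one hardware thread on each physical core. A minimal sketch using the standard Slurm option &amp;lt;code&amp;gt;--hint=nomultithread&amp;lt;/code&amp;gt; (whether this helps depends on your application):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Request 8 CPUs, but use only one thread per physical core&lt;br /&gt;
sbatch --cpus-per-task=8 --hint=nomultithread job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;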
= Slurm Commands =&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Slurm commands !! Brief explanation&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/sbatch.html sbatch] || Submits a job and queues it in an input queue&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/salloc.html salloc] || Request resources for an interactive job&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/squeue.html squeue] || Displays information about active, eligible, blocked, and/or recently completed jobs &lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/scontrol.html scontrol] || Displays detailed job state information&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/sstat.html sstat] || Displays status information about a running job&lt;br /&gt;
|- &lt;br /&gt;
| [https://slurm.schedmd.com/scancel.html scancel] || Cancels a job&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Interactive Jobs ==&lt;br /&gt;
&lt;br /&gt;
You can run interactive jobs for testing and developing your job scripts.&lt;br /&gt;
Several nodes are reserved for interactive work, so your jobs should start right away.&lt;br /&gt;
You can only submit one job to this partition at a time. A job can run for up to 10 hours (about one workday).&lt;br /&gt;
&lt;br /&gt;
This example command gives you 16 cores and 128 GB of memory for four hours on one of the reserved nodes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salloc --partition=interactive --time=4:00:00 --cpus-per-task=16 --mem=128gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use srun to request the same resources:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --partition=interactive --time=4:00:00 --cpus-per-task=16 --mem=128gb --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Job Submission : sbatch ==&lt;br /&gt;
&lt;br /&gt;
Batch jobs are submitted by using the command &#039;&#039;&#039;sbatch&#039;&#039;&#039;. The main purpose of the &#039;&#039;&#039;sbatch&#039;&#039;&#039; command is to specify the resources that are needed to run the job. &#039;&#039;&#039;sbatch&#039;&#039;&#039; will then queue the batch job. However, when a batch job starts depends on the availability of the requested resources and the job&#039;s fair-share value.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== sbatch Command Parameters ===&lt;br /&gt;
The syntax and use of &#039;&#039;&#039;sbatch&#039;&#039;&#039; can be displayed via:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ man sbatch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;sbatch&#039;&#039;&#039; options can be used from the command line or in your job script. The following table shows the syntax and provides examples for each option.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | sbatch Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Job Script&lt;br /&gt;
! Purpose&lt;br /&gt;
! Example&lt;br /&gt;
! Default value&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;code&amp;gt;-t &#039;&#039;time&#039;&#039;&amp;lt;/code&amp;gt;  or  &amp;lt;code&amp;gt;--time=&#039;&#039;time&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
| #SBATCH --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| Wall clock time limit.&amp;lt;br&amp;gt;&lt;br /&gt;
| &amp;lt;code&amp;gt;-t 2:30:00&amp;lt;/code&amp;gt; Limits run time to 2h 30 min.&amp;lt;/br&amp;gt;&amp;lt;code&amp;gt;-t 2-12&amp;lt;/code&amp;gt; Limits run time to 2 days and 12 hours.&lt;br /&gt;
| Depends on Slurm partition.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -N &#039;&#039;count&#039;&#039;  or  --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of nodes to be used.&lt;br /&gt;
| &amp;lt;code&amp;gt;-N 1&amp;lt;/code&amp;gt; Run job on one node.&amp;lt;/br&amp;gt;&amp;lt;code&amp;gt;-N 2&amp;lt;/code&amp;gt; Run job on two nodes (have to use MPI!)&lt;br /&gt;
| &lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -n &#039;&#039;count&#039;&#039;  or  --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of tasks to be launched.&lt;br /&gt;
| &amp;lt;code&amp;gt;-n 2&amp;lt;/code&amp;gt; launch two tasks in the job.&lt;br /&gt;
| One task per node&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Maximum count of tasks per node.&amp;lt;br&amp;gt;(Replaces the option &amp;lt;code&amp;gt;ppn&amp;lt;/code&amp;gt; of MOAB.)&lt;br /&gt;
| &amp;lt;code&amp;gt;--ntasks-per-node=2&amp;lt;/code&amp;gt; Run 2 tasks per node&lt;br /&gt;
| 1 task per node&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -c &#039;&#039;count&#039;&#039; or --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of CPUs required per (MPI-)task.&lt;br /&gt;
| &amp;lt;code&amp;gt;-c 2&amp;lt;/code&amp;gt; Request two CPUs per (MPI-)task.&lt;br /&gt;
| 1 CPU per (MPI-)task&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;code&amp;gt;--mem=&amp;lt;size&amp;gt;[units]&amp;lt;/code&amp;gt;&lt;br /&gt;
| #SBATCH --mem=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Memory per node.&amp;lt;/br&amp;gt;&amp;lt;code&amp;gt;[units]&amp;lt;/code&amp;gt; can be one of &amp;lt;code&amp;gt;[K&amp;lt;nowiki&amp;gt;|&amp;lt;/nowiki&amp;gt;M&amp;lt;nowiki&amp;gt;|&amp;lt;/nowiki&amp;gt;G&amp;lt;nowiki&amp;gt;|&amp;lt;/nowiki&amp;gt;T]&amp;lt;/code&amp;gt;; default is megabytes.&lt;br /&gt;
| &amp;lt;code&amp;gt;--mem=10g&amp;lt;/code&amp;gt; Request 10GB RAM per node &amp;lt;/br&amp;gt; &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; Request all memory on node&lt;br /&gt;
| Depends on Slurm configuration.&amp;lt;/br&amp;gt;It is better to specify &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; in every case.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Minimum Memory required per allocated CPU.&amp;lt;br&amp;gt;(Replaces the option pmem of MOAB. You should omit &amp;lt;br&amp;gt; the setting of this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| Notify user by email when certain event types occur.&amp;lt;br&amp;gt;Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
|  The specified mail-address receives email notification of state&amp;lt;br&amp;gt;changes as defined by --mail-type.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job output is stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job error messages are stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -J &#039;&#039;name&#039;&#039; or --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| Job name.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| #SBATCH --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| Identifies which environment variables from the submission &amp;lt;br&amp;gt; environment are propagated to the launched application. Default &amp;lt;br&amp;gt; is ALL. If adding an environment variable to the submission&amp;lt;br&amp;gt; environment is intended, the argument ALL must be added.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -A &#039;&#039;group-name&#039;&#039; or --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| #SBATCH --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| Charge resources used by this job to the specified group. You may &amp;lt;br&amp;gt; need this option if your account is assigned to more &amp;lt;br&amp;gt; than one group. With the command &amp;quot;scontrol show job&amp;quot; the project &amp;lt;br&amp;gt; group the job is accounted on can be seen behind &amp;quot;Account=&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -p &#039;&#039;queue-name&#039;&#039; or --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| #SBATCH --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| Request a specific queue for the resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --reservation=&#039;&#039;reservation-name&#039;&#039;&lt;br /&gt;
| #SBATCH --reservation=&#039;&#039;reservation-name&#039;&#039;&lt;br /&gt;
| Use a specific reservation for the resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;LSDF&#039;&#039; or --constraint=&#039;&#039;LSDF&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=LSDF&lt;br /&gt;
| Job constraint LSDF Filesystems.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== sbatch --partition  &#039;&#039;queues&#039;&#039; ====&lt;br /&gt;
Queue classes define the maximum resources per queue of the compute system, such as walltime, nodes, and processes per node. Details can be found here:&lt;br /&gt;
* [[BinAC2/SLURM_Partitions|BinAC 2 partitions]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== sbatch Examples ===&lt;br /&gt;
&lt;br /&gt;
If you are coming from Moab/Torque on BinAC 1 or you are new to HPC/Slurm the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; options may confuse you. The following examples give an orientation how to run typical workloads on BinAC 2.&lt;br /&gt;
&lt;br /&gt;
You can find every file mentioned on this Wiki page on BinAC 2 at: &amp;lt;code&amp;gt;/pfs/10/project/examples&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Serial Programs ====&lt;br /&gt;
When you use serial programs that use only one process, you can omit most of the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; parameters, as the default values are sufficient.&lt;br /&gt;
&lt;br /&gt;
To submit a serial job that runs the script &amp;lt;code&amp;gt;serial_job.sh&amp;lt;/code&amp;gt; and requires 5000 MB of main memory and 10 minutes of wall clock time, you have two options. In both cases Slurm will allocate one &#039;&#039;&#039;physical&#039;&#039;&#039; core to your job.&lt;br /&gt;
&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p compute -t 10:00 --mem=5000m  serial_job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
b) add after the initial line of your script &#039;&#039;&#039;serial_job.sh&#039;&#039;&#039; the lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --time=10:00&lt;br /&gt;
#SBATCH --mem=5000m&lt;br /&gt;
#SBATCH --job-name=simple-serial-job&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and execute the modified script with the command line option &#039;&#039;--partition=compute&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=compute serial_job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options overrule script options.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded Programs ====&lt;br /&gt;
&lt;br /&gt;
Multithreaded programs run as a single process with multiple threads that share resources such as memory.&amp;lt;br&amp;gt;&lt;br /&gt;
You may use a program that includes a built-in option for multithreading (e.g., options like &amp;lt;code&amp;gt;--threads&amp;lt;/code&amp;gt;).&amp;lt;br&amp;gt;&lt;br /&gt;
For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) the number of threads is defined by the environment variable &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt;. By default, this variable is set to 1 (&amp;lt;code&amp;gt;OMP_NUM_THREADS=1&amp;lt;/code&amp;gt;). &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Hyperthreading is activated on bwForCluster BinAC 2. Hyperthreading can be beneficial for some applications and codes, but it can also degrade performance in other cases. We therefore recommend running a small test job with and without hyperthreading to determine the best choice.&lt;br /&gt;
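Because Slurm counts logical CPUs when hyperthreading is active, a common pattern is to derive the OpenMP thread count from the allocated CPUs. A minimal sketch of the arithmetic, with &amp;lt;code&amp;gt;SLURM_CPUS_PER_TASK&amp;lt;/code&amp;gt; set by hand for illustration (inside a real job Slurm exports it automatically):&lt;br /&gt;

```shell
#!/bin/sh
# Stand-in value: inside a real job, Slurm exports SLURM_CPUS_PER_TASK itself.
SLURM_CPUS_PER_TASK=8

# One OpenMP thread per physical core: with hyperthreading each physical
# core shows up as two logical CPUs, so divide by 2.
OMP_NUM_THREADS=$((SLURM_CPUS_PER_TASK / 2))
export OMP_NUM_THREADS

echo "$OMP_NUM_THREADS"   # prints 4
```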
&lt;br /&gt;
&#039;&#039;&#039;a) Program with built-in multithreading option&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This example uses the common bioinformatics tool &amp;lt;code&amp;gt;samtools&amp;lt;/code&amp;gt; to demonstrate built-in multithreading.&lt;br /&gt;
&lt;br /&gt;
The module &amp;lt;code&amp;gt;bio/samtools/1.21&amp;lt;/code&amp;gt; provides an example jobscript that requests 4 CPUs and runs &amp;lt;code&amp;gt;samtools sort&amp;lt;/code&amp;gt; with 4 threads.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --time=19:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=4&lt;br /&gt;
#SBATCH --mem=5000m&lt;br /&gt;
#SBATCH --partition compute&lt;br /&gt;
[...]&lt;br /&gt;
samtools sort -@ 4 sample.bam -o sample.sorted.bam&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can run the example jobscript with this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch /opt/bwhpc/common/bio/samtools/1.21/bwhpc-examples/binac2-samtools-1.21-bwhpc-examples.slurm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;b) OpenMP&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We will run an example OpenMP Hello World program. The jobscript looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=4&lt;br /&gt;
#SBATCH --time=1:00&lt;br /&gt;
#SBATCH --mem=5000m   &lt;br /&gt;
#SBATCH -J OpenMP-Hello-World&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_JOB_CPUS_PER_NODE}/2))&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Executable running on ${SLURM_JOB_CPUS_PER_NODE} cores with ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Run parallel Hello World&lt;br /&gt;
/pfs/10/project/examples/openmp_hello_world&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submit the job to the &amp;lt;code&amp;gt;compute&amp;lt;/code&amp;gt; partition and get the output (in the stdout-file)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch --partition=compute /pfs/10/project/examples/openmp_hello_world.sh&lt;br /&gt;
&lt;br /&gt;
Executable running on 4 cores with 2 threads&lt;br /&gt;
Hello from process: 0&lt;br /&gt;
Hello from process: 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== OpenMPI ====&lt;br /&gt;
&lt;br /&gt;
If you want to run MPI-jobs on batch nodes, generate a wrapper script &amp;lt;code&amp;gt;mpi_hello_world.sh&amp;lt;/code&amp;gt; for &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --partition compute&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=2&lt;br /&gt;
#SBATCH --cpus-per-task=2&lt;br /&gt;
#SBATCH --mem-per-cpu=2000&lt;br /&gt;
#SBATCH --time=05:00&lt;br /&gt;
&lt;br /&gt;
# Load the MPI implementation of your choice&lt;br /&gt;
module load mpi/openmpi/4.1-gnu-14.2&lt;br /&gt;
&lt;br /&gt;
# Run your MPI program&lt;br /&gt;
mpirun --bind-to core --map-by core --report-bindings mpi_hello_world&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add the mpirun option &amp;lt;code&amp;gt;-n &amp;lt;number_of_processes&amp;gt;&amp;lt;/code&amp;gt; or any other option defining processes or nodes, since Slurm instructs mpirun about the number of processes and the node hostnames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ALWAYS&#039;&#039;&#039; use the MPI options &amp;lt;code&amp;gt;--bind-to core&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--map-by core|socket|node&amp;lt;/code&amp;gt;.&lt;br /&gt;
Type &amp;lt;code&amp;gt;man mpirun&amp;lt;/code&amp;gt; for an explanation of the different values of the mpirun option &amp;lt;code&amp;gt;--map-by&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The above jobscript runs four OpenMPI tasks distributed across two nodes. Because of hyperthreading you have to set &amp;lt;code&amp;gt;--cpus-per-task=2&amp;lt;/code&amp;gt;, so that each MPI task gets one physical core. If you omit &amp;lt;code&amp;gt;--cpus-per-task=2&amp;lt;/code&amp;gt;, MPI will fail.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Not all compute nodes are connected via Infiniband. Tell Slurm you want Infiniband via &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt; when submitting or add &amp;lt;code&amp;gt;#SBATCH --constraint=ib&amp;lt;/code&amp;gt; to your jobscript.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --constraint=ib /pfs/10/project/examples/mpi_hello_world.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will run a simple Hello World program:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
Hello world from processor node2-031, rank 3 out of 4 processors&lt;br /&gt;
Hello world from processor node2-031, rank 2 out of 4 processors&lt;br /&gt;
Hello world from processor node2-030, rank 1 out of 4 processors&lt;br /&gt;
Hello world from processor node2-030, rank 0 out of 4 processors&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded + MPI parallel Programs ====&lt;br /&gt;
&lt;br /&gt;
Multithreaded + MPI parallel programs run faster than serial programs on multi-core CPUs. All threads of one process share resources such as memory. MPI tasks, on the contrary, do not share memory but can be spawned across different nodes. &#039;&#039;&#039;Because hyperthreading is switched on on BinAC 2, the option --cpus-per-task (-c) must be set to 2*n if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI with Multithreading =====&lt;br /&gt;
Multiple MPI tasks using &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; must be launched with the MPI launcher &#039;&#039;&#039;mpirun&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;For OpenMPI&#039;&#039;&#039;, a jobscript &#039;&#039;job_ompi_omp.sh&#039;&#039; that runs the MPI program &#039;&#039;ompi_omp_program&#039;&#039; with 4 tasks and 28 threads per task, requiring 3000 MB of physical memory per thread (28 threads per MPI task give 28*3000 MB = 84000 MB per MPI task) and a total wall clock time of 3 hours, looks like:&lt;br /&gt;
&amp;lt;!--b)--&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --cpus-per-task=56&lt;br /&gt;
#SBATCH --time=03:00:00&lt;br /&gt;
#SBATCH --mem=83gb    # 84000 MB / 1024 = 82.1 GB, rounded up to 83 GB&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/openmpi/3.1,EXECUTABLE=./ompi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_hybrid_%j.out&amp;quot;  &lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_CPUS_PER_TASK}/2))&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by socket:PE=${OMP_NUM_THREADS} -report-bindings&amp;quot;&lt;br /&gt;
export NUM_CORES=$((${SLURM_NTASKS}*${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXECUTABLE} running on ${NUM_CORES} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpirun -n ${SLURM_NTASKS} ${MPIRUN_OPTIONS} ${EXECUTABLE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_ompi_omp.sh&#039;&#039;&#039; by command sbatch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p compute ./job_ompi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* With the mpirun option &#039;&#039;--bind-to core&#039;&#039; MPI tasks and OpenMP threads are bound to physical cores.&lt;br /&gt;
* With the option &#039;&#039;--map-by node:PE=&amp;lt;value&amp;gt;&#039;&#039;, neighboring MPI tasks are attached to different nodes and each MPI task is bound to &amp;lt;value&amp;gt; cores of a node. &amp;lt;value&amp;gt; must be set to ${OMP_NUM_THREADS}.&lt;br /&gt;
* The option &#039;&#039;-report-bindings&#039;&#039; shows the bindings between MPI tasks and physical cores.&lt;br /&gt;
* The mpirun-options &#039;&#039;&#039;--bind-to core&#039;&#039;&#039;, &#039;&#039;&#039;--map-by socket|...|node:PE=&amp;lt;value&amp;gt;&#039;&#039;&#039; should always be used when running a multithreaded MPI program.&lt;br /&gt;
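The header values of the hybrid example can be cross-checked with plain shell arithmetic (the numbers below are copied from the jobscript above):&lt;br /&gt;

```shell
#!/bin/sh
# Numbers from the hybrid OpenMPI + OpenMP example above.
CPUS_PER_TASK=56                        # logical CPUs per MPI task (hyperthreading)
THREADS=$((CPUS_PER_TASK / 2))          # one OpenMP thread per physical core
MEM_PER_THREAD_MB=3000
MEM_PER_TASK_MB=$((THREADS * MEM_PER_THREAD_MB))

echo "$THREADS threads per task"        # prints: 28 threads per task
echo "$MEM_PER_TASK_MB MB per task"     # prints: 84000 MB per task
```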
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
&lt;br /&gt;
The nodes in the gpu_4 and gpu_8 queues have 4 or 8 NVIDIA Tesla V100 GPUs. Just submitting a job to these queues is not enough to also allocate one or more GPUs; you have to do so using the &amp;quot;--gres=gpu&amp;quot; parameter. You have to specify how many GPUs your job needs, e.g. &amp;quot;--gres=gpu:2&amp;quot; will request two GPUs.&lt;br /&gt;
&lt;br /&gt;
The GPU nodes are shared between multiple jobs if the jobs don&#039;t request all the GPUs in a node and there are enough resources to run more than one job. The individual GPUs are always bound to a single job and will not be shared between different jobs.&lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh the lines including the&lt;br /&gt;
information about the GPU usage (&amp;lt;code&amp;gt;#SBATCH --gres=gpu:2&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=40&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 40 -t 02:00:00 --mem 4000 --gres=gpu:2 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
If you start an interactive session on one of the GPU nodes, you can use the &amp;quot;nvidia-smi&amp;quot; command to list the GPUs allocated to your job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
Sun Mar 29 15:20:05 2020       &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla V100-SXM2...  Off  | 00000000:3A:00.0 Off |                    0 |&lt;br /&gt;
| N/A   29C    P0    39W / 300W |      9MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
|   1  Tesla V100-SXM2...  Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   30C    P0    41W / 300W |      8MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID   Type   Process name                             Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|    0     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
|    1     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
In case of using Open MPI, the underlying communication infrastructure (UCX and Open MPI&#039;s BTL) is CUDA-aware.&lt;br /&gt;
However, there may be warnings, e.g. when running&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load compiler/gnu/10.3 mpi/openmpi devel/cuda&lt;br /&gt;
$ mpirun -np 2 ./mpi_cuda_app&lt;br /&gt;
--------------------------------------&lt;br /&gt;
WARNING: There are more than one active ports on host &#039;uc2n520&#039;, but the&lt;br /&gt;
default subnet GID prefix was detected on more than one of these&lt;br /&gt;
ports.  If these ports are connected to different physical IB&lt;br /&gt;
networks, this configuration will fail in Open MPI.  This version of&lt;br /&gt;
Open MPI requires that every physically separate IB subnet that is&lt;br /&gt;
used between connected MPI processes must have different subnet ID&lt;br /&gt;
values.&lt;br /&gt;
&lt;br /&gt;
Please see this FAQ entry for more details:&lt;br /&gt;
&lt;br /&gt;
  http://www.open-mpi.org/faq/?category=openfabrics#ofa-default-subnet-gid&lt;br /&gt;
&lt;br /&gt;
NOTE: You can turn off this warning by setting the MCA parameter&lt;br /&gt;
      btl_openib_warn_default_gid_prefix to 0.&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please run Open MPI&#039;s mpirun using the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun --mca pml ucx --mca btl_openib_warn_default_gid_prefix 0 -np 2 ./mpi_cuda_app&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or by disabling the (older) communication layer, the Byte Transfer Layer (BTL for short), altogether:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun --mca pml ucx --mca btl ^openib -np 2 ./mpi_cuda_app&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(Please note that CUDA, as of v11.4, is only available with up to GCC 10.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Start time of job or resources : squeue --start ==&lt;br /&gt;
Any user can use this command to display the estimated start time of a job, based on historical usage, the earliest available reservable resources, and the priority-based backlog. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by &#039;&#039;&#039;any user&#039;&#039;&#039;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List of your submitted jobs : squeue ==&lt;br /&gt;
Displays information about YOUR active, pending, and/or recently completed jobs. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by any user.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Flags ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Flag !! Description&lt;br /&gt;
|-&lt;br /&gt;
| -l, --long&lt;br /&gt;
| Report more of the available information for the selected jobs or job steps, subject to any constraints specified.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&#039;&#039;squeue&#039;&#039; example on BinAC 2 &amp;lt;small&amp;gt;(Only your own jobs are displayed!)&amp;lt;/small&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue &lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18088744    single CPV.sbat   ab1234 PD       0:00      1 (Priority)&lt;br /&gt;
          18098414  multiple CPV.sbat   ab1234 PD       0:00      2 (Priority) &lt;br /&gt;
          18090089  multiple CPV.sbat   ab1234  R       2:27      2 uc2n[127-128]&lt;br /&gt;
$ squeue -l&lt;br /&gt;
            JOBID PARTITION     NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON) &lt;br /&gt;
         18088654    single CPV.sbat   ab1234 COMPLETI       4:29   2:00:00      1 uc2n374&lt;br /&gt;
         18088785    single CPV.sbat   ab1234  PENDING       0:00   2:00:00      1 (Priority)&lt;br /&gt;
         18098414  multiple CPV.sbat   ab1234  PENDING       0:00   2:00:00      2 (Priority)&lt;br /&gt;
         18088683    single CPV.sbat   ab1234  RUNNING       0:14   2:00:00      1 uc2n413  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* The output of &#039;&#039;squeue&#039;&#039; shows how many jobs of yours are running or pending and how many nodes are in use by your jobs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Shows free resources : sinfo_t_idle ==&lt;br /&gt;
The Slurm command sinfo is used to view partition and node information for a system running Slurm. It incorporates down time, reservations, and node state information in determining the available backfill window.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
SCC has prepared a special script (sinfo_t_idle) to find out how many processors are available for immediate use on the system. It is anticipated that users will use this information to submit jobs that meet these criteria and thus obtain quick job turnaround times. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be used by any user or administrator. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Example ===&lt;br /&gt;
* The following command displays what resources are available for immediate use for the whole partition.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sinfo_t_idle&lt;br /&gt;
Partition dev_multiple  :      8 nodes idle&lt;br /&gt;
Partition multiple      :    332 nodes idle&lt;br /&gt;
Partition dev_single    :      4 nodes idle&lt;br /&gt;
Partition single        :     76 nodes idle&lt;br /&gt;
Partition long          :     80 nodes idle&lt;br /&gt;
Partition fat           :      5 nodes idle&lt;br /&gt;
Partition dev_special   :    342 nodes idle&lt;br /&gt;
Partition special       :    342 nodes idle&lt;br /&gt;
Partition dev_multiple_e:      7 nodes idle&lt;br /&gt;
Partition multiple_e    :    335 nodes idle&lt;br /&gt;
Partition gpu_4         :     12 nodes idle&lt;br /&gt;
Partition gpu_8         :      6 nodes idle&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* In the above example, jobs in all partitions can be run immediately.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Detailed job information : scontrol show job ==&lt;br /&gt;
scontrol show job displays detailed job state information and diagnostic output for all of your jobs or for a specified job. Detailed information is available for active, pending and recently completed jobs. The command scontrol is explained in detail on the webpage https://slurm.schedmd.com/scontrol.html or via manpage (man scontrol).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of all your jobs in normal mode: scontrol show job&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of a job with &amp;lt;jobid&amp;gt; in normal mode: scontrol show job &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
* End users can use scontrol show job to view the status of their &#039;&#039;&#039;own jobs&#039;&#039;&#039; only. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Arguments ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Option !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|- style=&amp;quot;width:12%;&amp;quot; &lt;br /&gt;
| -d&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Detailed mode&lt;br /&gt;
| Example: Display the state with jobid 18089884 in detailed mode. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt;scontrol -d show job 18089884&amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Scontrol show job Example ===&lt;br /&gt;
Here is an example from BinAC 2.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue    # show my own jobs (here the userid is replaced!)&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18089884  multiple CPV.sbat   bq0742  R      33:44      2 uc2n[165-166]&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
$ # now, see what&#039;s up with my pending job with jobid 18089884&lt;br /&gt;
$ &lt;br /&gt;
$ scontrol show job 18089884&lt;br /&gt;
&lt;br /&gt;
JobId=18089884 JobName=CPV.sbatch&lt;br /&gt;
   UserId=bq0742(8946) GroupId=scc(12345) MCS_label=N/A&lt;br /&gt;
   Priority=3 Nice=0 Account=kit QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   RunTime=00:35:06 TimeLimit=02:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2020-03-16T14:14:54 EligibleTime=2020-03-16T14:14:54&lt;br /&gt;
   AccrueTime=2020-03-16T14:14:54&lt;br /&gt;
   StartTime=2020-03-16T15:12:51 EndTime=2020-03-16T17:12:51 Deadline=N/A&lt;br /&gt;
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2020-03-16T15:12:51&lt;br /&gt;
   Partition=multiple AllocNode:Sid=uc2n995:5064&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=uc2n[165-166]&lt;br /&gt;
   BatchHost=uc2n165&lt;br /&gt;
   NumNodes=2 NumCPUs=160 NumTasks=80 CPUs/Task=1 ReqB:S:C:T=0:0:*:1&lt;br /&gt;
   TRES=cpu=160,mem=96320M,node=2,billing=160&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=40:0:*:1 CoreSpec=*&lt;br /&gt;
   MinCPUsNode=40 MinMemoryCPU=1204M MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) DelayBoot=00:00:00&lt;br /&gt;
   OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/CPV.sbatch&lt;br /&gt;
   WorkDir=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin&lt;br /&gt;
   StdErr=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   Power=&lt;br /&gt;
   MailUser=(null) MailType=NONE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can use standard Linux pipe commands to filter the very detailed scontrol show job output.&lt;br /&gt;
* Which state is the job in?&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -i State&lt;br /&gt;
   JobState=COMPLETED Reason=None Dependency=(null)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cancel Slurm Jobs ==&lt;br /&gt;
The scancel command is used to cancel jobs. The command scancel is explained in detail on the webpage https://slurm.schedmd.com/scancel.html or via manpage (man scancel).   &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Canceling own jobs : scancel ===&lt;br /&gt;
scancel is used to signal or cancel jobs, job arrays or job steps. The command is used as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scancel [-i] &amp;lt;job-id&amp;gt;&lt;br /&gt;
$ scancel -t &amp;lt;job_state_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Flag !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -i, --interactive&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Interactive mode.&lt;br /&gt;
| Cancel the job 987654 interactively. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -i 987654 &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| -t, --state&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Restrict the scancel operation to jobs in a certain state. &amp;lt;br&amp;gt; &amp;quot;job_state_name&amp;quot; may have a value of either &amp;quot;PENDING&amp;quot;, &amp;quot;RUNNING&amp;quot; or &amp;quot;SUSPENDED&amp;quot;.&lt;br /&gt;
| Cancel all jobs in state &amp;quot;PENDING&amp;quot;. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -t &amp;quot;PENDING&amp;quot; &amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resource Managers =&lt;br /&gt;
=== Batch Job (Slurm) Variables ===&lt;br /&gt;
The following environment variables of Slurm are added to your environment once your job has started&lt;br /&gt;
&amp;lt;small&amp;gt;(only an excerpt of the most important ones)&amp;lt;/small&amp;gt;.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Environment !! Brief explanation&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NODELIST &lt;br /&gt;
| List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NUM_NODES &lt;br /&gt;
| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_MEM_PER_NODE &lt;br /&gt;
| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| SLURM_NPROCS&lt;br /&gt;
| Total number of processes dedicated to the job &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CLUSTER_NAME&lt;br /&gt;
| Name of the cluster executing the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CPUS_PER_TASK &lt;br /&gt;
| Number of CPUs requested per task&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ACCOUNT&lt;br /&gt;
| Account name &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ID&lt;br /&gt;
| Job ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_NAME&lt;br /&gt;
| Job Name&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_PARTITION&lt;br /&gt;
| Partition/queue running the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_UID&lt;br /&gt;
| User ID of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_SUBMIT_DIR&lt;br /&gt;
| Job submit folder.  The directory from which sbatch was invoked. &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_USER&lt;br /&gt;
| User name of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_RESTART_COUNT&lt;br /&gt;
| Number of times job has restarted&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_PROCID&lt;br /&gt;
| Task ID (MPI rank)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_NTASKS&lt;br /&gt;
| The total number of tasks available for the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_ID&lt;br /&gt;
| Job step ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_NUM_TASKS&lt;br /&gt;
| Task count (number of MPI ranks)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_CONSTRAINT&lt;br /&gt;
| Job constraints&lt;br /&gt;
|}&lt;br /&gt;
See also:&lt;br /&gt;
* [https://slurm.schedmd.com/sbatch.html#lbAI Slurm input and output environment variables]&lt;br /&gt;
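To see such variables in action, a jobscript can simply echo them. A minimal sketch with stand-in values (inside a real job, Slurm exports these variables automatically, so the assignments below would be unnecessary):&lt;br /&gt;

```shell
#!/bin/sh
# Stand-in values for illustration; a real Slurm job has these pre-set.
SLURM_JOB_ID=123456
SLURM_JOB_PARTITION=compute
SLURM_NTASKS=4

echo "Job $SLURM_JOB_ID on partition $SLURM_JOB_PARTITION with $SLURM_NTASKS tasks"
# prints: Job 123456 on partition compute with 4 tasks
```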
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Exit Codes ===&lt;br /&gt;
A job&#039;s exit code (also known as exit status, return code and completion code) is captured by SLURM and saved as part of the job record. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any non-zero exit code will be assumed to be a job failure and will result in a Job State of FAILED with a reason of &amp;quot;NonZeroExitCode&amp;quot;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The exit code is an 8 bit unsigned number ranging between 0 and 255. While it is possible for a job to return a negative exit code, SLURM will display it as an unsigned value in the 0 - 255 range.&lt;br /&gt;
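The 8-bit range can be demonstrated in any shell: exit values are taken modulo 256, so a value outside 0-255 wraps around. A small sketch (plain shell, no Slurm involved):&lt;br /&gt;

```shell
#!/bin/sh
# Exit statuses are 8-bit: a program that exits with 300 is observed
# as exit code 300 % 256 = 44 by the calling shell.
sh -c 'exit 300'
echo $?   # prints 44
```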
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Displaying Exit Codes and Signals ====&lt;br /&gt;
SLURM displays a job&#039;s exit code in the output of &#039;&#039;&#039;scontrol show job&#039;&#039;&#039; and in the sview utility.&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
When a signal was responsible for a job&#039;s or step&#039;s termination, the signal number will be displayed after the exit code, delineated by a colon (:).&lt;br /&gt;
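At the shell level, a process killed by a signal is reported with status 128 + signal number, while Slurm records the signal itself after the colon. A quick illustration (plain shell, no Slurm involved):&lt;br /&gt;

```shell
#!/bin/sh
# SIGKILL is signal 9; a process terminated by it returns 128 + 9 = 137
# to the calling shell. Slurm would record the signal after the colon
# in its exit code display (e.g. 0:9).
sh -c 'kill -KILL $$'
echo $?   # prints 137
```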
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Submitting Termination Signal ====&lt;br /&gt;
Here is an example of how to &#039;save&#039; the termination status in a typical jobscript.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
mpirun  -np &amp;lt;#cores&amp;gt;  &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; ... (options)  2&amp;gt;&amp;amp;1&lt;br /&gt;
exit_code=$?&lt;br /&gt;
[ &amp;quot;$exit_code&amp;quot; -eq 0 ] &amp;amp;&amp;amp; echo &amp;quot;all clean...&amp;quot; || \&lt;br /&gt;
   echo &amp;quot;Executable &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; finished with exit code ${exit_code}&amp;quot;&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Do not run mpirun under &#039;&#039;time&#039;&#039;! The exit code would then be the one of the first program (time), not of your executable.&lt;br /&gt;
* You do not need an &#039;&#039;&#039;exit $exit_code&#039;&#039;&#039; in the scripts.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[#top|Back to top]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Slurm&amp;diff=15245</id>
		<title>BinAC2/Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Slurm&amp;diff=15245"/>
		<updated>2025-08-26T11:15:42Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= General information about Slurm =&lt;br /&gt;
&lt;br /&gt;
Any kind of calculation on the compute nodes of bwForCluster BinAC 2 requires the user to define the calculation as a single command or a sequence of commands, together with the required run time, number of CPU cores and main memory, and to submit all of this, i.e. the &#039;&#039;&#039;batch job&#039;&#039;&#039;, to a resource and workload managing software. BinAC 2 uses the workload manager Slurm, so any job submission is done via Slurm commands. Slurm queues and runs user jobs based on fair-sharing policies.&lt;br /&gt;
&lt;br /&gt;
= External Slurm documentation =&lt;br /&gt;
&lt;br /&gt;
You can find the official Slurm documentation and some other material here:&lt;br /&gt;
&lt;br /&gt;
* Slurm documentation: https://slurm.schedmd.com/documentation.html&lt;br /&gt;
* Slurm cheat sheet: https://slurm.schedmd.com/pdfs/summary.pdf&lt;br /&gt;
* Slurm tutorials: https://slurm.schedmd.com/tutorials.html&lt;br /&gt;
&lt;br /&gt;
= SLURM terminology = &lt;br /&gt;
&lt;br /&gt;
SLURM knows and mirrors the division of the cluster into &#039;&#039;&#039;nodes&#039;&#039;&#039; with several &#039;&#039;&#039;cores&#039;&#039;&#039;. When queuing &#039;&#039;&#039;jobs&#039;&#039;&#039;, there are several ways of requesting resources and it is important to know which term means what in SLURM. Here are some basic SLURM terms:&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Job&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Job&lt;br /&gt;
: A job is a self-contained computation that may encompass multiple tasks and is given specific resources like individual CPUs/GPUs, a specific amount of RAM or entire nodes. These resources are said to have been allocated for the job.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Task&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Task&lt;br /&gt;
: A task is a single run of a single process. By default, one task is run per node and one CPU is assigned per task.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Partition&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Partition    &lt;br /&gt;
: A partition (usually called queue outside SLURM) is a waiting line in which jobs are put by users.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Socket&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Socket    &lt;br /&gt;
: Receptacle on the motherboard for one physically packaged processor (each of which can contain one or more cores).&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Core&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Core    &lt;br /&gt;
: A complete private set of registers, execution units, and retirement queues needed to execute programs.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;Thread&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Thread    &lt;br /&gt;
: One or more hardware contexts within a single core. Each thread has the attributes of one core and is managed &amp;amp; scheduled as a single logical processor by the OS.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;span id=&amp;quot;CPU&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;CPU&lt;br /&gt;
: A &#039;&#039;&#039;CPU&#039;&#039;&#039; in Slurm means a &#039;&#039;&#039;single core&#039;&#039;&#039;. This differs from the more common terminology, where a CPU (a microprocessor chip) consists of multiple cores. Slurm uses the term &#039;&#039;&#039;sockets&#039;&#039;&#039; when talking about CPU chips. Depending on the system configuration, a CPU can be either a &#039;&#039;&#039;core&#039;&#039;&#039; or a &#039;&#039;&#039;thread&#039;&#039;&#039;. On &#039;&#039;&#039;BinAC 2, hyperthreading is activated on every machine&#039;&#039;&#039;. This means that the operating system and Slurm see each physical core as two logical cores.&lt;br /&gt;
&lt;br /&gt;
= Slurm Commands =&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Slurm commands !! Brief explanation&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/sbatch.html sbatch] || Submits a job and queues it in an input queue&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/salloc.html salloc] || Requests resources for an interactive job&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/squeue.html squeue] || Displays information about active, eligible, blocked, and/or recently completed jobs &lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/scontrol.html scontrol] || Displays detailed job state information&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/sstat.html sstat] || Displays status information about a running job&lt;br /&gt;
|- &lt;br /&gt;
| [https://slurm.schedmd.com/scancel.html scancel] || Cancels a job&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
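For example, a submitted job can be inspected and cancelled like this (the job ID 12345 is just a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scontrol show job 12345     # detailed state of the job&lt;br /&gt;
$ sstat -j 12345              # status information while the job is running&lt;br /&gt;
$ scancel 12345               # cancel the job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;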
&lt;br /&gt;
== Interactive Jobs : srun / salloc ==&lt;br /&gt;
&lt;br /&gt;
You can run interactive jobs for testing and developing your job scripts.&lt;br /&gt;
Several nodes are reserved for interactive work, so your jobs should start right away.&lt;br /&gt;
You can only submit one job to this partition at a time. A job can run for up to 10 hours (about one workday).&lt;br /&gt;
&lt;br /&gt;
This example command gives you 16 cores and 128 GB of memory for four hours on one of the reserved nodes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
salloc --partition=interactive --time=4:00:00 --cpus-per-task=16 --mem=128gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also use srun to request the same resources:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --partition=interactive --time=4:00:00 --cpus-per-task=16 --mem=128gb --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Job Submission : sbatch ==&lt;br /&gt;
&lt;br /&gt;
Batch jobs are submitted with the command &#039;&#039;&#039;sbatch&#039;&#039;&#039;. The main purpose of &#039;&#039;&#039;sbatch&#039;&#039;&#039; is to specify the resources that are needed to run the job. &#039;&#039;&#039;sbatch&#039;&#039;&#039; will then queue the batch job; when it actually starts depends on the availability of the requested resources and the fair-share value.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== sbatch Command Parameters ===&lt;br /&gt;
The syntax and use of &#039;&#039;&#039;sbatch&#039;&#039;&#039; can be displayed via:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ man sbatch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;sbatch&#039;&#039;&#039; options can be used from the command line or in your job script. The following table shows the syntax and provides examples for each option.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | sbatch Options&lt;br /&gt;
|-&lt;br /&gt;
! Command line&lt;br /&gt;
! Job Script&lt;br /&gt;
! Purpose&lt;br /&gt;
! Example&lt;br /&gt;
! Default value&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;code&amp;gt;-t &#039;&#039;time&#039;&#039;&amp;lt;/code&amp;gt;  or  &amp;lt;code&amp;gt;--time=&#039;&#039;time&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
| #SBATCH --time=&#039;&#039;time&#039;&#039;&lt;br /&gt;
| Wall clock time limit.&amp;lt;br&amp;gt;&lt;br /&gt;
| &amp;lt;code&amp;gt;-t 2:30:00&amp;lt;/code&amp;gt; Limits run time to 2h 30 min.&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;-t 2-12&amp;lt;/code&amp;gt; Limits run time to 2 days and 12 hours.&lt;br /&gt;
| Depends on Slurm partition.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -N &#039;&#039;count&#039;&#039;  or  --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --nodes=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of nodes to be used.&lt;br /&gt;
| &amp;lt;code&amp;gt;-N 1&amp;lt;/code&amp;gt; Run job on one node.&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;-N 2&amp;lt;/code&amp;gt; Run job on two nodes (requires MPI!)&lt;br /&gt;
| &lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -n &#039;&#039;count&#039;&#039;  or  --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of tasks to be launched.&lt;br /&gt;
| &amp;lt;code&amp;gt;-n 2&amp;lt;/code&amp;gt; launch two tasks in the job.&lt;br /&gt;
| One task per node&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --ntasks-per-node=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Maximum count of tasks per node.&amp;lt;br&amp;gt;(Replaces the option &amp;lt;code&amp;gt;ppn&amp;lt;/code&amp;gt; of MOAB.)&lt;br /&gt;
| &amp;lt;code&amp;gt;--ntasks-per-node=2&amp;lt;/code&amp;gt; Run 2 tasks per node&lt;br /&gt;
| 1 task per node&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -c &#039;&#039;count&#039;&#039; or --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| #SBATCH --cpus-per-task=&#039;&#039;count&#039;&#039;&lt;br /&gt;
| Number of CPUs required per (MPI-)task.&lt;br /&gt;
| &amp;lt;code&amp;gt;-c 2&amp;lt;/code&amp;gt; Request two CPUs per (MPI-)task.&lt;br /&gt;
| 1 CPU per (MPI-)task&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;code&amp;gt;--mem=&amp;lt;size&amp;gt;[units]&amp;lt;/code&amp;gt;&lt;br /&gt;
| #SBATCH --mem=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Memory per node (in megabytes if no unit is given).&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;[units]&amp;lt;/code&amp;gt; can be one of &amp;lt;code&amp;gt;[K&amp;lt;nowiki&amp;gt;|&amp;lt;/nowiki&amp;gt;M&amp;lt;nowiki&amp;gt;|&amp;lt;/nowiki&amp;gt;G&amp;lt;nowiki&amp;gt;|&amp;lt;/nowiki&amp;gt;T]&amp;lt;/code&amp;gt;.&lt;br /&gt;
| &amp;lt;code&amp;gt;--mem=10g&amp;lt;/code&amp;gt; Request 10 GB RAM per node &amp;lt;br&amp;gt; &amp;lt;code&amp;gt;--mem=0&amp;lt;/code&amp;gt; Request all memory on the node&lt;br /&gt;
| Depends on the Slurm configuration.&amp;lt;br&amp;gt;It is best to specify &amp;lt;code&amp;gt;--mem&amp;lt;/code&amp;gt; in every case.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039;&lt;br /&gt;
| #SBATCH --mem-per-cpu=&#039;&#039;value_in_MB&#039;&#039; &lt;br /&gt;
| Minimum memory required per allocated CPU.&amp;lt;br&amp;gt;(Replaces the option pmem of MOAB. You should omit &amp;lt;br&amp;gt; setting this option.)&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-type=&#039;&#039;type&#039;&#039;&lt;br /&gt;
| Notify user by email when certain event types occur.&amp;lt;br&amp;gt;Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
| #SBATCH --mail-user=&#039;&#039;mail-address&#039;&#039;&lt;br /&gt;
|  The specified mail-address receives email notification of state&amp;lt;br&amp;gt;changes as defined by --mail-type.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --output=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job output is stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --error=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| File in which job error messages are stored. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -J &#039;&#039;name&#039;&#039; or --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| #SBATCH --job-name=&#039;&#039;name&#039;&#039;&lt;br /&gt;
| Job name.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| #SBATCH --export=[ALL,] &#039;&#039;env-variables&#039;&#039;&lt;br /&gt;
| Specifies which environment variables from the submission &amp;lt;br&amp;gt; environment are propagated to the launched application. The default &amp;lt;br&amp;gt; is ALL. If you want to add a variable on top of the submission&amp;lt;br&amp;gt; environment, the argument ALL must be included.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -A &#039;&#039;group-name&#039;&#039; or --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| #SBATCH --account=&#039;&#039;group-name&#039;&#039;&lt;br /&gt;
| Charge the resources used by this job to the specified group. You may &amp;lt;br&amp;gt; need this option if your account is assigned to more &amp;lt;br&amp;gt; than one group. With the command &amp;quot;scontrol show job&amp;quot;, the project &amp;lt;br&amp;gt; group the job is accounted to is shown behind &amp;quot;Account=&amp;quot;. &lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -p &#039;&#039;queue-name&#039;&#039; or --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| #SBATCH --partition=&#039;&#039;queue-name&#039;&#039;&lt;br /&gt;
| Request a specific queue for the resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| --reservation=&#039;&#039;reservation-name&#039;&#039;&lt;br /&gt;
| #SBATCH --reservation=&#039;&#039;reservation-name&#039;&#039;&lt;br /&gt;
| Use a specific reservation for the resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -C &#039;&#039;LSDF&#039;&#039; or --constraint=&#039;&#039;LSDF&#039;&#039;&lt;br /&gt;
| #SBATCH --constraint=LSDF&lt;br /&gt;
| Job constraint LSDF Filesystems.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
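As an orientation, a job script header that combines several of the options above might look like this (all values are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=compute&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem=4gb&lt;br /&gt;
#SBATCH --job-name=my-job&lt;br /&gt;
#SBATCH --output=my-job_%j.out&lt;br /&gt;
#SBATCH --mail-type=END,FAIL&lt;br /&gt;
#SBATCH --mail-user=first.last@example.org&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;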
&lt;br /&gt;
==== sbatch --partition  &#039;&#039;queues&#039;&#039; ====&lt;br /&gt;
Partitions (queue classes) define resource limits of the compute system, such as maximum walltime, number of nodes, and processes per node. Details can be found here:&lt;br /&gt;
* [[BinAC2/SLURM_Partitions|BinAC 2 partitions]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== sbatch Examples ===&lt;br /&gt;
&lt;br /&gt;
If you are coming from Moab/Torque on BinAC 1 or you are new to HPC/Slurm, the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; options may confuse you. The following examples give an orientation on how to run typical workloads on BinAC 2.&lt;br /&gt;
&lt;br /&gt;
You can find every file mentioned on this Wiki page on BinAC 2 at: &amp;lt;code&amp;gt;/pfs/10/project/examples&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Serial Programs ====&lt;br /&gt;
For serial programs that use only one process, you can omit most of the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; parameters, as the default values are sufficient.&lt;br /&gt;
&lt;br /&gt;
To submit a serial job that runs the script &amp;lt;code&amp;gt;serial_job.sh&amp;lt;/code&amp;gt; and requires 5000 MB of main memory and 10 minutes of wall clock time (Slurm will allocate one &#039;&#039;&#039;physical&#039;&#039;&#039; core to your job), either:&lt;br /&gt;
&lt;br /&gt;
a) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p compute -t 10:00 --mem=5000m  serial_job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
b) add after the initial line of your script &#039;&#039;&#039;serial_job.sh&#039;&#039;&#039; the lines:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --time=10:00&lt;br /&gt;
#SBATCH --mem=5000m&lt;br /&gt;
#SBATCH --job-name=simple-serial-job&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and execute the modified script with the command line option &#039;&#039;--partition=compute&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=compute serial_job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that sbatch command line options override script options.&lt;br /&gt;
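For example, submitting the script above with a longer time limit given on the command line (values are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p compute --time=20:00 serial_job.sh   # command line --time wins over #SBATCH --time=10:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;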
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded Programs ====&lt;br /&gt;
&lt;br /&gt;
Multithreaded programs run multiple threads within one process; the threads share resources such as memory.&amp;lt;br&amp;gt;&lt;br /&gt;
You may use a program that includes a built-in option for multithreading (e.g., options like &amp;lt;code&amp;gt;--threads&amp;lt;/code&amp;gt;).&amp;lt;br&amp;gt;&lt;br /&gt;
For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP) a number of threads is defined by the environment variable &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt;. By default, this variable is set to 1 (&amp;lt;code&amp;gt;OMP_NUM_THREADS=1&amp;lt;/code&amp;gt;). &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; Hyperthreading is activated on bwForCluster BinAC 2. Hyperthreading can be beneficial for some applications and codes, but it can also degrade performance in other cases. We therefore recommend running a small test job with and without hyperthreading to determine the best choice.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;a) Program with built-in multithreading option&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This example uses the common bioinformatics software &amp;lt;code&amp;gt;samtools&amp;lt;/code&amp;gt; to demonstrate built-in multithreading.&lt;br /&gt;
&lt;br /&gt;
The module &amp;lt;code&amp;gt;bio/samtools/1.21&amp;lt;/code&amp;gt; provides an example jobscript that requests 4 CPUs and runs &amp;lt;code&amp;gt;samtools sort&amp;lt;/code&amp;gt; with 4 threads.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --time=19:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=4&lt;br /&gt;
#SBATCH --mem=5000m&lt;br /&gt;
#SBATCH --partition compute&lt;br /&gt;
[...]&lt;br /&gt;
samtools sort -@ 4 sample.bam -o sample.sorted.bam&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can submit the example jobscript with this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch /opt/bwhpc/common/bio/samtools/1.21/bwhpc-examples/binac2-samtools-1.21-bwhpc-examples.slurm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;b) OpenMP&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We will run an example OpenMP Hello World program. The jobscript looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --cpus-per-task=4&lt;br /&gt;
#SBATCH --time=1:00&lt;br /&gt;
#SBATCH --mem=5000m   &lt;br /&gt;
#SBATCH -J OpenMP-Hello-World&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_JOB_CPUS_PER_NODE}/2))&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Executable running on ${SLURM_JOB_CPUS_PER_NODE} cores with ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Run parallel Hello World&lt;br /&gt;
/pfs/10/project/examples/openmp_hello_world&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submit the job to the &amp;lt;code&amp;gt;compute&amp;lt;/code&amp;gt; partition and get the output (in the stdout-file)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sbatch --partition=compute /pfs/10/project/examples/openmp_hello_world.sh&lt;br /&gt;
&lt;br /&gt;
Executable  running on 4 cores with 4 threads&lt;br /&gt;
Hello from process: 0&lt;br /&gt;
Hello from process: 2&lt;br /&gt;
Hello from process: 1&lt;br /&gt;
Hello from process: 3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== OpenMPI ====&lt;br /&gt;
&lt;br /&gt;
If you want to run MPI-jobs on batch nodes, generate a wrapper script &amp;lt;code&amp;gt;mpi_hello_world.sh&amp;lt;/code&amp;gt; for &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; containing the following lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#SBATCH --partition compute&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=2&lt;br /&gt;
#SBATCH --cpus-per-task=2&lt;br /&gt;
#SBATCH --mem-per-cpu=2000&lt;br /&gt;
#SBATCH --time=05:00&lt;br /&gt;
&lt;br /&gt;
# Load the MPI implementation of your choice&lt;br /&gt;
module load mpi/openmpi/4.1-gnu-14.2&lt;br /&gt;
&lt;br /&gt;
# Run your MPI program&lt;br /&gt;
mpirun --bind-to core --map-by core --report-bindings mpi_hello_world&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Do &#039;&#039;&#039;NOT&#039;&#039;&#039; add the mpirun option &amp;lt;code&amp;gt;-n &amp;lt;number_of_processes&amp;gt;&amp;lt;/code&amp;gt; or any other option defining processes or nodes, since Slurm instructs mpirun about the number of processes and node hostnames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ALWAYS&#039;&#039;&#039; use the MPI options &amp;lt;code&amp;gt;--bind-to core&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--map-by core|socket|node&amp;lt;/code&amp;gt;.&lt;br /&gt;
Please type &amp;lt;code&amp;gt;man mpirun&amp;lt;/code&amp;gt; for an explanation of the different settings of the mpirun option &amp;lt;code&amp;gt;--map-by&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The above jobscript runs four OpenMPI tasks, distributed across two nodes. Because of hyperthreading you have to set &amp;lt;code&amp;gt;--cpus-per-task=2&amp;lt;/code&amp;gt;, so that each MPI task gets one physical core. If you omit &amp;lt;code&amp;gt;--cpus-per-task=2&amp;lt;/code&amp;gt;, MPI will fail.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; Not all compute nodes are connected via Infiniband. Tell Slurm you want Infiniband via &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt; when submitting or add &amp;lt;code&amp;gt;#SBATCH --constraint=ib&amp;lt;/code&amp;gt; to your jobscript.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --constraint=ib /pfs/10/project/examples/mpi_hello_world.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will run a simple Hello World program:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
Hello world from processor node2-031, rank 3 out of 4 processors&lt;br /&gt;
Hello world from processor node2-031, rank 2 out of 4 processors&lt;br /&gt;
Hello world from processor node2-030, rank 1 out of 4 processors&lt;br /&gt;
Hello world from processor node2-030, rank 0 out of 4 processors&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multithreaded + MPI parallel Programs ====&lt;br /&gt;
&lt;br /&gt;
Multithreaded + MPI parallel programs run faster than serial programs on multiple CPUs with multiple cores. All threads of one process share resources such as memory. MPI tasks, on the contrary, do not share memory but can be spawned across different nodes. &#039;&#039;&#039;Because hyperthreading is enabled on BinAC 2, the option --cpus-per-task (-c) must be set to 2*n if you want to use n threads.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
===== OpenMPI with Multithreading =====&lt;br /&gt;
Multiple MPI tasks using &#039;&#039;&#039;OpenMPI&#039;&#039;&#039; must be launched with the MPI parallel program starter &#039;&#039;&#039;mpirun&#039;&#039;&#039;. For multithreaded programs based on &#039;&#039;&#039;Open&#039;&#039;&#039; &#039;&#039;&#039;M&#039;&#039;&#039;ulti-&#039;&#039;&#039;P&#039;&#039;&#039;rocessing (OpenMP), the number of threads is defined by the environment variable OMP_NUM_THREADS. By default this variable is set to 1 (OMP_NUM_THREADS=1).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;For OpenMPI&#039;&#039;&#039;, a job script &#039;&#039;job_ompi_omp.sh&#039;&#039; that runs the 28-fold threaded MPI program &#039;&#039;ompi_omp_program&#039;&#039; with 4 MPI tasks, 3000 MByte of physical memory per thread (with 28 threads per MPI task you get 28*3000 MByte = 84000 MByte per MPI task), and a total wall clock time of 3 hours looks like this:&lt;br /&gt;
&amp;lt;!--b)--&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --cpus-per-task=56&lt;br /&gt;
#SBATCH --time=03:00:00&lt;br /&gt;
#SBATCH --mem=83gb    # 84000 MB = 84000/1024 GB ≈ 82 GB&lt;br /&gt;
#SBATCH --export=ALL,MPI_MODULE=mpi/openmpi/3.1,EXECUTABLE=./ompi_omp_program&lt;br /&gt;
#SBATCH --output=&amp;quot;parprog_hybrid_%j.out&amp;quot;  &lt;br /&gt;
&lt;br /&gt;
# Use when a defined module environment related to OpenMPI is wished&lt;br /&gt;
module load ${MPI_MODULE}&lt;br /&gt;
export OMP_NUM_THREADS=$((${SLURM_CPUS_PER_TASK}/2))&lt;br /&gt;
export MPIRUN_OPTIONS=&amp;quot;--bind-to core --map-by socket:PE=${OMP_NUM_THREADS} -report-bindings&amp;quot;&lt;br /&gt;
export NUM_CORES=$((${SLURM_NTASKS}*${OMP_NUM_THREADS}))&lt;br /&gt;
echo &amp;quot;${EXECUTABLE} running on ${NUM_CORES} cores with ${SLURM_NTASKS} MPI-tasks and ${OMP_NUM_THREADS} threads&amp;quot;&lt;br /&gt;
startexe=&amp;quot;mpirun -n ${SLURM_NTASKS} ${MPIRUN_OPTIONS} ${EXECUTABLE}&amp;quot;&lt;br /&gt;
echo $startexe&lt;br /&gt;
exec $startexe&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Execute the script &#039;&#039;&#039;job_ompi_omp.sh&#039;&#039;&#039; with the sbatch command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p compute ./job_ompi_omp.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* With the mpirun option &#039;&#039;--bind-to core&#039;&#039; MPI tasks and OpenMP threads are bound to physical cores.&lt;br /&gt;
* With the option &#039;&#039;--map-by node:PE=&amp;lt;value&amp;gt;&#039;&#039;, neighboring MPI tasks are attached to different nodes and each MPI task is bound to &amp;lt;value&amp;gt; cores of a node. &amp;lt;value&amp;gt; must be set to ${OMP_NUM_THREADS}.&lt;br /&gt;
* The option &#039;&#039;-report-bindings&#039;&#039; shows the bindings between MPI tasks and physical cores.&lt;br /&gt;
* The mpirun-options &#039;&#039;&#039;--bind-to core&#039;&#039;&#039;, &#039;&#039;&#039;--map-by socket|...|node:PE=&amp;lt;value&amp;gt;&#039;&#039;&#039; should always be used when running a multithreaded MPI program.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== GPU jobs ====&lt;br /&gt;
&lt;br /&gt;
The nodes in the gpu_4 and gpu_8 queues have 4 or 8 NVIDIA Tesla V100 GPUs. Just submitting a job to these queues is not enough to also allocate one or more GPUs; you have to do so using the &amp;quot;--gres=gpu&amp;quot; parameter. You have to specify how many GPUs your job needs, e.g. &amp;quot;--gres=gpu:2&amp;quot; will request two GPUs.&lt;br /&gt;
&lt;br /&gt;
The GPU nodes are shared between multiple jobs if the jobs don&#039;t request all the GPUs in a node and there are enough resources to run more than one job. The individual GPUs are always bound to a single job and will not be shared between different jobs.&lt;br /&gt;
&lt;br /&gt;
a) add after the initial line of your script job.sh a line with the&lt;br /&gt;
information about the GPU usage, e.g. &amp;lt;code&amp;gt;#SBATCH --gres=gpu:2&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks=40&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or b) execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p &amp;lt;queue&amp;gt; -n 40 -t 02:00:00 --mem 4000 --gres=gpu:2 job.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
If you start an interactive session on one of the GPU nodes, you can use the &amp;quot;nvidia-smi&amp;quot; command to list the GPUs allocated to your job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nvidia-smi&lt;br /&gt;
Sun Mar 29 15:20:05 2020       &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |&lt;br /&gt;
|-------------------------------+----------------------+----------------------+&lt;br /&gt;
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |&lt;br /&gt;
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |&lt;br /&gt;
|===============================+======================+======================|&lt;br /&gt;
|   0  Tesla V100-SXM2...  Off  | 00000000:3A:00.0 Off |                    0 |&lt;br /&gt;
| N/A   29C    P0    39W / 300W |      9MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
|   1  Tesla V100-SXM2...  Off  | 00000000:3B:00.0 Off |                    0 |&lt;br /&gt;
| N/A   30C    P0    41W / 300W |      8MiB / 32510MiB |      0%      Default |&lt;br /&gt;
+-------------------------------+----------------------+----------------------+&lt;br /&gt;
                                                                               &lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                       GPU Memory |&lt;br /&gt;
|  GPU       PID   Type   Process name                             Usage      |&lt;br /&gt;
|=============================================================================|&lt;br /&gt;
|    0     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
|    1     14228      G   /usr/bin/X                                     8MiB |&lt;br /&gt;
+-----------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
In case of using Open MPI, the underlying communication infrastructure (UCX and Open MPI&#039;s BTL) is CUDA-aware.&lt;br /&gt;
However, there may be warnings, e.g. when running&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load compiler/gnu/10.3 mpi/openmpi devel/cuda&lt;br /&gt;
$ mpirun -np 2 ./mpi_cuda_app&lt;br /&gt;
--------------------------------------&lt;br /&gt;
WARNING: There are more than one active ports on host &#039;uc2n520&#039;, but the&lt;br /&gt;
default subnet GID prefix was detected on more than one of these&lt;br /&gt;
ports.  If these ports are connected to different physical IB&lt;br /&gt;
networks, this configuration will fail in Open MPI.  This version of&lt;br /&gt;
Open MPI requires that every physically separate IB subnet that is&lt;br /&gt;
used between connected MPI processes must have different subnet ID&lt;br /&gt;
values.&lt;br /&gt;
&lt;br /&gt;
Please see this FAQ entry for more details:&lt;br /&gt;
&lt;br /&gt;
  http://www.open-mpi.org/faq/?category=openfabrics#ofa-default-subnet-gid&lt;br /&gt;
&lt;br /&gt;
NOTE: You can turn off this warning by setting the MCA parameter&lt;br /&gt;
      btl_openib_warn_default_gid_prefix to 0.&lt;br /&gt;
--------------------------------------------------------------------------&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please run Open MPI&#039;s mpirun using the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun --mca pml ucx --mca btl_openib_warn_default_gid_prefix 0 -np 2 ./mpi_cuda_app&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or disable the (older) communication layer, the Byte Transfer Layer (BTL), altogether:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun --mca pml ucx --mca btl ^openib -np 2 ./mpi_cuda_app&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(Please note that CUDA, as of v11.4, is only available with GCC up to version 10.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Start time of job or resources : squeue --start ==&lt;br /&gt;
The command can be used by any user to display the estimated start time of a job, based on historical usage, the earliest available reservable resources, and the priority-based backlog. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue). &lt;br /&gt;
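A typical invocation might look like this (the job ID is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue --start&lt;br /&gt;
$ squeue --start -j 12345&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;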
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by &#039;&#039;&#039;any user&#039;&#039;&#039;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List of your submitted jobs : squeue ==&lt;br /&gt;
Displays information about YOUR active, pending, and/or recently completed jobs. The command squeue is explained in detail on the webpage https://slurm.schedmd.com/squeue.html or via manpage (man squeue).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be run by any user.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Flags ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Flag !! Description&lt;br /&gt;
|-&lt;br /&gt;
| -l, --long&lt;br /&gt;
| Report more of the available information for the selected jobs or job steps, subject to any constraints specified.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&#039;&#039;squeue&#039;&#039; example on BinAC 2 &amp;lt;small&amp;gt;(Only your own jobs are displayed!)&amp;lt;/small&amp;gt;.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue &lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18088744    single CPV.sbat   ab1234 PD       0:00      1 (Priority)&lt;br /&gt;
          18098414  multiple CPV.sbat   ab1234 PD       0:00      2 (Priority) &lt;br /&gt;
          18090089  multiple CPV.sbat   ab1234  R       2:27      2 uc2n[127-128]&lt;br /&gt;
$ squeue -l&lt;br /&gt;
            JOBID PARTITION     NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON) &lt;br /&gt;
         18088654    single CPV.sbat   ab1234 COMPLETI       4:29   2:00:00      1 uc2n374&lt;br /&gt;
         18088785    single CPV.sbat   ab1234  PENDING       0:00   2:00:00      1 (Priority)&lt;br /&gt;
         18098414  multiple CPV.sbat   ab1234  PENDING       0:00   2:00:00      2 (Priority)&lt;br /&gt;
         18088683    single CPV.sbat   ab1234  RUNNING       0:14   2:00:00      1 uc2n413  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* The output of &#039;&#039;squeue&#039;&#039; shows how many jobs of yours are running or pending and how many nodes are in use by your jobs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Shows free resources : sinfo_t_idle ==&lt;br /&gt;
The Slurm command sinfo is used to view partition and node information for a system running Slurm. It incorporates downtime, reservations, and node state information in determining the available backfill window. Its raw output can, however, be hard to digest.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
SCC has prepared a special script (sinfo_t_idle) to find out how many processors are available for immediate use on the system. It is anticipated that users will use this information to submit jobs that meet these criteria and thus obtain quick job turnaround times. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
By default, this command can be used by any user or administrator. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Example ===&lt;br /&gt;
* The following command displays which resources are available for immediate use in each partition.&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sinfo_t_idle&lt;br /&gt;
Partition dev_multiple  :      8 nodes idle&lt;br /&gt;
Partition multiple      :    332 nodes idle&lt;br /&gt;
Partition dev_single    :      4 nodes idle&lt;br /&gt;
Partition single        :     76 nodes idle&lt;br /&gt;
Partition long          :     80 nodes idle&lt;br /&gt;
Partition fat           :      5 nodes idle&lt;br /&gt;
Partition dev_special   :    342 nodes idle&lt;br /&gt;
Partition special       :    342 nodes idle&lt;br /&gt;
Partition dev_multiple_e:      7 nodes idle&lt;br /&gt;
Partition multiple_e    :    335 nodes idle&lt;br /&gt;
Partition gpu_4         :     12 nodes idle&lt;br /&gt;
Partition gpu_8         :      6 nodes idle&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* In the above example, jobs in all partitions can start immediately.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
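The per-partition output of sinfo_t_idle is easy to post-process. A sketch that finds partitions with at least a given number of idle nodes, fed here with two sample lines copied from the output above (in practice you would pipe sinfo_t_idle itself):

```shell
# Two sample lines of sinfo_t_idle output (taken from the example above)
sample='Partition dev_multiple  :      8 nodes idle
Partition fat           :      5 nodes idle'

# Print partitions with at least 6 idle nodes
# (field 2 = partition name, field 4 = idle node count)
big=$(printf '%s\n' "$sample" | awk '$4 >= 6 {print $2}')
echo "$big"
```

With the sample above, only dev_multiple passes the threshold.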
&lt;br /&gt;
== Detailed job information : scontrol show job ==&lt;br /&gt;
scontrol show job displays detailed job state information and diagnostic output for all of your jobs or for a single specified job. Detailed information is available for active, pending and recently completed jobs. The command scontrol is explained in detail on the webpage https://slurm.schedmd.com/scontrol.html or via manpage (man scontrol). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of all your jobs in normal mode: scontrol show job&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Display the state of a job with &amp;lt;jobid&amp;gt; in normal mode: scontrol show job &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Access ===&lt;br /&gt;
* End users can use scontrol show job to view the status of their &#039;&#039;&#039;own jobs&#039;&#039;&#039; only. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Arguments ===&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Option !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
|- style=&amp;quot;width:12%;&amp;quot; &lt;br /&gt;
| -d&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Detailed mode&lt;br /&gt;
| Example: Display the state with jobid 18089884 in detailed mode. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt;scontrol -d show job 18089884&amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Scontrol show job Example ===&lt;br /&gt;
Here is an example from BinAC 2.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue    # show my own jobs (here the userid is replaced!)&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
          18089884  multiple CPV.sbat   bq0742  R      33:44      2 uc2n[165-166]&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
$ # now, see what&#039;s up with my running job with jobid 18089884&lt;br /&gt;
$ &lt;br /&gt;
$ scontrol show job 18089884&lt;br /&gt;
&lt;br /&gt;
JobId=18089884 JobName=CPV.sbatch&lt;br /&gt;
   UserId=bq0742(8946) GroupId=scc(12345) MCS_label=N/A&lt;br /&gt;
   Priority=3 Nice=0 Account=kit QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   RunTime=00:35:06 TimeLimit=02:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2020-03-16T14:14:54 EligibleTime=2020-03-16T14:14:54&lt;br /&gt;
   AccrueTime=2020-03-16T14:14:54&lt;br /&gt;
   StartTime=2020-03-16T15:12:51 EndTime=2020-03-16T17:12:51 Deadline=N/A&lt;br /&gt;
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2020-03-16T15:12:51&lt;br /&gt;
   Partition=multiple AllocNode:Sid=uc2n995:5064&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=uc2n[165-166]&lt;br /&gt;
   BatchHost=uc2n165&lt;br /&gt;
   NumNodes=2 NumCPUs=160 NumTasks=80 CPUs/Task=1 ReqB:S:C:T=0:0:*:1&lt;br /&gt;
   TRES=cpu=160,mem=96320M,node=2,billing=160&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=40:0:*:1 CoreSpec=*&lt;br /&gt;
   MinCPUsNode=40 MinMemoryCPU=1204M MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) DelayBoot=00:00:00&lt;br /&gt;
   OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/CPV.sbatch&lt;br /&gt;
   WorkDir=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin&lt;br /&gt;
   StdErr=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/pfs/data5/home/kit/scc/bq0742/git/CPV/bin/slurm-18089884.out&lt;br /&gt;
   Power=&lt;br /&gt;
   MailUser=(null) MailType=NONE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can use standard Linux pipe commands to filter the very detailed scontrol show job output.&lt;br /&gt;
* Which state is the job in?&lt;br /&gt;
&amp;lt;pre&amp;gt;$ scontrol show job 18089884 | grep -i State&lt;br /&gt;
   JobState=COMPLETED Reason=None Dependency=(null)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
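Beyond grep, a single field can be extracted cleanly with awk. A minimal sketch over one captured sample line (taken from the example above; against a live job you would pipe scontrol show job &lt;jobid&gt; instead):

```shell
# One line of sample 'scontrol show job' output (from the example above)
line='   JobState=RUNNING Reason=None Dependency=(null)'

# Split the line into KEY=VALUE tokens, then split on '=' to get JobState
job_state=$(printf '%s\n' "$line" | tr ' ' '\n' | awk -F= '$1=="JobState" {print $2}')
echo "$job_state"
```

This prints just the bare state (here RUNNING), which is convenient inside scripts that poll a job.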
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cancel Slurm Jobs ==&lt;br /&gt;
The scancel command is used to cancel jobs. The command scancel is explained in detail on the webpage https://slurm.schedmd.com/scancel.html or via manpage (man scancel).   &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Canceling own jobs : scancel ===&lt;br /&gt;
scancel is used to signal or cancel jobs, job arrays or job steps. The command is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ scancel [-i] &amp;lt;job-id&amp;gt;&lt;br /&gt;
$ scancel -t &amp;lt;job_state_name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Flag !! Default !! Description !! Example&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| -i, --interactive&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Interactive mode.&lt;br /&gt;
| Cancel the job 987654 interactively. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -i 987654 &amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| -t, --state&lt;br /&gt;
| (n/a)&lt;br /&gt;
| Restrict the scancel operation to jobs in a certain state. &amp;lt;br&amp;gt; &amp;quot;job_state_name&amp;quot; may have a value of either &amp;quot;PENDING&amp;quot;, &amp;quot;RUNNING&amp;quot; or &amp;quot;SUSPENDED&amp;quot;.&lt;br /&gt;
| Cancel all jobs in state &amp;quot;PENDING&amp;quot;. &amp;lt;br&amp;gt; &amp;lt;pre&amp;gt; scancel -t &amp;quot;PENDING&amp;quot; &amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
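scancel -t PENDING cancels all of your pending jobs at once. If you only want to cancel a subset, you can first build the job-ID list from squeue output. A sketch over sample squeue lines (hypothetical job IDs; in practice you would pipe squeue -h -u $USER instead of the here-string, and then actually run scancel):

```shell
# Sample 'squeue -h' output (no header line, hypothetical job IDs)
sample='18088744    single CPV.sbat   ab1234 PD       0:00      1 (Priority)
18098414  multiple CPV.sbat   ab1234 PD       0:00      2 (Priority)
18090089  multiple CPV.sbat   ab1234  R       2:27      2 uc2n[127-128]'

# Collect the IDs of all PENDING (state "PD") jobs
pending_ids=$(printf '%s\n' "$sample" | awk '$5=="PD" {print $1}')
echo $pending_ids   # unquoted on purpose: prints the IDs on one line
# Real usage (not executed here): scancel $pending_ids
```

Only the two PD jobs are selected; the running job is left alone.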
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resource Managers =&lt;br /&gt;
=== Batch Job (Slurm) Variables ===&lt;br /&gt;
The following environment variables of Slurm are added to your environment once your job has started&lt;br /&gt;
&amp;lt;small&amp;gt;(only an excerpt of the most important ones)&amp;lt;/small&amp;gt;.&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Environment !! Brief explanation&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_CPUS_PER_NODE &lt;br /&gt;
| Number of processes per node dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NODELIST &lt;br /&gt;
| List of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_JOB_NUM_NODES &lt;br /&gt;
| Number of nodes dedicated to the job&lt;br /&gt;
|- &lt;br /&gt;
| SLURM_MEM_PER_NODE &lt;br /&gt;
| Memory per node dedicated to the job &lt;br /&gt;
|- &lt;br /&gt;
| SLURM_NPROCS&lt;br /&gt;
| Total number of processes dedicated to the job &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CLUSTER_NAME&lt;br /&gt;
| Name of the cluster executing the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_CPUS_PER_TASK &lt;br /&gt;
| Number of CPUs requested per task&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ACCOUNT&lt;br /&gt;
| Account name &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_ID&lt;br /&gt;
| Job ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_NAME&lt;br /&gt;
| Job Name&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_PARTITION&lt;br /&gt;
| Partition/queue running the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_UID&lt;br /&gt;
| User ID of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_SUBMIT_DIR&lt;br /&gt;
| Job submit folder.  The directory from which sbatch was invoked. &lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_USER&lt;br /&gt;
| User name of the job&#039;s owner&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_RESTART_COUNT&lt;br /&gt;
| Number of times job has restarted&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_PROCID&lt;br /&gt;
| Task ID (MPI rank)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_NTASKS&lt;br /&gt;
| The total number of tasks available for the job&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_ID&lt;br /&gt;
| Job step ID&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_STEP_NUM_TASKS&lt;br /&gt;
| Task count (number of MPI ranks)&lt;br /&gt;
|-&lt;br /&gt;
| SLURM_JOB_CONSTRAINT&lt;br /&gt;
| Job constraints&lt;br /&gt;
|}&lt;br /&gt;
See also:&lt;br /&gt;
* [https://slurm.schedmd.com/sbatch.html#lbAI Slurm input and output environment variables]&lt;br /&gt;
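Inside a running job these variables behave like ordinary environment variables. A minimal sketch that reads a few of them with fallback defaults so the snippet also runs outside Slurm (outside a job the SLURM_* variables are simply unset; the defaults shown are illustrative):

```shell
#!/bin/bash
# Fall back to sane defaults when not running under Slurm
ntasks=${SLURM_NTASKS:-1}
jobid=${SLURM_JOB_ID:-none}
workdir=${SLURM_SUBMIT_DIR:-$PWD}

echo "Job $jobid: $ntasks task(s), submitted from $workdir"
```

Using ${VAR:-default} expansions keeps such scripts debuggable on a login node or your workstation.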
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Exit Codes ===&lt;br /&gt;
A job&#039;s exit code (also known as exit status, return code and completion code) is captured by SLURM and saved as part of the job record. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Any non-zero exit code will be assumed to be a job failure and will result in a Job State of FAILED with a reason of &amp;quot;NonZeroExitCode&amp;quot;.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The exit code is an 8 bit unsigned number ranging between 0 and 255. While it is possible for a job to return a negative exit code, SLURM will display it as an unsigned value in the 0 - 255 range.&lt;br /&gt;
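Because the exit code is an 8-bit value, any status outside 0-255 wraps around modulo 256, which can make a failure report a different number than the one your program returned. This is easy to demonstrate in plain bash:

```shell
# A process exiting with 300 is reported as 300 mod 256 = 44
bash -c 'exit 300'
status=$?
echo "Exit status seen by the shell: $status"
```

Keep this in mind when comparing a job's recorded exit code against error codes documented by your application.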
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Displaying Exit Codes and Signals ====&lt;br /&gt;
SLURM displays a job&#039;s exit code in the output of &#039;&#039;&#039;scontrol show job&#039;&#039;&#039; and in the sview utility.&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
When a signal was responsible for a job or step&#039;s termination, the signal number will be displayed after the exit code, delineated by a colon (:).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Submitting Termination Signal ====&lt;br /&gt;
Here is an example of how to capture and report the exit code of your executable in a typical jobscript.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
mpirun  -np &amp;lt;#cores&amp;gt;  &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; ... (options)  2&amp;gt;&amp;amp;1&lt;br /&gt;
exit_code=$?&lt;br /&gt;
[ &amp;quot;$exit_code&amp;quot; -eq 0 ] &amp;amp;&amp;amp; echo &amp;quot;all clean...&amp;quot; || \&lt;br /&gt;
   echo &amp;quot;Executable &amp;lt;EXE_BIN_DIR&amp;gt;/&amp;lt;executable&amp;gt; finished with exit code ${exit_code}&amp;quot;&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Do not use &#039;&#039;&#039;time mpirun&#039;&#039;&#039;! The exit code captured would then be the one returned by the time program, not by your executable.&lt;br /&gt;
* You do not need an &#039;&#039;&#039;exit $exit_code&#039;&#039;&#039; in the scripts.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[#top|Back to top]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/AMDock&amp;diff=15240</id>
		<title>BinAC2/Software/AMDock</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/AMDock&amp;diff=15240"/>
		<updated>2025-08-20T16:04:34Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description   !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| chem/amdock&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://github.com/Valdes-Tresanco-MS/AMDock?tab=GPL-3.0-1-ov-file GPL-3.0]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://github.com/Valdes-Tresanco-MS/AMDock GitHub]&lt;br /&gt;
|- &lt;br /&gt;
| Graphical Interface&lt;br /&gt;
|  Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
&lt;br /&gt;
AMDock (Assisted Molecular Docking) is a user-friendly graphical tool to assist in the docking of protein-ligand complexes using AutoDock Vina or AutoDock4. The tool integrates several external programs to process docking input files, define the search space (box), and perform docking under the user’s supervision.&lt;br /&gt;
&lt;br /&gt;
You can use the AMDock graphical user interface on BinAC 2 via TigerVNC. The AMDock module provides a ready-to-use example jobscript.&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
&lt;br /&gt;
== VNC viewer ==&lt;br /&gt;
&lt;br /&gt;
First, you will need a VNC viewer on your local workstation. Please refer to the [https://wiki.bwhpc.de/e/BinAC2/Software/TigerVNC#VNC_viewer TigerVNC documentation] for installing the VNC viewer.&lt;br /&gt;
&lt;br /&gt;
== Start AMDock ==&lt;br /&gt;
&lt;br /&gt;
The AMDock module provides a ready to use example jobscript.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Load AMDock module&lt;br /&gt;
module load chem/amdock/1.6.2&lt;br /&gt;
&lt;br /&gt;
# Copy the jobscript into your current working directory&lt;br /&gt;
cp $AMDOCK_EXA_DIR/binac2-amdock-1.6.2-example.slurm .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adjust the requested resources (CPUs, memory, time) as needed. Please don&#039;t change anything else.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Adjust these values as needed&lt;br /&gt;
#SBATCH --cpus-per-task=10&lt;br /&gt;
#SBATCH --mem=20gb&lt;br /&gt;
#SBATCH --time=6:00:00&lt;br /&gt;
&lt;br /&gt;
# Don&#039;t change these settings&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --partition=interactive&lt;br /&gt;
#SBATCH --gres=display:1&lt;br /&gt;
#SBATCH --signal=B:SIGUSR1@60&lt;br /&gt;
#SBATCH --job-name=amdock-1.6.2&lt;br /&gt;
&lt;br /&gt;
module load chem/amdock/1.6.2&lt;br /&gt;
&lt;br /&gt;
# AMDock is used via a graphical user interface (GUI)&lt;br /&gt;
# We need to start a VNC server in order to work with&lt;br /&gt;
# AMDock&lt;br /&gt;
# This wrapper script will start the VNC server and prints&lt;br /&gt;
# connection details to the job&#039;s stdout file&lt;br /&gt;
amdock_wrapper&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submit the AMDock job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jobid=$(sbatch --parsable binac2-amdock-1.6.2-example.slurm)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create SSH tunnel ==&lt;br /&gt;
&lt;br /&gt;
The compute node on which AMDock is running is not directly reachable from your local workstation. Therefore, you need to create an SSH tunnel from your workstation to the compute node via the BinAC 2 login node.&lt;br /&gt;
&lt;br /&gt;
The job&#039;s standard output file (&amp;lt;code&amp;gt;slurm-${jobid}.out&amp;lt;/code&amp;gt;) contains all the information you need. Please note that details such as the IP address, port number, and access URL will differ in your case.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat slurm-${jobid}.out&lt;br /&gt;
[...]&lt;br /&gt;
&lt;br /&gt;
   Paste this ssh command in a terminal on local host (i.e., laptop)&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   ssh -N -L 5901:172.0.0.1:5901 tu_iioba01@login.binac2.uni-tuebingen.de&lt;br /&gt;
&lt;br /&gt;
   Then you can connect to the session with your local vncviewer:&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   vncviewer localhost:5901&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux users can simply copy&amp;amp;paste the &amp;lt;code&amp;gt;ssh -N -L ...&amp;lt;/code&amp;gt; command into their terminal.&lt;br /&gt;
Windows users, please refer to [https://wiki.bwhpc.de/e/BinAC2/Software/Jupyterlab#Windows_Users this documentation]. It&#039;s about JupyterLab, but the SSH-tunnel is created in a similar way.&lt;br /&gt;
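Instead of scrolling through the output file, the tunnel command can also be grepped out directly. A sketch over a single sample line (copied from the example above; host, user, and port are placeholders, and the real input would be the slurm-${jobid}.out file):

```shell
# Sample line as it appears in the job's stdout file (placeholder values)
sample='   ssh -N -L 5901:172.0.0.1:5901 tu_iioba01@login.binac2.uni-tuebingen.de'

# Pull out just the ssh command; against the real file this would be:
#   grep -o "ssh -N -L .*" slurm-${jobid}.out
tunnel_cmd=$(printf '%s\n' "$sample" | grep -o 'ssh -N -L .*')
echo "$tunnel_cmd"
```

The printed command can then be pasted straight into a terminal on your workstation.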
&lt;br /&gt;
== Connect to VNC Server ==&lt;br /&gt;
&lt;br /&gt;
Now you can use your local VNC viewer to access the VNC session. Please use the correct port you got when starting the VNC server. At some point you will need to enter the password you created earlier.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncviewer localhost:5901&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should be greeted with the virtual desktop running AMDock.&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2-amdock.png | 800px | center | AMDock opened in VNC viewer]]&lt;br /&gt;
&lt;br /&gt;
Attention: If you close the AMDock window, the VNC server, as well as the interactive job, will be stopped.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15239</id>
		<title>BinAC2/Software</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15239"/>
		<updated>2025-08-20T16:02:39Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Environment Modules ==&lt;br /&gt;
Most software is provided as Modules.&lt;br /&gt;
&lt;br /&gt;
Required reading to use: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Available Software ==&lt;br /&gt;
&lt;br /&gt;
* Web: Visit [https://www.bwhpc.de/software.php https://www.bwhpc.de/software.php], select &amp;lt;code&amp;gt;Cluster → bwForCluster BinAC 2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* On the cluster: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Software in Containers ==&lt;br /&gt;
&lt;br /&gt;
Instructions for loading software in containers: [[NEMO/Software/Singularity_Containers|Singularity Containers]]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
Documentation for environment modules available on the cluster:  &lt;br /&gt;
&lt;br /&gt;
* with command &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt;&lt;br /&gt;
* examples in &amp;lt;code&amp;gt;$SOFTNAME_EXA_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation in the Wiki ==&lt;br /&gt;
&lt;br /&gt;
For some applications additional documentation is provided here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Alphafold | Alphafold ]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/AMDock | AMDock ]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/BLAST | BLAST]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Bowtie | Bowtie]] --&amp;gt;&lt;br /&gt;
* [[BinAC/Software/Cellranger | Cell Ranger]]&lt;br /&gt;
* [[Development/Conda | Conda]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Gromacs | Gromacs]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/Jupyterlab | JupyterLab]]&lt;br /&gt;
* [[BinAC2/Software/Nextflow | Nextflow and nf-core]]&lt;br /&gt;
* [[BinAC2/Software/TigerVNC | TigerVNC: Remote visualization using VNC]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15238</id>
		<title>BinAC2/Software</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15238"/>
		<updated>2025-08-20T16:02:23Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Environment Modules ==&lt;br /&gt;
Most software is provided as Modules.&lt;br /&gt;
&lt;br /&gt;
Required reading to use: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Available Software ==&lt;br /&gt;
&lt;br /&gt;
* Web: Visit [https://www.bwhpc.de/software.php https://www.bwhpc.de/software.php], select &amp;lt;code&amp;gt;Cluster → bwForCluster BinAC 2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* On the cluster: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Software in Containers ==&lt;br /&gt;
&lt;br /&gt;
Instructions for loading software in containers: [[NEMO/Software/Singularity_Containers|Singularity Containers]]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
Documentation for environment modules available on the cluster:  &lt;br /&gt;
&lt;br /&gt;
* with command &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt;&lt;br /&gt;
* examples in &amp;lt;code&amp;gt;$SOFTNAME_EXA_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation in the Wiki ==&lt;br /&gt;
&lt;br /&gt;
For some applications additional documentation is provided here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Alphafold | Alphafold ]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/AMDock | AMDock ]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/BLAST | BLAST]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Bowtie | Bowtie]] --&amp;gt;&lt;br /&gt;
* [[BinAC/Software/Cellranger | Cell Ranger]]&lt;br /&gt;
* [[Development/Conda | Conda]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Gromacs | Gromacs]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/Jupyterlab | JupyterLab]]&lt;br /&gt;
* [[BinAC/Software/Nextflow | Nextflow and nf-core]]&lt;br /&gt;
* [[BinAC2/Software/TigerVNC | TigerVNC: Remote visualization using VNC]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/AMDock&amp;diff=15237</id>
		<title>BinAC2/Software/AMDock</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/AMDock&amp;diff=15237"/>
		<updated>2025-08-20T16:02:00Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: F Bartusch moved page BinAC/Software/AMDock to BinAC2/Software/AMDock without leaving a redirect: Aus Versehen im BinAC 1 Baum angelegt ...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description   !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| chem/amdock&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://github.com/Valdes-Tresanco-MS/AMDock?tab=GPL-3.0-1-ov-file GPL-3.0]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://github.com/Valdes-Tresanco-MS/AMDock GitHub]&lt;br /&gt;
|- &lt;br /&gt;
| Graphical Interface&lt;br /&gt;
|  Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
&lt;br /&gt;
AMDock (Assisted Molecular Docking) is a user-friendly graphical tool to assist in the docking of protein-ligand complexes using AutoDock Vina or AutoDock4. The tool integrates several external programs to process docking input files, define the search space (box), and perform docking under the user’s supervision.&lt;br /&gt;
&lt;br /&gt;
You can use the AMDock graphical user interface on BinAC 2 via TigerVNC. The AMDock module provides a ready-to-use example jobscript.&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
&lt;br /&gt;
== VNC viewer ==&lt;br /&gt;
&lt;br /&gt;
First, you will need a VNC viewer on your local workstation. Please refer to the [https://wiki.bwhpc.de/e/BinAC2/Software/TigerVNC#VNC_viewer TigerVNC documentation] for installing the VNC viewer.&lt;br /&gt;
&lt;br /&gt;
== Start AMDock ==&lt;br /&gt;
&lt;br /&gt;
The AMDock module provides a ready to use example jobscript.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Load AMDock module&lt;br /&gt;
module load chem/amdock/1.6.2&lt;br /&gt;
&lt;br /&gt;
# Copy the jobscript into your current working directory&lt;br /&gt;
cp $AMDOCK_EXA_DIR/binac2-amdock-1.6.2-example.slurm .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adjust the requested resources (CPUs, memory, time) as needed. Please don&#039;t change anything else.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Adjust these values as needed&lt;br /&gt;
#SBATCH --cpus-per-task=10&lt;br /&gt;
#SBATCH --mem=20gb&lt;br /&gt;
#SBATCH --time=6:00:00&lt;br /&gt;
&lt;br /&gt;
# Don&#039;t change these settings&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --partition=interactive&lt;br /&gt;
#SBATCH --gres=display:1&lt;br /&gt;
#SBATCH --signal=B:SIGUSR1@60&lt;br /&gt;
#SBATCH --job-name=amdock-1.6.2&lt;br /&gt;
&lt;br /&gt;
module load chem/amdock/1.6.2&lt;br /&gt;
&lt;br /&gt;
# AMDock is used via a graphical user interface (GUI)&lt;br /&gt;
# We need to start a VNC server in order to work with&lt;br /&gt;
# AMDock&lt;br /&gt;
# This wrapper script will start the VNC server and prints&lt;br /&gt;
# connection details to the job&#039;s stdout file&lt;br /&gt;
amdock_wrapper&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submit the AMDock job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jobid=$(sbatch --parsable binac2-amdock-1.6.2-example.slurm)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create SSH tunnel ==&lt;br /&gt;
&lt;br /&gt;
The compute node on which AMDock is running is not directly reachable from your local workstation. Therefore, you need to create an SSH tunnel from your workstation to the compute node via the BinAC 2 login node.&lt;br /&gt;
&lt;br /&gt;
The job&#039;s standard output file (&amp;lt;code&amp;gt;slurm-${jobid}.out&amp;lt;/code&amp;gt;) contains all the information you need. Please note that details such as the IP address, port number, and access URL will differ in your case.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat slurm-${jobid}.out&lt;br /&gt;
[...]&lt;br /&gt;
&lt;br /&gt;
   Paste this ssh command in a terminal on local host (i.e., laptop)&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   ssh -N -L 5901:172.0.0.1:5901 tu_iioba01@login.binac2.uni-tuebingen.de&lt;br /&gt;
&lt;br /&gt;
   Then you can connect to the session with your local vncviewer:&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   vncviewer localhost:5901&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux users can simply copy&amp;amp;paste the &amp;lt;code&amp;gt;ssh -N -L ...&amp;lt;/code&amp;gt; command into their terminal.&lt;br /&gt;
Windows users, please refer to [https://wiki.bwhpc.de/e/BinAC2/Software/Jupyterlab#Windows_Users this documentation]. It&#039;s about JupyterLab, but the SSH-tunnel is created in a similar way.&lt;br /&gt;
&lt;br /&gt;
== Connect to VNC Server ==&lt;br /&gt;
&lt;br /&gt;
Now you can use your local VNC viewer to access the VNC session. Please use the correct port you got when starting the VNC server. At some point you will need to enter the password you created earlier.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncviewer localhost:5901&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should be greeted with the virtual desktop running AMDock.&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2-amdock.png | 800px | center | AMDock opened in VNC viewer]]&lt;br /&gt;
&lt;br /&gt;
Attention: If you close the AMDock window, the VNC server, as well as the interactive job, will be stopped.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/AMDock&amp;diff=15236</id>
		<title>BinAC2/Software/AMDock</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/AMDock&amp;diff=15236"/>
		<updated>2025-08-20T15:50:04Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description   !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| chem/amdock&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://github.com/Valdes-Tresanco-MS/AMDock?tab=GPL-3.0-1-ov-file GPL-3.0]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://github.com/Valdes-Tresanco-MS/AMDock GitHub]&lt;br /&gt;
|- &lt;br /&gt;
| Graphical Interface&lt;br /&gt;
|  Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
&lt;br /&gt;
AMDock (Assisted Molecular Docking) is a user-friendly graphical tool to assist in the docking of protein-ligand complexes using AutoDock Vina or AutoDock4. The tool integrates several external programs to process docking input files, define the search space (box), and perform docking under the user’s supervision.&lt;br /&gt;
&lt;br /&gt;
You can use the AMDock graphical user interface on BinAC 2 via TigerVNC. The AMDock module provides a ready-to-use example jobscript.&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
&lt;br /&gt;
== VNC viewer ==&lt;br /&gt;
&lt;br /&gt;
First, you will need a VNC viewer on your local workstation. Please refer to the [https://wiki.bwhpc.de/e/BinAC2/Software/TigerVNC#VNC_viewer TigerVNC documentation] for installing the VNC viewer.&lt;br /&gt;
&lt;br /&gt;
== Start AMDock ==&lt;br /&gt;
&lt;br /&gt;
The AMDock module provides a ready to use example jobscript.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Load AMDock module&lt;br /&gt;
module load chem/amdock/1.6.2&lt;br /&gt;
&lt;br /&gt;
# Copy the jobscript into your current working directory&lt;br /&gt;
cp $AMDOCK_EXA_DIR/binac2-amdock-1.6.2-example.slurm .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adjust the requested resources (CPUs, memory, time) as needed. Please leave the other settings unchanged.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Adjust these values as needed&lt;br /&gt;
#SBATCH --cpus-per-task=10&lt;br /&gt;
#SBATCH --mem=20gb&lt;br /&gt;
#SBATCH --time=6:00:00&lt;br /&gt;
&lt;br /&gt;
# Don&#039;t change these settings&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --partition=interactive&lt;br /&gt;
#SBATCH --gres=display:1&lt;br /&gt;
#SBATCH --signal=B:SIGUSR1@60&lt;br /&gt;
#SBATCH --job-name=amdock-1.6.2&lt;br /&gt;
&lt;br /&gt;
module load chem/amdock/1.6.2&lt;br /&gt;
&lt;br /&gt;
# AMDock is used via a graphical user interface (GUI)&lt;br /&gt;
# We need to start a VNC server in order to work with&lt;br /&gt;
# AMDock&lt;br /&gt;
# This wrapper script will start the VNC server and prints&lt;br /&gt;
# connection details to the job&#039;s stdout file&lt;br /&gt;
amdock_wrapper&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submit the AMDock job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jobid=$(sbatch --parsable binac2-amdock-1.6.2-example.slurm)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
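&lt;br /&gt;
The &amp;lt;code&amp;gt;--parsable&amp;lt;/code&amp;gt; flag makes &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; print only the job ID, so the shell variable can be reused later, for example to derive the name of the job&#039;s output file. A minimal sketch (the job ID below is a made-up example; &amp;lt;code&amp;gt;sbatch --parsable&amp;lt;/code&amp;gt; prints the real one):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Example job ID; sbatch --parsable prints the real one&lt;br /&gt;
jobid=12345&lt;br /&gt;
&lt;br /&gt;
# The job&#039;s standard output file is named after the job ID&lt;br /&gt;
echo slurm-${jobid}.out&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;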
&lt;br /&gt;
== Create SSH tunnel ==&lt;br /&gt;
&lt;br /&gt;
The compute node on which AMDock is running is not directly reachable from your local workstation. Therefore, you need to create an SSH tunnel from your workstation to the compute node via the BinAC 2 login node.&lt;br /&gt;
&lt;br /&gt;
The job&#039;s standard output file (&amp;lt;code&amp;gt;slurm-${jobid}.out&amp;lt;/code&amp;gt;) contains all the information you need. Please note that details such as the IP address, port number, and access URL will differ in your case.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat slurm-${jobid}.out&lt;br /&gt;
[...]&lt;br /&gt;
&lt;br /&gt;
   Paste this ssh command in a terminal on local host (i.e., laptop)&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   ssh -N -L 5901:172.0.0.1:5901 tu_iioba01@login.binac2.uni-tuebingen.de&lt;br /&gt;
&lt;br /&gt;
   Then you can connect to the session with your local vncviewer:&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   vncviewer localhost:5901&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux users can simply copy&amp;amp;paste the &amp;lt;code&amp;gt;ssh -N -L ...&amp;lt;/code&amp;gt; command into their terminal.&lt;br /&gt;
Windows users, please refer to [https://wiki.bwhpc.de/e/BinAC2/Software/Jupyterlab#Windows_Users this documentation]. It&#039;s about JupyterLab, but the SSH tunnel is created in a similar way.&lt;br /&gt;
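&lt;br /&gt;
For reference, &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; tells SSH not to run a remote command (the connection only carries the tunnel), and &amp;lt;code&amp;gt;-L&amp;lt;/code&amp;gt; forwards a local port to the compute node&#039;s VNC port via the login node. A minimal sketch that assembles the command from the values in the example output above (the IP address and username are just the example values; use the ones from your own job output):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Values taken from the example job output above&lt;br /&gt;
local_port=5901&lt;br /&gt;
node_ip=172.0.0.1&lt;br /&gt;
&lt;br /&gt;
# -L forwards local_port on your workstation to the VNC port on the compute node&lt;br /&gt;
echo ssh -N -L ${local_port}:${node_ip}:${local_port} tu_iioba01@login.binac2.uni-tuebingen.de&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;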
&lt;br /&gt;
== Connect to VNC Server ==&lt;br /&gt;
&lt;br /&gt;
Now you can use your local VNC viewer to access the VNC session. Please use the port you were given when starting the VNC server. At some point you will need to enter the password you created earlier.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncviewer localhost:5901&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should be greeted with the virtual desktop running AMDock.&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2-amdock.png | 800px | center | Xterm opened in VNC viewer]]&lt;br /&gt;
&lt;br /&gt;
Attention: If you close the AMDock window, the VNC server, as well as the interactive job, will be stopped.&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Binac2-amdock.png&amp;diff=15235</id>
		<title>File:Binac2-amdock.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Binac2-amdock.png&amp;diff=15235"/>
		<updated>2025-08-20T15:48:13Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/TigerVNC&amp;diff=15234</id>
		<title>BinAC2/Software/TigerVNC</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/TigerVNC&amp;diff=15234"/>
		<updated>2025-08-20T14:38:15Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description   !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| vis/tigervnc&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://github.com/TigerVNC/tigervnc?tab=GPL-2.0-1-ov-file GPL-2.0]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://tigervnc.org/ Homepage]&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| [https://github.com/TigerVNC/tigervnc GitHub] &lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
|  Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
&lt;br /&gt;
TigerVNC is a high-performance, platform-neutral implementation of VNC (Virtual Network Computing), a client/server application that allows users to launch and interact with graphical applications on remote machines. TigerVNC provides the levels of performance necessary to run 3D and video applications, and it attempts to maintain a common look and feel and re-use components, where possible, across the various platforms that it supports.&lt;br /&gt;
&lt;br /&gt;
In simple terms: TigerVNC lets you run graphical programs on compute nodes while seeing and controlling them from your own computer.&lt;br /&gt;
&lt;br /&gt;
= Technical Background =&lt;br /&gt;
&lt;br /&gt;
This is some technical background. You don&#039;t need it in order to use TigerVNC, but you may be interested in how it works.&lt;br /&gt;
&lt;br /&gt;
== Why Use VNC on HPC? ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 doesn&#039;t have monitors or keyboards attached to the compute nodes. VNC allows you to:&lt;br /&gt;
&lt;br /&gt;
* Run graphical programs (like data visualization tools, editors with GUIs, or analysis software)&lt;br /&gt;
* See the program windows on your local computer&lt;br /&gt;
* Interact with these programs using your mouse and keyboard&lt;br /&gt;
 &lt;br /&gt;
== X11 - The Graphics System ==&lt;br /&gt;
&lt;br /&gt;
TigerVNC will provide a virtual display for your programs via an X11 server.&lt;br /&gt;
&lt;br /&gt;
Think of X11 as the &amp;quot;graphics engine&amp;quot; that makes windows, buttons, and visual interfaces work on Linux systems. Just like Windows has its own way of showing programs on screen, Linux uses X11 to display graphical applications.&lt;br /&gt;
When you run a program with a graphical interface on BinAC 2, X11 is what creates the windows and handles mouse clicks and keyboard input.&lt;br /&gt;
&lt;br /&gt;
== VNC - Remote Desktop Access ==&lt;br /&gt;
&lt;br /&gt;
VNC (Virtual Network Computing) is like having a &amp;quot;remote control&amp;quot; for a computer&#039;s desktop. It lets you see and control a computer&#039;s screen from anywhere over the network.&lt;br /&gt;
Imagine you&#039;re at home but want to use a computer at work - VNC lets you see that computer&#039;s desktop in a window on your home computer and control it as if you were sitting right in front of it.&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
&lt;br /&gt;
You will need a VNC viewer on your local workstation.&lt;br /&gt;
&lt;br /&gt;
For programs that are installed as modules on BinAC 2 we might already have ready-to-use example jobscripts, so you don&#039;t need to start the VNC server yourself. Please check the &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt; of the module or ask us via hpcmaster@uni-tuebingen.de.&lt;br /&gt;
&lt;br /&gt;
== VNC viewer ==&lt;br /&gt;
&lt;br /&gt;
You will need to install a VNC viewer on your computer in order to connect to the VNC server on BinAC. In principle, any VNC viewer should work, but installing the matching TigerVNC viewer avoids some problems from the start.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
There is an official TigerVNC client for Windows:&lt;br /&gt;
&lt;br /&gt;
* [https://sourceforge.net/projects/tigervnc/files/stable/1.15.0/vncviewer64-1.15.0.exe/download Download on Sourceforge]&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
&lt;br /&gt;
Many Linux distributions have TigerVNC available in their repositories. This will usually not install the newest TigerVNC version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt-get install tigervnc-viewer &lt;br /&gt;
&lt;br /&gt;
# RHEL/Rocky/Fedora&lt;br /&gt;
dnf install tigervnc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also install the newest version provided by TigerVNC on [https://sourceforge.net/projects/tigervnc/files/stable/1.15.0/ Sourceforge].&lt;br /&gt;
&lt;br /&gt;
== Start VNC Server on BinAC 2 ==&lt;br /&gt;
&lt;br /&gt;
First, submit an interactive job. This command starts a 4-hour interactive job and tells the system that it needs one display.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --partition=interactive -t 4:00:00 --gres=display:1 --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the job has started, load the TigerVNC module:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vis/tigervnc/1.15.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will add some helper scripts to your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are using TigerVNC for the first time, you will need to set a password for the VNC server. You don&#039;t need a view-only password.&lt;br /&gt;
Your password will be stored at &amp;lt;code&amp;gt;~/.config/tigervnc/passwd&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncpasswd &lt;br /&gt;
&lt;br /&gt;
Creating VNC password using TigerVNC container...&lt;br /&gt;
Password:&lt;br /&gt;
Verify:&lt;br /&gt;
Would you like to enter a view-only password (y/n)? n&lt;br /&gt;
A view-only password is not used&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now get an available display and start the VNC server; the connection details are printed to the console.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
display_num=$(get_display)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
start_vncserver ${display_num}&lt;br /&gt;
INFO:    Instance stats will not be available - requires cgroups v2 with systemd as manager.&lt;br /&gt;
INFO:    instance started successfully&lt;br /&gt;
&lt;br /&gt;
New &#039;node1-001:1 (tu_iioba01)&#039; desktop is node1-001:1&lt;br /&gt;
&lt;br /&gt;
Starting applications specified in /opt/bwhpc/common/vis/tigervnc/1.15.0/bin/xstartup&lt;br /&gt;
Log file is /home/tu/tu_tu/tu_iioba01/.vnc/node1-001:1.log&lt;br /&gt;
&lt;br /&gt;
   Paste this ssh command in a terminal on local host (i.e., laptop)&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   ssh -N -L 5901:172.0.0.1:5901 tu_iioba01@login.binac2.uni-tuebingen.de&lt;br /&gt;
&lt;br /&gt;
   Then you can connect to the session with your local vncviewer:&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   vncviewer localhost:5901&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
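&lt;br /&gt;
As the output above shows, the VNC port is simply 5900 plus the display number, so display :1 corresponds to port 5901. A minimal sketch of this relation (the variable name follows the example above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Display number returned by get_display, e.g. 1&lt;br /&gt;
display_num=1&lt;br /&gt;
&lt;br /&gt;
# VNC servers listen on TCP port 5900 + display number&lt;br /&gt;
echo $((5900 + display_num))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;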
&lt;br /&gt;
You have successfully started the VNC server.&lt;br /&gt;
&lt;br /&gt;
== Connect to VNC Server ==&lt;br /&gt;
&lt;br /&gt;
You will need to create an SSH tunnel from your local workstation to the compute node on which the VNC server is running.&lt;br /&gt;
Please refer to the existing documentation on the [https://wiki.bwhpc.de/e/BinAC2/Software/Jupyterlab#Create_SSH_tunnel JupyterLab page].&lt;br /&gt;
&lt;br /&gt;
Now you can use your local VNC viewer to access the VNC session. Please use the port you were given when starting the VNC server. At some point you will need to enter the password you created earlier.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncviewer localhost:5901&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should be greeted with the virtual desktop.&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2-tigervnc-xfce.png | 800px | center | VNC viewer with Xfce window manager]]&lt;br /&gt;
&lt;br /&gt;
You can now start your program. In this example we start a simple X terminal window. The terminal should then appear in your VNC viewer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DISPLAY=:${display_num} xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2-tigervnc-xfce-xterm.png | 800px | center | Xterm opened in VNC viewer]]&lt;br /&gt;
&lt;br /&gt;
== Stop VNC Server ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s important to stop the VNC server when you&#039;re done.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stop_vncserver ${display_num}&lt;br /&gt;
Killing Xvnc process ID 3515575&lt;br /&gt;
INFO:    Stopping tigervnc_1 instance of /opt/bwhpc/common/vis/tigervnc/1.15.0/tigervnc-1.15.0.sif (PID=3515518)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/TigerVNC&amp;diff=15233</id>
		<title>BinAC2/Software/TigerVNC</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/TigerVNC&amp;diff=15233"/>
		<updated>2025-08-20T14:37:47Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description   !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| vis/tigervnc&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://github.com/TigerVNC/tigervnc?tab=GPL-2.0-1-ov-file GPL-2.0]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://tigervnc.org/ Homepage]&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| [https://github.com/TigerVNC/tigervnc GitHub] &lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
|  Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
&lt;br /&gt;
TigerVNC is a high-performance, platform-neutral implementation of VNC (Virtual Network Computing), a client/server application that allows users to launch and interact with graphical applications on remote machines. TigerVNC provides the levels of performance necessary to run 3D and video applications, and it attempts to maintain a common look and feel and re-use components, where possible, across the various platforms that it supports.&lt;br /&gt;
&lt;br /&gt;
In simple terms: TigerVNC lets you run graphical programs on compute nodes while seeing and controlling them from your own computer.&lt;br /&gt;
&lt;br /&gt;
= Technical Background =&lt;br /&gt;
&lt;br /&gt;
This is some technical background. You don&#039;t need it in order to use TigerVNC, but you may be interested in how it works.&lt;br /&gt;
&lt;br /&gt;
== Why Use VNC on HPC? ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 doesn&#039;t have monitors or keyboards attached to the compute nodes. VNC allows you to:&lt;br /&gt;
&lt;br /&gt;
* Run graphical programs (like data visualization tools, editors with GUIs, or analysis software)&lt;br /&gt;
* See the program windows on your local computer&lt;br /&gt;
* Interact with these programs using your mouse and keyboard&lt;br /&gt;
 &lt;br /&gt;
== X11 - The Graphics System ==&lt;br /&gt;
&lt;br /&gt;
TigerVNC will provide a virtual display for your programs via an X11 server.&lt;br /&gt;
&lt;br /&gt;
Think of X11 as the &amp;quot;graphics engine&amp;quot; that makes windows, buttons, and visual interfaces work on Linux systems. Just like Windows has its own way of showing programs on screen, Linux uses X11 to display graphical applications.&lt;br /&gt;
When you run a program with a graphical interface on BinAC 2, X11 is what creates the windows and handles mouse clicks and keyboard input.&lt;br /&gt;
&lt;br /&gt;
== VNC - Remote Desktop Access ==&lt;br /&gt;
&lt;br /&gt;
VNC (Virtual Network Computing) is like having a &amp;quot;remote control&amp;quot; for a computer&#039;s desktop. It lets you see and control a computer&#039;s screen from anywhere over the network.&lt;br /&gt;
Imagine you&#039;re at home but want to use a computer at work - VNC lets you see that computer&#039;s desktop in a window on your home computer and control it as if you were sitting right in front of it.&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
&lt;br /&gt;
You will need a VNC viewer on your local workstation.&lt;br /&gt;
&lt;br /&gt;
For programs that are installed as modules on BinAC 2 we might already have ready-to-use example jobscripts, so you don&#039;t need to start the VNC server yourself. Please check the &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt; of the module or ask us via hpcmaster@uni-tuebingen.de.&lt;br /&gt;
&lt;br /&gt;
== VNC viewer ==&lt;br /&gt;
&lt;br /&gt;
You will need to install a VNC viewer on your computer in order to connect to the VNC server on BinAC. In principle, any VNC viewer should work, but installing the matching TigerVNC viewer avoids some problems from the start.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
There is an official TigerVNC client for Windows:&lt;br /&gt;
&lt;br /&gt;
* [https://sourceforge.net/projects/tigervnc/files/stable/1.15.0/vncviewer64-1.15.0.exe/download Download on Sourceforge]&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
&lt;br /&gt;
Many Linux distributions have TigerVNC available in their repositories. This will usually not install the newest TigerVNC version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt-get install tigervnc-viewer &lt;br /&gt;
&lt;br /&gt;
# RHEL/Rocky/Fedora&lt;br /&gt;
dnf install tigervnc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also install the newest version provided by TigerVNC on [https://sourceforge.net/projects/tigervnc/files/stable/1.15.0/ Sourceforge].&lt;br /&gt;
&lt;br /&gt;
== Start VNC Server on BinAC 2 ==&lt;br /&gt;
&lt;br /&gt;
First, submit an interactive job. This command starts a 4-hour interactive job and tells the system that it needs one display.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --partition=interactive -t 4:00:00 --gres=display:1 --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the job has started, load the TigerVNC module:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vis/tigervnc/1.15.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will add some helper scripts to your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are using TigerVNC for the first time, you will need to set a password for the VNC server. You don&#039;t need a view-only password.&lt;br /&gt;
Your password will be stored at &amp;lt;code&amp;gt;~/.config/tigervnc/passwd&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncpasswd &lt;br /&gt;
&lt;br /&gt;
Creating VNC password using TigerVNC container...&lt;br /&gt;
Password:&lt;br /&gt;
Verify:&lt;br /&gt;
Would you like to enter a view-only password (y/n)? n&lt;br /&gt;
A view-only password is not used&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now get an available display and start the VNC server; the connection details are printed to the console.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
display_num=$(get_display)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
start_vncserver ${display_num}&lt;br /&gt;
INFO:    Instance stats will not be available - requires cgroups v2 with systemd as manager.&lt;br /&gt;
INFO:    instance started successfully&lt;br /&gt;
&lt;br /&gt;
New &#039;node1-001:1 (tu_iioba01)&#039; desktop is node1-001:1&lt;br /&gt;
&lt;br /&gt;
Starting applications specified in /opt/bwhpc/common/vis/tigervnc/1.15.0/bin/xstartup&lt;br /&gt;
Log file is /home/tu/tu_tu/tu_iioba01/.vnc/node1-001:1.log&lt;br /&gt;
&lt;br /&gt;
   Paste this ssh command in a terminal on local host (i.e., laptop)&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   ssh -N -L 5901:172.0.0.1:5901 tu_iioba01@login.binac2.uni-tuebingen.de&lt;br /&gt;
&lt;br /&gt;
   Then you can connect to the session with your local vncviewer:&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   vncviewer localhost:5901&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have successfully started the VNC server.&lt;br /&gt;
&lt;br /&gt;
== Connect to VNC Server ==&lt;br /&gt;
&lt;br /&gt;
You will need to create an SSH tunnel from your local workstation to the compute node on which the VNC server is running.&lt;br /&gt;
Please refer to the existing documentation on the [https://wiki.bwhpc.de/e/BinAC2/Software/Jupyterlab#Create_SSH_tunnel JupyterLab page].&lt;br /&gt;
&lt;br /&gt;
Now you can use your local VNC viewer to access the VNC session. Please use the port you were given when starting the VNC server. At some point you will need to enter the password you created earlier.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncviewer localhost:5901&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should be greeted with the virtual desktop.&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2-tigervnc-xfce.png | 800px | center | VNC viewer with Xfce window manager]]&lt;br /&gt;
&lt;br /&gt;
You can now start your program. In this example we start a simple X terminal window. The terminal should then appear in your VNC viewer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DISPLAY=:${display_num} xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2-tigervnc-xfce-xterm.png | 800px | center | Xterm opened in VNC viewer]]&lt;br /&gt;
&lt;br /&gt;
== Stop VNC Server ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s important to stop the VNC server when you&#039;re done.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stop_vncserver ${display_num}&lt;br /&gt;
Killing Xvnc process ID 3515575&lt;br /&gt;
INFO:    Stopping tigervnc_1 instance of /opt/bwhpc/common/vis/tigervnc/1.15.0/tigervnc-1.15.0.sif (PID=3515518)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/TigerVNC&amp;diff=15232</id>
		<title>BinAC2/Software/TigerVNC</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/TigerVNC&amp;diff=15232"/>
		<updated>2025-08-20T14:33:47Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description   !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| vis/tigervnc&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://github.com/TigerVNC/tigervnc?tab=GPL-2.0-1-ov-file GPL-2.0]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://tigervnc.org/ Homepage]&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| [https://github.com/TigerVNC/tigervnc GitHub] &lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
|  Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
&lt;br /&gt;
TigerVNC is a high-performance, platform-neutral implementation of VNC (Virtual Network Computing), a client/server application that allows users to launch and interact with graphical applications on remote machines. TigerVNC provides the levels of performance necessary to run 3D and video applications, and it attempts to maintain a common look and feel and re-use components, where possible, across the various platforms that it supports.&lt;br /&gt;
&lt;br /&gt;
In simple terms: TigerVNC lets you run graphical programs on compute nodes while seeing and controlling them from your own computer.&lt;br /&gt;
&lt;br /&gt;
= Technical Background =&lt;br /&gt;
&lt;br /&gt;
This is some technical background. You don&#039;t need it in order to use TigerVNC, but you may be interested in how it works.&lt;br /&gt;
&lt;br /&gt;
== Why Use VNC on HPC? ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 doesn&#039;t have monitors or keyboards attached to the compute nodes. VNC allows you to:&lt;br /&gt;
&lt;br /&gt;
* Run graphical programs (like data visualization tools, editors with GUIs, or analysis software)&lt;br /&gt;
* See the program windows on your local computer&lt;br /&gt;
* Interact with these programs using your mouse and keyboard&lt;br /&gt;
 &lt;br /&gt;
== X11 - The Graphics System ==&lt;br /&gt;
&lt;br /&gt;
TigerVNC will provide a virtual display for your programs via an X11 server.&lt;br /&gt;
&lt;br /&gt;
Think of X11 as the &amp;quot;graphics engine&amp;quot; that makes windows, buttons, and visual interfaces work on Linux systems. Just like Windows has its own way of showing programs on screen, Linux uses X11 to display graphical applications.&lt;br /&gt;
When you run a program with a graphical interface on BinAC 2, X11 is what creates the windows and handles mouse clicks and keyboard input.&lt;br /&gt;
&lt;br /&gt;
== VNC - Remote Desktop Access ==&lt;br /&gt;
&lt;br /&gt;
VNC (Virtual Network Computing) is like having a &amp;quot;remote control&amp;quot; for a computer&#039;s desktop. It lets you see and control a computer&#039;s screen from anywhere over the network.&lt;br /&gt;
Imagine you&#039;re at home but want to use a computer at work - VNC lets you see that computer&#039;s desktop in a window on your home computer and control it as if you were sitting right in front of it.&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== VNC viewer ==&lt;br /&gt;
&lt;br /&gt;
You will need to install a VNC viewer on your computer in order to connect to the VNC server on BinAC. In principle, any VNC viewer should work, but installing the matching TigerVNC viewer avoids some problems from the start.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
There is an official TigerVNC client for Windows:&lt;br /&gt;
&lt;br /&gt;
* [https://sourceforge.net/projects/tigervnc/files/stable/1.15.0/vncviewer64-1.15.0.exe/download Download on Sourceforge]&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
&lt;br /&gt;
Many Linux distributions have TigerVNC available in their repositories. This will usually not install the newest TigerVNC version.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo apt-get install tigervnc-viewer &lt;br /&gt;
&lt;br /&gt;
# RHEL/Rocky/Fedora&lt;br /&gt;
dnf install tigervnc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can also install the newest version provided by TigerVNC on [https://sourceforge.net/projects/tigervnc/files/stable/1.15.0/ Sourceforge].&lt;br /&gt;
&lt;br /&gt;
== Start VNC Server on BinAC 2 ==&lt;br /&gt;
&lt;br /&gt;
First, submit an interactive job. This command starts a 4-hour interactive job and tells the system that it needs one display.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun --partition=interactive -t 4:00:00 --gres=display:1 --pty bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the job has started, load the TigerVNC module:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load vis/tigervnc/1.15.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will add some helper scripts to your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are using TigerVNC for the first time, you will need to set a password for the VNC server. You don&#039;t need a view-only password.&lt;br /&gt;
Your password will be stored at &amp;lt;code&amp;gt;~/.config/tigervnc/passwd&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncpasswd &lt;br /&gt;
&lt;br /&gt;
Creating VNC password using TigerVNC container...&lt;br /&gt;
Password:&lt;br /&gt;
Verify:&lt;br /&gt;
Would you like to enter a view-only password (y/n)? n&lt;br /&gt;
A view-only password is not used&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now get an available display and start the VNC server; the connection details are printed to the console.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
display_num=$(get_display)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
start_vncserver ${display_num}&lt;br /&gt;
INFO:    Instance stats will not be available - requires cgroups v2 with systemd as manager.&lt;br /&gt;
INFO:    instance started successfully&lt;br /&gt;
&lt;br /&gt;
New &#039;node1-001:1 (tu_iioba01)&#039; desktop is node1-001:1&lt;br /&gt;
&lt;br /&gt;
Starting applications specified in /opt/bwhpc/common/vis/tigervnc/1.15.0/bin/xstartup&lt;br /&gt;
Log file is /home/tu/tu_tu/tu_iioba01/.vnc/node1-001:1.log&lt;br /&gt;
&lt;br /&gt;
   Paste this ssh command in a terminal on local host (i.e., laptop)&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   ssh -N -L 5901:172.0.0.1:5901 tu_iioba01@login.binac2.uni-tuebingen.de&lt;br /&gt;
&lt;br /&gt;
   Then you can connect to the session with your local vncviewer:&lt;br /&gt;
   -----------------------------------------------------------------&lt;br /&gt;
   vncviewer localhost:5901&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have successfully started the VNC server.&lt;br /&gt;
&lt;br /&gt;
== Connect to VNC Server ==&lt;br /&gt;
&lt;br /&gt;
You will need to create an SSH tunnel from your local workstation to the compute node on which the VNC server is running.&lt;br /&gt;
Please refer to the existing documentation on the [https://wiki.bwhpc.de/e/BinAC2/Software/Jupyterlab#Create_SSH_tunnel JupyterLab page].&lt;br /&gt;
&lt;br /&gt;
Now you can use your local VNC viewer to access the VNC session. Please use the port you were given when starting the VNC server. At some point you will need to enter the password you created earlier.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncviewer localhost:5901&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should be greeted with the virtual desktop.&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2-tigervnc-xfce.png | 800px | center | VNC viewer with Xfce window manager]]&lt;br /&gt;
&lt;br /&gt;
You can now start your program. In this example we start a simple X terminal window. The terminal should then appear in your VNC viewer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DISPLAY=:${display_num} xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2-tigervnc-xfce-xterm.png | 800px | center | Xterm opened in VNC viewer]]&lt;br /&gt;
&lt;br /&gt;
== Stop VNC Server ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s important to stop the VNC server when you&#039;re done.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stop_vncserver ${display_num}&lt;br /&gt;
Killing Xvnc process ID 3515575&lt;br /&gt;
INFO:    Stopping tigervnc_1 instance of /opt/bwhpc/common/vis/tigervnc/1.15.0/tigervnc-1.15.0.sif (PID=3515518)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Binac2-tigervnc-xfce-xterm.png&amp;diff=15231</id>
		<title>File:Binac2-tigervnc-xfce-xterm.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Binac2-tigervnc-xfce-xterm.png&amp;diff=15231"/>
		<updated>2025-08-20T14:30:25Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Binac2-tigervnc-xfce.png&amp;diff=15230</id>
		<title>File:Binac2-tigervnc-xfce.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Binac2-tigervnc-xfce.png&amp;diff=15230"/>
		<updated>2025-08-20T14:29:53Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/Jupyterlab&amp;diff=15227</id>
		<title>BinAC2/Software/Jupyterlab</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/Jupyterlab&amp;diff=15227"/>
		<updated>2025-08-20T12:39:45Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Softwarepage|devel/jupyterlab}}&lt;br /&gt;
&lt;br /&gt;
{| width=700px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description   !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| devel/jupyterlab&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://github.com/jupyterlab/jupyterlab/blob/main/LICENSE JupyterLab License]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://jupyter.org/ Homepage]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
|  Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Description = &lt;br /&gt;
&lt;br /&gt;
JupyterLab is a web-based interactive development environment for notebooks, code, and data.&lt;br /&gt;
&lt;br /&gt;
Currently, BinAC 2 provides the following [https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html#jupyter-minimal-notebook JupyterLab Docker images] via Apptainer:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;syntaxhighlight inline&amp;gt;minimal-notebook&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* &amp;lt;syntaxhighlight inline&amp;gt;r-notebook&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* &amp;lt;syntaxhighlight inline&amp;gt;julia-notebook&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* &amp;lt;syntaxhighlight inline&amp;gt;scipy-notebook&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
&lt;br /&gt;
This guide is valid for &amp;lt;syntaxhighlight inline&amp;gt;minimal-notebook&amp;lt;/syntaxhighlight&amp;gt;. You can also follow it for the other notebook flavors &amp;amp;mdash; just replace &amp;lt;syntaxhighlight inline&amp;gt;minimal-notebook&amp;lt;/syntaxhighlight&amp;gt; with one of the other notebooks listed above.&lt;br /&gt;
&lt;br /&gt;
You will start a job on the cluster as usual, create an SSH tunnel, and connect to the running JupyterLab instance via your browser.&lt;br /&gt;
&lt;br /&gt;
== Start JupyterLab ==&lt;br /&gt;
&lt;br /&gt;
The module &amp;lt;syntaxhighlight inline&amp;gt;devel/jupyterlab&amp;lt;/syntaxhighlight&amp;gt; provides a job script for starting a JupyterLab instance on BinAC 2. First, load the module:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ module load devel/jupyterlab&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Upon loading the module, you can list the available template job scripts and copy the one you need into your workspace:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ ls $JUPYTERLAB_EXA_DIR &lt;br /&gt;
binac2-julia-notebook.slurm  binac2-minimal-notebook.slurm  binac2-r-notebook.slurm  binac2-scipy-notebook.slurm&lt;br /&gt;
&lt;br /&gt;
$ cp $JUPYTERLAB_EXA_DIR/binac2-minimal-notebook.slurm &amp;lt;your workspace&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The job script is very simple; you only need to adjust the hardware resources according to your needs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
# Adjust these values as needed&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem=2gb&lt;br /&gt;
#SBATCH --time=6:00:00&lt;br /&gt;
&lt;br /&gt;
# Don&#039;t change these settings&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --partition=interactive&lt;br /&gt;
#SBATCH --job-name=minimal-notebook-7.4.1&lt;br /&gt;
&lt;br /&gt;
# Load Jupyterlab module&lt;br /&gt;
module load devel/jupyterlab/7.4.1&lt;br /&gt;
&lt;br /&gt;
# Start jupyterlab&lt;br /&gt;
${JUPYTERLAB_BIN_DIR}/minimal-notebook.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now submit the job to SLURM. This command stores the job ID in the variable &amp;lt;syntaxhighlight inline&amp;gt;jobid&amp;lt;/syntaxhighlight&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ jobid=$(sbatch --parsable binac2-minimal-notebook.slurm)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Depending on the current load on BinAC 2 and the resources you requested in your job script, it may take some time for the job to start.&lt;br /&gt;
|}&lt;br /&gt;
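&lt;br /&gt;
You can check whether your job has started with &amp;lt;syntaxhighlight inline&amp;gt;squeue&amp;lt;/syntaxhighlight&amp;gt;; the state column shows &amp;lt;syntaxhighlight inline&amp;gt;R&amp;lt;/syntaxhighlight&amp;gt; once the job is running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ squeue -j ${jobid}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;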
&lt;br /&gt;
== Create SSH tunnel ==&lt;br /&gt;
&lt;br /&gt;
The compute node on which JupyterLab is running is not directly reachable from your local workstation. Therefore, you need to create an SSH tunnel from your workstation to the compute node via the BinAC 2 login node.&lt;br /&gt;
&lt;br /&gt;
The job&#039;s standard output file (&amp;lt;syntaxhighlight inline&amp;gt;slurm-${jobid}.out&amp;lt;/syntaxhighlight&amp;gt;) contains all the information you need. Please note that details such as the IP address, port number, and access URL will differ in your case.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ cat slurm-${jobid}.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2_jupyterlab_connection_details.png | 800px | center | JupyterLab connection info]]&lt;br /&gt;
&lt;br /&gt;
=== Linux Users ===&lt;br /&gt;
&lt;br /&gt;
Copy the &amp;lt;syntaxhighlight inline&amp;gt;ssh -N -L ... &amp;lt;/syntaxhighlight&amp;gt; command and execute it in a shell on your workstation. After successful authentication, the SSH tunnel will be ready to use. The ssh command does not return any output. If there are no error messages, everything should be fine:&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2_jupyterlab_ssh_tunnel_linux.png | 800px | center | SSH tunnel creation with Linux]]&lt;br /&gt;
&lt;br /&gt;
=== Windows Users ===&lt;br /&gt;
&lt;br /&gt;
If you are using Windows, you will need to create the SSH tunnel using an SSH client of your choice (e.g. MobaXTerm, PuTTY, etc.). Here, we will show how to create an SSH tunnel with MobaXTerm.&lt;br /&gt;
&lt;br /&gt;
Select &amp;lt;syntaxhighlight inline&amp;gt;Tunneling&amp;lt;/syntaxhighlight&amp;gt; in the top ribbon. Then click &amp;lt;syntaxhighlight inline&amp;gt;New SSH tunnel&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
Configure the SSH tunnel with the correct values taken from the SSH tunnel information provided above.&lt;br /&gt;
For the example in this tutorial it looks as follows:&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2_jupyterlab_mobaxterm.png | 800px | center | SSH tunnel creation with MobaXTerm]]&lt;br /&gt;
&lt;br /&gt;
== Access JupyterLab ==&lt;br /&gt;
&lt;br /&gt;
JupyterLab is now running on a BinAC 2 compute node, and you have created an SSH tunnel from your workstation to that compute node. Open a browser and paste the URL with the access token into the address bar:&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2_jupyterlab_browser_url.png | 800px | center | JupyterLab URL in browser ]]&lt;br /&gt;
&lt;br /&gt;
Your browser should now display the JupyterLab interface:&lt;br /&gt;
&lt;br /&gt;
[[File:Binac2_jupyterlab_browser_lab.png | 800px | center ]]&lt;br /&gt;
&lt;br /&gt;
== Shut Down JupyterLab ==&lt;br /&gt;
&lt;br /&gt;
You can shut down JupyterLab via &amp;lt;syntaxhighlight inline&amp;gt;File -&amp;gt; Shut Down&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
Please note that this will also terminate your compute job on BinAC 2!&lt;br /&gt;
&lt;br /&gt;
[[ File:Binac2_jupyterlab_browser_shutdown.png | 800px | center | JupyterLab user interface]]&lt;br /&gt;
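&lt;br /&gt;
Alternatively, you can end the session from a BinAC 2 shell by cancelling the compute job (assuming the job ID is still stored in the &amp;lt;syntaxhighlight inline&amp;gt;jobid&amp;lt;/syntaxhighlight&amp;gt; variable):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ scancel ${jobid}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;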
&lt;br /&gt;
== Tips &amp;amp; Tricks ==&lt;br /&gt;
&lt;br /&gt;
=== Managing Notebooks ===&lt;br /&gt;
&lt;br /&gt;
JupyterLab&#039;s root directory will be your home directory. Since your home directory is backed up daily, you may want to store your notebooks there.&lt;br /&gt;
It may also be a good idea to place your notebooks under proper version control using Git.&lt;br /&gt;
&lt;br /&gt;
=== Access &amp;lt;syntaxhighlight inline&amp;gt;work&amp;lt;/syntaxhighlight&amp;gt; and &amp;lt;syntaxhighlight inline&amp;gt;project&amp;lt;/syntaxhighlight&amp;gt; ===&lt;br /&gt;
&lt;br /&gt;
The notebooks will always be able to access your data stored in &amp;lt;syntaxhighlight inline&amp;gt;work&amp;lt;/syntaxhighlight&amp;gt; and &amp;lt;syntaxhighlight inline&amp;gt;project&amp;lt;/syntaxhighlight&amp;gt;. However, the file browser shows the content in your home directory and you won’t be able to access &amp;lt;syntaxhighlight inline&amp;gt;work&amp;lt;/syntaxhighlight&amp;gt; and &amp;lt;syntaxhighlight inline&amp;gt;project&amp;lt;/syntaxhighlight&amp;gt; initially.&lt;br /&gt;
You can create symbolic links in your home directory to work and project, making these two partitions available in JupyterLab&#039;s file browser.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ cd ~&lt;br /&gt;
$ ln -s /pfs/10/work/ $HOME/work&lt;br /&gt;
$ ln -s /pfs/10/project $HOME/project&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Through that link in your home directory you can navigate to your research data in Jupyterlab&#039;s file browser.&lt;br /&gt;
Here is an example of how such a link appears in the file browser:&lt;br /&gt;
&lt;br /&gt;
[[ File:Binac2_jupyterlab_link.png | 800px | center | Access work and project storage in JupyterLab file browser ]]&lt;br /&gt;
&lt;br /&gt;
=== Managing Kernels ===&lt;br /&gt;
&lt;br /&gt;
Depending on the notebook you started, the available kernels will differ.&lt;br /&gt;
The Python notebook, for example, has only a Python kernel; the R notebook has only an R kernel; and so on.&lt;br /&gt;
Given the nearly endless combinations of programming languages and packages they provide, we suggest creating your own kernels if needed.&lt;br /&gt;
The kernels are stored in your home directory on BinAC 2: &amp;lt;syntaxhighlight inline&amp;gt;$HOME/.local/share/jupyter/kernels/&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You cannot install new kernels directly from within JupyterLab; this must be done from the command line in your usual BinAC 2 shell session.&lt;br /&gt;
&lt;br /&gt;
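To see which kernels are currently installed, you can list the kernel specifications from a shell in which the &amp;lt;syntaxhighlight inline&amp;gt;jupyter&amp;lt;/syntaxhighlight&amp;gt; command is available (for example, inside a Conda environment that provides it):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
$ jupyter kernelspec list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;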
=== Add a new Kernel ===&lt;br /&gt;
&lt;br /&gt;
We will show you how to create new kernels for Python and R.&lt;br /&gt;
&lt;br /&gt;
==== Python ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load devel/miniforge&lt;br /&gt;
conda create --name kernel_env python=3.13 pandas numpy matplotlib ipykernel            # 1&lt;br /&gt;
conda activate kernel_env                                                               # 2&lt;br /&gt;
python -m ipykernel install --user --name pandas --display-name=&amp;quot;Python 3.13 (pandas)&amp;quot;  # 3&lt;br /&gt;
# Installed kernelspec pandas in $HOME/.local/share/jupyter/kernels/pandas&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first command creates a new Conda environment called &amp;lt;syntaxhighlight inline&amp;gt;kernel_env&amp;lt;/syntaxhighlight&amp;gt; and installs a specific Python version along with a few additional Python packages.&lt;br /&gt;
It&#039;s important that you also install &amp;lt;syntaxhighlight inline&amp;gt;ipykernel&amp;lt;/syntaxhighlight&amp;gt;, as we will need it later to create the JupyterLab kernel.&lt;br /&gt;
The second command activates the &amp;lt;syntaxhighlight inline&amp;gt;kernel_env&amp;lt;/syntaxhighlight&amp;gt; Conda environment.&lt;br /&gt;
The third command creates the new JupyterLab kernel.&lt;br /&gt;
&lt;br /&gt;
[[ File:Binac2_jupyterlab_new_kernel.png | 800px | center | New Python kernel in JupyterLab]]&lt;br /&gt;
&lt;br /&gt;
==== R ====&lt;br /&gt;
&lt;br /&gt;
The instructions for creating new R kernels are slightly different.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load devel/miniforge&lt;br /&gt;
conda create --name r_kernel_env r-base=4.4.3 jupyter r-irkernel&lt;br /&gt;
conda activate r_kernel_env &lt;br /&gt;
R&lt;br /&gt;
# In the R-Session&lt;br /&gt;
install.packages(...)&lt;br /&gt;
IRkernel::installspec(name = &#039;ir44&#039;, displayname = &#039;R 4.4.3&#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first command creates a new Conda environment called &amp;lt;syntaxhighlight inline&amp;gt;r_kernel_env&amp;lt;/syntaxhighlight&amp;gt; and installs a specific R version.&lt;br /&gt;
It&#039;s important to also install &amp;lt;syntaxhighlight inline&amp;gt;r-irkernel&amp;lt;/syntaxhighlight&amp;gt;, as we will need it later to create the JupyterLab kernel.&lt;br /&gt;
The second command activates the &amp;lt;syntaxhighlight inline&amp;gt;r_kernel_env&amp;lt;/syntaxhighlight&amp;gt; Conda environment, and the third opens an R session.&lt;br /&gt;
In this session, you can install any R packages you need for your kernel.&lt;br /&gt;
Finally, create the new kernel with the &amp;lt;syntaxhighlight inline&amp;gt;installspec&amp;lt;/syntaxhighlight&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
=== Remove a Kernel ===&lt;br /&gt;
&lt;br /&gt;
To remove a kernel from JupyterLab, simply delete the corresponding directory at &amp;lt;syntaxhighlight inline&amp;gt;$HOME/.local/share/jupyter/kernels/&amp;lt;/syntaxhighlight&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Remove the JupyterLab kernel installed in the previous example&lt;br /&gt;
rm -rf $HOME/.local/share/jupyter/kernels/pandas&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also, remove the corresponding Conda environment if you do not need it any more:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
conda env remove --name kernel_env&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/SLURM_Partitions&amp;diff=15226</id>
		<title>BinAC2/SLURM Partitions</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/SLURM_Partitions&amp;diff=15226"/>
		<updated>2025-08-20T12:38:09Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Partitions ==&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 provides two partitions for job submission.&lt;br /&gt;
Within a partition, job allocations are routed automatically to the most suitable compute node(s) for the requested resources (e.g. number of nodes and cores, memory, number and type of GPUs).&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition runs at most 8 jobs per user at the same time. A user can use at most 4 A100, 8 A30, and 4 H200 GPUs at the same time.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;interactive&amp;lt;/code&amp;gt; partition runs only 1 job per user at a time.&lt;br /&gt;
This partition is dedicated to testing and to using tools via a graphical user interface.&lt;br /&gt;
The four nodes &amp;lt;code&amp;gt;node1-00[1-4]&amp;lt;/code&amp;gt; are exclusively reserved for this partition.&lt;br /&gt;
You can run a VNC server in this partition. Please use &amp;lt;code&amp;gt;#SBATCH --gres=display:1&amp;lt;/code&amp;gt; in your job script or &amp;lt;code&amp;gt;--gres=display:1&amp;lt;/code&amp;gt; on the command line if you need a display. This ensures that your job starts on a node with &amp;quot;free&amp;quot; displays, because each of the four nodes provides only 20 virtual displays.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
All partitions are operated in shared mode, that is, jobs from different users can be executed on the same node. However, one can get exclusive access to compute nodes by using the &amp;quot;--exclusive&amp;quot; option.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Partition&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Node Access Policy&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Node Types&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Default&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Limits&lt;br /&gt;
|-&lt;br /&gt;
| compute (default)&lt;br /&gt;
| shared&lt;br /&gt;
| cpu&lt;br /&gt;
| ntasks=1, time=00:10:00, mem-per-cpu=1gb&lt;br /&gt;
| nodes=2, time=14-00:00:00&lt;br /&gt;
|-&lt;br /&gt;
| gpu&lt;br /&gt;
| shared&lt;br /&gt;
| gpu &lt;br /&gt;
| ntasks=1, time=00:10:00, mem-per-cpu=1gb&lt;br /&gt;
| time=14-00:00:00&amp;lt;br /&amp;gt;MaxJobsPerUser: 8&amp;lt;br /&amp;gt;MaxTRESPerUser:&amp;lt;br /&amp;gt;&amp;lt;pre&amp;gt;gres/gpu:a100=4,&lt;br /&gt;
gres/gpu:a30=8,&lt;br /&gt;
gres/gpu:h200=4&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| interactive&lt;br /&gt;
| shared&lt;br /&gt;
| cpu &lt;br /&gt;
| ntasks=1, time=00:10:00, mem-per-cpu=1gb&lt;br /&gt;
| time=10:00:00&amp;lt;br /&amp;gt;MaxJobsPerUser: 1&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Parallel Jobs ===&lt;br /&gt;
&lt;br /&gt;
In order to submit parallel jobs to the InfiniBand part of the cluster, i.e., for fast inter-node communication, please select the appropriate nodes via the &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt; option in your job script. For less demanding parallel jobs, you may try the &amp;lt;code&amp;gt;--constraint=eth&amp;lt;/code&amp;gt; option, which utilizes 100Gb/s Ethernet instead of the low-latency 100Gb/s InfiniBand.&lt;br /&gt;
&lt;br /&gt;
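For example, the header of a two-node InfiniBand job script could look like this; the task and time values are purely illustrative:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=32&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
# Request nodes connected via InfiniBand&lt;br /&gt;
#SBATCH --constraint=ib&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;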
=== GPU Jobs ===&lt;br /&gt;
&lt;br /&gt;
BinAC 2 provides different GPU models for computations. Please select the appropriate GPU type and the number of GPUs with the &amp;lt;code&amp;gt;--gres=gpu:type:N&amp;lt;/code&amp;gt; option in your job script.&lt;br /&gt;
&lt;br /&gt;
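For example, to request two A100 GPUs in a job script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gres=gpu:a100:2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;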
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| GPU&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| GPU Memory&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| # GPUs per Node [N]&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Submit Option&lt;br /&gt;
|-&lt;br /&gt;
| Nvidia A30&lt;br /&gt;
| 24GB&lt;br /&gt;
| 2&lt;br /&gt;
| &amp;lt;code&amp;gt;--gres=gpu:a30:N&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Nvidia A100&lt;br /&gt;
| 80GB&lt;br /&gt;
| 4&lt;br /&gt;
| &amp;lt;code&amp;gt;--gres=gpu:a100:N&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Nvidia H200&lt;br /&gt;
| 141GB&lt;br /&gt;
| 4&lt;br /&gt;
| &amp;lt;code&amp;gt;--gres=gpu:h200:N&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/SLURM_Partitions&amp;diff=15225</id>
		<title>BinAC2/SLURM Partitions</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/SLURM_Partitions&amp;diff=15225"/>
		<updated>2025-08-20T12:34:03Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Partitions ==&lt;br /&gt;
&lt;br /&gt;
The bwForCluster BinAC 2 provides two partitions for job submission.&lt;br /&gt;
Within a partition, job allocations are routed automatically to the most suitable compute node(s) for the requested resources (e.g. number of nodes and cores, memory, number and type of GPUs).&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;gpu&amp;lt;/code&amp;gt; partition runs at most 8 jobs per user at the same time. A user can use at most 4 A100, 8 A30, and 4 H200 GPUs at the same time.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;interactive&amp;lt;/code&amp;gt; partition runs only 1 job per user at a time.&lt;br /&gt;
This partition is dedicated to testing and to using tools via a graphical user interface.&lt;br /&gt;
The four nodes &amp;lt;code&amp;gt;node1-00[1-4]&amp;lt;/code&amp;gt; are exclusively reserved for this partition.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
All partitions are operated in shared mode, that is, jobs from different users can be executed on the same node. However, one can get exclusive access to compute nodes by using the &amp;quot;--exclusive&amp;quot; option.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Partition&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Node Access Policy&lt;br /&gt;
! style=&amp;quot;width:10%&amp;quot;| Node Types&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Default&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Limits&lt;br /&gt;
|-&lt;br /&gt;
| compute (default)&lt;br /&gt;
| shared&lt;br /&gt;
| cpu&lt;br /&gt;
| ntasks=1, time=00:10:00, mem-per-cpu=1gb&lt;br /&gt;
| nodes=2, time=14-00:00:00&lt;br /&gt;
|-&lt;br /&gt;
| gpu&lt;br /&gt;
| shared&lt;br /&gt;
| gpu &lt;br /&gt;
| ntasks=1, time=00:10:00, mem-per-cpu=1gb&lt;br /&gt;
| time=14-00:00:00&amp;lt;br /&amp;gt;MaxJobsPerUser: 8&amp;lt;br /&amp;gt;MaxTRESPerUser:&amp;lt;br /&amp;gt;&amp;lt;pre&amp;gt;gres/gpu:a100=4,&lt;br /&gt;
gres/gpu:a30=8,&lt;br /&gt;
gres/gpu:h200=4&amp;lt;/pre&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| interactive&lt;br /&gt;
| shared&lt;br /&gt;
| cpu &lt;br /&gt;
| ntasks=1, time=00:10:00, mem-per-cpu=1gb&lt;br /&gt;
| time=10:00:00&amp;lt;br /&amp;gt;MaxJobsPerUser: 1&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Parallel Jobs ===&lt;br /&gt;
&lt;br /&gt;
In order to submit parallel jobs to the InfiniBand part of the cluster, i.e., for fast inter-node communication, please select the appropriate nodes via the &amp;lt;code&amp;gt;--constraint=ib&amp;lt;/code&amp;gt; option in your job script. For less demanding parallel jobs, you may try the &amp;lt;code&amp;gt;--constraint=eth&amp;lt;/code&amp;gt; option, which utilizes 100Gb/s Ethernet instead of the low-latency 100Gb/s InfiniBand.&lt;br /&gt;
&lt;br /&gt;
=== GPU Jobs ===&lt;br /&gt;
&lt;br /&gt;
BinAC 2 provides different GPU models for computations. Please select the appropriate GPU type and the number of GPUs with the &amp;lt;code&amp;gt;--gres=gpu:type:N&amp;lt;/code&amp;gt; option in your job script.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| GPU&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| GPU Memory&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| # GPUs per Node [N]&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Submit Option&lt;br /&gt;
|-&lt;br /&gt;
| Nvidia A30&lt;br /&gt;
| 24GB&lt;br /&gt;
| 2&lt;br /&gt;
| &amp;lt;code&amp;gt;--gres=gpu:a30:N&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Nvidia A100&lt;br /&gt;
| 80GB&lt;br /&gt;
| 4&lt;br /&gt;
| &amp;lt;code&amp;gt;--gres=gpu:a100:N&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Nvidia H200&lt;br /&gt;
| 141GB&lt;br /&gt;
| 4&lt;br /&gt;
| &amp;lt;code&amp;gt;--gres=gpu:h200:N&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/AMDock&amp;diff=15223</id>
		<title>BinAC2/Software/AMDock</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/AMDock&amp;diff=15223"/>
		<updated>2025-08-19T15:22:56Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: Created page with &amp;quot;stub&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;stub&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15222</id>
		<title>BinAC2/Software</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15222"/>
		<updated>2025-08-19T15:22:50Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Environment Modules ==&lt;br /&gt;
Most software is provided as Modules.&lt;br /&gt;
&lt;br /&gt;
Required reading to use: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Available Software ==&lt;br /&gt;
&lt;br /&gt;
* Web: Visit [https://www.bwhpc.de/software.php https://www.bwhpc.de/software.php], select &amp;lt;code&amp;gt;Cluster → bwForCluster BinAC 2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* On the cluster: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Software in Containers ==&lt;br /&gt;
&lt;br /&gt;
Instructions for loading software in containers: [[NEMO/Software/Singularity_Containers|Singularity Containers]]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
Documentation for environment modules available on the cluster:  &lt;br /&gt;
&lt;br /&gt;
* with command &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt;&lt;br /&gt;
* examples in &amp;lt;code&amp;gt;$SOFTNAME_EXA_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation in the Wiki ==&lt;br /&gt;
&lt;br /&gt;
For some applications additional documentation is provided here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Alphafold | Alphafold ]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/AMDock | AMDock]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/BLAST | BLAST]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Bowtie | Bowtie]] --&amp;gt;&lt;br /&gt;
* [[BinAC/Software/Cellranger | Cell Ranger]]&lt;br /&gt;
* [[Development/Conda | Conda]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Gromacs | Gromacs]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/Jupyterlab | JupyterLab]]&lt;br /&gt;
* [[BinAC/Software/Nextflow | Nextflow and nf-core]]&lt;br /&gt;
* [[BinAC2/Software/TigerVNC | TigerVNC: Remote visualization using VNC]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/TigerVNC&amp;diff=15218</id>
		<title>BinAC2/Software/TigerVNC</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software/TigerVNC&amp;diff=15218"/>
		<updated>2025-08-19T13:15:03Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: Created page with &amp;quot;stub&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;stub&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15217</id>
		<title>BinAC2/Software</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2/Software&amp;diff=15217"/>
		<updated>2025-08-19T13:14:38Z</updated>

		<summary type="html">&lt;p&gt;F Bartusch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Environment Modules ==&lt;br /&gt;
Most software is provided as Modules.&lt;br /&gt;
&lt;br /&gt;
Required reading to use: [[Environment Modules]]&lt;br /&gt;
&lt;br /&gt;
== Available Software ==&lt;br /&gt;
&lt;br /&gt;
* Web: Visit [https://www.bwhpc.de/software.php https://www.bwhpc.de/software.php], select &amp;lt;code&amp;gt;Cluster → bwForCluster BinAC 2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* On the cluster: &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Software in Containers ==&lt;br /&gt;
&lt;br /&gt;
Instructions for loading software in containers: [[NEMO/Software/Singularity_Containers|Singularity Containers]]&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
Documentation for environment modules available on the cluster:  &lt;br /&gt;
&lt;br /&gt;
* with command &amp;lt;code&amp;gt;module help&amp;lt;/code&amp;gt;&lt;br /&gt;
* examples in &amp;lt;code&amp;gt;$SOFTNAME_EXA_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation in the Wiki ==&lt;br /&gt;
&lt;br /&gt;
For some applications additional documentation is provided here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Alphafold | Alphafold ]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/BLAST | BLAST]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Bowtie | Bowtie]] --&amp;gt;&lt;br /&gt;
* [[BinAC/Software/Cellranger | Cell Ranger]]&lt;br /&gt;
* [[Development/Conda | Conda]]&lt;br /&gt;
&amp;lt;!-- * [[BinAC/Software/Gromacs | Gromacs]] --&amp;gt;&lt;br /&gt;
* [[BinAC2/Software/Jupyterlab | JupyterLab]]&lt;br /&gt;
* [[BinAC/Software/Nextflow | Nextflow and nf-core]]&lt;br /&gt;
* [[BinAC2/Software/TigerVNC | TigerVNC: Remote visualization using VNC]]&lt;/div&gt;</summary>
		<author><name>F Bartusch</name></author>
	</entry>
</feed>