<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=H+Winkhardt</id>
	<title>bwHPC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=H+Winkhardt"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/e/Special:Contributions/H_Winkhardt"/>
	<updated>2026-04-06T03:59:10Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Helix/Slurm&amp;diff=15647</id>
		<title>Helix/Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Helix/Slurm&amp;diff=15647"/>
		<updated>2025-12-12T14:50:23Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: /* GPU Programs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= General information about Slurm =&lt;br /&gt;
The bwForCluster Helix uses Slurm as batch system.&lt;br /&gt;
* Slurm documentation: https://slurm.schedmd.com/documentation.html&lt;br /&gt;
* Slurm cheat sheet: https://slurm.schedmd.com/pdfs/summary.pdf&lt;br /&gt;
* Slurm tutorials: https://slurm.schedmd.com/tutorials.html&lt;br /&gt;
&lt;br /&gt;
= Slurm Command Overview =&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Slurm commands !! Brief explanation&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/sbatch.html sbatch] || Submits a job and queues it in an input queue&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/salloc.html salloc] || Request resources for an interactive job&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/squeue.html squeue] || Displays information about active, eligible, blocked, and/or recently completed jobs &lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/scontrol.html scontrol] || Displays detailed job state information&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/sstat.html sstat] || Displays status information about a running job&lt;br /&gt;
|- &lt;br /&gt;
| [https://slurm.schedmd.com/scancel.html scancel] || Cancels a job&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
&lt;br /&gt;
Batch jobs are submitted with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ sbatch &amp;lt;job-script&amp;gt; &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A job script contains options for Slurm in lines beginning with #SBATCH, as well as the commands you want to execute on the compute nodes. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu-single&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=00:20:00&lt;br /&gt;
#SBATCH --mem=1gb&lt;br /&gt;
#SBATCH --export=NONE&lt;br /&gt;
echo &#039;Hello world&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This job requests one core (--ntasks=1) and 1 GB memory (--mem=1gb) for 20 minutes (--time=00:20:00) on nodes provided by the partition &#039;cpu-single&#039;.&lt;br /&gt;
&lt;br /&gt;
For better reproducibility of jobs, it is recommended to use the option --export=NONE, which prevents the propagation of environment variables from the submit session into the job environment, and to load required software modules in the job script.&lt;br /&gt;
&lt;br /&gt;
== Partitions ==&lt;br /&gt;
&lt;br /&gt;
On bwForCluster Helix it is necessary to request a partition with &amp;quot;--partition=&amp;lt;partition_name&amp;gt;&amp;quot; on job submission. Within a partition, job allocations are automatically routed to the most suitable compute node(s) for the requested resources (e.g. number of nodes and cores, memory, number of GPUs). If no partition is requested, the devel partition is used by default.&lt;br /&gt;
&lt;br /&gt;
The partitions devel, cpu-single and gpu-single are operated in shared mode, i.e. jobs from different users can run on the same node. Jobs can get exclusive access to compute nodes in these partitions with the &amp;quot;--exclusive&amp;quot; option. The partitions cpu-multi and gpu-multi are operated in exclusive mode. Jobs in these partitions automatically get exclusive access to the requested compute nodes.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Partition&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Node Access Policy&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| [https://wiki.bwhpc.de/e/Helix/Hardware#Compute_Nodes Node Types]&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Default&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Limits&lt;br /&gt;
|-&lt;br /&gt;
| devel&lt;br /&gt;
| shared&lt;br /&gt;
| cpu, gpu4&lt;br /&gt;
| ntasks=1, time=00:10:00, mem-per-cpu=2gb&lt;br /&gt;
| nodes=2, time=00:30:00&lt;br /&gt;
|-&lt;br /&gt;
| cpu-single&lt;br /&gt;
| shared&lt;br /&gt;
| cpu, fat&lt;br /&gt;
| ntasks=1, time=00:30:00, mem-per-cpu=2gb&lt;br /&gt;
| nodes=1, time=120:00:00&lt;br /&gt;
|-&lt;br /&gt;
| gpu-single&lt;br /&gt;
| shared&lt;br /&gt;
| gpu4, gpu8&lt;br /&gt;
| ntasks=1, time=00:30:00, mem-per-cpu=2gb&lt;br /&gt;
| nodes=1, time=120:00:00&lt;br /&gt;
|- &lt;br /&gt;
| cpu-multi&lt;br /&gt;
| job exclusive&lt;br /&gt;
| cpu&lt;br /&gt;
| nodes=2, time=00:30:00&lt;br /&gt;
| nodes=32, time=48:00:00&lt;br /&gt;
|- &lt;br /&gt;
| gpu-multi&lt;br /&gt;
| job exclusive&lt;br /&gt;
| gpu4&lt;br /&gt;
| nodes=2, time=00:30:00&lt;br /&gt;
| nodes=8, time=48:00:00&lt;br /&gt;
|}&lt;br /&gt;
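&lt;br /&gt;
For illustration, a job in one of the shared partitions can claim a whole node with the &amp;quot;--exclusive&amp;quot; option (the time and task values here are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu-single&lt;br /&gt;
#SBATCH --exclusive&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=00:30:00&lt;br /&gt;
./my_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;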
&lt;br /&gt;
== GPU requests ==&lt;br /&gt;
&lt;br /&gt;
For the partitions gpu-single and gpu-multi it is required to request GPU resources.&lt;br /&gt;
* The number of GPUs is requested with the option &amp;quot;--gres=gpu:&amp;lt;number-of-gpus&amp;gt;&amp;quot;.   &lt;br /&gt;
* A specific GPU type can be requested with the option &amp;quot;--gres=gpu:&amp;lt;gpu-type&amp;gt;:&amp;lt;number-of-gpus&amp;gt;&amp;quot;. Possible values for &amp;lt;gpu-type&amp;gt; are listed in the line &#039;GPU Type&#039; of the [https://wiki.bwhpc.de/e/Helix/Hardware#Compute_Nodes Compute Nodes table].&lt;br /&gt;
* GPUs that are suitable for a specific GPU memory requirement can be requested with option &amp;quot;--gres=gpu:&amp;lt;number-of-gpus&amp;gt;,gpumem_per_gpu:&amp;lt;required-gpumem&amp;gt;GB&amp;quot;. This only restricts the selection of possible GPU types. For the job the total GPU memory per GPU is available as listed in the line &#039;GPU memory per GPU&#039; of the [https://wiki.bwhpc.de/e/Helix/Hardware#Compute_Nodes Compute Nodes table].&lt;br /&gt;
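&lt;br /&gt;
For example, a request for two GPUs with at least 40 GB of GPU memory each (an illustrative value) can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu-single&lt;br /&gt;
#SBATCH --gres=gpu:2,gpumem_per_gpu:40GB&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;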
&lt;br /&gt;
== Constraints ==&lt;br /&gt;
&lt;br /&gt;
It is possible to refine the resource request for a job with the option &amp;quot;--constraint=&amp;lt;feature&amp;gt;&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Feature&lt;br /&gt;
! style=&amp;quot;width:80%&amp;quot;| Meaning&lt;br /&gt;
|-&lt;br /&gt;
| fp64&lt;br /&gt;
| request GPU types with FP64 capability (double precision)&lt;br /&gt;
|}&lt;br /&gt;
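&lt;br /&gt;
For example, a job that needs double precision on the GPU can combine a generic GPU request with this constraint (an illustrative sketch):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu-single&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --constraint=fp64&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;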
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&lt;br /&gt;
Here you can find some example scripts for batch jobs.&lt;br /&gt;
&lt;br /&gt;
=== Serial Programs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu-single&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=20:00:00&lt;br /&gt;
#SBATCH --mem=4gb&lt;br /&gt;
./my_serial_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* Jobs with &amp;quot;--mem&amp;quot; up to 236gb can run on all node types associated with the cpu-single partition.&lt;br /&gt;
&lt;br /&gt;
=== Multi-threaded Programs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu-single&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
#SBATCH --cpus-per-task=16&lt;br /&gt;
#SBATCH --time=01:30:00&lt;br /&gt;
#SBATCH --mem=50gb&lt;br /&gt;
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
./my_multithreaded_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* Jobs with &amp;quot;--ntasks-per-node&amp;quot; up to 64 and &amp;quot;--mem&amp;quot; up to 236gb can run on all node types associated with the cpu-single partition.&lt;br /&gt;
* With &amp;quot;export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&amp;quot; you can set the number of threads according to the number of resources requested.&lt;br /&gt;
&lt;br /&gt;
=== MPI Programs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu-multi&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=64&lt;br /&gt;
#SBATCH --time=12:00:00&lt;br /&gt;
module load compiler/gnu&lt;br /&gt;
module load mpi/openmpi&lt;br /&gt;
srun ./my_mpi_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* &amp;quot;--mem&amp;quot; requests the memory per node. The maximum is 236gb.&lt;br /&gt;
* The Compiler and MPI modules used for the compilation must be loaded before the start of the program.&lt;br /&gt;
* It is recommended to start MPI programs with &#039;srun&#039;.&lt;br /&gt;
&lt;br /&gt;
=== GPU Programs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=gpu-single &lt;br /&gt;
#SBATCH --nodes=1 &lt;br /&gt;
#SBATCH --ntasks=1 &lt;br /&gt;
#SBATCH --cpus-per-task=8&lt;br /&gt;
#SBATCH --gres=gpu:A40:1 &lt;br /&gt;
#SBATCH --time=12:00:00&lt;br /&gt;
#SBATCH --mem=16gb&lt;br /&gt;
&lt;br /&gt;
module load devel/cuda &lt;br /&gt;
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
./my_cuda_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* The number of GPUs per node is requested with the option &amp;quot;--gres=gpu:&amp;lt;number-of-gpus&amp;gt;&amp;quot;&lt;br /&gt;
* It is recommended to request a suitable GPU type for your application with the option &amp;quot;--gres=gpu:&amp;lt;gpu-type&amp;gt;:&amp;lt;number-of-gpus&amp;gt;&amp;quot;. For &amp;lt;gpu-type&amp;gt; put the &#039;GPU Type&#039; listed in the [https://wiki.bwhpc.de/e/Helix/Hardware#Compute_Nodes Compute Nodes table].&lt;br /&gt;
** Example for a request of two A40 GPUs: --gres=gpu:A40:2&lt;br /&gt;
** Example for a request of one A100 GPU: --gres=gpu:A100:1&lt;br /&gt;
* If you are unsure which GPU type your code runs faster on, please run a test case and compare the run times. In general the following applies:&lt;br /&gt;
** A40 GPUs are optimized for single precision computations.&lt;br /&gt;
** A100 and H200 GPUs offer better performance for double precision computations or if the code makes use of tensor cores.&lt;br /&gt;
* The CUDA module used for compilation must be loaded before the start of the program.&lt;br /&gt;
&lt;br /&gt;
=== More examples ===&lt;br /&gt;
&lt;br /&gt;
Further batch script examples are available on bwForCluster Helix in the directory: &amp;lt;code&amp;gt;/opt/bwhpc/common/system/slurm-examples&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
&lt;br /&gt;
Interactive jobs must NOT run on the login nodes; instead, resources for interactive jobs can be requested with salloc. The following example requests an interactive session on 1 core for 2 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ salloc --partition=cpu-single --ntasks=1 --time=2:00:00 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After execution of this command, wait until the queueing system has granted you the requested resources. Once granted, you will be automatically logged in to the allocated compute node.&lt;br /&gt;
&lt;br /&gt;
If you use applications or tools which provide a GUI, enable X-forwarding for your interactive session with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ salloc --partition=cpu-single --ntasks=1 --time=2:00:00 --x11 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the walltime limit has been reached you will be automatically logged out from the compute node.&lt;br /&gt;
&lt;br /&gt;
For convenient access to specific GUI applications (JupyterLab, ...) on the cluster, we provide a web-based platform: [[Helix/bwVisu | bwVisu]]&lt;br /&gt;
&lt;br /&gt;
= Job Monitoring =&lt;br /&gt;
&lt;br /&gt;
== Information about submitted jobs ==&lt;br /&gt;
&lt;br /&gt;
For an overview of your submitted jobs use the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ squeue&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To get detailed information about a specific job use the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ scontrol show job &amp;lt;jobid&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
A job start may be delayed for various reasons:&lt;br /&gt;
* (QOSMaxCpuPerUserLimit) - There is a limit to how many CPU cores a user can use at the same time. The job exceeds this limit.&lt;br /&gt;
* (QOSMaxGRESPerUser) - There is a limit to how many GPUs a user can use at the same time. The job exceeds this limit.&lt;br /&gt;
* (QOSMinGRES) - The job was submitted to a gpu partition without requesting a GPU.&lt;br /&gt;
* (launch failed requeued held) - The job has failed to start. You may be able to resume it using scontrol. Alternatively you can cancel it and submit it again.&lt;br /&gt;
For further reasons please refer to: https://slurm.schedmd.com/job_reason_codes.html&lt;br /&gt;
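&lt;br /&gt;
To list the pending reason for your own jobs directly, squeue&#039;s reason field can be printed, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ squeue --me --format=&amp;quot;%.10i %.12P %.8T %r&amp;quot;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;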
&lt;br /&gt;
== Information about resource usage of running jobs ==&lt;br /&gt;
&lt;br /&gt;
You can monitor the resource usage of running jobs with the sstat command. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sstat --format=JobId,AveCPU,AveRSS,MaxRSS -j &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will show average CPU time, average and maximum memory consumption of all tasks in the running job.&lt;br /&gt;
&lt;br /&gt;
The command &#039;sstat -e&#039; shows a list of fields that can be specified with the &#039;--format&#039; option.&lt;br /&gt;
&lt;br /&gt;
== Interactive access to running jobs ==&lt;br /&gt;
&lt;br /&gt;
It is also possible to attach an interactive shell to a running job with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ srun --jobid=&amp;lt;jobid&amp;gt; --overlap --pty /bin/bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Commands like &#039;top&#039; show you the most busy processes on the node. To exit &#039;top&#039; type &#039;q&#039;.&lt;br /&gt;
&lt;br /&gt;
To monitor your GPU processes use the command &#039;nvidia-smi&#039;.&lt;br /&gt;
&lt;br /&gt;
== Job Feedback ==&lt;br /&gt;
You get feedback on resource usage and job efficiency for completed jobs with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ seff &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example Output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
============================= JOB FEEDBACK =============================&lt;br /&gt;
Job ID: 12345678&lt;br /&gt;
Cluster: helix&lt;br /&gt;
User/Group: hd_ab123/hd_hd&lt;br /&gt;
State: COMPLETED (exit code 0)&lt;br /&gt;
Nodes: 2&lt;br /&gt;
Cores per node: 64&lt;br /&gt;
CPU Utilized: 3-04:11:46&lt;br /&gt;
CPU Efficiency: 97.90% of 3-05:49:52 core-walltime&lt;br /&gt;
Job Wall-clock time: 00:36:29&lt;br /&gt;
Memory Utilized: 432.74 GB (estimated maximum)&lt;br /&gt;
Memory Efficiency: 85.96% of 503.42 GB (251.71 GB/node)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Explanation:&lt;br /&gt;
* Nodes: Number of allocated nodes for the job.&lt;br /&gt;
* Cores per node: Number of physical cores per node allocated for the job.&lt;br /&gt;
* CPU Utilized: Sum of utilized core time.&lt;br /&gt;
* CPU Efficiency: &#039;CPU Utilized&#039; with respect to core-walltime (= &#039;Nodes&#039; x &#039;Cores per node&#039; x &#039;Job Wall-clock time&#039;) in percent. &lt;br /&gt;
* Job Wall-clock time: runtime of the job.&lt;br /&gt;
* Memory Utilized: Sum of memory used. For multi node MPI jobs the sum is only correct when srun is used instead of mpirun.&lt;br /&gt;
* Memory Efficiency: &#039;Memory Utilized&#039; with respect to total allocated memory for the job.&lt;br /&gt;
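&lt;br /&gt;
As a cross-check for the example output above: core-walltime = 2 nodes x 64 cores x 00:36:29 = 128 x 2189 s = 280192 s, which matches the reported 3-05:49:52, and 432.74 GB / 503.42 GB = 85.96%, the reported memory efficiency.&lt;br /&gt;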
&lt;br /&gt;
== Job Monitoring Portal ==&lt;br /&gt;
For more detailed information about your jobs visit the job monitoring portal: https://helix-monitoring.bwservices.uni-heidelberg.de&lt;br /&gt;
&lt;br /&gt;
= Accounting =&lt;br /&gt;
&lt;br /&gt;
Jobs are billed for allocated CPU cores, memory and GPUs.&lt;br /&gt;
&lt;br /&gt;
To see the accounting data of a specific job:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ sacct -j &amp;lt;jobid&amp;gt; --format=user,jobid,account,nnodes,ncpus,time,elapsed,AllocTRES%50&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To retrieve the job history for a specific user for a certain time frame:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ sacct -u &amp;lt;user&amp;gt; -S 2022-08-20 -E 2022-08-30 --format=user,jobid,account,nnodes,ncpus,time,elapsed,AllocTRES%50&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Overview of free resources =&lt;br /&gt;
&lt;br /&gt;
On the login nodes the following command shows what resources are available for immediate use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ sinfo_t_idle&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Helix/Slurm&amp;diff=15646</id>
		<title>Helix/Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Helix/Slurm&amp;diff=15646"/>
		<updated>2025-12-12T14:49:53Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: GPU example script update for Helix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= General information about Slurm =&lt;br /&gt;
The bwForCluster Helix uses Slurm as batch system.&lt;br /&gt;
* Slurm documentation: https://slurm.schedmd.com/documentation.html&lt;br /&gt;
* Slurm cheat sheet: https://slurm.schedmd.com/pdfs/summary.pdf&lt;br /&gt;
* Slurm tutorials: https://slurm.schedmd.com/tutorials.html&lt;br /&gt;
&lt;br /&gt;
= Slurm Command Overview =&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Slurm commands !! Brief explanation&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/sbatch.html sbatch] || Submits a job and queues it in an input queue&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/salloc.html salloc] || Request resources for an interactive job&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/squeue.html squeue] || Displays information about active, eligible, blocked, and/or recently completed jobs &lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/scontrol.html scontrol] || Displays detailed job state information&lt;br /&gt;
|-&lt;br /&gt;
| [https://slurm.schedmd.com/scontrol.html sstat] || Displays status information about a running job&lt;br /&gt;
|- &lt;br /&gt;
| [https://slurm.schedmd.com/scancel.html scancel] || Cancels a job&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
&lt;br /&gt;
Batch jobs are submitted with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ sbatch &amp;lt;job-script&amp;gt; &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A job script contains options for Slurm in lines beginning with #SBATCH as well as your commands which you want to execute on the compute nodes. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu-single&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=00:20:00&lt;br /&gt;
#SBATCH --mem=1gb&lt;br /&gt;
#SBATCH --export=NONE&lt;br /&gt;
echo &#039;Hello world&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This jobs requests one core (--ntasks=1) and 1 GB memory (--mem=1gb) for 20 minutes (--time=00:20:00) on nodes provided by the partition &#039;cpu-single&#039;.&lt;br /&gt;
&lt;br /&gt;
For the sake of a better reproducibility of jobs it is recommended to use the option --export=NONE to prevent the propagation of environment variables from the submit session into the job environment and to load required software modules in the job script.&lt;br /&gt;
&lt;br /&gt;
== Partitions ==&lt;br /&gt;
&lt;br /&gt;
On bwForCluster Helix it is necessary to request a partition with &amp;quot;--partition=&amp;lt;partition_name&amp;gt;&amp;quot; on job submission. Within a partition job allocations are routed automatically to the most suitable compute node(s) for the requested resources (e.g. amount of nodes and cores, memory, number of GPUs). The devel partition is the default partition, if no partition is requested. &lt;br /&gt;
&lt;br /&gt;
The partitions devel, cpu-single and gpu-single are operated in shared mode, i.e. jobs from different users can run on the same node. Jobs can get exclusive access to compute nodes in these partitions with the &amp;quot;--exclusive&amp;quot; option. The partitions cpu-multi and gpu-multi are operated in exclusive mode. Jobs in these partitions automatically get exclusive access to the requested compute nodes.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Partition&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Node Access Policy&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| [https://wiki.bwhpc.de/e/Helix/Hardware#Compute_Nodes Node Types]&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Default&lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Limits&lt;br /&gt;
|-&lt;br /&gt;
| devel&lt;br /&gt;
| shared&lt;br /&gt;
| cpu, gpu4&lt;br /&gt;
| ntasks=1, time=00:10:00, mem-per-cpu=2gb&lt;br /&gt;
| nodes=2, time=00:30:00&lt;br /&gt;
|-&lt;br /&gt;
| cpu-single&lt;br /&gt;
| shared&lt;br /&gt;
| cpu, fat&lt;br /&gt;
| ntasks=1, time=00:30:00, mem-per-cpu=2gb&lt;br /&gt;
| nodes=1, time=120:00:00&lt;br /&gt;
|-&lt;br /&gt;
| gpu-single&lt;br /&gt;
| shared&lt;br /&gt;
| gpu4, gpu8&lt;br /&gt;
| ntasks=1, time=00:30:00, mem-per-cpu=2gb&lt;br /&gt;
| nodes=1, time=120:00:00&lt;br /&gt;
|- &lt;br /&gt;
| cpu-multi&lt;br /&gt;
| job exclusive&lt;br /&gt;
| cpu&lt;br /&gt;
| nodes=2, time=00:30:00&lt;br /&gt;
| nodes=32, time=48:00:00&lt;br /&gt;
|- &lt;br /&gt;
| gpu-multi&lt;br /&gt;
| job exclusive&lt;br /&gt;
| gpu4&lt;br /&gt;
| nodes=2, time=00:30:00&lt;br /&gt;
| nodes=8, time=48:00:00&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== GPU requests ==&lt;br /&gt;
&lt;br /&gt;
For the partitions gpu-single and gpu-multi is it required to request GPU ressources.&lt;br /&gt;
* The number of GPUs is requested with the option &amp;quot;--gres=gpu:&amp;lt;number-of-gpus&amp;gt;&amp;quot;.   &lt;br /&gt;
* A specific GPU type can be requested with the option &amp;quot;--gres=gpu:&amp;lt;gpu-type&amp;gt;:&amp;lt;number-of-gpus&amp;gt;&amp;quot;. Possible values for &amp;lt;gpu-type&amp;gt; are listed in the line &#039;GPU Type&#039; of the [https://wiki.bwhpc.de/e/Helix/Hardware#Compute_Nodes Compute Nodes table].&lt;br /&gt;
* GPUs that are suitable for a specific GPU memory requirement can be requested with option &amp;quot;--gres=gpu:&amp;lt;number-of-gpus&amp;gt;,gpumem_per_gpu:&amp;lt;required-gpumem&amp;gt;GB&amp;quot;. This only restricts the selection of possible GPU types. For the job the total GPU memory per GPU is available as listed in the line &#039;GPU memory per GPU&#039; of the [https://wiki.bwhpc.de/e/Helix/Hardware#Compute_Nodes Compute Nodes table].&lt;br /&gt;
&lt;br /&gt;
== Constraints ==&lt;br /&gt;
&lt;br /&gt;
It is possible to refine the resource request for a job with the option &amp;quot;--constraint=&amp;lt;feature&amp;gt;&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
! style=&amp;quot;width:20%&amp;quot;| Feature&lt;br /&gt;
! style=&amp;quot;width:80%&amp;quot;| Meaning&lt;br /&gt;
|-&lt;br /&gt;
| fp64&lt;br /&gt;
| request GPU types with FP64 capability (double precision)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&lt;br /&gt;
Here you can find some example scripts for batch jobs.&lt;br /&gt;
&lt;br /&gt;
=== Serial Programs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu-single&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --time=20:00:00&lt;br /&gt;
#SBATCH --mem=4gb&lt;br /&gt;
./my_serial_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* Jobs with &amp;quot;--mem&amp;quot; up to 236gb can run on all node types associated with the cpu-single partition.&lt;br /&gt;
&lt;br /&gt;
=== Multi-threaded Programs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu-single&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
#SBATCH --cpus-per-task=16&lt;br /&gt;
#SBATCH --time=01:30:00&lt;br /&gt;
#SBATCH --mem=50gb&lt;br /&gt;
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
./my_multithreaded_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* Jobs with &amp;quot;--ntasks-per-node&amp;quot; up to 64 and &amp;quot;--mem&amp;quot; up to 236gb can run on all node types associated with the cpu-single partition.&lt;br /&gt;
* With &amp;quot;export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&amp;quot; you can set the number of threads according to the number of resources requested.&lt;br /&gt;
&lt;br /&gt;
=== MPI Programs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=cpu-multi&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=64&lt;br /&gt;
#SBATCH --time=12:00:00&lt;br /&gt;
module load compiler/gnu&lt;br /&gt;
module load mpi/openmpi&lt;br /&gt;
srun ./my_mpi_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* &amp;quot;--mem&amp;quot; requests the memory per node. The maximum is 236gb.&lt;br /&gt;
* The Compiler and MPI modules used for the compilation must be loaded before the start of the program.&lt;br /&gt;
* It is recommended to start MPI programs with &#039;srun&#039;.&lt;br /&gt;
&lt;br /&gt;
=== GPU Programs ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;slurm&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=gpu-single &lt;br /&gt;
#SBATCH --nodes=1 &lt;br /&gt;
#SBATCH --ntasks=1 &lt;br /&gt;
#SBATCH --cpus-per-task=8&lt;br /&gt;
#SBATCH --gres=gpu:A40:1 &lt;br /&gt;
#SBATCH --time=12:00:00&lt;br /&gt;
#SBATCH --mem=59gb&lt;br /&gt;
&lt;br /&gt;
module load devel/cuda &lt;br /&gt;
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}&lt;br /&gt;
 ./my_cuda_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&lt;br /&gt;
* The number of GPUs per node is requested with the option &amp;quot;--gres=gpu:&amp;lt;number-of-gpus&amp;gt;&amp;quot;&lt;br /&gt;
* It is recommended to request a suitable GPU type for your application with the option &amp;quot;--gres=gpu:&amp;lt;gpu-type&amp;gt;:&amp;lt;number-of-gpus&amp;gt;&amp;quot;. For &amp;lt;gpu-type&amp;gt; put the &#039;GPU Type&#039; listed in the [https://wiki.bwhpc.de/e/Helix/Hardware#Compute_Nodes Compute Nodes table].&lt;br /&gt;
** Example for a request of two A40 GPUs: --gres=gpu:A40:2&lt;br /&gt;
** Example for a request of one A100 GPU: --gres=gpu:A100:1&lt;br /&gt;
* If you are unsure on which GPU type your code runs faster, please run a test case and compare the run times. In general the following applies:&lt;br /&gt;
** A40 GPUs are optimized for single precision computations.&lt;br /&gt;
** A100 and H200 GPUs offer better performance for double precision computations or if the code makes use of tensor cores.&lt;br /&gt;
* The CUDA module used for compilation must be loaded before the start of the program.&lt;br /&gt;
&lt;br /&gt;
=== More examples ===&lt;br /&gt;
&lt;br /&gt;
Further batch script examples are available on bwForCluster Helix in the directory: &amp;lt;code&amp;gt;/opt/bwhpc/common/system/slurm-examples&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Interactive Jobs =&lt;br /&gt;
&lt;br /&gt;
Interactive jobs must NOT run on the login nodes, however resources for interactive jobs can be requested using srun. The following example requests an interactive session on 1 core for 2 hours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ salloc --partition=cpu-single --ntasks=1 --time=2:00:00 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After execution of this command wait until the queueing system has granted you the requested resources. Once granted you will be automatically logged on the allocated compute node.&lt;br /&gt;
&lt;br /&gt;
If you use applications or tools which provide a GUI, enable X-forwarding for your interactive session with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ salloc --partition=cpu-single --ntasks=1 --time=2:00:00 --x11 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the walltime limit has been reached you will be automatically logged out from the compute node.&lt;br /&gt;
&lt;br /&gt;
For convenient access to specific GUI applications (JupyterLab, ...) on the cluster, we provide a web-based platform: [[Helix/bwVisu | bwVisu]]&lt;br /&gt;
&lt;br /&gt;
= Job Monitoring =&lt;br /&gt;
&lt;br /&gt;
== Information about submitted jobs ==&lt;br /&gt;
&lt;br /&gt;
For an overview of your submitted jobs use the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ squeue&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To get detailed information about a specific job use the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ scontrol show job &amp;lt;jobid&amp;gt;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
A job start may be delayed for various reasons:&lt;br /&gt;
* (QOSMaxCpuPerUserLimit) - There is a limit to how many CPU cores a user can use at the same time. The job exceeds this limit.&lt;br /&gt;
* (QOSMaxGRESPerUser) - There is a limit to how many GPUs a user can use at the same time. The job exceeds this limit.&lt;br /&gt;
* (QOSMinGRES) - The job was submitted to a GPU partition without requesting a GPU.&lt;br /&gt;
* (launch failed requeued held) - The job has failed to start. You may be able to release it using scontrol. Alternatively, you can cancel it and submit it again.&lt;br /&gt;
For further reasons please refer to: https://slurm.schedmd.com/job_reason_codes.html&lt;br /&gt;
&lt;br /&gt;
== Information about resource usage of running jobs ==&lt;br /&gt;
&lt;br /&gt;
You can monitor the resource usage of running jobs with the sstat command. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sstat --format=JobId,AveCPU,AveRSS,MaxRSS -j &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will show average CPU time, average and maximum memory consumption of all tasks in the running job.&lt;br /&gt;
&lt;br /&gt;
The command &#039;sstat -e&#039; shows the list of fields that can be specified with the &#039;--format&#039; option.&lt;br /&gt;
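As an illustration of working with these fields: sstat prints memory figures such as MaxRSS with a unit suffix (for example 443126784K). The helper below, rss_to_gb, is our own sketch (not a Slurm command) for converting such a figure to gigabytes, assuming bash and awk are available:

```shell
#!/usr/bin/env bash
# rss_to_gb: illustrative helper (not part of Slurm) that converts a
# memory figure with a K/M/G suffix, as printed in sstat's MaxRSS
# column, into gigabytes.
rss_to_gb() {
  awk -v v="$1" 'BEGIN {
    unit = substr(v, length(v), 1)          # last character: K, M or G
    n    = substr(v, 1, length(v) - 1)      # numeric part
    if      (unit == "K") gb = n / 1024 / 1024
    else if (unit == "M") gb = n / 1024
    else if (unit == "G") gb = n + 0
    else                  gb = v / 1024 / 1024 / 1024   # assume plain bytes
    printf "%.2f GB\n", gb
  }'
}

rss_to_gb 443126784K    # a MaxRSS value as sstat might print it
```

This makes it easier to compare the measured memory footprint against the memory you requested for the job.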
&lt;br /&gt;
== Interactive access to running jobs ==&lt;br /&gt;
&lt;br /&gt;
It is also possible to attach an interactive shell to a running job with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ srun --jobid=&amp;lt;jobid&amp;gt; --overlap --pty /bin/bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Commands like &#039;top&#039; show you the busiest processes on the node. To exit &#039;top&#039;, type &#039;q&#039;.&lt;br /&gt;
&lt;br /&gt;
To monitor your GPU processes use the command &#039;nvidia-smi&#039;.&lt;br /&gt;
&lt;br /&gt;
== Job Feedback ==&lt;br /&gt;
You get feedback on resource usage and job efficiency for completed jobs with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ seff &amp;lt;jobid&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example Output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
============================= JOB FEEDBACK =============================&lt;br /&gt;
Job ID: 12345678&lt;br /&gt;
Cluster: helix&lt;br /&gt;
User/Group: hd_ab123/hd_hd&lt;br /&gt;
State: COMPLETED (exit code 0)&lt;br /&gt;
Nodes: 2&lt;br /&gt;
Cores per node: 64&lt;br /&gt;
CPU Utilized: 3-04:11:46&lt;br /&gt;
CPU Efficiency: 97.90% of 3-05:49:52 core-walltime&lt;br /&gt;
Job Wall-clock time: 00:36:29&lt;br /&gt;
Memory Utilized: 432.74 GB (estimated maximum)&lt;br /&gt;
Memory Efficiency: 85.96% of 503.42 GB (251.71 GB/node)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Explanation:&lt;br /&gt;
* Nodes: Number of allocated nodes for the job.&lt;br /&gt;
* Cores per node: Number of physical cores per node allocated for the job.&lt;br /&gt;
* CPU Utilized: Sum of utilized core time.&lt;br /&gt;
* CPU Efficiency: &#039;CPU Utilized&#039; with respect to core-walltime (= &#039;Nodes&#039; x &#039;Cores per node&#039; x &#039;Job Wall-clock time&#039;) in percent. &lt;br /&gt;
* Job Wall-clock time: Runtime of the job.&lt;br /&gt;
* Memory Utilized: Sum of memory used. For multi-node MPI jobs the sum is only correct when srun is used instead of mpirun.&lt;br /&gt;
* Memory Efficiency: &#039;Memory Utilized&#039; with respect to total allocated memory for the job.&lt;br /&gt;
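As a plausibility check, the percentages in the sample output above can be recomputed from the other fields. The to_sec helper below is our own sketch (not part of seff), assuming bash and awk are available:

```shell
#!/usr/bin/env bash
# Recompute the CPU efficiency of the sample job above.
# to_sec is our own helper, not a Slurm tool.
to_sec() {  # convert Slurm's [D-]HH:MM:SS notation to seconds
  local t=$1 d=0 h m s
  case $t in *-*) d=${t%%-*}; t=${t#*-};; esac
  IFS=: read -r h m s <<< "$t"
  echo $(( d*86400 + 10#$h*3600 + 10#$m*60 + 10#$s ))
}

used=$(to_sec 3-04:11:46)     # CPU Utilized
wall=$(to_sec 00:36:29)       # Job Wall-clock time
avail=$(( 2 * 64 * wall ))    # Nodes x Cores-per-node x walltime (core-walltime)
awk -v u="$used" -v a="$avail" \
  'BEGIN { printf "CPU Efficiency: %.2f%%\n", 100 * u / a }'
```

The printed value matches the CPU Efficiency line in the sample output (97.90%).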
&lt;br /&gt;
== Job Monitoring Portal ==&lt;br /&gt;
For more detailed information about your jobs visit the job monitoring portal: https://helix-monitoring.bwservices.uni-heidelberg.de&lt;br /&gt;
&lt;br /&gt;
= Accounting =&lt;br /&gt;
&lt;br /&gt;
Jobs are billed for allocated CPU cores, memory and GPUs.&lt;br /&gt;
&lt;br /&gt;
To see the accounting data of a specific job:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ sacct -j &amp;lt;jobid&amp;gt; --format=user,jobid,account,nnodes,ncpus,time,elapsed,AllocTRES%50&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To retrieve the job history for a specific user for a certain time frame:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ sacct -u &amp;lt;user&amp;gt; -S 2022-08-20 -E 2022-08-30 --format=user,jobid,account,nnodes,ncpus,time,elapsed,AllocTRES%50&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Overview of free resources =&lt;br /&gt;
&lt;br /&gt;
On the login nodes the following command shows what resources are available for immediate use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;$ sinfo_t_idle&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Issueing_Entitlements_in_bwHPC&amp;diff=15635</id>
		<title>Issueing Entitlements in bwHPC</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Issueing_Entitlements_in_bwHPC&amp;diff=15635"/>
		<updated>2025-12-09T14:32:20Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: More links&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The HPC systems provided by the bwHPC initiative are, like any supercomputer, considered dual-use commodities and are subject to export controls as outlined in a [https://www.bafa.de/EN/Foreign_Trade/Export_Control/Export_Control_and_Academia/export_control_academia_node.html guide about &amp;quot;Export Control and Academia&amp;quot;] by the Federal Office for Economic Affairs and Export Control, Germany (BAFA).&lt;br /&gt;
&lt;br /&gt;
Therefore, any principal investigator or any user seeking to use the bwUnicluster or one of the bwForClusters has to comply with [https://eur-lex.europa.eu/legal-content/EN-DE/TXT/?from=DE&amp;amp;uri=CELEX%3A32021R0821 European Community Council regulation No 2021/821] as well as with the [http://www.gesetze-im-internet.de/englisch_awg/index.html German Foreign Trade and Payments Act] and [http://www.gesetze-im-internet.de/englisch_awv/index.html German Foreign Trade and Payments Ordinance] which includes:&lt;br /&gt;
* no violation regarding embargoed countries and persons&lt;br /&gt;
* no violation regarding regulations of dual-use commodities.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Acknowledged compliance with EU and German regulations (e.g. in a signed statement) is part of the entitlement for the [[Registration/bwUniCluster/Entitlement|bwUniCluster]] and the [[Registration/bwForCluster/Entitlement|bwForClusters]] issued by the home organization of the principal investigator or user.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/SSH/SSH-FIDO2-Quick-Start&amp;diff=15564</id>
		<title>Registration/SSH/SSH-FIDO2-Quick-Start</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/SSH/SSH-FIDO2-Quick-Start&amp;diff=15564"/>
		<updated>2025-12-02T09:53:16Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= SSH with Yubikey - Quick Start Guide =&lt;br /&gt;
&lt;br /&gt;
This guide shows you how to secure your SSH keys with a Yubikey using FIDO2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;border: 3px solid #28a745; padding: 15px; background-color: #d4edda; margin: 10px 0;&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Why FIDO2 SSH Keys?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
FIDO2 SSH keys (ED25519-SK) offer significant advantages:&lt;br /&gt;
* &#039;&#039;&#039;No 2-factor unlock required&#039;&#039;&#039; - work immediately after registration&lt;br /&gt;
* &#039;&#039;&#039;Hardware-protected&#039;&#039;&#039; - private key never leaves the Yubikey&lt;br /&gt;
* &#039;&#039;&#039;Physical presence required&#039;&#039;&#039; - you must touch the key to authenticate&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; ECDSA-SK keys are not supported on these clusters. Use ED25519-SK instead.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center; margin-top:10px;&amp;quot;&lt;br /&gt;
|+ Cluster Support&lt;br /&gt;
|-&lt;br /&gt;
! Cluster !! FIDO2 Support&lt;br /&gt;
|-&lt;br /&gt;
| bwUniCluster 3.0 || style=&amp;quot;background-color:#90EE90;&amp;quot; | ✓&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster BinAC 2 || style=&amp;quot;background-color:#FFB6C1;&amp;quot; | ✗&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Helix || style=&amp;quot;background-color:#FFB6C1;&amp;quot; | ✗&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster JUSTUS 2 || style=&amp;quot;background-color:#FFB6C1;&amp;quot; | ✗&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster NEMO 2 || style=&amp;quot;background-color:#90EE90;&amp;quot; | ✓&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
&lt;br /&gt;
* A Yubikey (5 series or newer)&lt;br /&gt;
* OpenSSH 8.2 or newer&lt;br /&gt;
* Linux system with &amp;lt;code&amp;gt;yubikey-manager&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;openssh-client&amp;lt;/code&amp;gt; packages&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Check your OpenSSH version:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -V&lt;br /&gt;
ssh -Q PubkeyAcceptedAlgorithms | grep sk-ssh-ed25519@openssh.com&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Need help with installation?&#039;&#039;&#039; See the [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#ssh-and-yubikeys detailed setup guide] for your operating system.&lt;br /&gt;
&lt;br /&gt;
= Quick Start Guide =&lt;br /&gt;
&lt;br /&gt;
== Step 1: Set up your Yubikey ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What is a FIDO2 PIN?&#039;&#039;&#039; The PIN protects your Yubikey&#039;s FIDO2 functions. It&#039;s separate from any other PINs on your Yubikey (like the PIV PIN).&lt;br /&gt;
&lt;br /&gt;
Set a PIN for your Yubikey&#039;s FIDO2 functionality:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ykman fido access change-pin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You&#039;ll be asked to enter a new PIN. Choose a PIN you can remember - you have 8 attempts before needing to reset.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; For GUI setup, you can use [https://www.yubico.com/products/yubico-authenticator/ Yubico Authenticator] instead.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Forgot your PIN or need to reset?&#039;&#039;&#039; See the [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#troubleshooting troubleshooting guide].&lt;br /&gt;
&lt;br /&gt;
== Step 2: Create an SSH Key ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What happens here?&#039;&#039;&#039; This creates a non-discoverable FIDO2 SSH key. The private key material is wrapped/encrypted with a secret stored on your Yubikey - the Yubikey is required for every authentication, but the actual key credentials are stored on your computer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; For keys stored directly on the Yubikey (resident/discoverable keys), see the [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#creating-discoverable-ssh-keys resident keys documentation].&lt;br /&gt;
&lt;br /&gt;
Connect your Yubikey and create a new SSH key:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh-keygen -t ed25519-sk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When prompted:&lt;br /&gt;
* &#039;&#039;&#039;Enter PIN:&#039;&#039;&#039; First, you&#039;ll be asked to enter your FIDO2 PIN (the one you set in Step 1)&lt;br /&gt;
* &#039;&#039;&#039;Filename:&#039;&#039;&#039; Choose a descriptive name like &amp;lt;code&amp;gt;id_ed25519_sk_bwunicluster&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;id_ed25519_sk_nemo2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Passphrase:&#039;&#039;&#039; Press Enter to skip - the key is protected by requiring physical Yubikey presence&lt;br /&gt;
* &#039;&#039;&#039;Touch your Yubikey:&#039;&#039;&#039; The key will blink - touch it to confirm&lt;br /&gt;
&lt;br /&gt;
Two files are created:&lt;br /&gt;
* &amp;lt;code&amp;gt;~/.ssh/id_ed25519_sk_bwunicluster&amp;lt;/code&amp;gt; - key handle (encrypted credentials that require the Yubikey)&lt;br /&gt;
* &amp;lt;code&amp;gt;~/.ssh/id_ed25519_sk_bwunicluster.pub&amp;lt;/code&amp;gt; - public key (this is what you&#039;ll upload)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Want keys stored on the Yubikey itself?&#039;&#039;&#039; See [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#creating-discoverable-ssh-keys resident keys documentation] for discoverable keys that can be copied to any machine.&lt;br /&gt;
&lt;br /&gt;
== Step 3: Register Your Public Key ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;border: 3px solid #17a2b8; padding: 15px; background-color: #d1ecf1; margin: 10px 0;&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;FIDO2 SSH Keys work immediately without 2-factor authentication!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Unlike regular SSH keys, FIDO2 keys do not need to be &amp;quot;unlocked&amp;quot; with username/password.&lt;br /&gt;
They work as soon as you register them as &#039;&#039;&#039;Interactive Keys&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For bwUniCluster 3.0 and bwForCluster NEMO 2:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. View your public key:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat ~/.ssh/id_ed25519_sk_bwunicluster.pub&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Copy the complete output (starts with &amp;lt;code&amp;gt;sk-ssh-ed25519@openssh.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
3. Follow the [[Registration/SSH#Adding_a_new_SSH_Key|SSH Key Registration Guide]] to:&lt;br /&gt;
* Add your public key in bwIDM/bwServices&lt;br /&gt;
* Register it as an &#039;&#039;&#039;Interactive Key&#039;&#039;&#039; for your cluster&lt;br /&gt;
* Connect immediately - no 2FA unlock needed!&lt;br /&gt;
&lt;br /&gt;
== Step 4: Connect with Your Yubikey ==&lt;br /&gt;
&lt;br /&gt;
After registering your key as an &#039;&#039;&#039;Interactive Key&#039;&#039;&#039; in bwIDM:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh -i ~/.ssh/id_ed25519_sk_bwunicluster your_username@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What to expect:&#039;&#039;&#039;&lt;br /&gt;
1. Your Yubikey will blink&lt;br /&gt;
2. Touch it to authenticate&lt;br /&gt;
3. You&#039;re logged in - no password needed!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tip:&#039;&#039;&#039; Add your key to &amp;lt;code&amp;gt;~/.ssh/config&amp;lt;/code&amp;gt; to avoid typing the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; option every time:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host bwuni&lt;br /&gt;
  HostName bwunicluster.scc.kit.edu&lt;br /&gt;
  User fr_user1234&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519_sk_bwunicluster&lt;br /&gt;
&lt;br /&gt;
Host nemo2&lt;br /&gt;
  HostName login.nemo.uni-freiburg.de&lt;br /&gt;
  User fr_user1234&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519_sk_nemo2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then simply connect with: &amp;lt;code&amp;gt;ssh bwuni&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;ssh nemo2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#ssh-configuration-for-fido2-secured-ssh-keys SSH config examples] for more advanced configurations.&lt;br /&gt;
&lt;br /&gt;
= Advanced Topics =&lt;br /&gt;
&lt;br /&gt;
== Using Multiple Yubikeys (Optional) ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Why?&#039;&#039;&#039; Having backup Yubikeys adds convenience - you can switch between devices without reconfiguring.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; If you lose your Yubikey, you can still access the cluster using regular SSH keys (with 2FA unlock) or password login with TOTP.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Setup:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Create a key on your first Yubikey: &amp;lt;code&amp;gt;id_ed25519_sk_key1&amp;lt;/code&amp;gt;&lt;br /&gt;
2. Create another key on your second Yubikey: &amp;lt;code&amp;gt;id_ed25519_sk_key2&amp;lt;/code&amp;gt;&lt;br /&gt;
3. Register &#039;&#039;&#039;both public keys&#039;&#039;&#039; in bwIDM as Interactive Keys&lt;br /&gt;
4. Configure &amp;lt;code&amp;gt;~/.ssh/config&amp;lt;/code&amp;gt; with multiple identities:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host bwuni nemo2&lt;br /&gt;
  User fr_user1234&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519_sk_key1&lt;br /&gt;
  IdentityFile ~/.ssh/id_ed25519_sk_key2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SSH will automatically try each key until it finds one with a connected Yubikey.&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Common Issues:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Key not working?&#039;&#039;&#039; &lt;br /&gt;
** Make sure your Yubikey is plugged in&lt;br /&gt;
** Try unplugging and re-plugging the Yubikey&lt;br /&gt;
** Check if the key is blinking (requires touch)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Command not found?&#039;&#039;&#039; &lt;br /&gt;
** Install &amp;lt;code&amp;gt;yubikey-manager&amp;lt;/code&amp;gt;:&lt;br /&gt;
*** Debian/Ubuntu: &amp;lt;code&amp;gt;sudo apt install yubikey-manager&amp;lt;/code&amp;gt;&lt;br /&gt;
*** RHEL/Rocky/Alma: &amp;lt;code&amp;gt;sudo dnf install yubikey-manager&amp;lt;/code&amp;gt;&lt;br /&gt;
** Install &amp;lt;code&amp;gt;openssh-client&amp;lt;/code&amp;gt; if missing&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Old OpenSSH?&#039;&#039;&#039; &lt;br /&gt;
** Check version: &amp;lt;code&amp;gt;ssh -V&amp;lt;/code&amp;gt;&lt;br /&gt;
** Update to version 8.2 or newer&lt;br /&gt;
** On older systems, consider using a newer OpenSSH version or upgrading your OS&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;&amp;quot;invalid format&amp;quot; or &amp;quot;unknown key type&amp;quot;?&#039;&#039;&#039;&lt;br /&gt;
** Your OpenSSH version is too old (needs 8.2+)&lt;br /&gt;
** The cluster doesn&#039;t support FIDO2 keys (only bwUniCluster 3.0 and NEMO 2)&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;PIN locked after too many attempts?&#039;&#039;&#039;&lt;br /&gt;
** Reset FIDO2 PIN: See [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#configure-yubikey-for-fido2 PIN reset guide]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Need more help?&#039;&#039;&#039; See the [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#troubleshooting complete troubleshooting guide].&lt;br /&gt;
&lt;br /&gt;
= Learn More =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cluster-Specific Documentation:&#039;&#039;&#039;&lt;br /&gt;
* [[Registration/SSH|Complete SSH Key Registration Guide for bwUniCluster/bwForCluster]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Yubikey &amp;amp; FIDO2 SSH Documentation:&#039;&#039;&#039;&lt;br /&gt;
* [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey Comprehensive Yubikey Guide] - Setup, configuration, troubleshooting&lt;br /&gt;
** [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#creating-discoverable-ssh-keys Resident Keys] - Store keys directly on Yubikey&lt;br /&gt;
** [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#ssh-configuration-for-fido2-secured-ssh-keys SSH Config Examples] - Automate your SSH connections&lt;br /&gt;
** [https://gitlab.rz.uni-freiburg.de/escience-public/yubikey#yubikey-best-practices Backup Strategies] - Multi-key and recovery setups&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Official Yubico Documentation:&#039;&#039;&#039;&lt;br /&gt;
* [https://developers.yubico.com/SSH/Securing_SSH_with_FIDO2.html Securing SSH with FIDO2] - Technical deep dive&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Login_Problems&amp;diff=15464</id>
		<title>Login Problems</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Login_Problems&amp;diff=15464"/>
		<updated>2025-11-20T15:23:35Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Fixed Redlinks&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page lists common causes of problems with logging in, as well as the information you should provide when writing a ticket.&lt;br /&gt;
&lt;br /&gt;
== Cannot Connect to Cluster == &lt;br /&gt;
&lt;br /&gt;
Please note that you need to connect from hosts inside your university network, either from a machine located on the campus or by using a VPN to virtually become part of the university network. Also note that unauthenticated wireless connections of a university may be considered out-of-network.&lt;br /&gt;
&lt;br /&gt;
== Missing Entitlement ==&lt;br /&gt;
&lt;br /&gt;
This can happen because&lt;br /&gt;
&lt;br /&gt;
* you have not completed &amp;lt;b&amp;gt;User Access Step A&amp;lt;/b&amp;gt; for the cluster you want to use: &amp;lt;br&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr; [[Registration/bwForCluster/Entitlement|bwForCluster]] or &amp;lt;br&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr; [[Registration/bwUniCluster/Entitlement|bwUniCluster]] &lt;br /&gt;
* you have left your university&lt;br /&gt;
* there is a malfunction of your university&#039;s IDM&lt;br /&gt;
&lt;br /&gt;
You can check your Entitlements as shown on the Entitlement page:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr;  &amp;lt;b&amp;gt;[[Registration/bwForCluster/Entitlement#Check_your_Entitlements|Check your Entitlements]]&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In short: go to https://login.bwidm.de/ and select Index (Übersicht) &amp;amp;rarr; Personal Data, then select the tab &amp;quot;Shibboleth&amp;quot; and search for Cluster on that page to find &amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwUniCluster&amp;lt;/pre&amp;gt; or &amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/pre&amp;gt; (these are the names of the Entitlement, not links).&lt;br /&gt;
&lt;br /&gt;
In case your Entitlement is missing, please &amp;lt;b&amp;gt;contact the support of the university you are a member of&amp;lt;/b&amp;gt;. The bw Clusters rely on your university to provide the correct information about your ongoing affiliation and cannot solve the problem.&lt;br /&gt;
&lt;br /&gt;
=== Missing bw Group ===&lt;br /&gt;
&lt;br /&gt;
You can also check if your compute project group still exists by selecting the tab &amp;quot;Groups&amp;quot; on the same page as described above.&lt;br /&gt;
&lt;br /&gt;
== Lost TOTP Secret ==&lt;br /&gt;
&lt;br /&gt;
Please avoid this problem by creating a backup TAN list and keeping it in a safe place - e.g. in printed form in a folder in your office.&lt;br /&gt;
&lt;br /&gt;
To create a backup TAN list, use the registration server of your cluster. You will be asked for a current TOTP password from your device.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr; see &amp;lt;b&amp;gt; [[Registration/2FA#Token_Management]] &amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please contact us via authenticated login in the [[bwSupportPortal|Support Portal]] and not via Email. &lt;br /&gt;
&lt;br /&gt;
We will only be able to deactivate your current secret, which will enable you to create a new one without having to enter a TOTP. &lt;br /&gt;
&lt;br /&gt;
For the next steps, follow the relevant wiki page: https://wiki.bwhpc.de/e/Registration/2FA&lt;br /&gt;
&lt;br /&gt;
In short:&lt;br /&gt;
&lt;br /&gt;
a)  Delete the old token, reload the web page and verify that all tokens have been deleted.&lt;br /&gt;
b)  Create a new smartphone token and test it (if the QR code is missing, check your browser&#039;s add-ons/ad-blockers).&lt;br /&gt;
c)  Create a backup TAN list, print it on paper and keep it in a safe place for cases like this.&lt;br /&gt;
&lt;br /&gt;
Note: Users from KIT Karlsruhe who are using a cluster which uses https://login.bwidm.de/ to register the service will need to contact KIT support to have the TOTP reset.&lt;br /&gt;
&lt;br /&gt;
== TOTP Secret Disabled ==&lt;br /&gt;
&lt;br /&gt;
After a certain number of unsuccessful TOTP attempts, your TOTP secret is disabled; a successful login in between resets this error counter.&lt;br /&gt;
&lt;br /&gt;
In this case, you will have to open a ticket at the [[bwSupportPortal|Support Portal]] for us to re-enable the TOTP. &lt;br /&gt;
&lt;br /&gt;
Some programs used to open an SSH connection can, depending on their configuration, keep trying to re-login in the background; these attempts fail because this mechanism cannot account for entering a TOTP. Please check the configuration of your software for such behavior.&lt;br /&gt;
&lt;br /&gt;
== Forgot Password == &lt;br /&gt;
&lt;br /&gt;
You can reset the SSH password on the registration server of your cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr; a list of registration servers can also be found at: &amp;lt;b&amp;gt; [[Registration/2FA#Token_Management]] &amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ZAS Permission Missing (bwForClusters only) ==&lt;br /&gt;
&lt;br /&gt;
This can have one of the following reasons:&lt;br /&gt;
&lt;br /&gt;
* You have not yet successfully become a member of your compute project group&lt;br /&gt;
* Your account has been disabled by an RV owner or manager and no longer belongs to the compute project&lt;br /&gt;
* The compute project you are a member of has not been renewed and has expired&lt;br /&gt;
* There is a ZAS malfunction&lt;br /&gt;
&lt;br /&gt;
You can see the status of your own compute project or a project membership at https://zas.bwhpc.de/en/zas_info_bwforcluster.php via the menu items &amp;quot;RV collaboration&amp;quot; or &amp;quot;My RVs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Writing a Ticket == &lt;br /&gt;
&lt;br /&gt;
Before writing a ticket to the [[bwSupportPortal|Support Portal]], please check if your Entitlement is missing - in which case you should contact your own university support. &lt;br /&gt;
&lt;br /&gt;
Please provide the following information:&lt;br /&gt;
&lt;br /&gt;
* which cluster you have tried to access&lt;br /&gt;
* your username&lt;br /&gt;
* your eppn&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Login_Problems&amp;diff=15463</id>
		<title>Login Problems</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Login_Problems&amp;diff=15463"/>
		<updated>2025-11-20T15:22:37Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Fixed Redlink&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page lists common causes of problems with logging in, as well as the information you should provide when writing a ticket.&lt;br /&gt;
&lt;br /&gt;
== Cannot Connect to Cluster == &lt;br /&gt;
&lt;br /&gt;
Please note that you need to connect from hosts inside your university network, either from a machine located on the campus or by using a VPN to virtually become part of the university network. Also note that unauthenticated wireless connections of a university may be considered out-of-network.&lt;br /&gt;
&lt;br /&gt;
== Missing Entitlement ==&lt;br /&gt;
&lt;br /&gt;
This can happen because&lt;br /&gt;
&lt;br /&gt;
* you have not completed &amp;lt;b&amp;gt;User Access Step A&amp;lt;/b&amp;gt; for the cluster you want to use: &amp;lt;br&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr; [[Registration/bwForCluster/Entitlement|bwForCluster]] or &amp;lt;br&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr; [[Registration/bwUniCluster/Entitlement|bwUniCluster]] &lt;br /&gt;
* you have left your university&lt;br /&gt;
* there is a malfunction of your university&#039;s IDM&lt;br /&gt;
&lt;br /&gt;
You can check your Entitlements as shown on the Entitlement page:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr;  &amp;lt;b&amp;gt;[[Registration/bwForCluster/Entitlement#Check_your_Entitlements|Check your Entitlements]]&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In short: go to https://login.bwidm.de/ and select Index (Übersicht) &amp;amp;rarr; Personal Data, then select the tab &amp;quot;Shibboleth&amp;quot; and search for Cluster on that page to find &amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwUniCluster&amp;lt;/pre&amp;gt; or &amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/pre&amp;gt; (these are the names of the Entitlement, not links).&lt;br /&gt;
&lt;br /&gt;
In case your Entitlement is missing, please &amp;lt;b&amp;gt;contact the support of the university you are a member of&amp;lt;/b&amp;gt;. The bw Clusters rely on your university to provide the correct information about your ongoing affiliation and cannot solve the problem.&lt;br /&gt;
&lt;br /&gt;
=== Missing bw Group ===&lt;br /&gt;
&lt;br /&gt;
You can also check if your compute project group still exists by selecting the tab &amp;quot;Groups&amp;quot; on the same page as described above.&lt;br /&gt;
&lt;br /&gt;
== Lost TOTP Secret ==&lt;br /&gt;
&lt;br /&gt;
Please avoid this problem by creating a backup TAN list and keeping it in a safe place - e.g. in printed form in a folder in your office.&lt;br /&gt;
&lt;br /&gt;
To create a backup TAN list, use the registration server of your cluster. You will be asked for a current TOTP password from your device.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr; see &amp;lt;b&amp;gt; [[Registration/2FA#Token_Management]] &amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Please contact us via authenticated login in the [[Support Portal]] and not via Email. &lt;br /&gt;
&lt;br /&gt;
We will only be able to deactivate your current secret, which will enable you to create a new one without having to enter a TOTP. &lt;br /&gt;
&lt;br /&gt;
For the next steps, follow the relevant wiki page: https://wiki.bwhpc.de/e/Registration/2FA&lt;br /&gt;
&lt;br /&gt;
In short:&lt;br /&gt;
&lt;br /&gt;
a)  Delete the old token, reload the web page and verify that all tokens have been deleted.&lt;br /&gt;
b)  Create a new smartphone token and test it (if the QR code is missing, check your browser&#039;s add-ons/ad-blockers).&lt;br /&gt;
c)  Create a backup TAN list, print it on paper and keep it in a safe place for cases like this.&lt;br /&gt;
&lt;br /&gt;
Note: Users from KIT Karlsruhe who are using a cluster which uses https://login.bwidm.de/ to register the service will need to contact KIT support to have the TOTP reset.&lt;br /&gt;
&lt;br /&gt;
== TOTP Secret Disabled ==&lt;br /&gt;
&lt;br /&gt;
After a certain number of unsuccessful TOTP attempts, your TOTP secret is disabled; a successful login in between resets this error counter.&lt;br /&gt;
&lt;br /&gt;
In this case, you will have to open a ticket at the [[Support Portal]] for us to re-enable the TOTP. &lt;br /&gt;
&lt;br /&gt;
Some programs used to open an SSH connection can, depending on their configuration, keep trying to re-login in the background; these attempts fail because this mechanism cannot account for entering a TOTP. Please check the configuration of your software for such behavior.&lt;br /&gt;
&lt;br /&gt;
== Forgot Password == &lt;br /&gt;
&lt;br /&gt;
You can reset the SSH password on the registration server of your cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;rarr; a list of registration servers can also be found at: &amp;lt;b&amp;gt; [[Registration/2FA#Token_Management]] &amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ZAS Permission Missing (bwForClusters only) ==&lt;br /&gt;
&lt;br /&gt;
This can have one of the following causes:&lt;br /&gt;
&lt;br /&gt;
* You have not yet successfully become a member of your compute project group&lt;br /&gt;
* Your account has been disabled by an RV owner or manager and no longer belongs to the compute project&lt;br /&gt;
* The compute project you are a member of has not been renewed and has expired&lt;br /&gt;
* There is a ZAS malfunction&lt;br /&gt;
&lt;br /&gt;
You can see the status of your own compute project or a project membership at https://zas.bwhpc.de/en/zas_info_bwforcluster.php under the menu items &amp;quot;RV collaboration&amp;quot; and &amp;quot;My RVs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Writing a Ticket == &lt;br /&gt;
&lt;br /&gt;
Before writing a ticket to the [[bwSupportPortal]], please check whether your entitlement is missing; in that case, contact your own university&#039;s support instead. &lt;br /&gt;
&lt;br /&gt;
Please provide the following information:&lt;br /&gt;
&lt;br /&gt;
* which cluster you have tried to access&lt;br /&gt;
* your username&lt;br /&gt;
* your ePPN (eduPersonPrincipalName)&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/Containers&amp;diff=15396</id>
		<title>Development/Containers</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/Containers&amp;diff=15396"/>
		<updated>2025-11-12T13:56:55Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Nemo1 -&amp;gt; Nemo2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The bwHPC clusters mostly provide Singularity as a container runtime environment.&lt;br /&gt;
&lt;br /&gt;
To learn more about the usage of Singularity, visit the corresponding cluster page.&lt;br /&gt;
&lt;br /&gt;
* [[BwUniCluster3.0/Containers]]&lt;br /&gt;
* [[JUSTUS2/Software/Singularity]]&lt;br /&gt;
* [[Helix/Software/Singularity]]&lt;br /&gt;
* [[NEMO2/Software/Singularity_Containers]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/Containers&amp;diff=15395</id>
		<title>Development/Containers</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/Containers&amp;diff=15395"/>
		<updated>2025-11-12T13:55:15Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The bwHPC clusters mostly provide Singularity as a container runtime environment.&lt;br /&gt;
&lt;br /&gt;
To learn more about the usage of Singularity, visit the corresponding cluster page.&lt;br /&gt;
&lt;br /&gt;
* [[BwUniCluster3.0/Containers]]&lt;br /&gt;
* [[JUSTUS2/Software/Singularity]]&lt;br /&gt;
* [[Helix/Software/Singularity]]&lt;br /&gt;
* [[NEMO1/Software/Singularity_Containers]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=OpenFoam&amp;diff=15394</id>
		<title>OpenFoam</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=OpenFoam&amp;diff=15394"/>
		<updated>2025-11-12T13:52:51Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Shortened redirect chain&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[BwUniCluster3.0/Software/OpenFoam]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Username&amp;diff=15393</id>
		<title>Registration/Login/Username</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Username&amp;diff=15393"/>
		<updated>2025-11-12T13:51:11Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Removed outdated references to UC2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
= Login Username =&lt;br /&gt;
&lt;br /&gt;
All members of universities and colleges in Baden-Württemberg can use the bwHPC resources.&lt;br /&gt;
Prefixes are used to ensure that the usernames assigned by the home organization are unique within the state.&lt;br /&gt;
The username for the bwHPC clusters is the same as the one assigned by the university or college, but it is prefixed with two letters for the home institution.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Starting with &#039;&#039;&#039;bwUniCluster 3.0&#039;&#039;&#039;, KIT users have the &#039;&#039;&#039;prefix ka&#039;&#039;&#039; on all systems. &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There are two ways to find out your username for the bwForCluster and the bwUniCluster:&lt;br /&gt;
&lt;br /&gt;
* [[#Get_your_Username_by_adding_a_Prefix_to_your_local_Account_Name|Get your Username by adding a Prefix to your local Account Name]]&lt;br /&gt;
* [[#Find_out_your_Username_by_visiting_the_Registration_Service|Find out your Username by visiting the Registration Service]]&lt;br /&gt;
&lt;br /&gt;
== Get your Username by adding a Prefix to your local Account Name ==&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
If your university is not listed in the [https://www.bwidm.de/hochschulen.php &#039;&#039;&#039;bwIDM Membership Table&#039;&#039;&#039;], please ask your university to join bwIDM or update the table.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
If you want to use a &#039;&#039;&#039;bwForCluster&#039;&#039;&#039; or the &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; you need to add a prefix to your local username.&lt;br /&gt;
The username is constructed as follows (see [[#Examples|Examples]] below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;prefix&amp;gt;_&amp;lt;local username&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Prefix for Universities of Applied Sciences ===&lt;br /&gt;
&lt;br /&gt;
For universities, see the table below. Users from the [[Registration/HAW|HAW BW e.V.]] or a university of applied sciences should use this table maintained by bwIDM as a guideline:&lt;br /&gt;
* [https://www.bwidm.de/hochschulen.php &#039;&#039;&#039;bwIDM Membership Table&#039;&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
=== Prefix for Universities ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:280px&amp;quot; | User from University&lt;br /&gt;
!style=&amp;quot;width:60px&amp;quot;  | Prefix&lt;br /&gt;
!style=&amp;quot;width:430px&amp;quot; | Username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg&lt;br /&gt;
| fr&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| fr_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg&lt;br /&gt;
| hd&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| hd_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim&lt;br /&gt;
| ho&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| ho_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
| ka&lt;br /&gt;
| align=&amp;quot;center&amp;quot;| ka_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039; &lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz&lt;br /&gt;
| kn&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| kn_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim&lt;br /&gt;
| ma&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| ma_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart&lt;br /&gt;
| st&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| st_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen&lt;br /&gt;
| tu&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| tu_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm&lt;br /&gt;
| ul&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;| ul_&#039;&#039;&amp;lt;username&amp;gt;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&lt;br /&gt;
* If you are a member of the University of Konstanz and your local username is &amp;lt;code&amp;gt;ab1234&amp;lt;/code&amp;gt;, your username on any bwHPC cluster is &amp;lt;code&amp;gt;kn_ab1234&amp;lt;/code&amp;gt;.&lt;br /&gt;
* If your local username for the university is &amp;lt;code&amp;gt;vwxyz1234&amp;lt;/code&amp;gt; and you are a user of the University of Freiburg, your username on any bwHPC cluster is &amp;lt;code&amp;gt;fr_vwxyz1234&amp;lt;/code&amp;gt;.&lt;br /&gt;
* If you are from Aalen University and your username is &amp;lt;code&amp;gt;xyzs12342&amp;lt;/code&amp;gt;, your username for any bwHPC cluster is &amp;lt;code&amp;gt;aa_xyzs12342&amp;lt;/code&amp;gt;.&lt;br /&gt;
* If your KIT username is &amp;lt;code&amp;gt;ab1234&amp;lt;/code&amp;gt;, your username on any bwHPC cluster is &amp;lt;code&amp;gt;ka_ab1234&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Find out your Username by visiting the Registration Service ==&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can log in to the registration service and verify your username online.&lt;br /&gt;
To do this, follow the next steps:&lt;br /&gt;
&lt;br /&gt;
1. Select the cluster you want to know your username for: &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;bwUniCluster 3.0&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://bwservices.uni-tuebingen.de &#039;&#039;&#039;BinAC&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;JUSTUS 2&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://bwservices.uni-heidelberg.de &#039;&#039;&#039;Helix&#039;&#039;&#039;] &amp;lt;br /&amp;gt; &amp;amp;rarr; [https://login.bwidm.de &#039;&#039;&#039;NEMO2&#039;&#039;&#039;]&lt;br /&gt;
 &lt;br /&gt;
2. Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
3. Enter your home-organisational user ID / username and your home-organisational password and click the &#039;&#039;&#039;Login&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
4. You will be redirected back to the registration website.&lt;br /&gt;
&lt;br /&gt;
5. Find the cluster entry and select &#039;&#039;&#039;Registry Info&#039;&#039;&#039;.&lt;br /&gt;
[[File:BwIDM-pw.png|center|frame|Check Registry Info.]]&lt;br /&gt;
&lt;br /&gt;
6. Depending on the registration service, you will see one of the following entries. See &#039;&#039;&#039;Username for login&#039;&#039;&#039; or &#039;&#039;&#039;localUid&#039;&#039;&#039; for your username.&lt;br /&gt;
{| style=&amp;quot;width:50%;&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:50%;&amp;quot; align=&amp;quot;center&amp;quot;|[[File:BwIDM-user.png|center|thumb|300px|Username for login (newer registration services).]]&lt;br /&gt;
!style=&amp;quot;width:50%;&amp;quot; align=&amp;quot;center&amp;quot;|[[File:BwIDM-uid.png|center|thumb|300px|Username for login in &#039;&#039;localUid&#039;&#039; (older registration services).]]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/Python&amp;diff=15366</id>
		<title>Development/Python</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/Python&amp;diff=15366"/>
		<updated>2025-10-29T14:11:48Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Python is a versatile, easy-to-learn, interpreted programming language. It offers a wide range of libraries for scientific computing and visualization, is among the most popular languages for machine learning, and can serve as an open-source alternative to Matlab for many tasks.&lt;br /&gt;
&lt;br /&gt;
== Installation and Versions ==&lt;br /&gt;
Python is available on all systems. With &amp;lt;code&amp;gt;python --version&amp;lt;/code&amp;gt; you can see the currently active default Python version. In general, you can choose between several types of Python installations:&lt;br /&gt;
* &#039;&#039;&#039;System python:&#039;&#039;&#039; This python version comes together with the operating system and is available upon login to the cluster. Other python versions might be installed along with it. All versions can be seen with &lt;br /&gt;
*: &amp;lt;code&amp;gt;ls /usr/bin/python[0-9].*[0-9] | sort -V | cut -d&amp;quot;/&amp;quot; -f4 | xargs&amp;lt;/code&amp;gt;&lt;br /&gt;
*: They can change over time. You can access a specific Python version by specifying the version in the Python command.&lt;br /&gt;
* &#039;&#039;&#039;[[Environment_Modules | Software module]]:&#039;&#039;&#039; Available versions can be identified via&lt;br /&gt;
*:  &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
  module avail devel/python&lt;br /&gt;
  &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
*: The provided software modules on the clusters are optimized for making efficient use of the cluster resources. Please use them whenever possible. &lt;br /&gt;
* &#039;&#039;&#039;Python distributions and virtual environments:&#039;&#039;&#039; By using Python distributions such as Anaconda, you can easily install the needed Python version into a virtual environment. For the use of conda on bwHPC clusters, please refer to [[Development/Conda|Conda]]. Alternatively, you can use more Python-specific tools. Some options are listed in [[#Virtual Environments and Package Management | Virtual Environments and Package Management]].&lt;br /&gt;
* &#039;&#039;&#039;[[Development/Containers | Container]]:&#039;&#039;&#039; Containers can contain their own python installation. Keep this in mind when you are working with containers provided by others.&lt;br /&gt;
&lt;br /&gt;
== Running Python Code ==&lt;br /&gt;
&lt;br /&gt;
There are three ways to run Python commands:&lt;br /&gt;
&lt;br /&gt;
* Within a &#039;&#039;&#039;terminal&#039;&#039;&#039; by executing the command &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt;. This starts a Python shell, where all commands are evaluated by the Python interpreter. &lt;br /&gt;
* Within a &#039;&#039;&#039;script&#039;&#039;&#039; (file ends with &#039;&#039;.py&#039;&#039; and can be run with &amp;lt;code&amp;gt;python myProgram.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Within a &#039;&#039;&#039;notebook&#039;&#039;&#039; (file ends with &#039;&#039;.ipynb&#039;&#039;). Next to your Python code, a notebook can contain markdown and code in other languages. Besides software development itself, notebooks are well suited for teaching, prototyping, and visualization.  &lt;br /&gt;
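The first two ways can be sketched as follows; the -c flag hands a one-line program to the interpreter, and the script name below is hypothetical:

```shell
# Evaluate a one-line program with the interpreter instead of an interactive shell
python3 -c 'print(6 * 7)'    # prints 42
# Run a script file (hypothetical name)
# python3 myProgram.py
```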
&lt;br /&gt;
== Development Environments ==&lt;br /&gt;
&lt;br /&gt;
Development Environments are usually more comfortable than running code directly from the shell. Some common options are: &lt;br /&gt;
&lt;br /&gt;
* [[Jupyter]]&lt;br /&gt;
* [[Development/VS_Code | VS Code]]&lt;br /&gt;
* PyCharm&lt;br /&gt;
&lt;br /&gt;
== Virtual Environments and Package Management ==&lt;br /&gt;
&lt;br /&gt;
Packages contain a set of functions that offer additional functionality. A package can be installed by using a package manager. Virtual environments prevent conflicts between different Python packages by using separate installation directories. &lt;br /&gt;
&lt;br /&gt;
At least one virtual environment should be defined per project, so that it is clear which packages a specific project needs. All virtual environments allow you to save the corresponding packages with their exact version numbers to a file. This makes it possible to reinstall them elsewhere, which improves the reproducibility of projects, and it also makes it easier to find and remove packages that are no longer needed. &lt;br /&gt;
&lt;br /&gt;
=== Overview ===&lt;br /&gt;
The following table provides an overview of common tools in the field of virtual environments and package management. The main differences between the various options are highlighted. After deciding on a specific tool, it can be installed by following the given link. &lt;br /&gt;
In short, if you plan to use...&lt;br /&gt;
* ...Python only: &lt;br /&gt;
** venv is the most basic option. Other options build upon it. &lt;br /&gt;
** poetry is widely used and offers a broad set of functionality&lt;br /&gt;
** uv is the latest option and much faster than poetry while offering the same (or more) functionality.&lt;br /&gt;
* ...Python + Conda: &lt;br /&gt;
** We wouldn&#039;t recommend it, but if you want to use conda only: install the conda packages into the conda environment first and the Python packages afterwards. Otherwise, problems might arise.&lt;br /&gt;
** For a faster and more up to date solution, choose pixi. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|- style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Tool&lt;br /&gt;
! Description&lt;br /&gt;
! Can install python versions&lt;br /&gt;
! Installs packages from PyPI&lt;br /&gt;
! Installs packages from conda&lt;br /&gt;
! Dependency Resolver&lt;br /&gt;
! Dependency Management&lt;br /&gt;
! Creates Virtual Environments&lt;br /&gt;
! Supports building, packaging and publishing code (to PyPI)&lt;br /&gt;
|-&lt;br /&gt;
| pyenv&lt;br /&gt;
| Manages python versions on your system.&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| pip&lt;br /&gt;
| For installing python packages.&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| venv&lt;br /&gt;
| For installing and managing python packages. Part of Python&#039;s standard library.&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| poetry&lt;br /&gt;
| For installing and managing python packages. Install it with pipx.&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| pipx&lt;br /&gt;
| For installing and running python applications (like poetry) globally while having them in isolated environments. It is useful for keeping applications globally available and at the same time separated in their own virtual environments. Use it when the installation instructions of an application offer you this way of installation.&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes (only for single applications)&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| uv&lt;br /&gt;
| Replaces poetry, pyenv, pip etc. and is very fast (https://www.loopwerk.io/articles/2025/uv-keeps-getting-better/)&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| pixi&lt;br /&gt;
| For installing and managing python as well as conda packages. Uses uv in the background&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Pip ===&lt;br /&gt;
&lt;br /&gt;
The standard package manager under Python is &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;. It can be used to install, update and delete packages. Pip can be called directly or via &amp;lt;code&amp;gt;python -m pip&amp;lt;/code&amp;gt;. The standard repository from which packages are obtained is PyPI (https://pypi.org/). When a package depends on others, they are automatically installed as well.&lt;br /&gt;
In the following, the most common pip commands are shown as examples. Packages should always be installed within virtual environments to avoid conflicting installations. If you decide not to use a virtual environment, the install commands have to be accompanied by a &amp;lt;code&amp;gt;--user&amp;lt;/code&amp;gt; flag or controlled via environment variables. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Installation of packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install pandas           # Installs the latest compatible version of pandas and its required dependencies&lt;br /&gt;
pip install pandas==1.5.3    # Installs exactly this version&lt;br /&gt;
pip install &amp;quot;pandas&amp;gt;=1.5.3&amp;quot;  # Installs a version newer than or equal to 1.5.3 (quoted so the shell does not treat &amp;gt;= as a redirection)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The packages from PyPI usually consist of precompiled libraries. However, &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; can also build packages from source code; this may require a C/C++ compiler and other dependencies needed to build the libraries. In the following example, pip obtains the source code of matplotlib from github.com, installs its dependencies, compiles the library, and installs it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install git+https://github.com/matplotlib/matplotlib&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Upgrade packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install --upgrade pandas # Updates the library if update is available&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Removing packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip uninstall pandas         # Removes pandas&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Show packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip list           # Shows the installed packages&lt;br /&gt;
pip freeze         # Shows the installed packages and their versions&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Save State&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
To allow for reproducibility it is important to provide information about the full list of packages and their exact versions [https://pip.pypa.io/en/stable/topics/repeatable-installs/ see version pinning]. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip freeze &amp;gt; requirements.txt     # redirect package and version information to a textfile&lt;br /&gt;
pip install -r requirements.txt   # Installs all packages that are listed in the file&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Venv ===&lt;br /&gt;
&lt;br /&gt;
The module &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; enables the creation of a virtual environment and is a standard component of Python. Creating a &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; means that a folder is created which contains a separate copy of the Python binary file as well as &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;setuptools&amp;lt;/code&amp;gt;. After activating the &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt;, the binary file in this folder is used when &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; is called. This folder is also the installation target for other Python packages.&lt;br /&gt;
&lt;br /&gt;
==== Creation ====&lt;br /&gt;
&lt;br /&gt;
Create, activate, install software, deactivate: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python3.11 -m venv myEnv        # Create venv&lt;br /&gt;
source myEnv/bin/activate       # Activate venv&lt;br /&gt;
pip install --upgrade pip       # Update of the venv-local pip&lt;br /&gt;
pip install &amp;lt;list of packages&amp;gt;  # Install packages/modules&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Install list of software:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install -r requirements.txt # Install packages/modules&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Usage ====&lt;br /&gt;
To use the virtual environment after all dependencies have been installed there, it is sufficient to simply activate it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source myEnv/bin/activate       # Activate venv&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; activated the terminal prompt will reflect that accordingly:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
(myEnv) $ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It is no longer necessary to specify the Python version; the simple command &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt; starts the Python version that was used to create the virtual environment.&amp;lt;br/&amp;gt;&lt;br /&gt;
You can check which Python version is in use via:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
(myEnv) $ which python&lt;br /&gt;
&amp;lt;/path/to/project&amp;gt;/myEnv/bin/python&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Poetry ===&lt;br /&gt;
&lt;br /&gt;
==== Creation ====&lt;br /&gt;
When you want to create a virtual environment for an already existing project, you can go to the top-level directory and run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry init      # Interactively create a pyproject.toml for the project&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Otherwise start with the demo project: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry new poetry-demo&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can set the allowed Python versions in the pyproject.toml. To switch between Python installations on your system, you can use&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry env use python3.11&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Usage ====&lt;br /&gt;
&lt;br /&gt;
Install and update packages&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry add &amp;lt;package_name&amp;gt;   # Add a package to the project and install it&lt;br /&gt;
poetry install     # Install all dependencies declared in pyproject.toml&lt;br /&gt;
poetry update      # Update to latest allowed versions of packages&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To execute something within the virtual environment:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry run &amp;lt;command&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Helpful links: &lt;br /&gt;
* [https://python-poetry.org/docs/managing-environments/#activating-the-environment Activate Environment]&lt;br /&gt;
&lt;br /&gt;
Helpful commands&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry env info # show environment information&lt;br /&gt;
poetry env list # list all virtual environments associated with the current project &lt;br /&gt;
poetry env list --full-path&lt;br /&gt;
poetry env remove # delete virtual environment&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== uv ===&lt;br /&gt;
&lt;br /&gt;
==== Creation ====&lt;br /&gt;
&lt;br /&gt;
With uv you can create two types of new projects: an application (the default) or a library.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[https://docs.astral.sh/uv/concepts/projects/ projects]&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
uv init &amp;lt;application_name&amp;gt;&lt;br /&gt;
# If you want to create a python package: &lt;br /&gt;
uv init --package &amp;lt;application_name&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an existing project, you can create a virtual environment at .venv with a specific python version as follows: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
uv python list # get available versions&lt;br /&gt;
uv venv --python 3.13&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A specific name or path can be given as well; otherwise the environment is created at &amp;lt;code&amp;gt;.venv&amp;lt;/code&amp;gt;. If the requested Python version is not available on the system, uv downloads it. &lt;br /&gt;
&lt;br /&gt;
==== Usage ====&lt;br /&gt;
The currently active environment is saved in the environment variable &amp;lt;code&amp;gt;VIRTUAL_ENV&amp;lt;/code&amp;gt;. The environment can be activated and deactivated as follows:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source .venv/bin/activate&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you have activated the virtual environment, you can install packages as usual. Otherwise, you need to prefix the command with &amp;quot;uv&amp;quot; to run it within the virtual environment: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
uv add &amp;lt;package_name&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note: Additionally, uv provides a pip-compatible interface, but it should only be used in fitting edge cases.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To execute a python file within the virtual environment:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
uv run &amp;lt;file.py&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If one project consists of multiple packages or package groups, it can make sense to look into uv workspaces: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[https://docs.astral.sh/uv/concepts/projects/workspaces/ workspaces]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Pixi ===&lt;br /&gt;
&lt;br /&gt;
==== Creation ====&lt;br /&gt;
&lt;br /&gt;
With pixi you can create a new project that manages both conda and PyPI packages; the &amp;lt;code&amp;gt;pyproject&amp;lt;/code&amp;gt; format stores the configuration in a pyproject.toml file.&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[https://pixi.sh/latest/python/tutorial/ Python Tutorial]&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pixi init &amp;lt;project_name&amp;gt; --format pyproject&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Usage ====&lt;br /&gt;
Add a new package to the environment: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Get it from conda:&lt;br /&gt;
pixi add &amp;lt;package_name&amp;gt;&lt;br /&gt;
# Get it from PyPI:&lt;br /&gt;
pixi add &amp;lt;package_name&amp;gt; --pypi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To execute a command within the virtual environment:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pixi run &amp;lt;command&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Workflow Management Systems ==&lt;br /&gt;
&lt;br /&gt;
There are many different workflow management systems. One example is [https://snakemake.readthedocs.io/en/stable/index.html Snakemake], which is widely used in biology and medicine. Two use cases for Snakemake workflows are: &lt;br /&gt;
* Using an existing workflow to analyze data. &lt;br /&gt;
* Creating your own workflow to make your research more reproducible. &lt;br /&gt;
Outside the Python world, a common choice for bioinformaticians is Nextflow, as it has a strong community that focuses on standardisation. These more standardised pipelines can be found on the [https://nf-co.re/pipelines/ nf-core website].&lt;br /&gt;
In other research fields, other workflow management systems might be more common. &lt;br /&gt;
&lt;br /&gt;
=== Snakemake ===&lt;br /&gt;
&lt;br /&gt;
Some recommendations: &lt;br /&gt;
* Start the Snakemake pipeline within a batch job. &lt;br /&gt;
* Define rules that need little compute time as a &amp;quot;localrule&amp;quot; so they run locally instead of as separate cluster jobs. &lt;br /&gt;
* Work with rule grouping as described here: https://hpc.nih.gov/apps/snakemake.html#group .&lt;br /&gt;
* Set the following Snakemake profile options:&lt;br /&gt;
*: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
max-jobs-per-second: 1&lt;br /&gt;
max-status-checks-per-second: 1&lt;br /&gt;
jobs: 400&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
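&lt;br /&gt;
A minimal Snakefile following these recommendations might look like the following sketch (rule names, file paths, and the &amp;lt;code&amp;gt;analyze.py&amp;lt;/code&amp;gt; script are illustrative, not part of any real pipeline):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Cheap bookkeeping rules run locally instead of as cluster jobs&lt;br /&gt;
localrules: summarize&lt;br /&gt;
&lt;br /&gt;
rule all:&lt;br /&gt;
    input: &amp;quot;results/summary.txt&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Expensive step, submitted as a cluster job by Snakemake&lt;br /&gt;
rule analyze:&lt;br /&gt;
    input: &amp;quot;data/{sample}.csv&amp;quot;&lt;br /&gt;
    output: &amp;quot;results/{sample}.out&amp;quot;&lt;br /&gt;
    resources: runtime=60, mem_mb=4000&lt;br /&gt;
    shell: &amp;quot;python analyze.py {input} &amp;gt; {output}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Quick aggregation step, declared as a localrule above&lt;br /&gt;
rule summarize:&lt;br /&gt;
    input: expand(&amp;quot;results/{sample}.out&amp;quot;, sample=[&amp;quot;a&amp;quot;, &amp;quot;b&amp;quot;])&lt;br /&gt;
    output: &amp;quot;results/summary.txt&amp;quot;&lt;br /&gt;
    shell: &amp;quot;cat {input} &amp;gt; {output}&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;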
&lt;br /&gt;
== File Formats ==&lt;br /&gt;
&lt;br /&gt;
While final results are best provided in a human-readable format like CSV, it can be beneficial to store intermediate results in file formats that are faster to read from and write to. Some common options are the following: &lt;br /&gt;
* &#039;&#039;&#039;Pickle&#039;&#039;&#039;&lt;br /&gt;
*: Python-specific. Very fast for storing numerical dataframes. Not recommended for long-term storage, as changes in library versions etc. can lead to unreadable pickle files. &lt;br /&gt;
* &#039;&#039;&#039;Feather&#039;&#039;&#039;&lt;br /&gt;
*: Good option in regards to read/write time.&lt;br /&gt;
* &#039;&#039;&#039;Parquet&#039;&#039;&#039;&lt;br /&gt;
*: Good for big data files. Reduces storage need. &lt;br /&gt;
*: Used via libraries like PyArrow and FastParquet. The latter offers fewer features but better performance with small to medium-sized datasets (see the comparison [https://dev.to/alexmercedcoder/all-about-parquet-part-08-reading-and-writing-parquet-files-in-python-338d here]).&lt;br /&gt;
*: Significantly slower when storing a mixed dataframe than a purely numerical one. &lt;br /&gt;
* &#039;&#039;&#039;HDF5&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Database&#039;&#039;&#039;&lt;br /&gt;
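&lt;br /&gt;
As an illustration, with pandas the same dataframe can be written to and read back from several of these formats (the file names are placeholders; Feather and Parquet additionally require PyArrow or, for Parquet, alternatively FastParquet):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import pandas as pd&lt;br /&gt;
&lt;br /&gt;
df = pd.DataFrame({&amp;quot;a&amp;quot;: range(1000), &amp;quot;b&amp;quot;: [i * 0.5 for i in range(1000)]})&lt;br /&gt;
&lt;br /&gt;
# Parquet: compact on disk, good for big data files&lt;br /&gt;
df.to_parquet(&amp;quot;intermediate.parquet&amp;quot;)&lt;br /&gt;
df = pd.read_parquet(&amp;quot;intermediate.parquet&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Feather: fast to read and write&lt;br /&gt;
df.to_feather(&amp;quot;intermediate.feather&amp;quot;)&lt;br /&gt;
df = pd.read_feather(&amp;quot;intermediate.feather&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# CSV: human readable, suitable for final results&lt;br /&gt;
df.to_csv(&amp;quot;final.csv&amp;quot;, index=False)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;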
&lt;br /&gt;
== Best Practices ==&lt;br /&gt;
* Always use virtual environments! Use one environment per project.&lt;br /&gt;
* When creating a new Python project or using an existing one, the following procedure is recommended:&lt;br /&gt;
# Version the Python source files and the &amp;lt;code&amp;gt;requirements.txt&amp;lt;/code&amp;gt; file with a version control system, e.g. git. Exclude unnecessary folders and files, such as &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt;, via an entry in the ignore file (e.g. &amp;lt;code&amp;gt;.gitignore&amp;lt;/code&amp;gt;).&lt;br /&gt;
# Create and activate a virtual environment.&lt;br /&gt;
# Use specialized number-crunching Python modules (e.g. numpy and scipy); don&#039;t use plain Python for serious calculations.&lt;br /&gt;
## Check if optimized compiled modules are available on the cluster (numpy, scipy). &lt;br /&gt;
# Update pip: &amp;lt;code&amp;gt;pip install --upgrade pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Install all required packages via the &amp;lt;code&amp;gt;requirements.txt&amp;lt;/code&amp;gt; file in the case of venv, or by using the corresponding command of your chosen tool.&lt;br /&gt;
* List or dictionary comprehensions are to be preferred over loops, as they are generally faster.&lt;br /&gt;
* Be aware of the differences between references, shallow and deep copies. &lt;br /&gt;
* Do not parallelize by hand; use libraries where possible (Dask, ...).&lt;br /&gt;
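&lt;br /&gt;
The recommended procedure can be sketched as follows for the venv case (project and file names are illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Version the sources; exclude the environment folder via .gitignore&lt;br /&gt;
git init&lt;br /&gt;
echo &amp;quot;venv/&amp;quot; &amp;gt;&amp;gt; .gitignore&lt;br /&gt;
&lt;br /&gt;
# Create and activate a virtual environment&lt;br /&gt;
python3 -m venv venv&lt;br /&gt;
source venv/bin/activate&lt;br /&gt;
&lt;br /&gt;
# Update pip and install the pinned requirements&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
pip install -r requirements.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;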
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Example ==&lt;br /&gt;
== Do&#039;s and don&#039;ts ==&lt;br /&gt;
&lt;br /&gt;
Using Conda and Pip/Venv together&lt;br /&gt;
In some cases, Pip/Venv might be the preferred method for installing an environment. This could be because a central Python package comes with installation instructions that only show Pip as a supported option (like Tensorflow), or because a project was written with Pip in mind, for example by offering a requirements.txt, and a rewrite with testing is not feasible.&lt;br /&gt;
In this case, it makes sense to use Conda as a replacement for Venv and use it to supply the virtual environment with the required Python version. After activating the environment, Pip will be available and refer to this Python environment.&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Status&amp;diff=15357</id>
		<title>Status</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Status&amp;diff=15357"/>
		<updated>2025-10-15T10:38:23Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: ZAS wieder da&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= bwHPC Cluster and Service Status Page =&lt;br /&gt;
&lt;br /&gt;
== Current Status ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- OK/green --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#B8FFB8; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#8AFF8A; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Status&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
Normal operation of all systems.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- warning/yellow --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FFD28A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFC05C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Status&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
TEXT&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- alert/red --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FF8A8A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FF5C5C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Status&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
TEXT&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Old Messages ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;15.10.2025: central application page (ZAS) [https://zas.bwhpc.de/] currently down&amp;lt;/strong&amp;gt;&lt;br /&gt;
* Renewal and application of new projects (Rechenvorhaben/RV) and registration of new RV members not possible.&lt;br /&gt;
* Filling out the bwUniCluster3.0 questionnaire not possible.&lt;br /&gt;
* Login and compute activities are &#039;&#039;&#039;not&#039;&#039;&#039; affected&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;09.10.2025: central application page (ZAS) [https://zas.bwhpc.de/] currently down&amp;lt;/strong&amp;gt;&lt;br /&gt;
* Renewal and application of new projects (Rechenvorhaben/RV) and registration of new RV members not possible.&lt;br /&gt;
* Filling out the bwUniCluster3.0 questionnaire not possible.&lt;br /&gt;
* 10.10.: ongoing issue&lt;br /&gt;
* Login and compute activities are &#039;&#039;&#039;not&#039;&#039;&#039; affected&lt;br /&gt;
* 2025-10-10T16: Some sites report normal operation. Identity providers need to update DFN information, expected within 24h.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=15356</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=15356"/>
		<updated>2025-10-15T10:37:22Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: ZAS wieder da&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;font-size:140%;&amp;quot;&amp;gt;&#039;&#039;&#039;Welcome to the bwHPC Wiki.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
bwHPC represents services and resources in the State of &#039;&#039;&#039;B&#039;&#039;&#039;aden-&#039;&#039;&#039;W&#039;&#039;&#039;ürttemberg, Germany, for High Performance Computing (&#039;&#039;&#039;HPC&#039;&#039;&#039;), Data Intensive Computing (&#039;&#039;&#039;DIC&#039;&#039;&#039;) and Large Scale Scientific Data Management (&#039;&#039;&#039;LS2DM&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The main bwHPC web page is at &#039;&#039;&#039;[https://www.bwhpc.de/ https://www.bwhpc.de/]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Many topics depend on the cluster system you use. &lt;br /&gt;
First choose the cluster you use,  then select the correct topic.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- bwHPC STATUS START --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- OK/green --&amp;gt;&lt;br /&gt;
{| style=&amp;quot;  background:#B8FFB8; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#8AFF8A; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 0)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- warning/yellow --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FFD28A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFC05C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 1)&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- alert/red --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FF8A8A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FF5C5C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 1)&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- bwHPC STATUS END --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot; background:#eeeefe; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Courses / eLearning&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [https://training.bwhpc.de/ eLearning and Online Courses]&lt;br /&gt;
* [https://hpc-wiki.info/hpc/Introduction_to_Linux_in_HPC Introduction to Linux in HPC (external resource)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Need Access to a Cluster?&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[When to use a HPC Cluster]]&lt;br /&gt;
* [[Running Calculations]]&lt;br /&gt;
* [[Registration]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | HPC System Specific Documentation&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
bwHPC encompasses several HPC compute clusters at different universities in Baden-Württemberg. Each cluster is dedicated to [https://www.bwhpc.de/bwhpc-domains.php specific research domains]. &lt;br /&gt;
 &lt;br /&gt;
Documentation differs between compute clusters, please see cluster specific overview pages:&lt;br /&gt;
{|&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BwUniCluster3.0|bwUniCluster 3.0]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | General Purpose, Teaching&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[:JUSTUS2| bwForCluster JUSTUS 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Theoretical Chemistry, Condensed Matter Physics, and Quantum Sciences&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot; | [[Helix|bwForCluster Helix]]&lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  |   Structural and Systems Biology, Medical Science, Soft Matter, Computational Humanities, and Mathematics and Computer Science&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[NEMO2|bwForCluster NEMO 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Neurosciences, Particle Physics, Materials Science, and Microsystems Engineering&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BinAC|bwForCluster BinAC]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Bioinformatics, Geosciences and Astrophysics&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BinAC2|bwForCluster BinAC 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Bioinformatics, Astrophysics, Geosciences, Pharmacy, and Medical Informatics&lt;br /&gt;
|}&lt;br /&gt;
|-&lt;br /&gt;
|bwHPC Clusters: [https://www.bwhpc.de/cluster.php operational status] &lt;br /&gt;
Further Compute Clusters in Baden-Württemberg (separate access policies):&lt;br /&gt;
* [[DACHS | Datenanalyse Cluster der Hochschulen (DACHS)]]&lt;br /&gt;
* bwHPC tier 1: [https://kb.hlrs.de/platforms/index.php/Hunter_(HPE) Hunter] ([https://www.hlrs.de/apply-for-computing-time getting access])&lt;br /&gt;
* bwHPC tier 2: [https://www.nhr.kit.edu/userdocs/horeka HoreKa] ([https://www.nhr.kit.edu/userdocs/horeka/projects/ getting access])&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Documentation valid for all Clusters&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Environment Modules| Software Modules]] and software documentation explained&lt;br /&gt;
* [https://www.bwhpc.de/software.html List of Software] on all clusters&lt;br /&gt;
* [[Development| Software Development and Parallel Programming]]&lt;br /&gt;
* [[Energy Efficient Cluster Usage]]&lt;br /&gt;
* [[HPC Glossary]]&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;height:100%; background:#ffeaef; width:100%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot;   | Scientific Data Storage&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
For user guides of the scientific data storage services:&lt;br /&gt;
* [[SDS@hd]]&lt;br /&gt;
* [https://www.rda.kit.edu/english bwDataArchive]&lt;br /&gt;
* [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS]&lt;br /&gt;
Associated, but local scientific storage services are:&lt;br /&gt;
* [https://wiki.scc.kit.edu/lsdf/index.php/Category:LSDF_Online_Storage LSDF Online Storage] (only for KIT and KIT partners)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;height:100%; background:#ffeaef; width:100%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot;   | Data Management&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Data Transfer|Data Transfer]]&lt;br /&gt;
* [https://www.forschungsdaten.org/index.php/FDM-Kontakte#Deutschland Research Data Management (RDM)] contact persons&lt;br /&gt;
* [https://www.forschungsdaten.info Portal for Research Data Management] (Forschungsdaten.info)&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#eeeefe; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Support&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [[BwSupportPortal|Submit a Ticket in our Support Portal]]&lt;br /&gt;
Support is provided by the [https://www.bwhpc.de/teams.php bwHPC Competence Centers].&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#e6e9eb; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#d1dadf; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Acknowledgement&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* Please [[Acknowledgement|acknowledge]] our resources in your publications.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=15355</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=15355"/>
		<updated>2025-10-15T08:24:06Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: ZAS wieder down&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;font-size:140%;&amp;quot;&amp;gt;&#039;&#039;&#039;Welcome to the bwHPC Wiki.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
bwHPC represents services and resources in the State of &#039;&#039;&#039;B&#039;&#039;&#039;aden-&#039;&#039;&#039;W&#039;&#039;&#039;ürttemberg, Germany, for High Performance Computing (&#039;&#039;&#039;HPC&#039;&#039;&#039;), Data Intensive Computing (&#039;&#039;&#039;DIC&#039;&#039;&#039;) and Large Scale Scientific Data Management (&#039;&#039;&#039;LS2DM&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The main bwHPC web page is at &#039;&#039;&#039;[https://www.bwhpc.de/ https://www.bwhpc.de/]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Many topics depend on the cluster system you use. &lt;br /&gt;
First choose the cluster you use,  then select the correct topic.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- bwHPC STATUS START --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- OK/green --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#B8FFB8; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#8AFF8A; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 0)&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- warning/yellow --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#FFD28A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFC05C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 1)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- alert/red --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FF8A8A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FF5C5C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 1)&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- bwHPC STATUS END --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot; background:#eeeefe; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Courses / eLearning&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [https://training.bwhpc.de/ eLearning and Online Courses]&lt;br /&gt;
* [https://hpc-wiki.info/hpc/Introduction_to_Linux_in_HPC Introduction to Linux in HPC (external resource)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Need Access to a Cluster?&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[When to use a HPC Cluster]]&lt;br /&gt;
* [[Running Calculations]]&lt;br /&gt;
* [[Registration]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | HPC System Specific Documentation&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
bwHPC encompasses several HPC compute clusters at different universities in Baden-Württemberg. Each cluster is dedicated to [https://www.bwhpc.de/bwhpc-domains.php specific research domains]. &lt;br /&gt;
 &lt;br /&gt;
Documentation differs between compute clusters, please see cluster specific overview pages:&lt;br /&gt;
{|&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BwUniCluster3.0|bwUniCluster 3.0]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | General Purpose, Teaching&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[:JUSTUS2| bwForCluster JUSTUS 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Theoretical Chemistry, Condensed Matter Physics, and Quantum Sciences&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot; | [[Helix|bwForCluster Helix]]&lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  |   Structural and Systems Biology, Medical Science, Soft Matter, Computational Humanities, and Mathematics and Computer Science&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[NEMO2|bwForCluster NEMO 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Neurosciences, Particle Physics, Materials Science, and Microsystems Engineering&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BinAC|bwForCluster BinAC]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Bioinformatics, Geosciences and Astrophysics&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BinAC2|bwForCluster BinAC 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Bioinformatics, Astrophysics, Geosciences, Pharmacy, and Medical Informatics&lt;br /&gt;
|}&lt;br /&gt;
|-&lt;br /&gt;
|bwHPC Clusters: [https://www.bwhpc.de/cluster.php operational status] &lt;br /&gt;
Further Compute Clusters in Baden-Württemberg (separate access policies):&lt;br /&gt;
* [[DACHS | Datenanalyse Cluster der Hochschulen (DACHS)]]&lt;br /&gt;
* bwHPC tier 1: [https://kb.hlrs.de/platforms/index.php/Hunter_(HPE) Hunter] ([https://www.hlrs.de/apply-for-computing-time getting access])&lt;br /&gt;
* bwHPC tier 2: [https://www.nhr.kit.edu/userdocs/horeka HoreKa] ([https://www.nhr.kit.edu/userdocs/horeka/projects/ getting access])&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Documentation valid for all Clusters&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Environment Modules| Software Modules]] and software documentation explained&lt;br /&gt;
* [https://www.bwhpc.de/software.html List of Software] on all clusters&lt;br /&gt;
* [[Development| Software Development and Parallel Programming]]&lt;br /&gt;
* [[Energy Efficient Cluster Usage]]&lt;br /&gt;
* [[HPC Glossary]]&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;height:100%; background:#ffeaef; width:100%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot;   | Scientific Data Storage&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
For user guides of the scientific data storage services:&lt;br /&gt;
* [[SDS@hd]]&lt;br /&gt;
* [https://www.rda.kit.edu/english bwDataArchive]&lt;br /&gt;
* [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS]&lt;br /&gt;
Associated, but local scientific storage services are:&lt;br /&gt;
* [https://wiki.scc.kit.edu/lsdf/index.php/Category:LSDF_Online_Storage LSDF Online Storage] (only for KIT and KIT partners)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;height:100%; background:#ffeaef; width:100%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot;   | Data Management&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Data Transfer|Data Transfer]]&lt;br /&gt;
* [https://www.forschungsdaten.org/index.php/FDM-Kontakte#Deutschland Research Data Management (RDM)] contact persons&lt;br /&gt;
* [https://www.forschungsdaten.info Portal for Research Data Management] (Forschungsdaten.info)&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#eeeefe; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Support&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [[BwSupportPortal|Submit a Ticket in our Support Portal]]&lt;br /&gt;
Support is provided by the [https://www.bwhpc.de/teams.php bwHPC Competence Centers].&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#e6e9eb; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#d1dadf; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Acknowledgement&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* Please [[Acknowledgement|acknowledge]] our resources in your publications.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Status&amp;diff=15354</id>
		<title>Status</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Status&amp;diff=15354"/>
		<updated>2025-10-15T08:24:01Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: ZAS wieder down&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= bwHPC Cluster and Service Status Page =&lt;br /&gt;
&lt;br /&gt;
== Current Status ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- OK/green --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#B8FFB8; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#8AFF8A; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Status&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
Normal operation of all systems.&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- warning/yellow --&amp;gt;&lt;br /&gt;
{| style=&amp;quot;  background:#FFD28A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFC05C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Status&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
&amp;lt;strong&amp;gt;15.10.2025: central application page (ZAS) [https://zas.bwhpc.de/] currently down&amp;lt;/strong&amp;gt;&lt;br /&gt;
* Renewal and application of new projects (Rechenvorhaben/RV) and registration of new RV members not possible.&lt;br /&gt;
* Filling out the bwUniCluster3.0 questionnaire not possible.&lt;br /&gt;
* Login and compute activities are &#039;&#039;&#039;not&#039;&#039;&#039; affected&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- alert/red --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FF8A8A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FF5C5C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Status&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
TEXT&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Old Messages ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;09.10.2025: central application page (ZAS) [https://zas.bwhpc.de/] currently down&amp;lt;/strong&amp;gt;&lt;br /&gt;
* Renewal and application of new projects (Rechenvorhaben/RV) and registration of new RV members not possible.&lt;br /&gt;
* Filling out the bwUniCluster3.0 questionnaire not possible.&lt;br /&gt;
* 10.10.: ongoing issue&lt;br /&gt;
* Login and compute activities are &#039;&#039;&#039;not&#039;&#039;&#039; affected&lt;br /&gt;
* 2025-10-10T16: Some sites report normal operation. Identity providers need to update DFN information, expected within 24h.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Status&amp;diff=15352</id>
		<title>Status</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Status&amp;diff=15352"/>
		<updated>2025-10-13T07:38:33Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Update Status ZAS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= bwHPC Cluster and Service Status Page =&lt;br /&gt;
&lt;br /&gt;
== Current Status ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- OK/green --&amp;gt;&lt;br /&gt;
&amp;lt;!-- --&amp;gt;&lt;br /&gt;
{| style=&amp;quot;  background:#B8FFB8; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#8AFF8A; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Status&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
Normal operation of all systems.&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- warning/yellow --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FFD28A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFC05C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Status&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
TEXT&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- alert/red --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FF8A8A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FF5C5C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Status&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
TEXT&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Old Messages ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;09.10.2025: central application page (ZAS) [https://zas.bwhpc.de/] currently down&amp;lt;/strong&amp;gt;&lt;br /&gt;
* Renewal and application of new projects (Rechenvorhaben/RV) and registration of new RV members not possible.&lt;br /&gt;
* Filling out the bwUniCluster3.0 questionnaire not possible.&lt;br /&gt;
* 10.10.: ongoing issue&lt;br /&gt;
* Login and compute activities are &#039;&#039;&#039;not&#039;&#039;&#039; affected&lt;br /&gt;
* 2025-10-10T16: Some sites report normal operation. Identity providers need to update DFN information, expected within 24h.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=15351</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=15351"/>
		<updated>2025-10-13T07:36:03Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Update Status ZAS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;font-size:140%;&amp;quot;&amp;gt;&#039;&#039;&#039;Welcome to the bwHPC Wiki.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
bwHPC represents services and resources in the State of &#039;&#039;&#039;B&#039;&#039;&#039;aden-&#039;&#039;&#039;W&#039;&#039;&#039;ürttemberg, Germany, for High Performance Computing (&#039;&#039;&#039;HPC&#039;&#039;&#039;), Data Intensive Computing (&#039;&#039;&#039;DIC&#039;&#039;&#039;) and Large Scale Scientific Data Management (&#039;&#039;&#039;LS2DM&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
The main bwHPC web page is at &#039;&#039;&#039;[https://www.bwhpc.de/ https://www.bwhpc.de/]&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Many topics depend on the cluster system you use. &lt;br /&gt;
First choose your cluster, then select the relevant topic.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- bwHPC STATUS START --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- OK/green --&amp;gt;&lt;br /&gt;
{| style=&amp;quot;  background:#B8FFB8; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#8AFF8A; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 0)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- warning/yellow --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FFD28A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFC05C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 1)&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- alert/red --&amp;gt;&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
{| style=&amp;quot;  background:#FF8A8A; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FF5C5C; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | [[Status|Maintenances and Outages]] (Currently: 1)&lt;br /&gt;
|}&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- bwHPC STATUS END --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot; background:#eeeefe; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Courses / eLearning&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [https://training.bwhpc.de/ eLearning and Online Courses]&lt;br /&gt;
* [https://hpc-wiki.info/hpc/Introduction_to_Linux_in_HPC Introduction to Linux in HPC (external resource)]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Need Access to a Cluster?&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[When to use a HPC Cluster]]&lt;br /&gt;
* [[Running Calculations]]&lt;br /&gt;
* [[Registration]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | HPC System Specific Documentation&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
bwHPC encompasses several HPC compute clusters at different universities in Baden-Württemberg. Each cluster is dedicated to [https://www.bwhpc.de/bwhpc-domains.php specific research domains]. &lt;br /&gt;
 &lt;br /&gt;
Documentation differs between compute clusters; please see the cluster-specific overview pages:&lt;br /&gt;
{|&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BwUniCluster3.0|bwUniCluster 3.0]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | General Purpose, Teaching&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[:JUSTUS2| bwForCluster JUSTUS 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Theoretical Chemistry, Condensed Matter Physics, and Quantum Sciences&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot; | [[Helix|bwForCluster Helix]]&lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  |   Structural and Systems Biology, Medical Science, Soft Matter, Computational Humanities, and Mathematics and Computer Science&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[NEMO2|bwForCluster NEMO 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Neurosciences, Particle Physics, Materials Science, and Microsystems Engineering&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BinAC|bwForCluster BinAC]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Bioinformatics, Geosciences and Astrophysics&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;padding:5px; width:30%&amp;quot;  | [[BinAC2|bwForCluster BinAC 2]] &lt;br /&gt;
| style=&amp;quot;padding-left:20px;&amp;quot;  | Bioinformatics, Astrophysics, Geosciences, Pharmacy, and Medical Informatics&lt;br /&gt;
|}&lt;br /&gt;
|-&lt;br /&gt;
|bwHPC Clusters: [https://www.bwhpc.de/cluster.php operational status] &lt;br /&gt;
Further Compute Clusters in Baden-Württemberg (separate access policies):&lt;br /&gt;
* [[DACHS | Datenanalyse Cluster der Hochschulen (DACHS)]]&lt;br /&gt;
* bwHPC tier 1: [https://kb.hlrs.de/platforms/index.php/Hunter_(HPE) Hunter] ([https://www.hlrs.de/apply-for-computing-time getting access])&lt;br /&gt;
* bwHPC tier 2: [https://www.nhr.kit.edu/userdocs/horeka HoreKa] ([https://www.nhr.kit.edu/userdocs/horeka/projects/ getting access])&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold; text-align:left&amp;quot; | Documentation valid for all Clusters&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Environment Modules| Software Modules]] and software documentation explained&lt;br /&gt;
* [https://www.bwhpc.de/software.html List of Software] on all clusters&lt;br /&gt;
* [[Development| Software Development and Parallel Programming]]&lt;br /&gt;
* [[Energy Efficient Cluster Usage]]&lt;br /&gt;
* [[HPC Glossary]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;height:100%; background:#ffeaef; width:100%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot;   | Scientific Data Storage&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
For user guides of the scientific data storage services:&lt;br /&gt;
* [[SDS@hd]]&lt;br /&gt;
* [https://www.rda.kit.edu/english bwDataArchive]&lt;br /&gt;
* [https://uni-tuebingen.de/einrichtungen/zentrum-fuer-datenverarbeitung/projekte/laufende-projekte/bwsfs bwSFS]&lt;br /&gt;
Associated, but local scientific storage services are:&lt;br /&gt;
* [https://wiki.scc.kit.edu/lsdf/index.php/Category:LSDF_Online_Storage LSDF Online Storage] (only for KIT and KIT partners)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;height:100%; background:#ffeaef; width:100%&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot;   | Data Management&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Data Transfer|Data Transfer]]&lt;br /&gt;
* [https://www.forschungsdaten.org/index.php/FDM-Kontakte#Deutschland Research Data Management (RDM)] contact persons&lt;br /&gt;
* [https://www.forschungsdaten.info Portal for Research Data Management] (Forschungsdaten.info)&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#eeeefe; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Support&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [[BwSupportPortal|Submit a Ticket in our Support Portal]]&lt;br /&gt;
Support is provided by the [https://www.bwhpc.de/teams.php bwHPC Competence Centers]:&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#e6e9eb; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#d1dadf; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Acknowledgement&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* Please [[Acknowledgement|acknowledge]] our resources in your publications.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Status&amp;diff=15336</id>
		<title>Status</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Status&amp;diff=15336"/>
		<updated>2025-10-10T12:26:50Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC3 Fragebogen&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| style=&amp;quot;  background:#ffa833; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f80; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | ZAS&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
&amp;lt;strong&amp;gt;09.10.2025: central application page (ZAS) [https://zas.bwhpc.de/] currently down&amp;lt;/strong&amp;gt;&lt;br /&gt;
* Renewal and application of new projects (Rechenvorhaben/RV) and registration of new RV members not possible.&lt;br /&gt;
* Filling out the bwUniCluster3.0 questionnaire not possible.&lt;br /&gt;
* 10.10.: ongoing issue&lt;br /&gt;
* Login and compute activities are &#039;&#039;&#039;not&#039;&#039;&#039; affected&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Registration/Entitlement&amp;diff=15322</id>
		<title>SDS@hd/Registration/Entitlement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Registration/Entitlement&amp;diff=15322"/>
		<updated>2025-10-07T09:11:34Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Mannheim&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Get SDS@hd Entitlement ==&lt;br /&gt;
&lt;br /&gt;
New SVs can only be opened by bwIDM members who have the &#039;&#039;&#039;sds-hd-sv&#039;&#039;&#039; entitlement from their home university. This entitlement may have to be applied for individually at the home organisation. &lt;br /&gt;
&lt;br /&gt;
If you are not sure if you already have an entitlement, please check it first with the [[Registration/bwForCluster/Entitlement#Check_your_Entitlements|&#039;&#039;&#039;Check your Entitlements&#039;&#039;&#039;]] guide below.&lt;br /&gt;
If you need the entitlement, please follow the link for your institution or contact your local service desk if no information is provided:&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Universität Freiburg&lt;br /&gt;
* Universität Heidelberg: &lt;br /&gt;
** Employees of Heidelberg University receive this entitlement automatically.&lt;br /&gt;
* Universität Hohenheim&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Universität Konstanz&lt;br /&gt;
* Universität Mannheim&lt;br /&gt;
** Employees of the University of Mannheim receive this entitlement automatically.&lt;br /&gt;
* Universität Stuttgart&lt;br /&gt;
* Universität Tübingen&lt;br /&gt;
* Universität Ulm&lt;br /&gt;
* HAW BW e.V.&lt;br /&gt;
&lt;br /&gt;
== Check your Entitlements ==&lt;br /&gt;
&lt;br /&gt;
To check whether you already have the entitlement, please log in to &#039;&#039;&#039;https://login.bwidm.de/user/index.xhtml&#039;&#039;&#039;.&lt;br /&gt;
To see the list of your entitlements, first select the &#039;&#039;&#039;Shibboleth&#039;&#039;&#039; tab at the top.&lt;br /&gt;
If the list below &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;urn:oid:1.3.6.1.4.1.5923.1.1.1.7&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; contains&lt;br /&gt;
&amp;lt;pre&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/pre&amp;gt;&lt;br /&gt;
you already have the entitlement and can skip step A.&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;http://bwidm.de/entitlement/bwForCluster&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt; is an attribute and not a link!&lt;br /&gt;
See [https://www.bwidm.de/dienste.php bwUniCluster und bwForCluster] for more information about needed attributes for this service.&lt;br /&gt;
|}&lt;br /&gt;
[[File:BwIDM-idp.png|center|600px|thumb|Verify Entitlement.]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-align:right;&amp;quot;&amp;gt;[[Registration/bwForCluster/RV|Go to step B]]&amp;lt;/p&amp;gt;&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Helix/bwVisu/RStudio&amp;diff=15316</id>
		<title>Helix/bwVisu/RStudio</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Helix/bwVisu/RStudio&amp;diff=15316"/>
		<updated>2025-10-06T11:51:40Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Java module path&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
[https://posit.co/products/open-source/rstudio-server/ RStudio] is an integrated development environment (IDE) for programming with R. It can handle Python code, too.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Change R Version==&lt;br /&gt;
&lt;br /&gt;
By default, RStudio uses the default R version on Helix. You can view the current default version by executing &amp;lt;code&amp;gt;module -t -d avail 2&amp;amp;gt;&amp;amp;amp;1 | grep math/R&amp;lt;/code&amp;gt; on Helix.&lt;br /&gt;
&lt;br /&gt;
=== Change the R version to a version unavailable on Helix ===&lt;br /&gt;
&lt;br /&gt;
# Open a shell on Helix by [https://wiki.bwhpc.de/e/Helix/Login sshing into Helix] or by starting the RStudio app on bwVisu and switching to the terminal tab.&lt;br /&gt;
# [https://cran.r-project.org/doc/FAQ/R-FAQ.html#How-can-R-be-obtained_003f-1 Download] your desired R version, [https://cran.r-project.org/doc/manuals/r-release/R-admin.html#Getting-and-unpacking-the-sources-1 unpack it] and change into the unpacked directory:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;tar -xf R-x.y.z.tar.gz&lt;br /&gt;
cd R-x.y.z&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
#: where &amp;lt;code&amp;gt;x.y.z&amp;lt;/code&amp;gt; denotes the version of your downloaded R installation files.&lt;br /&gt;
# Patch the configuration file to make it compatible with your shell environment on Helix:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;sed &#039;/date-time conversions do not work in 2020/s/^/: #/&#039; -i configure&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Install R into a chosen location in your home directory:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;./configure --prefix=$HOME/path/in/your/home/directory&lt;br /&gt;
module load devel/java_jdk&lt;br /&gt;
make&lt;br /&gt;
make install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Make your installed R version the default by exporting the location of its binary to your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt;: Open or create the file &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt; in your home directory and add the following line:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;export &amp;quot;PATH=/path/to/your/R/installation/directory/bin:$PATH&amp;quot;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
#: Please make sure to append the suffix &amp;lt;code&amp;gt;/bin&amp;lt;/code&amp;gt; to the installation path of your custom R version. This is the default location where the R binaries are installed.&lt;br /&gt;
# Test the installation: Reload your bash environment and obtain the version of the R binary in your path:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;source $HOME/.bashrc &amp;amp;&amp;amp; R --version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
#: If the displayed version number matches that of your custom R installation, the installation was successful, and your custom R version will be loaded automatically in RStudio &amp;lt;u&amp;gt;when you choose&amp;lt;/u&amp;gt; &amp;lt;code&amp;gt;custom&amp;lt;/code&amp;gt; in the dropdown menu when starting the job.&lt;br /&gt;
# (Re)start RStudio&lt;br /&gt;
&lt;br /&gt;
== Add R packages ==&lt;br /&gt;
&lt;br /&gt;
* Add a new package to RStudio&lt;br /&gt;
*# [[Helix/Login|Login to bwForCluster Helix]] or start the RStudio app on bwVisu and switch to the Terminal tab.&lt;br /&gt;
*# Follow the instructions from the [[Helix/Software/R#Installing_R-Packages_into_your_home_folder|R page]]&lt;br /&gt;
* &#039;&#039;&#039;Note&#039;&#039;&#039;: For some packages, not all development dependencies are available through bwVisu. Some packages may have to be installed on the login nodes after an SSH login.&lt;br /&gt;
* To create dashboards, you can use RShiny with bwVisu RStudio as well.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Helix/bwVisu/RStudio&amp;diff=15315</id>
		<title>Helix/bwVisu/RStudio</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Helix/bwVisu/RStudio&amp;diff=15315"/>
		<updated>2025-10-06T11:46:46Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: /* Add R packages */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__FORCETOC__&lt;br /&gt;
[https://posit.co/products/open-source/rstudio-server/ RStudio] is an integrated development environment (IDE) for programming with R. It can handle Python code, too.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Change R Version==&lt;br /&gt;
&lt;br /&gt;
By default, RStudio uses the default R version on Helix. You can view the current default version by executing &amp;lt;code&amp;gt;module -t -d avail 2&amp;amp;gt;&amp;amp;amp;1 | grep math/R&amp;lt;/code&amp;gt; on Helix.&lt;br /&gt;
&lt;br /&gt;
=== Change the R version to a version unavailable on Helix ===&lt;br /&gt;
&lt;br /&gt;
# Open a shell on Helix by [https://wiki.bwhpc.de/e/Helix/Login sshing into Helix] or by starting the RStudio app on bwVisu and switching to the terminal tab.&lt;br /&gt;
# [https://cran.r-project.org/doc/FAQ/R-FAQ.html#How-can-R-be-obtained_003f-1 Download] your desired R version, [https://cran.r-project.org/doc/manuals/r-release/R-admin.html#Getting-and-unpacking-the-sources-1 unpack it] and change into the unpacked directory:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;tar -xf R-x.y.z.tar.gz&lt;br /&gt;
cd R-x.y.z&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
#: where &amp;lt;code&amp;gt;x.y.z&amp;lt;/code&amp;gt; denotes the version of your downloaded R installation files.&lt;br /&gt;
# Patch the configuration file to make it compatible with your shell environment on Helix:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;sed &#039;/date-time conversions do not work in 2020/s/^/: #/&#039; -i configure&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Install R into a chosen location in your home directory:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;./configure --prefix=$HOME/path/in/your/home/directory&lt;br /&gt;
module load devel/java&lt;br /&gt;
make&lt;br /&gt;
make install&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
# Make your installed R version the default by exporting the location of its binary to your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt;: Open or create the file &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt; in your home directory and add the following line:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;export &amp;quot;PATH=/path/to/your/R/installation/directory/bin:$PATH&amp;quot;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
#: Please make sure to append the suffix &amp;lt;code&amp;gt;/bin&amp;lt;/code&amp;gt; to the installation path of your custom R version. This is the default location where the R binaries are installed.&lt;br /&gt;
# Test the installation: Reload your bash environment and obtain the version of the R binary in your path:&lt;br /&gt;
#: &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;source $HOME/.bashrc &amp;amp;&amp;amp; R --version&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
#: If the displayed version number matches that of your custom R installation, the installation was successful, and your custom R version will be loaded automatically in RStudio &amp;lt;u&amp;gt;when you choose&amp;lt;/u&amp;gt; &amp;lt;code&amp;gt;custom&amp;lt;/code&amp;gt; in the dropdown menu when starting the job.&lt;br /&gt;
# (Re)start RStudio&lt;br /&gt;
&lt;br /&gt;
== Add R packages ==&lt;br /&gt;
&lt;br /&gt;
* Add a new package to RStudio&lt;br /&gt;
*# [[Helix/Login|Login to bwForCluster Helix]] or start the RStudio app on bwVisu and switch to the Terminal tab.&lt;br /&gt;
*# Follow the instructions from the [[Helix/Software/R#Installing_R-Packages_into_your_home_folder|R page]]&lt;br /&gt;
* &#039;&#039;&#039;Note&#039;&#039;&#039;: For some packages, not all development dependencies are available through bwVisu. Some packages may have to be installed on the login nodes after an SSH login.&lt;br /&gt;
* To create dashboards, you can use RShiny with bwVisu RStudio as well.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=15310</id>
		<title>Batch Jobs - bwUniCluster Features</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Batch_Jobs_-_bwUniCluster_Features&amp;diff=15310"/>
		<updated>2025-10-02T13:36:02Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Redirected page to BwUniCluster3.0/Running Jobs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT[[BwUniCluster3.0/Running_Jobs]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=15307</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=15307"/>
		<updated>2025-10-01T11:24:30Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster3.0.&lt;br /&gt;
&lt;br /&gt;
== Installation and Usage ==&lt;br /&gt;
Please have a look at our [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] on how to use Dask on bwUniCluster3.0 (2_Fundamentals: Creating Environments and 6_Dask).&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BinAC2&amp;diff=15304</id>
		<title>BinAC2</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BinAC2&amp;diff=15304"/>
		<updated>2025-09-29T12:46:37Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Formatting error&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |&lt;br /&gt;
[[File:BinAC2_Logo_RGB_subtitel.svg|center|500px||]] &lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;bwForCluster BinAC 2&#039;&#039;&#039; supports researchers from the broader fields of Bioinformatics, Astrophysics, Geosciences, Pharmacy, and Medical Informatics.&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#ffa833; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f80; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | BinAC 1 -&amp;gt; BinAC 2 Migration&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* Still using BinAC 1: &#039;&#039;&#039;[[BinAC|Go To BinAC 1 Legacy Wiki]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Users of BinAC 1 must&#039;&#039;&#039; [[Registration/bwForCluster/BinAC2|&#039;&#039;&#039;re-register for BinAC 2&#039;&#039;&#039;]] (only step C necessary). You keep your existing projects (Rechenvorhaben).&lt;br /&gt;
* &#039;&#039;&#039;[[BinAC2/Migrate BinAC 1 workspaces to BinAC 2 workspaces|Migrate BinAC 1 workspaces to BinAC 2 workspaces]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[BinAC2/Migrate Moab to Slurm jobs|Migrate Moab/Torque to Slurm jobs]]&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#FEF4AB; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFE856; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | News and Events&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [https://uni-tuebingen.de/de/274923 11th bwHPC Symposium - September 23rd, Tübingen]&lt;br /&gt;
&amp;lt;!--TODO* [http://vis01.binac.uni-tuebingen.de/ Cluster Status and Usage]--&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#eeeefe; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Training &amp;amp; Support&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [[BinAC2/Getting_Started|Getting Started]]&lt;br /&gt;
* [https://training.bwhpc.de E-Learning Courses]&lt;br /&gt;
* [[BinAC2/Support|Contact and Support]]&lt;br /&gt;
* Send [[Feedback|Feedback]] about Wiki pages&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#cef2e0; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | User Documentation&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[Registration/bwForCluster|Registration]]&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Login|Login]]&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Hardware_and_Architecture|Hardware and Architecture]]&lt;br /&gt;
** [[BinAC2/Hardware_and_Architecture#Compute_Nodes|Node Specifications]]&lt;br /&gt;
** [[BinAC2/Hardware_and_Architecture#File_Systems|File Systems and Workspaces]]&lt;br /&gt;
* Usage of [[BinAC2/Software|Software]] on BinAC 2&lt;br /&gt;
** For available Software Modules see [https://www.bwhpc.de/software.php bwhpc.de Software Search] (select bwForCluster BinAC 2)&lt;br /&gt;
** Create Software Environments with [[Development/Conda|Conda]]&lt;br /&gt;
** Use  [[BinAC2/Software/Nextflow|nf-core Nextflow pipelines]]&lt;br /&gt;
** See [[Development]] for info about compiler and parallelization&lt;br /&gt;
&lt;br /&gt;
* [[BinAC2/Slurm|Batch System (SLURM)]]&lt;br /&gt;
** [[BinAC2/SLURM_Partitions|SLURM Partitions]]&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
{| style=&amp;quot;  background:#e6e9eb; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#d1dadf; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Cluster Funding&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* Please [[BinAC2/Acknowledgement|acknowledge]] the cluster in your publications.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=bwHPC_Wiki:General_disclaimer&amp;diff=15297</id>
		<title>bwHPC Wiki:General disclaimer</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=bwHPC_Wiki:General_disclaimer&amp;diff=15297"/>
		<updated>2025-09-17T12:45:47Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: New SCC name&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;bwHPC Wiki is hosted by the Scientific Computing Center (SCC) at Karlsruhe Institute of Technology. The [http://www.scc.kit.edu/en/legals.php SCC disclaimer] is valid for all pages of bwHPC Wiki.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=bwHPC_Wiki:Privacy_policy&amp;diff=15296</id>
		<title>bwHPC Wiki:Privacy policy</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=bwHPC_Wiki:Privacy_policy&amp;diff=15296"/>
		<updated>2025-09-17T12:45:22Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: New SCC name&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;bwHPC Wiki is hosted by the Scientific Computing Center (SCC) at Karlsruhe Institute of Technology. The [http://www.scc.kit.edu/en/datenschutz.php SCC privacy policy] is valid for all pages of bwHPC Wiki.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=R&amp;diff=15284</id>
		<title>R</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=R&amp;diff=15284"/>
		<updated>2025-09-11T07:55:33Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;R&#039;&#039;&#039; is a language and environment for statistical computing and graphics. Please refer to the corresponding page:&lt;br /&gt;
 &lt;br /&gt;
* [[Development/R | General remarks]]&lt;br /&gt;
* Cluster-specific pages:&lt;br /&gt;
** [[BwUniCluster3.0/Software/R | R on bwUniCluster 3.0]]&lt;br /&gt;
** [[Helix/Software/R | R on Helix]]&lt;br /&gt;
* RStudio IDE&lt;br /&gt;
** [[Helix/bwVisu/RStudio | RStudio on Helix]] through [[Helix/bwVisu | bwVisu ]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Python&amp;diff=15283</id>
		<title>Python</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Python&amp;diff=15283"/>
		<updated>2025-09-11T07:42:27Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Create disambiguation to make the search a bit easier&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Disambiguation page for the programming language &#039;&#039;&#039;Python&#039;&#039;&#039; and related tools and packages.&lt;br /&gt;
&lt;br /&gt;
* [[Development/Python | General remarks on Python usage and setting up environments]]&lt;br /&gt;
* Additional cluster-specific pages&lt;br /&gt;
** [[JUSTUS2/Software/Python | Python on JUSTUS 2]]&lt;br /&gt;
* Specific related software packages&lt;br /&gt;
** [[BwUniCluster3.0/Software/Python Dask | Dask on the bwUniCluster3.0]]&lt;br /&gt;
** [[Development/ollama | ollama ]]&lt;br /&gt;
* Jupyter IDE and environments&lt;br /&gt;
** [[BwUniCluster3.0/Jupyter | Jupyterlab on the bwUniCluster3.0 ]]&lt;br /&gt;
** [[Helix/bwVisu/JupyterLab | Jupyterlab on Helix]] through [[Helix/bwVisu | bwVisu ]]&lt;br /&gt;
** [[BinAC2/Software/Jupyterlab | Jupyterlab module on BinAC2 ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
External links:&lt;br /&gt;
* [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] for parallel Python via Jupyter on the bwUniCluster3.0&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=R&amp;diff=15282</id>
		<title>R</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=R&amp;diff=15282"/>
		<updated>2025-09-10T10:32:16Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;R&#039;&#039;&#039; is a language and environment for statistical computing and graphics. Please refer to the corresponding page:&lt;br /&gt;
 &lt;br /&gt;
* [[Development/R | General remarks]]&lt;br /&gt;
* Cluster-specific pages:&lt;br /&gt;
** [[BwUniCluster3.0/Software/R | R on bwUniCluster 3.0]]&lt;br /&gt;
** [[Helix/Software/R | R on Helix]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=R&amp;diff=15281</id>
		<title>R</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=R&amp;diff=15281"/>
		<updated>2025-09-10T10:31:48Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;R&#039;&#039;&#039; is a language and environment for statistical computing and graphics. Please refer to the corresponding cluster-specific page:&lt;br /&gt;
 &lt;br /&gt;
* [[Development/R | General remarks]]&lt;br /&gt;
* Cluster-specific pages:&lt;br /&gt;
** [[BwUniCluster3.0/Software/R | R on bwUniCluster 3.0]]&lt;br /&gt;
** [[Helix/Software/R | R on Helix]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/Python&amp;diff=15276</id>
		<title>Development/Python</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/Python&amp;diff=15276"/>
		<updated>2025-09-09T08:30:42Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Python is a versatile, easy-to-learn, interpreted programming language. It offers a wide range of libraries for scientific tasks and visualization. Python counts among the most popular languages for machine learning. In particular, it can serve as an open-source alternative for tasks that have traditionally been done in Matlab.&lt;br /&gt;
&lt;br /&gt;
== Installation and Versions ==&lt;br /&gt;
Python is available on all systems. With &amp;lt;code&amp;gt;python --version&amp;lt;/code&amp;gt; you can see the currently active default python version. In general, you can choose from various types of python installations:&lt;br /&gt;
* &#039;&#039;&#039;System python:&#039;&#039;&#039; This python version comes together with the operating system and is available upon login to the cluster. Other python versions might be installed along with it. All versions can be seen with &lt;br /&gt;
*: &amp;lt;code&amp;gt;ls /usr/bin/python[0-9].*[0-9] | sort -V | cut -d&amp;quot;/&amp;quot; -f4 | xargs&amp;lt;/code&amp;gt;&lt;br /&gt;
*: These versions can change over time. You can run a specific Python version by appending the version number to the command, e.g. &amp;lt;code&amp;gt;python3.11&amp;lt;/code&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;[[Environment_Modules | Software module]]:&#039;&#039;&#039; Available versions can be identified via&lt;br /&gt;
*:  &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
  module avail devel/python&lt;br /&gt;
  &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Python distributions and virtual environments:&#039;&#039;&#039; By using Python distributions such as Anaconda, you can easily install the needed Python version into a virtual environment. For the use of conda on bwHPC clusters, please refer to [[Development/Conda|Conda]]. Alternatively, you can use more Python-specific tools for installing Python. Some options are listed in [[#Virtual Environments and Package Management | Virtual Environments and Package Management]].&lt;br /&gt;
* &#039;&#039;&#039;[[Development/Containers | Container]]:&#039;&#039;&#039; Containers can contain their own python installation. Keep this in mind when you are working with containers provided by others.&lt;br /&gt;
&lt;br /&gt;
== Running Python Code ==&lt;br /&gt;
&lt;br /&gt;
There are three ways to run Python commands:&lt;br /&gt;
&lt;br /&gt;
* Within a &#039;&#039;&#039;terminal&#039;&#039;&#039; by executing the command &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt;. This starts a Python shell, where all commands are evaluated by the Python interpreter. &lt;br /&gt;
* Within a &#039;&#039;&#039;script&#039;&#039;&#039; (file ends with &#039;&#039;.py&#039;&#039; and can be run with &amp;lt;code&amp;gt;python myProgram.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Within a &#039;&#039;&#039;notebook&#039;&#039;&#039; (file ends with .ipynb). You can use other programming languages and markdown within a notebook next to your python code. Besides software development itself, teaching, prototyping and visualization are good use cases for notebooks as well.  &lt;br /&gt;
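A minimal example script (the file name &#039;&#039;myProgram.py&#039;&#039; matches the one used above; its content is purely illustrative) might look like this:&lt;br /&gt;

```python
# myProgram.py -- minimal example script; run it with: python myProgram.py
import sys

def main() -> int:
    # Print a short greeting including the major Python version in use
    print("Hello from Python", sys.version_info.major)
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
```

Running &amp;lt;code&amp;gt;python myProgram.py&amp;lt;/code&amp;gt; prints the greeting and exits with status 0.&lt;br /&gt;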
&lt;br /&gt;
== Development Environments ==&lt;br /&gt;
&lt;br /&gt;
Development environments are usually more convenient than running code directly from the shell. Some common options are: &lt;br /&gt;
&lt;br /&gt;
* [[Jupyter]]&lt;br /&gt;
* [[Development/VS_Code | VS Code]]&lt;br /&gt;
* PyCharm&lt;br /&gt;
&lt;br /&gt;
== Virtual Environments and Package Management ==&lt;br /&gt;
&lt;br /&gt;
Virtual environments allow Python packages to be installed in a separate installation directory. There should be at least one environment per project. This prevents version conflicts, promotes clarity as to which packages are required for which software and prevents the home directory from being cluttered with libraries that are not (or no longer) required. &lt;br /&gt;
A package, installed via a package manager, provides an additional set of functions. &lt;br /&gt;
&lt;br /&gt;
=== Overview ===&lt;br /&gt;
The following table provides an overview of the most common tools and explains the differences between them. After deciding on a specific tool, it can be installed by following the given link. &lt;br /&gt;
In short, if you plan to use...&lt;br /&gt;
* ...Python only: venv is the foundation that the newer tools build on, poetry is widely used, and uv is the newest and fastest option.&lt;br /&gt;
* ...Python + Conda: Installing conda packages first into a conda environment and then Python packages with pip should work. For a faster and more up-to-date solution, choose pixi. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|- style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Tool&lt;br /&gt;
! Description&lt;br /&gt;
! Can install python versions&lt;br /&gt;
! Installs packages from PyPI&lt;br /&gt;
! Installs packages from conda&lt;br /&gt;
! Dependency Resolver&lt;br /&gt;
! Dependency Management&lt;br /&gt;
! Creates Virtual Environments&lt;br /&gt;
! Supports building, packaging and publishing code (to PyPI)&lt;br /&gt;
|-&lt;br /&gt;
| pyenv&lt;br /&gt;
| Manages python versions on your system.&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| pip&lt;br /&gt;
| For installing python packages.&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| venv&lt;br /&gt;
| For installing and managing python packages. Part of Python&#039;s standard library.&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| setuptools&lt;br /&gt;
| For packaging python projects.&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| poetry&lt;br /&gt;
| For installing and managing python packages. Install it with pipx.&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| pipx&lt;br /&gt;
| For installing and running python applications (like poetry) globally while having them in isolated environments. It is useful for keeping applications globally available and at the same time separated in their own virtual environments. Use it when the installation instructions of an application offer you this way of installation.&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes (only for single applications)&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| uv&lt;br /&gt;
| Can replace poetry, pyenv, pip, venv, etc. and is very fast (https://www.loopwerk.io/articles/2025/uv-keeps-getting-better/)&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| pixi&lt;br /&gt;
| For installing and managing python as well as conda packages. Uses uv.&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Pip ===&lt;br /&gt;
&lt;br /&gt;
The standard package manager for Python is &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;. It can be used to install, update and delete packages. Pip can be called directly or via &amp;lt;code&amp;gt;python -m pip&amp;lt;/code&amp;gt;. The standard repository from which packages are obtained is PyPI (https://pypi.org/). When a package depends on others, they are automatically installed as well.&lt;br /&gt;
In the following, the most common pip commands are shown by example. Packages should always be installed within virtual environments to avoid conflicting installations. If you decide not to use a virtual environment, the install commands have to be accompanied by a &amp;lt;code&amp;gt;--user&amp;lt;/code&amp;gt; flag or controlled via environment variables. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Installation of packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install pandas             # Installs the latest compatible version of pandas and its required dependencies&lt;br /&gt;
pip install pandas==1.5.3      # Installs this exact version&lt;br /&gt;
pip install &amp;quot;pandas&amp;gt;=1.5.3&amp;quot;   # Installs a version newer than or equal to 1.5.3 (quoted so the shell does not treat &amp;gt; as a redirect)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Packages from PyPI usually ship as precompiled libraries. However, &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; can also build packages from source code, which may require a C/C++ compiler and other dependencies needed to build the libraries. In the following example, pip obtains the source code of matplotlib from github.com, installs its dependencies, compiles the library and installs it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install git+https://github.com/matplotlib/matplotlib&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Upgrade packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install --upgrade pandas # Updates the library if update is available&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Removing packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip uninstall pandas         # Removes pandas&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Show packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip list           # Shows the installed packages&lt;br /&gt;
pip freeze         # Shows the installed packages and their versions&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Save State&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
To allow for reproducibility, it is important to record the full list of packages and their exact versions ([https://pip.pypa.io/en/stable/topics/repeatable-installs/ see version pinning]). &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip freeze &amp;gt; requirements.txt     # Redirect package and version information to a text file&lt;br /&gt;
pip install -r requirements.txt   # Installs all packages that are listed in the file&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Venv ===&lt;br /&gt;
&lt;br /&gt;
The module &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; enables the creation of a virtual environment and is a standard component of Python. Creating a &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; means that a folder is created which contains a separate copy of the Python binary file as well as &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;setuptools&amp;lt;/code&amp;gt;. After activating the &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt;, the binary file in this folder is used when &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; is called. This folder is also the installation target for other Python packages.&lt;br /&gt;
&lt;br /&gt;
==== Creation ====&lt;br /&gt;
&lt;br /&gt;
Create, activate, install software, deactivate: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python3.11 -m venv myEnv        # Create venv&lt;br /&gt;
source myEnv/bin/activate       # Activate venv&lt;br /&gt;
pip install --upgrade pip       # Update of the venv-local pip&lt;br /&gt;
pip install &amp;lt;list of packages&amp;gt;  # Install packages/modules&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Install list of software:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install -r requirements.txt # Install packages/modules&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Usage ====&lt;br /&gt;
To use the virtual environment after all dependencies have been installed there, it is sufficient to simply activate it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source myEnv/bin/activate       # Activate venv&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; activated, the terminal prompt reflects this accordingly:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
(myEnv) $ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It is no longer necessary to specify the Python version; the simple command &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt; starts the Python version that was used to create the virtual environment.&amp;lt;br/&amp;gt;&lt;br /&gt;
You can check which Python version is in use via:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
(myEnv) $ which python&lt;br /&gt;
&amp;lt;/path/to/project&amp;gt;/myEnv/bin/python&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Poetry ===&lt;br /&gt;
&lt;br /&gt;
==== Creation ====&lt;br /&gt;
When you want to create a virtual environment for an already existing project, you can go to the top directory and run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry init      # Create a pyproject.toml for the existing project&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Otherwise start with the demo project: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry new poetry-demo&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can set the allowed Python versions in the pyproject.toml. To switch between Python installations on your system, you can use&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry env use python3.11&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Usage ====&lt;br /&gt;
&lt;br /&gt;
Add, install, and update packages:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry add &amp;lt;package_name&amp;gt;   # Add a package to the project and install it&lt;br /&gt;
poetry install     # Install all dependencies declared in pyproject.toml&lt;br /&gt;
poetry update      # Update packages to their latest allowed versions&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To execute something within the virtual environment:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry run &amp;lt;command&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Helpful links: &lt;br /&gt;
* [https://python-poetry.org/docs/managing-environments/#activating-the-environment Activate Environment]&lt;br /&gt;
&lt;br /&gt;
Helpful commands:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry env info # show environment information&lt;br /&gt;
poetry env list # list all virtual environments associated with the current project &lt;br /&gt;
poetry env list --full-path # list them with their full paths&lt;br /&gt;
poetry env remove --all # delete the project&#039;s virtual environments&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Best Practices ==&lt;br /&gt;
* Always use virtual environments! Use one environment per project.&lt;br /&gt;
* For a new or an existing Python project, the following procedure is recommended:&lt;br /&gt;
# Version the Python source files and the &amp;lt;code&amp;gt;requirements.txt&amp;lt;/code&amp;gt; file with a version control system, e.g. git. Exclude unnecessary folders and files, for example &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt;, via an entry in the ignore file, for example &amp;lt;code&amp;gt;.gitignore&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Create and activate a virtual environment.&lt;br /&gt;
# Update pip: &amp;lt;code&amp;gt;pip install --upgrade pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Install all required packages, via the &amp;lt;code&amp;gt;requirements.txt&amp;lt;/code&amp;gt; file in the case of venv, or with the corresponding command of your chosen tool.&lt;br /&gt;
* List or dictionary comprehensions are preferable to loops, as they are generally faster.&lt;br /&gt;
* Be aware of the differences between references, shallow copies and deep copies. &lt;br /&gt;
* Do not parallelize by hand; use libraries where possible (Dask, ...).&lt;br /&gt;
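The comprehension and copy recommendations above can be illustrated with a short, self-contained Python sketch (variable names are illustrative):&lt;br /&gt;

```python
import copy

# Comprehensions: usually faster and clearer than building a container in a loop.
squares_loop = []
for n in range(5):
    squares_loop.append(n * n)
squares_comp = [n * n for n in range(5)]      # list comprehension
square_map = {n: n * n for n in range(5)}     # dict comprehension
assert squares_loop == squares_comp == [0, 1, 4, 9, 16]

# References vs. shallow vs. deep copies:
a = [[1, 2], [3, 4]]
ref = a                     # reference: same object as a
shallow = copy.copy(a)      # new outer list, but inner lists are shared with a
deep = copy.deepcopy(a)     # fully independent copy
a[0][0] = 99
print(ref[0][0], shallow[0][0], deep[0][0])   # prints: 99 99 1
```

Changing &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; affects the reference and the shallow copy, but not the deep copy.&lt;br /&gt;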
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Example ==&lt;br /&gt;
== Do&#039;s and don&#039;ts ==&lt;br /&gt;
&lt;br /&gt;
Using Conda and Pip/Venv together&lt;br /&gt;
In some cases, Pip/Venv might be the preferred method for installing an environment. This could be because a central Python package comes with installation instructions that only show Pip as a supported option (like Tensorflow), or because a project was written with Pip in mind, for example by offering a requirements.txt, and a rewrite with testing is not feasible.&lt;br /&gt;
In this case, it makes sense to use Conda as a replacement for Venv and use it to supply the virtual environment with the required Python version. After activating the environment, Pip will be available and refer to this Python environment.&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/Python&amp;diff=15275</id>
		<title>Development/Python</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/Python&amp;diff=15275"/>
		<updated>2025-09-09T08:29:07Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Python is a versatile, easy-to-learn, interpreted programming language. It offers a wide range of libraries for scientific tasks and visualization. Python counts among the most popular languages for machine learning. In particular, it can serve as an open-source alternative for tasks that have traditionally been done in Matlab.&lt;br /&gt;
&lt;br /&gt;
== Installation and Versions ==&lt;br /&gt;
Python is available on all systems. With &amp;lt;code&amp;gt;python --version&amp;lt;/code&amp;gt; you can see the currently active default python version. In general, you can choose from various types of python installations:&lt;br /&gt;
* &#039;&#039;&#039;System python:&#039;&#039;&#039; This python version comes together with the operating system and is available upon login to the cluster. Other python versions might be installed along with it. All versions can be seen with &lt;br /&gt;
*: &amp;lt;code&amp;gt;ls /usr/bin/python[0-9].*[0-9] | sort -V | cut -d&amp;quot;/&amp;quot; -f4 | xargs&amp;lt;/code&amp;gt;&lt;br /&gt;
*: These versions can change over time. You can run a specific Python version by appending the version number to the command, e.g. &amp;lt;code&amp;gt;python3.11&amp;lt;/code&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;[[Environment_Modules | Software module]]:&#039;&#039;&#039; Available versions can be identified via&lt;br /&gt;
*:  &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
  module avail devel/python&lt;br /&gt;
  &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Python distributions and virtual environments:&#039;&#039;&#039; By using Python distributions such as Anaconda, you can easily install the needed Python version into a virtual environment. For the use of conda on bwHPC clusters, please refer to [[Development/Conda|Conda]]. Alternatively, you can use more Python-specific tools for installing Python. Some options are listed in [[#Virtual Environments and Package Management | Virtual Environments and Package Management]].&lt;br /&gt;
* &#039;&#039;&#039;[[Development/Containers | Container]]:&#039;&#039;&#039; Containers can contain their own python installation. Keep this in mind when you are working with containers provided by others.&lt;br /&gt;
&lt;br /&gt;
== Running Python Code ==&lt;br /&gt;
&lt;br /&gt;
There are three ways to run Python commands:&lt;br /&gt;
&lt;br /&gt;
* Within a &#039;&#039;&#039;terminal&#039;&#039;&#039; by executing the command &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt;. This starts a Python shell, where all commands are evaluated by the Python interpreter. &lt;br /&gt;
* Within a &#039;&#039;&#039;script&#039;&#039;&#039; (file ends with &#039;&#039;.py&#039;&#039; and can be run with &amp;lt;code&amp;gt;python myProgram.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Within a &#039;&#039;&#039;notebook&#039;&#039;&#039; (file ends with .ipynb). You can use other programming languages and markdown within a notebook next to your python code. Besides software development itself, teaching, prototyping and visualization are good use cases for notebooks as well.  &lt;br /&gt;
&lt;br /&gt;
== Development Environments ==&lt;br /&gt;
&lt;br /&gt;
Development environments are usually more convenient than running code directly from the shell. Some common options are: &lt;br /&gt;
&lt;br /&gt;
* [[Jupyter]]&lt;br /&gt;
* [[Development/VS_Code | VS Code]]&lt;br /&gt;
* PyCharm&lt;br /&gt;
&lt;br /&gt;
== Virtual Environments and Package Management ==&lt;br /&gt;
&lt;br /&gt;
Virtual environments allow Python packages to be installed in a separate installation directory. There should be at least one environment per project. This prevents version conflicts, promotes clarity as to which packages are required for which software and prevents the home directory from being cluttered with libraries that are not (or no longer) required. &lt;br /&gt;
A package, installed via a package manager, provides an additional set of functions. &lt;br /&gt;
&lt;br /&gt;
=== Overview ===&lt;br /&gt;
The following table provides an overview of the most common tools and explains the differences between them. After deciding on a specific tool, it can be installed by following the given link. &lt;br /&gt;
In short, if you plan to use...&lt;br /&gt;
* ...Python only: venv is the foundation that the newer tools build on, poetry is widely used, and uv is the newest and fastest option.&lt;br /&gt;
* ...Python + Conda: Installing conda packages first into a conda environment and then Python packages with pip should work. For a faster and more up-to-date solution, choose pixi. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; &lt;br /&gt;
|- style=&amp;quot;text-align:center;&amp;quot;&lt;br /&gt;
! Tool&lt;br /&gt;
! Description&lt;br /&gt;
! Can install python versions&lt;br /&gt;
! Installs packages from PyPI&lt;br /&gt;
! Installs packages from conda&lt;br /&gt;
! Dependency Resolver&lt;br /&gt;
! Dependency Management&lt;br /&gt;
! Creates Virtual Environments&lt;br /&gt;
! Supports building, packaging and publishing code (to PyPI)&lt;br /&gt;
|-&lt;br /&gt;
| pyenv&lt;br /&gt;
| Manages python versions on your system.&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| pip&lt;br /&gt;
| For installing python packages.&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| venv&lt;br /&gt;
| For installing and managing python packages. Part of Python&#039;s standard library.&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
|-&lt;br /&gt;
| setuptools&lt;br /&gt;
| For packaging python projects.&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| poetry&lt;br /&gt;
| For installing and managing python packages. Install it with pipx.&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| pipx&lt;br /&gt;
| For installing and running python applications (like poetry) globally while having them in isolated environments. It is useful for keeping applications globally available and at the same time separated in their own virtual environments. Use it when the installation instructions of an application offer you this way of installation.&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes (only for single applications)&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| uv&lt;br /&gt;
| Can replace poetry, pyenv, pip, venv, etc. and is very fast (https://www.loopwerk.io/articles/2025/uv-keeps-getting-better/)&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|-&lt;br /&gt;
| pixi&lt;br /&gt;
| For installing and managing python as well as conda packages. Uses uv.&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Pip ===&lt;br /&gt;
&lt;br /&gt;
The standard package manager for Python is &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;. It can be used to install, update and delete packages. Pip can be called directly or via &amp;lt;code&amp;gt;python -m pip&amp;lt;/code&amp;gt;. The standard repository from which packages are obtained is PyPI (https://pypi.org/). When a package depends on others, they are automatically installed as well.&lt;br /&gt;
In the following, the most common pip commands are shown by example. Packages should always be installed within virtual environments to avoid conflicting installations. If you decide not to use a virtual environment, the install commands have to be accompanied by a &amp;lt;code&amp;gt;--user&amp;lt;/code&amp;gt; flag or controlled via environment variables. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Installation of packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install pandas             # Installs the latest compatible version of pandas and its required dependencies&lt;br /&gt;
pip install pandas==1.5.3      # Installs this exact version&lt;br /&gt;
pip install &amp;quot;pandas&amp;gt;=1.5.3&amp;quot;   # Installs a version newer than or equal to 1.5.3 (quoted so the shell does not treat &amp;gt; as a redirect)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Packages from PyPI usually ship as precompiled libraries. However, &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; can also build packages from source code, which may require a C/C++ compiler and other dependencies needed to build the libraries. In the following example, pip obtains the source code of matplotlib from github.com, installs its dependencies, compiles the library and installs it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install git+https://github.com/matplotlib/matplotlib&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Upgrade packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install --upgrade pandas # Updates the library if update is available&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Removing packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip uninstall pandas         # Removes pandas&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Show packages&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip list           # Shows the installed packages&lt;br /&gt;
pip freeze         # Shows the installed packages and their versions&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Save State&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
To allow for reproducibility, it is important to record the full list of packages and their exact versions ([https://pip.pypa.io/en/stable/topics/repeatable-installs/ see version pinning]). &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip freeze &amp;gt; requirements.txt     # Redirect package and version information to a text file&lt;br /&gt;
pip install -r requirements.txt   # Installs all packages that are listed in the file&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Venv ===&lt;br /&gt;
&lt;br /&gt;
The module &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; enables the creation of a virtual environment and is a standard component of Python. Creating a &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; means that a folder is created which contains a separate copy of the Python binary file as well as &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;setuptools&amp;lt;/code&amp;gt;. After activating the &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt;, the binary file in this folder is used when &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; is called. This folder is also the installation target for other Python packages.&lt;br /&gt;
&lt;br /&gt;
==== Creation ====&lt;br /&gt;
&lt;br /&gt;
Create, activate, install software, deactivate: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python3.11 -m venv myEnv        # Create venv&lt;br /&gt;
source myEnv/bin/activate       # Activate venv&lt;br /&gt;
pip install --upgrade pip       # Update of the venv-local pip&lt;br /&gt;
pip install &amp;lt;list of packages&amp;gt;  # Install packages/modules&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Install list of software:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install -r requirements.txt # Install packages/modules&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Usage ====&lt;br /&gt;
To use the virtual environment after all dependencies have been installed there, it is sufficient to simply activate it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source myEnv/bin/activate       # Activate venv&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; activated, the terminal prompt reflects this accordingly:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
(myEnv) $ &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It is no longer necessary to specify the Python version; the simple command &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt; starts the Python version that was used to create the virtual environment.&amp;lt;br/&amp;gt;&lt;br /&gt;
You can check which Python version is in use via:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
(myEnv) $ which python&lt;br /&gt;
&amp;lt;/path/to/project&amp;gt;/myEnv/bin/python&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Poetry ===&lt;br /&gt;
&lt;br /&gt;
==== Creation ====&lt;br /&gt;
When you want to use Poetry for an already existing project, you can go to the top directory and run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry init      # Set up pyproject.toml; the virtual environment is created on first use&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Otherwise start with the demo project: &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry new poetry-demo&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can set the allowed Python versions in the pyproject.toml. To switch between Python installations on your system, you can use&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry env use python3.11&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
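The version constraint lives in the project's pyproject.toml; a minimal sketch (the package names and version bounds are illustrative only):

```toml
[tool.poetry.dependencies]
python = ">=3.10,<3.12"   # allowed Python versions for this project
numpy = "^1.26"           # illustrative dependency
```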
&lt;br /&gt;
==== Usage ====&lt;br /&gt;
&lt;br /&gt;
Add, install, and update packages:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry add &amp;lt;package_name&amp;gt;     # Add a package as a dependency and install it&lt;br /&gt;
poetry install     # Install all dependencies declared in pyproject.toml&lt;br /&gt;
poetry update      # Update to latest versions of packages&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To execute something within the virtual environment:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry run &amp;lt;command&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Helpful links: &lt;br /&gt;
* [https://python-poetry.org/docs/managing-environments/#activating-the-environment Activate Environment]&lt;br /&gt;
&lt;br /&gt;
Helpful commands&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
poetry env info # show environment information&lt;br /&gt;
poetry env list # list all virtual environments associated with the current project &lt;br /&gt;
poetry env list --full-path&lt;br /&gt;
poetry env remove &amp;lt;python&amp;gt; # delete the virtual environment for the given interpreter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Best Practices ==&lt;br /&gt;
* Always use virtual environments! Use one environment per project.&lt;br /&gt;
* For a new or existing Python project, the following procedure is recommended:&lt;br /&gt;
# Version the Python source files and the &amp;lt;code&amp;gt;requirements.txt&amp;lt;/code&amp;gt; file with a version control system, e.g. git. Exclude unnecessary folders and files such as &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; via an entry in the ignore file, e.g. &amp;lt;code&amp;gt;.gitignore&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Create and activate a virtual environment.&lt;br /&gt;
# Update pip: &amp;lt;code&amp;gt;pip install --upgrade pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Install all required packages, via the &amp;lt;code&amp;gt;requirements.txt&amp;lt;/code&amp;gt; file in the case of venv, or with the corresponding command of your chosen tool.&lt;br /&gt;
* Prefer list or dictionary comprehensions over explicit loops, as they are in general faster.&lt;br /&gt;
* Be aware of the differences between references, shallow copies, and deep copies. &lt;br /&gt;
* Do not parallelize by hand but use libraries where possible (Dask, ...).&lt;br /&gt;
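The recommended project setup can be sketched as a few shell commands (the paths and names are illustrative; the git and network-dependent steps are shown as comments since they depend on your setup):

```shell
set -e
mkdir -p /tmp/myProject && cd /tmp/myProject
# git init .                                   # step 1: put the project under version control
printf 'myEnv/\n__pycache__/\n' > .gitignore   # exclude the venv and caches from versioning
python3 -m venv myEnv                          # step 2: one environment for this project
. myEnv/bin/activate
# pip install --upgrade pip                    # step 3 (needs network access)
# pip install -r requirements.txt              # step 4
deactivate
```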
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Example ==&lt;br /&gt;
== Do&#039;s and don&#039;ts ==&lt;br /&gt;
&lt;br /&gt;
Using Conda and Pip/Venv together&lt;br /&gt;
In some cases, Pip/Venv might be the preferred method for installing an environment. This could be because a central Python package comes with installation instructions that only show Pip as a supported option (like Tensorflow), or because a project was written with the use of Pip in mind, for example by offering a requirements.txt, and a rewrite with testing is not feasible.&lt;br /&gt;
In this case, it makes sense to use Conda as a replacement for Venv and use it to supply the virtual environment with the required Python version. After activating the environment, Pip will be available and refer to this Python environment.&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=VNC&amp;diff=15264</id>
		<title>VNC</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=VNC&amp;diff=15264"/>
		<updated>2025-09-01T13:31:00Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;VNC&#039;&#039;&#039; is short for &#039;&#039;&#039;V&#039;&#039;&#039;irtual &#039;&#039;&#039;N&#039;&#039;&#039;etwork &#039;&#039;&#039;C&#039;&#039;&#039;omputing. It is used to have a graphical user interface on a remote computer. &lt;br /&gt;
&lt;br /&gt;
== Cluster Specific Information ==&lt;br /&gt;
&lt;br /&gt;
* [[BwUniCluster3.0/Software/Start vnc desktop]]&lt;br /&gt;
&lt;br /&gt;
* [[JUSTUS2/Visualization]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Also see [[:Category:Visualization]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Acknowledgement&amp;diff=15263</id>
		<title>Acknowledgement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Acknowledgement&amp;diff=15263"/>
		<updated>2025-09-01T13:30:05Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Remember to acknowledge our resources in your publications!&lt;br /&gt;
&lt;br /&gt;
Such recognition is important for acquiring funding for the next generation of hardware, support services, data storage, and infrastructure.&lt;br /&gt;
&lt;br /&gt;
The publications will be referenced on the bwHPC website: https://www.bwhpc.de/user_publications.html&lt;br /&gt;
&lt;br /&gt;
== HPC Clusters ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[BwUniCluster3.0/Acknowledgement| bwUniCluster Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[BinAC2/Acknowledgement| bwForCluster BinAC2 Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[Helix/Acknowledgement| bwForCluster Helix Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[JUSTUS2/Acknowledgement| bwForCluster JUSTUS2 Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[NEMO2/Acknowledgement| bwForCluster NEMO2 Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
== Data Facilities ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[SDS@hd/Acknowledgement| SDS@hd Acknowledgement]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Software/Turbomole&amp;diff=15262</id>
		<title>JUSTUS2/Software/Turbomole</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Software/Turbomole&amp;diff=15262"/>
		<updated>2025-09-01T13:27:13Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Softwarepage|chem/turbomole}}&lt;br /&gt;
&lt;br /&gt;
{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| chem/turbomole&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.turbomole.org/turbomole/order-turbomole/ Commercial]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| [https://www.turbomole.org/turbomole/turbomole-documentation/ See  Turbomole manual]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.turbomole.com Homepage] &amp;amp;#124; [https://www.turbomole-gmbh.com/turbomole-manuals.html Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| User Forum&lt;br /&gt;
| [https://www.turbo-forum.com/ external]&lt;br /&gt;
|}&lt;br /&gt;
= Description = &lt;br /&gt;
&#039;&#039;&#039;Turbomole&#039;&#039;&#039; is a general purpose &#039;&#039;quantum chemistry&#039;&#039; software package for &#039;&#039;ab initio&#039;&#039; electronic structure calculations and provides:&lt;br /&gt;
* ground state calculations for methods such as Hartree-Fock, DFT, MP2, and CCSD(T);&lt;br /&gt;
* excited state calculations at different levels such as full RPA, TDDFT, CIS(D), CC2, an ADC(2);&lt;br /&gt;
* geometry optimizations, transition state searches, molecular dynamics calculations;&lt;br /&gt;
* property and spectra calculations such as IR, UV/VIS, Raman, and CD; &lt;br /&gt;
* approximations like resolution-of-the-identity (RI) to speed-up the calculations without introducing uncontrollable or unknown errors; as well as&lt;br /&gt;
* parallel versions for all kind of jobs.&lt;br /&gt;
For more information on Turbomole&#039;s features please visit [https://www.turbomole-gmbh.com/program-overview.html https://www.turbomole-gmbh.com/program-overview.html].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
&lt;br /&gt;
On the command line interface (CLI) of a particular bwHPC cluster, a list of all available Turbomole versions can be displayed as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail chem/turbomole&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A current list of the versions available on the bwUniCluster and bwForClusters can be found here:&lt;br /&gt;
https://www.bwhpc.de/software.php&lt;br /&gt;
&lt;br /&gt;
== bwUniCluster 3.0 ==&lt;br /&gt;
&lt;br /&gt;
* Turbomole 7.9&lt;br /&gt;
&lt;br /&gt;
== bwForCluster JUSTUS 2 ==&lt;br /&gt;
&lt;br /&gt;
* Turbomole 7.5&lt;br /&gt;
* Turbomole 7.4.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Parallel computing ==&lt;br /&gt;
The Turbomole &#039;&#039;Module&#039;&#039; subsumes all available parallel computing variants of Turbomole&#039;s binaries. Turbomole defines the following parallel computing variants:&lt;br /&gt;
* SMP = Shared-memory parallel computing based on OpenMP and Fork() with the latter using separated address spaces.&lt;br /&gt;
* MPI = Message passing interface protocol based parallel computing&lt;br /&gt;
However, only one of the parallel variants or the sequential variant can be loaded at a time, and most of Turbomole&#039;s binaries support only one or two of the parallelization variants. As with Turbomole installations outside the &#039;&#039;Module&#039;&#039; system of the bwHPC clusters, the variants have to be selected via the environment variable $PARA_ARCH.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
== Before loading the Module ==&lt;br /&gt;
Before loading the Turbomole &#039;&#039;Module&#039;&#039; the parallel computing variant has to be defined via the environment variable $PARA_ARCH using the abbreviations SMP or MPI, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ export PARA_ARCH=MPI&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
will [[#Loading the Module|later load]] the &#039;&#039;&#039;MPI binary variants&#039;&#039;&#039;. If the variable $PARA_ARCH is not defined or empty, the sequential binary variants will be active once the Turbomole Module is loaded.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Loading the Module ==&lt;br /&gt;
You can load the default version of &#039;&#039;Turbomole&#039;&#039; with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load chem/turbomole&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The Turbomole &#039;&#039;Module&#039;&#039; does not depend on any other &#039;&#039;Module&#039;&#039;, but on the variable $PARA_ARCH. Moreover, Turbomole provides its own libraries regarding &#039;&#039;OpenMP&#039;&#039;, &#039;&#039;Fork()&#039;&#039;, and &#039;&#039;MPI&#039;&#039; based parallelization.&lt;br /&gt;
If you wish to load a specific (older) version you can do so by executing e.g.: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load chem/turbomole/7.4.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to load version 7.4.1.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Switching between different parallel variants ==&lt;br /&gt;
To switch between the different parallel variants provided by the Turbomole &#039;&#039;Module&#039;&#039;, simply define the new parallel variant via $PARA_ARCH and load the &#039;&#039;Module&#039;&#039; again. Note that for switching between the parallel variants unloading of the Turbomole &#039;&#039;Module&#039;&#039; is not required. For instance to change to the MPI variant, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ export PARA_ARCH=MPI&lt;br /&gt;
$ module load chem/turbomole&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Turbomole binaries ==&lt;br /&gt;
The &#039;&#039;&#039;Turbomole&#039;&#039;&#039; software package consists of a set of stand-alone program binaries providing different features and parallelization support:&lt;br /&gt;
{|  width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
! Binary !! Features !! OpenMP !! Fork !! MPI&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| define&lt;br /&gt;
| Interactive input generator&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| dscf &lt;br /&gt;
| Energy calculations&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| grad&lt;br /&gt;
| Gradient calculations&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| ridft&lt;br /&gt;
| Energy calc. with fast Coulomb approximation &lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| rdgrad&lt;br /&gt;
| Gradient calc. with fast Coulomb approximation &lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| ricc2&lt;br /&gt;
| Electronic excitation energies, transition moments and properties of excited states&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| statpt&lt;br /&gt;
| Hessian and coordinate update for stationary point search&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| aoforce&lt;br /&gt;
| Analytic calculation of force constants, vibrational frequencies and IR intensities&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| escf&lt;br /&gt;
| Calc. of time dependent and dielectric properties &lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| egrad&lt;br /&gt;
| Gradients and first-order properties of excited states&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| odft&lt;br /&gt;
| Orbital-dependent energy calc.&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|}&lt;br /&gt;
For the complete set of binaries and more detailed description of their features read [https://www.turbomole.org/turbomole/turbomole-documentation/ here].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Turbomole tools ==&lt;br /&gt;
Turbomole&#039;s tool set contains scripts and binaries that help to prepare, execute workflows (such as geometry optimisation) and postprocess results. &lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
! Script !! Type !! Description &lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| x2t&lt;br /&gt;
| Preparation&lt;br /&gt;
| Converts  XYZ coordinates into &#039;&#039;Turbomole&#039;&#039; coordinates. &lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| sdg&lt;br /&gt;
| Preparation&lt;br /&gt;
| Shows data group from &#039;&#039;control&#039;&#039; file: for example &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ sdg coord&amp;lt;/span&amp;gt; shows current Turbomole coordinates used.&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| jobex&lt;br /&gt;
| Optimization workflow&lt;br /&gt;
| Shell script that controls and executes automatic optimizations of molecular geometry parameters.&lt;br /&gt;
&amp;lt;!-- NumForce does currently not work with Moab+Slurm&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| NumForce &lt;br /&gt;
| Workflow &lt;br /&gt;
| Shell script that calculates 2nd derivatives by numerical differentiation of gradients. Supports parallelization of tasks.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| tm2molden&lt;br /&gt;
| Postprocessing&lt;br /&gt;
| Creates a molden format input file for the [[Molden]] program. &lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| eiger&lt;br /&gt;
| Postprocessing&lt;br /&gt;
| Script front-end of program &#039;&#039;eigerf&#039;&#039; to display HOMO-LUMO gap and MO eigenvalues.&lt;br /&gt;
|}&lt;br /&gt;
For the complete set of tools and more detailed description of their features read [http://www.turbomole-gmbh.com/manuals/version_6_5/Documentation_html/node9.html here].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Disk Usage ==&lt;br /&gt;
By default, scratch files of Turbomole binaries are placed in the directory of Turbomole binary execution. Please do not run your Turbomole calculations in your $HOME or $WORK directory. &lt;br /&gt;
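A minimal sketch of staging a run on a scratch directory instead of $HOME or $WORK (the directory name is illustrative, and real jobs would use the node-local scratch, e.g. via $TMPDIR; consult the cluster's file-system documentation for the correct location):

```shell
set -e
# Real jobs would use something like SCRATCH="$TMPDIR/tm_job_$SLURM_JOB_ID";
# /tmp is used here only so the sketch runs anywhere.
SCRATCH="/tmp/tm_scratch_demo"
mkdir -p "$SCRATCH"
cp control coord "$SCRATCH"/ 2>/dev/null || true   # stage Turbomole input files if present
cd "$SCRATCH"
# time dscf > dscf.out 2>&1                        # the actual calculation runs here
cd - > /dev/null
```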
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Examples =&lt;br /&gt;
== Single node jobs ==&lt;br /&gt;
=== Template provided by Turbomole Module ===&lt;br /&gt;
The Turbomole &#039;&#039;Module&#039;&#039; provides a simple job script example of Cubane (C8H8) that runs an energy relaxation via MPI-parallel &#039;&#039;dscf&#039;&#039; using 4 cores on a single node and its local file system. To run the example, do the following steps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load chem/turbomole&lt;br /&gt;
$ mkdir -vp ~/Turbomole-example/&lt;br /&gt;
$ cd ~/Turbomole-example/&lt;br /&gt;
$ cp -r $TURBOMOLE_EXA_DIR/* ~/Turbomole-example/&lt;br /&gt;
$ sbatch bwHPC_turbomole_single-node_tmpdir_example.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The last step submits the job example script bwHPC_turbomole_single-node_tmpdir_example.sh to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[BwUniCluster_File_System#File_Systems|local file system]] of that particular compute node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Geometry optimization ===&lt;br /&gt;
To do a geometry optimization of the [[#Template provided by Turbomole Module|previous job example]], modify bwHPC_turbomole_single-node_tmpdir_example.sh by replacing the following line&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
time dscf &amp;gt; dscf.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
time jobex -dscf -keep 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and submit the modified script to the queueing system on bwForCluster JUSTUS 2 (→[[JUSTUS2/Jobscripts: Running Your Calculations]]) and bwUniCluster (→[[BwUniCluster3.0/Slurm]]) via sbatch.&lt;br /&gt;
The Turbomole command &#039;&#039;jobex&#039;&#039; controls the call of all the required Turbomole binaries for the geometry optimization.  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Turbomole-Specific Environment Variables =&lt;br /&gt;
To see a list of all Turbomole environment variables set by the &#039;module load&#039; command, do the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module show chem/turbomole 2&amp;gt;&amp;amp;1 | grep -E &#039;(setenv|prepend-path)&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Version-Specific Information =&lt;br /&gt;
For specific information about the Turbomole version (e.g. 7.5) to be loaded, do the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module help chem/turbomole/7.5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Software/Turbomole&amp;diff=15261</id>
		<title>JUSTUS2/Software/Turbomole</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=JUSTUS2/Software/Turbomole&amp;diff=15261"/>
		<updated>2025-09-01T13:26:49Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Softwarepage|chem/turbomole}}&lt;br /&gt;
&lt;br /&gt;
{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| chem/turbomole&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://www.turbomole.org/turbomole/order-turbomole/ Commercial]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| [https://www.turbomole.org/turbomole/turbomole-documentation/ See  Turbomole manual]&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://www.turbomole.com Homepage] &amp;amp;#124; [https://www.turbomole-gmbh.com/turbomole-manuals.html Documentation]&lt;br /&gt;
|-&lt;br /&gt;
| User Forum&lt;br /&gt;
| [https://www.turbo-forum.com/ external]&lt;br /&gt;
|}&lt;br /&gt;
= Description = &lt;br /&gt;
&#039;&#039;&#039;Turbomole&#039;&#039;&#039; is a general purpose &#039;&#039;quantum chemistry&#039;&#039; software package for &#039;&#039;ab initio&#039;&#039; electronic structure calculations and provides:&lt;br /&gt;
* ground state calculations for methods such as Hartree-Fock, DFT, MP2, and CCSD(T);&lt;br /&gt;
* excited state calculations at different levels such as full RPA, TDDFT, CIS(D), CC2, an ADC(2);&lt;br /&gt;
* geometry optimizations, transition state searches, molecular dynamics calculations;&lt;br /&gt;
* property and spectra calculations such as IR, UV/VIS, Raman, and CD; &lt;br /&gt;
* approximations like resolution-of-the-identity (RI) to speed-up the calculations without introducing uncontrollable or unknown errors; as well as&lt;br /&gt;
* parallel versions for all kind of jobs.&lt;br /&gt;
For more information on Turbomole&#039;s features please visit [https://www.turbomole-gmbh.com/program-overview.html https://www.turbomole-gmbh.com/program-overview.html].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Versions and Availability =&lt;br /&gt;
&lt;br /&gt;
On the command line interface (CLI) of a particular bwHPC cluster, a list of all available Turbomole versions can be displayed as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module avail chem/turbomole&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A current list of the versions available on the bwUniCluster and bwForClusters can be found here:&lt;br /&gt;
https://www.bwhpc.de/software.php&lt;br /&gt;
&lt;br /&gt;
== bwUniCluster 3.0 ==&lt;br /&gt;
&lt;br /&gt;
* Turbomole 7.9&lt;br /&gt;
&lt;br /&gt;
== bwForCluster JUSTUS 2 ==&lt;br /&gt;
&lt;br /&gt;
* Turbomole 7.5&lt;br /&gt;
* Turbomole 7.4.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Parallel computing ==&lt;br /&gt;
The Turbomole &#039;&#039;Module&#039;&#039; subsumes all available parallel computing variants of Turbomole&#039;s binaries. Turbomole defines the following parallel computing variants:&lt;br /&gt;
* SMP = Shared-memory parallel computing based on OpenMP and Fork() with the latter using separated address spaces.&lt;br /&gt;
* MPI = Message passing interface protocol based parallel computing&lt;br /&gt;
However, only one of the parallel variants or the sequential variant can be loaded at a time, and most of Turbomole&#039;s binaries support only one or two of the parallelization variants. As with Turbomole installations outside the &#039;&#039;Module&#039;&#039; system of the bwHPC clusters, the variants have to be selected via the environment variable $PARA_ARCH.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Usage =&lt;br /&gt;
== Before loading the Module ==&lt;br /&gt;
Before loading the Turbomole &#039;&#039;Module&#039;&#039; the parallel computing variant has to be defined via the environment variable $PARA_ARCH using the abbreviations SMP or MPI, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ export PARA_ARCH=MPI&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
will [[#Loading the Module|later load]] the &#039;&#039;&#039;MPI binary variants&#039;&#039;&#039;. If the variable $PARA_ARCH is not defined or empty, the sequential binary variants will be active once the Turbomole Module is loaded.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Loading the Module ==&lt;br /&gt;
You can load the default version of &#039;&#039;Turbomole&#039;&#039; with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load chem/turbomole&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The Turbomole &#039;&#039;Module&#039;&#039; does not depend on any other &#039;&#039;Module&#039;&#039;, but on the variable $PARA_ARCH. Moreover, Turbomole provides its own libraries regarding &#039;&#039;OpenMP&#039;&#039;, &#039;&#039;Fork()&#039;&#039;, and &#039;&#039;MPI&#039;&#039; based parallelization.&lt;br /&gt;
If you wish to load a specific (older) version you can do so by executing e.g.: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load chem/turbomole/7.4.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to load version 7.4.1.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Switching between different parallel variants ==&lt;br /&gt;
To switch between the different parallel variants provided by the Turbomole &#039;&#039;Module&#039;&#039;, simply define the new parallel variant via $PARA_ARCH and load the &#039;&#039;Module&#039;&#039; again. Note that for switching between the parallel variants unloading of the Turbomole &#039;&#039;Module&#039;&#039; is not required. For instance to change to the MPI variant, execute:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ export PARA_ARCH=MPI&lt;br /&gt;
$ module load chem/turbomole&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Turbomole binaries ==&lt;br /&gt;
The &#039;&#039;&#039;Turbomole&#039;&#039;&#039; software package consists of a set of stand-alone program binaries providing different features and parallelization support:&lt;br /&gt;
{|  width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
! Binary !! Features !! OpenMP !! Fork !! MPI&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| define&lt;br /&gt;
| Interactive input generator&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| dscf &lt;br /&gt;
| Energy calculations&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| grad&lt;br /&gt;
| Gradient calculations&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| ridft&lt;br /&gt;
| Energy calc. with fast Coulomb approximation &lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| rdgrad&lt;br /&gt;
| Gradient calc. with fast Coulomb approximation &lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| ricc2&lt;br /&gt;
| Electronic excitation energies, transition moments and properties of excited states&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| statpt&lt;br /&gt;
| Hessian and coordinate update for stationary point search&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| aoforce&lt;br /&gt;
| Analytic calculation of force constants, vibrational frequencies and IR intensities&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| escf&lt;br /&gt;
| Calc. of time dependent and dielectric properties &lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| egrad&lt;br /&gt;
| Gradients and first-order properties of excited states&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| yes&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| odft&lt;br /&gt;
| Orbital-dependent energy calc.&lt;br /&gt;
| yes&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|}&lt;br /&gt;
For the complete set of binaries and more detailed description of their features read [https://www.turbomole.org/turbomole/turbomole-documentation/ here].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Turbomole tools ==&lt;br /&gt;
Turbomole&#039;s tool set contains scripts and binaries that help to prepare, execute workflows (such as geometry optimisation) and postprocess results. &lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
! Script !! Type !! Description &lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| x2t&lt;br /&gt;
| Preparation&lt;br /&gt;
| Converts  XYZ coordinates into &#039;&#039;Turbomole&#039;&#039; coordinates. &lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| sdg&lt;br /&gt;
| Preparation&lt;br /&gt;
| Shows data group from &#039;&#039;control&#039;&#039; file: for example &amp;lt;span style=&amp;quot;background:#edeae2;margin:10px;padding:1px;border:1px dotted #808080&amp;quot;&amp;gt;$ sdg coord&amp;lt;/span&amp;gt; shows current Turbomole coordinates used.&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| jobex&lt;br /&gt;
| Optimization workflow&lt;br /&gt;
| Shell script that controls and executes automatic optimizations of molecular geometry parameters.&lt;br /&gt;
&amp;lt;!-- NumForce does currently not work with Moab+Slurm&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| NumForce &lt;br /&gt;
| Workflow &lt;br /&gt;
| Shell script that calculates 2nd derivatives by numerical differentiation of gradients. Supports parallelization of tasks.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| tm2molden&lt;br /&gt;
| Postprocessing&lt;br /&gt;
| Creates a molden format input file for the [[Molden]] program. &lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| eiger&lt;br /&gt;
| Postprocessing&lt;br /&gt;
| Script front-end of program &#039;&#039;eigerf&#039;&#039; to display HOMO-LUMO gap and MO eigenvalues.&lt;br /&gt;
|}&lt;br /&gt;
For the complete set of tools and more detailed description of their features read [http://www.turbomole-gmbh.com/manuals/version_6_5/Documentation_html/node9.html here].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Disk Usage ==&lt;br /&gt;
By default, Turbomole binaries place their scratch files in the directory from which they are executed. Please do not run your Turbomole calculations in your $HOME or $WORK directory. &lt;br /&gt;
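A common pattern in job scripts is therefore to point Turbomole&#039;s scratch files at node-local storage. A minimal sketch, assuming $TMPDIR points to node-local scratch on your cluster (check this, and the use of TURBOTMPDIR, against your Turbomole version):&lt;br /&gt;

```shell
# Sketch: redirect Turbomole scratch files to node-local storage inside a
# job script. TURBOTMPDIR is read by several Turbomole binaries; whether
# $TMPDIR points to node-local scratch depends on the cluster - check both.
export TURBOTMPDIR="${TMPDIR:-/tmp}/turbomole-scratch"
mkdir -p "$TURBOTMPDIR"
echo "Turbomole scratch directory: $TURBOTMPDIR"
```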
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Examples =&lt;br /&gt;
== Single node jobs ==&lt;br /&gt;
=== Template provided by Turbomole Module ===&lt;br /&gt;
The Turbomole &#039;&#039;Module&#039;&#039; provides a simple job script example for Cubane (C8H8) that runs an energy relaxation via MPI-parallel &#039;&#039;dscf&#039;&#039; using 4 cores on a single node and its local file system. To run the example, follow these steps:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module load chem/turbomole&lt;br /&gt;
$ mkdir -vp ~/Turbomole-example/&lt;br /&gt;
$ cd ~/Turbomole-example/&lt;br /&gt;
$ cp -r $TURBOMOLE_EXA_DIR/* ~/Turbomole-example/&lt;br /&gt;
$ sbatch bwHPC_turbomole_single-node_tmpdir_example.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The last step submits the example job script bwHPC_turbomole_single-node_tmpdir_example.sh to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[BwUniCluster_File_System#File_Systems|local file system]] of that particular compute node.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Geometry optimization ===&lt;br /&gt;
To do a geometry optimization of the [[#Single node job template provided by Turbomole Module|previous job example]], modify bwHPC_turbomole_single-node_tmpdir_example.sh by replacing the following line&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
time dscf &amp;gt; dscf.out 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
time jobex -dscf -keep 2&amp;gt;&amp;amp;1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
and submit the modified script to the queueing system on bwForCluster JUSTUS 2 (→[[JUSTUS2/Jobscripts: Running Your Calculations]]) and bwUniCluster (→[[BwUniCluster2.0/Slurm]]) via sbatch.&lt;br /&gt;
The Turbomole command &#039;&#039;jobex&#039;&#039; invokes all the Turbomole binaries required for the geometry optimization. &lt;br /&gt;
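Whether &#039;&#039;jobex&#039;&#039; succeeded is signalled through marker files in the working directory; a minimal check might look like the following sketch (verify the marker file names against your Turbomole version):&lt;br /&gt;

```shell
# Check jobex's result markers: jobex writes GEO_OPT_CONVERGED on success
# and GEO_OPT_FAILED on a failed optimization.
calcdir="."   # directory of the calculation
if [ -f "$calcdir/GEO_OPT_CONVERGED" ]; then
    status="converged"
elif [ -f "$calcdir/GEO_OPT_FAILED" ]; then
    status="failed"
else
    status="unknown (still running or not started)"
fi
echo "geometry optimization: $status"
```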
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
= Turbomole-Specific Environment Variables =&lt;br /&gt;
To see a list of all Turbomole environment variables set by the &#039;module load&#039; command, do the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module show chem/turbomole 2&amp;gt;&amp;amp;1 | grep -E &#039;(setenv|prepend-path)&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Version-Specific Information =&lt;br /&gt;
For information specific to the Turbomole version to be loaded (e.g. 7.5), do the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module help chem/turbomole/7.5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Development/ollama&amp;diff=15260</id>
		<title>Development/ollama</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Development/ollama&amp;diff=15260"/>
		<updated>2025-09-01T13:24:51Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Using LLMs, even just for inference, requires large computational resources - currently, ideally a powerful GPU -&lt;br /&gt;
as provided by the bwHPC clusters.&lt;br /&gt;
This page explains how to make use of bwHPC resources, using Ollama as an example to show best practices at work.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Ollama is an inferencing framework that provides access to a multitude of powerful, large models and allows&lt;br /&gt;
performant access to a variety of accelerators, e.g. from CPUs using AVX-512 to APUs like the AMD MI-300A,&lt;br /&gt;
as well as GPUs like multiple NVIDIA H100.&lt;br /&gt;
&lt;br /&gt;
By default, installing the inference server Ollama assumes you have root permissions to install the server globally for all users&lt;br /&gt;
into the directory &amp;lt;code&amp;gt;/usr/local/bin&amp;lt;/code&amp;gt;. Of course, this is &#039;&#039;&#039;not&#039;&#039;&#039; sensible on a shared cluster.&lt;br /&gt;
Therefore the clusters provide an [[Environment_Modules|Environment Module]] including binaries and libraries for CPU (with AVX-512 where available), AMD ROCm (where available) and NVIDIA CUDA:&lt;br /&gt;
 module load cs/ollama&lt;br /&gt;
&lt;br /&gt;
More information is available in [https://github.com/ollama/ollama/tree/main/docs Ollama&#039;s GitHub documentation].&lt;br /&gt;
&lt;br /&gt;
The inference server Ollama listens on the well-known port 11434. The compute node&#039;s IP address is on the internal network, e.g. 10.1.0.101,&lt;br /&gt;
which is not visible to any outside computer such as your laptop.&lt;br /&gt;
Therefore we need a way to forward this port to an IP address visible to the outside, i.e. the login nodes.&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#FEF4AB; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#FEF4AB; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#FEF4AB; text-align:left&amp;quot;|&lt;br /&gt;
Please note: this module started off in the category &amp;lt;code&amp;gt;devel&amp;lt;/code&amp;gt;, but has been moved to the correct category, computer science, or &amp;lt;code&amp;gt;cs&amp;lt;/code&amp;gt; for short.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
Prior to starting and pulling models, it is a &#039;&#039;&#039;good idea&#039;&#039;&#039; to allocate a proper [[Workspace]] for the (multi-gigabyte) models&lt;br /&gt;
and create a soft-link into this directory for Ollama:&lt;br /&gt;
 ws_allocate ollama_models 60&lt;br /&gt;
 ln -s `ws_find ollama_models`/ ~/.ollama&lt;br /&gt;
&lt;br /&gt;
Now we may allocate a compute node using [[BwUniCluster3.0/Slurm|Slurm]].&lt;br /&gt;
First, you may try out the method interactively in one terminal:&lt;br /&gt;
 srun --time=00:30:00 --gres=gpu:1 --pty /bin/bash&lt;br /&gt;
Please note that on bwUniCluster, you need to specify a partition containing a GPU; e.g. for this 30-minute run, you may select &amp;lt;code&amp;gt;--partition=dev_gpu_4&amp;lt;/code&amp;gt;, on DACHS &amp;lt;code&amp;gt;--partition=gpu1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Your shell&#039;s prompt will show the node&#039;s name, e.g. on bwUniCluster node &amp;lt;code&amp;gt;uc2n520&amp;lt;/code&amp;gt;:&lt;br /&gt;
 [USERNAME@uc2n520 ~]$&lt;br /&gt;
&lt;br /&gt;
Now you may load the Ollama module, start the server on the compute node, and use &amp;lt;code&amp;gt;OLLAMA_HOST&amp;lt;/code&amp;gt; to make sure it serves on the external IP address:&lt;br /&gt;
 module load cs/ollama&lt;br /&gt;
 export OLLAMA_HOST=0.0.0.0:11434&lt;br /&gt;
 ollama serve&lt;br /&gt;
&lt;br /&gt;
You should be able to see the usage of the accelerator:&lt;br /&gt;
&lt;br /&gt;
[[File:ollama_gpus.png|850x329px]]&lt;br /&gt;
&lt;br /&gt;
== Accessing from login nodes ==&lt;br /&gt;
From another terminal, you may log into the cluster&#039;s login node a second time and pull an LLM (see [https://ollama.com/search Ollama&#039;s model search] for available models):&lt;br /&gt;
 module load cs/ollama&lt;br /&gt;
 export OLLAMA_HOST=uc2n520&lt;br /&gt;
 ollama pull deepseek-r1&lt;br /&gt;
&lt;br /&gt;
In the first terminal on the compute node, you should see the model being downloaded and installed into the workspace.&lt;br /&gt;
Of course, developing on the login nodes is not viable; therefore you may want to forward the port.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#dedefe; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#dedefe; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Info.svg|center]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#dedefe; text-align:left&amp;quot;|&lt;br /&gt;
On GPUs with 48GB VRAM like the NVIDIA L40S, you may want to use the 70b model of DeepSeek, i.e. &amp;lt;code&amp;gt;ollama pull deepseek-r1:70b&amp;lt;/code&amp;gt;, and amend the commands below accordingly.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of course have externally visible IP addresses, e.g. &amp;lt;code&amp;gt;bwunicluster.scc.kit.edu&amp;lt;/code&amp;gt;, which resolves to one of the multiple login nodes.&lt;br /&gt;
Using the secure shell &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt;, one may forward a port from the login node to the compute node.&lt;br /&gt;
&lt;br /&gt;
Of course, you may want to work &#039;&#039;&#039;locally on your laptop&#039;&#039;&#039;.&lt;br /&gt;
Open another terminal and start the secure shell with port forwarding:&lt;br /&gt;
 ssh -L 11434:uc2n520:11434 USERNAME@bwunicluster.scc.kit.edu&lt;br /&gt;
 Your OTP: 123456&lt;br /&gt;
 Password:&lt;br /&gt;
&lt;br /&gt;
You may check whether this worked using your local browser on your laptop:&lt;br /&gt;
  [[File:firefox_ollama.png|645x325px]]&lt;br /&gt;
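Alternatively, a small check from the laptop&#039;s command line verifies the forwarded port (a sketch; it assumes curl is installed locally, and the root endpoint replying &quot;Ollama is running&quot; is current Ollama behaviour):&lt;br /&gt;

```shell
# Query the forwarded port; fall back to a hint when the SSH port
# forward is not active or the server is not running.
banner=$(curl -s --max-time 5 "http://localhost:11434/") \
    || banner="not reachable - is the SSH port forward still running?"
echo "$banner"
```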
&lt;br /&gt;
== Local programming ==&lt;br /&gt;
Now that you have made sure you have access to the compute node&#039;s GPU, you may develop on your local system:&lt;br /&gt;
 python -m venv ollama_test&lt;br /&gt;
 source ollama_test/bin/activate&lt;br /&gt;
 python -m pip install ollama&lt;br /&gt;
 export OLLAMA_HOST=localhost&lt;br /&gt;
&lt;br /&gt;
and call &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt; to run the following code:&lt;br /&gt;
 import ollama&lt;br /&gt;
 response = ollama.chat(model=&#039;deepseek-r1&#039;, messages=[ { &#039;role&#039;: &#039;user&#039;, &#039;content&#039;: &#039;why is the sky blue?&#039;},])&lt;br /&gt;
 print(response)&lt;br /&gt;
&lt;br /&gt;
You should now see DeepSeek&#039;s response regarding Rayleigh Scattering.&lt;br /&gt;
&lt;br /&gt;
On the compute node, you will see the computation:&lt;br /&gt;
&lt;br /&gt;
[[File:ollama_gpus_computing.png|850x329px]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enjoy!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Best Practice ==&lt;br /&gt;
Running interactively is generally &#039;&#039;&#039;not&#039;&#039;&#039; a good idea, especially not with very large models. It is better to submit your job with mail notification, here as file &amp;lt;code&amp;gt;ollama.slurm&amp;lt;/code&amp;gt;:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --partition=gpu_h100     # bwUniCluster3, for DACHS: gpu8&lt;br /&gt;
 #SBATCH --gres=gpu:h100:4        # bwUniCluster3, for DACHS: gpu:h100:8&lt;br /&gt;
 #SBATCH --ntasks-per-node=96     # considering bwUniCluster3 AMD EPYC9454, same on DACHS&lt;br /&gt;
 #SBATCH --mem=500G               # considering bwUniCluster3 768GB, enough on DACHS&lt;br /&gt;
 #SBATCH --time=2:00:00           # Please be courteous to other users&lt;br /&gt;
 #SBATCH --mail-type=BEGIN        # Email when the job starts&lt;br /&gt;
 #SBATCH --mail-user=my@mail.de   # Your email address&lt;br /&gt;
 &lt;br /&gt;
 module load cs/ollama            # Load the Ollama module&lt;br /&gt;
 export OLLAMA_HOST=0.0.0.0       # Serve on global interface&lt;br /&gt;
 export OLLAMA_KEEP_ALIVE=-1      # Do not unload model (default is 5 minutes)&lt;br /&gt;
 ollama serve&lt;br /&gt;
&lt;br /&gt;
After starting the SSH port forwarding, or on the login node after setting &amp;lt;code&amp;gt;export OLLAMA_HOST=&amp;lt;/code&amp;gt; to the allocated node (see the output of &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt;):&lt;br /&gt;
 ollama run deepseek-r1:671b&lt;br /&gt;
 &amp;gt;&amp;gt;&amp;gt; /?              # Shows the help&lt;br /&gt;
 &amp;gt;&amp;gt;&amp;gt; /? shortcuts    # Shows the keyboard shortcuts&lt;br /&gt;
 &amp;gt;&amp;gt;&amp;gt; /show           # Show information regarding model, prompt&lt;br /&gt;
 &amp;gt;&amp;gt;&amp;gt; What is log(e)? # Returns an explanation of the logarithm for both base 10 and the natural logarithm, including LaTeX math notation.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/SSH/rrsync&amp;diff=15259</id>
		<title>Registration/SSH/rrsync</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/SSH/rrsync&amp;diff=15259"/>
		<updated>2025-09-01T13:22:28Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= RSYNC for SSH Command Keys =&lt;br /&gt;
&lt;br /&gt;
If you want to use rsync with SSH command keys, you normally need one SSH key per directory that you want to synchronize.&lt;br /&gt;
With rrsync you can use one SSH key per main directory.&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&lt;br /&gt;
You can find all the options in the [https://github.com/WayneD/rsync/blob/master/support/rrsync.1.md man page] for rrsync.&lt;br /&gt;
Below we provide some useful examples for daily use on the bwHPC clusters.&lt;br /&gt;
&lt;br /&gt;
If you want to copy files from your home directory but only want to allow read operations, you can use the following SSH command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/local/bin/rrsync -ro /home/aa/aa_bb/aa_abc1/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:Ssh-com.png|center|600px|thumb|Add command SSH key to service.]]&lt;br /&gt;
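The screenshot shows the web form for adding the command key. For reference, the underlying OpenSSH mechanism is a forced command; a hypothetical authorized_keys entry (key shortened, options may differ from what the service generates) would look like:&lt;br /&gt;

```
command="/usr/local/bin/rrsync -ro /home/aa/aa_bb/aa_abc1/",restrict ssh-ed25519 AAAA... user@laptop
```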
&lt;br /&gt;
Then you can run the following rsync command (replace login.cluster.bw with the node you want to use):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -av login.cluster.bw:homesubdir/ localsubdir/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
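Before large transfers, rsync&#039;s standard --dry-run (-n) option previews what would be copied, and it works unchanged through rrsync. A local sketch of the flag (demonstrated on a temporary directory pair):&lt;br /&gt;

```shell
# rsync -n (--dry-run) lists what would be transferred without copying.
# Shown here on a local directory pair; the same flag works remotely.
demo=$(mktemp -d)
mkdir -p "$demo/src"
echo "hello" > "$demo/src/file.txt"
rsync -avn "$demo/src/" "$demo/dst/"
[ ! -e "$demo/dst/file.txt" ] && echo "dry run: nothing was copied"
```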
&lt;br /&gt;
Writing is not permitted with read-only access:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -av localsubdir/ login.cluster.bw:homesubdir/&lt;br /&gt;
/usr/local/bin/rrsync error: sending to read-only server is not allowed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For write-only access to your workspaces, you could use the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/local/bin/rrsync -wo /workspace/aa_abc1-abc-0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:Ssh-com.png|center|600px|thumb|Add command SSH key to service.]]&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
With rrsync, you must use relative paths starting from the configured path in your SSH command.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Then you could run the following rsync command (replace login.cluster.bw with the node you want to use and adjust the paths to your setup):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -av localsubdir/ login.cluster.bw:workspacesubdir/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reading is not permitted with write-only access.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -av login.cluster.bw:workspacesubdir/ localsubdir/&lt;br /&gt;
/usr/local/bin/rrsync error: reading from write-only server is not allowed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you do not want to restrict read or write access, simply omit &#039;-ro&#039; or &#039;-wo&#039; from your SSH command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/local/bin/rrsync /home/aa/aa_bb/aa_abc1/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== rrsync Path on the Clusters ==&lt;br /&gt;
&lt;br /&gt;
If you want to use the rrsync command, you must specify the following paths on the clusters:&lt;br /&gt;
* bwUniCluster3.0:&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/bin/rrsync&amp;lt;/pre&amp;gt;&lt;br /&gt;
* HELIX/NEMO:&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/bin/rrsync&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Rjags&amp;diff=15258</id>
		<title>Rjags</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Rjags&amp;diff=15258"/>
		<updated>2025-09-01T13:19:44Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[BwUniCluster3.0/Software/R/Rjags]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster&amp;diff=15257</id>
		<title>BwUniCluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster&amp;diff=15257"/>
		<updated>2025-09-01T13:17:06Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[BwUniCluster3.0]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=MATLAB&amp;diff=15256</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=MATLAB&amp;diff=15256"/>
		<updated>2025-09-01T13:16:00Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: UC2 -&amp;gt; UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;MATLAB&#039;&#039;&#039; (MATrix LABoratory) is a high-level programming language and interactive computing environment for numerical calculation and data visualization.&lt;br /&gt;
&lt;br /&gt;
Each cluster provides its own Matlab installation and documentation.&lt;br /&gt;
&lt;br /&gt;
To learn more about Matlab installations, visit the appropriate cluster site.&lt;br /&gt;
&lt;br /&gt;
* [[bwUniCluster3.0/Software/Matlab]]&lt;br /&gt;
* [[Helix/Software/Matlab]]&lt;br /&gt;
* [[NEMO2/Software/Matlab]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login&amp;diff=15255</id>
		<title>Registration/Login</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login&amp;diff=15255"/>
		<updated>2025-09-01T13:06:44Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Fixed Redlink to Account&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Login to the Clusters =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Access to the clusters (bwUniCluster/bwForCluster) is restricted to IP addresses of universities/colleges from Baden-Württemberg [https://bgpview.io/asn/553#prefixes-v4 (BelWü network)].&lt;br /&gt;
If you are outside the BelWü network (e.g. at home), you must first establish a VPN connection to your home university or a connection to an SSH jump host at your home university.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== SSH Clients ==&lt;br /&gt;
&lt;br /&gt;
After completing the [[Registration|web registration]] and [[Registration/Password|setting a service password]] the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039; based login.&lt;br /&gt;
* [[Registration/Login/Client| What Client to Use]]&lt;br /&gt;
&lt;br /&gt;
== Cluster Specific Information ==&lt;br /&gt;
&lt;br /&gt;
Every cluster has its own documentation for login.&lt;br /&gt;
* If you want to &#039;&#039;&#039;log in&#039;&#039;&#039; to the bwUniCluster, please refer to &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[bwUniCluster3.0/Login|bwUniCluster 3.0]]&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
* If you want to &#039;&#039;&#039;log in&#039;&#039;&#039; to one of the bwForClusters, please refer to &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[BinAC/Login|BINAC]]&#039;&#039;&#039; / &#039;&#039;&#039;[[BinAC2/Login|BINAC2]]&#039;&#039;&#039; &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[JUSTUS2/Login|JUSTUS 2]]&#039;&#039;&#039; &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[Helix/Login|Helix]]&#039;&#039;&#039; &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[NEMO2/Login|NEMO2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Allowed Activities on Login Nodes =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
To guarantee usability for all users of the clusters, you must not run your compute jobs on the login nodes.&lt;br /&gt;
Compute jobs must be submitted to the queuing system.&lt;br /&gt;
Any compute job running on the login nodes will be terminated without any notice.&lt;br /&gt;
Any long-running compilation or any long-running pre- or post-processing of batch jobs must also be submitted to the queuing system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The login nodes of the bwHPC clusters are the access point to the compute system, your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; directory and your workspaces.&lt;br /&gt;
These nodes are shared by all users; therefore, your activities on the login nodes are primarily limited to setting up your batch jobs.&lt;br /&gt;
Permitted activities also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and post-processing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
= Additional Login Information =&lt;br /&gt;
&lt;br /&gt;
The following sub-pages go into deeper detail on these topics:&lt;br /&gt;
* [[Registration/Login/Username|How do I find out my cluster username?]]&lt;br /&gt;
* [[Registration/Login/Hostname|What are the hostnames of the login nodes of a cluster?]]&lt;br /&gt;
* [[Registration/Account|Is my account on the cluster still valid?]]&lt;br /&gt;
* Configuring your shell: [[.bashrc Do&#039;s and Don&#039;ts]]&lt;br /&gt;
These pages are also referenced in the cluster-specific login documentations.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login&amp;diff=15254</id>
		<title>Registration/Login</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login&amp;diff=15254"/>
		<updated>2025-09-01T13:05:21Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Removed UC2 and added BINAC2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Login to the Clusters =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Access to the clusters (bwUniCluster/bwForCluster) is restricted to IP addresses of universities/colleges from Baden-Württemberg [https://bgpview.io/asn/553#prefixes-v4 (BelWü network)].&lt;br /&gt;
If you are outside the BelWü network (e.g. at home), you must first establish a VPN connection to your home university or a connection to an SSH jump host at your home university.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== SSH Clients ==&lt;br /&gt;
&lt;br /&gt;
After completing the [[Registration|web registration]] and [[Registration/Password|setting a service password]] the HPC cluster is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039; based login.&lt;br /&gt;
* [[Registration/Login/Client| What Client to Use]]&lt;br /&gt;
&lt;br /&gt;
== Cluster Specific Information ==&lt;br /&gt;
&lt;br /&gt;
Every cluster has its own documentation for login.&lt;br /&gt;
* If you want to &#039;&#039;&#039;log in&#039;&#039;&#039; to the bwUniCluster, please refer to &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[bwUniCluster3.0/Login|bwUniCluster 3.0]]&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
* If you want to &#039;&#039;&#039;log in&#039;&#039;&#039; to one of the bwForClusters, please refer to &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[BinAC/Login|BINAC]]&#039;&#039;&#039; / &#039;&#039;&#039;[[BinAC2/Login|BINAC2]]&#039;&#039;&#039; &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[JUSTUS2/Login|JUSTUS 2]]&#039;&#039;&#039; &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[Helix/Login|Helix]]&#039;&#039;&#039; &amp;lt;br /&amp;gt; &lt;br /&gt;
&amp;amp;rarr; &#039;&#039;&#039;[[NEMO2/Login|NEMO2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Allowed Activities on Login Nodes =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
To guarantee usability for all users of the clusters, you must not run your compute jobs on the login nodes.&lt;br /&gt;
Compute jobs must be submitted to the queuing system.&lt;br /&gt;
Any compute job running on the login nodes will be terminated without any notice.&lt;br /&gt;
Any long-running compilation or any long-running pre- or post-processing of batch jobs must also be submitted to the queuing system.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The login nodes of the bwHPC clusters are the access point to the compute system, your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; directory and your workspaces.&lt;br /&gt;
These nodes are shared by all users; therefore, your activities on the login nodes are primarily limited to setting up your batch jobs.&lt;br /&gt;
Permitted activities also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and post-processing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
= Additional Login Information =&lt;br /&gt;
&lt;br /&gt;
The following sub-pages go into deeper detail on these topics:&lt;br /&gt;
* [[Registration/Login/Username|How do I find out my cluster username?]]&lt;br /&gt;
* [[Registration/Login/Hostname|What are the hostnames of the login nodes of a cluster?]]&lt;br /&gt;
* [[Registration/Login/Account|Is my account on the cluster still valid?]]&lt;br /&gt;
* Configuring your shell: [[.bashrc Do&#039;s and Don&#039;ts]]&lt;br /&gt;
These pages are also referenced in the cluster-specific login documentations.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Hostname&amp;diff=15253</id>
		<title>Registration/Login/Hostname</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Hostname&amp;diff=15253"/>
		<updated>2025-09-01T12:59:33Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Fixed no of Helix Login nodes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Login Hostnames of the bwHPC Clusters =&lt;br /&gt;
&lt;br /&gt;
From outside the clusters, users can only use the login nodes to submit jobs.&lt;br /&gt;
&lt;br /&gt;
Please go to the section for the cluster you want to log in to:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#bwUniCluster_3.0_Hostnames|bwUniCluster 3.0 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#BINAC Hostnames|BINAC Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#BINAC2 Hostnames|BINAC2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#JUSTUS2 Hostnames|JUSTUS2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#Helix Hostnames|Helix Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#NEMO2 Hostnames|NEMO2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== bwUniCluster 3.0 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to bwUniCluster 3.0:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3.scc.kit.edu&#039;&#039;&#039;          || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the two login nodes.&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 3.0 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 3.0 second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== BINAC Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has three login nodes.&lt;br /&gt;
You have to select the login node yourself.&lt;br /&gt;
&lt;br /&gt;
Login to BINAC:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login01.binac.uni-tuebingen.de&#039;&#039;&#039; || BINAC first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login02.binac.uni-tuebingen.de&#039;&#039;&#039; || BINAC second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login03.binac.uni-tuebingen.de&#039;&#039;&#039; || BINAC third login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== BINAC2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 has one login hostname serving as a load balancer. We use DNS round-robin scheduling to balance incoming connections across the three actual login nodes. If you log in multiple times, different sessions might run on different login nodes, and hence programs started in one session might not be visible in another session. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Destination&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login.binac2.uni-tuebingen.de&#039;&#039;&#039; || one of the three login nodes &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== JUSTUS2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has four login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to JUSTUS2:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2.uni-ulm.de&#039;&#039;&#039; || login to one of the four login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the four login nodes.&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login01.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login02.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login03.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login04.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 fourth login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
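A sketch of both login variants (USERNAME is a hypothetical placeholder for your own account name):&lt;br /&gt;

```shell
# Preferred: automatic selection via the load-balanced hostname.
ssh USERNAME@justus2.uni-ulm.de

# Only if needed, e.g. to reattach to a tmux or screen session left on a
# particular node, connect to that node directly:
ssh USERNAME@justus2-login01.rz.uni-ulm.de
```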
&lt;br /&gt;
== Helix Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to Helix:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;helix.bwservices.uni-heidelberg.de&#039;&#039;&#039; || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the login nodes.&lt;br /&gt;
&lt;br /&gt;
== NEMO2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
You have to select the login node yourself.&lt;br /&gt;
&lt;br /&gt;
Login to NEMO2:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 first or second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login1.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login2.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Hostname&amp;diff=15252</id>
		<title>Registration/Login/Hostname</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Hostname&amp;diff=15252"/>
		<updated>2025-09-01T12:58:34Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Added BINAC2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Login Hostnames of the bwHPC Clusters =&lt;br /&gt;
&lt;br /&gt;
From outside the clusters, users can only access the login nodes to submit jobs.&lt;br /&gt;
&lt;br /&gt;
Please go to the section of the cluster you want to log in to:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#bwUniCluster_3.0_Hostnames|bwUniCluster 3.0 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#BINAC Hostnames|BINAC Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#BINAC2 Hostnames|BINAC2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#JUSTUS2 Hostnames|JUSTUS2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#Helix Hostnames|Helix Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Registration/Login/Hostname#NEMO2 Hostnames|NEMO2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== bwUniCluster 3.0 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to bwUniCluster 3.0:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3.scc.kit.edu&#039;&#039;&#039;          || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the two login nodes.&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 3.0 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 3.0 second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== BINAC Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has three login nodes.&lt;br /&gt;
You have to select the login node yourself.&lt;br /&gt;
&lt;br /&gt;
Login to BINAC:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login01.binac.uni-tuebingen.de&#039;&#039;&#039; || BINAC first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login02.binac.uni-tuebingen.de&#039;&#039;&#039; || BINAC second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login03.binac.uni-tuebingen.de&#039;&#039;&#039; || BINAC third login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== BINAC2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
BinAC 2 provides a single login hostname that serves as a load balancer. We use DNS round-robin scheduling to distribute incoming connections across the three actual login nodes. If you log in multiple times, different sessions might run on different login nodes, and hence programs started in one session might not be visible in another session.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Destination&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login.binac2.uni-tuebingen.de&#039;&#039;&#039; || one of the three login nodes &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== JUSTUS2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has four login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to JUSTUS2:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2.uni-ulm.de&#039;&#039;&#039; || login to one of the four login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the four login nodes.&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login01.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login02.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login03.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login04.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 fourth login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Helix Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to Helix:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;helix.bwservices.uni-heidelberg.de&#039;&#039;&#039; || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the login nodes.&lt;br /&gt;
&lt;br /&gt;
== NEMO2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
You have to select the login node yourself.&lt;br /&gt;
&lt;br /&gt;
Login to NEMO2:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 first or second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login1.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login2.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Hostname&amp;diff=15251</id>
		<title>Registration/Login/Hostname</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/Login/Hostname&amp;diff=15251"/>
		<updated>2025-09-01T12:53:53Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Remove UC2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Login Hostnames of the bwHPC Clusters =&lt;br /&gt;
&lt;br /&gt;
From outside the clusters, users can only access the login nodes to submit jobs.&lt;br /&gt;
&lt;br /&gt;
Please go to the section of the cluster you want to log in to:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Login/Hostname#bwUniCluster_3.0_Hostnames|bwUniCluster 3.0 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Login/Hostname#BINAC Hostnames|BINAC Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Login/Hostname#JUSTUS2 Hostnames|JUSTUS2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Login/Hostname#Helix Hostnames|Helix Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;[[Login/Hostname#NEMO2 Hostnames|NEMO2 Hostnames]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== bwUniCluster 3.0 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to bwUniCluster 3.0:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3.scc.kit.edu&#039;&#039;&#039;          || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;bwunicluster.scc.kit.edu&#039;&#039;&#039; || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the two login nodes.&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3-login1.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 3.0 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;uc3-login2.scc.kit.edu&#039;&#039;&#039; || bwUniCluster 3.0 second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== BINAC Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has three login nodes.&lt;br /&gt;
You have to select the login node yourself.&lt;br /&gt;
&lt;br /&gt;
Login to BINAC:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login01.binac.uni-tuebingen.de&#039;&#039;&#039; || BINAC first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login02.binac.uni-tuebingen.de&#039;&#039;&#039; || BINAC second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login03.binac.uni-tuebingen.de&#039;&#039;&#039; || BINAC third login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== JUSTUS2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has four login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to JUSTUS2:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2.uni-ulm.de&#039;&#039;&#039; || login to one of the four login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the four login nodes.&lt;br /&gt;
If you need to connect to specific login nodes, you can use the following hostnames:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login01.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login02.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login03.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 third login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;justus2-login04.rz.uni-ulm.de&#039;&#039;&#039; || JUSTUS2 fourth login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Helix Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
The selection of the login node is done automatically.&lt;br /&gt;
If you are logging in multiple times, different sessions might run on different login nodes.&lt;br /&gt;
&lt;br /&gt;
Login to Helix:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;helix.bwservices.uni-heidelberg.de&#039;&#039;&#039; || login to one of the two login nodes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In general, you should use automatic selection to allow us to balance the load over the login nodes.&lt;br /&gt;
&lt;br /&gt;
== NEMO2 Hostnames ==&lt;br /&gt;
&lt;br /&gt;
The system has two login nodes.&lt;br /&gt;
You have to select the login node yourself.&lt;br /&gt;
&lt;br /&gt;
Login to NEMO2:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Hostname !! Node type&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 first or second login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login1.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 first login node&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;login2.nemo.uni-freiburg.de&#039;&#039;&#039; || NEMO2 second login node&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Acknowledgement&amp;diff=15244</id>
		<title>Acknowledgement</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Acknowledgement&amp;diff=15244"/>
		<updated>2025-08-21T15:09:07Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Update NEMO2 und BinAC2&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Remember to acknowledge our resources in your publications!&lt;br /&gt;
&lt;br /&gt;
Such recognition is important for acquiring funding for the next generation of hardware, support services, data storage, and infrastructure.&lt;br /&gt;
&lt;br /&gt;
The publications will be referenced on the bwHPC website: https://www.bwhpc.de/user_publications.html&lt;br /&gt;
&lt;br /&gt;
== HPC Clusters ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[BwUniCluster2.0/Acknowledgement| bwUniCluster Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[BinAC2/Acknowledgement| bwForCluster BinAC2 Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[Helix/Acknowledgement| bwForCluster Helix Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[JUSTUS2/Acknowledgement| bwForCluster JUSTUS2 Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[NEMO2/Acknowledgement| bwForCluster NEMO2 Acknowledgement]]&lt;br /&gt;
&lt;br /&gt;
== Data Facilities ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;rarr;[[SDS@hd/Acknowledgement| SDS@hd Acknowledgement]]&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=15242</id>
		<title>BWUniCluster User Access Members Uni Mannheim</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BWUniCluster_User_Access_Members_Uni_Mannheim&amp;diff=15242"/>
		<updated>2025-08-21T13:07:35Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Update to UC3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Entitlement for bwUniCluster @ Uni Mannheim =&lt;br /&gt;
&lt;br /&gt;
Members of the University of Mannheim need to apply for an entitlement with a valid university account in order to use bwHPC resources.&lt;br /&gt;
Please use this web registration service:&lt;br /&gt;
&lt;br /&gt;
[https://entitlements.uni-mannheim.de https://entitlements.uni-mannheim.de]&lt;br /&gt;
&lt;br /&gt;
The entitlement service asks for a project description, which is used for internal documentation only. Please indicate that you want access to the &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
With the approval of your request, you will be able to [[Registration/bwUniCluster/Service|register for the &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;]].&lt;br /&gt;
&lt;br /&gt;
If you have any questions, please contact [mailto:hpc-support@mailman.uni-mannheim.de hpc-support@mailman.uni-mannheim.de].&lt;br /&gt;
&lt;br /&gt;
== Terms of Use ==&lt;br /&gt;
&lt;br /&gt;
As with any other UNIT-service, users of the bwUniCluster must strictly adhere to the [https://www.uni-mannheim.de/media/Einrichtungen/it/Benutzerordnung_und_wichtige_Dokumente/Benutzungsordnung_Informationssysteme_Universitaet_Mannheim.pdf terms of use of Mannheim University].&lt;br /&gt;
&lt;br /&gt;
== Acknowledgement ==&lt;br /&gt;
&lt;br /&gt;
Publications written with the help of the bwUniCluster must [[BwUniCluster3.0/Acknowledgement | acknowledge the support given by the bwHPC-project]].&lt;br /&gt;
&lt;br /&gt;
= Technical details =&lt;br /&gt;
&lt;br /&gt;
Users from Mannheim have a special quota in the [[BwUniCluster3.0/Hardware_and_Architecture/Filesystem_Details#$HOME|HOME file system]] on bwUniCluster3.0, 250 GiB and 2.5 million inodes. Workspaces are unchanged from the default.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Helix/Software/Matlab&amp;diff=15199</id>
		<title>Helix/Software/Matlab</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Helix/Software/Matlab&amp;diff=15199"/>
		<updated>2025-08-14T09:13:10Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: -r deprecated in favor of -batch&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Softwarepage|math/matlab}}&lt;br /&gt;
&lt;br /&gt;
{| width=600px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Description !! Content&lt;br /&gt;
|-&lt;br /&gt;
| module load&lt;br /&gt;
| math/matlab&lt;br /&gt;
|-&lt;br /&gt;
| License&lt;br /&gt;
| [https://de.mathworks.com/pricing-licensing/index.html?intendeduse=edu&amp;amp;prodcode=ML Academic License/Commercial]&lt;br /&gt;
|-&lt;br /&gt;
| Citing&lt;br /&gt;
| n/a&lt;br /&gt;
|-&lt;br /&gt;
| Links&lt;br /&gt;
| [https://de.mathworks.com/products/matlab/ MATLAB Homepage] &amp;amp;#124; [https://de.mathworks.com/index.html?s_tid=gn_logo MathWorks Homepage] &amp;amp;#124; [https://de.mathworks.com/support/?s_tid=gn_supp Support and more]&lt;br /&gt;
|-&lt;br /&gt;
| Graphical Interface&lt;br /&gt;
| No&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MATLAB&#039;&#039;&#039; (MATrix LABoratory) is a high-level programming language and interactive computing environment for numerical calculation and data visualization.&lt;br /&gt;
&lt;br /&gt;
= Loading MATLAB =&lt;br /&gt;
&lt;br /&gt;
The preferred way is to run the MATLAB command line interface without the GUI:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ matlab -nodisplay&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An interactive MATLAB session with graphical user interface (GUI) can be started with the command (requires X11 forwarding enabled for your ssh login):&lt;br /&gt;
&amp;lt;pre&amp;gt;$ matlab&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: Do not start a long-duration interactive MATLAB session on a login node of the cluster. Submit an [[Helix/Slurm#Interactive_Jobs | interactive job]] and start MATLAB from within the dedicated compute node assigned to you by the queueing system.&lt;br /&gt;
&lt;br /&gt;
The following generic command will execute a MATLAB script or function named &amp;quot;example&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ matlab -nodisplay -batch example &amp;gt; result.out 2&amp;gt;&amp;amp;1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The output of this session will be redirected to the file result.out. The option &amp;lt;syntaxhighlight style=&amp;quot;border:0px&amp;quot; inline=1&amp;gt;-batch&amp;lt;/syntaxhighlight&amp;gt; executes the MATLAB statement non-interactively.&lt;br /&gt;
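In a non-interactive Slurm job, the same command is typically placed in a batch script. The following is a minimal sketch; the resource requests and the script name example are assumptions to adapt to your own job:&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical Slurm batch script running the MATLAB script example.m
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00
#SBATCH --mem=8gb

module load math/matlab

# -batch executes the statement non-interactively; output goes to result.out
matlab -nodisplay -batch example > result.out
```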
&lt;br /&gt;
= Parallel Computing Using MATLAB =&lt;br /&gt;
&lt;br /&gt;
Parallelization of MATLAB jobs is realized via the built-in multi-threading provided by MATLAB&#039;s BLAS and FFT implementation and the parallel computing functionality of MATLAB&#039;s Parallel Computing Toolbox (PCT).&lt;br /&gt;
&lt;br /&gt;
== Implicit Threading ==&lt;br /&gt;
&lt;br /&gt;
A large number of built-in MATLAB functions may utilize multiple cores automatically, without any code modifications. This is referred to as implicit multi-threading and must be strictly distinguished from the explicit parallelism provided by the Parallel Computing Toolbox (PCT), which requires specific commands in your code in order to create threads.&lt;br /&gt;
&lt;br /&gt;
Implicit threading particularly takes place for linear algebra operations (such as the solution to a linear system A\b or matrix products A*B) and FFT operations. Many other high-level MATLAB functions do also benefit from multi-threading capabilities of their underlying routines. If multi-threading is not desired, single-threading can be enforced by adding the command line option &amp;lt;syntaxhighlight style=&amp;quot;border:0px&amp;quot; inline=1&amp;gt;-singleCompThread&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Whenever implicit threading takes place, MATLAB will detect the total number of cores that exist on a machine and by default makes use of all of them. This has very important implications for MATLAB jobs in HPC environments with a shared-node job scheduling policy (i.e. with multiple users sharing one compute node). Due to this behaviour, a MATLAB job may take over more compute resources than assigned by the queueing system of the cluster, thereby taking these resources away from all other users with jobs running on the same node - including your own jobs.&lt;br /&gt;
&lt;br /&gt;
Therefore, when running in multi-threaded mode, the user must always intervene so that MATLAB does not allocate all cores of the machine (unless they were requested from the queueing system). The number of threads must be controlled from within the code by means of the &amp;lt;syntaxhighlight style=&amp;quot;border:0px&amp;quot; inline=1&amp;gt;maxNumCompThreads(N)&amp;lt;/syntaxhighlight&amp;gt; function or, alternatively, with the undocumented &amp;lt;syntaxhighlight style=&amp;quot;border:0px&amp;quot; inline=1&amp;gt;feature(&#039;numThreads&#039;, N)&amp;lt;/syntaxhighlight&amp;gt; function.&lt;br /&gt;
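The thread count can also be tied to the Slurm allocation at launch time. A sketch, assuming the command runs inside a Slurm job (where SLURM_CPUS_PER_TASK is set) and that example is a placeholder script name:&lt;br /&gt;

```shell
# Limit implicit threading to the cores assigned by Slurm, then run the script.
matlab -nodisplay -batch "maxNumCompThreads(str2double(getenv('SLURM_CPUS_PER_TASK'))); example"
```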
&lt;br /&gt;
== Using the Parallel Computing Toolbox (PCT) ==&lt;br /&gt;
&lt;br /&gt;
By using the PCT one can make explicit use of several cores on multicore processors to parallelize MATLAB applications without MPI programming. Under MATLAB version 8.4 and earlier, this toolbox provides 12 workers (MATLAB computational engines) to execute applications locally on a single multicore node. Under MATLAB version 8.5 and later, the number of workers available is equal to the number of cores on a single node (up to a maximum of 512).&lt;br /&gt;
&lt;br /&gt;
If multiple PCT jobs are running at the same time, they all write temporary MATLAB job information to the same location. This race condition can cause one or more of the parallel MATLAB jobs to fail to use the parallel functionality of the toolbox.&lt;br /&gt;
&lt;br /&gt;
To solve this issue, each MATLAB job should explicitly set a unique location where these files are created. This can be accomplished by adding the following snippet of code to your MATLAB script.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% create a local cluster object&lt;br /&gt;
pc = parcluster(&#039;local&#039;)&lt;br /&gt;
&lt;br /&gt;
% get the number of dedicated cores from environment&lt;br /&gt;
nprocs = str2num(getenv(&#039;SLURM_NPROCS&#039;))&lt;br /&gt;
&lt;br /&gt;
% you may explicitly set the JobStorageLocation to the tmp directory that is unique to each cluster job (and is on local, fast scratch)&lt;br /&gt;
parpool_tmpdir = [getenv(&#039;TMP&#039;),&#039;/.matlab/local_cluster_jobs/slurm_jobID_&#039;,getenv(&#039;SLURM_JOB_ID&#039;)]&lt;br /&gt;
mkdir(parpool_tmpdir)&lt;br /&gt;
pc.JobStorageLocation = parpool_tmpdir&lt;br /&gt;
&lt;br /&gt;
% start the parallel pool&lt;br /&gt;
parpool(pc, nprocs)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If a large number of MATLAB jobs run in parallel, they can also conflict when writing generic information to &amp;lt;syntaxhighlight style=&amp;quot;border:0px&amp;quot; inline=1&amp;gt;~/.matlab&amp;lt;/syntaxhighlight&amp;gt;. This can be circumvented by setting &amp;lt;syntaxhighlight style=&amp;quot;border:0px&amp;quot; inline=1&amp;gt;$MATLAB_PREFDIR&amp;lt;/syntaxhighlight&amp;gt; to a different directory in your batch script, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;export MATLAB_PREFDIR=$TMP&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Using a different implementation of BLAS/LAPACK ==&lt;br /&gt;
&lt;br /&gt;
By default, MATLAB uses a version of Intel MKL as its BLAS/LAPACK library. It is possible to switch to different libraries manually by setting the &amp;lt;syntaxhighlight style=&amp;quot;border:0px&amp;quot; inline=1&amp;gt;BLAS_VERSION&amp;lt;/syntaxhighlight&amp;gt; and &amp;lt;syntaxhighlight style=&amp;quot;border:0px&amp;quot; inline=1&amp;gt;LAPACK_VERSION&amp;lt;/syntaxhighlight&amp;gt; environment variables. The following lines can be added to the batch script to switch, in this example, to BLIS and libFLAME, which are optimized for AMD processors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load numlib/aocl/3.2.0&lt;br /&gt;
&lt;br /&gt;
export BLAS_VERSION=$AOCL_LIB_DIR/libblis-mt.so&lt;br /&gt;
export LAPACK_VERSION=$AOCL_LIB_DIR/libflame.so&lt;br /&gt;
&lt;br /&gt;
export BLIS_NUM_THREADS=$SLURM_NTASKS&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can increase performance depending on the task, for example large matrix multiplications, but caution is advised.&lt;br /&gt;
&lt;br /&gt;
= General Performance Tips for MATLAB =&lt;br /&gt;
&lt;br /&gt;
MATLAB data structures (arrays or matrices) are dynamic in size, i.e. MATLAB will automatically resize the structure on demand. Although this seems to be convenient, MATLAB continually needs to allocate a new chunk of memory and copy over the data to the new block of memory as the array or matrix grows in a loop. This may take a significant amount of extra time during execution of the program.&lt;br /&gt;
&lt;br /&gt;
Code performance can often be drastically improved by pre-allocating memory for the final expected size of the array or matrix before actually starting the processing loop. In order to pre-allocate an array of strings, you can use MATLAB&#039;s built-in cell function. In order to pre-allocate an array or matrix of numbers, you can use MATLAB&#039;s built-in zeros function.&lt;br /&gt;
&lt;br /&gt;
The performance benefit of pre-allocation is illustrated with the following example code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% prealloc.m&lt;br /&gt;
&lt;br /&gt;
clear all;&lt;br /&gt;
&lt;br /&gt;
num=10000000;&lt;br /&gt;
&lt;br /&gt;
disp(&#039;Without pre-allocation:&#039;)&lt;br /&gt;
tic&lt;br /&gt;
for i=1:num&lt;br /&gt;
    a(i)=i;&lt;br /&gt;
end&lt;br /&gt;
toc&lt;br /&gt;
&lt;br /&gt;
disp(&#039;With pre-allocation:&#039;)&lt;br /&gt;
tic&lt;br /&gt;
b=zeros(1,num);&lt;br /&gt;
for i=1:num&lt;br /&gt;
    b(i)=i;&lt;br /&gt;
end&lt;br /&gt;
toc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a compute node, the result may look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Without pre-allocation:&lt;br /&gt;
Elapsed time is 2.879446 seconds.&lt;br /&gt;
With pre-allocation:&lt;br /&gt;
Elapsed time is 0.097557 seconds.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the code runs almost 30 times faster with pre-allocation.&lt;br /&gt;
&lt;br /&gt;
= Compile MATLAB binaries with mcc =&lt;br /&gt;
&lt;br /&gt;
MATLAB on Helix comes with a compiler, &amp;lt;code&amp;gt;mcc&amp;lt;/code&amp;gt;, that can be used to create binaries from MATLAB-code.&lt;br /&gt;
Stand-alone MATLAB programs compiled with &amp;lt;code&amp;gt;mcc&amp;lt;/code&amp;gt; do not require any license tokens at runtime, so you can start jobs in parallel without any risk of running out of licenses.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=HPC_Glossary/Batch_system&amp;diff=15112</id>
		<title>HPC Glossary/Batch system</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=HPC_Glossary/Batch_system&amp;diff=15112"/>
		<updated>2025-07-15T09:03:36Z</updated>

		<summary type="html">&lt;p&gt;H Winkhardt: Moab raus&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When we speak of a &#039;&#039;&#039;batch system&#039;&#039;&#039; on compute clusters, we mean the system that knows which compute nodes are used by whom and when they will become available. It also knows about all waiting jobs and determines which jobs are going to start next on which nodes whenever a node becomes available.&lt;br /&gt;
&lt;br /&gt;
== Why do we need a Resource Management System? ==&lt;br /&gt;
&lt;br /&gt;
An HPC cluster is a multi-user system. Users have compute jobs with different demands on the number of processor cores, memory, disk space and run-time. Some users run a program only occasionally for a big task, while other users must run many simulations to finish their projects.&lt;br /&gt;
&lt;br /&gt;
The cluster only provides a limited number of compute resources with certain features. Unrestricted access for all users to all compute nodes without any time limit would not work. Therefore we need a resource management system (batch system) for scheduling and distributing compute jobs on suitable compute resources.&lt;br /&gt;
The use of a resource management system pursues several objectives:&lt;br /&gt;
&lt;br /&gt;
* Fair distribution of resources among users&lt;br /&gt;
* Compute jobs should start as soon as possible&lt;br /&gt;
* Full load and efficient usage of all resources&lt;br /&gt;
[[image:distributing_jobs1.svg]]&lt;br /&gt;
&lt;br /&gt;
== How does a Resource Management System work? ==&lt;br /&gt;
A resource management system or batch system manages the compute nodes, jobs and queues and basically consists of two components:&lt;br /&gt;
&lt;br /&gt;
* A resource manager, which is responsible for the node status and for the distribution of jobs over the compute nodes.&lt;br /&gt;
* A workload manager (scheduler), which is in charge of job scheduling, managing, monitoring, and reporting.&lt;br /&gt;
&lt;br /&gt;
A resource management system works as follows:&lt;br /&gt;
&lt;br /&gt;
* The user creates a job script containing requests for compute resources and submits the script to the resource management system.&lt;br /&gt;
* The scheduler parses the job script for resource requests and determines where to run the job and how to schedule it.&lt;br /&gt;
* The scheduler delegates the job to the resource manager.&lt;br /&gt;
* The resource manager executes the job and communicates the status information to the scheduler.&lt;br /&gt;
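The workflow above can be sketched with a minimal Slurm job script. The job name and resource values below are illustrative placeholders, not cluster-specific recommendations; check your cluster&#039;s documentation for actual queues and limits.&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical minimal Slurm job script -- all values are placeholders.
# The #SBATCH lines are the resource requests that the scheduler parses.
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --ntasks=1                # number of tasks (processes)
#SBATCH --cpus-per-task=4         # cores per task
#SBATCH --mem=4gb                 # memory per node
#SBATCH --time=00:30:00           # wall-clock time limit (hh:mm:ss)

# The actual compute work starts here and runs on the allocated node.
echo "Job running on $(hostname)"
```

Everything before the first non-comment line is read by the batch system; the commands after it are executed on the compute node once the scheduler has allocated the requested resources.&lt;br /&gt;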
&lt;br /&gt;
== How does a Job Scheduler work?== &lt;br /&gt;
&lt;br /&gt;
The job scheduling process is influenced by many, sometimes conflicting, parameters which are used as metrics for the scheduling algorithm. The objectives of a resource management system are approached in the following way:&lt;br /&gt;
&lt;br /&gt;
* Fair distribution of resources among users: Ensure that all users get a fair share of processing time for their jobs.&lt;br /&gt;
* Compute jobs should start as soon as possible: Minimize the time jobs have to wait until they start.&lt;br /&gt;
* Full load and efficient usage of all resources: Aim for the highest possible utilization with the available jobs, because cluster resources are very expensive.&lt;br /&gt;
&lt;br /&gt;
The following simple example illustrates how a scheduler works.&lt;br /&gt;
Let us consider a system with 4 nodes. Each node has 4 processor cores.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here the cluster is empty. No jobs are scheduled:&lt;br /&gt;
&lt;br /&gt;
[[image:Scheduling-workload0.svg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now jobs with different resource requests are scheduled:&lt;br /&gt;
&lt;br /&gt;
* one job which needs two nodes (blue)&lt;br /&gt;
* one job which needs one node (red)&lt;br /&gt;
* multiple jobs, which need only one core (yellow, orange)&lt;br /&gt;
* some jobs are short running, others are long running (see time axis)&lt;br /&gt;
&lt;br /&gt;
[[image:Scheduling-workload1.svg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Up to now enough resources are available so that all jobs can start instantly.&lt;br /&gt;
&lt;br /&gt;
More jobs are submitted. Now not all jobs can start immediately on the available hardware resources. Some jobs are scheduled to run at a later time. &lt;br /&gt;
&lt;br /&gt;
* Big jobs (in terms of hardware resources) often have to wait longer, because they cannot be scheduled flexibly.&lt;br /&gt;
* Long running jobs can delay the scheduling of big jobs, because they block needed hardware resources.&lt;br /&gt;
* Small and short jobs can be scheduled to fill gaps. &lt;br /&gt;
* If a job stops prematurely, short jobs can be rescheduled to start earlier (backfilling).&lt;br /&gt;
&lt;br /&gt;
[[image:Scheduling-workload2.svg]]&lt;br /&gt;
&lt;br /&gt;
In addition to cores and time a scheduler has to consider many more metrics like memory, co-processors, fair share, and priority. Scheduling is a multi-dimensional optimization problem.&lt;br /&gt;
&lt;br /&gt;
== Resource Management Systems on bwHPC Clusters ==&lt;br /&gt;
&lt;br /&gt;
All bwHPC clusters use Slurm, a complete resource management system with an integrated resource manager and scheduler.&lt;br /&gt;
All bwHPC clusters follow a fairshare policy. The waiting time of jobs depends on&lt;br /&gt;
&lt;br /&gt;
* your job&#039;s resource requests: Jobs with large resource requests wait longer for free resources.&lt;br /&gt;
* your usage history: High resource usage in a short time leads to a lower job priority.&lt;br /&gt;
* your university&#039;s share (bwUniCluster only): If the usage exceeds the university&#039;s share, the job priority decreases.&lt;br /&gt;
&lt;br /&gt;
bwHPC clusters have specific configurations for the resource management system concerning:&lt;br /&gt;
&lt;br /&gt;
* Job submission and monitoring commands (via Slurm)&lt;br /&gt;
* Queues and limits for the available hardware&lt;br /&gt;
* Node access policy:&lt;br /&gt;
** shared: compute nodes can be shared by jobs of different users&lt;br /&gt;
** single user: compute nodes can be shared by jobs of a single user&lt;br /&gt;
** single job: compute nodes are allocated exclusively for a single job&lt;br /&gt;
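On the Slurm-based bwHPC clusters, the submission and monitoring commands mentioned above look like the following. The job script name and the job ID are hypothetical; these commands only work on a system where Slurm is installed.&lt;br /&gt;

```shell
# Submit a job script to the batch system (prints the new job ID).
sbatch jobscript.sh

# List your own pending and running jobs.
squeue -u $USER

# Show the detailed state of one job (the ID 12345 is a placeholder).
scontrol show job 12345
```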
&lt;br /&gt;
&lt;br /&gt;
FAQ: How many jobs can run on a single node at the same time?&lt;br /&gt;
This depends on the node access policy:&lt;br /&gt;
&lt;br /&gt;
* On &amp;quot;shared nodes&amp;quot;, more than one job can run simultaneously if the requested resources are available, and these jobs may have been submitted by different users.&lt;br /&gt;
* On &amp;quot;single user nodes&amp;quot;, more than one job can run simultaneously, but the jobs have to be submitted by one and the same user.&lt;br /&gt;
* On &amp;quot;single job nodes&amp;quot;, the access is exclusive for one job, irrespective of the number of requested cores.&lt;/div&gt;</summary>
		<author><name>H Winkhardt</name></author>
	</entry>
</feed>