<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=J+Ruess</id>
	<title>bwHPC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=J+Ruess"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/e/Special:Contributions/J_Ruess"/>
	<updated>2026-04-15T19:56:59Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9386</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9386"/>
		<updated>2021-12-09T08:51:23Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation and Usage ==&lt;br /&gt;
Please have a look at our [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] on how to use Dask on bwUniCluster2.0 (2_Grundlagen: Environment erstellen and 6_Dask). This is currently only available in German.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9385</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9385"/>
		<updated>2021-12-09T08:50:06Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation and Usage ==&lt;br /&gt;
Please have a look at our [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] on how to use Dask on bwUniCluster2.0 (2_Grundlagen: Environment erstellen and 6_Dask). This is currently only available in German.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
To forward the Dask Dashboard, you have to set up SSH port forwarding to the machine on which you have started Dask.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -N -L 8787:machineName:8787 yourusername@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After executing this command, you can access the Dask dashboard in your local browser by opening &#039;localhost:8787/status&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9384</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9384"/>
		<updated>2021-12-09T08:49:37Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: /* Installation and Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation and Usage ==&lt;br /&gt;
Please have a look at our [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] on how to use Dask on bwUniCluster2.0 (2_Grundlagen: Environment erstellen and 6_Dask). This is currently only available in German.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command, e.g. cluster.scale(5), Dask will start to request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check whether your Dask worker processes are actually running or whether you have to wait until you get the requested resources. You have a better chance of getting resources quickly if you additionally specify a walltime. You can check all available options with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computation.&lt;br /&gt;
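&lt;br /&gt;
The steps above can be combined into one short session. The concrete values below (10 cores, 20 GB of memory, the &#039;single&#039; queue and a 30 minute walltime) are only an illustration and must be adapted to your job; the client object is created with dask.distributed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask.distributed import Client&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=10, memory=&#039;20 GB&#039;, queue=&#039;single&#039;, walltime=&#039;00:30:00&#039;)&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(5)&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client = Client(cluster)&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;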
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
To forward the Dask Dashboard, you have to set up SSH port forwarding to the machine on which you have started Dask.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -N -L 8787:machineName:8787 yourusername@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After executing this command, you can access the Dask dashboard in your local browser by opening &#039;localhost:8787/status&#039;.&lt;br /&gt;
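&lt;br /&gt;
If you are unsure on which machine and port the dashboard is running, the cluster object from dask.distributed can tell you:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.dashboard_link&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;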
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9383</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9383"/>
		<updated>2021-12-09T08:49:10Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: /* Installation and Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation and Usage ==&lt;br /&gt;
Please have a look at our [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] on how to use Dask on bwUniCluster2.0 (2_Grundlagen: Environment erstellen and 6_Dask). This is currently only available in German.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command, e.g. cluster.scale(5), Dask will start to request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check whether your Dask worker processes are actually running or whether you have to wait until you get the requested resources. You have a better chance of getting resources quickly if you additionally specify a walltime. You can check all available options with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computation.&lt;br /&gt;
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
To forward the Dask Dashboard, you have to set up SSH port forwarding to the machine on which you have started Dask.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -N -L 8787:machineName:8787 yourusername@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After executing this command, you can access the Dask dashboard in your local browser by opening &#039;localhost:8787/status&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9382</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9382"/>
		<updated>2021-12-09T08:48:58Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: /* Installation and Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation and Usage ==&lt;br /&gt;
Please have a look at our [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] on how to use Dask on bwUniCluster2.0 (2_Grundlagen: Environment erstellen and 6_Dask). This is currently only available in German.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command, e.g. cluster.scale(5), Dask will start to request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check whether your Dask worker processes are actually running or whether you have to wait until you get the requested resources. You have a better chance of getting resources quickly if you additionally specify a walltime. You can check all available options with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computation.&lt;br /&gt;
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
To forward the Dask Dashboard, you have to set up SSH port forwarding to the machine on which you have started Dask.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -N -L 8787:machineName:8787 yourusername@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After executing this command, you can access the Dask dashboard in your local browser by opening &#039;localhost:8787/status&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9381</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9381"/>
		<updated>2021-12-09T08:48:03Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: /* Installation and Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation and Usage ==&lt;br /&gt;
Please have a look at our [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] (2_Grundlagen: Environment erstellen and 6_Dask).&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command, e.g. cluster.scale(5), Dask will start to request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check whether your Dask worker processes are actually running or whether you have to wait until you get the requested resources. You have a better chance of getting resources quickly if you additionally specify a walltime. You can check all available options with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computation.&lt;br /&gt;
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
To forward the Dask Dashboard, you have to set up SSH port forwarding to the machine on which you have started Dask.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -N -L 8787:machineName:8787 yourusername@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After executing this command, you can access the Dask dashboard in your local browser by opening &#039;localhost:8787/status&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9380</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9380"/>
		<updated>2021-12-09T08:46:42Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: /* Installation and Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation and Usage ==&lt;br /&gt;
Please have a look at our [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] (2_Grundlagen: Environment erstellen and 6_Dask).&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command, e.g. cluster.scale(5), Dask will start to request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check whether your Dask worker processes are actually running or whether you have to wait until you get the requested resources. You have a better chance of getting resources quickly if you additionally specify a walltime. You can check all available options with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computation.&lt;br /&gt;
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
To forward the Dask Dashboard, you have to set up SSH port forwarding to the machine on which you have started Dask.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -N -L 8787:machineName:8787 yourusername@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After executing this command, you can access the Dask dashboard in your local browser by opening &#039;localhost:8787/status&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9379</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=9379"/>
		<updated>2021-12-09T08:46:07Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: /* Installation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation and Usage ==&lt;br /&gt;
Please have a look at our [https://github.com/hpcraink/workshop-parallel-jupyter Workshop] (2_Grundlagen: Environment erstellen and 6_Dask).&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command, e.g. cluster.scale(5), Dask will start to request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check whether your Dask worker processes are actually running or whether you have to wait until you get the requested resources. You have a better chance of getting resources quickly if you additionally specify a walltime. You can check all available options with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computation.&lt;br /&gt;
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
To forward the Dask Dashboard, you have to set up SSH port forwarding to the machine on which you have started Dask.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -N -L 8787:machineName:8787 yourusername@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After executing this command, you can access the Dask dashboard in your local browser by opening &#039;localhost:8787/status&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8719</id>
		<title>BwUniCluster2.0/Batch Queues</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster2.0/Batch_Queues&amp;diff=8719"/>
		<updated>2021-05-26T13:35:24Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: Describe how to connect to a specific node in an interactive job&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article contains information on the queues of the [[BwUniCluster_2.0_Slurm_common_Features|batch job system]] and on interactive jobs.&lt;br /&gt;
&lt;br /&gt;
= Job Submission =&lt;br /&gt;
== sbatch Command ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== sbatch -p &#039;&#039;queue&#039;&#039; ===&lt;br /&gt;
Compute resources such as (wall-)time, nodes and memory are restricted and must fit into &#039;&#039;&#039;queues&#039;&#039;&#039;. Since requested compute resources are NOT always automatically mapped to the correct queue class, &#039;&#039;&#039;you must add the correct queue class to your sbatch command&#039;&#039;&#039;. &amp;lt;font color=red&amp;gt;The specification of a queue is obligatory on BwUniCluster 2.0.&amp;lt;/font&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Details are:&lt;br /&gt;
&lt;br /&gt;
{| width=750px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=&amp;quot;5&amp;quot; | bwUniCluster 2.0 &amp;lt;br&amp;gt; sbatch -p &#039;&#039;queue&#039;&#039;&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! queue !! node !! default resources !! minimum resources !! maximum resources&lt;br /&gt;
|- style=&amp;quot;text-align:left&amp;quot;&lt;br /&gt;
| dev_single&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;6 nodes are reserved for this queue. &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| single&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=180000mb, ntasks-per-node=40, (threads-per-core)=2&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=10, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=4, mem=90000mb, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue.&amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple&lt;br /&gt;
| thin&lt;br /&gt;
| time=30, mem-per-cpu=1125mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, mem=90000mb, nodes=128, ntasks-per-node=40, (threads-per-core=2) &lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=30, nodes=8, mem=122000, ntasks-per-node=28, (threads-per-core=2)&amp;lt;br&amp;gt;8 nodes are reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| multiple_e&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| nodes=2&lt;br /&gt;
| time=72:00:00, nodes=128, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| dev_special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| special&amp;lt;br/&amp;gt;(&amp;lt;i&amp;gt;restricted access!&amp;lt;/i&amp;gt;)&lt;br /&gt;
| thin (Broadwell)&lt;br /&gt;
| time=10, mem-per-cpu=2178mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, mem=122000mb, ntasks-per-node=28, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| fat &lt;br /&gt;
| fat&lt;br /&gt;
| time=10, mem-per-cpu=18750mb&lt;br /&gt;
| &lt;br /&gt;
| time=72:00:00, nodes=1, ntasks-per-node=80, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| dev_gpu_4&lt;br /&gt;
| gpu_4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=30, nodes=1, mem=376000, ntasks-per-node=40, (threads-per-core=2)&amp;lt;br&amp;gt;1 node is reserved for this queue &amp;lt;br&amp;gt; Only for development, i.e. debugging or performance optimization ...&lt;br /&gt;
|- style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
| gpu_4&lt;br /&gt;
| gpu4&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=20&lt;br /&gt;
| &lt;br /&gt;
| time=48:00:00, mem=376000, nodes=14, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- style=&amp;quot;vertical-align:top; text-align:left&amp;quot;&lt;br /&gt;
| gpu_8&lt;br /&gt;
| gpu8&lt;br /&gt;
| time=10, mem-per-gpu=94000mb, cpu-per-gpu=10&lt;br /&gt;
|&lt;br /&gt;
| time=48:00:00, mem=752000, nodes=10, ntasks-per-node=40, (threads-per-core=2)&lt;br /&gt;
|- &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The default resources of a queue class define time, number of tasks and memory if they are not explicitly given with the sbatch command. The resource list options &#039;&#039;--time&#039;&#039;, &#039;&#039;--ntasks&#039;&#039;, &#039;&#039;--nodes&#039;&#039;, &#039;&#039;--mem&#039;&#039; and &#039;&#039;--mem-per-cpu&#039;&#039; are described [[ForHLR_Batch_Jobs_SLURM#sbatch_Command_Parameters|here]].&lt;br /&gt;
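&lt;br /&gt;
For example, to override the defaults of the &#039;single&#039; queue explicitly (the resource values and the script name are only an illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch -p single --time=02:00:00 --ntasks=1 --mem=10000mb my_jobscript.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;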
&lt;br /&gt;
Access to the &amp;quot;special&amp;quot; and &amp;quot;dev_special&amp;quot; partitions on the bwUniCluster 2.0 is restricted to members of the institutions which participated in the procurement of the extension partition specifically for this purpose. Please contact the support team if your institution participated in the procurement and your account should be able to run jobs in this partition.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Queue class examples ====&lt;br /&gt;
To run your batch job on one of the thin nodes, please use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sbatch --partition=dev_multiple&lt;br /&gt;
     or &lt;br /&gt;
$ sbatch -p dev_multiple&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interactive Jobs ====&lt;br /&gt;
On bwUniCluster 2.0 you are only allowed to run short jobs (&amp;lt;&amp;lt; 1 hour) with low memory requirements (&amp;lt;&amp;lt; 8 GByte) on the login nodes. If you want to run longer jobs and/or jobs requesting more than 8 GByte of memory, you must allocate resources for so-called interactive jobs using the command salloc on a login node. For a serial application running on a compute node that requires 5000 MByte of memory, with the interactive run limited to 2 hours, the following command has to be executed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p single -n 1 -t 120 --mem=5000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then you will get one core on a compute node within the partition &amp;quot;single&amp;quot;. After execution of this command &#039;&#039;&#039;DO NOT CLOSE&#039;&#039;&#039; your current terminal session but wait until the queueing system Slurm has granted you the requested resources on the compute system. You will be logged in automatically on the granted core! To run a serial program on the granted core you only have to type the name of the executable.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ./&amp;lt;my_serial_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Please be aware that your serial job must run less than 2 hours in this example, else the job will be killed during runtime by the system. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You can now also start a graphical X11 terminal connected to the dedicated resource that is available for 2 hours. You can start it with the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ xterm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that once the walltime limit has been reached, the resources, i.e. the compute node, will automatically be revoked.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An interactive parallel application running on one or several compute nodes (e.g. 5 nodes with 40 cores each) usually requires a certain amount of memory (e.g. 50 GByte) and a maximum runtime (e.g. 1 hour). For example, 5 nodes can be allocated with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ salloc -p multiple -N 5 --ntasks-per-node=40 -t 01:00:00  --mem=50gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run parallel jobs on 200 cores with 50 GByte of memory per node. Please be aware that you will be logged in on core 0 of the first node.&lt;br /&gt;
If you want access to another node, open a new terminal, connect it to bwUniCluster 2.0 as well, and type the following commands to&lt;br /&gt;
connect first to the running interactive job and then to a specific node:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ srun --jobid=XXXXXXXX --pty /bin/bash&lt;br /&gt;
$ srun --nodelist=uc2nXXX --pty /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
With the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ squeue&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you can display the jobid and the nodelist.&lt;br /&gt;
&lt;br /&gt;
If you want to run MPI programs, you can do so by simply typing mpirun &amp;lt;program_name&amp;gt;. Your program will then run on 200 cores. A very simple example of starting a parallel job is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also start the debugger ddt by the commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ module add devel/ddt&lt;br /&gt;
$ ddt &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above commands will execute the parallel program &amp;lt;my_mpi_program&amp;gt; on all available cores. You can also start parallel programs on a subset of cores; for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpirun -n 50 &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are using Intel MPI you must start &amp;lt;my_mpi_program&amp;gt; by the command mpiexec.hydra (instead of mpirun).&lt;br /&gt;
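For example (using the same placeholder program name as above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mpiexec.hydra &amp;lt;my_mpi_program&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;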
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster 2.0|Batch Jobs - bwUniCluster 2.0 Features]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6383</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6383"/>
		<updated>2020-04-19T09:47:47Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command with e.g. cluster.scale(5), Dask will request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the shell command &#039;squeue&#039; you can check whether your Dask worker processes are actually running or whether you still have to wait for the requested resources. You have a better chance of getting resources quickly if you additionally specify a walltime. You can list all available options with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
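The &#039;client&#039; object used below is obtained by connecting a Dask client to the cluster (the usual dask.distributed pattern):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask.distributed import Client&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client = Client(cluster)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;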
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computations.&lt;br /&gt;
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
To access the Dask dashboard, you have to set up SSH port forwarding to the machine on which you have started Dask.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -N -L 8787:machineName:8787 yourusername@bwunicluster.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After executing this command you can access the Dask dashboard in your local browser by opening &#039;localhost:8787/status&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6382</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6382"/>
		<updated>2020-04-19T09:36:37Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command with e.g. cluster.scale(5), Dask will request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check if your dask worker processes are actually running or if you have to wait until you get the requested resources. You have a better chance to get resources quickly if you additionally specify a walltime.  You can check all options you have with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computations.&lt;br /&gt;
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6381</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6381"/>
		<updated>2020-04-19T09:17:12Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command with e.g. cluster.scale(5), Dask will request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check if your dask worker processes are actually running or if you have to wait until you get the requested resources. You have a better chance to get resources quickly if you additionally specify a walltime.  You can check all options you have with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computations.&lt;br /&gt;
&lt;br /&gt;
== Dask Dashboard ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6380</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6380"/>
		<updated>2020-04-19T09:13:35Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command with e.g. cluster.scale(5), Dask will request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check if your dask worker processes are actually running or if you have to wait until you get the requested resources. You have a better chance to get resources quickly if you additionally specify a walltime.  You can check all options you have with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dask will output all resources that are actually available to distribute your computations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6379</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6379"/>
		<updated>2020-04-19T09:11:51Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command with e.g. cluster.scale(5), Dask will request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; in bash you can check if your dask worker processes are actually running or if you have to wait until you get the requested resources. You have a better chance to get resources quickly if you additionally specify a walltime.  You can check all options you have with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6378</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6378"/>
		<updated>2020-04-19T09:08:54Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command with e.g. cluster.scale(5), Dask will request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
With the command &#039;squeue&#039; you can check if your dask worker processes are actually running or if you have to wait until you get the requested resources. You have a better chance to get resources quickly if you additionally specify a walltime.  You can check all options you have with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; SLURMCluster?&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6377</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6377"/>
		<updated>2020-04-19T09:00:03Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After executing this command with e.g. cluster.scale(5), Dask will request five worker processes, each with the specified number of cores and amount of memory.&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6376</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6376"/>
		<updated>2020-04-19T08:56:14Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. Furthermore, a [[BwUniCluster_2.0_Batch_Queues|batch queue]] is required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6375</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6375"/>
		<updated>2020-04-19T08:55:05Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. [[BwUniCluster_2.0_Batch_Queues]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6374</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6374"/>
		<updated>2020-04-19T08:53:40Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker. [[BwUniCluster_2.0_Slurm_common_Features|batch job system]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6373</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6373"/>
		<updated>2020-04-19T08:51:03Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6372</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6372"/>
		<updated>2020-04-19T08:49:52Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=X, memory=&#039;X GB&#039;, queue=&#039;X&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You have to specify how many cores and how much memory you want for one Dask worker.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster.scale(X)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6371</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6371"/>
		<updated>2020-04-19T08:46:35Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: /* Using Dask */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In a new interactive shell, execute the following commands in Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=XX, memory=&#039;XX GB&#039;, queue=&#039;XXX&#039;)&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6370</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6370"/>
		<updated>2020-04-19T08:45:00Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;.&lt;br /&gt;
&lt;br /&gt;
== Using Dask ==&lt;br /&gt;
In Python, execute the following commands: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; from dask_jobqueue import SLURMCluster&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; cluster = SLURMCluster(cores=XX, memory=&#039;XX GB&#039;, queue=&#039;XXX&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6369</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6369"/>
		<updated>2020-04-19T08:28:08Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. If you are using your own conda environment, you have to install the packages &#039;dask&#039; and &#039;dask-jobqueue&#039;. &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6368</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6368"/>
		<updated>2020-04-19T08:24:12Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
Use one of our pre-configured Python modules and load it with &#039;module load ...&#039;. You have to install the packages &lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6367</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6367"/>
		<updated>2020-04-19T08:18:02Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
This guide explains how to use Python Dask on bwUniCluster2.0.&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6366</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6366"/>
		<updated>2020-04-19T08:16:56Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: Replaced content with &amp;quot;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot; ! Navigation: bwHPC BPR / bwUniCluster  |}--&amp;gt; TestTes...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
TestTest&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6365</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6365"/>
		<updated>2020-04-19T08:14:10Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--{| style=&amp;quot;border-style: solid; border-width: 1px&amp;quot;&lt;br /&gt;
! Navigation: [[BwHPC_Best_Practices_Repository|bwHPC BPR]] / [[BwUniCluster_User_Guide|bwUniCluster]] &lt;br /&gt;
|}--&amp;gt;&lt;br /&gt;
[[bwUniCluster_2.0|bwUniCluster 2.0]] is Baden-Württemberg&#039;s general purpose tier 3 high performance computing (HPC)&lt;br /&gt;
cluster co-financed by Baden-Württemberg&#039;s ministry of science, research and arts and &lt;br /&gt;
the shareholders:&lt;br /&gt;
* Albert Ludwig University of Freiburg&lt;br /&gt;
* Eberhard Karls University, Tübingen&lt;br /&gt;
* Karlsruhe Institute of Technology (KIT)&lt;br /&gt;
* Heidelberg University (Ruprecht-Karls-Universität Heidelberg)&lt;br /&gt;
* Ulm University&lt;br /&gt;
* University of Hohenheim&lt;br /&gt;
* University of Konstanz&lt;br /&gt;
* University of Mannheim&lt;br /&gt;
* University of Stuttgart&lt;br /&gt;
* HAW BW e.V. (an association of universities of applied sciences in Baden-Württemberg) &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To &#039;&#039;&#039;log on&#039;&#039;&#039; [[bwUniCluster_2.0|bwUniCluster 2.0]] a user account is required. All members of  the shareholder&lt;br /&gt;
universities can apply for an account.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:left; color:#000;vertical-align:top;&amp;quot; |__TOC__ &lt;br /&gt;
|&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;[[File:bwUniCluster_17Jan2014_p044-rot_t10.10.00.jpg|center|border|250px|bwUniCluster wiring by Holger Obermaier, copyright: KIT (SCC)]] &amp;lt;span style=&amp;quot;font-size:80%&amp;quot;&amp;gt;bwUniCluster wiring © KIT (SCC)&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registration =&lt;br /&gt;
&lt;br /&gt;
Granting access and issuing a user account for &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039; requires registration at the KIT service website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] (step B). However, this registration depends on the &#039;&#039;&#039;bwUniCluster entitlement&#039;&#039;&#039; issued by your university (step A).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step A: bwUniCluster (!) entitlement for registration ==&lt;br /&gt;
&#039;&#039;&#039;The entitlement is called bwUniCluster (NOT bwUniCluster 2.0, i.e. the entitlements don&#039;t change!)&#039;&#039;&#039; and each university issues the bwUniCluster entitlement &#039;&#039;&#039;only&#039;&#039;&#039; for their own members. Details on the issuing process and/or the entitlement application forms are listed hereafter:  &lt;br /&gt;
&lt;br /&gt;
* [[BwCluster_User_Access_Uni_Freiburg|Albert Ludwig University of Freiburg]]&lt;br /&gt;
* [https://bwunicluster.urz.uni-heidelberg.de/ Heidelberg University]&lt;br /&gt;
* [https://kim.uni-hohenheim.de/bwhpc-account University of Hohenheim]&lt;br /&gt;
* [http://www.scc.kit.edu/downloads/sdo/Antrag_Benutzernummer_bwUniCluster.pdf Karlsruhe Institute of Technology (KIT)]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Konstanz|University of Konstanz]]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Mannheim|University of Mannheim]]&lt;br /&gt;
* [https://www.hlrs.de/solutions-services/academic-users/bwunicluster-access/ University of Stuttgart]&lt;br /&gt;
* [http://www.zdv.uni-tuebingen.de/dienstleistungen/computing/anmeldung/bwunicluster.html Eberhard Karls University Tübingen]&lt;br /&gt;
* [[BWUniCluster_User_Access_Members_Uni_Ulm|Ulm University]]&lt;br /&gt;
* Hochschule Aalen&lt;br /&gt;
* Hochschule Albstadt-Sigmaringen&lt;br /&gt;
* Hochschule Esslingen&lt;br /&gt;
* Hochschule Furtwangen&lt;br /&gt;
* Hochschule Karlsruhe&lt;br /&gt;
* Hochschule Konstanz&lt;br /&gt;
* Hochschule Reutlingen&lt;br /&gt;
* Hochschule Rottenburg&lt;br /&gt;
* Hochschule Stuttgart (HfT)&lt;br /&gt;
* Hochschule Ulm&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step B: Web Registration ==&lt;br /&gt;
After step A, i.e., after issuing the bwUniCluster entitlement, please visit: &lt;br /&gt;
* [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
*# &amp;lt;div&amp;gt;Under &#039;&#039;&#039;The following services are available&#039;&#039;&#039;, select the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; &amp;lt;br&amp;gt;[[File:bwRegWebApp_avail_services_pic01.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally&lt;br /&gt;
*#* for all non-KIT members &#039;&#039;&#039;mandatorily&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
*#* for KIT members &#039;&#039;&#039;optionally&#039;&#039;&#039; set a service password for authentication on bwUniCluster&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- old description&lt;br /&gt;
** login with your home-organizational user account and user password,&lt;br /&gt;
** select service &#039;&#039;&#039;&#039;&#039;bwUniCluster&#039;&#039;&#039;&#039;&#039; (on the left side) and &lt;br /&gt;
** follow the instructions to complete the registration.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Step C: &amp;lt;span style=&amp;quot;color:red;font-size:100%;font-weight:bold&amp;quot;&amp;gt;Amendment:&amp;lt;/span&amp;gt; bwUniCluster questionnaire ==&lt;br /&gt;
Starting June 1st, 2015, usage of bwUniCluster mandatorily requires the&lt;br /&gt;
questionnaire&lt;br /&gt;
&lt;br /&gt;
   https://zas.bwhpc.de/shib/en/bwunicluster_survey.php&lt;br /&gt;
&lt;br /&gt;
to be answered. The input is used solely for planning future support activities and future HPC&lt;br /&gt;
resources. From June 1st, 2015, login to bwUniCluster 2.0 is only permitted&lt;br /&gt;
to users who have already answered the questionnaire. &#039;&#039;&#039;&#039;&#039;Exception&#039;&#039;&#039;: For the first 14 days after the web registration, login to bwUniCluster 2.0 is permitted without participation in the questionnaire.&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
By default, for KIT members your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; to log on matches that of your KIT &lt;br /&gt;
account, while for all non-KIT members your bwUniCluster 2.0 &#039;&#039;&#039;password&#039;&#039;&#039; is the one you set during the web registration (compare step 7 of chapter 1.2). &lt;br /&gt;
At any time, you can set a new bwUniCluster 2.0 password via the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# find on the left side &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service (i.e. bwUniCluster) password, repeat it and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button&lt;br /&gt;
# the page answers e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;password has been changed&amp;quot;)&lt;br /&gt;
# proceed to log in using the new password in the next step&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==  Contact ==&lt;br /&gt;
If you have questions or problems concerning bwUniCluster (2.0) registration, please [[Registration_Support_-_bwUniCluster|contact your local hotline]]. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
After finishing the web registration, bwUniCluster 2.0 is ready for your &#039;&#039;&#039;SSH&#039;&#039;&#039;-based login. Available SSH applications are:&lt;br /&gt;
* built-in command under Linux and macOS using the application &#039;&#039;terminal&#039;&#039;&lt;br /&gt;
* [http://mobaxterm.mobatek.net/ MobaXterm] under Windows &lt;br /&gt;
&lt;br /&gt;
bwUniCluster 2.0 has four + two dedicated login nodes. The login node is selected automatically based on round-robin scheduling, so a later session might run &lt;br /&gt;
on a different login node.&lt;br /&gt;
&lt;br /&gt;
Only the secure shell &#039;&#039;SSH&#039;&#039; is allowed for login. Other protocols like &#039;&#039;telnet&#039;&#039; or &#039;&#039;rlogin&#039;&#039;&lt;br /&gt;
are not allowed for security reasons.&lt;br /&gt;
&lt;br /&gt;
== Login applications: Linux and macOS ==&lt;br /&gt;
&lt;br /&gt;
A connection to bwUniCluster 2.0 can be established by the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu   # or&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu            # Login on a Cascade-Lake-node - or&lt;br /&gt;
$ ssh &amp;lt;UserID&amp;gt;@uc1e.scc.kit.edu           # Login on a Broadwell-node  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are using &#039;&#039;OpenSSH&#039;&#039; (usually installed on Linux-based systems) and you want to use a GUI-based application on bwUniCluster 2.0, you should use the &lt;br /&gt;
&#039;&#039;ssh&#039;&#039; command with the option &#039;&#039;-X&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@bwunicluster.scc.kit.edu   # or&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu            # or&lt;br /&gt;
$ ssh -X &amp;lt;UserID&amp;gt;@uc1e.scc.kit.edu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Login applications: Windows ==&lt;br /&gt;
&lt;br /&gt;
=== Hints for using MobaXterm ===&lt;br /&gt;
&lt;br /&gt;
Note: The bwHPC-C5 support team strongly recommends using [http://mobaxterm.mobatek.net/ MobaXterm] instead of &#039;&#039;PuTTY&#039;&#039; or &#039;&#039;WinSCP&#039;&#039;. &#039;&#039;MobaXterm&#039;&#039; provides an X11 server, allowing you to start GUI-based software.&lt;br /&gt;
 &lt;br /&gt;
Start &#039;&#039;MobaXterm&#039;&#039;, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Remote name              : uc2.scc.kit.edu    # or uc1e.scc.kit.edu&lt;br /&gt;
Specify user name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
Port                     : 22&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After that, click on &#039;OK&#039;. A terminal will open where you can enter your password.&lt;br /&gt;
&lt;br /&gt;
=== Deprecated: Hints for using PuTTY ===&lt;br /&gt;
Start PuTTY and, in the window &#039;&#039;PuTTY Configuration&#039;&#039; under category &#039;&#039;Session&#039;&#039;, fill in the following fields: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host Name        : &amp;lt;UserID&amp;gt;@uc2.scc.kit.edu    # or &amp;lt;UserID&amp;gt;@uc1e.scc.kit.edu&lt;br /&gt;
Port             : 22&lt;br /&gt;
Connection type  : SSH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and click on &#039;&#039;Open&#039;&#039;, enter your password and accept the &#039;&#039;host key&#039;&#039;. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
=== Deprecated: Hints for using WinSCP ===&lt;br /&gt;
&lt;br /&gt;
Start WinSCP, fill in the following fields:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
File Protocol    : SCP&lt;br /&gt;
Host name        : uc2.scc.kit.edu          # or uc1e.scc.kit.edu &lt;br /&gt;
Port             : 22&lt;br /&gt;
User name        : &amp;lt;UserID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and click on &#039;Login&#039; and enter your password. Note that you can also name the configured session via &#039;&#039;Save&#039;&#039; and load it later via the given name.&lt;br /&gt;
&lt;br /&gt;
== About UserID / Username ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&amp;lt;UserID&amp;gt;&#039;&#039;&#039; of the ssh command is a placeholder for your &#039;&#039;&#039;&#039;&#039;username&#039;&#039;&#039;&#039;&#039; at your home &lt;br /&gt;
organization together with a &#039;&#039;&#039;&#039;&#039;prefix&#039;&#039;&#039;&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
{| width=450px class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Home organization !! &amp;lt;UserID&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Freiburg || &#039;&#039;fr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Heidelberg || &#039;&#039;hd_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Hohenheim || &#039;&#039;ho_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| KIT || username &#039;&#039;(without any prefix)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Universität Konstanz || &#039;&#039;kn_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Mannheim || &#039;&#039;ma_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Stuttgart ||  &#039;&#039;st_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Tübingen || &#039;&#039;tu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Universität Ulm  || &#039;&#039;ul_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Aalen || &#039;&#039;aa_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Albstadt-Sigmaringen || &#039;&#039;as_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Esslingen || &#039;&#039;es_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Furtwangen || &#039;&#039;hf_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Karlsruhe || &#039;&#039;hk_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| HTWG Konstanz || &#039;&#039;ht_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Reutlingen || &#039;&#039;hr_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Rottenburg || &#039;&#039;ro_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule für Technik Stuttgart || &#039;&#039;hs_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
| Hochschule Ulm || &#039;&#039;hu_&#039;&#039;username&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Allowed activities on login nodes ==&lt;br /&gt;
&lt;br /&gt;
The login nodes of bwUniCluster 2.0 are the access point to the compute system and to your bwUniCluster 2.0 $HOME directory. The login nodes are shared with all users of bwUniCluster 2.0. Therefore, your activities on the login nodes are primarily limited to setting up your batch jobs. Your activities may also include:&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; compilation of your program code and&lt;br /&gt;
* &#039;&#039;&#039;short&#039;&#039;&#039; pre- and postprocessing of your batch jobs.&lt;br /&gt;
&lt;br /&gt;
To guarantee usability for all the users of bwUniCluster 2.0 &amp;lt;span style=&amp;quot;color:red;font-size:100%;&amp;quot;&amp;gt;&#039;&#039;&#039;you must not run your compute jobs on the login nodes&#039;&#039;&#039;&amp;lt;/span&amp;gt;. Compute jobs must be submitted to the&lt;br /&gt;
[[bwUniCluster Batch Jobs|queueing system]]. Any compute job running on the login nodes will be terminated without any notice. Any long-running compilation or any long-running pre- or postprocessing of batch jobs must also be submitted to the [[bwUniCluster Batch Jobs|queueing system]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[First_Steps_on_bwHPC_cluster|First steps on bwUniCluster]] =&lt;br /&gt;
&lt;br /&gt;
The first and most important steps on bwUniCluster 2.0 can be found [[First_Steps_on_bwHPC_cluster|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Deregistration =&lt;br /&gt;
If you plan to permanently leave bwUniCluster 2.0, follow the deregistration checklist:&lt;br /&gt;
# Transfer all your data in $HOME and your workspaces to your local computer/storage, and after that delete all your data&lt;br /&gt;
# Visit [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu] &lt;br /&gt;
#* Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;  &lt;br /&gt;
#* Enter your home-organisational user ID / username  and your home-organisational password and click &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
#* You will be redirected back to the registration website [https://bwidm.scc.kit.edu/ https://bwidm.scc.kit.edu/] &lt;br /&gt;
#* &amp;lt;div&amp;gt;Select &#039;&#039;&#039;Registry Info&#039;&#039;&#039; of the service &#039;&#039;&#039;bwUniCluster&#039;&#039;&#039; (on the left hand side)&amp;lt;br&amp;gt;[[File:bwUniCluster_registration_sidebar.png|center|border|]]&amp;lt;/div&amp;gt;&lt;br /&gt;
#* Click &#039;&#039;&#039;Deregister&#039;&#039;&#039;&lt;br /&gt;
Note that Step 2 will automatically unsubscribe you from the bwunicluster mailing list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:bwUniCluster_2.0|Access]][[Category:Access|bwUniCluster 2.0]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6364</id>
		<title>BwUniCluster3.0/Software/Python Dask</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwUniCluster3.0/Software/Python_Dask&amp;diff=6364"/>
		<updated>2020-04-19T08:11:43Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: Created page with &amp;quot;TestTest&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;TestTest&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Git_GUI&amp;diff=5885</id>
		<title>User:M Janczyk/Git GUI</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Git_GUI&amp;diff=5885"/>
		<updated>2019-12-17T16:53:55Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: /* GUI for Git/SVN repositories on bwunicluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= GUI for Git/SVN repositories on bwunicluster = &lt;br /&gt;
&lt;br /&gt;
This tutorial explains how to get a GUI experience for a Git or SVN repository on a remote machine with Microsoft Visual Studio Code (VS Code) and the &#039;Remote Development&#039; extension.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red;font-size:105%;&amp;quot;&amp;gt;Note: The &#039;Remote Development&#039; extension for VS Code is in a preview state. If you encounter any bugs, it is recommended to use VS Code Insiders instead of VS Code.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://code.visualstudio.com/ Visual Studio Code] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://code.visualstudio.com/insiders/ Visual Studio Code Insiders]&lt;br /&gt;
&lt;br /&gt;
If you need Subversion support, follow the steps below and additionally install the ‘SVN’ extension by Chris Johnston.&lt;br /&gt;
&lt;br /&gt;
== Install extension and connect to a repository on bwunicluster ==&lt;br /&gt;
&lt;br /&gt;
#Click on ‘Extensions’ on the left and search for ‘remote development’. Install the ‘Remote Development’ extension from Microsoft by clicking on ‘Install’. [[File:Git_Remote_1.png|700px]]&lt;br /&gt;
#Click on ‘View → Command Palette...’ and then type ‘remote ssh’. Choose ‘Remote-SSH: Connect to Host...’ and press enter. Now click on ‘Add New SSH Host...’ and enter ‘ssh youruser@bwunicluster.scc.kit.edu’.&lt;br /&gt;
#VS Code asks where to store the SSH configuration. Choose the first entry to store it in your home directory. [[File:Git_Remote_2.png|700px]]&lt;br /&gt;
#Click on ‘View → Command Palette...’ and then type ‘remote ssh’. Choose ‘Remote-SSH: Connect to Host...’. Now use ‘bwunicluster.scc.kit.edu’ and in the new window accept the fingerprint by clicking on ‘continue’.&lt;br /&gt;
#Enter your password for bwunicluster and press enter.&lt;br /&gt;
#The green ‘SSH: bwunicluster.scc.kit.edu’ at the bottom indicates that this VS Code window is connected to the remote machine.&lt;br /&gt;
#Open a terminal by clicking on ‘Terminal → New Terminal’. Navigate to your Git repository and type ‘code .’ and hit enter. After entering your password again a new instance of VS Code opens with the folder of your Git repository as working directory. [[File:Git_Remote_4.png|700px]]&lt;br /&gt;
#Reopen the terminal by clicking on ‘Terminal → New Terminal’. The new terminal should open in the folder of your Git repository.&lt;br /&gt;
#In the newly started terminal, typing ‘code FILENAME.EXT’ opens that file in the VS Code editor.&lt;br /&gt;
#When edited files belong to a Git/SVN repository, you can see this on the left by clicking on ‘Source Control’. You can click on the files which have changed and VS Code displays the changes. If you do not see any changed files, it may help to click on the refresh button. [[File:Git_Remote_5.png|700px]]&lt;br /&gt;
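&lt;br /&gt;
The ‘Add New SSH Host...’ step above stores a host entry in your SSH configuration file. A minimal sketch of such an entry (‘youruser’ is a placeholder for your own user name):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host bwunicluster.scc.kit.edu&lt;br /&gt;
    HostName bwunicluster.scc.kit.edu&lt;br /&gt;
    User youruser&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;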
&lt;br /&gt;
&lt;br /&gt;
[[Category:bwUniCluster|GUI for Git and SVN]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Git_GUI&amp;diff=5884</id>
		<title>User:M Janczyk/Git GUI</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Git_GUI&amp;diff=5884"/>
		<updated>2019-12-17T12:22:17Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= GUI for Git/SVN repositories on bwunicluster = &lt;br /&gt;
&lt;br /&gt;
This tutorial explains how to get a GUI experience for a Git or SVN repository on a remote machine with Microsoft Visual Studio Code (VS Code) and the &#039;Remote Development&#039; extension.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red;font-size:105%;&amp;quot;&amp;gt;Note: The &#039;Remote Development&#039; extension for VS Code is in a preview state. If you encounter any bugs, it is recommended to use VS Code Insiders instead of VS Code.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://code.visualstudio.com/ Visual Studio Code] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://code.visualstudio.com/insiders/ Visual Studio Code Insiders]&lt;br /&gt;
&lt;br /&gt;
== Install extension and connect to a repository on bwunicluster ==&lt;br /&gt;
&lt;br /&gt;
#Click on ‘Extensions’ on the left and search for ‘remote development’. Install the ‘Remote Development’ extension from Microsoft by clicking on ‘Install’. [[File:Git_Remote_1.png|700px]]&lt;br /&gt;
#Click on ‘View → Command Palette...’ and then type ‘remote ssh’. Choose ‘Remote-SSH: Connect to Host...’ and press enter. Now click on ‘Add New SSH Host...’ and enter ‘ssh youruser@bwunicluster.scc.kit.edu’.&lt;br /&gt;
#VS Code asks where to store the SSH configuration. Choose the first entry to store it in your home directory. [[File:Git_Remote_2.png|700px]]&lt;br /&gt;
#Click on ‘View → Command Palette...’ and then type ‘remote ssh’. Choose ‘Remote-SSH: Connect to Host...’. Now use ‘bwunicluster.scc.kit.edu’ and in the new window accept the fingerprint by clicking on ‘continue’.&lt;br /&gt;
#Enter your password for bwunicluster and press enter.&lt;br /&gt;
#The green ‘SSH: bwunicluster.scc.kit.edu’ at the bottom indicates that this VS Code window is connected to the remote machine.&lt;br /&gt;
#Open a terminal by clicking on ‘Terminal → New Terminal’. Navigate to your Git repository and type ‘code .’ and hit enter. After entering your password again a new instance of VS Code opens with the folder of your Git repository as working directory. [[File:Git_Remote_4.png|700px]]&lt;br /&gt;
#Reopen the terminal by clicking on ‘Terminal → New Terminal’. The new terminal should open in the folder of your Git repository.&lt;br /&gt;
#By typing ‘code filename.extension’ you can open and edit a file in VS Code.&lt;br /&gt;
#When files are modified, you can see this on the left by clicking on ‘Source Control’. You can click on the files which have changed and VS Code displays the changes. If you do not see any changed files, it may help to click on the refresh button. [[File:Git_Remote_5.png|700px]]&lt;br /&gt;
#You can do the same with Subversion by installing the ‘SVN’ extension by Chris Johnston.&lt;br /&gt;
&lt;br /&gt;
[[Category:bwUniCluster|GUI for Git and SVN]]&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Git_GUI&amp;diff=5883</id>
		<title>User:M Janczyk/Git GUI</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Git_GUI&amp;diff=5883"/>
		<updated>2019-12-17T11:58:54Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: /* GUI for Git/SVN repositories on bwunicluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= GUI for Git/SVN repositories on bwunicluster = &lt;br /&gt;
&lt;br /&gt;
This tutorial explains how to get a GUI experience for a Git or SVN repository on a remote machine with Microsoft Visual Studio Code (VS Code) and the &#039;Remote Development&#039; extension.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red;font-size:105%;&amp;quot;&amp;gt;Note: The &#039;Remote Development&#039; extension for VS Code is in a preview state. If you run into bugs, it is recommended to use VS Code Insiders instead of VS Code.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://code.visualstudio.com/ Visual Studio Code] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://code.visualstudio.com/insiders/ Visual Studio Code Insiders]&lt;br /&gt;
&lt;br /&gt;
== Install extension and connect to a repository on bwunicluster ==&lt;br /&gt;
&lt;br /&gt;
#Click on ‘Extensions’ on the left and search for ‘remote development’. Install the ‘Remote Development’ extension from Microsoft by clicking on ‘Install’. [[File:Git_Remote_1.png|700px]]&lt;br /&gt;
#Click on ‘View → Command Palette...’ and then type ‘remote ssh’. Choose ‘Remote-SSH: Connect to Host...’ and press enter. Now click on ‘Add New SSH Host...’ and enter ‘ssh youruser@bwunicluster.scc.kit.edu’.&lt;br /&gt;
#VS Code asks where to store the SSH configuration. Choose the first entry to store it in your home directory. [[File:Git_Remote_2.png|700px]]&lt;br /&gt;
#Click on ‘View → Command Palette...’ and then type ‘remote ssh’. Choose ‘Remote-SSH: Connect to Host...’. Now select ‘bwunicluster.scc.kit.edu’ and accept the fingerprint in the new window by clicking on ‘Continue’.&lt;br /&gt;
#Enter your password for bwunicluster and press enter.&lt;br /&gt;
#The green ‘SSH: bwunicluster.scc.kit.edu’ at the bottom indicates that this VS Code window is connected to the remote machine.&lt;br /&gt;
#Open a terminal by clicking on ‘Terminal → New Terminal’. Navigate to your Git repository and type ‘code .’ and hit enter. After entering your password again a new instance of VS Code opens with the folder of your Git repository as working directory. [[File:Git_Remote_4.png|700px]]&lt;br /&gt;
#Reopen the terminal by clicking on ‘Terminal → New Terminal’. The new terminal should open in the folder of your Git repository.&lt;br /&gt;
#By typing ‘code filename.extension’ you can open and edit a file in VS Code.&lt;br /&gt;
#When files are modified, you can see this on the left by clicking on ‘Source Control’. Click on a changed file and VS Code displays the changes. If no changed files are shown, clicking the refresh button may help. [[File:Git_Remote_5.png|700px]]&lt;br /&gt;
#You can do the same with Subversion by installing the ‘SVN’ extension by Chris Johnston.&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Git_GUI&amp;diff=5882</id>
		<title>User:M Janczyk/Git GUI</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=User:M_Janczyk/Git_GUI&amp;diff=5882"/>
		<updated>2019-12-17T11:06:29Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: Created page with &amp;quot;= GUI for Git/SVN repositories on bwunicluster =   This tutorial explains how to get a GUI experience for a Git or SVN repository on a remote machine with Microsoft Visual Stu...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= GUI for Git/SVN repositories on bwunicluster = &lt;br /&gt;
&lt;br /&gt;
This tutorial explains how to get a GUI experience for a Git or SVN repository on a remote machine with Microsoft Visual Studio Code (VS Code) and the &#039;Remote Development&#039; extension.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red;font-size:105%;&amp;quot;&amp;gt;Note: The &#039;Remote Development&#039; extension for VS Code is in a preview state. If you run into bugs, it is recommended to use VS Code Insiders instead of VS Code.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Install extension and connect to a repository on bwunicluster ==&lt;br /&gt;
&lt;br /&gt;
#Click on ‘Extensions’ on the left and search for ‘remote development’. Install the ‘Remote Development’ extension from Microsoft by clicking on ‘Install’. [[File:Git_Remote_1.png|700px]]&lt;br /&gt;
#Click on ‘View → Command Palette...’ and then type ‘remote ssh’. Choose ‘Remote-SSH: Connect to Host...’ and press enter. Now click on ‘Add New SSH Host...’ and enter ‘ssh youruser@bwunicluster.scc.kit.edu’.&lt;br /&gt;
#VS Code asks where to store the SSH configuration. Choose the first entry to store it in your home directory. [[File:Git_Remote_2.png|700px]]&lt;br /&gt;
#Click on ‘View → Command Palette...’ and then type ‘remote ssh’. Choose ‘Remote-SSH: Connect to Host...’. Now select ‘bwunicluster.scc.kit.edu’ and accept the fingerprint in the new window by clicking on ‘Continue’.&lt;br /&gt;
#Enter your password for bwunicluster and press enter.&lt;br /&gt;
#The green ‘SSH: bwunicluster.scc.kit.edu’ at the bottom indicates that this VS Code window is connected to the remote machine.&lt;br /&gt;
#Open a terminal by clicking on ‘Terminal → New Terminal’. Navigate to your Git repository and type ‘code .’ and hit enter. After entering your password again a new instance of VS Code opens with the folder of your Git repository as working directory. [[File:Git_Remote_4.png|700px]]&lt;br /&gt;
#Reopen the terminal by clicking on ‘Terminal → New Terminal’. The new terminal should open in the folder of your Git repository.&lt;br /&gt;
#By typing ‘code filename.extension’ you can open and edit a file in VS Code.&lt;br /&gt;
#When files are modified, you can see this on the left by clicking on ‘Source Control’. Click on a changed file and VS Code displays the changes. If no changed files are shown, clicking the refresh button may help. [[File:Git_Remote_5.png|700px]]&lt;br /&gt;
#You can do the same with Subversion by installing the ‘SVN’ extension by Chris Johnston.&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_5.png&amp;diff=5881</id>
		<title>File:Git Remote 5.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_5.png&amp;diff=5881"/>
		<updated>2019-12-17T10:55:32Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_4.png&amp;diff=5880</id>
		<title>File:Git Remote 4.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_4.png&amp;diff=5880"/>
		<updated>2019-12-17T10:55:24Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_3.png&amp;diff=5879</id>
		<title>File:Git Remote 3.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_3.png&amp;diff=5879"/>
		<updated>2019-12-17T10:55:13Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_2.png&amp;diff=5878</id>
		<title>File:Git Remote 2.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_2.png&amp;diff=5878"/>
		<updated>2019-12-17T10:55:03Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_1.png&amp;diff=5877</id>
		<title>File:Git Remote 1.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Git_Remote_1.png&amp;diff=5877"/>
		<updated>2019-12-17T10:45:27Z</updated>

		<summary type="html">&lt;p&gt;J Ruess: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>J Ruess</name></author>
	</entry>
</feed>