BwUniCluster2.0/Software/Python Dask
Revision as of 10:53, 19 April 2020
This guide explains how to use Python Dask and dask-jobqueue on bwUniCluster2.0.
Installation
Use one of our pre-configured Python modules and load it with 'module load ...'. If you are using your own conda environment, you have to install the packages 'dask' and 'dask-jobqueue' yourself.
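For example, the setup might look like the following sketch (the module name is a placeholder; check 'module avail' for the modules actually provided on the cluster):

```shell
# Show the pre-configured Python modules available on the cluster
module avail devel/python

# Load one of them (replace <version> with an available version)
module load devel/python/<version>

# If you use your own conda environment instead, install the
# required packages into it from the conda-forge channel:
conda install -c conda-forge dask dask-jobqueue
```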
Using Dask
In a new interactive shell, execute the following commands in Python:
>>> from dask_jobqueue import SLURMCluster
>>> cluster = SLURMCluster(cores=X, memory='X GB', queue='X')
You have to specify how many cores and how much memory you want for one Dask worker. The queue corresponds to a partition of the batch job system.
>>> cluster.scale(X)
Replace X with the number of Dask workers you want to start.
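Putting the steps above together, a minimal end-to-end sketch could look as follows. The concrete values (4 cores, 16 GB, 2 workers), the queue name, and the example computation are illustrative assumptions, not cluster defaults; the script only works on a login node where Slurm and the dask-jobqueue package are available.

```python
from dask_jobqueue import SLURMCluster
from dask.distributed import Client
import dask.array as da

# Describe one Dask worker: 4 cores and 16 GB of memory per Slurm job
# (example values; '<queue>' is a placeholder for a real partition name).
cluster = SLURMCluster(cores=4, memory='16 GB', queue='<queue>')

# Start 2 workers; the corresponding Slurm jobs are submitted for you.
cluster.scale(2)

# Connect a client so that Dask computations run on the cluster.
client = Client(cluster)

# Example workload: the mean of a large random array, computed in
# parallel across the workers.
x = da.random.random((10000, 10000), chunks=(1000, 1000))
print(x.mean().compute())

# Shut down the client and the Slurm-backed cluster.
client.close()
cluster.close()
```

Note that `cluster.scale(2)` only requests the workers; they become usable once the batch system actually starts the underlying Slurm jobs.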