General Workflow of Running a Calculation
On a compute cluster, you do not simply log in and run your software interactively. Instead, you write a "job script" that contains all commands needed to run and process your calculation, and submit it to a waiting queue, from which it is executed on one of several hundred compute nodes.
How this is done is described in a little more detail here: Running Calculations
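A minimal sketch of such a job script, assuming a generic Slurm setup (the partition name, module name, and program are placeholders, not the cluster's actual values; see the linked pages for those):

  #!/bin/bash
  #SBATCH --job-name=my_first_job    # name shown in the queue
  #SBATCH --partition=cpu            # placeholder partition; check the cluster docs
  #SBATCH --ntasks=1                 # number of parallel tasks (processes)
  #SBATCH --time=00:10:00            # wall-clock limit, hh:mm:ss
  #SBATCH --mem=2000mb               # memory per node

  # commands executed on the compute node
  module load compiler/gnu           # placeholder module name
  ./my_program input.dat > output.log

You would save this as, for example, jobscript.sh and submit it with sbatch jobscript.sh (see the Slurm section below).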
Get access to the cluster
Follow the registration process for the bwUniCluster. → bwUniCluster registration
Login to the cluster
Set up your service password and 2FA token, then log in to the cluster. → Login and security measures
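From a Linux or macOS terminal, logging in typically looks like the following; the hostname uc3.scc.kit.edu is an assumption based on the cluster's naming scheme, so use the exact host and username format given in the linked login documentation:

  # replace ab1234 with your own username (usually carrying a site prefix, e.g. ka_ab1234)
  ssh ab1234@uc3.scc.kit.edu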
Transfer your data to the cluster
Get familiar with available file systems on the cluster. → File Systems and Workspaces
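For large or temporary data, bwHPC clusters provide workspaces on the scratch file system, managed with the workspace tools; a short sketch (see the linked page for the exact policies and maximum durations):

  # allocate a scratch workspace named "myproject" for 30 days
  ws_allocate myproject 30

  # list your workspaces and their expiry dates
  ws_list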
Transfer your data to the cluster using appropriate tools. → Data Transfer
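Two common command-line tools for transfers are scp and rsync; a sketch, again assuming the login host uc3.scc.kit.edu from above:

  # copy a single file to your home directory on the cluster
  scp results.tar.gz ab1234@uc3.scc.kit.edu:~/

  # synchronise a whole directory; -P shows progress and allows resuming
  rsync -avP ./mydata/ ab1234@uc3.scc.kit.edu:~/mydata/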
Find information about installed software and examples
Compilers, libraries, and application software are provided as software modules. Learn how to work with software modules. → Software
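The basic module commands look like this (the module name compiler/gnu is a placeholder; the avail listing shows the real names):

  # list all software modules available on the cluster
  module avail

  # load a module, here a compiler (placeholder name)
  module load compiler/gnu

  # show which modules are currently loaded
  module list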
Submit your application as a batch job
Get familiar with the available node types on the cluster. → Hardware and Architecture
Submit and monitor your jobs with Slurm commands. → Batch System Slurm
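The most common Slurm commands for this step, as a short sketch:

  # submit a job script; Slurm prints the assigned job ID
  sbatch jobscript.sh

  # show your own pending and running jobs
  squeue -u $USER

  # cancel a job by its ID
  scancel 1234567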
Learn about Scaling your Job
How many compute cores should my job use? This depends on the software and on the problem you are trying to solve. If you use too few cores, your computation may take far too long; if you use too many, the additional cores will not speed up your computation, and all you do is waste compute resources and energy.
If you run hundreds or thousands of similar calculations, you should look at this carefully before starting.
How to do this is described in: Scaling
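A simple way to find a sensible core count is a small scaling test: run the same fixed problem with increasing task counts and compare runtimes. A sketch, assuming the hypothetical jobscript.sh from above actually uses the allocated tasks:

  # submit the same job with 1, 2, 4, ... 32 tasks
  for n in 1 2 4 8 16 32; do
      sbatch --ntasks=$n --job-name=scaling_$n jobscript.sh
  done

  # after the jobs have finished, compare elapsed times
  sacct -u $USER --format=JobName,NTasks,Elapsed

If doubling the cores stops roughly halving the runtime, you have reached the useful limit for that problem size.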
Acknowledge the cluster
Remember to mention the cluster in your publications. → Acknowledgement