JUSTUS2/Getting Started
Latest revision as of 11:58, 11 September 2024
General Workflow of Running a Calculation
On a compute cluster, you do not simply log in and run your software. Instead, you write a "job script" that contains all the commands needed to run and process your calculation, and you submit it to a waiting queue to be executed on one of several hundred computers.
How this is done is described in a little more detail here: Running Calculations
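To make the workflow concrete, a job script is an ordinary shell script whose leading #SBATCH comment lines tell the scheduler what resources the job needs. The sketch below is illustrative only; the module name and resource values are placeholders, not recommendations for JUSTUS 2 (see the Running Calculations page for real examples):

```shell
#!/bin/bash
#SBATCH --job-name=my_first_job   # a name shown in the queue listing
#SBATCH --nodes=1                 # run on a single compute node
#SBATCH --ntasks=4                # number of cores (placeholder value)
#SBATCH --time=00:30:00           # wall-clock limit hh:mm:ss
#SBATCH --mem=4gb                 # memory request (placeholder value)

# Everything below runs on the compute node once the job is scheduled.
module load chem/orca             # hypothetical module name -- adapt to your software
echo "Running on $(hostname) with $SLURM_NTASKS tasks"
```

The script itself does nothing until you hand it to the scheduler, which queues it and starts it when a matching node becomes free.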
Get Access to the Cluster
Follow the registration process for the bwForCluster. → How to Register for a bwForCluster
Login to the Cluster
Set up your service password and 2FA token, then log in to the cluster. → Login JUSTUS2
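Once the service password and 2FA token are set up, login works over SSH. A minimal sketch; the hostname below is an assumption, so take the actual hostname and procedure from the Login JUSTUS2 page:

```shell
# Replace <username> with your cluster account name.
# The hostname is an assumption -- check the Login JUSTUS2 page.
ssh <username>@justus2.uni-ulm.de
# You will then be prompted for your 2FA token and service password.
```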
Using the Linux Commandline
HPC Wiki (external site) → Introduction to Linux Commandline
Training course → Linux course on training.bwhpc.de
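Before the courses, a few everyday commands are enough to get moving. This sketch is safe to try in your home directory:

```shell
pwd                                             # print the directory you are currently in
mkdir -p scratch_demo                           # create a directory (no error if it exists)
echo "hello cluster" > scratch_demo/notes.txt   # write a line of text into a file
ls -l scratch_demo                              # list contents with sizes and permissions
cat scratch_demo/notes.txt                      # print the file's contents
```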
Transfer your Data to the Cluster
Get familiar with available file systems on the cluster. → File Systems
Transfer your data to the cluster using appropriate tools. → Data Transfer
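Two commonly used tools for this are scp and rsync. A sketch with placeholder names; the hostname is an assumption, and the Data Transfer page lists the supported endpoints and tools:

```shell
# Copy a single file from your machine to the cluster
# (<username> and the hostname are placeholders):
scp input.dat <username>@justus2.uni-ulm.de:~/project/

# rsync transfers whole directories and can resume interrupted copies:
rsync -avP my_project/ <username>@justus2.uni-ulm.de:~/my_project/
```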
Find Information About Installed Software and Examples
Compilers, libraries, and application software are provided as software modules. Learn how to work with software modules. → Software
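The module system can be explored with a handful of commands; the module name in the load example is hypothetical, so check the Software page for what is actually installed:

```shell
module avail                 # list all installed software modules
module avail chem            # filter the list (here: chemistry codes)
module load chem/orca        # load one module -- name is a hypothetical example
module list                  # show the modules currently loaded
module purge                 # unload all modules again
```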
Run a sample script for pre-installed software (software job examples are linked on the page above).
Run your Software in a Batch Job
Get familiar with the available node types on the cluster. → Hardware and Architecture
Submit and monitor your jobs with Slurm commands.
- → Running Your Calculations - a very brief introduction.
- → extensive Slurm HOWTO on specific tasks
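The basic submit-and-monitor cycle uses only a few Slurm commands; job.sh and <jobid> below are placeholders:

```shell
sbatch job.sh                # submit a job script; prints the job ID
squeue -u $USER              # show your pending and running jobs
scontrol show job <jobid>    # detailed information on one job
scancel <jobid>              # cancel a queued or running job
sacct -j <jobid>             # accounting data after the job has finished
```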
Learn about Scaling your Job
How many compute cores should my job use? This depends on the software and on the problem you are trying to solve. If you use too few cores, your computation may take far too long; if you use too many, the extra cores will not speed up your computation, and all you achieve is wasting compute resources and energy.
If you run hundreds or thousands of similar calculations, you should look at this carefully before starting.
How to do this is described in: Scaling
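A quick way to judge scaling is to run the same short test job with several core counts and compare speedup (runtime on 1 core divided by runtime on n cores) and efficiency (speedup divided by n). The sketch below uses made-up timings, not measurements from JUSTUS 2:

```shell
# Illustrative wall-clock times for the same test job: "cores seconds"
cat > timings.txt <<'EOF'
1 3600
2 1850
4 980
8 560
16 420
EOF

# Speedup = t(1 core) / t(n cores); efficiency = speedup / n.
awk 'NR==1 {t1=$2}
     {printf "%2d cores: speedup %5.2f, efficiency %3.0f%%\n", $1, t1/$2, 100*t1/($2*$1)}' timings.txt
```

With these (invented) numbers, efficiency stays above 80% up to 8 cores and then collapses, so requesting 16 cores would mostly waste resources; that is exactly the pattern the Scaling page helps you detect for your own software.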
Acknowledge the Cluster
Remember to mention the cluster in your publications. → Acknowledgement