BinAC2/SLURM Partitions and Helix/bwVisu: Difference between pages

From bwHPC Wiki
[[File:BwVisu wide.svg|300px]]

bwVisu is a scalable service for remote visualization and interactive applications. This easy-to-use, web-based platform provides interactive access to scientific work environments utilizing massive compute and storage resources.

{| style=" background:#FEF4AB; width:100%;"
| style="padding:8px; background:#FFE856; font-size:120%; font-weight:bold; text-align:left" | News
|-
|
* Current state: <span style="color: green">fully operational</span>
* December 2024: The [https://bwvisu.bwservices.uni-heidelberg.de/ new bwVisu version] is open for testing (RStudio, JupyterLab, KI-Morph). You can send feedback to [mailto:bwvisu-support@urz.uni-heidelberg.de bwvisu-support]. The bwHPC Wiki pages provide information about the new version. The old version, with its documentation and, for now, more applications, can still be found [https://www.bwvisu.de/ here].
|}

{| style=" background:#eeeefe; width:100%;"
| style="padding:8px; background:#dedefe; font-size:120%; font-weight:bold; text-align:left" | Training & Support
|-
|
* [https://www.urz.uni-heidelberg.de/de/service-katalog/software-und-anwendungen/bwvisu Service Description &amp; FAQ]
* [mailto:bwvisu-support@urz.uni-heidelberg.de Submit a Ticket]
|}

{| style=" background:#deffee; width:100%;"
| style="padding:8px; background:#cef2e0; font-size:120%; font-weight:bold; text-align:left" | User Documentation
|-
|
* [[Helix/bwVisu/Getting_Started|Getting Started]]
* [[Helix/bwVisu/Usage|Usage]]
* User Guides for applications
** [[Helix/bwVisu/JupyterLab|JupyterLab]]
** [[Helix/bwVisu/RStudio|RStudio]]
** [[Helix/bwVisu/KI-Morph|KI-Morph]]
|}

== Partitions ==

The bwForCluster BinAC 2 provides two partitions (i.e., queues) for job submission. Within a partition, job allocations are routed automatically to the most suitable compute node(s) for the requested resources (e.g. number of nodes and cores, memory, number of GPUs).

All partitions are operated in shared mode, that is, jobs from different users can be executed on the same node. However, you can request exclusive access to compute nodes with the <code>--exclusive</code> option.

{| class="wikitable"
|-
! style="width:20%"| Partition
! style="width:20%"| Node Access Policy
! style="width:20%"| Node Types
! style="width:20%"| Default
! style="width:20%"| Limits
|-
| compute (default)
| shared
| cpu
| ntasks=1, time=00:10:00, mem-per-cpu=1gb
| nodes=2, time=14-00:00:00
|-
| gpu
| shared
| gpu
| ntasks=1, time=00:10:00, mem-per-cpu=1gb
| nodes=1, time=14-00:00:00
|}
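As a sketch of how the partition defaults and limits combine with a job script (the walltime and memory values here are illustrative, not BinAC 2 recommendations), a minimal serial job for the default <code>compute</code> partition could look like this:

```shell
#!/bin/bash
# Minimal serial job for the default "compute" partition.
# Illustrative values -- adjust time and memory to your workload.
#SBATCH --partition=compute
#SBATCH --ntasks=1               # partition default: ntasks=1
#SBATCH --time=01:00:00          # partition limit: 14-00:00:00
#SBATCH --mem-per-cpu=2gb        # partition default: 1gb

# "#SBATCH" lines are shell comments; only sbatch interprets them.
node=$(hostname)
echo "Job running on ${node}"
```

Submit the script with <code>sbatch job.sh</code> and inspect your queued jobs with <code>squeue -u $USER</code>.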


{| style=" background:#e6e9eb; width:100%;"
| style="padding:8px; background:#d1dadf; font-size:120%; font-weight:bold; text-align:left" | Acknowledgement
|-
|
* Please [[Helix/bwVisu/Acknowledgement|acknowledge]] bwVisu in your publications.
|}
=== Parallel Jobs ===

To submit parallel jobs to the InfiniBand part of the cluster, i.e. to use fast inter-node communication, select the appropriate nodes via the <code>--constraint=ib</code> option in your job script. For less demanding parallel jobs, you can try the <code>--constraint=eth</code> option, which uses 100 Gb/s Ethernet instead of the low-latency 100 Gb/s InfiniBand interconnect.
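For illustration, a multi-node MPI job pinned to the InfiniBand nodes could be sketched as follows (the module environment and program name are placeholders, not actual BinAC 2 module names):

```shell
#!/bin/bash
# Parallel job pinned to the InfiniBand-connected nodes.
#SBATCH --partition=compute
#SBATCH --nodes=2                 # partition limit: nodes=2
#SBATCH --ntasks-per-node=16      # illustrative task count
#SBATCH --time=06:00:00
#SBATCH --constraint=ib           # or --constraint=eth for the Ethernet nodes

# Placeholder application -- replace with your actual MPI program
# and whatever module environment your code needs, e.g.:
#   module load <your-mpi-module>
srun ./my_mpi_program
```

<code>srun</code> launches one MPI rank per requested task across the allocated nodes; with <code>--constraint=eth</code> the same script runs on the Ethernet-connected part of the cluster instead.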

=== GPU Jobs ===

BinAC 2 provides different GPU models for computation. Please select the appropriate GPU type and number via the <code>--gres=gpu:aXX:[1..N]</code> option in your job script, where <code>aXX</code> is one of the GPU models listed in the table.

{| class="wikitable"
|-
! style="width:20%"| GPU
! style="width:20%"| GPU Memory
! style="width:20%"| # GPUs per Node [N]
! style="width:20%"| Submit Option
|-
| Nvidia A30
| 24GB
| 2
| <code>--gres=gpu:a30:[1..N]</code>
|-
| Nvidia A100
| 80GB
| 4
| <code>--gres=gpu:a100:[1..N]</code>
|}
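As a hedged example (the walltime and memory values are illustrative, and the program name is a placeholder), a job requesting a single A30 on the <code>gpu</code> partition could be submitted like this:

```shell
#!/bin/bash
# GPU job requesting one Nvidia A30 on the "gpu" partition.
#SBATCH --partition=gpu
#SBATCH --ntasks=1
#SBATCH --time=04:00:00           # partition limit: 14-00:00:00
#SBATCH --mem-per-cpu=4gb         # illustrative value
#SBATCH --gres=gpu:a30:1          # up to 2 per node; use gpu:a100:[1..4] on A100 nodes

# List the GPU(s) the scheduler assigned to this job.
nvidia-smi -L
./my_gpu_program                  # placeholder for your application
```

Requesting more GPUs than a node provides (e.g. <code>gpu:a30:3</code>) leaves the job pending forever, so keep N within the per-node counts from the table above.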

Revision as of 23:57, 18 December 2024
