New pages
- 11:39, 19 February 2025 Infiniband (hist | edit) [1,105 bytes] K Siegmund (talk | contribs) (Created page with "'''Infiniband''' and '''Omni-Path''' are high-speed networks often used in HPC systems for their low latency. The name comes from "infinite bandwidth", because several cables can be connected to the same machine, up to a theoretically infinite bandwidth. '''Infiniband''' is the original network, distributed by Mellanox (now part of Nvidia). '''Omni-Path''' or '''OPA''' is essentially the same technology, originally created by Intel but now sold by a separate company, Cornelis Net...")
- 14:50, 11 February 2025 Development/ollama (hist | edit) [6,435 bytes] R Keller (talk | contribs) (Introduction to Ollama)
- 17:36, 31 January 2025 BwUniCluster3.0/Software Modules (hist | edit) [21,802 bytes] S Braun (talk | contribs) (Created page with "<div id="top"></div> <br> = Introduction = '''Software (Environment) Modules''', or '''Modules''' for short, are the means by which most of the installed scientific software is provided on bwUniCluster 3.0. <br> The use of different compilers, libraries and software packages requires users to set up a specific session environment suited for the program they want to run. bwUniCluster 3.0 provides users with the possibility to load and unload complete environments for compiler...")
- 15:40, 31 January 2025 BwUniCluster3.0/Data Migration Guide (hist | edit) [14,678 bytes] S Braun (talk | contribs) (Created page with "Data Migration Guide How to move data from bwUniCluster 2.0 to bwUniCluster 3.0")
- 17:18, 23 January 2025 BwUniCluster3.0/Running Jobs (hist | edit) [18,849 bytes] S Braun (talk | contribs) (Created page with "= Running Jobs =")
- 18:58, 9 January 2025 BwUniCluster3.0/Maintenance (hist | edit) [929 bytes] S Braun (talk | contribs) (Created page with "'''2024''' * BwUniCluster2.0/Maintenance/2024-05 from 21.05.2024 to 24.05.2024 '''2023''' * BwUniCluster2.0/Maintenance/2023-03 from 20.03.2023 to 24.03.2023 '''2022''' * BwUniCluster2.0/Maintenance/2022-11 from 07.11.2022 to 10.11.2022 * BwUniCluster2.0/Maintenance/2022-03 from 28.03.2022 to 31.03.2022 '''2021''' * BwUniCluster2.0/Maintenance/2021-10 from 11.10.2021 to 15.10.2021 '''2020''' * BwUniCluster2.0/Maintenance/2020-10 from 06....")
- 18:56, 9 January 2025 BwUniCluster3.0/Containers (hist | edit) [10,800 bytes] S Braun (talk | contribs) (Created page with "= Introduction = To date, only a few container runtime environments integrate well with HPC environments, due to security concerns and differing assumptions in some areas. For example, native Docker environments require elevated privileges, which is not an option on shared HPC resources. Docker's "rootless mode" is also currently not supported on our HPC systems because it does not support necessary features such as cgroups resource controls, security profiles, overlay netw...")
- 18:55, 9 January 2025 BwUniCluster3.0/Software (hist | edit) [802 bytes] S Braun (talk | contribs) (Created page with "== Environment Modules == Most software is provided as Modules. Required reading to use: Software Environment Modules == Available Software == Visit [https://www.bwhpc.de/software.php https://www.bwhpc.de/software.php], select <code>Cluster → bwUniCluster 3.0</code> On cluster: <code>module avail</code> == Software in Containers == Instructions for loading software in containers: BwUniCluster2.0/Containers == Documentation...")
- 16:57, 9 January 2025 BwUniCluster3.0/Acknowledgement (hist | edit) [1,130 bytes] S Braun (talk | contribs) (Created page with "The HPC resource bwUniCluster 3.0 is funded by the Ministry of Science, Research and the Arts Baden-Württemberg and the Universities of the State of Baden-Württemberg: * Albert Ludwig University of Freiburg * Eberhard Karls University, Tübingen * Karlsruhe Institute of Technology (KIT) * Ruprecht-Karls-Universität Heidelberg * Ulm University * University of Hohenheim * University of Konstanz * University of Mannheim * University of Stuttgart * several Registration...")
- 14:29, 9 January 2025 BwUniCluster3.0/Jupyter (hist | edit) [17,510 bytes] S Braun (talk | contribs) (Created page with "Jupyter can be used as an alternative to accessing HPC resources via SSH. For this purpose, only a web browser is required. Within the web interface, source code in different programming languages can be edited and executed. Furthermore, different user interfaces and terminals are available. = Short description of Jupyter = Jupyter is a web application; its central component is the '''Jupyter Notebook'''. It is a document which can contain formatted text, executable cod...")
- 14:25, 9 January 2025 BwUniCluster3.0/FAQ (hist | edit) [424 bytes] S Braun (talk | contribs) (Created page with "Frequently asked questions (FAQs) concerning best practice of bwUniCluster 3.0 = Login = * In case of having issues with OTP, service password or denied access → Troubleshooting = Hardware and Architecture = * Migration from bwUniCluster 2.0 to 3.0 ** Concerning guide to migrate your data → file system migration guide")
- 14:22, 9 January 2025 BwUniCluster3.0/Support (hist | edit) [2,026 bytes] S Braun (talk | contribs) (Created page with "== Registration == {| style="vertical-align:top;background:#f5fffa;border:2px solid #000000;" | The primary support channel for all inquiries is the ticket system at * '''[https://bw-support.scc.kit.edu/ bwSupport Portal]''' |} If your issue is connected to your local home institution, e.g. obtaining the entitlement for registration on bwUniCluster 2.0, you may also contact your local hotline: {| style="vertical-align:top;" ! University !! Hotline |- | Albert...")
- 14:16, 9 January 2025 BwUniCluster3.0/Getting Started (hist | edit) [2,172 bytes] S Braun (talk | contribs) (Created page with "== General Workflow of Running a Calculation == On a compute cluster, you do not simply log in and run your software; instead, you write a "job script" that contains all commands to run and process your job and send it into a waiting queue to be run on one of several hundred computers. How this is done is described in a little more detail here: Running Calculations == Get access to the cluster == Follow the registration process for the bwUniCluster. → Regi...")
- 14:07, 9 January 2025 BwUniCluster3.0/Login (hist | edit) [9,398 bytes] S Braun (talk | contribs) (Created page with "{|style="background:#deffee; width:100%;" |style="padding:5px; background:#cef2e0; text-align:left"| center|25px |style="padding:5px; background:#cef2e0; text-align:left"| Access to bwUniCluster 2.0 is '''limited to IP addresses from the BelWü network'''. All home institutions of our current users are connected to BelWü, so if you are on your campus network (e.g. in your office or on the Campus WiFi) you should be able to connect to bwUniCluster...")
- 12:35, 9 January 2025 BwUniCluster3.0/Hardware and Architecture/Filesystem Details (hist | edit) [24,764 bytes] S Braun (talk | contribs) (Created page with "Filesystems_Details")
- 09:45, 8 January 2025 BwUniCluster3.0/First Steps (hist | edit) [2,172 bytes] S Braun (talk | contribs) (Created page with "<!-- == Command line interface == Any work is to be done via the command line interface. A quick guide can be found [https://indico.scc.kit.edu/indico/event/278/material/slides/10.pdf '''here''']. == Software setup == Software provided by bwHPC has to be loaded according to your needs via the '''Software Module System'''. == Computation == Any kind of computation on any HPC cluster is done as bwUniCluster_2.0_Slurm_common_Features|'''ba...")
- 22:43, 17 December 2024 DACHS/Queues (hist | edit) [4,537 bytes] R Keller (talk | contribs) (First page describing the partitions of DACHS)
- 14:23, 5 December 2024 BwUniCluster3.0/Slurm (hist | edit) [48,018 bytes] P Schuhmacher (talk | contribs) (Created page with "__TOC__ = Slurm HPC Workload Manager = == Specification == Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of...")
- 16:28, 4 December 2024 Helix/bwVisu/Acknowledgement (hist | edit) [894 bytes] H Schumacher (talk | contribs) (created page)
- 16:27, 4 December 2024 Helix/bwVisu/KI-Morph (hist | edit) [1,350 bytes] H Schumacher (talk | contribs) (created page)
- 16:22, 4 December 2024 Helix/bwVisu/RStudio (hist | edit) [510 bytes] H Schumacher (talk | contribs) (created page)
- 16:17, 4 December 2024 Helix/bwVisu/JupyterLab (hist | edit) [5,344 bytes] H Schumacher (talk | contribs) (created page)
- 16:08, 4 December 2024 Helix/bwVisu/Usage (hist | edit) [5,612 bytes] H Schumacher (talk | contribs) (created page)
- 15:44, 4 December 2024 Helix/bwVisu/Getting Started (hist | edit) [3,707 bytes] H Schumacher (talk | contribs) (created page)
- 15:30, 4 December 2024 Helix/bwVisu (hist | edit) [2,211 bytes] H Schumacher (talk | contribs) (created page)
- 17:45, 3 December 2024 BinAC2/SLURM Partitions (hist | edit) [1,996 bytes] F Bartusch (talk | contribs) (Create stub)
- 14:34, 3 December 2024 BinAC2/Slurm (hist | edit) [30,701 bytes] F Bartusch (talk | contribs) (Copy&Paste of the bwUniCluster Slurm page.)
- 11:28, 3 December 2024 BinAC2/Software (hist | edit) [1,087 bytes] F Bartusch (talk | contribs) (Begin BinAC 2 software page)
- 02:31, 29 November 2024 SDS@hd/Access/SMB (hist | edit) [13,350 bytes] H Schumacher (talk | contribs) (new version of CIFS page; added intro sentence; adjusted Mac and Windows sections)
- 01:25, 29 November 2024 Data Transfer/Rsync (hist | edit) [1,652 bytes] H Schumacher (talk | contribs) (created page)
- 17:51, 28 November 2024 Data Transfer/Graphical Clients (hist | edit) [3,694 bytes] H Schumacher (talk | contribs) (created page)
- 09:47, 28 November 2024 BwUniCluster3.0/Batch Queues (hist | edit) [9,186 bytes] S Braun (talk | contribs) (Created page with "__TOC__ == sbatch -p ''queue'' == Compute resources such as (wall-)time, nodes and memory are restricted and must fit into '''queues'''. Since requested compute resources are NOT always automatically mapped to the correct queue class, '''you must add the correct queue class to your sbatch command'''. <font color=red>The specification of a queue is obligatory on bwUniCluster 2.0.</font> <br> Details are: {| width=750px class="wikitable" ! colspan="5" | bwUniCluster 2....")
- 13:20, 27 November 2024 Data Transfer/SCP (hist | edit) [886 bytes] H Schumacher (talk | contribs) (created page)
- 14:34, 26 November 2024 BwUniCluster3.0/Hardware and Architecture (hist | edit) [11,154 bytes] S Braun (talk | contribs) (Created page with "= Architecture of bwUniCluster 3.0 = The bwUniCluster 3.0 is a parallel computer with distributed memory. Each node of the system consists of at least two Intel Xeon processors, local memory, disks, network adapters and optionally accelerators (NVIDIA Tesla V100, A100 or H100). All nodes are connected by a fast InfiniBand interconnect. In addition, the Lustre file system is connected by coupling the InfiniBand of the file servers with the InfiniBand switch of the compute...")
- 18:35, 25 November 2024 Data Transfer/All Data Transfer Routes (hist | edit) [3,856 bytes] H Schumacher (talk | contribs) (created page)
- 18:11, 25 November 2024 Data Transfer/Advanced Data Transfer (hist | edit) [1,850 bytes] H Schumacher (talk | contribs) (created page)
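The BwUniCluster3.0/Software Modules and BwUniCluster3.0/Software entries above describe loading and unloading environment modules to set up a session environment. As an illustration only, a minimal shell session is sketched below; the module name <code>compiler/gnu</code> is an assumed example and may differ from the names actually installed on bwUniCluster 3.0.

<pre>
# List all modules that can be loaded in the current environment
module avail

# Load a compiler environment (module name is an assumed example)
module load compiler/gnu

# Show what is currently loaded, then unload it again
module list
module unload compiler/gnu

# Reset the session to the default environment
module purge
</pre>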
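The BwUniCluster3.0/Getting Started entry summarises the general workflow: write a job script containing all commands, submit it to a waiting queue, and let the scheduler run it on a compute node. A minimal sketch of such a script and its submission follows; the resource values and the partition name <code>dev_cpu</code> are placeholders, not site defaults.

<pre>
#!/bin/bash
#SBATCH --job-name=example          # job name shown in the queue
#SBATCH --partition=dev_cpu         # queue/partition; name is a placeholder
#SBATCH --ntasks=1                  # number of tasks (processes)
#SBATCH --time=00:10:00             # wall time limit (hh:mm:ss)
#SBATCH --mem=2gb                   # memory per node

# Load the required software environment, then run the program
module load compiler/gnu
./my_program

# Submit the script and check its state in the queue:
#   sbatch jobscript.sh
#   squeue -u $USER
</pre>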
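The BwUniCluster3.0/Slurm entry quotes Slurm's three key functions: allocating access to resources, providing a framework for starting and monitoring work, and arbitrating contention through a queue. The commands below are a rough sketch of which user-facing command corresponds to which function; the requested resources are arbitrary examples.

<pre>
# 1. Resource allocation: request an interactive allocation of one node
salloc --nodes=1 --time=00:30:00

# 2. Work execution framework: launch a job step inside the allocation
srun hostname

# 3. Queue arbitration: inspect pending and running jobs, cancel one if needed
squeue -u $USER
scancel JOBID    # replace JOBID with the job ID shown by squeue
</pre>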
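The BwUniCluster3.0/Batch Queues entry stresses that the queue class must be given explicitly to sbatch. A hedged illustration: the partition name <code>dev_cpu</code> below is an assumption; the actual queue names are listed in the table on that page or can be queried with <code>sinfo</code>.

<pre>
# Show the partitions (queues) available on the cluster and their limits
sinfo

# Submit a job script to an explicitly chosen queue
# ("dev_cpu" is only a placeholder for one of the cluster's queue names)
sbatch -p dev_cpu jobscript.sh

# Equivalent directive inside the job script:
#SBATCH --partition=dev_cpu
</pre>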