Helix/Filesystems
== Overview ==

The cluster storage system provides a large parallel file system based on [https://www.ibm.com/support/knowledgecenter/STXKQY/ibmspectrumscale_welcome.html IBM Spectrum Scale] for $HOME, for workspaces, and for temporary storage via the $TMPDIR environment variable.

{| class="wikitable"
|-
!style="width:16%" |
!style="width:28%"| $HOME
!style="width:28%"| Workspaces
!style="width:28%"| $TMPDIR
|-
!scope="column" | Visibility
| global
| global
| global
|-
!scope="column" | Lifetime
| permanent
| workspace lifetime
| batch job walltime
|-
!scope="column" | Capacity
| 5.2 PB
| 5.2 PB
| 128 GB per node (9 TB per fat node)
|-
!scope="column" | Quotas
| 200 GB
| 10 TB
| none
|-
!scope="column" | Backup
| no
| no
| no
|}

* global: all nodes access the same file system.
* permanent: files are stored permanently.
* workspace lifetime: files are removed at the end of the workspace lifetime.
* batch job walltime: files are removed at the end of the batch job.
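The $TMPDIR lifetime rule above implies a common batch-job pattern: stage input into fast local scratch, compute there, and copy results back to permanent storage before the walltime expires, after which $TMPDIR is wiped. The sketch below illustrates that pattern only; the file names, the mktemp stand-ins for cluster paths, and the gzip stand-in workload are hypothetical, not Helix-specific commands.

```shell
#!/bin/bash
# Sketch of the stage-in / compute / stage-out pattern for $TMPDIR.
# On the cluster, $TMPDIR is set by the batch system and points at
# node-local scratch; off-cluster this falls back to /tmp.
set -e

scratch="${TMPDIR:-/tmp}"
workdir=$(mktemp -d "$scratch/job.XXXXXX")

# Demo stand-ins for real input data and permanent result storage
# (on the cluster these would be paths under $HOME or a workspace).
input=$(mktemp); printf 'sample data\n' > "$input"
results=$(mktemp -d)

# 1. Stage in: copy input to local scratch for fast I/O.
cp "$input" "$workdir/input.dat"

# 2. Compute in scratch (gzip is a stand-in for the real workload).
gzip -c "$workdir/input.dat" > "$workdir/input.dat.gz"

# 3. Stage out: save results to permanent storage before the job ends.
cp "$workdir/input.dat.gz" "$results/"

echo "stage-out complete: $results/input.dat.gz"
```

Keeping intermediate files in $TMPDIR avoids load on the shared parallel file system; only the staged-out results need to survive the job.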
== $HOME ==
Revision as of 22:53, 12 July 2022