<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=S+Siebler</id>
	<title>bwHPC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.bwhpc.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=S+Siebler"/>
	<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/e/Special:Contributions/S_Siebler"/>
	<updated>2026-04-16T16:35:16Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=15187</id>
		<title>SDS@hd/Access/NFS</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=15187"/>
		<updated>2025-08-08T07:55:09Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;NFS (Network File System) access to SDS@hd is available for Linux/UNIX clients only. &lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Access via the NFS protocol is machine-based, which means new NFS clients have to be registered with SDS@hd. During this registration each machine receives a keytab file from the SDS@hd team, which allows it to mount SDS@hd. To have the keytab file created, please send a client registration email to the SDS@hd team with the following information:&lt;br /&gt;
** hostname of the new NFS client&lt;br /&gt;
** IP address&lt;br /&gt;
** short description&lt;br /&gt;
** location&lt;br /&gt;
** acronym of the Speichervorhaben which should be available on this machine&lt;br /&gt;
&lt;br /&gt;
== Using NFSv4 on UNIX clients ==&lt;br /&gt;
&lt;br /&gt;
Authentication for data access via NFSv4 is performed using Kerberos tickets. This requires a working Kerberos environment on the client!&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/Kerberos}}&lt;br /&gt;
&lt;br /&gt;
After configuring Kerberos, you have to install the NFS packages on your system and enable kerberized NFSv4. The exact package names depend on your Linux distribution (see examples below).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example RedHat/CentOS&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; yum install nfs-utils nfs4-acl-tools&lt;br /&gt;
&lt;br /&gt;
/etc/sysconfig/nfs:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example Debian/Ubuntu&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; apt install nfs-common nfs4-acl-tools nfs-server&lt;br /&gt;
&lt;br /&gt;
/etc/default/nfs-common:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On Ubuntu Server, the server package is named nfs-kernel-server.&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/ID-Mapping}}&lt;br /&gt;
&lt;br /&gt;
To enable ID mapping for NFSv4 mounts, add the following lines to the file &#039;&#039;/etc/idmapd.conf&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
in /etc/idmapd.conf:&lt;br /&gt;
        [General]&lt;br /&gt;
        Domain = urz.uni-heidelberg.de&lt;br /&gt;
        Local-Realms = BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mount an NFS share ===&lt;br /&gt;
The usual restrictions for mounting drives under Linux apply; usually only the superuser &amp;quot;root&amp;quot; can do this. For detailed information, please contact the administrator of your system.&lt;br /&gt;
&lt;br /&gt;
After successful configuration (see section 2.1) you can mount your SDS@hd share with the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; mkdir &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;gt; mount -t nfs4 -o sec=krb5,vers=4 lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/ &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To re-establish the mount after a reboot, you have to add the following line to the file &amp;quot;/etc/fstab&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/   &amp;lt;mountpoint&amp;gt;   nfs4     sec=krb5,vers=4     0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
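As a sketch of an alternative to mounting at boot, the fstab entry can defer the mount until first access via systemd's automount support; the mountpoint /mnt/sds-hd below is a placeholder, and noauto,x-systemd.automount are standard mount/systemd options:

```text
# On-demand variant of the fstab line above: systemd creates the mount on
# first access, so an unreachable server does not delay booting.
# /mnt/sds-hd is a placeholder mountpoint - adjust to your setup.
lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/   /mnt/sds-hd   nfs4   sec=krb5,vers=4,noauto,x-systemd.automount   0 0
```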
&lt;br /&gt;
==== AutoFS Setup ====&lt;br /&gt;
&lt;br /&gt;
Instead of the fstab entry you can also use the automounter &amp;quot;autofs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Debian/Ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apt install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you configure the SDS@hd Speichervorhaben in a new map file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.sds-hd&lt;br /&gt;
sds-hd -fstype=nfs4,rw,sec=krb5,vers=4,nosuid,nodev   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
....&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Include the new map in the auto.master file, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
/mnt   /etc/auto.sds-hd&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To display all available SDS@hd shares on this machine to the users, you should enable &amp;quot;browse_mode&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/autofs.conf&lt;br /&gt;
[...]&lt;br /&gt;
# display all available SDS@hd shares on this machine to the users&lt;br /&gt;
browse_mode=yes&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
otherwise each share folder only becomes visible after a user has mounted it.&lt;br /&gt;
&lt;br /&gt;
After changing the configuration, you should restart the autofs daemon, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ systemctl restart autofs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course you can adapt all other autofs options, like timeouts, etc., to the specific needs of your environment, or use any other method for dynamically mounting the shares.&lt;br /&gt;
&lt;br /&gt;
=== Access your data ===&lt;br /&gt;
&#039;&#039;&#039;Attention!&#039;&#039;&#039; Data access cannot be performed as the root user, because root uses the Kerberos ticket of the machine, which has no data access! &lt;br /&gt;
&lt;br /&gt;
To access your data on SDS@hd, fetch a valid Kerberos ticket with your SDS@hd username and service password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit hd_xy123&lt;br /&gt;
Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards you can check your Kerberos ticket with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; klist&lt;br /&gt;
Ticket cache: FILE:/tmp/krb5cc_1000&lt;br /&gt;
Default principal: hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&lt;br /&gt;
Valid starting       Expires              Service principal&lt;br /&gt;
20.09.2017 04:00:01  21.09.2017 04:00:01  krbtgt/BWSERVICES.UNI-HEIDELBERG.DE@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
        renew until 29.09.2017 13:38:49&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you should be able to access the mountpoint, which contains all Speichervorhaben exported to your machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; ls &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
sd16j007  sd17c010  sd17d005&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Renew a Kerberos ticket ===&lt;br /&gt;
Because a Kerberos ticket has a limited lifetime for security reasons (default: 10 hours, maximum 24 hours), you have to renew your ticket before it expires to avoid losing access.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit -R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Renewal is possible for a maximum of 10 days, and only as long as the current Kerberos ticket is still valid. To renew an expired ticket, you have to enter your service password again.&lt;br /&gt;
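Whether kinit -R can renew at all depends on the ticket being issued as renewable. As a sketch, the relevant client-side defaults in /etc/krb5.conf look like this (these are standard MIT Kerberos options; the KDC still caps the effective values at the limits stated above):

```text
# Sketch of [libdefaults] in /etc/krb5.conf: request renewable tickets by
# default. The KDC enforces its own maxima (10 days renewable, 24 h lifetime).
[libdefaults]
    ticket_lifetime = 24h
    renew_lifetime = 10d
```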
&lt;br /&gt;
=== Destroy a Kerberos ticket ===&lt;br /&gt;
Even though Kerberos tickets are only valid for a limited period of time, a ticket should be destroyed as soon as access is no longer needed, to prevent misuse on multi-user systems:&lt;br /&gt;
&amp;lt;pre&amp;gt;kdestroy&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Automated Kerberos tickets ===&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;&#039;&#039;&#039;Attention!&#039;&#039;&#039; Keep this generated Keytab safe and use it only in trusted environments!&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your workflow needs permanent access to SDS@hd for longer than 10 days, you can use &#039;&#039;&#039;ktutil&#039;&#039;&#039; to store your service password as keys in a keytab file:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ktutil&lt;br /&gt;
ktutil: addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  wkt xy123.keytab&lt;br /&gt;
ktutil: quit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Non-interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
echo -e &amp;quot;addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac\n&amp;lt;your_servicepassword&amp;gt;\n&lt;br /&gt;
addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts\n&amp;lt;your_servicepassword&amp;gt;\nwkt xy123.keytab&amp;quot; | ktutil&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
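The echo -e pipeline above relies on shell-specific escape handling. A more portable sketch builds the ktutil input with POSIX printf (for brevity it writes only the aes256-cts entry from the example above; the password string is a placeholder you must substitute):

```shell
# Build the ktutil command script with printf, which handles newlines portably
# (unlike "echo -e", whose behaviour differs between shells).
# <your_servicepassword> is a placeholder - substitute your real password.
printf '%s\n' \
  'addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts' \
  '<your_servicepassword>' \
  'wkt xy123.keytab' > ktutil.in

# Then feed the script to ktutil and remove it:
#   ktutil < ktutil.in && rm ktutil.in
cat ktutil.in
```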
&lt;br /&gt;
With this keytab, you can fetch a Kerberos ticket without entering a password interactively:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kinit -k -t xy123.keytab hd_xy123 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
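To keep access alive unattended, the keytab-based kinit can be scheduled via cron; this is a sketch of a crontab entry (crontab -e), in which the keytab path is a hypothetical location:

```text
# Re-acquire a fresh ticket every 8 hours from the keytab, staying below the
# default 10 h ticket lifetime. Path and principal are examples - adjust them.
0 */8 * * * /usr/bin/kinit -k -t /home/hd_xy123/xy123.keytab hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE
```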
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== Incorrect mount option ===&lt;br /&gt;
&lt;br /&gt;
When you try to mount SDS@hd you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mount.nfs4: mount(2): Invalid argument&lt;br /&gt;
mount.nfs4: an incorrect mount option was specified&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means the service rpc-gssd.service is not running yet. Please try to start the service via &amp;lt;pre&amp;gt;systemctl restart rpc-gssd.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Permission denied while mounting when SELinux is enabled ===&lt;br /&gt;
&lt;br /&gt;
When trying to mount, you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mount.nfs4: mount(2): Permission denied&lt;br /&gt;
mount.nfs4: trying text-based options &#039;sec=krb5,vers=4,addr=x.x.x.x,clientaddr=a.b.c.d&#039;&lt;br /&gt;
mount.nfs4: mount(2): Permission denied&lt;br /&gt;
mount.nfs4: access denied by server while mounting lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should check your system logs for entries like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Aug 07 13:09:53 systemname setroubleshoot[510523]: SELinux is preventing /usr/sbin/rpc.gssd from open access on the file /etc/krb5.keytab. For complete SELinux messages run: sealert -l XXXXXX	&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or similar. To solve the issue, you have to add an SELinux rule allowing access to /etc/krb5.keytab (or disable SELinux completely).&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=15186</id>
		<title>SDS@hd/Access/NFS</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=15186"/>
		<updated>2025-08-08T07:54:33Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* Mount a nfs share */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;NFS (Network File System) access to SDS@hd is available for Linux/UNIX clients only. &lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
* Access via the NFS protocol is machine-based, which means new NFS clients have to be registered with SDS@hd. During this registration each machine receives a keytab file from the SDS@hd team, which allows it to mount SDS@hd. To have the keytab file created, please send a client registration email to the SDS@hd team with the following information:&lt;br /&gt;
** hostname of the new NFS client&lt;br /&gt;
** IP address&lt;br /&gt;
** short description&lt;br /&gt;
** location&lt;br /&gt;
** acronym of the Speichervorhaben which should be available on this machine&lt;br /&gt;
&lt;br /&gt;
== Using NFSv4 on UNIX clients ==&lt;br /&gt;
&lt;br /&gt;
Authentication for data access via NFSv4 is performed using Kerberos tickets. This requires a working Kerberos environment on the client!&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/Kerberos}}&lt;br /&gt;
&lt;br /&gt;
After configuring Kerberos, you have to install the NFS packages on your system and enable kerberized NFSv4. The exact package names depend on your Linux distribution (see examples below).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example RedHat/CentOS&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; yum install nfs-utils nfs4-acl-tools&lt;br /&gt;
&lt;br /&gt;
/etc/sysconfig/nfs:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example Debian/Ubuntu&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; apt install nfs-common nfs4-acl-tools nfs-server&lt;br /&gt;
&lt;br /&gt;
/etc/default/nfs-common:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On Ubuntu Server, the server package is named nfs-kernel-server.&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/ID-Mapping}}&lt;br /&gt;
&lt;br /&gt;
To enable ID mapping for NFSv4 mounts, add the following lines to the file &#039;&#039;/etc/idmapd.conf&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
in /etc/idmapd.conf:&lt;br /&gt;
        [General]&lt;br /&gt;
        Domain = urz.uni-heidelberg.de&lt;br /&gt;
        Local-Realms = BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mount an NFS share ===&lt;br /&gt;
The usual restrictions for mounting drives under Linux apply; usually only the superuser &amp;quot;root&amp;quot; can do this. For detailed information, please contact the administrator of your system.&lt;br /&gt;
&lt;br /&gt;
After successful configuration (see section 2.1) you can mount your SDS@hd share with the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; mkdir &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;gt; mount -t nfs4 -o sec=krb5,vers=4 lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/ &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To re-establish the mount after a reboot, you have to add the following line to the file &amp;quot;/etc/fstab&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/   &amp;lt;mountpoint&amp;gt;   nfs4     sec=krb5,vers=4     0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== AutoFS Setup ====&lt;br /&gt;
&lt;br /&gt;
Instead of the fstab entry you can also use the automounter &amp;quot;autofs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Debian/Ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apt install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you configure the SDS@hd Speichervorhaben in a new map file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.sds-hd&lt;br /&gt;
sds-hd -fstype=nfs4,rw,sec=krb5,vers=4.1,nosuid,nodev   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
....&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Include the new map in the auto.master file, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
/mnt   /etc/auto.sds-hd&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To display all available SDS@hd shares on this machine to the users, you should enable &amp;quot;browse_mode&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/autofs.conf&lt;br /&gt;
[...]&lt;br /&gt;
# display all available SDS@hd shares on this machine to the users&lt;br /&gt;
browse_mode=yes&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
otherwise each share folder only becomes visible after a user has mounted it.&lt;br /&gt;
&lt;br /&gt;
After changing the configuration, you should restart the autofs daemon, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ systemctl restart autofs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course you can adapt all other autofs options, like timeouts, etc., to the specific needs of your environment, or use any other method for dynamically mounting the shares.&lt;br /&gt;
&lt;br /&gt;
=== Access your data ===&lt;br /&gt;
&#039;&#039;&#039;Attention!&#039;&#039;&#039; Data access cannot be performed as the root user, because root uses the Kerberos ticket of the machine, which has no data access! &lt;br /&gt;
&lt;br /&gt;
To access your data on SDS@hd, fetch a valid Kerberos ticket with your SDS@hd username and service password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit hd_xy123&lt;br /&gt;
Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards you can check your Kerberos ticket with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; klist&lt;br /&gt;
Ticket cache: FILE:/tmp/krb5cc_1000&lt;br /&gt;
Default principal: hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&lt;br /&gt;
Valid starting       Expires              Service principal&lt;br /&gt;
20.09.2017 04:00:01  21.09.2017 04:00:01  krbtgt/BWSERVICES.UNI-HEIDELBERG.DE@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
        renew until 29.09.2017 13:38:49&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you should be able to access the mountpoint, which contains all Speichervorhaben exported to your machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; ls &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
sd16j007  sd17c010  sd17d005&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Renew a Kerberos ticket ===&lt;br /&gt;
Because a Kerberos ticket has a limited lifetime for security reasons (default: 10 hours, maximum 24 hours), you have to renew your ticket before it expires to avoid losing access.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit -R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Renewal is possible for a maximum of 10 days, and only as long as the current Kerberos ticket is still valid. To renew an expired ticket, you have to enter your service password again.&lt;br /&gt;
&lt;br /&gt;
=== Destroy a Kerberos ticket ===&lt;br /&gt;
Even though Kerberos tickets are only valid for a limited period of time, a ticket should be destroyed as soon as access is no longer needed, to prevent misuse on multi-user systems:&lt;br /&gt;
&amp;lt;pre&amp;gt;kdestroy&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Automated Kerberos tickets ===&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;&#039;&#039;&#039;Attention!&#039;&#039;&#039; Keep this generated Keytab safe and use it only in trusted environments!&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your workflow needs permanent access to SDS@hd for longer than 10 days, you can use &#039;&#039;&#039;ktutil&#039;&#039;&#039; to store your service password as keys in a keytab file:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ktutil&lt;br /&gt;
ktutil: addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  wkt xy123.keytab&lt;br /&gt;
ktutil: quit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Non-interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
echo -e &amp;quot;addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac\n&amp;lt;your_servicepassword&amp;gt;\n&lt;br /&gt;
addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts\n&amp;lt;your_servicepassword&amp;gt;\nwkt xy123.keytab&amp;quot; | ktutil&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this keytab, you can fetch a Kerberos ticket without entering a password interactively:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kinit -k -t xy123.keytab hd_xy123 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== Incorrect mount option ===&lt;br /&gt;
&lt;br /&gt;
When you try to mount SDS@hd you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mount.nfs4: mount(2): Invalid argument&lt;br /&gt;
mount.nfs4: an incorrect mount option was specified&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means the service rpc-gssd.service is not running yet. Please try to start the service via &amp;lt;pre&amp;gt;systemctl restart rpc-gssd.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Permission denied while mounting when SELinux is enabled ===&lt;br /&gt;
&lt;br /&gt;
When trying to mount, you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mount.nfs4: mount(2): Permission denied&lt;br /&gt;
mount.nfs4: trying text-based options &#039;sec=krb5,vers=4.1,addr=x.x.x.x,clientaddr=a.b.c.d&#039;&lt;br /&gt;
mount.nfs4: mount(2): Permission denied&lt;br /&gt;
mount.nfs4: access denied by server while mounting lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should check your system logs for entries like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Aug 07 13:09:53 systemname setroubleshoot[510523]: SELinux is preventing /usr/sbin/rpc.gssd from open access on the file /etc/krb5.keytab. For complete SELinux messages run: sealert -l XXXXXX	&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or similar. To solve the issue, you have to add an SELinux rule allowing access to /etc/krb5.keytab (or disable SELinux completely).&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/SMB&amp;diff=13457</id>
		<title>SDS@hd/Access/SMB</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/SMB&amp;diff=13457"/>
		<updated>2024-12-06T11:51:22Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* AutoFS Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SMB (Server Message Block) is a network file sharing protocol with several versions and implementations: CIFS (outdated), SMB2, SMB3, and Samba.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
The SMB connection has to be established with at least protocol version SMB 2.0.2, which has been available since Windows Vista and OS X 10.7, and with an NTLMv2 authentication level of &amp;quot;Send NTLMv2 responses only&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Windows ==&lt;br /&gt;
&lt;br /&gt;
Use an SMB share via the Windows Explorer.&lt;br /&gt;
&lt;br /&gt;
=== Needed Information ===&lt;br /&gt;
You need the following information:&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Username:&#039;&#039;&#039; &amp;lt;code&amp;gt;BWSERVICESAD\&amp;amp;lt;username&amp;amp;gt;&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Password:&#039;&#039;&#039; &#039;&#039;ServicePassword&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Network Path&#039;&#039;&#039; in UNC syntax &#039;&#039;&#039;:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;# for mounting the root folder&lt;br /&gt;
    \\lsdf02.urz.uni-heidelberg.de\ &lt;br /&gt;
    # for mounting a specific SV&lt;br /&gt;
    \\lsdf02.urz.uni-heidelberg.de\&amp;lt;sv-acronym&amp;gt;  &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Instructions ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol style=&amp;quot;list-style-type: decimal;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Open the Windows Explorer.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;a) To establish a &#039;&#039;&#039;non-permanent connection&#039;&#039;&#039;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Click on the address bar, which is located at the top of the Explorer.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Enter the network path and press Enter.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:WindowsSmb0_nonPerm_cut.png|center|700px]]&lt;br /&gt;
&amp;lt;p&amp;gt;b) To establish a &#039;&#039;&#039;permanent connection&#039;&#039;&#039; by creating a network (pseudo) drive:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Navigate to &amp;amp;quot;This PC&amp;amp;quot;. At the top of the window, click on &#039;&#039;Computer&#039;&#039; and select &#039;&#039;Map network drive&#039;&#039;.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
[[File:windowsSmb0_connect_network_drive_cut.png|center|700px]]&lt;br /&gt;
&amp;lt;/br&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Choose a drive letter to be associated with the network drive and enter the network path. Select &#039;&#039;use a different identification&#039;&#039;, as these differ from your credentials used locally.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:windowsSmb1_driveLetter.png|center|700px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;You will then be prompted to enter your credentials.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:windowsSmb2_password.png|center|x400px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;After logging in successfully, your network drive will appear under &#039;&#039;This PC&#039;&#039;. You can now manipulate your files as accustomed.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Mac ==&lt;br /&gt;
&lt;br /&gt;
Create a network drive with Finder.&lt;br /&gt;
&lt;br /&gt;
=== Needed Information ===&lt;br /&gt;
&lt;br /&gt;
You need the following information:&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Username:&#039;&#039;&#039; &amp;lt;code&amp;gt;BWSERVICESAD\&amp;amp;lt;username&amp;amp;gt;&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Password:&#039;&#039;&#039; &#039;&#039;ServicePassword&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Network Path:&#039;&#039;&#039; smb://lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Instructions ===&lt;br /&gt;
&lt;br /&gt;
# Open Finder and choose &#039;&#039;Go&#039;&#039; in the menu bar, then &#039;&#039;Connect to Server&#039;&#039;.&lt;br /&gt;
# Enter the network path and click &#039;&#039;Connect&#039;&#039;. Authenticate with the credentials above, as these differ from the credentials used locally.&lt;br /&gt;
&lt;br /&gt;
== Linux ==&lt;br /&gt;
&lt;br /&gt;
A UNIX-like operating system needs a CIFS client to use a share. CIFS clients are part of the Samba implementation for Linux and other UNIX-like operating systems (http://www.samba.org).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; &lt;br /&gt;
The core CIFS protocol does not provide unix ownership information or mode for files and directories. &lt;br /&gt;
Because of this, files and directories will generally appear to be owned by whatever values the uid= and gid= mount options specify, and will have permissions set to the default file_mode and dir_mode of the mount. &#039;&#039;&#039;Attempting to change these values via chmod/chown will return success but have no effect.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For security reasons, server-side permission checks cannot be overridden. The permission checks done by the server always correspond to the credentials used to mount the share, and not necessarily to the user who is accessing it.&lt;br /&gt;
&lt;br /&gt;
Although mapping of POSIX UIDs and SIDs is not needed for mounting a CIFS share, &#039;&#039;&#039;it might become necessary when working with files on the share, e.g. when modifying ACLs&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
For this reason the mount option &amp;lt;pre&amp;gt;cifsacl&amp;lt;/pre&amp;gt; together with a working &#039;&#039;&#039;ID Mapping&#039;&#039;&#039; setup is required, to allow correct permission handling and changes. It offers also the tools &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
getcifsacl&lt;br /&gt;
setcifsacl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to work with ACLs.&lt;br /&gt;
&lt;br /&gt;
With version 5.9 of cifs-utils a plugin interface was introduced by Jeff Layton to allow services other than winbind to handle the mapping of POSIX UIDs and SIDs. SSSD will provide a plugin to allow the cifs-utils to ask SSSD to map the ID. With this plugin a SSSD client can access a CIFS share with the same functionality as a client running Winbind.&lt;br /&gt;
&lt;br /&gt;
For this reason we can use the same [[Sds-hd_nfs#configure kerberos environment for SDS@hd|SSSD setup]] for cifs like we use for the kerberized nfs-Setup. &lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== SMB Client ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
To list the files in an SMB share, use the program smbclient.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
smbclient -U &#039;BWSERVICESAD\hd_xy123&#039;  //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt;&lt;br /&gt;
Enter BWSERVICESAD\hd_xy123&#039;s password: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The program gives you access to the files through an FTP-like tool in an interactive shell.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ smbclient //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt; -U &#039;BWSERVICESAD\hd_xy123&#039;&lt;br /&gt;
Enter BWSERVICESAD\hd_xy123&#039;s password:&lt;br /&gt;
smb: \&amp;gt; ls&lt;br /&gt;
  .                    D        0  Thu Apr 23 12:51:48 2020&lt;br /&gt;
  ..                   D        0  Wed Apr 22 21:54:04 2020&lt;br /&gt;
  bench                D        0  Fri Jul 26 10:24:05 2019&lt;br /&gt;
  benchmark_test       D        0  Tue Oct 30 16:12:21 2018&lt;br /&gt;
  checksums            D        0  Mon Sep 18 10:24:21 2017&lt;br /&gt;
  test.multiuser       A        6  Thu Apr 23 12:36:07 2020&lt;br /&gt;
  test                 A        7  Thu Apr 23 09:38:13 2020&lt;br /&gt;
  .....&lt;br /&gt;
  .snapshots         DHR        0  Thu Jan  1 01:00:00 1970&lt;br /&gt;
&lt;br /&gt;
                115343360000 blocks of size 1024. 108260302848 blocks available&lt;br /&gt;
smb:\&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mounting a SDS@hd Share ===&lt;br /&gt;
&lt;br /&gt;
Mounting a SDS@hd CIFS share can be done using username/password credentials or using Kerberos tickets.&lt;br /&gt;
Information about setting up a Kerberos environment for SDS@hd can be found [[SDS@hd/Access/Kerberos|here]].&lt;br /&gt;
&lt;br /&gt;
==== Single-User Environment ====&lt;br /&gt;
&lt;br /&gt;
A share can be mounted to a local directory (e.g. /mnt/sds-hd). Depending on your system setup, root privileges may be required. &lt;br /&gt;
&lt;br /&gt;
By default, CIFS presents all files in a mounted share as owned by the user who mounted it and grants any existing write permission only to that user. With the additional mount options uid, gid, file_mode and dir_mode, other ownership and access rights can be defined on the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nevertheless, the ownership and access rights defined this way are only simulated on the client and are not actually transferred to the server.&#039;&#039;&#039; If access rights are changed on the client, or files with other owners are created in shared folders, these changes apply only on the client and only until the next remount.&lt;br /&gt;
&lt;br /&gt;
If you need to work with the correct server-side permissions, please follow the setup of a [[SDS@hd/Access/CIFS#Multiuser Environment|MultiUser Setup]].&lt;br /&gt;
&lt;br /&gt;
===== Mount over command line =====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir /mnt/sds-hd&lt;br /&gt;
&lt;br /&gt;
$ sudo mount -t cifs -o username=hd_xy123,domain=BWSERVICESAD,vers=3,mfsymlinks  //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt; /mnt/sds-hd&lt;br /&gt;
Password:&lt;br /&gt;
&lt;br /&gt;
$ df -h | grep sds-hd&lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/sd16j007  108T    6,6T  101T    7% /mnt/sds-hd&lt;br /&gt;
&lt;br /&gt;
$ cd /mnt/sds-hd/&lt;br /&gt;
$ ls&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Verify the success of the mount by invoking the mount command without arguments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mount | grep lsdf02&lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/sd16j007 on /mnt/sds-hd type cifs (rw,relatime,vers=3.1.1,cache=strict,username=xxxx,domain=BWSERVICESAD,uid=1000,forceuid,gid=0,noforcegid,addr=xxxxx,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Mount over /etc/fstab =====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir /mnt/mountpoint&lt;br /&gt;
&lt;br /&gt;
/etc/fstab&lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/&amp;lt;sv_acronym&amp;gt;   /mnt/mountpoint   cifs  uid=&amp;lt;YOUR_UID&amp;gt;,gid=&amp;lt;YOUR_GID&amp;gt;,user,vers=3,mfsymlinks,credentials=&amp;lt;path_to_user_HOME&amp;gt;/credentialsfile,noauto  0 0&lt;br /&gt;
&lt;br /&gt;
$ cat /path_to_user_HOME/credentialsfile&lt;br /&gt;
username=hd_xy123&lt;br /&gt;
password=&amp;lt;your_servicepassword&amp;gt;&lt;br /&gt;
domain=BWSERVICESAD&lt;br /&gt;
&lt;br /&gt;
$ mount /mnt/mountpoint&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
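Since the credentials file contains your service password in plain text, it should be readable only by its owner. A minimal sketch, using a temporary file as a stand-in for the real credentials file:

```shell
# Create a stand-in credentials file and lock down its permissions.
# The username/domain are the wiki's examples; the path here is temporary.
credfile="$(mktemp)"
cat > "$credfile" <<'EOF'
username=hd_xy123
password=<your_servicepassword>
domain=BWSERVICESAD
EOF
chmod 600 "$credfile"        # owner read/write only
stat -c '%a' "$credfile"     # prints: 600
rm -f "$credfile"
```

The same chmod 600 should be applied to the real credentials file referenced in /etc/fstab.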
Verify the success of the mount by invoking the mount command without arguments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mount | grep cifs &lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/sd16j007 on /mnt/mountpoint type cifs (rw,relatime,vers=3.1.1,cache=strict,username=xxxx,domain=BWSERVICESAD,uid=1000,forceuid,gid=0,noforcegid,addr=xxxxx,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multiuser Environment ====&lt;br /&gt;
&lt;br /&gt;
By default, CIFS mounts use only a single set of user credentials (the mount credentials) when accessing a share. To support different user sessions on the same mountpoint, with correct permission and ownership handling, the mount options &amp;lt;pre&amp;gt;multiuser,cifsacl&amp;lt;/pre&amp;gt; have to be used. Because the kernel cannot prompt for passwords, &#039;&#039;&#039;multiuser mounts are limited to passwordless sec= options, such as sec=krb5.&#039;&#039;&#039; Information about setting up a Kerberos environment can be found [[SDS@hd/Access/Kerberos|here]].&lt;br /&gt;
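The same options used in the autofs map of the AutoFS Setup section also work for a one-off manual mount. The sketch below only assembles and prints the mount command instead of executing it, since a real mount requires root privileges and a valid Kerberos ticket; &amp;lt;sv-acronym&amp;gt; remains a placeholder for your Speichervorhaben.

```shell
# Sketch: print (not run) a kerberized multiuser CIFS mount command.
# cruid tells the kernel which user's Kerberos credential cache to use.
opts="multiuser,cifsacl,sec=krb5,cruid=$(id -u),vers=3,mfsymlinks"
share="//lsdf02.urz.uni-heidelberg.de/<sv-acronym>"   # placeholder share
echo sudo mount -t cifs -o "$opts" "$share" /mnt/sds-hd
```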
&lt;br /&gt;
===== ID Mapping =====&lt;br /&gt;
&lt;br /&gt;
In a Multiuser Environment it is important to get the correct ownerships and permissions from the server. Therefore you need to set up an [[SDS@hd/Access/ID-Mapping|ID Mapping]] environment.&lt;br /&gt;
&lt;br /&gt;
Additionally, the following packages are needed to enable CIFS ID mapping:&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ yum install cifs-utils keyutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ apt install cifs-utils keyutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After [[SDS@hd/Access/ID-Mapping|installing SSSD]], you have to ensure that it is used for CIFS ID mapping, e.g.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
On RedHat, SSSD should already have a higher priority than winbind:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ alternatives --display cifs-idmap-plugin&lt;br /&gt;
&lt;br /&gt;
cifs-idmap-plugin - status is auto.&lt;br /&gt;
 link currently points to /usr/lib64/cifs-utils/cifs_idmap_sss.so&lt;br /&gt;
/usr/lib64/cifs-utils/cifs_idmap_sss.so - priority 20&lt;br /&gt;
/usr/lib64/cifs-utils/idmapwb.so - priority 10&lt;br /&gt;
Current `best&#039; version is /usr/lib64/cifs-utils/cifs_idmap_sss.so.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
On Debian systems, SSSD has to be registered for ID mapping with a higher priority than winbind:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo update-alternatives --install /etc/cifs-utils/idmap-plugin idmap-plugin /usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so 50&lt;br /&gt;
&lt;br /&gt;
$ update-alternatives --display idmap-plugin&lt;br /&gt;
idmap-plugin - auto mode&lt;br /&gt;
  link best version is /usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so&lt;br /&gt;
  link currently points to /usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so&lt;br /&gt;
  link idmap-plugin is /etc/cifs-utils/idmap-plugin&lt;br /&gt;
  slave idmap-plugin.8.gz is /usr/share/man/man8/idmap-plugin.8.gz&lt;br /&gt;
/usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so - priority 50&lt;br /&gt;
/usr/lib/x86_64-linux-gnu/cifs-utils/idmapwb.so - priority 40&lt;br /&gt;
  slave idmap-plugin.8.gz: /usr/share/man/man8/idmapwb.8.gz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== AutoFS Setup =====&lt;br /&gt;
&lt;br /&gt;
In contrast to NFS mounts, CIFS shares have to be mounted with user credentials, so the root user cannot simply mount them once into a global folder for everyone. Instead, each share initially has to be mounted by a user who has access to it. To achieve this, you can use the automounter &amp;quot;autofs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apt install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards, configure your SDS@hd Speichervorhaben (storage projects) in a new map file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.sds-hd&lt;br /&gt;
&amp;lt;sv-acronym1&amp;gt;    -fstype=cifs,cifsacl,multiuser,sec=krb5,cruid=${UID},vers=3,mfsymlinks  ://lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym1&amp;gt;&lt;br /&gt;
&amp;lt;sv-acronym2&amp;gt;    -fstype=cifs,cifsacl,multiuser,sec=krb5,cruid=${UID},vers=3,mfsymlinks  ://lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym2&amp;gt;&lt;br /&gt;
....&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have to include the new map into the auto.master file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
/mnt/sds-hd   /etc/auto.sds-hd&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To display all available SDS@hd shares on this machine to the users, you should enable &amp;quot;browse_mode&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/autofs.conf&lt;br /&gt;
[...]&lt;br /&gt;
# to display all available SDS@hd shares on this machine to the users&lt;br /&gt;
browse_mode=yes&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
otherwise each share folder will only become visible after a user has mounted it.&lt;br /&gt;
&lt;br /&gt;
After changing the configuration, you should restart the autofs daemon, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ systemctl restart autofs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course you can adapt all other autofs options, such as timeouts, to the specific needs of your environment, or use any other method for dynamically mounting the CIFS shares.&lt;br /&gt;
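One such alternative on systemd-based distributions is an automount generated from /etc/fstab. A sketch under the same kerberized multiuser assumptions, with a placeholder share name:

```
# /etc/fstab (single line): mounted on first access, unmounted when idle
//lsdf02.urz.uni-heidelberg.de/<sv-acronym>  /mnt/sds-hd/<sv-acronym>  cifs  multiuser,cifsacl,sec=krb5,vers=3,mfsymlinks,noauto,x-systemd.automount,x-systemd.idle-timeout=600  0 0
```

After editing /etc/fstab, run `systemctl daemon-reload` so systemd regenerates the automount unit.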
&lt;br /&gt;
===== Access the Share =====&lt;br /&gt;
&lt;br /&gt;
Now each user should be able to mount any SDS@hd share that is configured for the machine. If a share is already mounted, other users can access it with their own credentials without mounting it again.&lt;br /&gt;
&lt;br /&gt;
To get access, each user needs a valid Kerberos ticket, which can be fetched with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ kinit hd_xy123&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For further information about handling Kerberos tickets, take a look at [[SDS@hd/Access/NFS#Access_your_data|SDS@hd Kerberos]].&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/SMB&amp;diff=13456</id>
		<title>SDS@hd/Access/SMB</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/SMB&amp;diff=13456"/>
		<updated>2024-12-06T11:50:37Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* Mount over /etc/fstab */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SMB (Server Message Block) is a network file sharing protocol. It exists in different versions and implementations: CIFS (outdated), SMB2, SMB3, and Samba (a free implementation).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
The SMB connection has to be established with at least protocol version SMB 2.02, which is available since Windows Vista and OS X 10.7, and with an NTLMv2 authentication level of &amp;amp;quot;Send NTLMv2 responses only&amp;amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Windows ==&lt;br /&gt;
&lt;br /&gt;
Use an SMB share via the Windows Explorer.&lt;br /&gt;
&lt;br /&gt;
=== Needed Information ===&lt;br /&gt;
You need the following information:&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Username:&#039;&#039;&#039; &amp;lt;code&amp;gt;BWSERVICESAD\&amp;amp;lt;username&amp;amp;gt;&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Password:&#039;&#039;&#039; &#039;&#039;ServicePassword&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Network Path&#039;&#039;&#039; in UNC syntax &#039;&#039;&#039;:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;# for mounting the root folder&lt;br /&gt;
    \\lsdf02.urz.uni-heidelberg.de\ &lt;br /&gt;
    # for mounting a specific SV&lt;br /&gt;
    \\lsdf02.urz.uni-heidelberg.de\&amp;lt;sv-acronym&amp;gt;  &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Instructions ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol style=&amp;quot;list-style-type: decimal;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Open the Windows Explorer.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;a) To establish a &#039;&#039;&#039;non-permanent connection&#039;&#039;&#039;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Click on the address bar, which is located at the top of the Explorer.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Enter the network path and press Enter.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:WindowsSmb0_nonPerm_cut.png|center|700px]]&lt;br /&gt;
&amp;lt;p&amp;gt;b) To establish a &#039;&#039;&#039;permanent connection&#039;&#039;&#039; by creating a network (pseudo) drive:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Navigate to &amp;amp;quot;This PC&amp;amp;quot;. At the top of the window, click on &#039;&#039;Computer&#039;&#039; and select &#039;&#039;Map network drive&#039;&#039;.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
[[File:windowsSmb0_connect_network_drive_cut.png|center|700px]]&lt;br /&gt;
&amp;lt;/br&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Choose a drive letter to be associated with the network drive and enter the network path. Select &#039;&#039;use a different identification&#039;&#039;, as these differ from your credentials used locally.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:windowsSmb1_driveLetter.png|center|700px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;You will then be prompted to enter your credentials.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:windowsSmb2_password.png|center|x400px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;After logging in successfully, your network drive will appear under &#039;&#039;This PC&#039;&#039;. You can now manipulate your files as accustomed.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Mac ==&lt;br /&gt;
&lt;br /&gt;
Create a network drive with Finder.&lt;br /&gt;
&lt;br /&gt;
=== Needed Information ===&lt;br /&gt;
&lt;br /&gt;
You need the following information:&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Username:&#039;&#039;&#039; &amp;lt;code&amp;gt;BWSERVICESAD\&amp;amp;lt;username&amp;amp;gt;&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Password:&#039;&#039;&#039; &#039;&#039;ServicePassword&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Network Path:&#039;&#039;&#039; smb://lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Instructions ===&lt;br /&gt;
&lt;br /&gt;
# Open Finder, choose ‘Go’ in the menu bar, then ‘Connect to Server…’.&lt;br /&gt;
# Enter the network path and click ‘Connect’.&lt;br /&gt;
# When prompted, select ‘Registered User’ and enter your credentials, as these differ from the ones used locally.&lt;br /&gt;
&lt;br /&gt;
== Linux ==&lt;br /&gt;
&lt;br /&gt;
A UNIX-like operating system needs a CIFS client to use a share. CIFS clients are part of the Samba implementation for Linux and other UNIX-like operating systems (http://www.samba.org).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; &lt;br /&gt;
The core CIFS protocol does not provide unix ownership information or mode for files and directories. &lt;br /&gt;
Because of this, files and directories will generally appear to be owned by whatever the uid= and gid= options are set to, and will have permissions set to the default file_mode and dir_mode of the mount. &#039;&#039;&#039;Attempting to change these values via chmod/chown will return success but have no effect.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For security reasons, server-side permission checks cannot be overridden. The permission checks done by the server always correspond to the credentials used to mount the share, and not necessarily to the user who is accessing it.&lt;br /&gt;
&lt;br /&gt;
Although mapping of POSIX UIDs and SIDs is not needed for mounting a CIFS share, &#039;&#039;&#039;it might become necessary when working with files on the share, e.g. when modifying ACLs&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
For this reason the mount option &amp;lt;pre&amp;gt;cifsacl&amp;lt;/pre&amp;gt; together with a working &#039;&#039;&#039;ID Mapping&#039;&#039;&#039; setup is required, to allow correct permission handling and changes. It offers also the tools &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
getcifsacl&lt;br /&gt;
setcifsacl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to work with ACLs.&lt;br /&gt;
&lt;br /&gt;
With version 5.9 of cifs-utils a plugin interface was introduced by Jeff Layton to allow services other than winbind to handle the mapping of POSIX UIDs and SIDs. SSSD will provide a plugin to allow the cifs-utils to ask SSSD to map the ID. With this plugin a SSSD client can access a CIFS share with the same functionality as a client running Winbind.&lt;br /&gt;
&lt;br /&gt;
For this reason we can use the same [[Sds-hd_nfs#configure kerberos environment for SDS@hd|SSSD setup]] for cifs like we use for the kerberized nfs-Setup. &lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== SMB Client ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
To list the files in an SMB share, use the program smbclient.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
smbclient -U &#039;BWSERVICESAD\hd_xy123&#039;  //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt;&lt;br /&gt;
Enter BWSERVICESAD\hd_xy123&#039;s password: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The program gives you an FTP-like interactive shell for accessing the files.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ smbclient //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt; -U &#039;BWSERVICESAD\hd_xy123&#039;&lt;br /&gt;
Enter BWSERVICESAD\hd_xy123&#039;s password:&lt;br /&gt;
smb: \&amp;gt; ls&lt;br /&gt;
  .                    D        0  Thu Apr 23 12:51:48 2020&lt;br /&gt;
  ..                   D        0  Wed Apr 22 21:54:04 2020&lt;br /&gt;
  bench                D        0  Fri Jul 26 10:24:05 2019&lt;br /&gt;
  benchmark_test       D        0  Tue Oct 30 16:12:21 2018&lt;br /&gt;
  checksums            D        0  Mon Sep 18 10:24:21 2017&lt;br /&gt;
  test.multiuser       A        6  Thu Apr 23 12:36:07 2020&lt;br /&gt;
  test                 A        7  Thu Apr 23 09:38:13 2020&lt;br /&gt;
  .....&lt;br /&gt;
  .snapshots         DHR        0  Thu Jan  1 01:00:00 1970&lt;br /&gt;
&lt;br /&gt;
                115343360000 blocks of size 1024. 108260302848 blocks available&lt;br /&gt;
smb:\&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mounting a SDS@hd Share ===&lt;br /&gt;
&lt;br /&gt;
Mounting a SDS@hd CIFS share can be done using username/password credentials or using Kerberos tickets.&lt;br /&gt;
Information about setting up a Kerberos environment for SDS@hd can be found [[SDS@hd/Access/Kerberos|here]].&lt;br /&gt;
&lt;br /&gt;
==== Single-User Environment ====&lt;br /&gt;
&lt;br /&gt;
A share can be mounted to a local directory (e.g. /mnt/sds-hd). Depending on your system setup, root privileges may be required. &lt;br /&gt;
&lt;br /&gt;
By default, CIFS presents all files in a mounted share as owned by the user who mounted it and grants any existing write permission only to that user. With the additional mount options uid, gid, file_mode and dir_mode, other ownership and access rights can be defined on the client. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nevertheless, the ownership and access rights defined this way are only simulated on the client and are not actually transferred to the server.&#039;&#039;&#039; If access rights are changed on the client, or files with other owners are created in shared folders, these changes apply only on the client and only until the next remount.&lt;br /&gt;
&lt;br /&gt;
If you need to work with the correct server-side permissions, please follow the setup of a [[SDS@hd/Access/CIFS#Multiuser Environment|MultiUser Setup]].&lt;br /&gt;
&lt;br /&gt;
===== Mount over command line =====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir /mnt/sds-hd&lt;br /&gt;
&lt;br /&gt;
$ sudo mount -t cifs -o username=hd_xy123,domain=BWSERVICESAD,vers=3,mfsymlinks  //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt; /mnt/sds-hd&lt;br /&gt;
Password:&lt;br /&gt;
&lt;br /&gt;
$ df -h | grep sds-hd&lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/sd16j007  108T    6,6T  101T    7% /mnt/sds-hd&lt;br /&gt;
&lt;br /&gt;
$ cd /mnt/sds-hd/&lt;br /&gt;
$ ls&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Verify the success of the mount by invoking the mount command without arguments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mount | grep lsdf02&lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/sd16j007 on /mnt/sds-hd type cifs (rw,relatime,vers=3.1.1,cache=strict,username=xxxx,domain=BWSERVICESAD,uid=1000,forceuid,gid=0,noforcegid,addr=xxxxx,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Mount over /etc/fstab =====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir /mnt/mountpoint&lt;br /&gt;
&lt;br /&gt;
/etc/fstab&lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/&amp;lt;sv_acronym&amp;gt;   /mnt/mountpoint   cifs  uid=&amp;lt;YOUR_UID&amp;gt;,gid=&amp;lt;YOUR_GID&amp;gt;,user,vers=3,mfsymlinks,credentials=&amp;lt;path_to_user_HOME&amp;gt;/credentialsfile,noauto  0 0&lt;br /&gt;
&lt;br /&gt;
$ cat /path_to_user_HOME/credentialsfile&lt;br /&gt;
username=hd_xy123&lt;br /&gt;
password=&amp;lt;your_servicepassword&amp;gt;&lt;br /&gt;
domain=BWSERVICESAD&lt;br /&gt;
&lt;br /&gt;
$ mount /mnt/mountpoint&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Verify the success of the mount by invoking the mount command without arguments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mount | grep cifs &lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/sd16j007 on /mnt/mountpoint type cifs (rw,relatime,vers=3.1.1,cache=strict,username=xxxx,domain=BWSERVICESAD,uid=1000,forceuid,gid=0,noforcegid,addr=xxxxx,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multiuser Environment ====&lt;br /&gt;
&lt;br /&gt;
By default, CIFS mounts use only a single set of user credentials (the mount credentials) when accessing a share. To support different user sessions on the same mountpoint, with correct permission and ownership handling, the mount options &amp;lt;pre&amp;gt;multiuser,cifsacl&amp;lt;/pre&amp;gt; have to be used. Because the kernel cannot prompt for passwords, &#039;&#039;&#039;multiuser mounts are limited to passwordless sec= options, such as sec=krb5.&#039;&#039;&#039; Information about setting up a Kerberos environment can be found [[SDS@hd/Access/Kerberos|here]].&lt;br /&gt;
&lt;br /&gt;
===== ID Mapping =====&lt;br /&gt;
&lt;br /&gt;
In a Multiuser Environment it is important to get the correct ownerships and permissions from the server. Therefore you need to set up an [[SDS@hd/Access/ID-Mapping|ID Mapping]] environment.&lt;br /&gt;
&lt;br /&gt;
Additionally, the following packages are needed to enable CIFS ID mapping:&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ yum install cifs-utils keyutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ apt install cifs-utils keyutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After [[SDS@hd/Access/ID-Mapping|installing SSSD]], you have to ensure that it is used for CIFS ID mapping, e.g.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
On RedHat, SSSD should already have a higher priority than winbind:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ alternatives --display cifs-idmap-plugin&lt;br /&gt;
&lt;br /&gt;
cifs-idmap-plugin - status is auto.&lt;br /&gt;
 link currently points to /usr/lib64/cifs-utils/cifs_idmap_sss.so&lt;br /&gt;
/usr/lib64/cifs-utils/cifs_idmap_sss.so - priority 20&lt;br /&gt;
/usr/lib64/cifs-utils/idmapwb.so - priority 10&lt;br /&gt;
Current `best&#039; version is /usr/lib64/cifs-utils/cifs_idmap_sss.so.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
On Debian systems, SSSD has to be registered for ID mapping with a higher priority than winbind:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo update-alternatives --install /etc/cifs-utils/idmap-plugin idmap-plugin /usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so 50&lt;br /&gt;
&lt;br /&gt;
$ update-alternatives --display idmap-plugin&lt;br /&gt;
idmap-plugin - auto mode&lt;br /&gt;
  link best version is /usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so&lt;br /&gt;
  link currently points to /usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so&lt;br /&gt;
  link idmap-plugin is /etc/cifs-utils/idmap-plugin&lt;br /&gt;
  slave idmap-plugin.8.gz is /usr/share/man/man8/idmap-plugin.8.gz&lt;br /&gt;
/usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so - priority 50&lt;br /&gt;
/usr/lib/x86_64-linux-gnu/cifs-utils/idmapwb.so - priority 40&lt;br /&gt;
  slave idmap-plugin.8.gz: /usr/share/man/man8/idmapwb.8.gz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== AutoFS Setup =====&lt;br /&gt;
&lt;br /&gt;
In contrast to NFS mounts, CIFS shares have to be mounted with user credentials, so the root user cannot simply mount them once into a global folder for everyone. Instead, each share initially has to be mounted by a user who has access to it. To achieve this, you can use the automounter &amp;quot;autofs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apt install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards, configure your SDS@hd Speichervorhaben (storage projects) in a new map file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.sds-hd&lt;br /&gt;
&amp;lt;sv-acronym1&amp;gt;    -fstype=cifs,cifsacl,multiuser,sec=krb5,cruid=${UID},vers=3.0,mfsymlinks  ://lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym1&amp;gt;&lt;br /&gt;
&amp;lt;sv-acronym2&amp;gt;    -fstype=cifs,cifsacl,multiuser,sec=krb5,cruid=${UID},vers=3.0,mfsymlinks  ://lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym2&amp;gt;&lt;br /&gt;
....&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have to include the new map into the auto.master file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
/mnt/sds-hd   /etc/auto.sds-hd&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To display all available SDS@hd shares on this machine to the users, you should enable &amp;quot;browse_mode&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/autofs.conf&lt;br /&gt;
[...]&lt;br /&gt;
# to display all available SDS@hd shares on this machine to the users&lt;br /&gt;
browse_mode=yes&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
otherwise each share folder will only become visible after a user has mounted it.&lt;br /&gt;
&lt;br /&gt;
After changing the configuration, you should restart the autofs daemon, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ systemctl restart autofs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course you can adapt all other autofs options, such as timeouts, to the specific needs of your environment, or use any other method for dynamically mounting the CIFS shares.&lt;br /&gt;
&lt;br /&gt;
===== Access the Share =====&lt;br /&gt;
&lt;br /&gt;
Now each user should be able to mount any SDS@hd share that is configured for the machine. If a share is already mounted, other users can access it with their own credentials without mounting it again.&lt;br /&gt;
&lt;br /&gt;
To get access, each user needs a valid Kerberos ticket, which can be fetched with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ kinit hd_xy123&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For further information about handling Kerberos tickets, take a look at [[SDS@hd/Access/NFS#Access_your_data|SDS@hd Kerberos]].&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/SMB&amp;diff=13455</id>
		<title>SDS@hd/Access/SMB</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/SMB&amp;diff=13455"/>
		<updated>2024-12-06T11:50:18Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* Mount over command line */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SMB (Server Message Block) is a network file sharing protocol. It exists in different versions and implementations: CIFS (outdated), SMB2, SMB3, and Samba (a free implementation).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
The SMB connection has to be established with at least protocol version SMB 2.02, which is available since Windows Vista and OS X 10.7, and with an NTLMv2 authentication level of &amp;amp;quot;Send NTLMv2 responses only&amp;amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Windows ==&lt;br /&gt;
&lt;br /&gt;
Use an SMB share via the Windows Explorer.&lt;br /&gt;
&lt;br /&gt;
=== Needed Information ===&lt;br /&gt;
You need the following information:&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Username:&#039;&#039;&#039; &amp;lt;code&amp;gt;BWSERVICESAD\&amp;amp;lt;username&amp;amp;gt;&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Password:&#039;&#039;&#039; &#039;&#039;ServicePassword&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Network Path&#039;&#039;&#039; in UNC syntax &#039;&#039;&#039;:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;# for mounting the root folder&lt;br /&gt;
    \\lsdf02.urz.uni-heidelberg.de\ &lt;br /&gt;
    # for mounting a specific SV&lt;br /&gt;
    \\lsdf02.urz.uni-heidelberg.de\&amp;lt;sv-acronym&amp;gt;  &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Instructions ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol style=&amp;quot;list-style-type: decimal;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Open the Windows Explorer.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;a) To establish a &#039;&#039;&#039;non-permanent connection&#039;&#039;&#039;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Click on the address bar, which is located at the top of the Explorer.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Enter the network path and press Enter.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:WindowsSmb0_nonPerm_cut.png|center|700px]]&lt;br /&gt;
&amp;lt;p&amp;gt;b) To establish a &#039;&#039;&#039;permanent connection&#039;&#039;&#039; by creating a network (pseudo) drive:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Navigate to &amp;amp;quot;This PC&amp;amp;quot;. At the top of the window, click on &#039;&#039;Computer&#039;&#039; and select &#039;&#039;Map network drive&#039;&#039;.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
[[File:windowsSmb0_connect_network_drive_cut.png|center|700px]]&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Choose a drive letter to be associated with the network drive and enter the network path. Select &#039;&#039;Connect using different credentials&#039;&#039;, as the SDS@hd credentials differ from those used locally.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:windowsSmb1_driveLetter.png|center|700px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;You will then be prompted to enter your credentials.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:windowsSmb2_password.png|center|x400px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;After logging in successfully, your network drive will appear under &#039;&#039;This PC&#039;&#039;. You can now work with your files as usual.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Mac ==&lt;br /&gt;
&lt;br /&gt;
Create a network drive with Finder.&lt;br /&gt;
&lt;br /&gt;
=== Needed Information ===&lt;br /&gt;
&lt;br /&gt;
You need the following information:&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Username:&#039;&#039;&#039; &amp;lt;code&amp;gt;BWSERVICESAD\&amp;amp;lt;username&amp;amp;gt;&amp;lt;/code&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Password:&#039;&#039;&#039; &#039;&#039;ServicePassword&#039;&#039;&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Network Path:&#039;&#039;&#039; smb://lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Instructions ===&lt;br /&gt;
&lt;br /&gt;
# Select the control field&lt;br /&gt;
# Enter the network path and connect using a different identification, as the SDS@hd credentials differ from those used locally.&lt;br /&gt;
&lt;br /&gt;
Alternative: Open Finder, choose &#039;&#039;Go&#039;&#039; in the menu bar, then &#039;&#039;Connect to Server&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Linux ==&lt;br /&gt;
&lt;br /&gt;
A UNIX-like operating system needs a CIFS client to use a share. CIFS clients are part of the Samba implementation for Linux and other UNIX-like operating systems (http://www.samba.org).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; &lt;br /&gt;
The core CIFS protocol does not provide unix ownership information or mode for files and directories. &lt;br /&gt;
Because of this, files and directories will generally appear to be owned by whatever values the uid= or gid= options are set, and will have permissions set to the default file_mode and dir_mode for the mount. &#039;&#039;&#039;Attempting to change these values via chmod/chown will return success but have no effect.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For security reasons, server side permission checks cannot be overridden. The permission checks done by the server will always correspond to the credentials used to mount the share, and not necessarily to the user who is accessing the share.&lt;br /&gt;
&lt;br /&gt;
Although mapping of POSIX UIDs and SIDs is not needed for mounting a CIFS share, &#039;&#039;&#039;it might become necessary when working with files on the share, e.g. when modifying ACLs&#039;&#039;&#039;.&lt;br /&gt;
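&lt;br /&gt;
For example, once ID mapping is in place, the ACL of a file on the share can be inspected with the getcifsacl tool from the cifs-utils package (a sketch; the path is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# show the security descriptor (owner, group, ACEs) of a file on the mounted share&lt;br /&gt;
$ getcifsacl /mnt/sds-hd/&amp;lt;sv-acronym&amp;gt;/testfile&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;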
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
For this reason the mount option &amp;lt;pre&amp;gt;cifsacl&amp;lt;/pre&amp;gt; together with a working &#039;&#039;&#039;ID Mapping&#039;&#039;&#039; setup is required, to allow correct permission handling and changes. It offers also the tools &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
getcifsacl&lt;br /&gt;
setcifsacl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to work with ACLs.&lt;br /&gt;
&lt;br /&gt;
With version 5.9 of cifs-utils a plugin interface was introduced by Jeff Layton to allow services other than winbind to handle the mapping of POSIX UIDs and SIDs. SSSD will provide a plugin to allow the cifs-utils to ask SSSD to map the ID. With this plugin a SSSD client can access a CIFS share with the same functionality as a client running Winbind.&lt;br /&gt;
&lt;br /&gt;
For this reason we can use the same [[Sds-hd_nfs#configure kerberos environment for SDS@hd|SSSD setup]] for cifs like we use for the kerberized nfs-Setup. &lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== SMB Client ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
To list the files in an SMB share, use the program smbclient.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
smbclient -U &#039;BWSERVICESAD\hd_xy123&#039;  //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt;&lt;br /&gt;
Enter BWSERVICESAD\hd_xy123&#039;s password: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The program allows you to access the files in an interactive, FTP-like shell.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ smbclient //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt; -U &#039;BWSERVICESAD\hd_xy123&#039;&lt;br /&gt;
Enter BWSERVICESAD\hd_xy123&#039;s password:&lt;br /&gt;
smb: \&amp;gt; ls&lt;br /&gt;
  .                    D        0  Thu Apr 23 12:51:48 2020&lt;br /&gt;
  ..                   D        0  Wed Apr 22 21:54:04 2020&lt;br /&gt;
  bench                D        0  Fri Jul 26 10:24:05 2019&lt;br /&gt;
  benchmark_test       D        0  Tue Oct 30 16:12:21 2018&lt;br /&gt;
  checksums            D        0  Mon Sep 18 10:24:21 2017&lt;br /&gt;
  test.multiuser       A        6  Thu Apr 23 12:36:07 2020&lt;br /&gt;
  test                 A        7  Thu Apr 23 09:38:13 2020&lt;br /&gt;
  .....&lt;br /&gt;
  .snapshots         DHR        0  Thu Jan  1 01:00:00 1970&lt;br /&gt;
&lt;br /&gt;
                115343360000 blocks of size 1024. 108260302848 blocks available&lt;br /&gt;
smb: \&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mounting a SDS@hd Share ===&lt;br /&gt;
&lt;br /&gt;
Mounting a SDS@hd CIFS share can be done by using username/password credentials or by using Kerberos tickets.&lt;br /&gt;
Information about setting up a Kerberos environment for SDS@hd can be found [[SDS@hd/Access/Kerberos|here]].&lt;br /&gt;
&lt;br /&gt;
==== Single-User Environment ====&lt;br /&gt;
&lt;br /&gt;
A share can be mounted to a local directory (e.g. /mnt/sds-hd). Depending on your system setup, root privileges may be required.&lt;br /&gt;
&lt;br /&gt;
By default, CIFS presents all files of a mounted share on the client as the property of the user who mounted it and grants any existing write permissions only to that user. With the additional mount options uid, gid, file_mode and dir_mode, other ownership and access rights can be defined when mounting on the client.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Nevertheless, the ownership and access rights defined in this way are only simulated on the client and are not transferred to the server.&#039;&#039;&#039; If access rights are changed on the client or files with other owners are created in shared folders, these changes only apply to the client and only until the next remount.&lt;br /&gt;
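&lt;br /&gt;
For example, the simulated ownership and permissions can be set at mount time; a sketch, where the uid/gid values and the SV acronym are placeholders for your local account and Speichervorhaben:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# present all files as owned by local uid/gid 1000, private to that user&lt;br /&gt;
$ sudo mount -t cifs -o username=hd_xy123,domain=BWSERVICESAD,vers=3,mfsymlinks,uid=1000,gid=1000,file_mode=0700,dir_mode=0700 //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt; /mnt/sds-hd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;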
&lt;br /&gt;
If you need to work with the correct server-side permissions, please follow the setup of a [[SDS@hd/Access/CIFS#Multiuser Environment|Multiuser Environment]].&lt;br /&gt;
&lt;br /&gt;
===== Mount over command line =====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir /mnt/sds-hd&lt;br /&gt;
&lt;br /&gt;
$ sudo mount -t cifs -o username=hd_xy123,domain=BWSERVICESAD,vers=3,mfsymlinks  //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt; /mnt/sds-hd&lt;br /&gt;
Password:&lt;br /&gt;
&lt;br /&gt;
$ df -h | grep sds-hd&lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/sd16j007  108T    6,6T  101T    7% /mnt/sds-hd&lt;br /&gt;
&lt;br /&gt;
$ cd /mnt/sds-hd/&lt;br /&gt;
$ ls&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Verify the success of the mount by invoking the mount command without any arguments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mount | grep lsdf02&lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/sd16j007 on /mnt/sds-hd type cifs (rw,relatime,vers=3.1.1,cache=strict,username=xxxx,domain=BWSERVICESAD,uid=1000,forceuid,gid=0,noforcegid,addr=xxxxx,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Mount over /etc/fstab =====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir /mnt/mountpoint&lt;br /&gt;
&lt;br /&gt;
/etc/fstab&lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/&amp;lt;sv_acronym&amp;gt;   /mnt/mountpoint   cifs  uid=&amp;lt;YOUR_UID&amp;gt;,gid=&amp;lt;YOUR_GID&amp;gt;,user,vers=3.0,mfsymlinks,credentials=&amp;lt;path_to_user_HOME&amp;gt;/credentialsfile,noauto  0 0&lt;br /&gt;
&lt;br /&gt;
$ cat /path_to_user_HOME/credentialsfile&lt;br /&gt;
username=hd_xy123&lt;br /&gt;
password=&amp;lt;your_servicepassword&amp;gt;&lt;br /&gt;
domain=BWSERVICESAD&lt;br /&gt;
&lt;br /&gt;
$ mount /mnt/mountpoint&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Verify the success of the mount by invoking the mount command without any arguments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mount | grep cifs &lt;br /&gt;
//lsdf02.urz.uni-heidelberg.de/sd16j007 on /mnt/mountpoint type cifs (rw,relatime,vers=3.0,cache=strict,username=xxxx,domain=BWSERVICESAD,uid=1000,forceuid,gid=0,noforcegid,addr=xxxxx,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Multiuser Environment ====&lt;br /&gt;
&lt;br /&gt;
By default, CIFS mounts only use a single set of user credentials (the mount credentials) when accessing a share. To support different user sessions on the same mountpoint and correct permission/ownership processing, the mount options &amp;lt;pre&amp;gt;multiuser,cifsacl&amp;lt;/pre&amp;gt; have to be used. Because the kernel cannot prompt for passwords, &#039;&#039;&#039;multiuser mounts are limited to mounts using passwordless sec= options, like sec=krb5. Information about setting up a Kerberos environment can be found [[SDS@hd/Access/Kerberos|here]].&#039;&#039;&#039;&lt;br /&gt;
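&lt;br /&gt;
A manual multiuser mount might look like this (a sketch; it assumes the mounting user already holds a valid Kerberos ticket, and the SV acronym is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cruid=$UID tells the kernel whose credential cache to use for the initial mount&lt;br /&gt;
$ sudo mount -t cifs -o multiuser,cifsacl,sec=krb5,cruid=$UID,vers=3.0,mfsymlinks //lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym&amp;gt; /mnt/sds-hd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;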
&lt;br /&gt;
===== ID Mapping =====&lt;br /&gt;
&lt;br /&gt;
In a Multiuser Environment it is important to get the correct ownerships and permissions from the server. Therefore you need to set up an [[SDS@hd/Access/ID-Mapping|ID Mapping]] environment.&lt;br /&gt;
&lt;br /&gt;
Additionally, the following packages are needed to enable CIFS ID mapping:&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ yum install cifs-utils keyutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ apt install cifs-utils keyutils&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After [[SDS@hd/Access/ID-Mapping|installing SSSD]] you have to ensure that it will be used for CIFS ID mapping:&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
On RedHat, SSSD should already have a higher priority than winbind:&lt;br /&gt;
&amp;lt;pre&amp;gt;$ alternatives --display cifs-idmap-plugin&lt;br /&gt;
&lt;br /&gt;
cifs-idmap-plugin - status is auto.&lt;br /&gt;
 link currently points to /usr/lib64/cifs-utils/cifs_idmap_sss.so&lt;br /&gt;
/usr/lib64/cifs-utils/cifs_idmap_sss.so - priority 20&lt;br /&gt;
/usr/lib64/cifs-utils/idmapwb.so - priority 10&lt;br /&gt;
Current `best&#039; version is /usr/lib64/cifs-utils/cifs_idmap_sss.so.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
On debian systems SSSD has to be registered for ID mapping with a higher priority than winbind:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ sudo update-alternatives --install /etc/cifs-utils/idmap-plugin idmap-plugin /usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so 50&lt;br /&gt;
&lt;br /&gt;
$ update-alternatives --display idmap-plugin&lt;br /&gt;
idmap-plugin - auto mode&lt;br /&gt;
  link best version is /usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so&lt;br /&gt;
  link currently points to /usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so&lt;br /&gt;
  link idmap-plugin is /etc/cifs-utils/idmap-plugin&lt;br /&gt;
  slave idmap-plugin.8.gz is /usr/share/man/man8/idmap-plugin.8.gz&lt;br /&gt;
/usr/lib/x86_64-linux-gnu/cifs-utils/cifs_idmap_sss.so - priority 50&lt;br /&gt;
/usr/lib/x86_64-linux-gnu/cifs-utils/idmapwb.so - priority 40&lt;br /&gt;
  slave idmap-plugin.8.gz: /usr/share/man/man8/idmapwb.8.gz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== AutoFS Setup =====&lt;br /&gt;
&lt;br /&gt;
Because CIFS shares, in contrast to NFS mounts, are bound to user credentials, the root user cannot simply mount them into a global folder. Instead the shares have to be mounted initially by a user who has access to the share. To achieve this, you can use the automounter &amp;quot;autofs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apt install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you configure the SDS@hd Speichervorhaben in a new map file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.sds-hd&lt;br /&gt;
&amp;lt;sv-acronym1&amp;gt;    -fstype=cifs,cifsacl,multiuser,sec=krb5,cruid=${UID},vers=3.0,mfsymlinks  ://lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym1&amp;gt;&lt;br /&gt;
&amp;lt;sv-acronym2&amp;gt;    -fstype=cifs,cifsacl,multiuser,sec=krb5,cruid=${UID},vers=3.0,mfsymlinks  ://lsdf02.urz.uni-heidelberg.de/&amp;lt;sv-acronym2&amp;gt;&lt;br /&gt;
....&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have to include the new map into the auto.master file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
/mnt/sds-hd   /etc/auto.sds-hd&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To display all available SDS@hd shares on this machine to the users, you should enable &amp;quot;browse_mode&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/autofs.conf&lt;br /&gt;
[...]&lt;br /&gt;
# to display all available SDS-hd shares on this machine to the users&lt;br /&gt;
browse_mode=yes&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
otherwise each share folder will only be visible after a user has mounted it.&lt;br /&gt;
&lt;br /&gt;
After changing the configuration, you should restart the autofs daemon, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ systemctl restart autofs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course you can adapt all other autofs options, like timeouts, etc. to the specific needs of your environment or use any other method for dynamically mounting the CIFS shares.&lt;br /&gt;
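&lt;br /&gt;
For example, the idle time after which autofs unmounts a share can be set per map in the master file (a sketch; 600 seconds is an arbitrary value):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
# unmount idle SDS@hd shares after 10 minutes&lt;br /&gt;
/mnt/sds-hd   /etc/auto.sds-hd   --timeout=600&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;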
&lt;br /&gt;
===== Access the Share =====&lt;br /&gt;
&lt;br /&gt;
Now each user should be able to mount any SDS@hd share which is configured for the machine. If a share is already mounted, other users will access it with their own credentials without mounting again.&lt;br /&gt;
&lt;br /&gt;
To get access, each user needs a valid Kerberos ticket, which can be fetched with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ kinit hd_xy123&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For further information about handling kerberos tickets take a look at [[SDS@hd/Access/NFS#Access_your_data|SDS@hd kerberos]]&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=12875</id>
		<title>SDS@hd/Access/NFS</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=12875"/>
		<updated>2024-08-07T11:31:15Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= &amp;lt;b&amp;gt; Prerequisites &amp;lt;/b&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Attention:&#039;&#039;&#039; To access data served by SDS@hd, you need a &#039;&#039;&#039;&#039;&#039;Service Password&#039;&#039;&#039;&#039;&#039;. See details [[SDS@hd/Registration]].&lt;br /&gt;
&lt;br /&gt;
* Additionally, access to SDS@hd is currently only available inside the [https://www.belwue.de/netz/netz0.html BelWü network]. This means you have to use the VPN service of your home organization if you want to access SDS@hd from outside the bwHPC clusters (e.g. via [https://www.eduroam.org/where/ eduroam] or from your personal laptop).&lt;br /&gt;
&lt;br /&gt;
* The access via the NFS protocol is machine-based, which means &#039;&#039;&#039;new NFS clients have to be registered&#039;&#039;&#039; on SDS@hd. During this registration each machine receives a keytab file from the SDS@hd team, which allows mounting SDS@hd. In order for the keytab file to be created, please [mailto:sds-hd-support@urz.uni-heidelberg.de?subject=SDS@hd%20nfs-Client%20Registration send an email] to the SDS@hd team for client registration with the following information:&lt;br /&gt;
** hostname of the new NFS client&lt;br /&gt;
** IP address&lt;br /&gt;
** short description&lt;br /&gt;
** location&lt;br /&gt;
** acronym of the Speichervorhaben which should be available on this machine&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;b&amp;gt; Using NFSv4 for UNIX client &amp;lt;/b&amp;gt; = &lt;br /&gt;
&lt;br /&gt;
The authentication for data access via NFSv4 is performed using Kerberos tickets. This requires a functioning Kerberos environment on the client!&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/Kerberos}}&lt;br /&gt;
&lt;br /&gt;
After configuring Kerberos, you have to install the NFS packages on your system and enable kerberized NFSv4. The exact package names depend on your Linux distribution (see examples below).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example RedHat/CentOS&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; yum install nfs-utils nfs4-acl-tools&lt;br /&gt;
&lt;br /&gt;
/etc/sysconfig/nfs:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example debian/ubuntu&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; apt install nfs-common nfs4-acl-tools nfs-server&lt;br /&gt;
&lt;br /&gt;
/etc/default/nfs-common:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On Ubuntu Server, install nfs-kernel-server instead of nfs-server.&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/ID-Mapping}}&lt;br /&gt;
&lt;br /&gt;
To enable ID mapping for NFSv4 mounts, add the following lines to the file &#039;&#039;/etc/idmapd.conf&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
in /etc/idmapd.conf:&lt;br /&gt;
        [General]&lt;br /&gt;
        Domain = urz.uni-heidelberg.de&lt;br /&gt;
        Local-Realms = BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== mount a nfs share ==&lt;br /&gt;
The usual restrictions for mounting drives under Linux apply. Usually this can only be done by the superuser &amp;quot;root&amp;quot;. For detailed information, please contact your system administrator.&lt;br /&gt;
&lt;br /&gt;
After successful configuration (see above) you can mount your SDS@hd share with the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; mkdir &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;gt; mount -t nfs4 -o sec=krb5,vers=4.1 lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/ &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To enable the mounting after a restart, you have to add the following line to the file &amp;quot;/etc/fstab&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/   &amp;lt;mountpoint&amp;gt;   nfs4     sec=krb5,vers=4.1     0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== AutoFS Setup ===&lt;br /&gt;
&lt;br /&gt;
Instead of the fstab-entry you can also use the automounter &amp;quot;autofs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apt install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you configure the SDS@hd Speichervorhaben in a new map file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.sds-hd&lt;br /&gt;
sds-hd -fstype=nfs4,rw,sec=krb5,vers=4.1,nosuid,nodev   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
....&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have to include the new map into the auto.master file, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
/mnt   /etc/auto.sds-hd&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To display all available SDS@hd shares on this machine to the users, you should enable &amp;quot;browse_mode&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/autofs.conf&lt;br /&gt;
[...]&lt;br /&gt;
# to display all available SDS-hd shares on this machine to the users&lt;br /&gt;
browse_mode=yes&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
otherwise each share folder will only be visible after a user has mounted it.&lt;br /&gt;
&lt;br /&gt;
After changing the configuration, you should restart the autofs daemon, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ systemctl restart autofs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course you can adapt all other autofs options, like timeouts, etc. to the specific needs of your environment or use any other method for dynamically mounting the shares.&lt;br /&gt;
&lt;br /&gt;
== access your data ==&lt;br /&gt;
&#039;&#039;&#039;Attention!&#039;&#039;&#039; The access cannot be done as the root user, because root uses the Kerberos ticket of the machine, which does not have data access! &lt;br /&gt;
&lt;br /&gt;
To access your data on SDS@hd you have to fetch a valid Kerberos ticket with your SDS@hd username and service password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit hd_xy123&lt;br /&gt;
Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards you can check your Kerberos ticket with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; klist&lt;br /&gt;
Ticket cache: FILE:/tmp/krb5cc_1000&lt;br /&gt;
Default principal: hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&lt;br /&gt;
Valid starting       Expires              Service principal&lt;br /&gt;
20.09.2017 04:00:01  21.09.2017 04:00:01  krbtgt/BWSERVICES.UNI-HEIDELBERG.DE@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
        renew until 29.09.2017 13:38:49&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you should be able to access the mountpoint, which contains all Speichervorhaben exported to your machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; ls &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
sd16j007  sd17c010  sd17d005&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== renew a kerberos ticket ==&lt;br /&gt;
For security reasons, a Kerberos ticket has a limited lifetime (default: 10 hours, maximum 24 hours), so you have to renew your ticket before it expires to prevent losing access.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit -R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This renewal can only be done for a maximum of 10 days and only as long as the current Kerberos ticket is still valid. To renew an expired ticket, you have to use your service password again.&lt;br /&gt;
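&lt;br /&gt;
If you need a longer renewable lifetime from the start, you can request it when fetching the ticket; a sketch, assuming the server grants it (the server caps the renewable lifetime at its own maximum):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# request a ticket that stays renewable for up to 10 days&lt;br /&gt;
&amp;gt; kinit -r 10d hd_xy123&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;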
&lt;br /&gt;
== destroy kerberos ticket ==&lt;br /&gt;
Even though Kerberos tickets are only valid for a limited period of time, a ticket should be destroyed as soon as access is no longer needed, to prevent misuse on multi-user systems:&lt;br /&gt;
&amp;lt;pre&amp;gt;kdestroy&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== automated kerberos tickets ==&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;&#039;&#039;&#039;Attention!&#039;&#039;&#039; Keep this generated Keytab safe and use it only in trusted environments!&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your workflow needs permanent access to SDS@hd for longer than 10 days, you can use &#039;&#039;&#039;ktutil&#039;&#039;&#039; to store an encrypted form of your service password in a keytab file:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ktutil&lt;br /&gt;
ktutil: addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  wkt xy123.keytab&lt;br /&gt;
ktutil: quit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;non-interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
echo -e &amp;quot;addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac\n&amp;lt;your_servicepassword&amp;gt;\n&lt;br /&gt;
addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts\n&amp;lt;your_servicepassword&amp;gt;\nwkt xy123.keytab&amp;quot; | ktutil&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this keytab, you can fetch a kerberos ticket without an interactive password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kinit -k -t xy123.keytab hd_xy123 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
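&lt;br /&gt;
This can be combined with a cron job to keep the ticket fresh unattended; a sketch, where the schedule and the keytab path are placeholders you have to adapt:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# crontab entry: refresh the Kerberos ticket every 8 hours&lt;br /&gt;
0 */8 * * * /usr/bin/kinit -k -t /path/to/xy123.keytab hd_xy123&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;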
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;b&amp;gt; Troubleshooting &amp;lt;/b&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
== incorrect mount option ==&lt;br /&gt;
&lt;br /&gt;
When you try to mount SDS@hd you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mount.nfs4: mount(2): Invalid argument&lt;br /&gt;
mount.nfs4: an incorrect mount option was specified&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This means the service rpc-gssd.service is not yet running. Please try to start the service via &amp;lt;pre&amp;gt;systemctl restart rpc-gssd.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== permission denied while mounting when SELinux is enabled ==&lt;br /&gt;
&lt;br /&gt;
When trying to mount, you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mount.nfs4: mount(2): Permission denied&lt;br /&gt;
mount.nfs4: trying text-based options &#039;sec=krb5,vers=4.1,addr=x.x.x.x,clientaddr=a.b.c.d&#039;&lt;br /&gt;
mount.nfs4: mount(2): Permission denied&lt;br /&gt;
mount.nfs4: access denied by server while mounting lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should check your system logs for entries like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Aug 07 13:09:53 systemname setroubleshoot[510523]: SELinux is preventing /usr/sbin/rpc.gssd from open access on the file /etc/krb5.keytab. For complete SELinux messages run: sealert -l XXXXXX	&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or similar. To solve the issue you have to add an SELinux rule to allow access to /etc/krb5.keytab (or disable SELinux completely).&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=12874</id>
		<title>SDS@hd/Access/NFS</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=12874"/>
		<updated>2024-08-07T11:28:18Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= &amp;lt;b&amp;gt; Prerequisites &amp;lt;/b&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Attention:&#039;&#039;&#039; To access data served by SDS@hd, you need a &#039;&#039;&#039;&#039;&#039;Service Password&#039;&#039;&#039;&#039;&#039;. See details [[SDS@hd/Registration]].&lt;br /&gt;
&lt;br /&gt;
* Additionally, access to SDS@hd is currently only available inside the [https://www.belwue.de/netz/netz0.html BelWü network]. This means you have to use the VPN service of your home organization if you want to access SDS@hd from outside the bwHPC clusters (e.g. via [https://www.eduroam.org/where/ eduroam] or from your personal laptop).&lt;br /&gt;
&lt;br /&gt;
* The access via the NFS protocol is machine-based, which means &#039;&#039;&#039;new NFS clients have to be registered&#039;&#039;&#039; on SDS@hd. During this registration each machine receives a keytab file from the SDS@hd team, which allows mounting SDS@hd. In order for the keytab file to be created, please [mailto:sds-hd-support@urz.uni-heidelberg.de?subject=SDS@hd%20nfs-Client%20Registration send an email] to the SDS@hd team for client registration with the following information:&lt;br /&gt;
** hostname of the new NFS client&lt;br /&gt;
** IP address&lt;br /&gt;
** short description&lt;br /&gt;
** location&lt;br /&gt;
** acronym of the Speichervorhaben which should be available on this machine&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;b&amp;gt; Using NFSv4 for UNIX client &amp;lt;/b&amp;gt; = &lt;br /&gt;
&lt;br /&gt;
The authentication for data access via NFSv4 is performed using Kerberos tickets. This requires a functioning Kerberos environment on the client!&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/Kerberos}}&lt;br /&gt;
&lt;br /&gt;
After configuring Kerberos, you have to install the NFS packages on your system and enable kerberized NFSv4. The exact package names depend on your Linux distribution (see examples below).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example RedHat/CentOS&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; yum install nfs-utils nfs4-acl-tools&lt;br /&gt;
&lt;br /&gt;
/etc/sysconfig/nfs:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example debian/ubuntu&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; apt install nfs-common nfs4-acl-tools nfs-server&lt;br /&gt;
&lt;br /&gt;
/etc/default/nfs-common:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On Ubuntu Server, install nfs-kernel-server instead of nfs-server.&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/ID-Mapping}}&lt;br /&gt;
&lt;br /&gt;
To enable ID mapping for NFSv4 mounts, add the following lines to the file &#039;&#039;/etc/idmapd.conf&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
in /etc/idmapd.conf:&lt;br /&gt;
        [General]&lt;br /&gt;
        Domain = urz.uni-heidelberg.de&lt;br /&gt;
        Local-Realms = BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== mount a nfs share ==&lt;br /&gt;
The usual restrictions for mounting drives under Linux apply. Usually this can only be done by the superuser &amp;quot;root&amp;quot;. For detailed information, please contact your system administrator.&lt;br /&gt;
&lt;br /&gt;
After successful configuration (see above) you can mount your SDS@hd share with the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; mkdir &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;gt; mount -t nfs4 -o sec=krb5,vers=4.1 lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/ &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To mount the share automatically after a reboot, add the following line to the file &amp;quot;/etc/fstab&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/   &amp;lt;mountpoint&amp;gt;   nfs4     sec=krb5,vers=4.1     0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
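Before relying on the fstab entry at the next boot, it can help to sanity-check its fields. A minimal sketch (the mountpoint /mnt/sds-hd is a hypothetical example; the line is parsed as static text, nothing is mounted):

```shell
#!/bin/sh
# Sketch: parse an fstab-style line and confirm the filesystem type and
# the Kerberos security option. /mnt/sds-hd is a hypothetical mountpoint.
line='lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/   /mnt/sds-hd   nfs4   sec=krb5,vers=4.1   0 0'
fstype=$(echo "$line" | awk '{print $3}')
opts=$(echo "$line" | awk '{print $4}')
echo "$fstype $opts"
case "$opts" in
  *sec=krb5*) echo "krb5 security option present" ;;
  *)          echo "warning: sec=krb5 missing" ;;
esac
```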
&lt;br /&gt;
=== AutoFS Setup ===&lt;br /&gt;
&lt;br /&gt;
Instead of the fstab-entry you can also use the automounter &amp;quot;autofs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apt install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you configure the SDS@hd Speichervorhaben in a new map file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.sds-hd&lt;br /&gt;
sds-hd -fstype=nfs4,rw,sec=krb5,vers=4.1,nosuid,nodev   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
....&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have to include the new map in the auto.master file, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
/mnt   /etc/auto.sds-hd&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To display all available SDS@hd shares on this machine to the users, you should enable &amp;quot;browse_mode&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/autofs.conf&lt;br /&gt;
[...]&lt;br /&gt;
# to display all available SDS@hd shares on this machine to the users&lt;br /&gt;
browse_mode=yes&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
otherwise each share folder only becomes visible after a user has mounted it.&lt;br /&gt;
&lt;br /&gt;
After changing the configuration, you should restart the autofs daemon, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ systemctl restart autofs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course you can adapt all other autofs options, like timeouts, etc., to the specific needs of your environment, or use any other method for dynamically mounting the shares.&lt;br /&gt;
&lt;br /&gt;
== access your data ==&lt;br /&gt;
&#039;&#039;&#039;Attention!&#039;&#039;&#039; Data access is not possible as the root user, because root uses the Kerberos ticket of the machine, which does not have data access!&lt;br /&gt;
&lt;br /&gt;
To access your data on SDS@hd you have to obtain a valid Kerberos ticket with your SDS@hd username and service password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit hd_xy123&lt;br /&gt;
Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards you can check your Kerberos ticket with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; klist&lt;br /&gt;
Ticket cache: FILE:/tmp/krb5cc_1000&lt;br /&gt;
Default principal: hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&lt;br /&gt;
Valid starting       Expires              Service principal&lt;br /&gt;
20.09.2017 04:00:01  21.09.2017 04:00:01  krbtgt/BWSERVICES.UNI-HEIDELBERG.DE@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
        renew until 29.09.2017 13:38:49&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
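If you script around klist, its output can be parsed rather than read manually. A sketch using the sample output above as static text (no real ticket cache is assumed to exist):

```shell
#!/bin/sh
# Sketch: extract the default principal from klist-style output.
# Uses sample output as static text; no ticket cache is required.
klist_output='Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE'
principal=$(printf '%s\n' "$klist_output" | awk -F': ' '/^Default principal/ {print $2}')
echo "$principal"
```

In a real session you would pipe `klist` itself into the same awk filter.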
&lt;br /&gt;
Afterwards you should be able to access the mountpoint, which contains all Speichervorhaben exported to your machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; ls &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
sd16j007  sd17c010  sd17d005&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== renew a kerberos ticket ==&lt;br /&gt;
Because a Kerberos ticket has a limited lifetime for security reasons (default: 10 hours, maximum: 24 hours), you have to renew your ticket before it expires to avoid losing access.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit -R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This renewal is possible for a maximum of 10 days, and only as long as the current Kerberos ticket is still valid. To renew an expired ticket, you have to enter your service password again.&lt;br /&gt;
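For long-running workflows, the renewal can be scheduled instead of done by hand. A sketch that only prints a candidate crontab entry (the 8-hour interval is an assumption chosen to stay under the 10-hour default ticket lifetime; the renewable limit still applies):

```shell
#!/bin/sh
# Sketch: print a crontab entry that renews the Kerberos ticket every
# 8 hours (assumed interval, under the 10-hour default lifetime).
# Renewal still stops at the renewable limit, so this is no substitute
# for a keytab. Pipe the output into "crontab -" to install it.
cron_line='0 */8 * * * /usr/bin/kinit -R'
echo "$cron_line"
```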
&lt;br /&gt;
== destroy kerberos ticket ==&lt;br /&gt;
Even though Kerberos tickets are only valid for a limited period of time, a ticket should be destroyed as soon as access is no longer needed, to prevent misuse on multi-user systems:&lt;br /&gt;
&amp;lt;pre&amp;gt;kdestroy&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== automated kerberos tickets ==&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;&#039;&#039;&#039;Attention!&#039;&#039;&#039; Keep this generated Keytab safe and use it only in trusted environments!&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your workflow needs permanent access to SDS@hd for longer than 10 days, you can use &#039;&#039;&#039;ktutil&#039;&#039;&#039; to store keys derived from your service password in a keytab file:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ktutil&lt;br /&gt;
ktutil: addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  wkt xy123.keytab&lt;br /&gt;
ktutil: quit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;non-interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
echo -e &amp;quot;addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac\n&amp;lt;your_servicepassword&amp;gt;\n&lt;br /&gt;
addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts\n&amp;lt;your_servicepassword&amp;gt;\nwkt xy123.keytab&amp;quot; | ktutil&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this keytab, you can fetch a kerberos ticket without an interactive password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kinit -k -t xy123.keytab hd_xy123 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
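In scripts, this keytab-based kinit call is typically wrapped so the principal and keytab path live in one place. A minimal sketch that only assembles the command as a string (kinit itself is not invoked here, since a working Kerberos environment is an assumption, not a given):

```shell
#!/bin/sh
# Sketch: assemble the keytab-based kinit invocation from variables so
# the principal and keytab path are defined in one place. The command
# is printed, not executed, to keep the example self-contained.
principal=hd_xy123
keytab=xy123.keytab
cmd="kinit -k -t $keytab $principal"
echo "$cmd"
# In a real workflow you would simply run:
#   kinit -k -t "$keytab" "$principal"
```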
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;b&amp;gt; Troubleshooting &amp;lt;/b&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
== incorrect mount option ==&lt;br /&gt;
&lt;br /&gt;
When you try to mount SDS@hd you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mount.nfs4: mount(2): Invalid argument&lt;br /&gt;
mount.nfs4: an incorrect mount option was specified&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This usually means that the service rpc-gssd.service is not running yet. Please try to start it via &amp;lt;pre&amp;gt;systemctl restart rpc-gssd.service&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== permission denied while mounting when SELinux is enabled ==&lt;br /&gt;
&lt;br /&gt;
When trying to mount, you get the following error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mount.nfs4: mount(2): Permission denied&lt;br /&gt;
mount.nfs4: trying text-based options &#039;sec=krb5,vers=4.1,addr=x.x.x.x,clientaddr=a.b.c.d&#039;&lt;br /&gt;
mount.nfs4: mount(2): Permission denied&lt;br /&gt;
mount.nfs4: access denied by server while mounting lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should check your system logs for entries like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Aug 07 13:09:53 nc-docker-aio-don setroubleshoot[510523]: SELinux is preventing /usr/sbin/rpc.gssd from open access on the file /etc/krb5.keytab. For complete SELinux messages run: sealert -l XXXXXX	&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or similar. To solve the issue you have to add an SELinux rule that allows access to /etc/krb5.keytab (or disable SELinux completely).&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=12532</id>
		<title>SDS@hd/Access/NFS</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=12532"/>
		<updated>2024-01-10T10:56:50Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /*  Prerequisites  */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= &amp;lt;b&amp;gt; Prerequisites &amp;lt;/b&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Attention:&#039;&#039;&#039; To access data served by SDS@hd, you need a &#039;&#039;&#039;&#039;&#039;Service Password&#039;&#039;&#039;&#039;&#039;. See [[SDS@hd/Registration]] for details.&lt;br /&gt;
&lt;br /&gt;
* Additionally, access to SDS@hd is currently only available inside the [https://www.belwue.de/netz/netz0.html BelWü network]. This means you have to use the VPN service of your home organization if you want to access SDS@hd from outside the bwHPC clusters (e.g. via [https://www.eduroam.org/where/ eduroam] or from your personal laptop).&lt;br /&gt;
&lt;br /&gt;
* The access via the NFS protocol is machine-based, which means &#039;&#039;&#039;new NFS clients have to be registered&#039;&#039;&#039; on SDS@hd. During this registration each machine receives a keytab file from the SDS@hd team, which allows mounting SDS@hd. In order for the keytab file to be created, please [mailto:sds-hd-support@urz.uni-heidelberg.de?subject=SDS@hd%20nfs-Client%20Registration send an email] to the SDS@hd team for client registration with the following information:&lt;br /&gt;
** hostname of the new nfs-Client&lt;br /&gt;
** IP address&lt;br /&gt;
** short description&lt;br /&gt;
** location&lt;br /&gt;
** acronym of the Speichervorhaben which should be available on this machine&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;b&amp;gt; Using NFSv4 for UNIX client &amp;lt;/b&amp;gt; = &lt;br /&gt;
&lt;br /&gt;
The authentication for data access via NFSv4 is performed using Kerberos tickets. This requires a functioning Kerberos environment on the client!&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/Kerberos}}&lt;br /&gt;
&lt;br /&gt;
After configuring Kerberos, you have to install the NFS packages on your system and enable Kerberized NFSv4. The exact package names depend on your Linux distribution (see the examples below).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example RedHat/CentOS&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; yum install nfs-utils nfs4-acl-tools&lt;br /&gt;
&lt;br /&gt;
/etc/sysconfig/nfs:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example debian/ubuntu&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; apt install nfs-common nfs4-acl-tools nfs-server&lt;br /&gt;
&lt;br /&gt;
/etc/default/nfs-common:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On Ubuntu Server, the package is called &#039;&#039;nfs-kernel-server&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/ID-Mapping}}&lt;br /&gt;
&lt;br /&gt;
To enable ID mapping for NFSv4 mounts, add the following lines to the file &#039;&#039;/etc/idmapd.conf&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
in /etc/idmapd.conf:&lt;br /&gt;
        [General]&lt;br /&gt;
        Domain = urz.uni-heidelberg.de&lt;br /&gt;
        Local-Realms = BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== mount an NFS share ==&lt;br /&gt;
The usual restrictions for mounting drives under Linux apply: usually this can only be done by the superuser &amp;quot;root&amp;quot;. For detailed information, please contact your system administrator.&lt;br /&gt;
&lt;br /&gt;
After successful configuration (see the sections above) you can mount your SDS@hd share with the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; mkdir &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;gt; mount -t nfs4 -o sec=krb5,vers=4.1 lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/ &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To mount the share automatically after a reboot, add the following line to the file &amp;quot;/etc/fstab&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/   &amp;lt;mountpoint&amp;gt;   nfs4     sec=krb5,vers=4.1     0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== AutoFS Setup ===&lt;br /&gt;
&lt;br /&gt;
Instead of the fstab-entry you can also use the automounter &amp;quot;autofs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apt install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you configure the SDS@hd Speichervorhaben in a new map file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.sds-hd&lt;br /&gt;
sds-hd -fstype=nfs4,rw,sec=krb5,vers=4.1,nosuid,nodev   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
....&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have to include the new map in the auto.master file, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
/mnt   /etc/auto.sds-hd&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To display all available SDS@hd shares on this machine to the users, you should enable &amp;quot;browse_mode&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/autofs.conf&lt;br /&gt;
[...]&lt;br /&gt;
# to display all available SDS@hd shares on this machine to the users&lt;br /&gt;
browse_mode=yes&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
otherwise each share folder only becomes visible after a user has mounted it.&lt;br /&gt;
&lt;br /&gt;
After changing the configuration, you should restart the autofs daemon, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ systemctl restart autofs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course you can adapt all other autofs options, like timeouts, etc., to the specific needs of your environment, or use any other method for dynamically mounting the shares.&lt;br /&gt;
&lt;br /&gt;
== access your data ==&lt;br /&gt;
&#039;&#039;&#039;Attention!&#039;&#039;&#039; Data access is not possible as the root user, because root uses the Kerberos ticket of the machine, which does not have data access!&lt;br /&gt;
&lt;br /&gt;
To access your data on SDS@hd you have to obtain a valid Kerberos ticket with your SDS@hd username and service password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit hd_xy123&lt;br /&gt;
Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards you can check your Kerberos ticket with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; klist&lt;br /&gt;
Ticket cache: FILE:/tmp/krb5cc_1000&lt;br /&gt;
Default principal: hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&lt;br /&gt;
Valid starting       Expires              Service principal&lt;br /&gt;
20.09.2017 04:00:01  21.09.2017 04:00:01  krbtgt/BWSERVICES.UNI-HEIDELBERG.DE@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
        renew until 29.09.2017 13:38:49&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you should be able to access the mountpoint, which contains all Speichervorhaben exported to your machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; ls &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
sd16j007  sd17c010  sd17d005&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== renew a kerberos ticket ==&lt;br /&gt;
Because a Kerberos ticket has a limited lifetime for security reasons (default: 10 hours, maximum: 24 hours), you have to renew your ticket before it expires to avoid losing access.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit -R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This renewal is possible for a maximum of 10 days, and only as long as the current Kerberos ticket is still valid. To renew an expired ticket, you have to enter your service password again.&lt;br /&gt;
&lt;br /&gt;
== destroy kerberos ticket ==&lt;br /&gt;
Even though Kerberos tickets are only valid for a limited period of time, a ticket should be destroyed as soon as access is no longer needed, to prevent misuse on multi-user systems:&lt;br /&gt;
&amp;lt;pre&amp;gt;kdestroy&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== automated kerberos tickets ==&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;&#039;&#039;&#039;Attention!&#039;&#039;&#039; Keep this generated Keytab safe and use it only in trusted environments!&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your workflow needs permanent access to SDS@hd for longer than 10 days, you can use &#039;&#039;&#039;ktutil&#039;&#039;&#039; to store keys derived from your service password in a keytab file:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ktutil&lt;br /&gt;
ktutil: addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  wkt xy123.keytab&lt;br /&gt;
ktutil: quit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;non-interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
echo -e &amp;quot;addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac\n&amp;lt;your_servicepassword&amp;gt;\n&lt;br /&gt;
addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts\n&amp;lt;your_servicepassword&amp;gt;\nwkt xy123.keytab&amp;quot; | ktutil&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this keytab, you can fetch a kerberos ticket without an interactive password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kinit -k -t xy123.keytab hd_xy123 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access&amp;diff=12464</id>
		<title>SDS@hd/Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access&amp;diff=12464"/>
		<updated>2023-11-20T11:46:50Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Access Protocols =&lt;br /&gt;
* [[SDS@hd/Access/SFTP|SFTP/SSHFS (Win,Mac,Linux)]]&lt;br /&gt;
* [[SDS@hd/Access/CIFS|SMB/CIFS (Win,Mac,Linux)]]&lt;br /&gt;
* [[SDS@hd/Access/NFS|NFSv4 (Linux)]]&lt;br /&gt;
* [[SDS@hd/Access/WEBDAV|WebDAV (Win,Mac,Linux)]]&lt;br /&gt;
&amp;lt;!-- = Authentication Tools = --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[SDS@hd/Access/Kerberos|Kerberos]] --&amp;gt;&lt;br /&gt;
&amp;lt;!-- * [[SDS@hd/Access/ID-Mapping|SSSD]] --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Access from bwForCluster Helix =&lt;br /&gt;
On bwForCluster Helix it is possible to directly access your storage space under /mnt/sds-hd/ on all login and compute nodes.&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=12450</id>
		<title>SDS@hd/Access/NFS</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Access/NFS&amp;diff=12450"/>
		<updated>2023-11-15T08:59:01Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* mount a nfs share */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= &amp;lt;b&amp;gt; Prerequisites &amp;lt;/b&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Attention:&#039;&#039;&#039; To access data served by SDS@hd, you need a &#039;&#039;&#039;&#039;&#039;Service Password&#039;&#039;&#039;&#039;&#039;. See [[SDS@hd/Registration]] for details.&lt;br /&gt;
&lt;br /&gt;
* Additionally, access to SDS@hd is currently only available inside the [https://www.belwue.de/netz/netz0.html BelWü network]. This means you have to use the VPN service of your home organization if you want to access SDS@hd from outside the bwHPC clusters (e.g. via [https://www.eduroam.org/where/ eduroam] or from your personal laptop).&lt;br /&gt;
&lt;br /&gt;
* The access via nfs protocol is machine-based, which means &#039;&#039;&#039;new nfs-Clients have to be registered&#039;&#039;&#039; on SDS@hd. During this registration each machine gets a keytab file, which allows mounting SDS@hd.&lt;br /&gt;
&lt;br /&gt;
* Currently you have to [mailto:sds-hd-support@urz.uni-heidelberg.de?subject=SDS@hd%20nfs-Client%20Registration send an email] for client registration to the SDS@hd team with the following information:&lt;br /&gt;
** hostname of the new nfs-Client&lt;br /&gt;
** IP address&lt;br /&gt;
** short description&lt;br /&gt;
** location&lt;br /&gt;
** acronym of the Speichervorhaben which should be available on this machine&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;b&amp;gt; Using NFSv4 for UNIX client &amp;lt;/b&amp;gt; = &lt;br /&gt;
&lt;br /&gt;
The authentication for data access via NFSv4 is performed using Kerberos tickets. This requires a functioning Kerberos environment on the client!&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/Kerberos}}&lt;br /&gt;
&lt;br /&gt;
After configuring Kerberos, you have to install the NFS packages on your system and enable Kerberized NFSv4. The exact package names depend on your Linux distribution (see the examples below).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example RedHat/CentOS&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; yum install nfs-utils nfs4-acl-tools&lt;br /&gt;
&lt;br /&gt;
/etc/sysconfig/nfs:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Example debian/ubuntu&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; apt install nfs-common nfs4-acl-tools nfs-server&lt;br /&gt;
&lt;br /&gt;
/etc/default/nfs-common:&lt;br /&gt;
NEED_IDMAPD=yes&lt;br /&gt;
NEED_GSSD=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On Ubuntu Server, the package is called &#039;&#039;nfs-kernel-server&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{{:SDS@hd/Access/ID-Mapping}}&lt;br /&gt;
&lt;br /&gt;
To enable ID mapping for NFSv4 mounts, add the following lines to the file &#039;&#039;/etc/idmapd.conf&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
in /etc/idmapd.conf:&lt;br /&gt;
        [General]&lt;br /&gt;
        Domain = urz.uni-heidelberg.de&lt;br /&gt;
        Local-Realms = BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== mount an NFS share ==&lt;br /&gt;
The usual restrictions for mounting drives under Linux apply: usually this can only be done by the superuser &amp;quot;root&amp;quot;. For detailed information, please contact your system administrator.&lt;br /&gt;
&lt;br /&gt;
After successful configuration (see the sections above) you can mount your SDS@hd share with the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; mkdir &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;gt; mount -t nfs4 -o sec=krb5,vers=4.1 lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/ &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To mount the share automatically after a reboot, add the following line to the file &amp;quot;/etc/fstab&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02/   &amp;lt;mountpoint&amp;gt;   nfs4     sec=krb5,vers=4.1     0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== AutoFS Setup ===&lt;br /&gt;
&lt;br /&gt;
Instead of the fstab-entry you can also use the automounter &amp;quot;autofs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* RedHat/CentOS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ yum install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* debian/ubuntu:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ apt install autofs&lt;br /&gt;
$ systemctl enable autofs &lt;br /&gt;
$ systemctl start autofs &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you configure the SDS@hd Speichervorhaben in a new map file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.sds-hd&lt;br /&gt;
sds-hd -fstype=nfs4,rw,sec=krb5,vers=4.1,nosuid,nodev   lsdf02.urz.uni-heidelberg.de:/gpfs/lsdf02&lt;br /&gt;
....&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have to include the new map in the auto.master file, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/auto.master&lt;br /&gt;
[...]&lt;br /&gt;
/mnt   /etc/auto.sds-hd&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To display all available SDS@hd shares on this machine to the users, you should enable &amp;quot;browse_mode&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cat /etc/autofs.conf&lt;br /&gt;
[...]&lt;br /&gt;
# to display all available SDS@hd shares on this machine to the users&lt;br /&gt;
browse_mode=yes&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
otherwise each share folder only becomes visible after a user has mounted it.&lt;br /&gt;
&lt;br /&gt;
After changing the configuration, you should restart the autofs daemon, e.g.:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ systemctl restart autofs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course you can adapt all other autofs options, like timeouts, etc., to the specific needs of your environment, or use any other method for dynamically mounting the shares.&lt;br /&gt;
&lt;br /&gt;
== access your data ==&lt;br /&gt;
&#039;&#039;&#039;Attention!&#039;&#039;&#039; Data access is not possible as the root user, because root uses the Kerberos ticket of the machine, which does not have data access!&lt;br /&gt;
&lt;br /&gt;
To access your data on SDS@hd you have to obtain a valid Kerberos ticket with your SDS@hd username and service password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit hd_xy123&lt;br /&gt;
Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE: &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards you can check your Kerberos ticket with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; klist&lt;br /&gt;
Ticket cache: FILE:/tmp/krb5cc_1000&lt;br /&gt;
Default principal: hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
&lt;br /&gt;
Valid starting       Expires              Service principal&lt;br /&gt;
20.09.2017 04:00:01  21.09.2017 04:00:01  krbtgt/BWSERVICES.UNI-HEIDELBERG.DE@BWSERVICES.UNI-HEIDELBERG.DE&lt;br /&gt;
        renew until 29.09.2017 13:38:49&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Afterwards you should be able to access the mountpoint, which contains all Speichervorhaben exported to your machine:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; ls &amp;lt;mountpoint&amp;gt;&lt;br /&gt;
sd16j007  sd17c010  sd17d005&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== renew a kerberos ticket ==&lt;br /&gt;
Because a Kerberos ticket has a limited lifetime for security reasons (default: 10 hours, maximum: 24 hours), you have to renew your ticket before it expires to avoid losing access.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; kinit -R&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This renewal is possible for a maximum of 10 days, and only as long as the current Kerberos ticket is still valid. To renew an expired ticket, you have to enter your service password again.&lt;br /&gt;
&lt;br /&gt;
== destroy kerberos ticket ==&lt;br /&gt;
Even though Kerberos tickets are only valid for a limited period of time, a ticket should be destroyed as soon as access is no longer needed, to prevent misuse on multi-user systems:&lt;br /&gt;
&amp;lt;pre&amp;gt;kdestroy&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== automated kerberos tickets ==&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;&#039;&#039;&#039;Attention!&#039;&#039;&#039; Keep this generated Keytab safe and use it only in trusted environments!&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your workflow needs permanent access to SDS@hd for longer than 10 days, you can use &#039;&#039;&#039;ktutil&#039;&#039;&#039; to store keys derived from your service password in a keytab file:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ktutil&lt;br /&gt;
ktutil: addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts&lt;br /&gt;
    Password for hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE:&lt;br /&gt;
ktutil:  wkt xy123.keytab&lt;br /&gt;
ktutil: quit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;non-interactive way:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
echo -e &amp;quot;addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e rc4-hmac\n&amp;lt;your_servicepassword&amp;gt;\n&lt;br /&gt;
addent -password -p hd_xy123@BWSERVICES.UNI-HEIDELBERG.DE -k 1 -e aes256-cts\n&amp;lt;your_servicepassword&amp;gt;\nwkt xy123.keytab&amp;quot; | ktutil&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With this keytab, you can fetch a kerberos ticket without an interactive password:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kinit -k -t xy123.keytab hd_xy123 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd&amp;diff=12375</id>
		<title>SDS@hd</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd&amp;diff=12375"/>
		<updated>2023-09-28T06:54:16Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: dienstöffnung&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:sds-hd-logo.png|350px]]&lt;br /&gt;
&lt;br /&gt;
SDS@hd is a central service for securely storing scientific data (Scientific Data Storage). The service is provided as a state service to researchers of higher education institutions of Baden-Württemberg. It is intended to be used for data that is frequently accessed (&#039;hot data&#039;).&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#FEF4AB; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#FFE856; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | News&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* November 2022: No special entitlement is needed anymore to participate in an existing storage project.&lt;br /&gt;
* September 2023: The service has now been opened for DFN AAI &amp;amp; eduGAIN federation members to participate in existing storage projects.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#eeeefe; width:100%;&amp;quot; &lt;br /&gt;
| style=&amp;quot;padding:8px; background:#dedefe; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Training &amp;amp; Support&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
* [https://www.urz.uni-heidelberg.de/en/service-catalogue/storage/sdshd-scientific-data-storage Service description &amp;amp; FAQ]&lt;br /&gt;
* [mailto:sds-hd-support@urz.uni-heidelberg.de Submit a Ticket]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#ffeaef; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#f5dfdf; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | User Documentation&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* [[SDS@hd/Registration|Registration]]&lt;br /&gt;
* [[SDS@hd/Access|Access]]&lt;br /&gt;
* [[SDS@hd/Security|Data Security]]&lt;br /&gt;
* Visualization: [https://www.bwvisu.de/ bwVisu]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;  background:#e6e9eb; width:100%;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:8px; background:#d1dadf; font-size:120%; font-weight:bold;  text-align:left&amp;quot; | Storage Funding&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* Please [[SDS@hd/Acknowledgement|acknowledge]] SDS@hd in your publications.&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Registration&amp;diff=11677</id>
		<title>SDS@hd/Registration</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=SDS@hd/Registration&amp;diff=11677"/>
		<updated>2022-12-14T14:05:05Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: update entitlement requirements&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Granting access and issuing a user account for &#039;&#039;&#039;&#039;&#039;SDS@hd&#039;&#039;&#039;&#039;&#039; requires registration at the bwServices website [https://bwservices.uni-heidelberg.de/ https://bwservices.uni-heidelberg.de] (step 2).&lt;br /&gt;
&lt;br /&gt;
= Storageproject (Speichervorhaben) =&lt;br /&gt;
&lt;br /&gt;
Before you can use SDS@hd, you have to register a new &amp;quot;Speichervorhaben&amp;quot; or join an already existing project via the [https://sds-hd.urz.uni-heidelberg.de/management SDS@hd Managementtool].&lt;br /&gt;
&lt;br /&gt;
== Register a new &amp;quot;SV&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
This is typically done only by the leader of a scientific work group or the senior scientist of a research group/collaboration.&lt;br /&gt;
Any number of co-workers can join your SV without having to register another project. &lt;br /&gt;
&lt;br /&gt;
Your higher education institution or university has to grant you permission to start your own SDS@hd storage project (&amp;quot;SDS@hd SV entitlement&amp;quot;). &lt;br /&gt;
&lt;br /&gt;
Each university has its own procedure. &lt;br /&gt;
&lt;br /&gt;
For more details about the SDS@hd entitlements, see the [https://urz.uni-heidelberg.de/en/sds-hd SDS@hd Website].&lt;br /&gt;
&amp;lt;!-- not yet completed&lt;br /&gt;
The page [[Sds_hd_Entitlement]] contains a list of participating higher education institutions and links to instructions on how to get an SDS@hd entitlement at each of them.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you register your own SV, you will be:&lt;br /&gt;
# held accountable for the co-workers in the SV&lt;br /&gt;
# asked to provide information for the two reports required by the DFG for their funding of SDS@hd&lt;br /&gt;
# likely asked for a contribution to a future DFG grant proposal for an extension of the storage system in your area of research (&amp;quot;wissenschaftliches Beiblatt&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
== Become Coworker of an &amp;quot;SV&amp;quot;==&lt;br /&gt;
&lt;br /&gt;
Your advisor (the &amp;quot;SV responsible&amp;quot;) will provide you with the following data for the SV:&lt;br /&gt;
* acronym&lt;br /&gt;
* password&lt;br /&gt;
&lt;br /&gt;
To become a co-worker of an &#039;&#039;SV&#039;&#039;, please log in at&lt;br /&gt;
* [https://sds-hd.urz.uni-heidelberg.de/management/index.php?mode=mitarbeit SDS@hd Managementtool] and provide acronym and password.  &lt;br /&gt;
&lt;br /&gt;
You will be assigned to the &#039;SV&#039; as a member.&lt;br /&gt;
&lt;br /&gt;
After submitting the request, you will receive an email describing the next steps. &lt;br /&gt;
The SV owner and any managers will be notified automatically.&lt;br /&gt;
&lt;br /&gt;
= Personal registration for SDS@hd =&lt;br /&gt;
&lt;br /&gt;
After step 1, you have to register your personal account on the storage system and set a service password.&lt;br /&gt;
Please visit: &lt;br /&gt;
* [https://bwservices.uni-heidelberg.de/ https://bwservices.uni-heidelberg.de] &lt;br /&gt;
*# Select your home organization from the list and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;&lt;br /&gt;
*# You will be directed to the &#039;&#039;Identity Provider&#039;&#039; of your home organisation  &lt;br /&gt;
*# Enter your home-organisational user ID / username and your home-organisational password, then click the &#039;&#039;&#039;Login&#039;&#039;&#039; button&lt;br /&gt;
*# You will be redirected back to the registration website [https://bwservices.uni-heidelberg.de/ https://bwservices.uni-heidelberg.de/] &lt;br /&gt;
*# &amp;lt;div&amp;gt;Select unter &#039;&#039;&#039;The following services are available&#039;&#039;&#039; the service &#039;&#039;&#039;SDS@hd - Scientific Data Storage&#039;&#039;&#039; &lt;br /&gt;
*# Click &#039;&#039;&#039;Register&#039;&#039;&#039;&lt;br /&gt;
*# Finally, set a service password for authentication on SDS@hd&lt;br /&gt;
&lt;br /&gt;
[[File:Sds_bwservices_servicepassword.png]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Changing Password ==&lt;br /&gt;
&lt;br /&gt;
At any time, you can set a new SDS@hd service password via the registration website [https://bwservices.uni-heidelberg.de/ https://bwservices.uni-heidelberg.de] by carrying out the following steps:&lt;br /&gt;
# visit [https://bwservices.uni-heidelberg.de/ https://bwservices.uni-heidelberg.de] and select your home organization &lt;br /&gt;
# authenticate yourself via your home-organizational user id / username and your home-organizational password&lt;br /&gt;
# on the left side, find &#039;&#039;&#039;SDS@hd&#039;&#039;&#039; and select &#039;&#039;&#039;Set Service Password&#039;&#039;&#039;&lt;br /&gt;
# set a new service password, repeat it, and click the &#039;&#039;&#039;Save&#039;&#039;&#039; button.&lt;br /&gt;
# the page confirms, e.g. &amp;quot;Das Passwort wurde bei dem Dienst geändert&amp;quot; (&amp;quot;the password has been changed for this service&amp;quot;)&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Helix/Filesystems&amp;diff=11573</id>
		<title>Helix/Filesystems</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Helix/Filesystems&amp;diff=11573"/>
		<updated>2022-12-07T15:59:25Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* Workspaces */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
The cluster storage system provides a large parallel file system based on [https://www.ibm.com/support/knowledgecenter/STXKQY/ibmspectrumscale_welcome.html IBM Spectrum Scale] &lt;br /&gt;
for $HOME, for workspaces, and for temporary storage via the $TMPDIR environment variable.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:16%&amp;quot; |&lt;br /&gt;
!style=&amp;quot;width:28%&amp;quot;| $HOME&lt;br /&gt;
!style=&amp;quot;width:28%&amp;quot;| Workspaces&lt;br /&gt;
!style=&amp;quot;width:28%&amp;quot;| $TMPDIR&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Visibility&lt;br /&gt;
| global&lt;br /&gt;
| global&lt;br /&gt;
| local&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Lifetime&lt;br /&gt;
| permanent&lt;br /&gt;
| workspace lifetime&lt;br /&gt;
| batch job walltime&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Quotas&lt;br /&gt;
| 200 GB&lt;br /&gt;
| 10 TB&lt;br /&gt;
| none&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Backup&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* global: all nodes access the same file system.&lt;br /&gt;
* local: each node has its own temporary file space.&lt;br /&gt;
* permanent: files are stored permanently.&lt;br /&gt;
* workspace lifetime: files are removed at end of workspace lifetime.&lt;br /&gt;
* batch job walltime: files are removed at end of the batch job.&lt;br /&gt;
&lt;br /&gt;
== $HOME ==&lt;br /&gt;
&lt;br /&gt;
Home directories are meant for permanent storage of files in continued use, such as source code, configuration files, and executable programs. There is currently no backup for the home directory. The disk space per user is limited to 200 GB. The used disk space is displayed with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;homequotainfo&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Workspaces ==&lt;br /&gt;
&lt;br /&gt;
Workspace tools can be used to get temporary space for larger amounts of data necessary for or produced by running jobs. To create a workspace you need to supply a name for the workspace and a lifetime in days. The maximum lifetime is 30 days. It is possible to extend the lifetime 10 times.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:30%&amp;quot; | Command&lt;br /&gt;
!style=&amp;quot;width:70%&amp;quot;| Action&lt;br /&gt;
|-&lt;br /&gt;
|ws_allocate -r 7 -m &amp;lt;email&amp;gt; foo 10 &lt;br /&gt;
|Allocate a workspace named foo for 10 days and set an email reminder 7 days before expiry. &lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; You should add your email address and a relative date for notification!&lt;br /&gt;
|-&lt;br /&gt;
|ws_list -a&lt;br /&gt;
|List all your workspaces.&lt;br /&gt;
|-&lt;br /&gt;
|ws_find foo&lt;br /&gt;
|Get absolute path of workspace foo.&lt;br /&gt;
|-&lt;br /&gt;
|ws_extend foo 5&lt;br /&gt;
|Extend lifetime of workspace foo by 5 days from now.&lt;br /&gt;
|-&lt;br /&gt;
|ws_release foo&lt;br /&gt;
|Manually erase your workspace foo.&lt;br /&gt;
|-&lt;br /&gt;
|ws_send_ical -m &amp;lt;email&amp;gt; foo&lt;br /&gt;
|Send an .ical calendar entry as a reminder before the workspace expires. &lt;br /&gt;
|-&lt;br /&gt;
|ws_share share/unshare/unshare-all &amp;lt;workspacename&amp;gt; &amp;lt;username&amp;gt; [username2]&lt;br /&gt;
|Share an existing workspace with other users. &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
If you plan to produce or copy large amounts of data into workspaces, please check the available space first. The used and free disk space on the workspace filesystem is displayed with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;workquotainfo&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Restoring expired Workspaces ====&lt;br /&gt;
At expiration time your workspace will be moved to a special, hidden directory. For a short period of time you can still restore your data into a valid workspace. For that, use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_restore -l&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to get a list of your expired workspaces, and then restore them into an &#039;&#039;&#039;existing, active workspace&#039;&#039;&#039; &amp;lt;code&amp;gt;restored_ws&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_restore &amp;lt;deleted_workspacename&amp;gt; restored_ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
NOTE: the expired workspace has to be specified using the full name as listed by &amp;lt;code&amp;gt;ws_restore -l&amp;lt;/code&amp;gt;, including username prefix and timestamp suffix (otherwise, it cannot be uniquely identified).&lt;br /&gt;
The target workspace, on the other hand, must be given with just its short name as listed by &amp;lt;code&amp;gt;ws_list&amp;lt;/code&amp;gt;, without the username prefix.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;lt;code&amp;gt;ws_restore&amp;lt;/code&amp;gt; only works within the same filesystem! So you have to ensure that the new workspace allocated with &amp;lt;code&amp;gt;ws_allocate&amp;lt;/code&amp;gt; is placed on the same filesystem as the expired workspace. Therefore, you can use the &amp;lt;code&amp;gt;-F &amp;lt;filesystem&amp;gt;&amp;lt;/code&amp;gt; flag if needed.&lt;br /&gt;
&lt;br /&gt;
==== Linking workspaces in Home ====&lt;br /&gt;
It might be valuable to have links to personal workspaces within a certain directory, e.g., the user home directory. The command &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_register &amp;lt;DIR&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
will create and manage links to all personal workspaces within the directory &amp;lt;DIR&amp;gt;. Calling this command will do the following:&lt;br /&gt;
&lt;br /&gt;
* The directory &amp;lt;DIR&amp;gt; will be created if necessary&lt;br /&gt;
* Links to all personal workspaces will be managed:&lt;br /&gt;
** Creates links to all available workspaces if not already present&lt;br /&gt;
** Removes links to released workspaces&lt;br /&gt;
&lt;br /&gt;
==== Sharing Workspaces ====&lt;br /&gt;
To simplify sharing of workspaces you can use the command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_share&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tool sets up the needed ACLs on the workspace to grant the specified users read-only access.&lt;br /&gt;
&lt;br /&gt;
If you need more specific permissions, you can also set up the ACLs yourself, as described here:&lt;br /&gt;
https://wiki.bwhpc.de/e/Workspace#Sharing_Workspace_Data_within_your_Workgroup&lt;br /&gt;
&lt;br /&gt;
==== Troubleshooting ====&lt;br /&gt;
If you are getting the error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error: could not create workspace directory!&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you should check the locale settings of your SSH client. Some clients (e.g. the one shipped with macOS) set values that are not valid. You should override LC_CTYPE with a valid locale value, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export LC_CTYPE=de_DE.UTF-8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To make this change persistent you can add this line also to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&lt;br /&gt;
A list of valid locales can be retrieved via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
locale -a&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are getting the error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error: only root can do this &amp;lt;workspacename&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
ensure that you are using the FULL workspace name, including your account prefix, e.g. &amp;quot;hd_ab123-workspacename&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== $TMPDIR ==&lt;br /&gt;
&lt;br /&gt;
The variable $TMPDIR provides file space for temporary data of running jobs.&lt;br /&gt;
Each node has its own $TMPDIR. &amp;lt;!-- It is possible to request a global $TMPDIR for a job. --&amp;gt;&lt;br /&gt;
The data in $TMPDIR become unavailable as soon as the job has finished.&lt;br /&gt;
&lt;br /&gt;
== Access to SDS@hd ==&lt;br /&gt;
&lt;br /&gt;
Your storage space on [http://sds-hd.urz.uni-heidelberg.de SDS@hd] is directly accessible on the bwForCluster Helix under /mnt/sds-hd/ on all login and compute nodes.&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Helix/Filesystems&amp;diff=11572</id>
		<title>Helix/Filesystems</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Helix/Filesystems&amp;diff=11572"/>
		<updated>2022-12-07T15:55:22Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* Workspaces */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
The cluster storage system provides a large parallel file system based on [https://www.ibm.com/support/knowledgecenter/STXKQY/ibmspectrumscale_welcome.html IBM Spectrum Scale] &lt;br /&gt;
for $HOME, for workspaces, and for temporary storage via the $TMPDIR environment variable.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:16%&amp;quot; |&lt;br /&gt;
!style=&amp;quot;width:28%&amp;quot;| $HOME&lt;br /&gt;
!style=&amp;quot;width:28%&amp;quot;| Workspaces&lt;br /&gt;
!style=&amp;quot;width:28%&amp;quot;| $TMPDIR&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Visibility&lt;br /&gt;
| global&lt;br /&gt;
| global&lt;br /&gt;
| local&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Lifetime&lt;br /&gt;
| permanent&lt;br /&gt;
| workspace lifetime&lt;br /&gt;
| batch job walltime&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Quotas&lt;br /&gt;
| 200 GB&lt;br /&gt;
| 10 TB&lt;br /&gt;
| none&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Backup&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* global: all nodes access the same file system.&lt;br /&gt;
* local: each node has its own temporary file space.&lt;br /&gt;
* permanent: files are stored permanently.&lt;br /&gt;
* workspace lifetime: files are removed at end of workspace lifetime.&lt;br /&gt;
* batch job walltime: files are removed at end of the batch job.&lt;br /&gt;
&lt;br /&gt;
== $HOME ==&lt;br /&gt;
&lt;br /&gt;
Home directories are meant for permanent storage of files in continued use, such as source code, configuration files, and executable programs. There is currently no backup for the home directory. The disk space per user is limited to 200 GB. The used disk space is displayed with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;homequotainfo&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Workspaces ==&lt;br /&gt;
&lt;br /&gt;
Workspace tools can be used to get temporary space for larger amounts of data necessary for or produced by running jobs. To create a workspace you need to supply a name for the workspace and a lifetime in days. The maximum lifetime is 30 days. It is possible to extend the lifetime 10 times.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:30%&amp;quot; | Command&lt;br /&gt;
!style=&amp;quot;width:70%&amp;quot;| Action&lt;br /&gt;
|-&lt;br /&gt;
|ws_allocate -r 7 -m &amp;lt;email&amp;gt; foo 10 &lt;br /&gt;
|Allocate a workspace named foo for 10 days and set an email reminder 7 days before expiry. &lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; You should add your email address and a relative date for notification!&lt;br /&gt;
|-&lt;br /&gt;
|ws_list -a&lt;br /&gt;
|List all your workspaces.&lt;br /&gt;
|-&lt;br /&gt;
|ws_find foo&lt;br /&gt;
|Get absolute path of workspace foo.&lt;br /&gt;
|-&lt;br /&gt;
|ws_extend foo 5&lt;br /&gt;
|Extend lifetime of workspace foo by 5 days from now.&lt;br /&gt;
|-&lt;br /&gt;
|ws_release foo&lt;br /&gt;
|Manually erase your workspace foo.&lt;br /&gt;
|-&lt;br /&gt;
|ws_send_ical -m &amp;lt;email&amp;gt; foo&lt;br /&gt;
|Send an .ical calendar entry as a reminder before the workspace expires. &lt;br /&gt;
|-&lt;br /&gt;
|ws_share share/unshare/unshare-all &amp;lt;workspacename&amp;gt; &amp;lt;username&amp;gt; [username2]&lt;br /&gt;
|Share an existing workspace with other users. &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
If you plan to produce or copy large amounts of data into workspaces, please check the available space first. The used and free disk space on the workspace filesystem is displayed with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;workquotainfo&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Restoring expired Workspaces ====&lt;br /&gt;
At expiration time your workspace will be moved to a special, hidden directory. For a short period of time you can still restore your data into a valid workspace. For that, use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_restore -l&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to get a list of your expired workspaces, and then restore them into an &#039;&#039;&#039;existing, active workspace&#039;&#039;&#039; &amp;lt;code&amp;gt;restored_ws&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_restore &amp;lt;deleted_workspacename&amp;gt; restored_ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
NOTE: the expired workspace has to be specified using the full name as listed by &amp;lt;code&amp;gt;ws_restore -l&amp;lt;/code&amp;gt;, including username prefix and timestamp suffix (otherwise, it cannot be uniquely identified).&lt;br /&gt;
The target workspace, on the other hand, must be given with just its short name as listed by &amp;lt;code&amp;gt;ws_list&amp;lt;/code&amp;gt;, without the username prefix.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;lt;code&amp;gt;ws_restore&amp;lt;/code&amp;gt; only works within the same filesystem! So you have to ensure that the new workspace allocated with &amp;lt;code&amp;gt;ws_allocate&amp;lt;/code&amp;gt; is placed on the same filesystem as the expired workspace. Therefore, you can use the &amp;lt;code&amp;gt;-F &amp;lt;filesystem&amp;gt;&amp;lt;/code&amp;gt; flag if needed.&lt;br /&gt;
&lt;br /&gt;
==== Linking workspaces in Home ====&lt;br /&gt;
It might be valuable to have links to personal workspaces within a certain directory, e.g., the user home directory. The command &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_register &amp;lt;DIR&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
will create and manage links to all personal workspaces within the directory &amp;lt;DIR&amp;gt;. Calling this command will do the following:&lt;br /&gt;
&lt;br /&gt;
* The directory &amp;lt;DIR&amp;gt; will be created if necessary&lt;br /&gt;
* Links to all personal workspaces will be managed:&lt;br /&gt;
** Creates links to all available workspaces if not already present&lt;br /&gt;
** Removes links to released workspaces&lt;br /&gt;
&lt;br /&gt;
==== Troubleshooting ====&lt;br /&gt;
If you are getting the error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error: could not create workspace directory!&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you should check the locale settings of your SSH client. Some clients (e.g. the one shipped with macOS) set values that are not valid. You should override LC_CTYPE with a valid locale value, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export LC_CTYPE=de_DE.UTF-8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To make this change persistent you can add this line also to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&lt;br /&gt;
A list of valid locales can be retrieved via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
locale -a&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are getting the error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error: only root can do this &amp;lt;workspacename&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
ensure that you are using the FULL workspace name, including your account prefix, e.g. &amp;quot;hd_ab123-workspacename&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== $TMPDIR ==&lt;br /&gt;
&lt;br /&gt;
The variable $TMPDIR provides file space for temporary data of running jobs.&lt;br /&gt;
Each node has its own $TMPDIR. &amp;lt;!-- It is possible to request a global $TMPDIR for a job. --&amp;gt;&lt;br /&gt;
The data in $TMPDIR become unavailable as soon as the job has finished.&lt;br /&gt;
&lt;br /&gt;
== Access to SDS@hd ==&lt;br /&gt;
&lt;br /&gt;
Your storage space on [http://sds-hd.urz.uni-heidelberg.de SDS@hd] is directly accessible on the bwForCluster Helix under /mnt/sds-hd/ on all login and compute nodes.&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Helix/Filesystems&amp;diff=11203</id>
		<title>Helix/Filesystems</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Helix/Filesystems&amp;diff=11203"/>
		<updated>2022-10-12T08:08:05Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* Troubleshooting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
The cluster storage system provides a large parallel file system based on [https://www.ibm.com/support/knowledgecenter/STXKQY/ibmspectrumscale_welcome.html IBM Spectrum Scale] &lt;br /&gt;
for $HOME, for workspaces, and for temporary storage via the $TMPDIR environment variable.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:16%&amp;quot; |&lt;br /&gt;
!style=&amp;quot;width:28%&amp;quot;| $HOME&lt;br /&gt;
!style=&amp;quot;width:28%&amp;quot;| Workspaces&lt;br /&gt;
!style=&amp;quot;width:28%&amp;quot;| $TMPDIR&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Visibility&lt;br /&gt;
| global&lt;br /&gt;
| global&lt;br /&gt;
| local&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Lifetime&lt;br /&gt;
| permanent&lt;br /&gt;
| workspace lifetime&lt;br /&gt;
| batch job walltime&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Quotas&lt;br /&gt;
| 200 GB&lt;br /&gt;
| 10 TB&lt;br /&gt;
| none&lt;br /&gt;
|-&lt;br /&gt;
!scope=&amp;quot;column&amp;quot; | Backup&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
| no&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* global: all nodes access the same file system.&lt;br /&gt;
* local: each node has its own temporary file space.&lt;br /&gt;
* permanent: files are stored permanently.&lt;br /&gt;
* workspace lifetime: files are removed at end of workspace lifetime.&lt;br /&gt;
* batch job walltime: files are removed at end of the batch job.&lt;br /&gt;
&lt;br /&gt;
== $HOME ==&lt;br /&gt;
&lt;br /&gt;
Home directories are meant for permanent storage of files in continued use, such as source code, configuration files, and executable programs. There is currently no backup for the home directory. The disk space per user is limited to 200 GB. The used disk space is displayed with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;homequotainfo&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Workspaces ==&lt;br /&gt;
&lt;br /&gt;
Workspace tools can be used to get temporary space for larger amounts of data necessary for or produced by running jobs. To create a workspace you need to supply a name for the workspace and a lifetime in days. The maximum lifetime is 30 days. It is possible to extend the lifetime 10 times.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!style=&amp;quot;width:30%&amp;quot; | Command&lt;br /&gt;
!style=&amp;quot;width:70%&amp;quot;| Action&lt;br /&gt;
|-&lt;br /&gt;
|ws_allocate -r 7 -m &amp;lt;email&amp;gt; foo 10 &lt;br /&gt;
|Allocate a workspace named foo for 10 days and set an email reminder 7 days before expiry. &lt;br /&gt;
&#039;&#039;&#039;Important:&#039;&#039;&#039; You should add your email address and a relative date for notification!&lt;br /&gt;
|-&lt;br /&gt;
|ws_list -a&lt;br /&gt;
|List all your workspaces.&lt;br /&gt;
|-&lt;br /&gt;
|ws_find foo&lt;br /&gt;
|Get absolute path of workspace foo.&lt;br /&gt;
|-&lt;br /&gt;
|ws_extend foo 5&lt;br /&gt;
|Extend lifetime of workspace foo by 5 days from now.&lt;br /&gt;
|-&lt;br /&gt;
|ws_release foo&lt;br /&gt;
|Manually erase your workspace foo.&lt;br /&gt;
|-&lt;br /&gt;
|ws_send_ical -m &amp;lt;email&amp;gt; foo&lt;br /&gt;
|Send an .ical calendar entry as a reminder before the workspace expires. &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
If you plan to produce or copy large amounts of data into workspaces, please check the available space first. The used and free disk space on the workspace filesystem is displayed with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;workquotainfo&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Restoring expired Workspaces ====&lt;br /&gt;
At expiration time your workspace will be moved to a special, hidden directory. For a short period of time you can still restore your data into a valid workspace. For that, use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_restore -l&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to get a list of your expired workspaces, and then restore them into an &#039;&#039;&#039;existing, active workspace&#039;&#039;&#039; &amp;lt;code&amp;gt;restored_ws&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_restore &amp;lt;deleted_workspacename&amp;gt; restored_ws&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
NOTE: the expired workspace has to be specified using the full name as listed by &amp;lt;code&amp;gt;ws_restore -l&amp;lt;/code&amp;gt;, including username prefix and timestamp suffix (otherwise, it cannot be uniquely identified).&lt;br /&gt;
The target workspace, on the other hand, must be given with just its short name as listed by &amp;lt;code&amp;gt;ws_list&amp;lt;/code&amp;gt;, without the username prefix.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;lt;code&amp;gt;ws_restore&amp;lt;/code&amp;gt; only works within the same filesystem! So you have to ensure that the new workspace allocated with &amp;lt;code&amp;gt;ws_allocate&amp;lt;/code&amp;gt; is placed on the same filesystem as the expired workspace. Therefore, you can use the &amp;lt;code&amp;gt;-F &amp;lt;filesystem&amp;gt;&amp;lt;/code&amp;gt; flag if needed.&lt;br /&gt;
&lt;br /&gt;
==== Linking workspaces in Home ====&lt;br /&gt;
It might be valuable to have links to personal workspaces within a certain directory, e.g., the user home directory. The command &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ws_register &amp;lt;DIR&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
will create and manage links to all personal workspaces within the directory &amp;lt;DIR&amp;gt;. Calling this command will do the following:&lt;br /&gt;
&lt;br /&gt;
* The directory &amp;lt;DIR&amp;gt; will be created if necessary&lt;br /&gt;
* Links to all personal workspaces will be managed:&lt;br /&gt;
** Creates links to all available workspaces if not already present&lt;br /&gt;
** Removes links to released workspaces&lt;br /&gt;
&lt;br /&gt;
==== Troubleshooting ====&lt;br /&gt;
If you are getting the error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error: could not create workspace directory!&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you should check the locale settings of your SSH client. Some clients (e.g. the one shipped with macOS) set values that are not valid. You should override LC_CTYPE with a valid locale value, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export LC_CTYPE=de_DE.UTF-8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To make this change persistent you can add this line also to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&lt;br /&gt;
A list of valid locales can be retrieved via&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
locale -a&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are getting the error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error: only root can do this &amp;lt;workspacename&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
ensure that you are using the FULL workspace name, including your account prefix, e.g. &amp;quot;hd_ab123-workspacename&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== $TMPDIR ==&lt;br /&gt;
&lt;br /&gt;
The variable $TMPDIR provides file space for temporary data of running jobs.&lt;br /&gt;
Each node has its own $TMPDIR. &amp;lt;!-- It is possible to request a global $TMPDIR for a job. --&amp;gt;&lt;br /&gt;
The data in $TMPDIR become unavailable as soon as the job has finished.&lt;br /&gt;
&lt;br /&gt;
== Access to SDS@hd ==&lt;br /&gt;
&lt;br /&gt;
Your storage space on [http://sds-hd.urz.uni-heidelberg.de SDS@hd] is directly accessible on the bwForCluster Helix under /mnt/sds-hd/ on all login and compute nodes.&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/SSH&amp;diff=10264</id>
		<title>Registration/SSH</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/SSH&amp;diff=10264"/>
		<updated>2022-03-30T13:28:13Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* Revoke/Delete SSH Key */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
This process is only necessary for the bwUniCluster and the bwForCluster MLS&amp;amp;WISO.&lt;br /&gt;
On the other clusters, SSH keys can still be copied to the &amp;lt;code&amp;gt;authorized_keys&amp;lt;/code&amp;gt; file.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registering SSH Keys with your Cluster =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Interactive SSH Keys are not valid all the time, but only for one hour after the last 2-factor login.&lt;br /&gt;
They have to be &amp;quot;unlocked&amp;quot; by entering the OTP and service password.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SSH Keys&#039;&#039;&#039; are a mechanism for logging into a computer system without having to enter a password. Instead of authenticating yourself with something you know (a password), you prove your identity by showing the server something you have (a cryptographic key).&lt;br /&gt;
&lt;br /&gt;
The usual process is the following:&lt;br /&gt;
&lt;br /&gt;
* The user generates a pair of SSH Keys, a private key and a public key, on their local system. The private key never leaves the local system.&lt;br /&gt;
&lt;br /&gt;
* The user then logs into the remote system using the remote system password and adds the public key to a file called ~/.ssh/authorized_keys.&lt;br /&gt;
&lt;br /&gt;
* All following logins will no longer require the entry of the remote system password because the local system can prove to the remote system that it has a private key matching the public key on file.&lt;br /&gt;
&lt;br /&gt;
While SSH Keys have many advantages, the concept also has &#039;&#039;&#039;a number of issues&#039;&#039;&#039; which make it hard to handle them securely:&lt;br /&gt;
&lt;br /&gt;
* The private key on the local system is supposed to be protected by a strong passphrase. There is no possibility for the server to check if this is the case. Many users do not use a strong passphrase or do not use any passphrase at all. If such a private key is stolen, an attacker can immediately use it to access the remote system.&lt;br /&gt;
&lt;br /&gt;
* There is no concept of validity. Users are not forced to regularly generate new SSH Key pairs and replace the old ones. Often the same key pair is used for many years and the users have no overview of how many systems they have stored their SSH Keys on.&lt;br /&gt;
&lt;br /&gt;
* SSH Keys can be restricted so they can only be used to execute specific commands on the server, or to log in from specified IP addresses. Most users do not do this.&lt;br /&gt;
&lt;br /&gt;
To fix these issues &#039;&#039;&#039;it is no longer possible to self-manage your SSH Keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039; on bwUniCluster/bwForCluster.&lt;br /&gt;
SSH Keys have to be managed through bwIDM/bwServices instead.&lt;br /&gt;
Existing authorized_keys files are ignored.&lt;br /&gt;
&lt;br /&gt;
== Minimum requirements for SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
Algorithms and Key sizes:&lt;br /&gt;
&lt;br /&gt;
* 2048 bits or more for RSA&lt;br /&gt;
* 521 bits for ECDSA&lt;br /&gt;
* 256 bits (default) for ED25519&lt;br /&gt;
&lt;br /&gt;
ECDSA-SK and ED25519-SK keys (for use with U2F Hardware Tokens) cannot be used yet.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please set a strong passphrase for your private keys.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
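A key pair meeting these requirements can be generated with &amp;lt;code&amp;gt;ssh-keygen&amp;lt;/code&amp;gt; (a sketch; the file name and comment are examples, and the empty passphrase is used only so the sketch runs non-interactively; on a real system, set a strong passphrase):&lt;br /&gt;

```shell
# Sketch: generate an ED25519 key pair into a scratch directory.
# -N '' (empty passphrase) is used here only so the sketch is
# non-interactive; on a real system, choose a strong passphrase.
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -C 'bwhpc-example' -f "$keydir/id_ed25519"
# The .pub file is the part you later paste into bwIDM/bwServices.
cat "$keydir/id_ed25519.pub"
rm -rf "$keydir"
```

The private key (&amp;lt;code&amp;gt;id_ed25519&amp;lt;/code&amp;gt; in this sketch) stays on your local system; only the &amp;lt;code&amp;gt;.pub&amp;lt;/code&amp;gt; file is uploaded.&lt;br /&gt;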
&lt;br /&gt;
= Adding a new SSH Key =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
* Newly added keys are valid for three months. After that, they are revoked and placed on a &amp;quot;revocation list&amp;quot; so that they cannot be reused.&lt;br /&gt;
* Copy only the contents of your public ssh key file to bwIDM/bwServices. The file ends with &amp;lt;code&amp;gt;.pub&amp;lt;/code&amp;gt; ( e.g. &amp;lt;code&amp;gt;~/.ssh/&amp;lt;filename&amp;gt;.pub&amp;lt;/code&amp;gt;).&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SSH keys&#039;&#039;&#039; are generally managed via the &#039;&#039;&#039;My SSH Pubkeys&#039;&#039;&#039; menu entry on the registration pages for the clusters.&lt;br /&gt;
Here you can add and revoke SSH keys. To add an SSH key, please follow these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Select the cluster&#039;&#039;&#039; for which you want to manage SSH keys:&amp;lt;/br&amp;gt; &amp;amp;rarr; [https://login.bwidm.de/user/ssh-keys.xhtml &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039;]&amp;lt;/br&amp;gt; &amp;amp;rarr; [https://bwservices.uni-heidelberg.de/user/ssh-keys.xhtml &#039;&#039;&#039;bwForCluster MLS&amp;amp;WISO&#039;&#039;&#039;]&lt;br /&gt;
[[File:BwIDM-twofa.png|center|600px|thumb|My SSH Pubkeys.]]&lt;br /&gt;
&lt;br /&gt;
2. Click the &#039;&#039;&#039;Add SSH Key&#039;&#039;&#039; or &#039;&#039;&#039;SSH Key Hochladen&#039;&#039;&#039; button.&lt;br /&gt;
[[File:Bwunicluster 2.0 access ssh keys empty.png|center|400px|thumb|Add new SSH key.]]&lt;br /&gt;
&lt;br /&gt;
3. A new window will appear.&lt;br /&gt;
Enter a name for the key and paste your SSH public key (file &amp;lt;code&amp;gt;~/.ssh/&amp;lt;filename&amp;gt;.pub&amp;lt;/code&amp;gt;) into the box labelled &amp;quot;SSH Key:&amp;quot;.&lt;br /&gt;
Click on the button labelled &#039;&#039;&#039;Add&#039;&#039;&#039; or &#039;&#039;&#039;Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
[[File:Ssh-key.png|center|600px|thumb|Add new SSH key.]]&lt;br /&gt;
&lt;br /&gt;
4. If everything worked fine, your new key will show up in the user interface:&lt;br /&gt;
[[File:Ssh-success.png|center|800px|thumb|New SSH key added.]]&lt;br /&gt;
&lt;br /&gt;
Once you have added SSH keys to the system, you can bind them to one or more services to use either for interactive logins (&#039;&#039;&#039;Interactive key&#039;&#039;&#039;) or for automatic logins (&#039;&#039;&#039;Command key&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Registering an Interactive Key ==&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Interactive SSH Keys are not valid all the time, but only for one hour after the last 2-factor login.&lt;br /&gt;
They have to be &amp;quot;unlocked&amp;quot; by entering the OTP and service password.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Interactive Keys&#039;&#039;&#039; can be used to log into a system for interactive use.&lt;br /&gt;
Perform the following steps to register an interactive key:&lt;br /&gt;
&lt;br /&gt;
1. [[Registration/SSH#Adding_a_new_SSH_Key|&#039;&#039;&#039;Add a new interactive SSH key&#039;&#039;&#039;]] if you have not already done so.&lt;br /&gt;
&lt;br /&gt;
2. Select &#039;&#039;&#039;Registered services/Registrierte Dienste&#039;&#039;&#039; from the top menu and click &#039;&#039;&#039;Set SSH Key/SSH Key setzen&#039;&#039;&#039; for the cluster for which you want to use the SSH key.&lt;br /&gt;
[[File:BwIDM-registered.png|center|600px|thumb|Select Cluster for which you want to use the SSH key.]]&lt;br /&gt;
&lt;br /&gt;
3. The upper block displays the SSH keys currently registered for the service.&lt;br /&gt;
The bottom block displays all the public SSH keys associated with your account.&lt;br /&gt;
Find the SSH key you want to use and click &#039;&#039;&#039;Add/Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
[[File:Ssh-service-int.png|center|800px|thumb|Add SSH key to service.]]&lt;br /&gt;
&lt;br /&gt;
4. A new window appears.&lt;br /&gt;
Select &#039;&#039;&#039;Interactive&#039;&#039;&#039; as the usage type, enter an optional comment and click &#039;&#039;&#039;Add/Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
[[File:Ssh-int.png|center|600px|thumb|Add interactive SSH key to service.]]&lt;br /&gt;
&lt;br /&gt;
5. Your SSH key is now registered for interactive use with this service.&lt;br /&gt;
[[File:Ssh-service.png|center|800px|thumb|SSH key is now registered for interactive use.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Registering a Command Key ==&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
SSH command keys are always valid and do not need to be unlocked with a 2-factor login.&lt;br /&gt;
This makes these keys extremely valuable to a potential attacker and poses a security risk.&lt;br /&gt;
Therefore, additional restrictions apply to these keys:&lt;br /&gt;
* They must be limited to a single command to be executed.&lt;br /&gt;
* They must be limited to a single IP address (e.g., the workflow server) or a small number of IP addresses (e.g., the institution&#039;s subnet).&lt;br /&gt;
* They must be reviewed and approved by a cluster administrator before they can be used.&lt;br /&gt;
* Validity is reduced to one month.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Command Keys&#039;&#039;&#039; can be used for automatic workflows.&lt;br /&gt;
Perform the following steps to register a &amp;quot;Command key&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
1. [[Registration/SSH#Adding_a_new_SSH_Key|&#039;&#039;&#039;Add a new &amp;quot;command SSH key&amp;quot;&#039;&#039;&#039;]] if you have not already done so.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select &#039;&#039;&#039;Registered services/Registrierte Dienste&#039;&#039;&#039; from the top menu and click &#039;&#039;&#039;Set SSH Key/SSH Key setzen&#039;&#039;&#039; for the cluster for which you want to use the SSH key.&lt;br /&gt;
[[File:BwIDM-registered.png|center|600px|thumb|Select Cluster for which you want to use the SSH key.]]&lt;br /&gt;
&lt;br /&gt;
3. The upper block displays the SSH keys currently registered for the service.&lt;br /&gt;
The bottom block displays all the public SSH keys associated with your account.&lt;br /&gt;
Find the SSH key you want to use and click &#039;&#039;&#039;Add/Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
[[File:Ssh-service-com.png|center|800px|thumb|Add SSH key to service.]]&lt;br /&gt;
&lt;br /&gt;
4. A new window appears.&lt;br /&gt;
Select &#039;&#039;&#039;Command&#039;&#039;&#039; as the usage type.&lt;br /&gt;
Type the full command with the full path, including all parameters, in the &#039;&#039;&#039;Command&#039;&#039;&#039; text box.&lt;br /&gt;
Specify a network address, list, or range in the &#039;&#039;&#039;From&#039;&#039;&#039; text field (see [https://man.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/sshd.8#from=_pattern-list_ man 8 sshd] for more info).&lt;br /&gt;
Please also provide a comment to speed up the approval process.&lt;br /&gt;
Click &#039;&#039;&#039;Add/Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! | Example&lt;br /&gt;
|-&lt;br /&gt;
| If you want to register a command key to be able to transfer data automatically, please use the following string in the &#039;&#039;&#039;Command&#039;&#039;&#039; text field (please verify the path on the cluster first):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr[/local]/bin/rrsync -ro / -rw /&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
[[File:Ssh-com.png|center|600px|thumb|Add command SSH key to service.]]&lt;br /&gt;
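The &#039;&#039;&#039;From&#039;&#039;&#039; field uses the sshd pattern-list syntax linked above; for illustration (with hypothetical addresses), a single workflow server plus an institutional subnet could be written as:&lt;br /&gt;

```
129.206.7.8,129.206.*.*
```

A leading &amp;lt;code&amp;gt;!&amp;lt;/code&amp;gt; negates a pattern, and CIDR notation such as &amp;lt;code&amp;gt;129.206.0.0/16&amp;lt;/code&amp;gt; is also accepted by sshd.&lt;br /&gt;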
&lt;br /&gt;
5. After the key has been added, it will be marked as &#039;&#039;&#039;Pending&#039;&#039;&#039;:&lt;br /&gt;
You will receive an e-mail as soon as the key has been approved and can be used.&lt;br /&gt;
[[File:Ssh-service.png|center|800px|thumb|SSH key is registered and pending approval.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Revoke/Delete SSH Key ==&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Revoked keys are locked and can no longer be used.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SSH keys&#039;&#039;&#039; are generally managed via the &#039;&#039;&#039;My SSH Pubkeys&#039;&#039;&#039; menu entry on the registration pages for the clusters.&lt;br /&gt;
Here you can add and revoke SSH keys. To revoke or delete an SSH key, please follow these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Select the cluster&#039;&#039;&#039; for which you want to manage SSH keys:&amp;lt;/br&amp;gt; &amp;amp;rarr; [https://login.bwidm.de/user/ssh-keys.xhtml &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039;]&amp;lt;/br&amp;gt; &amp;amp;rarr; [https://bwservices.uni-heidelberg.de/user/ssh-keys.xhtml &#039;&#039;&#039;bwForCluster MLS&amp;amp;WISO&#039;&#039;&#039;]&lt;br /&gt;
[[File:BwIDM-twofa.png|center|600px|thumb|My SSH Pubkeys.]]&lt;br /&gt;
&lt;br /&gt;
2. Click &#039;&#039;&#039;REVOKE/ZURÜCKZIEHEN&#039;&#039;&#039; next to the SSH key you want to revoke.&lt;br /&gt;
[[File:Ssh-success.png|center|800px|thumb|Revoke SSH key.]]&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Registration/SSH&amp;diff=10263</id>
		<title>Registration/SSH</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Registration/SSH&amp;diff=10263"/>
		<updated>2022-03-30T13:27:20Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: /* Adding a new SSH Key */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
This process is only necessary for the bwUniCluster and the bwForCluster MLS&amp;amp;WISO.&lt;br /&gt;
On the other clusters, SSH keys can still be copied to the &amp;lt;code&amp;gt;authorized_keys&amp;lt;/code&amp;gt; file.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Registering SSH Keys with your Cluster =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Interactive SSH Keys are not valid all the time, but only for one hour after the last 2-factor login.&lt;br /&gt;
They have to be &amp;quot;unlocked&amp;quot; by entering the OTP and service password.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SSH Keys&#039;&#039;&#039; are a mechanism for logging into a computer system without having to enter a password. Instead of authenticating yourself with something you know (a password), you prove your identity by showing the server something you have (a cryptographic key).&lt;br /&gt;
&lt;br /&gt;
The usual process is the following:&lt;br /&gt;
&lt;br /&gt;
* The user generates a pair of SSH Keys, a private key and a public key, on their local system. The private key never leaves the local system.&lt;br /&gt;
&lt;br /&gt;
* The user then logs into the remote system using the remote system password and adds the public key to a file called ~/.ssh/authorized_keys.&lt;br /&gt;
&lt;br /&gt;
* All following logins will no longer require the entry of the remote system password because the local system can prove to the remote system that it has a private key matching the public key on file.&lt;br /&gt;
&lt;br /&gt;
While SSH Keys have many advantages, the concept also has &#039;&#039;&#039;a number of issues&#039;&#039;&#039; which make it hard to handle them securely:&lt;br /&gt;
&lt;br /&gt;
* The private key on the local system is supposed to be protected by a strong passphrase. There is no possibility for the server to check if this is the case. Many users do not use a strong passphrase or do not use any passphrase at all. If such a private key is stolen, an attacker can immediately use it to access the remote system.&lt;br /&gt;
&lt;br /&gt;
* There is no concept of validity. Users are not forced to regularly generate new SSH Key pairs and replace the old ones. Often the same key pair is used for many years and the users have no overview of how many systems they have stored their SSH Keys on.&lt;br /&gt;
&lt;br /&gt;
* SSH Keys can be restricted so they can only be used to execute specific commands on the server, or to log in from specified IP addresses. Most users do not do this.&lt;br /&gt;
&lt;br /&gt;
To fix these issues &#039;&#039;&#039;it is no longer possible to self-manage your SSH Keys by adding them to the ~/.ssh/authorized_keys file&#039;&#039;&#039; on bwUniCluster/bwForCluster.&lt;br /&gt;
SSH Keys have to be managed through bwIDM/bwServices instead.&lt;br /&gt;
Existing authorized_keys files are ignored.&lt;br /&gt;
&lt;br /&gt;
== Minimum requirements for SSH Keys ==&lt;br /&gt;
&lt;br /&gt;
Algorithms and Key sizes:&lt;br /&gt;
&lt;br /&gt;
* 2048 bits or more for RSA&lt;br /&gt;
* 521 bits for ECDSA&lt;br /&gt;
* 256 bits (default) for ED25519&lt;br /&gt;
&lt;br /&gt;
ECDSA-SK and ED25519-SK keys (for use with U2F Hardware Tokens) cannot be used yet.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please set a strong passphrase for your private keys.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Adding a new SSH Key =&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
* Newly added keys are valid for three months. After that, they are revoked and placed on a &amp;quot;revocation list&amp;quot; so that they cannot be reused.&lt;br /&gt;
* Copy only the contents of your public ssh key file to bwIDM/bwServices. The file ends with &amp;lt;code&amp;gt;.pub&amp;lt;/code&amp;gt; ( e.g. &amp;lt;code&amp;gt;~/.ssh/&amp;lt;filename&amp;gt;.pub&amp;lt;/code&amp;gt;).&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SSH keys&#039;&#039;&#039; are generally managed via the &#039;&#039;&#039;My SSH Pubkeys&#039;&#039;&#039; menu entry on the registration pages for the clusters.&lt;br /&gt;
Here you can add and revoke SSH keys. To add an SSH key, please follow these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Select the cluster&#039;&#039;&#039; for which you want to manage SSH keys:&amp;lt;/br&amp;gt; &amp;amp;rarr; [https://login.bwidm.de/user/ssh-keys.xhtml &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039;]&amp;lt;/br&amp;gt; &amp;amp;rarr; [https://bwservices.uni-heidelberg.de/user/ssh-keys.xhtml &#039;&#039;&#039;bwForCluster MLS&amp;amp;WISO&#039;&#039;&#039;]&lt;br /&gt;
[[File:BwIDM-twofa.png|center|600px|thumb|My SSH Pubkeys.]]&lt;br /&gt;
&lt;br /&gt;
2. Click the &#039;&#039;&#039;Add SSH Key&#039;&#039;&#039; or &#039;&#039;&#039;SSH Key Hochladen&#039;&#039;&#039; button.&lt;br /&gt;
[[File:Bwunicluster 2.0 access ssh keys empty.png|center|400px|thumb|Add new SSH key.]]&lt;br /&gt;
&lt;br /&gt;
3. A new window will appear.&lt;br /&gt;
Enter a name for the key and paste your SSH public key (file &amp;lt;code&amp;gt;~/.ssh/&amp;lt;filename&amp;gt;.pub&amp;lt;/code&amp;gt;) into the box labelled &amp;quot;SSH Key:&amp;quot;.&lt;br /&gt;
Click on the button labelled &#039;&#039;&#039;Add&#039;&#039;&#039; or &#039;&#039;&#039;Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
[[File:Ssh-key.png|center|600px|thumb|Add new SSH key.]]&lt;br /&gt;
&lt;br /&gt;
4. If everything worked fine, your new key will show up in the user interface:&lt;br /&gt;
[[File:Ssh-success.png|center|800px|thumb|New SSH key added.]]&lt;br /&gt;
&lt;br /&gt;
Once you have added SSH keys to the system, you can bind them to one or more services to use either for interactive logins (&#039;&#039;&#039;Interactive key&#039;&#039;&#039;) or for automatic logins (&#039;&#039;&#039;Command key&#039;&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Registering an Interactive Key ==&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Interactive SSH Keys are not valid all the time, but only for one hour after the last 2-factor login.&lt;br /&gt;
They have to be &amp;quot;unlocked&amp;quot; by entering the OTP and service password.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Interactive Keys&#039;&#039;&#039; can be used to log into a system for interactive use.&lt;br /&gt;
Perform the following steps to register an interactive key:&lt;br /&gt;
&lt;br /&gt;
1. [[Registration/SSH#Adding_a_new_SSH_Key|&#039;&#039;&#039;Add a new interactive SSH key&#039;&#039;&#039;]] if you have not already done so.&lt;br /&gt;
&lt;br /&gt;
2. Select &#039;&#039;&#039;Registered services/Registrierte Dienste&#039;&#039;&#039; from the top menu and click &#039;&#039;&#039;Set SSH Key/SSH Key setzen&#039;&#039;&#039; for the cluster for which you want to use the SSH key.&lt;br /&gt;
[[File:BwIDM-registered.png|center|600px|thumb|Select Cluster for which you want to use the SSH key.]]&lt;br /&gt;
&lt;br /&gt;
3. The upper block displays the SSH keys currently registered for the service.&lt;br /&gt;
The bottom block displays all the public SSH keys associated with your account.&lt;br /&gt;
Find the SSH key you want to use and click &#039;&#039;&#039;Add/Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
[[File:Ssh-service-int.png|center|800px|thumb|Add SSH key to service.]]&lt;br /&gt;
&lt;br /&gt;
4. A new window appears.&lt;br /&gt;
Select &#039;&#039;&#039;Interactive&#039;&#039;&#039; as the usage type, enter an optional comment and click &#039;&#039;&#039;Add/Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
[[File:Ssh-int.png|center|600px|thumb|Add interactive SSH key to service.]]&lt;br /&gt;
&lt;br /&gt;
5. Your SSH key is now registered for interactive use with this service.&lt;br /&gt;
[[File:Ssh-service.png|center|800px|thumb|SSH key is now registered for interactive use.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Registering a Command Key ==&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
SSH command keys are always valid and do not need to be unlocked with a 2-factor login.&lt;br /&gt;
This makes these keys extremely valuable to a potential attacker and poses a security risk.&lt;br /&gt;
Therefore, additional restrictions apply to these keys:&lt;br /&gt;
* They must be limited to a single command to be executed.&lt;br /&gt;
* They must be limited to a single IP address (e.g., the workflow server) or a small number of IP addresses (e.g., the institution&#039;s subnet).&lt;br /&gt;
* They must be reviewed and approved by a cluster administrator before they can be used.&lt;br /&gt;
* Validity is reduced to one month.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Command Keys&#039;&#039;&#039; can be used for automatic workflows.&lt;br /&gt;
Perform the following steps to register a &amp;quot;Command key&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
1. [[Registration/SSH#Adding_a_new_SSH_Key|&#039;&#039;&#039;Add a new &amp;quot;command SSH key&amp;quot;&#039;&#039;&#039;]] if you have not already done so.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Select &#039;&#039;&#039;Registered services/Registrierte Dienste&#039;&#039;&#039; from the top menu and click &#039;&#039;&#039;Set SSH Key/SSH Key setzen&#039;&#039;&#039; for the cluster for which you want to use the SSH key.&lt;br /&gt;
[[File:BwIDM-registered.png|center|600px|thumb|Select Cluster for which you want to use the SSH key.]]&lt;br /&gt;
&lt;br /&gt;
3. The upper block displays the SSH keys currently registered for the service.&lt;br /&gt;
The bottom block displays all the public SSH keys associated with your account.&lt;br /&gt;
Find the SSH key you want to use and click &#039;&#039;&#039;Add/Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
[[File:Ssh-service-com.png|center|800px|thumb|Add SSH key to service.]]&lt;br /&gt;
&lt;br /&gt;
4. A new window appears.&lt;br /&gt;
Select &#039;&#039;&#039;Command&#039;&#039;&#039; as the usage type.&lt;br /&gt;
Type the full command with the full path, including all parameters, in the &#039;&#039;&#039;Command&#039;&#039;&#039; text box.&lt;br /&gt;
Specify a network address, list, or range in the &#039;&#039;&#039;From&#039;&#039;&#039; text field (see [https://man.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/sshd.8#from=_pattern-list_ man 8 sshd] for more info).&lt;br /&gt;
Please also provide a comment to speed up the approval process.&lt;br /&gt;
Click &#039;&#039;&#039;Add/Hinzufügen&#039;&#039;&#039;.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! | Example&lt;br /&gt;
|-&lt;br /&gt;
| If you want to register a command key to be able to transfer data automatically, please use the following string in the &#039;&#039;&#039;Command&#039;&#039;&#039; text field (please verify the path on the cluster first):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr[/local]/bin/rrsync -ro / -rw /&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
[[File:Ssh-com.png|center|600px|thumb|Add command SSH key to service.]]&lt;br /&gt;
&lt;br /&gt;
5. After the key has been added, it will be marked as &#039;&#039;&#039;Pending&#039;&#039;&#039;:&lt;br /&gt;
You will receive an e-mail as soon as the key has been approved and can be used.&lt;br /&gt;
[[File:Ssh-service.png|center|800px|thumb|SSH key is registered and pending approval.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Revoke/Delete SSH Key ==&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;background:#deffee; width:100%;&amp;quot;&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
[[Image:Attention.svg|center|25px]]&lt;br /&gt;
|style=&amp;quot;padding:5px; background:#cef2e0; text-align:left&amp;quot;|&lt;br /&gt;
Revoked keys are locked and can no longer be used.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SSH keys&#039;&#039;&#039; are generally managed via the &#039;&#039;&#039;My SSH Pubkeys&#039;&#039;&#039; menu entry on the registration pages for the clusters.&lt;br /&gt;
Here you can add and revoke SSH keys. To revoke or delete an SSH key, please follow these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Select the cluster&#039;&#039;&#039; for which you want to manage SSH keys:&amp;lt;/br&amp;gt; &amp;amp;rarr; [https://login.bwidm.de/user/ssh-keys.xhtml &#039;&#039;&#039;bwUniCluster 2.0&#039;&#039;&#039;]&amp;lt;/br&amp;gt; &amp;amp;rarr; [https://login.bwidm.de/user/ssh-keys.xhtml &#039;&#039;&#039;bwForCluster MLS&amp;amp;WISO&#039;&#039;&#039;]&lt;br /&gt;
[[File:BwIDM-twofa.png|center|600px|thumb|My SSH Pubkeys.]]&lt;br /&gt;
&lt;br /&gt;
2. Click &#039;&#039;&#039;REVOKE/ZURÜCKZIEHEN&#039;&#039;&#039; next to the SSH key you want to revoke.&lt;br /&gt;
[[File:Ssh-success.png|center|800px|thumb|Revoke SSH key.]]&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:BwServices_my_tokens.png&amp;diff=8874</id>
		<title>File:BwServices my tokens.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:BwServices_my_tokens.png&amp;diff=8874"/>
		<updated>2021-06-19T11:11:24Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=8828</id>
		<title>BwForCluster User Access</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&amp;diff=8828"/>
		<updated>2021-06-09T13:45:59Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: update 2FA for MLS&amp;amp;WISO&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Granting user access to a bwForCluster requires 3 steps:&lt;br /&gt;
&lt;br /&gt;
# [[File:Zas assignment icon.svg|25px|]] Become part of a &#039;&#039;Rechenvorhaben (RV)&#039;&#039;, either by joining one as a co-worker or by creating a new one &lt;br /&gt;
# [[File:bwfor entitlement icon.svg|25px|]] Obtain permission from your university to use a bwForCluster, the so-called [[BwForCluster_Entitlement|bwForCluster entitlement]] &lt;br /&gt;
# [[#Personal registration at bwForCluster | Personal registration at the cluster site ]]  based on approved &#039;&#039;RV&#039;&#039; [[File:Zas assignment icon.svg|25px|]] and issued bwForCluster entitlement [[File:bwfor entitlement icon.svg|25px|]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border-spacing: 5px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;font-size:75%;&amp;quot; |[[File:Bwforreg.svg|center|border|500px|]] &lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:center; color:#000;vertical-align:middle;&amp;quot; |&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Steps 1 and 2 can be done at the same time. When both are finished, you can proceed to step 3. Which cluster you will get access to depends on your research area and is decided in step 1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=  &amp;lt;b&amp;gt; &#039;&#039;RV&#039;&#039; registration at &#039;&#039;ZAS&#039;&#039;  &amp;lt;/b&amp;gt;=&lt;br /&gt;
&lt;br /&gt;
Typically an RV is registered by the leader or a senior scientist of a research group/collaboration. &lt;br /&gt;
&lt;br /&gt;
Any number of co-workers can then join your RV to work on the cluster.&lt;br /&gt;
&lt;br /&gt;
== Register a new &amp;quot;RV&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
When registering an RV, you will be asked to briefly describe your group&#039;s compute activities and the resources you need.&lt;br /&gt;
&lt;br /&gt;
If you register your own RV, you will be:&lt;br /&gt;
# held accountable for the co-workers in the RV&lt;br /&gt;
# asked to provide information for the two reports required by the DFG for their funding of bwFor clusters&lt;br /&gt;
# likely asked for a contribution to a future DFG grant proposal for a new bwFor  cluster in your area of research (&amp;quot;wissenschaftliches Beiblatt&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
Please follow the steps at [[bwForCluster RV registration]].&lt;br /&gt;
&lt;br /&gt;
== Become Coworker of an &amp;quot;RV&amp;quot;==&lt;br /&gt;
&lt;br /&gt;
Your advisor (the &amp;quot;RV responsible&amp;quot;) will provide you with the following data on the RV:&lt;br /&gt;
* acronym&lt;br /&gt;
* password&lt;br /&gt;
&lt;br /&gt;
To become a co-worker of an &#039;&#039;RV&#039;&#039;, please log in at&lt;br /&gt;
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_collaboration.php&lt;br /&gt;
and provide the acronym and password. You will then be assigned to the &#039;&#039;RV&#039;&#039; as a member.&lt;br /&gt;
&lt;br /&gt;
After submitting the request you will receive an email from &#039;&#039;ZAS&#039;&#039; about the further steps (i.e. [[#Personal registration at bwForCluster | personal registration at assigned bwForCluster]]). &lt;br /&gt;
The RV owner and any managers will be notified automatically. &lt;br /&gt;
You can see your RV memberships at https://www.bwhpc-c5.de/en/ZAS/info_rv.php&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=  &amp;lt;b&amp;gt; Permission of Your University (&amp;quot;bwForCluster entitlement&amp;quot;) &amp;lt;/b&amp;gt; =&lt;br /&gt;
&lt;br /&gt;
Your own university or other institution of higher education has to grant you permission to use a bwForCluster. &lt;br /&gt;
&lt;br /&gt;
Getting permission from your university to compute on the bwForClusters is independent of an RV and can be done before or while registering an RV. If you are only creating an RV for your research group but do &#039;&#039;&#039;not&#039;&#039;&#039; plan to use the cluster yourself, you can skip this step. &lt;br /&gt;
&lt;br /&gt;
Each university has its own procedure.  &lt;br /&gt;
&lt;br /&gt;
Please continue to the next step on this page and log in to a registration server; it will show you the list of your entitlements. &lt;br /&gt;
If the list contains &amp;lt;pre&amp;gt; http://bwidm.de/entitlement/bwForCluster &amp;lt;/pre&amp;gt; you already have the entitlement.&lt;br /&gt;
&lt;br /&gt;
The page [[bwForCluster Entitlement]] contains a list of participating universities and links to instructions on how to get a bwForCluster entitlement at each of them.&lt;br /&gt;
&lt;br /&gt;
=  &amp;lt;b&amp;gt; Personal registration at a bwForCluster - account creation&amp;lt;/b&amp;gt; = &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Prerequisites for successful account creation:&amp;lt;/b&amp;gt;&lt;br /&gt;
* [[BwForCluster_User_Access#RV_registration_at_ZAS|Membership in an RV (belonging to the bwForCluster you plan to join)]].&lt;br /&gt;
* [[BwForCluster_User_Access#Permission_of_Your_University_.28.22bwForCluster_entitlement.22.29|bwForCluster entitlement (assigned by your university)]].&lt;br /&gt;
&lt;br /&gt;
Once you have registered your own RV (&#039;&#039;Rechenvorhaben&#039;&#039;)&lt;br /&gt;
or your membership in an RV, you will receive&lt;br /&gt;
an email with a link to a website where you can create an account for yourself&lt;br /&gt;
on that cluster. This email will direct you to one of the following websites:&lt;br /&gt;
&lt;br /&gt;
Available bwForCluster registration servers (service providers):&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;72%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Registration server (for account creation)&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster JUSTUS 2 Ulm for Computational Chemistry and Quantum Sciences&lt;br /&gt;
| https://login.bwidm.de &lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO (Production and Development)&lt;br /&gt;
| https://bwservices.uni-heidelberg.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster NEMO Freiburg&lt;br /&gt;
| https://bwservices.uni-freiburg.de&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster BinAC Tübingen&lt;br /&gt;
| https://bwservices.uni-tuebingen.de&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In this chapter, a user account will be created at the cluster based on your personal credentials. &lt;br /&gt;
&lt;br /&gt;
After having completed chapters 1 and 2 (RV approval and bwForCluster entitlement) please visit the&lt;br /&gt;
&lt;br /&gt;
* bwForCluster &#039;&#039;service provider&#039;&#039; registration website (see table above or email after RV approval):&lt;br /&gt;
*# Select your home organization from the list of organizations and click &#039;&#039;&#039;Proceed&#039;&#039;&#039;.&lt;br /&gt;
*# You will be redirected to the &#039;&#039;Identity Provider&#039;&#039; of your home organization.&lt;br /&gt;
*# Enter your home-organizational user ID (might be user name, email, ...) and password and click &#039;&#039;&#039;Login&#039;&#039;&#039; or &#039;&#039;&#039;Anmelden&#039;&#039;&#039;.&lt;br /&gt;
*# When doing this for the first time you need to accept that your personal data is transferred to the &#039;&#039;service provider&#039;&#039;.&lt;br /&gt;
*# You will be redirected back to the cluster registration website.&lt;br /&gt;
*# &#039;&#039;&#039;JUSTUS 2 only&#039;&#039;&#039;: This step is required before registration for JUSTUS 2: To improve security a 2-factor authentication mechanism (2FA) is being enforced. You can manage your 2FA tokens by clicking on [https://login.bwidm.de/user/twofa.xhtml this link] or on &#039;&#039;&#039;My Tokens&#039;&#039;&#039; in the main menu of the JUSTUS 2 registration website. The instructions for registering a new 2FA token can be found on the following page: [[BwForCluster User Access/2FA Tokens]]. Please create at least one 2FA token before proceeding with JUSTUS 2 registration.&lt;br /&gt;
*# &#039;&#039;&#039;MLS&amp;amp;WISO only&#039;&#039;&#039;: This step is required before registration for MLS&amp;amp;WISO: To improve security a 2-factor authentication mechanism (2FA) is being enforced. You can manage your 2FA tokens by clicking on [https://bwservices.uni-heidelberg.de/user/twofa.xhtml this link] or on &#039;&#039;&#039;My Tokens&#039;&#039;&#039; in the main menu of the bwServices registration website. The instructions for registering a new 2FA token can be found on the following page: [[BwForCluster MLS&amp;amp;WISO Production 2FA tokens]]. Please create at least one 2FA token before proceeding with MLS&amp;amp;WISO registration.&lt;br /&gt;
*# Select &#039;&#039;&#039;Service description&#039;&#039;&#039; within the box of your &#039;&#039;&#039;designated cluster&#039;&#039;&#039;. If the cluster is not visible, the reason is either a missing entitlement or that you are not a member of an RV assigned to that cluster - see the prerequisites at the beginning of this chapter.&lt;br /&gt;
*# Click on the &#039;&#039;&#039;Register&#039;&#039;&#039; link below the service description to register for this cluster.&lt;br /&gt;
*# Make sure all requirements are met by checking the &#039;&#039;&#039;Requirements&#039;&#039;&#039; box at the top. If the requirements are not met you might be able to correct the issue by following the instructions. In all other cases please [http://www.support.bwhpc-c5.de open a ticket at the bwSupport Portal]. [[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]&lt;br /&gt;
*# Read and accept the terms and conditions of use (&#039;&#039;&#039;[v] I have read and accepted the terms of use&#039;&#039;&#039;) and click the &#039;&#039;&#039;Register&#039;&#039;&#039; button. If requirements are missing, e.g. a second factor for authentication, you may need to correct that before you can click &amp;quot;Register&amp;quot;.&lt;br /&gt;
*# Click on &#039;&#039;&#039;Set Service Password&#039;&#039;&#039; and set a password for the cluster. &#039;&#039;&#039;Note: Setting a SERVICE password is MANDATORY for access to any bwForCluster. Using the password of your home organization is not accepted anymore.&#039;&#039;&#039;&lt;br /&gt;
*# Finally, you will receive an email with instructions on how to log in to the cluster. Please wait at least 15 minutes before trying to log in. More details about cluster login can be found in the next chapter. &#039;&#039;&#039;Note: Carefully read the email sent by the registration server after account registration.&#039;&#039;&#039;&lt;br /&gt;
*# &#039;&#039;&#039;Note:&#039;&#039;&#039; You can return to the registration website at any time to review your registration details, change or reset your service password, or deregister from the service yourself.&lt;br /&gt;
&lt;br /&gt;
=  &amp;lt;b&amp;gt; Login to bwForCluster   &amp;lt;/b&amp;gt; = &lt;br /&gt;
&lt;br /&gt;
Personalized details about how to log in to the cluster are included&lt;br /&gt;
in the email sent after registration at the bwForCluster service provider.&lt;br /&gt;
&lt;br /&gt;
General instructions for the bwForCluster login can be found here:&lt;br /&gt;
{| style=&amp;quot;border:3px solid darkgray; margin: 1em auto 1em auto;&amp;quot; width=&amp;quot;73%&amp;quot;&lt;br /&gt;
|- &lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Cluster topic and location&lt;br /&gt;
!scope=&amp;quot;row&amp;quot; {{Darkgray}} | Login instructions&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster Chemistry JUSTUS 2 Ulm for Computational Chemistry and Quantum Sciences&lt;br /&gt;
| [[BwForCluster_JUSTUS2_Login|bwForCluster JUSTUS 2 Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO Production&lt;br /&gt;
| [[BwForCluster_MLS&amp;amp;WISO_Production_Login|bwForCluster MLS&amp;amp;WISO Production Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster MLS&amp;amp;WISO Development&lt;br /&gt;
| [[BwForCluster_MLS&amp;amp;WISO_Development_Login|bwForCluster MLS&amp;amp;WISO Development Login]]&lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster NEMO Freiburg&lt;br /&gt;
| [[bwForCluster NEMO Login]] &lt;br /&gt;
|-&lt;br /&gt;
| bwForCluster BinAC Tübingen&lt;br /&gt;
| [[bwForCluster BinAC Login]] &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=  &amp;lt;b&amp;gt; Costs and Funding  &amp;lt;/b&amp;gt; =&lt;br /&gt;
Using the bwForClusters is free of charge. The bwForClusters are customized to the requirements of particular research areas. &lt;br /&gt;
&lt;br /&gt;
The bwForClusters are financed by the DFG (German Research Foundation) and by the Ministry of Science, Research and Arts of Baden-Württemberg based on scientific grant proposals (cf. the proposal guidelines under Art. 91b GG).&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Access|bwForCluster]][[Category:bwForCluster_Chemistry]][[Category:bwForCluster_MLS&amp;amp;WISO_Production]][[Category:bwForCluster_MLS&amp;amp;WISO_Development]]&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_storage_service.png&amp;diff=6441</id>
		<title>File:Sds-hd storage service.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_storage_service.png&amp;diff=6441"/>
		<updated>2020-04-22T07:22:19Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: S Siebler uploaded a new version of File:Sds-hd storage service.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SDS@hd Infrastructure Schema&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds_smb_mac_login.png&amp;diff=6405</id>
		<title>File:Sds smb mac login.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds_smb_mac_login.png&amp;diff=6405"/>
		<updated>2020-04-20T14:51:25Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds_smb_mac_mountpath.png&amp;diff=6404</id>
		<title>File:Sds smb mac mountpath.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds_smb_mac_mountpath.png&amp;diff=6404"/>
		<updated>2020-04-20T14:50:37Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds_bwservices_servicepassword.png&amp;diff=6401</id>
		<title>File:Sds bwservices servicepassword.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds_bwservices_servicepassword.png&amp;diff=6401"/>
		<updated>2020-04-20T14:32:12Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_hpc_connection.png&amp;diff=5555</id>
		<title>File:Sds-hd hpc connection.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_hpc_connection.png&amp;diff=5555"/>
		<updated>2018-08-22T14:00:38Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: S Siebler uploaded a new version of File:Sds-hd hpc connection.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_storage_service.png&amp;diff=5418</id>
		<title>File:Sds-hd storage service.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_storage_service.png&amp;diff=5418"/>
		<updated>2018-04-23T16:49:55Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: S Siebler uploaded a new version of &amp;amp;quot;File:Sds-hd storage service.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SDS@hd Infrastructure Schema&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=5167</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=Main_Page&amp;diff=5167"/>
		<updated>2017-10-23T12:34:41Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; border:1px solid #e7aa01; background:#e7d9b4;border-spacing: 2px;&amp;quot;&lt;br /&gt;
| style=&amp;quot;width:280px; text-align:center; white-space:nowrap; color:#000;&amp;quot; |&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:135%; border:none; margin:0; padding:.2em; color:#000;&amp;quot;&amp;gt;Online&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:135%; border:none; margin:0; padding:.2em; color:#000;&amp;quot;&amp;gt;&#039;&#039;&#039;User and Best Practice Guides&#039;&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:105%; border:none; margin:0; padding:.2em; color:#000;&amp;quot;&amp;gt;of&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%; border:none; margin:0; padding:.2em; color:#000;&amp;quot;&amp;gt;Baden-Württemberg&#039;s HPC services&amp;lt;/div&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Welcome to the &#039;&#039;&#039;bwHPC wiki&#039;&#039;&#039; - the user and best practice guides - for &#039;&#039;high performance computing (&#039;&#039;&#039;HPC&#039;&#039;&#039;)&#039;&#039; and &#039;&#039;HPC data storage&#039;&#039; in the state of Baden-Württemberg, Germany.&lt;br /&gt;
Hosted as a Best Practices Repository, the knowledge base contains user guides and best practice guides &#039;&#039;(&#039;&#039;&#039;BPG&#039;&#039;&#039;)&#039;&#039; and is maintained by members of Baden-Württemberg&#039;s federated HPC competence centers for clusters of tier 3. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Federated HPC competence centers of tier 3 are an integral part of the project [http://www.bwHPC-C5.de bwHPC-C5] which coordinates the &#039;&#039;federated user and science support&#039;&#039; for &lt;br /&gt;
the [[HPC_infrastructure_of_Baden_Wuerttemberg|HPC infrastructure]] of tier 3 in the state of Baden-Württemberg. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;&amp;quot;&lt;br /&gt;
&amp;lt;!--   bwHPC       --&amp;gt;&lt;br /&gt;
| style=&amp;quot;width:50%; border:1px solid #e7aa01; background:#f5fffa; vertical-align:top; color:#000;&amp;quot; |&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5fffa;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:2px;&amp;quot; | &amp;lt;div style=&amp;quot;margin:3px; background:#cef2e0; font-size:120%; font-weight:bold; border:1px solid #e7aa01; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;&amp;gt;HPC Services&amp;lt;/div&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;color:#000;&amp;quot; | &amp;lt;div style=&amp;quot;padding:2px 5px&amp;quot;&amp;gt;The federated HPC competence centers of tier 3 provide and maintain&lt;br /&gt;
user guides and best practice guides for the compute clusters of &#039;&#039;&#039;[[:Category:bwHPC infrastructure|tier 3]]&#039;&#039;&#039;:&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;div style=&amp;quot;font-size:115%;font-weight:bold;&amp;quot;&amp;gt; [[:Category:bwUniCluster|bwUniCluster]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
* &amp;lt;div style=&amp;quot;font-size:115%;font-weight:bold;&amp;quot;&amp;gt; [[:Category:bwForCluster_Chemistry|bwForCluster JUSTUS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
* &amp;lt;div style=&amp;quot;font-size:115%;font-weight:bold;&amp;quot;&amp;gt; [[:Category:bwForCluster_MLS&amp;amp;WISO|bwForCluster MLS&amp;amp;WISO]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
* &amp;lt;div style=&amp;quot;font-size:115%;font-weight:bold;&amp;quot;&amp;gt; [[:Category:bwForCluster NEMO|bwForCluster NEMO]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
* &amp;lt;div style=&amp;quot;font-size:115%;font-weight:bold;&amp;quot;&amp;gt; [[:Category:bwForCluster BinAC|bwForCluster BinAC]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;!--* &amp;lt;div style=&amp;quot;font-size:115%;font-weight:bold;&amp;quot;&amp;gt; bwForClusters &amp;lt;span style=&amp;quot;font-size:70%;font-weight:normal&amp;quot;&amp;gt;(scheduled for 2014/2015)&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;padding:2px 5px&amp;quot;&amp;gt;User and best practice guides for compute cluster of higher HPC tiers in Baden-Württemberg can be found here:&lt;br /&gt;
* bwHPC tier 1: [https://wickie.hlrs.de/platforms/index.php/Cray_XC40 Hazel Hen]&lt;br /&gt;
* bwHPC tier 2: [https://wiki.scc.kit.edu/hpc/index.php/Category:ForHLR ForHLR]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
| style=&amp;quot;border:1px solid transparent;&amp;quot; |&lt;br /&gt;
&amp;lt;!--        DATA STORAGE        --&amp;gt;&lt;br /&gt;
| style=&amp;quot;width:50%; border:1px solid #e7aa01; background:#f5faff; vertical-align:top;&amp;quot;|&lt;br /&gt;
{| style=&amp;quot;width:100%; vertical-align:top; background:#f5faff;&amp;quot;&lt;br /&gt;
| style=&amp;quot;padding:2px;&amp;quot; | &amp;lt;div style=&amp;quot;margin:3px; background:#cedff2; font-size:120%; font-weight:bold; border:1px solid #e7aa01; text-align:left; color:#000; padding:0.2em 0.4em;&amp;quot;&amp;gt;HPC Data Storage Services&amp;lt;/div&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;color:#000; padding:2px 5px;&amp;quot; | &amp;lt;div style=&amp;quot;padding:2px 5px&amp;quot;&amp;gt;&lt;br /&gt;
For user guides of the data storage services:&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;div style=&amp;quot;font-size:115%;font-weight:bold;&amp;quot;&amp;gt;[[bwFileStorage]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
* &amp;lt;div style=&amp;quot;font-size:115%;font-weight:bold;&amp;quot;&amp;gt;[[sds-hd|SDS@hd]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd-smb-auth.png&amp;diff=5147</id>
		<title>File:Sds-hd-smb-auth.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd-smb-auth.png&amp;diff=5147"/>
		<updated>2017-09-20T09:14:56Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: SDS@hd SMB Authentication&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SDS@hd SMB Authentication&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd-smb-netdrive.png&amp;diff=5146</id>
		<title>File:Sds-hd-smb-netdrive.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd-smb-netdrive.png&amp;diff=5146"/>
		<updated>2017-09-20T09:14:12Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: SDS@hd Mapping SMB Networkdrive&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SDS@hd Mapping SMB Networkdrive&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_hpc_connection.png&amp;diff=5135</id>
		<title>File:Sds-hd hpc connection.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_hpc_connection.png&amp;diff=5135"/>
		<updated>2017-09-19T09:45:00Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
	<entry>
		<id>https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_storage_service.png&amp;diff=5131</id>
		<title>File:Sds-hd storage service.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.bwhpc.de/wiki/index.php?title=File:Sds-hd_storage_service.png&amp;diff=5131"/>
		<updated>2017-09-19T09:28:06Z</updated>

		<summary type="html">&lt;p&gt;S Siebler: SDS@hd Infrastructure Schema&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SDS@hd Infrastructure Schema&lt;/div&gt;</summary>
		<author><name>S Siebler</name></author>
	</entry>
</feed>