MICC JINR Multifunctional Information and Computing Complex


User's manuals

Cloud Computing User's Guide

To work in the cloud, the user must first be successfully authenticated by the service. Detailed information on this topic is presented in the section «Start using cloud».

Resource request

Initially the user doesn’t have any resources and hence can’t create VMs. To get some, the user has to send a resource request via the special web form «Request resources» located on the helpdesk site. To do this, on the helpdesk.jinr.ru site, fill in the resource request form (see the picture below) available at the following path: New query → Cloud service → Cloud computing → Resource request (Новый запрос → Облачный сервис → Облачные вычисления → Запрос ресурсов).

VMs access

KVM VMs created from shared (publicly available) templates can be accessed remotely either via VNC (including via the Sunstone web interface) or via the ssh protocol, whereas CTs can be accessed via ssh only.

VNC

Attention! VNC access to VMs is available only from the JINR network (159.93.0.0/16).

To access VMs via VNC using the Sunstone interface, the user needs to click the «VNC» button located either in the right column of the VM list or on the panel of the «Information» tab of a particular VM (see pictures below).

After connecting via VNC, the following window will appear and you can start working on the newly created VM:

VNC supports login/password authentication only. If the VM was created from a publicly available template, enter the same username/password that you used when logging into the cloud. If the VM was created from a user template, then you need to log in via ssh and create a user (useradd surname, passwd ***********).
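
A minimal sketch of those commands, run as root over ssh («surname» is an example username):

useradd surname   # create a local user for VNC login
passwd surname    # interactively set the user's password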

Access to the service from other networks (outside JINR)

In order to use the cloud service from outside the JINR network, several options are available: for example, using a VPN or setting up a proxy connection on your computer.

This guide describes how to connect to the service using the x2go program. To start, you need to install the x2go client. Installation instructions for your system can be found at wiki.x2go.org.

After installation, you need to configure the connection. To do this, run the installed x2go client and create a new connection. In the window that appears, on the «Session» tab, configure the following parameters: Session name, Host (lxpub.jinr.ru), Login (the login and password for the connection are those of your JINR account (login.jinr.ru)). Leave the SSH port at 22, select the session type «Single application» and, as the application, «Internet browser», then click «OK».

Before pressing the «OK» button, you can change the screen resolution of the connection window on the «Input/Output» tab. This is optional, but it lets you configure a more comfortable display.

To start the session, click on the name of the created connection, enter the password and click «OK». A warning will probably appear (since this is the first connection); just accept it.

After that, a browser window will open in which the user is granted access to internal institute resources.

SSH

By default, all users’ VMs/CTs created from predefined and shared templates with a Linux OS and public IP addresses are accessible on port 22 via the ssh protocol from the JINR, CERN, NetByNet and TMPK networks. VMs/CTs with private IP addresses (such as 192.168.0.0/16 and 10.93.0.0/16) are accessible from the JINR network only.

To access a VM/CT with the same user login and password that were used to log in to the web interface of the cloud infrastructure, the VM/CT must be created from one of the public VM templates (templates created under oneadmin).

When logged in to a VM/CT with an all-institute account, in order to gain superuser privileges, execute the command «sudo su -» in the command line and re-enter the password of your all-institute account.

To access the VM/CT under the root user using the RSA/DSA key, you must add your public key to the profile. To do this, click on your user, select «Settings».

Next, on the «Auth» tab in the «Public SSH key» field, place your public key.

To generate an ssh key in Linux, you need to perform the following steps:

In the console (terminal) type the command:

# ssh-keygen

  • When asked «Enter file in which to save the key (/root/.ssh/id_rsa):» set the path for the private key or press «Enter» to accept the proposed default path ~/.ssh/id_rsa.
  • When asked «Enter passphrase (empty for no passphrase):» enter a password to protect the private key or press «Enter» to leave the private key without a password.
  • When asked «Enter same passphrase again:» enter the password again and press «Enter», or just press «Enter» for passwordless access to the private key.

As a result of these actions, the directory ~/.ssh will be created if it didn’t exist before, and private and public RSA keys (files named id_rsa and id_rsa.pub, respectively) will be created.

The contents of the public RSA-key (i.e., the id_rsa.pub file) has to be pasted into the field «Public SSH key» in the settings of your profile.
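
For example, you can print the public key in a terminal and copy it from there:

cat ~/.ssh/id_rsa.pub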

To generate an ssh key in Windows, you must perform the following steps:

Launch the PuTTYgen program (you can download it from the link). Click the «Generate» button. To generate a key, move the cursor over the empty «Key» field.

Save the keys on your computer. Copy the contents of the public part of the RSA key into the «Public SSH key» field in the settings of your profile.

Sometimes it may be necessary to convert a private SSH key. For example, you created an SSH key in the PuTTY application, and the application in which you want to use the key supports the OpenSSH key format (in particular, the x2go program). To convert the key, run the PuTTYgen program. Load the private ppk key by selecting «File» → «Load private key». Enter the passphrase if the key requires one. To export the private key to the OpenSSH format, select «Conversions» → «Export OpenSSH key» in the main menu. Save the private key to a new file. As a result of the conversion, you get a private key in the OpenSSH format.
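
If the command-line puttygen tool is available (it ships with PuTTY packages on Linux), the same conversion can presumably be done non-interactively; the file names below are examples:

puttygen mykey.ppk -O private-openssh -o id_rsa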

To create your own VM template, you can use any of the templates that are generally available in the cloud service. To do this, select the appropriate template and click the «Clone» button.

Specify the name of the cloned template and click the «Clone» button. If you select «Clone», the new template will reference the same disks as the original one; if you select «Clone from images», copies of the disk images will be made and the new template will reference the IDs of the new (cloned) disk images. The commands for deleting templates and images work similarly.

If you need to work with another SSH key (different from the one loaded in the profile), for example, if you log in from another machine and use another key pair, you can add a new SSH key to the template. Select a template and click «Update»; on the «Context» tab, in the «Configuration» block, insert the new SSH key into the «SSH public key» field.

Creating and changing VM Templates

To open VM Template editor you have to press the green «plus» button and select «Create».

The Template editor has 2 editing modes: «Wizard» and «Advanced».

«Wizard» mode is used to set the basic VM parameters: CPU, RAM, DISK, IP and so on.

«Advanced» mode is used to set some specific parameters that are not available in «Wizard» mode, like OVZ_SIZE or OSTEMPLATE.

Attention!!! It is possible to work with only one window at a time. That is, the «Wizard» window is used to set the basic parameters, after which the template is saved and the specific parameters are edited in the «Advanced» window.

To save the new template, click the «Create» button.

To edit an existing template and save it, click the «Update» button.

Creating a VM with a passthrough GPU card (GPU capabilities inside VM)

To use such resources, you need access to the juno-gpu cluster and the corresponding images in the ceph-image datastore.

To create a VM of this type, you need to specify additional parameters in the template.

To do this, create a new template or update an existing one. After setting the VM parameters, go to the «Other» tab; in the «PCI Devices» block, click the «+» button and select an available device from the drop-down list. Next, save the template and initiate the creation of a VM from it.

To check the device connection, log in to the created VM via ssh and view the connected devices using a utility such as «pciutils».

Example for CentOS:

yum install pciutils

lspci -vvv | grep NVIDIA
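
For Debian/Ubuntu-based images, the equivalent would be (assuming apt-based package management):

apt-get install pciutils

lspci -vvv | grep NVIDIA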

Create virtual instance (either KVM VM or OpenVZ container)

The information below assumes that the «VMs access» section has been read.

Ways to get an image for KVM VM or OpenVZ containers (CTs)

There are several methods for adding images to your cloud, namely:

  • Using the OpenNebula application store (VM only, not recommended);
  • Requesting an image via HelpDesk (VM/CT);
  • Creating images in the OpenNebula environment (VM only).

Using the OpenNebula Marketplace 

To download an image from the store, go to «MarketPlace», select «OpenNebula Public» → «Apps» (or select «Apps» right away).

Select a suitable disk image and hypervisor type (KVM in the case of the JINR cloud) from the list. Images for OpenVZ containers are not available via the OpenNebula Marketplace. Study the descriptions of the ready-made VM templates carefully: these templates are provided and changed only by representatives of the OpenNebula project. Download the selected image.

Next, select a datastore and assign names for the image and the template. For KVM, use the ImageDS_ceph datastore. The same rule applies to OpenVZ containers (the «ImageDS_22x-openvz» datastore), but there are no images for OpenVZ containers on the Marketplace.

Please note that when choosing the ImageDS_ceph datastore, you need to select images in raw format, not qcow2, since a disk in qcow2 format may cause problems when creating a VM (detailed information can be found on the website).
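
If the image you have is only available in qcow2 format, it can be converted to raw beforehand, for example with the qemu-img tool (the file names are examples):

qemu-img convert -f qcow2 -O raw image.qcow2 image.raw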

Request of a necessary image through HelpDesk

If an image and/or a template with the required operating system is absent in the JINR cloud, it is possible to send a request via HelpDesk either to get access to the necessary image/template for a VM (KVM) or a container (OpenVZ), if it already exists in the cloud, or to ask for one to be created.

The request is created through the form on the HelpDesk website, in the section «User Support of a Cloud Service» (in Russian «Облачный сервис» → «Облачные вычисления» → «Поддержка пользователей облачного сервиса»).

In the request form, you need to indicate the identifier (i.e. login) of the user in cloud.jinr.ru, the name or ID of the image and/or template (if it already exists in the cloud and the user can see it but cannot use it), or the name and version of the required OS, its bitness, and the type of virtualization (KVM or OpenVZ).

Attention! Indicate your user name in cloud.jinr.ru, the name and version of the operating system, and the type of virtualization (KVM or OpenVZ).

Creation of images in the OpenNebula environment

OpenNebula provides a possibility for users to create a necessary OS image for a KVM VM.

To create a new image, go to the «Images» section. Click the «plus» sign → «Create». Give it a name and a description, select the image type «Readonly CD-ROM», the datastore «ImageDS_ceph», the image location «Upload», then specify the path to the file (.iso). Then click the «Create» button.

Create a DATABLOCK image, specify its size (for example, 10240 MB). Creating a DATABLOCK image is described in the «Creating and deleting a persistent DATABLOCK image» section.

Go to the section «Templates» → «VMs», then create a new template and configure the settings described below.

On the «General» tab set parameters of the machine depending on your quota.

On the «Storage» tab select the previously created empty DATABLOCK disk, then add a new disk and specify the disk with the OS installer.

On the «Network» tab choose a network (for example, «22x-priv»).

On the «OS&CPU» tab specify the architecture of the installed system, then set HDD as the first boot device and CDROM as the second one. This boot order skips the empty HDD during the first boot and boots from the CDROM; once the OS is installed on the HDD, the system boots from the HDD.

On the «Input/Output» tab check the VNC checkbox.

On the «Context» tab → «Configuration» insert the content of the public part of the RSA/DSA key (.pub) used to access the VM (how to access a VM is described in the «VMs access» section).

Launch a VM from the created and configured template and connect to it via VNC. Continue the OS installation in graphical mode.

The description of creating and editing VM/CT templates is provided in the «Creating and changing VM Templates» section mentioned above.

Important! It is essential to choose the proper datastore and network: for a KVM VM select ImageDS_ceph, and for an OpenVZ CT select ImageDS_22x-openvz.

Creation of KVM/OpenVZ container

To create a VM, in the section «Instances» → «VMs» press the «+» button and select a VM/CT template. Specify the name and the number of instances to create from the template. If necessary, configure other parameters like «Instantiate as persistent» (makes a permanent copy of the template, along with all the added disks, and deploys it), «Memory», «CPU», «VCPU», disk size (only for machines with KVM virtualization), «Network», etc., then click the «Create» button.

It takes a few minutes to get a VM/CT up and running (see here for more details on VM/CT statuses). As soon as the created instance gets the status «RUNNING», you can check its network accessibility, for example, with the ping command:

 # ping <VM IP address>

Once a response is received, the machine is up and ready to work.

You can perform certain actions on a VM/CT: save, start, block, suspend, stop, power off, undeploy, reboot, migrate, terminate (for more information, click here).

Please note that if the machine is locked, then no action can be applied to it other than «unlock».

You can access the newly created VM/CT via VNC or the ssh protocol. Connection setup is described in the «VMs access» section.

Connect via the SSH protocol with the command:

 # ssh root@<VM/CT IP-address or hostname>

Replace «<VM/CT IP-address or hostname>» with the IP address or hostname of your VM/CT.

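If your private key is not in the default location, you can point ssh to it explicitly; the path below assumes the key generated in the «VMs access» section:

 # ssh -i ~/.ssh/id_rsa root@<VM/CT IP-address or hostname>
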
Specific parameters description

OVZ_SIZE: sets the size of the hard disk for OpenVZ machines. Applies only to OpenVZ CTs.

OSTEMPLATE: name of the OS inside of the CT which is used to correctly deploy system environment. Applies only to OpenVZ CTs.

LOOKUP_HOSTNAME: sets the hostname of the VM/CT from DNS. Possible values: «true» or «false».

HOSTNAME: manual setting of the VM / CT network name.

ARCHIVE_CONTENT_TYPE: can be used to reduce CT deployment time. Possible values: «ploop» or «archive». Applies only to OpenVZ CTs.
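
For illustration, in the «Advanced» editing mode these parameters might look as follows (all values are examples only; OVZ_SIZE is set inside the «DISK» parameter, as shown in the «Setting the disk size (OpenVZ only)» section below):

OSTEMPLATE="centos-7-x86_64"
LOOKUP_HOSTNAME="false"
HOSTNAME="myct"
ARCHIVE_CONTENT_TYPE="ploop"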

Mounting sshfs inside OpenVZ CT

1) Make sure the fuse module is enabled inside the OpenVZ container:

[root]$ sudo cat /proc/filesystems|grep fuse

nodev fusectl

nodev fuse

2) Enable EPEL repository:

[root]$ sudo yum install yum-conf-epel

3) Install sshfs package and its dependencies:

[root]$ yum install fuse-sshfs

4) Example of mounting a remote folder (see the note after step 5):

[root]$ sshfs root@foo.org:/root /mnt/foo.org.root

5) Example of unmounting a remote folder:

# fusermount -u mountpoint
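
Note that the local mount point used in step 4 must exist before mounting; for the example above:

[root]$ mkdir -p /mnt/foo.org.root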

Basic operations with disk

Persistent / non-persistent images

A persistent image can be used by only one VM/CT at a time. If the VM/CT is deleted using the «Terminate» function, all changes are saved back to that image.

A non-persistent image can be used by multiple VMs simultaneously, but no data is ever written back to that image after the VMs/CTs shut down.

To make an image persistent or non-persistent, choose it (on the «Images» tab) and press the «Make persistent» or «Make non persistent» button.

An image can have one of several statuses: «READY» means the image is not used by any VM/CT and is ready for use; «USED» means a non-persistent image is used by at least one VM/CT (for other statuses and more details, follow the link).

Cloning images

Existing images can be cloned. This is useful when you want to back up an image before modifying it, or to get a private persistent copy of an image shared by another user. Persistent images can be cloned only in the «READY» state; non-persistent images can be cloned in any state («READY» or «USED»). To clone an image, choose it and press the «Clone» button.

Setting the disk size (OpenVZ only)

OpenVZ containers can use only a single disk. Setting its size is possible only before the VM is created. By default, the disk size is 10.2 GB. To change it, add the OVZ_SIZE parameter, equal to the size in megabytes, to the «DISK» parameter string using the advanced editing mode.

For example, to set the «DISK» size to 20 GB, OVZ_SIZE should be set to 20480 (20*1024):

DISK=[
  IMAGE_UNAME="oneadmin",
  IMAGE="openvz_scientific_6-x86_64_krb_clst33",
  OVZ_SIZE="20480" ]

Changing the disk size of a working OpenVZ container (CT)

The procedure for changing the size of the OpenVZ container disk depends on whether the image from which the CT was created was «persistent» or not (non-persistent).

If the image was «persistent», then to change the disk size you need to perform the following actions.

Note: Before proceeding, make sure that there are no mounted network shares/disks in the container.

Turn off the VM with the «terminate» command; the changes made to the container disk will be saved to the original image when it is turned off. Next, edit the template, specifying the desired size in it: «Templates» → «VM» → the desired template. Then click the «Update» button, select the «Advanced» menu and add the line «OVZ_SIZE=<new desired disk size>» to the «DISK» parameter (IMPORTANT: the size is specified in megabytes).

Example:

DISK = [
  IMAGE = "Copy of centos-7-openvz",
  IMAGE_UNAME = "username",
  OVZ_SIZE = "8192" ]

With this parameter, a VM with a disk size of 8 GB will be created.

Then click the «Update» button and create a container from the updated template. In that case, all the data and changes that were made by the time the «terminate» command was launched remain in it.

If the image was not «persistent», then to change the disk size of the running container you must first save its disk to a new image. To do that, the following steps need to be performed.

Select the desired VM: «Instances» → «VMs».

Next, turn off the container with the command «Power off».

Wait until the VM has the status «Poweroff», then save its disk by pressing the «Save» button. After that, enter a name for the new disk image and click the «Save as template» button.

Then the disk image and template will be created. The template will be a copy of the one from which the container was created before saving; only the storage option will be replaced by the saved image, and the «Network» parameter will be reset. Therefore, go to the template, click the «Update» button, select the «Network» menu and specify the desired network.

Further, to change the disk size, perform the same actions as with the «persistent» image: go to the «Advanced» tab and add the line «OVZ_SIZE=<new desired disk size>» to the «DISK» parameter (IMPORTANT: the size is specified in megabytes).

Example:

DISK = [
  IMAGE = "Copy of centos-7-openvz",
  IMAGE_UNAME = "username",
  OVZ_SIZE = "8192" ]

With this parameter, a VM with a disk size of 8 GB will be created.

Then click the «Update» button and create a new VM from the updated template. You can also create a new template or edit an existing one (see the «Creating and changing VM Templates» section of the manual), specifying the saved disk image.

When you create a container using the new image, all the data and changes that were made by the time of saving are preserved.

Using snapshots

The snapshot functionality allows you to save/restore/delete the state of a VM. It is available on the «Snapshots» tab of the «Instances» → «VMs» menu item.

Attention!!! When the VM is stopped, all the snapshots are automatically deleted.

Warning: the Snapshot function does not work in OpenVZ containers. The only way to save data is to use a persistent image: delete the VM using the «terminate» command and specify this disk image when creating a new VM. The data of the previous VM is preserved, except for the network parameters.

Attention: the Snapshot function does not work for virtual machines deployed in the ceph cluster due to limitations of ceph itself. Use disk backups instead!

Saving disk images (backup disk)

To save data on a VM disk, you can use the cloud functionality. Select the virtual machine whose disk you want to back up.

Go to the «Storage» tab, select the desired image and click the «Save as» button.

Specify the name of the new image and click the «Save As» button.

After performing this action, a new image will appear in the «Storage» → «Images» section. Go there.

On the «Information» tab in the «Persistent» field, select «yes».

Then click «Enable».

Now you can change the previously stored (or already existing) template or create a new one using the created disk image. To do this, in the «Templates» → «VMs» section of the «General» tab, configure the necessary parameters.

On the «Storage» tab, select the disk image you created.

On the «Network» tab, select the appropriate network option.

Now you can create a new virtual machine based on the template, on the disk of which there will be all the data of the original VM.

Creating and deleting a persistent DATABLOCK image

This part of the manual describes how to create an image of the DATABLOCK type and connect it to an already running VM.

This manual shows an example of creating an empty disk (for example, to add a workspace to a VM). First, you need to create a new image: «Storage» → «Images» → the «plus» sign → «Create».

In the «Name» field, specify the desired image name; in the «Type» field, select «Generic storage datablock»; and as the «Datastore» select «135: ImageDS_ceph». Depending on the type of disk being created, the «Image location» section offers different options. Since this manual considers creating an empty image, select the «Empty disk image» option and set the required disk size. Next, check the menu item «This image is persistent» (this is necessary so that all the data you write is saved in this image). The next step, without fail, is to expand the «Advanced Options» menu item and select «BUS» → «Virtio» and «Image mapping driver» → «raw». After completing these steps, click the «Create» button.

Now you can add the created image to a running VM. To do this, go to «Instances» → «VMs» and select the VM to which the created Datablock will be attached. Then, on the «Storage» menu item, click the «Attach disk» button, select the image you created, and click the «Attach» button. After performing these actions, the created Datablock image appears in the list as another disk.

Now you can work with this image inside the VM itself. Log in to the VM and check the information about the connected disks; the image will be displayed as a new disk. Then you can work with it as with a regular disk in the system: format it, mount it, work with data, etc.
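
As an illustration, assuming the new Datablock appeared inside the VM as /dev/vdb (the actual device name may differ; check it with lsblk):

lsblk                           # find the new disk
mkfs.ext4 /dev/vdb              # create a filesystem on it
mkdir -p /mnt/datablock         # create a mount point (example path)
mount /dev/vdb /mnt/datablock   # mount the disk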

To delete this image, perform all the steps in reverse order. First, to avoid problems in the VM, remove everything in the system that references this disk and unmount it. Then go to «Instances» → «VMs» → the VM → «Storage» and click the «Detach» button.
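
The unmounting step, assuming the example mount point from the previous sketch, would be:

umount /mnt/datablock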

After performing the above actions, you can delete the image itself: go to «Storage» → «Images», select the image and click the delete button.

Adding an additional network interface to the VM

To add another network interface to the VM, you need to do the following steps:

- select the VM to which an additional network interface will be added: «Instances» → «VMs»;

- select the «Network» tab and click the «Attach nic» button.

Then select the desired network and click «Attach». After that the new interface will appear in the list of network interfaces of the VM.

In order for the interface to work, it is necessary to configure it in the VM. Below is an example of the configuration for a Linux system.

To configure the network interface in the VM, connect to it via SSH or VNC. Add the «rt2» table to «/etc/iproute2/rt_tables». To do this, you can use the «echo» command:

echo "1 rt2" >> /etc/iproute2/rt_tables

or use an editor such as «Vim» or «Nano». In this case, the entry is added at the end of the file and should have the form «1 rt2».

To see what the network interface is called, you can use the commands:

ip a l

or

ifconfig

The output of these commands also shows whether the interface already has an IP address.

Next, assign an IP address and add routes by running the following commands (the interface name ens7 and the IP address 159.93.222.222 are used as examples; yours will differ):

sudo ifup ens7

ip address add 159.93.222.222/32 dev ens7

ip route add 159.93.220.0/22 dev ens7 src 159.93.222.222 table rt2

ip route add default via 159.93.222.222 dev ens7 table rt2

ip rule add from 159.93.222.222/32 table rt2

sudo ip rule add to 159.93.222.222/32 table rt2

In order to remove the interface from the VM, first you need to remove the IP address from the network interface:

 ip addr del 159.93.222.222/32 dev ens7

ip addr flush dev ens7

After that, delete the interface through the web interface of the cloud infrastructure: «Instances» → «VMs». On the «Network» tab, click the «Detach» button next to the interface to be removed.

Creating a virtual network

To create a private network consisting of one or more addresses, you need to perform the following steps:

Go to the «Network» tab, select «Virtual network», and select the desired network with internal (ID: 8) or external (ID: 9) addresses. Click «+» → «Reserve».

In the opened window, specify the number of reserved addresses, select «Add a new virtual network», and specify the name of the network. In the «Advanced Settings», select a range from the list and specify the initial IP from the required range. Then click the «Reserve» button.

Creating a separate subnet, even one consisting of a single IP address, is useful because the user can thereby fix a specific IP address for their host, which eliminates the possibility of that IP address being captured by another VM, for example, when the user recreates their VM.

Cloud Storage User's Guide

Cloud Storage resource request

Initially the user doesn’t have any resources. To get some, the user has to send a resource request via the special web form «Request resources» located on the helpdesk site. To do this, on the helpdesk.jinr.ru site, fill in the resource request form (see the picture below) available at the following path: New query → Cloud service → Cloud storage → Resource request (Новый запрос → Облачный сервис → Облачное хранилище → Запрос ресурсов).

How to use CephFS

Ceph-fuse

Add EPEL repository

yum install epel-release

Add the ceph repository

 yum install http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Install ceph-common and ceph-fuse packages

yum install ceph-common ceph-fuse

Keyring file

Create a file /etc/ceph/ceph.client.<user>.keyring with the following content:

[client.<user>]
        key = <key>
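
Restrict access to the keyring file, since it contains a secret:

chmod 600 /etc/ceph/ceph.client.<user>.keyring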

Mounting

Mount manually from command line

$ ceph-fuse -m 10.93.220.121:6789 -m 10.93.220.122:6789 -m 10.93.220.123:6789 -n client.<user> /mnt/<user> -r <path-on-remote-share>

Mounting on boot via /etc/fstab

/etc/fstab:

none    /mnt/<cephfs>    fuse.ceph    ceph.id=<user>,ceph.conf=/etc/ceph/ceph.conf,ceph.client_mountpoint=<path-on-remote-share>,_netdev,defaults,noauto,comment=systemd.automount  0 0

/etc/ceph/ceph.conf:

[global]
mon host = 10.93.220.121,10.93.220.122,10.93.220.123

Ceph kernel mode

NOTE: You won’t be able to mount cephfs in kernel mode on kernels older than 4.4 due to a bug (see Ceph best practices). A newer kernel needs to be installed (e.g. for CentOS 7 it can be installed from ELRepo).

Installing new kernel from ELRepo

yum install http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm   # add the ELRepo repository

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available   # list the kernels available in ELRepo

yum --enablerepo=elrepo-kernel install kernel-ml   # install the mainline kernel

grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate the GRUB configuration

grep vmlinuz /boot/grub2/grub.cfg   # check that the new kernel entry is present

grub2-set-default 0   # make the new kernel the default boot entry

Installing the ceph-common package

Add the ceph repository:

yum install http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Install ceph-common

yum install ceph-common

Keyring file

The key file must be placed somewhere with 600 permissions (for example, /root/ceph.mpi.keyring) and contain only the secret text:

<Key>

Note! If the key is in plain text, you must encode it using the base64 standard before using it.
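
A minimal sketch of such encoding (the <key> placeholder stands for your secret):

echo -n '<key>' | base64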

Mounting

Mount manually from command line

mount -t ceph 10.93.220.121:6789,10.93.220.122:6789,10.93.220.123:6789:<path-on-remote-share> /mnt/cephfs -o name=<user>,secretfile=/root/ceph.mpi.keyring

Mounting on boot via fstab:

10.93.220.121:6789,10.93.220.122:6789,10.93.220.123:6789:<path-on-remote-share>         /mnt/<cephfs>        ceph    name=<user>,secretfile=/root/ceph.mpi.keyring,noatime,_netdev,noauto,comment=systemd.automount     0 0