MICC JINR Multifunctional
Information and Computing
Complex



Cloud Computing User Guide

To work in the cloud, a user must be successfully authenticated by the service. Detailed information on this topic is presented in the section «Start using the cloud».

Resources request

Initially a user does not have any resources and therefore cannot create virtual machines (VMs). To get resources, a user should send a request via the special web form «Resources request» on the helpdesk site. To do this on helpdesk.jinr.ru, fill in the resources request form (see the screenshot below) available at the following path: New query → Cloud service → Cloud computing → Resources request (Новый запрос → Облачный сервис → Облачные вычисления → Запрос ресурсов).

VMs access

Access to KVM-based VMs created from shared (publicly available) templates can be obtained either via VNC (including the Sunstone web interface) or via the ssh protocol, whereas CTs are available only via ssh.

VNC

ATTENTION! VNC access to VMs is available only from the JINR network (159.93.0.0/16).

To access VMs via VNC using the Sunstone interface, a user needs to click the «VNC» button located either in the right column of the VM list or on the panel of the «Information» tab of the specific VM (see the screenshots below).

After connecting via VNC, the following window will appear and you can start working on a newly created VM:

VNC uses login/password authentication only. If the VM was created from a publicly available template, enter the same username/password that you used when logging into the cloud. If the VM was created from a user template, you need to log in via ssh and create a user (useradd surname, passwd ***********).

Access to the service from other networks (outside JINR)

In order to use the cloud service outside the JINR network, you can use several options, for example, use VPN or set up a proxy connection on your computer.

This guide describes how to connect to the service using the x2go program. To start, install the x2go client; instructions for your system can be found at wiki.x2go.org.

After installation, configure the connection: run the installed x2go client and create a new connection. In the window, on the «Session» tab, configure the following parameters: Session name, Host (lxpub.jinr.ru), Login (the login and password of your JINR account (login.jinr.ru) are used for the connection). Leave SSH port 22, select the session type «Single application» and the application «Internet browser», and click «OK».

Before pressing the «OK» button, in the «Input/Output» tab you can change the screen resolution with which the connection window will be opened. This is not a required option, but with the help of it, you can configure a more comfortable connection display.

To start the session, you need to click on the name of the created connection, enter the password and click «OK». A warning will probably pop up (because of the first connection); you just need to agree with it.

After that, a browser window will open, in which the user is granted access to internal Institute resources.

SSH

By default, all user VMs/CTs with Linux OS created from public templates and having public IP addresses are available on port 22 via the ssh protocol from the JINR, CERN, NetByNet and TMPK networks. VMs/CTs with private IP addresses (such as 192.168.0.0/16 and 10.93.0.0/16) are available only from the JINR network.

For access to a VM / CT with the same user login and password that were used to log in to the web interface of the cloud infrastructure, VMs / CTs must be created on the basis of one of the public VM templates (the templates created under oneadmin).

Having logged in to a VM/CT using an Institute account, to get superuser rights, you need to run the command «sudo su -» in the command line and enter the password from your Institute account again.

To access a VM/CT under the root user using the RSA/DSA key, you should add your public key to the profile. To do this, click on your user and select «Settings».

Next, on the «Auth» tab in the «Public SSH key» field, place your public key.

To generate an ssh key in Linux, you need to perform the following steps:

In the console (terminal) enter the command:

# ssh-keygen

  • In response to the request «Enter file in which to save the key (/root/.ssh/id_rsa):» set the path to the private key or press «Enter» to use the default path ~/.ssh/id_rsa.
  • In response to the request «Enter passphrase (empty for no passphrase):» enter the password to access the private key or press «Enter» to leave access to the private key without a password.
  • In response to the request «Enter same passphrase again:» enter the password again or press «Enter» again for passwordless access to the private key.

As a result of these actions, the directory ~/.ssh will be created if it did not exist before, and the private and public RSA keys (files named id_rsa and id_rsa.pub, respectively) will be created in it.

The contents of the public RSA-key (i.e., the id_rsa.pub file) must be inserted into the «Public SSH key» field in the settings of your profile.
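The interactive dialogue above can also be run non-interactively; a minimal sketch (the default path and an empty passphrase are assumptions for the example):

```shell
# Generate a 4096-bit RSA key pair without prompts.
# -N '' sets an empty passphrase (use a real one for better protection).
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
# Print the public key to paste into the «Public SSH key» field.
cat ~/.ssh/id_rsa.pub
```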

To generate an ssh key in Windows, you must perform the following steps:

Launch the PuTTYgen program (you can download the program from the link). Click on the «Generate» button. To generate a key, move the cursor to the empty «Key» field.

Save the keys on your computer. Copy the contents of the open part of the rsa key into the «Public SSH key» field in your profile settings.

Sometimes you may need to convert a private SSH key. For example, you created an SSH key in the PuTTY application, and the application in which you want to use the key supports the OpenSSH key format, in particular, the x2go program. To convert the key, run the PuTTYgen program. Download the private ppk key by choosing  «File» → «Load private key». Enter a code word (passphrase) if required for this key. To export a private key to the OpenSSH format, select «Conversions» → «Export OpenSSH key» in the main menu. Save the private key to a new file. As a result of the conversion, you get a private key in the OpenSSH format.

To create your own VM template, you can use any of the templates available in the cloud service. To do this, select the appropriate template and click the «Clone» button.

Specify the name of the cloned template and click the «Clone» button. If you select «Clone», the new template will reference the same disks as the original template; if you select «Clone from image», copies of the disk images will be made and the new template will reference the IDs of the new (cloned) images. Deleting templates and their images behaves correspondingly.

If you need to work with another SSH key (different from the one loaded in the profile), for example, if you log in from another machine and use another key pair, you can add a new SSH key to the template. Select a template and click «Update» on the «Context» tab, in the «Configuration» section in the «SSH public key» field, insert a new SSH key.

Creating and changing VM Templates

To open the VM/CT Template editor, you need to press the green «plus» button and select «Create».

The Template editor has 2 editing modes: «Wizard» and «Advanced».

The «Wizard» mode is used to set the basic VM parameters: CPU, RAM, DISK, IP address, etc.

The «Advanced» mode is used to set some specific parameters that are not available in the «Wizard» mode, for example, OVZ_SIZE or OSTEMPLATE.

ATTENTION! It is allowed to work simultaneously with only one window. That is, the «Setup Wizard» window is used to set the basic and specific parameters, after which the template is saved and edited in the «Advanced» window.

To save a new template, click the «Create» button.

To edit the template and save it, you must click the «Update» button.

To specify the model of the central processor of the server on which the virtual machine will be deployed, go to the «Templates» → «VMs» section, select the desired template and click the «Update» button.

Then, on the «OS & CPU» tab, in the «CPU Model» item, select «host-passthrough» from the drop-down list and click the «Update» button.

Creating a VM with a pass-through GPU card (GPU capabilities inside the VM)

To use such resources, you need access to the juno-gpu cluster and the presence of images in the ceph-image storage.

To create a VM of this type, you need to specify additional parameters in the template.

To do this, create or update the existing template. After setting the VM parameters, go to the «Other» tab, in the «PCI Devices» section, click the «+» button and select an available device from the drop-down list. Next, save the template and initiate the creation of VM from it.

To check the device connection, log in to the created VM via ssh and view the connected devices using a utility such as «pciutils».

Example for Centos:

yum install pciutils

lspci -vvv | grep NVIDIA

Create a virtual instance (either a KVM VM or an OpenVZ container)

The information below assumes that the paragraph «VMs access» has been read.

Ways to get an image for KVM VM or OpenVZ containers (CTs)

There are several methods for adding images to your cloud, namely:

  • Using the OpenNebula application store (VM only, not recommended);
  • Image request via HelpDesk (VM / CT);
  • Creating images in the OpenNebula (VM) environment.

Using the OpenNebula Marketplace 

To download the image from the store, you need to go to the «MarketPlace», select «OpenNebula Public» → «Apps» (or select «Apps» right away).

It is necessary to select a suitable disk image and a hypervisor type (KVM in case of the JINR cloud) from the list. Images for OpenVZ containers are not available via the OpenNebula Marketplace. Carefully read the description of the templates of ready-made VMs. These templates are provided and modified only by the representatives of the OpenNebula project. Download the selected image.

Further, one needs to select a datastore and assign a name for the image and the template. For KVM, use the ImageDS_ceph storage. The same rule holds for OpenVZ containers (i.e. «ImageDS_22x-openvz»), but there are no images for OpenVZ containers in the Marketplace.

Please note that when choosing the ImageDS_ceph datastore, you need to select images in the raw format, not qcow2, since a disk in the qcow2 format may cause problems when creating a VM (detailed information can be found on the website).

Request of the necessary image through HelpDesk

If the image and/or template with the required operating system is absent in the JINR cloud, it is possible to send a request via HelpDesk either to get access to the necessary image/template for a VM (KVM) or a container (OpenVZ), if they already exist in the cloud, or to ask for one to be created.

The request is created through the form on the HelpDesk website, in the section «User Support of the Cloud Service» (in Russian «Облачный сервис» → «Облачные вычисления» → «Поддержка пользователей облачного сервиса»).

In the request form, you need to indicate the identifier (i.e. login) of the user in cloud.jinr.ru, the name or ID of the image and/or template (if it already exists in the cloud and the user can see it but cannot use it) or the name and version of the required OS, its bitness and the type of virtualization (KVM or OpenVZ).

ATTENTION! Enter the user name in cloud.jinr.ru, the name and version of the operating system and type of virtualization (KVM or OpenVZ).

Creation of images in the OpenNebula environment

OpenNebula provides the possibility for users to create the necessary OS image for KVM VMs.

To create a new image, go to the «Images» section. Click the «plus» sign → «Create». Give a name and a description, select the image type «Read-only CD-ROM», the datastore «ImageDS_ceph», the image location «Upload», and specify the path to the file (.iso). Then click the «Create» button.

Create a DATABLOCK image, specify its size (for example, 10240 MB). How to create a DATABLOCK image is described in the «Creating and deleting a persistent DATABLOCK image» section.

Go to the «Templates» → «VMs» section, then create a new template and configure the settings specified below.

On the «General» tab, set the parameters of the machine depending on your quota.

On the «Storage» tab select the earlier created empty DATABLOCK disk, add a new disk and specify your disk with the OS installer.

On the «Network» tab, choose a network (for example, «22x-priv»).

On the «OS&CPU» tab, specify the architecture of the installed system, then set HDD as the first boot device and CDROM as the second one. Such a boot order allows skipping the empty HDD during the first boot and booting from CDROM; then, once the OS is installed on the HDD, the boot will start from the HDD.

On the «Input/Output» tab, enable the VNC checkbox.

On the «Context» tab → «Configuration» copy the public part of the rsa/dsa key (.pub) to access a VM (how to access a VM is described in the «VMs access» section).

Launch the VM by selecting the created and configured template, and connect to the VM through VNC. Continue the installation of the system in graphics mode.

The description of creating as well as editing VM/CT templates is provided in the «Creating and changing VM Templates» section mentioned above.

Important point! It is important to choose a proper datastore and a network: in case of KVM VMs one needs to select ImageDS_ceph and in case of OpenVZ CT – ImageDS_22x-openvz.

Creation of KVM/OpenVZ container

To create a VM, in the section «Instances» → «VMs» press the «+» button and select a VM/CT template. Specify the name and the number of instances to be created from the template. If necessary, configure other parameters, such as «Create as persistent» (makes a permanent copy of the template along with all added disks and deploys it), «Memory», «CPU», «VCPU», disk size (only for machines with KVM virtualization), «Network», etc., then click the «Create» button.

It takes a few minutes to get the VM/CT up and running (see here for more details on VM/CT statuses). As soon as the created instance gets the status «RUNNING», one can check its accessibility via the network by using, for example, the ping command:

 # ping <VM IP address> 

Once a response is received, the machine is running and ready to work.
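The check can be automated with a small wait loop (a sketch; the address is the example placeholder, replace it with your VM's IP):

```shell
# Ping once every 5 seconds until the VM answers (at most 60 attempts).
VM_IP=159.93.222.222   # example address; replace with your VM's IP
for i in $(seq 1 60); do
    if ping -c1 -W2 "$VM_IP" >/dev/null 2>&1; then
        echo "VM is up"
        break
    fi
    sleep 5
done
```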

You can perform certain actions on a VM/CT: save, start, block, suspend, stop, power off, undeploy, reboot, migrate, terminate (click here for more information).

Please note that if the machine is locked, no action can be applied to it except «unlock».

You can access the newly created VM/CT via VNC or the ssh protocol. The connection setup is described in the paragraph «VMs access».

Connect via the SSH protocol using the command:

 # ssh root@<VM/CT IP-address or hostname> 

Replace «<VM/CT IP-address or hostname>» with the IP address or hostname of your VM/CT.
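For frequent connections, the host and key can be stored in ~/.ssh/config (a sketch; the alias «myvm» and the placeholder address are illustrative, and the key path assumes the default from the key-generation steps above):

```
Host myvm
    HostName <VM/CT IP-address or hostname>
    User root
    IdentityFile ~/.ssh/id_rsa
```

After that, «ssh myvm» is enough to connect.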

Specific parameters description

OVZ_SIZE: sets the size of the hard disk for OpenVZ machines. Applies only to OpenVZ CTs.

OSTEMPLATE: OS name of the CT, which is used to correctly deploy a system environment. Applies only to OpenVZ CTs.

LOOKUP_HOSTNAME: sets the host name of the VM/CT from DNS. Possible values: «true» or «false». 

HOSTNAME: manual setting of the VM / CT network name.

ARCHIVE_CONTENT_TYPE: can be used to reduce the CT deployment time. Possible values: «ploop» or «archive». Applies only to OpenVZ CTs.

Mounting sshfs in the OpenVZ CT

1) Make sure that the fuse module is enabled inside the OpenVZ container:

[root]$ sudo cat /proc/filesystems|grep fuse

nodev fusectl

nodev fuse

2) Add the EPEL repository:

[root]$ sudo yum install yum-conf-epel

3) Install the sshfs package and its dependencies:

[root]$ yum install fuse-sshfs

4) Example of mounting a remote folder:

[root]$ sshfs root@foo.org:/root /mnt/foo.org.root

5) Example of unmounting the remote folder:

# fusermount -u mountpoint
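If the mount should survive reboots, the same sshfs mount can also be described in /etc/fstab (a sketch reusing the example paths above; this step is not part of the original instructions):

```
root@foo.org:/root  /mnt/foo.org.root  fuse.sshfs  defaults,_netdev  0 0
```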

Basic operations with the disk

Persistent / non-persistent images

A persistent image can be used by only one VM/CT at a time. If the VM/CT is deleted using the «Terminate» function, all data is saved to this image.

A non-persistent image can be used by multiple VMs simultaneously, but data is never written back to the image: changes are discarded when the VM/CT is deleted.

To make an image persistent/non-persistent, choose it (the tab «Images») and press the «Make persistent» or «Make non-persistent» button.

An image can have one of several statuses: «READY» - the image is not used by any VM/CT and is ready for use; «USED» - a non-persistent image is used by at least one VM/CT (other statuses and further details are described at the link).

Cloning images

Existing images can be cloned. This is useful when you want to make a backup of an image before modifying it, or to get a private persistent copy of an image shared by another user. Persistent images can be cloned only in the «READY» state; non-persistent images can be cloned in any state («READY» or «USED»). To clone an image, choose it and press the «Clone» button.

Setting the disk size (OpenVZ only)

OpenVZ containers can use only one disk. Its size can be set only before the container is created. By default, the size of the disk is 10.2 GB. To change it, add the OVZ_SIZE parameter, equal to the size in megabytes, to the «DISK» parameter string using the advanced editing mode.

For example, to set the «DISK» size equal to 20 GB, OVZ_SIZE should be set to the value 20480 (20*1024):

DISK=[
  IMAGE_UNAME="oneadmin",
  IMAGE="openvz_scientific_6-x86_64_krb_clst33",
  OVZ_SIZE="20480" ]

Changing the disk size of the working OpenVZ container (CT)

The procedure for changing the size of the OpenVZ container disk depends on whether the image from which the CT was created was «persistent» or not (non-persistent).

If the image was «persistent», to change the disk size, one needs to perform the following actions.

Note: Before proceeding, make sure that there are no mounted network shares/disks in the container.

Turn off the VM with the «terminate» command. Changes made to the container disk will be saved to the original image when turned off. Next, you need to edit the template, specifying the desired size in it: «Templates» → «VM» → «the desired template». Then click the «Update» button and select the «Advanced» menu.

Add the line «OVZ_SIZE=<new desired disk size>» to the «DISK» parameter (IMPORTANT: the size is specified in megabytes).

Example:

 DISK = [
   IMAGE = "Copy of centos-7-openvz",
   IMAGE_UNAME = "username",
   OVZ_SIZE = "8192" ]

With this parameter, a VM with a disk size of 8 GB will be created.

Then click the «Update» button and create a container from this updated template. In this case, all data and changes made by the time the «terminate» command was launched remain in it.

If the image was not «persistent», to change the disk size of the running container, you must first save its disk to a new image. To do this, perform the following steps.

Select the desired VM: «Instances» → «VMs».

Next, turn off the container with the command «Power off».

Wait until the VM has the «Power off» status and then save its disk by pressing the «Save» button. After that, enter a name for the new disk image and click the «Save as template» button.

Then a disk image and a template will be created. The template will be a copy of the one from which the container was created before saving; only the storage option will be replaced by the saved image, and the «Network» parameter will be reset. Therefore, go to the template, click the «Update» button, select the «Network» menu and specify the desired network.

Further, to change the disk size, perform the same actions as with the «persistent» image, i.e. go to the «Advanced» tab and add the line «OVZ_SIZE=<new desired disk size>» to the «DISK» parameter (IMPORTANT: the size is specified in megabytes).

Example:

 DISK = [
   IMAGE = "Copy of centos-7-openvz",
   IMAGE_UNAME = "username",
   OVZ_SIZE = "8192" ]

With this parameter, a VM with a disk size of 8 GB will be created.

Then click the «Update» button and create a new VM from the updated template. You can also create a new template or edit an existing one (see the section «Creating and changing VM Templates» of this manual), specifying the saved disk image.

When you create a container using a new image, all data and changes that were made by the time of saving will remain.

Using snapshots

The snapshot functionality allows you to save/restore/delete the state of a VM. It is available on the «Snapshots» tab of the «Instances» → «VMs» menu item.

ATTENTION! When the VM is stopped, all the snapshots are automatically deleted.

WARNING: the Snapshot function does not work in OpenVZ containers. The only way to save data is to use a persistent image: delete the VM using the «terminate» command and specify this disk image when creating a new VM. The data of the previous VM is saved, except for network parameters.

ATTENTION! The Snapshot function does not work in virtual machines deployed in the ceph cluster due to limitations of ceph itself. Use disk backup instead!

Saving disk images (backup disk)

To save data on the VM disk, you can use the cloud functionality. Select the virtual machine whose disk you want to back up.

Go to the «Storage» tab, select the desired image and click on the «Save As» button.

Specify the name of a new image and click the «Save As» button.

After performing this action, a new image will appear in the «Storage» → «Images» section. Go there.

Select «yes» on the «Information» tab in the «Persistent» field.

Then click «Enable».

Now you can change the previously stored (or already existing) template or create a new one using the created disk image. To do this, configure the necessary parameters in the «Templates» → «VMs» section of the «General» tab.

On the «Storage» tab, select the disk image you created.

On the «Network» tab, select the appropriate network option.

Now you can create a new virtual machine based on the template, on the disk of which there will be all the data of the original VM.

Creating and deleting a persistent DATABLOCK image

This part of the manual describes how to create an image of the DATABLOCK type and connect it to the already running VM.

This part shows an example of creating an empty disk (for example, to add workspace to the VM). First, you need to create a new image: «Storage» → «Images» → «+» sign → «Create».

In the «Name» field, specify the desired image name; in the «Type» field, select «Generic storage datablock»; as the «Datastore», select «135: ImageDS_ceph». Depending on the type of disk being created, the «Image location» section offers different options. Since this manual considers creating an empty image, select «Empty disk image» and set the required disk size. Next, mark the menu item «This image is persistent» (this is necessary so that all changes you make are saved in this image). The next obligatory step is to expand the «Advanced Options» menu item and select «BUS» → «Virtio» and «Image mapping driver» → «raw». After completing these steps, click the «Create» button.

Now you can add the created image to the running VM. To do this, go to «Instances» → «VMs» and select the VM to which the created Datablock will be added. Then, in the «Storage» menu item, click the «Attach disk» button, select the image you created, and click the «Attach» button. After performing these actions, the created Datablock image will appear in the list as another disk.

Now you can work with this image in the VM itself. To do this, go to the VM and look at the information about the connected disks; it will be displayed as a new disk. Then you can work with it as with a regular disk in the system: format it, mount it, work with data, etc.
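For example, assuming the attached Datablock appeared in the VM as /dev/vdb (the device name is an assumption; check it with lsblk), it can be formatted and mounted like any disk:

```shell
lsblk                       # identify the new disk, e.g. vdb
mkfs.ext4 /dev/vdb          # create a filesystem (this erases the disk!)
mkdir -p /data              # a mount point of your choice
mount /dev/vdb /data
df -h /data                 # verify the new space is available
```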

To delete this image, perform all the steps in reverse order. First, to avoid problems in the VM, remove all references to this disk in the system and unmount it. Then go to «Instances» → «VMs» → «VM» → «Storage» and click the «Detach» button.

After performing the above actions, you can delete the image itself, «Storage» → «Images», select the image and click on the delete button.

Adding a volatile disk to the running VM

Volatile disks are created on the fly on the target host. After the VM is terminated, the disk is disposed of.

To add a volatile disk to the running virtual machine, go to the «Storage» tab and click the «Add disk» button.

In the window that appears, select «Volatile disk», set the size, the disk type «FS», the file system format «raw»; in the advanced settings select the bus type «Virtio», and click the «Attach» button. The created disk will then appear in the list.

To check the disk connection, you need to go to the VM as root or another user with root rights. Initially, the system displays one disk:

[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        41G  997M   38G   3% /
tmpfs           999M     0  999M   0% /dev/shm
... 

Prepare the disk by creating a partition:

[root@localhost ~]# fdisk /dev/vdX 

In fdisk, enter n to create a partition, then accept the default settings, and at the end enter w to save the changes and exit.

Format the disk:

[root@localhost ~]# mkfs.ext4 /dev/vdX
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 18349824 4k blocks and 4587520 inodes
Filesystem UUID: 48eac5bd-31f0-433b-9b7a-fbad4ea8ebf1
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the disk:

[root@localhost ~]# mount  /dev/vdX /home/

Now, if you check disks in the system, both disks will be displayed:

[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        41G  997M   38G   3% /
tmpfs           200M     0  200M   0% /run/user/500
/dev/vdX        XXXG   XXXM   XXXG   1% /home
... 

Adding an additional network interface to the VM

To add another network interface to the VM, you need to do the following steps:

- select the VM to which an additional network interface will be added «Instances» → «VMs».

- select the «Network» tab and click the «Attach nic» button.

Then select the desired network and click «Attach». After that, a new interface will appear in the list of network interfaces of the VM.

In order for the interface to work, it is necessary to configure it in the VM. Below is an example of the configuration for the Linux system.

To configure the network interface in the VM, connect to it via SSH or VNC. Add the table «rt2» to «/etc/iproute2/rt_tables». To do this, you can use the «echo» command:

echo 1 rt2 >> /etc/iproute2/rt_tables

or use an editor such as «Vim» or «Nano». In this case, the entry «1 rt2» is added to the end of the file.

To see what the network interface is called, you can use the commands:

ip a l

or

ifconfig

The output of these commands also shows whether the interface has an IP address.

Next, assign an IP address and add the routes by running the following commands (the interface name ens7 and the IP 159.93.222.222 are used as examples; yours will differ):

sudo ifup ens7

ip address add 159.93.222.222/32 dev ens7

ip route add 159.93.220.0/22 dev ens7  src 159.93.222.222 table rt2

ip route add default via 159.93.222.222 dev ens7 table rt2

ip rule add from 159.93.222.222/32 table rt2

sudo ip rule add to 159.93.222.222/32 table rt2
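The same sequence can be kept as a small script so the address is typed only once (a sketch; the variable values repeat the examples above, «ip link set … up» is used in place of «ifup», and the script must be run as root):

```shell
#!/bin/sh
# Example values from the text; replace with your interface and addresses.
IFACE=ens7
IP=159.93.222.222
NET=159.93.220.0/22

# Register the rt2 table once (skip if the entry already exists).
grep -q ' rt2$' /etc/iproute2/rt_tables || echo "1 rt2" >> /etc/iproute2/rt_tables

ip link set "$IFACE" up
ip address add "$IP/32" dev "$IFACE"
ip route add "$NET" dev "$IFACE" src "$IP" table rt2
ip route add default via "$IP" dev "$IFACE" table rt2
ip rule add from "$IP/32" table rt2
ip rule add to "$IP/32" table rt2
```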

In order to remove the interface from the VM, first you need to remove the IP address from the network interface:

 ip addr del 159.93.222.222/32 dev ens7

ip addr flush dev ens7

After that, delete the interface through the web interface of the cloud infrastructure: «Instances» → «VMs», and on the «Network» tab click the «Detach» button on the interface to be deleted.

Creating a virtual network

To create a private network consisting of one or more addresses, you need to perform the following steps:

Go to the «Network» tab, select «Virtual network», and select the desired network: with internal (ID: 8) or external (ID: 9) addresses. Click «+» → «Reserve».

In the opened window, choose the number of reserved addresses, select «Add a new virtual network», and specify the name of the network. In «Advanced Settings», select a range from the list and specify the initial IP from the required range. Then click the «Reserved» button.

Creating a separate subnet, even one consisting of a single IP address, is useful because the user can thereby fix a specific IP address for his host, which eliminates the possibility of this IP address being taken by another VM, for example, when the user recreates his VM.

Cloud Storage User Guide

Cloud Storage resources request 

Initially a user does not have any resources. To get resources, a user should send a request via the special web form «Resources request» on the helpdesk site. To do this on helpdesk.jinr.ru, fill in the resource request form (see the screenshot below) available at the following path: New query → Cloud service → Cloud storage → Resources request (Новый запрос → Облачный сервис → Облачное хранилище → Запрос ресурсов).

How to use CephFS

Ceph-fuse

Add the EPEL repository

yum install epel-release

Add the ceph-repository

 yum install http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Install ceph-common and ceph-fuse packages

yum install ceph-common ceph-fuse

Keyring file

Create a file /etc/ceph/ceph.client.<user>.keyring with the following content:

[client.<user>]

        key = <key>

Mounting

Mount manually from the command line

$ ceph-fuse -m 10.93.220.121:6789 -m 10.93.220.122:6789 -m 10.93.220.123:6789 -n client.<user> /mnt/<user> -r <path-on-remote-share>

Mounting on boot via /etc/fstab

/etc/fstab:

none    /mnt/<cephfs>    fuse.ceph    ceph.id=<user>,ceph.conf=/etc/ceph/ceph.conf,ceph.client_mountpoint=<path-on-remote-share>,_netdev,defaults,noauto,comment=systemd.automount  0 0

/etc/ceph/ceph.conf:

[global]

mon host = 10.93.220.121,10.93.220.122,10.93.220.123

Ceph kernel mode

NOTE: You will not be able to mount cephfs in kernel mode on kernels older than 4.4 due to a bug (see Ceph best practices). A newer kernel needs to be installed (e.g. for CentOS 7 it can be installed from ELRepo).

Installing a new kernel from ELRepo

yum install http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

yum --enablerepo=elrepo-kernel install kernel-ml

grub2-mkconfig -o /boot/grub2/grub.cfg

grep vmlinuz /boot/grub2/grub.cfg

grub2-set-default 0

Installing the ceph-common package

yum install http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Install ceph-common

yum install ceph-common

Keyring file

The key file must be placed somewhere with 600 permissions (for example, /root/ceph.mpi.keyring) and contain only the secret text:

<Key>

NOTE: If the key is in clear (plain) form, encode it using base64 before use.
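For example, a plain-text secret can be encoded like this (the string is a dummy placeholder, not a real key):

```shell
# Encode the secret; replace the string with your actual key material.
# -n keeps echo from appending a newline to the encoded data.
echo -n 'my-plain-secret' | base64
```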

Mounting

Mount manually from the command line

mount -t ceph 10.93.220.121:6789,10.93.220.122:6789,10.93.220.123:6789:<path-on-remote-share> /mnt/cephfs -o name=<user>,secretfile=/root/ceph.mpi.keyring

Mounting on boot via fstab:

10.93.220.121:6789,10.93.220.122:6789,10.93.220.123:6789:<path-on-remote-share>         /mnt/<cephfs>        ceph    name=<user>,secretfile=/root/ceph.mpi.keyring,noatime,_netdev,noauto,comment=systemd.automount     0 0