
Cloud Computing User's Guide

Resource request

Initially a user doesn’t have any resources and hence can’t create VMs. To get some, the user has to send a resource request via the special web form “Request resources” on the helpdesk site. To do this, on the helpdesk.jinr.ru site, fill in the resource request form (see the picture below) available at the following path: New query -> Cloud service -> Cloud computing -> Resource request (Новый запрос -> Облачный сервис -> Облачные вычисления -> Запрос ресурсов).

VM access

KVM VMs created from shared (publicly available) templates can be accessed remotely either via VNC or the ssh protocol, whereas CTs can be accessed via ssh only.

VNC

To access a VM via VNC using the Sunstone interface, the user needs to click the “VNC” button located either in the right column of the VM list or in the top right corner of the “Info” tab of the particular VM (see the pictures below).

SSH

By default, all users’ VMs/CTs created from predefined and shared templates with a Linux OS and with public IP addresses are accessible on port 22 via the ssh protocol from the JINR, CERN, NetByNet and TMPK networks. VMs/CTs with private IP addresses (such as 192.168.0.0/16 and 10.93.0.0/16) are accessible from the JINR network only.

To be able to use Kerberos credentials to log in to the VM via ssh, the VM itself has to be created from one of the predefined and shared images (ID: 320, 315, 314, 312, 309) and the parameter UNAME=$UNAME has to be added to the template (“Wizard” -> “Context” -> “Custom Vars” tab) as shown in the picture below.

Then, to gain superuser privileges inside the VM, the user can run the ‘sudo su -’ command.
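A complete login session might then look like the following sketch (the username and VM address are placeholders; the Kerberos realm name is an assumption):

$ kinit <username>@JINR.RU            # obtain a Kerberos ticket (realm is an assumption)
$ ssh <username>@<VM IP or hostname>
$ sudo su -                           # become root inside the VM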

If the user would like to log into the VM as root using RSA/DSA keys, the user’s public key has to be added to the VM template (“Wizard” -> “Context” tab). The steps to achieve that are described below.

1. The user has to select a template he wants to clone and press the «Clone» button.

2. Then the user has to enter the name of the cloned template.

3. After that, the user has to select the created template and click the «Update» button.

4. The user has to add his RSA/DSA public key on the «Context» tab.

To generate ssh-keys, follow these steps:

  • In the console (terminal) type the command:

 

# ssh-keygen

  • When asked “Enter file in which to save the key (/root/.ssh/id_rsa):”, set the path for the private key or press “Enter” to accept the proposed default path ~/.ssh/id_rsa.
  • When asked “Enter passphrase (empty for no passphrase):”, enter a password to protect the private key or press “Enter” to leave the private key without a password.
  • When asked “Enter same passphrase again:”, enter the password again and press “Enter”, or just press “Enter” for passwordless access to the private key.

As a result of these actions, the ~/.ssh directory will be created if it didn’t exist before, and the private and public RSA keys (files named id_rsa and id_rsa.pub, respectively) will be created in it.

The contents of the public RSA key (i.e. the file ~/.ssh/id_rsa.pub) have to be pasted into the “Public Key” field of the VM template web form of the cloud web interface, and the «Update» button has to be pressed.
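For example, the key can be printed to the terminal for copying, and the connection can later be tested by pointing ssh explicitly at the private key (the VM address is a placeholder):

$ cat ~/.ssh/id_rsa.pub
$ ssh -i ~/.ssh/id_rsa root@<VM IP or hostname>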

 

 

Creating and changing VM Templates

To open the VM Template editor you have to press the green button with the “+” sign on it.
The Template editor has two editing modes: Wizard and Advanced.
Wizard mode is used to set the basic VM parameters: CPU, RAM, DISK, IP and so on.
Advanced mode is used to set specific parameters that are not available in Wizard mode, such as OVZ_SIZE or OSTEMPLATE.
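In Advanced mode the template is edited as plain text in OpenNebula’s template syntax. A minimal sketch (all names and values below are placeholders, not actual JINR cloud resources):

CPU = "1"
MEMORY = "2048"
DISK = [ IMAGE = "<image name>", IMAGE_UNAME = "<image owner>" ]
NIC = [ NETWORK = "<network name>" ]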

Create virtual instance (either KVM VM or OpenVZ container)

The information below assumes the “VM access” section has been read.

Ways to get an image for KVM VM or OpenVZ containers (CTs)

If a needed KVM or OpenVZ image is missing in the JINR cloud but a user strongly needs it, there are a few options that can be followed:

  • Download a required KVM image from the OpenNebula Marketplace (note that this way is valid for KVM VMs only);
  • Send a request for an image through HelpDesk (KVM VMs / OpenVZ containers);
  • Create a KVM image in the JINR cloud using an installation ISO disk.

OpenNebula Marketplace

The OpenNebula Marketplace can be accessed through the main menu of the JINR cloud web interface.
It is necessary to select a suitable disk image and a hypervisor type (KVM in the case of the JINR cloud) from the list, and then press the “Import” button. Images for OpenVZ containers are not available via the OpenNebula Marketplace. Study the descriptions of the ready-made VM templates attentively: these templates are provided and changed only by the representatives of the OpenNebula project.

Further, one needs to select a datastore and assign a name for the image and for the template. For KVM, use a datastore whose name contains “kvm” (for example, “ImagesDS_22x-kvm-priv”). The same naming rule applies to OpenVZ containers (for example, “ImageDS_22x-openvz”), but there are no images for OpenVZ containers on the Marketplace.

Requesting a necessary image through HelpDesk

If an image and/or a template with the required operating system is absent in the JINR cloud, it is possible to send a request via HelpDesk either to get access to the necessary image/template for a VM (KVM) or a container (OpenVZ), if it already exists in the cloud, or to ask for one to be created.

The request is created through the form on the HelpDesk website, in the section “User Support of a Cloud Service” (in Russian: “Поддержка пользователей облачного сервиса”).

It is necessary to specify the following information in the request:

  • the identifier (i.e. login) the user has in the JINR cloud;
  • the name or ID of the image and/or template (if it already exists in the cloud and the user can see it but can’t use it), or the name and version of the required OS;
  • the type of virtualization (KVM or OpenVZ).

Attention! Specify the user name, the name and version of the operating system, and the type of virtualization (KVM or OpenVZ).

Creation of images in the OpenNebula environment

Some advanced OpenNebula Sunstone views provide a possibility for users to create a necessary OS image for a KVM VM.

The process is the following:

  • Go to the section “Virtual resources” -> “Images”;
  • Create a new image: specify the image type as CD-ROM/DVD, select the corresponding datastore, press the “Upload” button and specify the path to the ISO file;
  • Create a DATABLOCK image and specify its size (for example, 10240 MB);
  • Go to the section “Virtual resources” -> “Templates”, then create a new template and configure the following settings:

The “General” tab — set the parameters of the machine depending on your quota.

The “Storage” tab — select the earlier created empty DATABLOCK disk, then add a new disk and specify the disk with the OS installer.

The “Network” tab — choose a network (for example, “22x-priv”).

The “OS Booting” tab — specify the architecture of the installed system, then set HDD as the first boot device (“1st Boot”) and CDROM as the second one (“2nd Boot”). Such a boot sequence lets the first boot skip the empty HDD and boot from the CDROM; later, when the OS is already installed on the HDD, the boot will start from the HDD.

The “Input/Output” tab — set the VNC checkbox.

The “Context” tab — in the Network and SSH part, insert the content of the public part of the rsa/dsa key (the .pub file) to access the VM (how to access a VM is described in the “VM access” section).

  • Launch the VM by selecting the created and tuned template, and connect to the VM through VNC. Continue the installation of the system in graphic mode. (A command-line sketch of the same flow is given below.)
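For reference, a rough command-line equivalent of the steps above using OpenNebula’s CLI tools (a sketch only; all names, IDs, paths and sizes are placeholders and may differ in the JINR cloud):

$ oneimage create --name "os-install-iso" --type CDROM --path /tmp/install.iso -d <kvm images datastore>
$ oneimage create --name "os-disk" --type DATABLOCK --size 10240 -d <kvm images datastore> --persistent
$ onetemplate create my_template.txt          # template text prepared as on the tabs above
$ onetemplate instantiate <template ID> --name "my-vm"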

The description of creating as well as editing the VM/CT templates is provided in the “Creating and changing VM Templates” section above.

Important point!

Notice it is important to choose a proper datastore and network: in the case of a KVM VM one needs to select a corresponding item whose name contains “kvm”, and in the case of an OpenVZ CT – “ovz”.

Creation of a KVM VM / OpenVZ container
In the «Virtual Machine» section you need to choose a template and specify the VM name and the number of new instances to be created from the chosen template, as shown in the pictures below.

It takes a few minutes to get a VM/CT up and running (see here for more details on VM/CT statuses). As soon as the created instance gets the status “RUNNING”, one can check its network accessibility by using, for example, the ping command:

# ping <VM IP address>

Once a response is received, the machine is up and ready to work.
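If the OpenNebula command-line tools are available, the instance state can also be checked with the onevm utility (the VM ID is a placeholder):

$ onevm list            # list your instances and their states
$ onevm show <VM ID>    # detailed info, including the assigned IP address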

From then on, one can access the VM/CT either through VNC or the ssh protocol (see the “VM access” section above).

After connecting through VNC, the following window appears, and you can start working on the created VM.

Connect via the ssh protocol with the command:

# ssh root@<VM/CT IP-address or hostname>

Replace “<VM/CT IP-address or hostname>” with the IP address or hostname of your VM/CT.

Specific parameters description

OVZ_SIZE: sets the size of the hard disk (in megabytes) for OpenVZ containers. Applies only to OpenVZ CTs.
OSTEMPLATE: the name of the OS inside the CT, used to correctly deploy the system environment. Applies only to OpenVZ CTs.
LOOKUP_HOSTNAME: sets the hostname of the VM/CT from DNS. Possible values: “true” or “false”.
ARCHIVE_CONTENT_TYPE: can be used to reduce CT deployment time. Possible values: “ploop” or “archive”. Applies only to OpenVZ CTs.
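In Advanced editing mode these parameters appear as plain template lines, for example (a sketch; the image name and values are illustrative):

LOOKUP_HOSTNAME = "true"
ARCHIVE_CONTENT_TYPE = "ploop"
OSTEMPLATE = "<OS template name>"
DISK = [ IMAGE = "<image name>", OVZ_SIZE = "20480" ]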

Mounting sshfs inside OpenVZ CT

1) Make sure the fuse module is enabled inside OpenVZ container:

# cat /proc/filesystems | grep fuse

nodev fusectl

nodev fuse

2) Enable EPEL repository:

# yum install yum-conf-epel

3) Install sshfs package and its dependencies:

# yum install fuse-sshfs

4) Example of mounting a remote folder:

# sshfs root@foo.org:/root /mnt/foo.org.root

5) Example of unmounting a remote folder:

# fusermount -u /mnt/foo.org.root
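To make the mount persistent across reboots, an /etc/fstab entry can be used (a sketch, assuming the same host and paths as in the example above and key-based ssh access):

root@foo.org:/root    /mnt/foo.org.root    fuse.sshfs    defaults,_netdev,IdentityFile=/root/.ssh/id_rsa    0 0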

Using snapshots for OpenVZ containers

This feature can be used to save the complete state of your VM; it allows creating, restoring and deleting snapshots.
The feature can be found on the “Snapshot” tab of the “Virtual Machine” submenu.

Attention!!! When the VM is stopped, all its snapshots are automatically deleted.

Attention: the snapshot function does not work for virtual machines deployed in a Ceph cluster due to limitations of Ceph itself. Use disk backups instead!

Basic operations with disk

Persistent / non-persistent images

A persistent image can be used by only one single VM/CT. If the VM/CT is deleted using the “Shutdown” function, all the data is saved to that image.
A non-persistent image can be used by multiple VMs simultaneously, but data is never written back to the image after the VMs/CTs are shut down.
To make an image persistent / non-persistent, choose it (on the “Images” tab) and press the “Make persistent” or “Make non-persistent” button.
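The same can be done from the command line with the oneimage utility (the image ID is a placeholder):

$ oneimage persistent <image ID>
$ oneimage nonpersistent <image ID>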
Images can be in one of the following states (more details on image states can be found here):

  • LOCKED – The image file is being copied or created in the Datastore.
  • READY – Image ready to be used.
  • USED – Non-persistent Image used by at least one VM. It can still be used by other VMs.
  • USED_PERS – Persistent Image in use by a VM. It cannot be used by new VMs.
  • DISABLED – Image disabled by the owner, it cannot be used by new VMs.
  • ERROR – Error state, a FS operation failed.
  • DELETE – The image is being deleted from the Datastore.

Cloning images

Existing images can be cloned. This is useful when you want to make a backup of an image before modifying it, or to get a private persistent copy of an image shared by another user. Persistent images can be cloned only when they are in the “READY” state; non-persistent images can be cloned in any state. To clone an image, choose it and press the “Clone” button.
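From the command line, the same can be sketched as (the ID and name are placeholders):

$ oneimage clone <image ID> <new image name>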

Setting the disk size (OpenVZ only)

OpenVZ containers can use only one single disk. Setting its size is possible only before the VM is created. By default the size of the disk is 10.2 GB. To change it, you have to add the OVZ_SIZE parameter, equal to the size in megabytes, to the DISK parameter string using the Advanced editing mode. For example, to set the disk size to 20 GB, OVZ_SIZE should be set to 20480 (20*1024):

DISK = [
  IMAGE_UNAME = "oneadmin",
  IMAGE = "openvz_scientific_6-x86_64_krb_clst33",
  OVZ_SIZE = "20480" ]

Changing a disk size of a working OpenVZ container (CT)

The procedure for changing the size of an OpenVZ container disk depends on whether the image from which the CT was created is persistent or non-persistent.

If the image is persistent, then to change the disk size one needs to perform the following actions.

 

Note: Before proceeding, make sure that there are no mounted network shares/disks in the container.

 

Turn off the VM with the “Terminate” command; the changes made to the container disk will be saved to the original image on shutdown. Next, edit the template, specifying the desired size in it: “Templates” → “VMs” → the desired template. Then click the “Update” button, select the “Advanced” mode and add OVZ_SIZE = "<new desired disk size>" to the “DISK” parameter (IMPORTANT: the size is specified in megabytes).

Example:

DISK = [
  IMAGE = "Copy of centos-7-openvz",
  IMAGE_UNAME = "username",
  OVZ_SIZE = "8192" ]

With this parameter, a VM with a disk size of 8 GB will be created.
Then click the “Update” button and create a container from the updated template. In that case, all the data and changes made before the “Terminate” command was issued remain in it.
If the image is not persistent, then to change the disk size of the running container, you must first save its disk to a new image. To do that, the following steps need to be performed.
Select the desired VM: “Instances” → “VMs”.

Next, turn off the container with the “Power off” command.

Wait until the VM has the status «Poweroff», then save its disk by pressing the “Save” button. After that, enter a name for the new disk image and click the “Save as template” button; the disk image and a template will then be created. The template will be a copy of the one from which the container was created before saving, except that the storage option will be replaced by the saved image and the “Network” parameter will be reset. Therefore, you need to go to the template, click the “Update” button, select the “Network” menu and specify the desired network. Further, to change the disk size, the same actions need to be performed as with the persistent image, i.e. go to the “Advanced” tab and add OVZ_SIZE = "<new desired disk size>" to the “DISK” parameter (IMPORTANT: the size is specified in megabytes).

Example:

DISK = [
  IMAGE = "Copy of centos-7-openvz",
  IMAGE_UNAME = "username",
  OVZ_SIZE = "8192" ]

With this parameter, a VM with a disk size of 8 GB will be created.

Then click the "Update" button and create a new VM from updated template. You can also create a new one or edit an existing template (see section Creating and changing VM Templates of the manual), specifying the saved disk image. When you create a container using a new image, all the data and changes that were made by the time of saving remain.

Creating and deleting a persistent image of type DATABLOCK

This part of the manual describes how to create an image of the DATABLOCK type and connect it to an already running VM.

The example below creates an empty disk (for example, to add workspace to a VM). First, you need to create a new image: «Storage» → «Images» → the «plus» sign.

In the «Name» field, specify the desired image name; in the «Type» field, select «Generic storage datablock»; and as the «Datastore» select «135: ImageDS_ceph». The «Image location» section of the web form depends on the use case: since this manual considers creating an empty image, select «Empty disk image» and set the required disk size. Next, check the «This image is persistent» item (this is necessary so that all the data you write is saved in this image). The next step, without fail, is to expand the «Advanced Options» item and select «BUS» → «Virtio» and «Image mapping driver» → «qcow2». After completing these steps, click the «Create» button.

Now you can attach the created image to the running VM. To do this, go to «Instances» → «VMs» and select the VM to which the created DATABLOCK will be added. After that, in the «Storage» menu item, click the «Attach disk» button, select the image you created, then click the «Attach» button. After performing these actions, the created DATABLOCK image appears in the list as another disk.

Now you can work with this image inside the VM itself. To do this, go into the VM and look at the information about the connected disks; the image will be displayed as a new disk. Then you can work with it as with a regular disk in the system: format it, mount it, work with data, etc., as sketched below.
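A minimal sketch of these steps inside the VM, assuming the new disk shows up as /dev/vdb (check the actual name with lsblk first):

# lsblk                           # find the new disk (e.g. /dev/vdb)
# mkfs.ext4 /dev/vdb              # format it (this destroys any data on the disk)
# mkdir -p /mnt/datablock
# mount /dev/vdb /mnt/datablock   # mount it and start working with data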

To delete this image, you need to perform all the steps in reverse order. First, so that there are no problems in the VM, remove everything in the system that references this disk and unmount it. Then go to «Instances» → «VMs» → the VM → «Storage» and click the «Detach» button.

After performing the above actions, you can delete the image itself: «Storage» → «Images», select the image and click the delete button.

Adding an additional network interface to the VM

To add another network interface to the VM, you need to do the following steps:

- select the VM to which the additional network interface will be added: «Instances» → «VMs»;

- select the «Network» tab and click the «Attach nic» button.

Then select the desired network and click «Attach». After that, the new interface will appear in the list of network interfaces of the VM.

In order for the interface to work, it is necessary to configure it in the VM. Below is an example of the configuration for a Linux system.

To configure the network interface in the VM, you need to connect to it via SSH or VNC. Add the table «rt2» to /etc/iproute2/rt_tables. To do this, you can use the echo command:

echo "1 rt2" >> /etc/iproute2/rt_tables

or use an editor such as Vim or Nano. In either case, the entry is added to the end of the file and should be of the form "1 rt2".

To see what the network interface is called, you can use the commands:

ip a l  

or

ifconfig  

The output of these commands also shows whether there is an IP address on the interface.

Next, you need to assign an IP address and add routes by running the following commands (the interface name ens7 and the IP address 159.93.222.222 are used as an example and will differ in your case; <gateway IP> is the gateway of the attached network):

ip address add 159.93.222.222/32 dev ens7

ip route add 159.93.220.0/22 dev ens7 src 159.93.222.222 table rt2

ip route add default via <gateway IP> dev ens7 table rt2

ip rule add from 159.93.222.222/32 table rt2
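These settings do not survive a reboot. On a RHEL-family system with the classic network-scripts, they can be persisted in per-interface files (a sketch under that assumption; adjust names and addresses to your setup):

# /etc/sysconfig/network-scripts/route-ens7
159.93.220.0/22 dev ens7 src 159.93.222.222 table rt2
default via <gateway IP> dev ens7 table rt2

# /etc/sysconfig/network-scripts/rule-ens7
from 159.93.222.222/32 table rt2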

In order to remove the interface from the VM, first you need to remove the IP address from the network interface:

ip addr del 159.93.222.222/32 dev ens7

ip addr flush dev ens7

After that, delete the interface through the web interface of the cloud infrastructure: «Instances» → «VMs». On the «Network» tab, click the «Detach» button next to the interface to be removed.

Cloud Storage User's Guide

Cloud Storage resource request

Initially a user doesn’t have any resources. To get some, the user has to send a resource request via the special web form “Request resources” on the helpdesk site. To do this, on the helpdesk.jinr.ru site, fill in the resource request form (see the picture below) available at the following path: New query -> Cloud service -> Cloud storage -> Resource request (Новый запрос -> Облачный сервис -> Облачное хранилище -> Запрос ресурсов).

How to use CephFS

Ceph-fuse

Add the EPEL repository:

yum install epel-release 

Add the Ceph repository:

yum install http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Install the ceph-common and ceph-fuse packages:

yum install ceph-common ceph-fuse

Keyring file

Create a file /etc/ceph/ceph.client.<user>.keyring with the following content:

[client.<user>]

        key = <key>

Mounting

Mount manually from the command line:

$ ceph-fuse -m 10.93.220.121:6789,10.93.220.122:6789,10.93.220.123:6789 -n client.<user> -r <path-on-remote-share> /mnt/<user>
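If the mount succeeded, it is visible to the usual tools (the mountpoint is the one used above):

$ df -h /mnt/<user>
$ ls /mnt/<user>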

Mounting on boot via /etc/fstab

/etc/fstab:

none    /mnt/<cephfs>    fuse.ceph    ceph.id=<user>,ceph.conf=/etc/ceph/ceph.conf,ceph.client_mountpoint=<path-on-remote-share>,_netdev,defaults,noauto,comment=systemd.automount  0 0

/etc/ceph/ceph.conf:

[global]

mon host = 10.93.220.121,10.93.220.122,10.93.220.123

Ceph kernel mode

NOTE: You won't be able to mount CephFS in kernel mode on kernels older than 4.4 due to a bug (see the Ceph best practices). A newer kernel needs to be installed (e.g. for CentOS 7 it can be installed from ELRepo).

Installing new kernel from ELRepo

yum install http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

yum remove -y kernel-{devel,tools,tools-libs}

yum --disablerepo="*" --enablerepo="elrepo-kernel" install kernel-ml kernel-ml-tools

grub2-mkconfig -o /boot/grub2/grub.cfg

grep vmlinuz /boot/grub2/grub.cfg

grub2-set-default 0
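After setting the default boot entry, reboot the machine and verify that the new kernel is running:

reboot

uname -r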

Installing the ceph-common package

Add the Ceph repository:

yum install http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Install ceph-common:

yum install ceph-common

Mounting

A keyring file containing just the secret key text needs to be placed somewhere with 600 permissions (e.g. /root/ceph.mpi.keyring).
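For example (the key value is a placeholder):

echo "<key>" > /root/ceph.mpi.keyring

chmod 600 /root/ceph.mpi.keyring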

Mount manually from the command line:

mount -t ceph 10.93.220.121:6789,10.93.220.122:6789,10.93.220.123:6789:<path-on-remote-share> /mnt/cephfs -o name=<user>,secretfile=/root/ceph.mpi.keyring

Mounting on boot via /etc/fstab:

10.93.220.121:6789,10.93.220.122:6789,10.93.220.123:6789:<path-on-remote-share>         /mnt/<cephfs>        ceph    name=<user>,secretfile=/root/ceph.mpi.keyring,noatime,_netdev,noauto,comment=systemd.automount     0 0