MICC JINR Multifunctional Information and Computing Complex


Quick admin guide

Introduction

After following this guide, users will have a working OpenNebula with a graphics interface (Sunstone), at least one hypervisor (host) and a running virtual machine. It is useful when setting up pilot clouds, for quickly testing new features and as a base deployment to build a large infrastructure.

There are two separate roles during the installation: the Frontend and the Nodes. The Frontend server runs the OpenNebula services, and the Nodes execute the virtual machines. Note that it is possible to follow this guide with just one host combining both the Frontend and Node roles on a single server. However, it is recommended to create virtual machines on hosts with virtualization extensions. To test whether your host supports virtualization extensions, run:

 

# grep -E 'svm|vmx' /proc/cpuinfo

If you do not get any output, you probably do not have virtualization extensions supported/enabled on your server.

Package Layout
opennebula-server: OpenNebula Daemons
opennebula: OpenNebula CLI commands
opennebula-sunstone: OpenNebula’s web GUI
opennebula-java: OpenNebula Java API
opennebula-node-kvm: Dependencies required by OpenNebula on the nodes
opennebula-gate: Sends information from Virtual Machines to OpenNebula
opennebula-flow: Manages OpenNebula services
opennebula-context: Package for OpenNebula Guests

Additionally, the opennebula-common and opennebula-ruby packages exist, but they are intended to be used only as dependencies.

Warning

In order to avoid problems, you should disable SELinux on all machines: the Frontend and the Nodes.

 

 # vi /etc/sysconfig/selinux
...
SELINUX=disabled
...
# setenforce 0
# getenforce
Permissive

Warning
Some commands may fail depending on your iptables/firewall configuration. When testing, disable the firewall entirely just to rule it out.

Frontend Installation 

Note
Commands prefixed by # are meant to be run as root. Commands prefixed by $ must be run as oneadmin.

Install the repository

Enable the EPEL repository:

# yum install epel-release

Add the OpenNebula repository:

# cat << EOT > /etc/yum.repos.d/opennebula.repo

[opennebula]

name=opennebula

baseurl=http://downloads.opennebula.org/repo/4.12/CentOS/7/x86_64/

enabled=1

gpgcheck=0

EOT

Install the required packages

A complete installation of OpenNebula will have at least the opennebula-server and opennebula-sunstone packages:

# yum install opennebula-server opennebula-sunstone

You should run install_gems to install all the gem dependencies. Choose CentOS/RedHat if prompted:

# /usr/share/one/install_gems

lsb_release command not found. If you are using a RedHat based distribution install redhat-lsb

Select your distribution or press enter to continue without

installing dependencies.

 1. Ubuntu/Debian

 2. CentOS/RedHat

Configure and start the services

There are two main processes that must be started: the main OpenNebula daemon, oned, and the graphical user interface, sunstone.

Sunstone listens only on the loopback interface by default for security reasons. To change this, edit /etc/one/sunstone-server.conf and change :host: 127.0.0.1 to :host: 0.0.0.0.
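
For example, the change can be applied non-interactively with sed (assuming the stock configuration file shipped with the package):

# sed -i 's/:host: 127.0.0.1/:host: 0.0.0.0/' /etc/one/sunstone-server.conf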

Now we can start the services: 

# systemctl enable opennebula

# systemctl start opennebula

# systemctl enable opennebula-sunstone

# systemctl start opennebula-sunstone

Configure NFS
Note

Skip this section if you are using a single server for both the frontend and worker node roles.

Export /var/lib/one/ from the frontend to the worker nodes. To do so, add the following to the /etc/exports file in the frontend:

/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)

Refresh the NFS exports by doing:

# systemctl restart nfs.service
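
To verify that the directory is exported, run exportfs with no arguments; it prints the current export table:

# exportfs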

Configure the SSH Public Key

OpenNebula will need to SSH passwordlessly from any node (including the frontend) to any other node.

Add the following snippet to ~/.ssh/config as oneadmin so that SSH does not prompt to add hosts to the known_hosts file:

 

# su - oneadmin

$ cat << EOT > ~/.ssh/config

Host *

    StrictHostKeyChecking no

    UserKnownHostsFile /dev/null

EOT

$ chmod 600 ~/.ssh/config

Nodes Installation 

Install the repository

Add the OpenNebula repository:

 

# cat << EOT > /etc/yum.repos.d/opennebula.repo

[opennebula]

name=opennebula

baseurl=http://downloads.opennebula.org/repo/4.12/CentOS/7/x86_64/

enabled=1

gpgcheck=0

EOT

Install the required packages

# yum install opennebula-node-kvm

Start the required services:

# systemctl enable messagebus.service

# systemctl start messagebus.service

# systemctl enable libvirtd.service

# systemctl start libvirtd.service

# systemctl enable nfs.service

# systemctl start nfs.service

Configure the Network
 
Warning
 
Back up all the files that are modified in this section before making changes to them.

You will need to have your main interface connected to a bridge. The following example uses ens3, but the name of the interface may vary. OpenNebula requires the name of the bridge to be the same on all nodes.

To do so, substitute /etc/sysconfig/network-scripts/ifcfg-ens3 with:

DEVICE=ens3

BOOTPROTO=none

NM_CONTROLLED=no

ONBOOT=yes

TYPE=Ethernet

BRIDGE=br0

And add a new /etc/sysconfig/network-scripts/ifcfg-br0 file.

If you are using DHCP for your ens3 interface, use this template:

DEVICE=br0

TYPE=Bridge

ONBOOT=yes

BOOTPROTO=dhcp

NM_CONTROLLED=no

If you are using a static IP address, use this template instead:

DEVICE=br0

TYPE=Bridge

IPADDR=<YOUR_IPADDRESS>

NETMASK=<YOUR_NETMASK>

ONBOOT=yes

BOOTPROTO=static

NM_CONTROLLED=no

After these changes, restart the network:

# systemctl restart network.service

Configure NFS

Note

Skip this section if you are using a single server for both the frontend and worker node roles.

Mount the datastore export. Add the following to your /etc/fstab:

192.168.1.1:/var/lib/one/  /var/lib/one/  nfs   soft,intr,rsize=8192,wsize=8192,noauto

Note

Replace 192.168.1.1 with the IP address of the frontend.

Mount the NFS share:

# mount /var/lib/one/

If the above command fails or hangs, it can be a firewall issue.

Basic Usage

Note

All the operations in this section can be performed using Sunstone instead of the command line. Point your browser to: http://frontend:9869.

The default password for the oneadmin user, which is randomly generated on every installation, can be found in ~/.one/one_auth.
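
As oneadmin, you can display it with:

$ cat ~/.one/one_auth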

Interact with OpenNebula from the oneadmin account on the frontend. We will assume that all the following commands are run from this account. To log in as oneadmin, run:

# su - oneadmin

Adding a Host

To start running VMs, you should first register a worker node for OpenNebula.

Run this command for each node. Replace localhost with your node’s hostname.

$ onehost create localhost -i kvm -v kvm -n dummy

Run the onehost list command. If it fails, you probably have some problems with your ssh configuration. Look at /var/log/one/oned.log.
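
A quick way to verify (using the log path mentioned above):

$ onehost list

$ tail -n 20 /var/log/one/oned.log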

Adding virtual resources

Once it works, you need to create a network, an image and a virtual machine template.

To create a network, we need to create a network template file mynetwork.one that contains:
 
NAME = "private"

BRIDGE = br0

AR = [

    TYPE = IP4,

    IP = 192.168.0.100,

    SIZE = 3

    ]

Note

Replace the address range with free IPs in your host network. You can add more than one address range.
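
For example, a second range can be appended to the same template (the addresses below are placeholders; adjust them to your network):

AR = [

    TYPE = IP4,

    IP = 192.168.0.200,

    SIZE = 3

    ]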

Now we can move ahead and create resources in OpenNebula:

$ onevnet create mynetwork.one

$ oneimage create --name "CentOS-7-one-4.8" \

    --path http://marketplace.c12g.com/appliance/53e7bf928fb81d6a69000002/download \

    --driver qcow2 \

    -d default

$ onetemplate create --name "CentOS-7" \

    --cpu 1 --vcpu 1 --memory 512 --arch x86_64 \

    --disk "CentOS-7-one-4.8" \

    --nic "private" \

    --vnc --ssh --net_context

Note

If the oneimage create command complains that there is not enough space available in the datastore, you can disable the datastore capacity check in OpenNebula by setting DATASTORE_CAPACITY_CHECK = "no" in /etc/one/oned.conf. You need to restart OpenNebula after changing it.

You will need to wait until the image is ready to be used. Monitor its state by running oneimage list.

In order to dynamically add ssh keys to Virtual Machines, add your ssh key to the user template:

$ EDITOR=vi oneuser update oneadmin

Add a new line like the following to the template:

SSH_PUBLIC_KEY="ssh-dss AAAAB3NzaC1kc3MAAACBANBWTQmm4Gt..."

Substitute the value above with the output of:

cat ~/.ssh/id_dsa.pub

Running a Virtual Machine

To run a Virtual Machine, you will need to instantiate a template:

$ onetemplate instantiate "CentOS-7"

Run onevm list and watch the virtual machine going from PENDING to PROLOG to RUNNING. If the VM fails, check the reason in the log: /var/log/one/<VM_ID>/vm.log.

Note

If it stays too long in the pend state, you can check why by running: onevm show <vmid> | grep ^SCHED_MESSAGE. If it reports that no datastores have enough capacity for the VM, you can force a manual deployment by running: onevm deploy <vmid> <hostid>.

OpenVZ

This section describes the installation and use of OpenVZ with OpenNebula.

Supported features in the current driver

  • ploop: deploy, suspend, poweroff, stop*, shutdown, undeploy, migrate*, migrate live, VM snapshots
  • simfs is not tested but it may work; use at your own risk.

The features marked with * require the location of the datastores on the hosts to be the same as on the frontend. If they differ, you can create a symlink on the frontend at the host datastore location pointing to the actual frontend datastore.
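
For instance, if the CNs keep their datastores under /vz/one/datastores (as configured later in this guide) while the frontend uses the default /var/lib/one/datastores, the symlink on the frontend could look like this (paths shown for illustration only):

[root@FN]$ ln -s /var/lib/one/datastores /vz/one/datastores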

OpenVZ-specific template parameters:

OSTEMPLATE corresponds to the OpenVZ "ostemplate" parameter. Make sure the value of the OSTEMPLATE parameter is written in the format <OS name>-<version>-<architecture>, e.g. OSTEMPLATE="sl-6-x86_64".

The VE_LAYOUT parameter is used to set a file system type of the VM. It can be ploop or simfs. If it is not specified, ploop is used by default.

OVZ_SIZE sets the required disk size for the VM. If it is not specified, the value from DEFAULT_CT_CONF is used. Example:

DISK=[

  IMAGE_ID="1",

  OVZ_SIZE="20480" ]

Current driver installation

Contextualization

In this version of the driver, contextualization is performed by copying the ISO file contents to a specified location in the VM file tree; the default location is configured by the CT_CONTEXT_DIR variable in the remotes/vmm/ovz/ovzrc file.
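
For instance, the variable might be set like this (the path below is purely illustrative; check your ovzrc for the actual default):

CT_CONTEXT_DIR="/context"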

Frontend node installation and configuration

[root@FN]$ yum install git patch genisoimage epel-release

Download and install OpenNebula according to http://opennebula.org/documentation:rel4.2:ignc, i.e. download the OpenNebula tarball for CentOS-6.x from http://downloads.opennebula.org ("OpenNebula 4.2 CentOS 6.4 tarball"), unpack it and install the needed rpms on the FN. Alternatively, it can be installed from the ONE repository:

[root@FN]$ cat << EOT > /etc/yum.repos.d/opennebula.repo

[opennebula]

name=opennebula

baseurl=http://opennebula.org/repo/CentOS/6/stable/\$basearch

enabled=1

gpgcheck=0

EOT

[root@FN]$ yum install <packages>

The installation of the opennebula-* rpms may create the oneadmin user with UID 498 and a group with GID 499, which are reserved for the cgred group (it comes with the libcgroup library, which ploop depends on). In this case it is easier to change the oneadmin UID and GID on the FN than to change the GID on all CNs (in any case, make sure that the UID and GID of the oneadmin user are the same on the FN and the CNs).

[root@FN]$ groupmod -g 1000 oneadmin

[root@FN]$ usermod -u 1000 -g 1000 oneadmin

[root@FN]$ chown oneadmin:oneadmin /var/run/one /var/lock/one /var/log/one

[root@FN]$ chgrp oneadmin -R /etc/one/

[root@FN]$ yum install ruby-devel

[root@FN]$ /usr/share/one/install_gems

[root@FN]$ git clone git@git.jinr.ru:cloud-team/one-ovz-driver.git

[root@FN]$ cd one-ovz-driver

Switch to the current branch and install the driver:

[root@FN]$ git checkout current

[root@FN]$ cd src/

[root@FN]$ bash install.sh

Now make sure that all permissions are correct and generate ssh keys:

[root@FN]$ chown oneadmin:oneadmin -R /var/lib/one/

[root@FN]$ cd ~

[root@FN]$ ssh-keygen -t rsa

Put id_rsa.pub into root@CN:~/.ssh/authorized_keys as well as id_rsa* into root@CN:~/.ssh/:

[root@FN]$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

[root@FN]$ scp -r ~/.ssh/ <CN>:~/

StrictHostKeyChecking needs to be disabled in the /etc/ssh/ssh_config file on FN and CNs:

Host *

            StrictHostKeyChecking no

Do not forget to restart sshd on the host where the /etc/ssh/ssh_config file was modified.

[root@FN]$ service sshd restart

Make sure that root is able to log in on CNs without being asked for a password.

Sunstone GUI

[root@FN]$ yum install opennebula-sunstone-4.2.x86_64.rpm

[root@FN]$ bash /usr/share/one/install_novnc.sh

MySQL

If MySQL is going to be used as an OpenNebula DB backend, the following steps need to be performed.

[root@FN]$  yum install mysql-server

[root@FN]$ /etc/init.d/mysqld start

[root@FN]$ chkconfig mysqld on

[root@FN]$ mysql

mysql> USE mysql;

mysql> UPDATE user SET Password=PASSWORD('<password>') WHERE user='root';

mysql> FLUSH PRIVILEGES;

mysql> create database opennebula;

mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'one_db_user'@'localhost' IDENTIFIED BY 'one_db_user' WITH GRANT OPTION;

mysql> UPDATE user SET Password=PASSWORD('<password>') WHERE user='one_db_user';

mysql> FLUSH PRIVILEGES;

where <password> can either be taken from the ~oneadmin/.one/one_auth file or set to any other value.

Passwordless access across nodes for the oneadmin user

OpenNebula generates DSA keys in ~oneadmin/.ssh/. If necessary, you can generate your own keys:

[root@FN]$ su - oneadmin

[oneadmin@FN]$ ssh-keygen -t rsa

[oneadmin@FN]$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Put id_rsa.pub in the oneadmin@CN:~/.ssh/authorized_keys file as well as id_rsa* in the oneadmin@CN:~/.ssh/ folder.

StrictHostKeyChecking needs to be disabled in the /etc/ssh/ssh_config file on FN and CNs:

Host *

            StrictHostKeyChecking no

Remember to restart sshd on the host where /etc/ssh/ssh_config file was modified.

[root]$ service sshd restart

oned.conf

Edit the original configuration file /etc/one/oned.conf as shown below:

HOST_MONITORING_INTERVAL              = 60

VM_POLLING_INTERVAL                   = 60

SCRIPTS_REMOTE_DIR=/vz/one/scripts

PORT = 2633

DB = [ backend = "mysql",

            server  = "localhost",

            port     = 0,

            user     = "one_db_user",

            passwd  = "<password>",

            db_name = "opennebula" ]

VNC_BASE_PORT = 5900

DEBUG_LEVEL = 3

NETWORK_SIZE = 254

MAC_PREFIX   = "02:00"

DATASTORE_LOCATION = /vz/one/datastores

DEFAULT_IMAGE_TYPE   = "OS"

DEFAULT_DEVICE_PREFIX = "sd"

IM_MAD = [

            name   = "im_ovz",

            executable= "one_im_ssh",

            arguments  = "-r 0 -t 15 ovz" ]

VM_MAD = [

            name   = "vmm_ovz",

            executable = "one_vmm_exec",

            arguments  = "-t 15 -r 0 ovz",

            default = "vmm_exec/vmm_exec_ovz.conf",

            type     = "xml" ]

TM_MAD = [

            executable = "one_tm",

            arguments  = "-t 15 -d dummy,shared,ssh" ]

DATASTORE_MAD = [

            executable = "one_datastore",

            arguments  = "-t 15 -d fs"

]

HM_MAD = [

            executable = "one_hm" ]

AUTH_MAD = [

            executable = "one_auth_mad",

            authn = "ssh,x509,ldap,server_cipher,server_x509"

]

SESSION_EXPIRATION_TIME = 900

VM_RESTRICTED_ATTR = "CONTEXT/FILES"

VM_RESTRICTED_ATTR = "NIC/MAC"

VM_RESTRICTED_ATTR = "NIC/VLAN_ID"

VM_RESTRICTED_ATTR = "RANK"

IMAGE_RESTRICTED_ATTR = "SOURCE"

Leave the rest of the configuration parameters at their default values.

Set DEFAULT_CT_CONF in the /var/lib/one/remotes/vmm/ovz/ovzrc file to the needed value (e.g. /etc/vz/conf/ve-vswap-1g.conf-sample).
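
For example:

DEFAULT_CT_CONF="/etc/vz/conf/ve-vswap-1g.conf-sample"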

Setting a oneadmin password

If you installed from packages, you should have the ~/.one/one_auth file created with a randomly generated password. Otherwise, set oneadmin's OpenNebula credentials (username and password) by adding the following to ~/.one/one_auth (change the password if you want):

[oneadmin@FN]$ mkdir ~/.one

[oneadmin@FN]$ echo "oneadmin:<password>" > ~/.one/one_auth

[oneadmin@FN]$ chmod 600 ~/.one/one_auth

This sets the oneadmin password on the first start. From this point on, you must use the oneuser passwd command to change the oneadmin password.
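
For example (replace <new_password> with the desired value; oneadmin is the user name here):

[oneadmin@FN]$ oneuser passwd oneadmin <new_password>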

Starting OpenNebula daemons

 

[oneadmin@FN]$ one start

Check logs (/var/log/one/oned.log) for any errors.

Datastore (on OpenNebula)

Currently only the ssh transfer manager driver is supported by the OVZ driver, so you need to change all the datastores to use the ssh driver. To change the datastore transfer manager driver (e.g. from shared to ssh), run the following command:

[oneadmin@FN]$ env EDITOR=vim onedatastore update 1

and set the value of the TM_MAD parameter accordingly (e.g. TM_MAD="ssh").

Apart from that, you must set NO_DECOMPRESS="yes" in the datastore configuration; otherwise OpenNebula will try to decompress OpenVZ template archives and fail.
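
After the update, the datastore template should contain both of these lines:

TM_MAD="ssh"

NO_DECOMPRESS="yes"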

Cluster nodes configuration (OpenVZ)

Install the OS on the CN in the minimal configuration and remove unnecessary rpms. E.g. on SL 6.x the following rpms can be removed:

[root@CN]$ yum remove qpid* matahari*

[root@CN]$ userdel -rf qpidd

[root@CN]$ groupdel qpidd

or with just one command:

[root@CN]$ yum remove qpid* matahari* && userdel -rf qpidd && groupdel qpidd

Disable selinux in /etc/selinux/config

[root@CN]$ setenforce 0

[root@CN]$ sestatus

[root@CN]$ wget -P /etc/yum.repos.d/ http://download.openvz.org/openvz.repo

[root@CN]$ rpm --import  http://download.openvz.org/RPM-GPG-Key-OpenVZ

[root@CN]$ yum install vzkernel vzkernel-firmware

[root@CN]$ mv /etc/sysctl.conf{,.orig}

[root@CN]$ scp <configured CN>:/etc/sysctl.conf /etc/

[root@CN]$ chkconfig ntpd on

[root@CN]$ chkconfig apcupsd on

[root@CN]$ yum install vzctl vzquota ploop

Edit /etc/vz/vz.conf on the CNs according to the desired configuration, as in the following diff:

$ diff /etc/vz/vz.conf.orig /etc/vz/vz.conf

45c45

< IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length"

---

> IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state"

50c50

< IPV6="yes"

< IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"

---

> IPV6="no"

> #IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"

Make sure that the modules xt_state and nf_conntrack are loaded.
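
One way to check for them and, if necessary, load them (module names as given above):

[root@CN]$ lsmod | grep -E 'xt_state|nf_conntrack'

[root@CN]$ modprobe -a xt_state nf_conntrack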

Reboot. Make sure the vz and vzeventd daemons are running. If they are not, check whether they are set to start at boot. They can be started with the commands below:

[root@CN]$ /etc/init.d/vz start

[root@CN]$ /etc/init.d/vzeventd start

Default CT conf

Make sure to set proper values in the $DEFAULT_CT_CONF file (e.g. /etc/vz/conf/ve-vswap-1g.conf-sample) so that it provides sufficient resources (e.g. disk space); otherwise the VM deployment may fail with errors like "Disk quota exceeded".

iptables

Copy the iptables rules from a configured CN and restart the iptables service:

[root@CN]$ /etc/init.d/iptables restart

On CNs run the following commands:

[root@CN]$ iptables -P FORWARD ACCEPT && iptables -F FORWARD

iptables config example on VMs:

# Firewall configuration written by system-config-firewall

# Manual customization of this file is not recommended.

*filter

:INPUT ACCEPT [0:0]

:FORWARD ACCEPT [0:0]

:OUTPUT ACCEPT [0:0]

-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

-A INPUT -p icmp -j ACCEPT

-A INPUT -i lo -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp -s <IP/mask trusted_network> --dport 22 -j ACCEPT

-A INPUT -j REJECT --reject-with icmp-host-prohibited

-A FORWARD -j REJECT --reject-with icmp-host-prohibited

COMMIT

Install required RPMs on CN

[root@CN]$ yum install ruby rubygems file bind-utils

oneadmin user

Create the oneadmin group and user on the CNs with the same uid and gid as on the FN:

[root@CN]$ groupadd --gid 1000 oneadmin

[root@CN]$ useradd --uid 1000 -g oneadmin -d /vz/one oneadmin

Edit the /etc/sudoers file:

# Defaults       requiretty

%oneadmin  ALL=(ALL)                   NOPASSWD: ALL

Defaults:%oneadmin secure_path="/bin:/sbin:/usr/bin:/usr/sbin"

[root@CN]$ su - oneadmin

[oneadmin@CN]$ mkdir /vz/one/datastores

Make sure /vz/one/datastores is writable by the group; otherwise run:

[oneadmin@CN]$ chmod g+w /vz/one/datastores

[oneadmin@FN]$ scp -r .ssh/ root@CN:~oneadmin/

[root@CN]$ chown oneadmin:oneadmin -R ~oneadmin/.ssh/

StrictHostKeyChecking needs to be disabled in the /etc/ssh/ssh_config file on FN and CNs:

Host *

            StrictHostKeyChecking no

Remember to restart sshd on the host where the /etc/ssh/ssh_config file was modified.

[root]$ service sshd restart 

Make sure that the oneadmin user is able to log in on CN from FN without being asked for a password:

[oneadmin@FN]$ ssh <CN hostname>

[root@FN]$ ssh-copy-id <CN hostname>

[root@FN]$ scp -r .ssh/id_rsa* root@CN:~/.ssh/

Make sure that root is able to log in on CN from FN without being asked for a password:

[root@FN]$ ssh <CN hostname>

Some VM operation examples

Network

[oneadmin@FN]$ cat public.net

NAME = "Public"

TYPE = FIXED

BRIDGE = venet0

LEASES = [IP=<IP1>]

LEASES = [IP=<IP2>]

GATEWAY = <gateway_IP>

DNS = <DNS_IP>

[oneadmin@FN]$ onevnet list

[oneadmin@FN]$ onevnet create public.net

One can add/remove/hold/release leases to/from the FIXED network:

[oneadmin@FN]$ onevnet addleases <network_id> <new_IP_address>

[oneadmin@FN]$ onevnet rmleases <network_id> <IP_address>

[oneadmin@FN]$ onevnet hold <network_id> <IP_address>

Cluster

[oneadmin@FN]$ onecluster create ovz_x64

[oneadmin@FN]$ onecluster addvnet 100 0

[oneadmin@FN]$ onecluster adddatastore 100 1

[oneadmin@FN]$ onehost create <CN hostname> --im im_ovz --vm vmm_ovz --cluster ovz_x64 --net dummy

[oneadmin@FN]$ oneimage create -d default --name "SL 6.3 x86_64 persistent" --path /tmp/sl-6-x86_64.tar.gz --prefix sd --type OS --description "Scientific linux 6.3 custom"

[oneadmin@FN]$ oneimage list

To make the image persistent, run the following command:

[oneadmin@FN]$ oneimage persistent <IMAGE_ID>

Create a template for VMs:

$ cat sl-6.3-x86_64.one.vm.tmpl

CONTEXT=[

  FILES="/var/lib/one/vm_files/rc.local /var/lib/one/vm_files/id_rsa.pub",

  NAMESERVER="$NETWORK[DNS, NETWORK_ID=0 ]" ]

CPU="0.01"

DISK=[

  IMAGE_ID="1",

  SIZE="20480" ]

DISK=[

  SIZE="2048",

  TYPE="swap" ]

LOOKUP_HOSTNAME="true"

MEMORY="4096"

NAME="SL6 x86_64"

NIC=[

  NETWORK_ID="0" ]

OS=[

  ARCH="x86_64",

  BOOT="sd" ]

OSTEMPLATE="sl-6-x86_64"

VE_LAYOUT="ploop"

RCLOCAL="rc.local"

Make sure the value of the OSTEMPLATE parameter is written in the format <OS name>-<version>-<architecture>, e.g. OSTEMPLATE="sl-6-x86_64".

The VE_LAYOUT parameter is used to set a file system type of the VM. It can be ploop or simfs. If it is not specified, ploop is used.

Due to the new datastore model in OpenNebula 4.4, you cannot use the disk SIZE attribute anymore; instead, specify the OVZ_SIZE attribute for the disk containing the VM image. If it is not specified, the value from DEFAULT_CT_CONF is used. Example:

DISK=[

  IMAGE_ID="1",

  OVZ_SIZE="20480" ]

 

You can also pass OpenVZ native parameters directly to the hypervisor using the RAW attribute. For example:

 

RAW = [

            FEATURES = "nfs:on",

            QUOTATIME = "0",

            ... ]

One can update the created template with the command:

env EDITOR=vim onetemplate update <TEMPLATE ID>

Create a VM template in ONE:

[oneadmin@FN]$ onetemplate create sl-6.3-x86_64.one.vm.tmpl

Instantiate a VM from the existing template:

[oneadmin@FN]$ onetemplate instantiate 0 -n vps103