
Quick admin guide

Introduction

After following this guide, users will have a working OpenNebula installation with the graphical interface (Sunstone), at least one hypervisor (host) and a running virtual machine. This is useful when setting up pilot clouds, for quickly testing new features, and as a base deployment on which to build a larger infrastructure.

Throughout the installation there are two separate roles: Frontend and Nodes. The Frontend server will execute the OpenNebula services, and the Nodes will be used to execute virtual machines. Please note that it is possible to follow this guide with just one host, combining both the Frontend and Node roles in a single server. However, it is recommended to execute virtual machines on hosts with virtualization extensions. To test whether your host supports virtualization extensions, please run:

 

# grep -E 'svm|vmx' /proc/cpuinfo

If you don’t get any output you probably don’t have virtualization extensions supported/enabled in your server.
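Optionally, you can also check that the KVM kernel modules are loaded on the node (a quick sanity check, not part of the original steps):

# lsmod | grep kvm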

Package Layout
opennebula-server: OpenNebula Daemons
opennebula: OpenNebula CLI commands
opennebula-sunstone: OpenNebula’s web GUI
opennebula-java: OpenNebula Java API
opennebula-node-kvm: Installs dependencies required by OpenNebula in the nodes
opennebula-gate: Send information from Virtual Machines to OpenNebula
opennebula-flow: Manage OpenNebula Services
opennebula-context: Package for OpenNebula Guests
Additionally opennebula-common and opennebula-ruby exist but they’re intended to be used as dependencies.

Warning
In order to avoid problems, we recommend disabling SELinux on all machines, both the Frontend and the Nodes.

 

 # vi /etc/sysconfig/selinux
...
SELINUX=disabled
...
# setenforce 0
# getenforce
Permissive

Warning
Some commands may fail depending on your iptables/firewalld configuration. For testing, disable the firewall entirely just to rule it out.
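For example, on CentOS 7 the firewall can be stopped and disabled like this (do this only on test machines):

# systemctl stop firewalld

# systemctl disable firewalld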

Installation in the Frontend

Note
Commands prefixed by # are meant to be run as root. Commands prefixed by $ must be run as oneadmin.
Install the repo

Enable the EPEL repo:

# yum install epel-release

Add the OpenNebula repository:

# cat << EOT > /etc/yum.repos.d/opennebula.repo

[opennebula]

name=opennebula

baseurl=http://downloads.opennebula.org/repo/4.12/CentOS/7/x86_64/

enabled=1

gpgcheck=0

EOT

Install the required packages
A complete install of OpenNebula requires at least the opennebula-server and opennebula-sunstone packages:

# yum install opennebula-server opennebula-sunstone

Now run install_gems to install all the gem dependencies. Choose CentOS/RedHat if prompted:

# /usr/share/one/install_gems

lsb_release command not found. If you are using a RedHat based distribution install redhat-lsb

Select your distribution or press enter to continue without

installing dependencies.

 1.Ubuntu/Debian

 2.CentOS/RedHat

Configure and Start the services
There are two main processes that must be started, the main OpenNebula daemon: oned, and the graphical user interface: sunstone.

Sunstone listens only in the loopback interface by default for security reasons. To change it edit /etc/one/sunstone-server.conf and change :host: 127.0.0.1 to :host: 0.0.0.0.
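For example, the change can be applied with a one-line edit (a sketch; adjust the pattern if your configuration file formatting differs):

# sed -i 's/:host: 127.0.0.1/:host: 0.0.0.0/' /etc/one/sunstone-server.conf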

Now we can start the services: 

# systemctl enable opennebula

# systemctl start opennebula

# systemctl enable opennebula-sunstone

# systemctl start opennebula-sunstone

Configure NFS
Note
Skip this section if you are using a single server for both the frontend and worker node roles.
Export /var/lib/one/ from the frontend to the worker nodes. To do so add the following to the /etc/exports file in the frontend:

/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)

Refresh the NFS exports by doing:

# systemctl restart nfs.service
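You can verify that the export is active with (an optional check):

# exportfs -v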

Configure SSH Public Key
OpenNebula will need to SSH passwordlessly from any node (including the frontend) to any other node.

Add the following snippet to ~/.ssh/config as oneadmin so it doesn’t prompt to add the keys to the known_hosts file:

 

# su - oneadmin

$ cat << EOT > ~/.ssh/config

Host *

    StrictHostKeyChecking no

    UserKnownHostsFile /dev/null

EOT

$ chmod 600 ~/.ssh/config
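Once the node packages are installed (next section), you can verify passwordless access from the frontend as oneadmin; node01 below is a hypothetical hostname:

$ ssh node01 hostname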

Installation in the Nodes

Install the repo
Add the OpenNebula repository:

 

# cat << EOT > /etc/yum.repos.d/opennebula.repo

[opennebula]

name=opennebula

baseurl=http://downloads.opennebula.org/repo/4.12/CentOS/7/x86_64/

enabled=1

gpgcheck=0

EOT

Install the required packages

# yum install opennebula-node-kvm

Start the required services:

# systemctl enable messagebus.service

# systemctl start messagebus.service

# systemctl enable libvirtd.service

# systemctl start libvirtd.service

# systemctl enable nfs.service

# systemctl start nfs.service

Configure the Network
 
Warning
Backup all the files that are modified in this section before making changes to them.
You will need to have your main interface connected to a bridge. The following example uses ens3, but the name of the interface may vary. OpenNebula requires the bridge name to be the same on all nodes.

To do so, substitute /etc/sysconfig/network-scripts/ifcfg-ens3 with:

DEVICE=ens3

BOOTPROTO=none

NM_CONTROLLED=no

ONBOOT=yes

TYPE=Ethernet

BRIDGE=br0

And add a new /etc/sysconfig/network-scripts/ifcfg-br0 file.

If you were using DHCP for your ens3 interface, use this template:

DEVICE=br0

TYPE=Bridge

ONBOOT=yes

BOOTPROTO=dhcp

NM_CONTROLLED=no

If you were using a static IP address use this other template:

DEVICE=br0

TYPE=Bridge

IPADDR=<YOUR_IPADDRESS>

NETMASK=<YOUR_NETMASK>

ONBOOT=yes

BOOTPROTO=static

NM_CONTROLLED=no

After these changes, restart the network:

# systemctl restart network.service
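After the restart you can check that the bridge is up and holds the IP address (an optional check; brctl is provided by the bridge-utils package and may not be installed):

# ip addr show br0

# brctl show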

Configure NFS
Note
Skip this section if you are using a single server for both the frontend and worker node roles.
Mount the datastores export. Add the following to your /etc/fstab:

192.168.1.1:/var/lib/one/  /var/lib/one/  nfs   soft,intr,rsize=8192,wsize=8192,noauto

Note
Replace 192.168.1.1 with the IP of the frontend.

Mount the NFS share:

# mount /var/lib/one/

If the above command fails or hangs, it could be a firewall issue.
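If the mount succeeds, you can confirm it with:

# df -h /var/lib/one/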

Basic Usage
Note
All the operations in this section can be done using Sunstone instead of the command line. Point your browser to: http://frontend:9869.
The default password for the oneadmin user can be found in ~/.one/one_auth which is randomly generated on every installation.

All interaction with OpenNebula is done from the oneadmin account on the frontend. We will assume all the following commands are performed from that account. To log in as oneadmin execute:

 

# su - oneadmin

Adding a Host

To start running VMs, you should first register a worker node for OpenNebula.

Issue this command for each one of your nodes. Replace localhost with your node’s hostname.

$ onehost create localhost -i kvm -v kvm -n dummy

Run onehost list until the host's state is set to on. If it fails, you probably have something wrong in your SSH configuration. Take a look at /var/log/one/oned.log.
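You can re-run the command periodically, for example with watch (an optional convenience):

$ watch onehost list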

Adding virtual resources

Once it’s working you need to create a network, an image and a virtual machine template.

To create a network, we first need to create a network template file mynetwork.one that contains:

 

NAME = "private"

BRIDGE = br0

AR = [

    TYPE = IP4,

    IP = 192.168.0.100,

    SIZE = 3

    ]

Note
Replace the address range with free IPs in your host’s network. You can add more than one address range.
Now we can move ahead and create the resources in OpenNebula:

$ onevnet create mynetwork.one

$ oneimage create --name "CentOS-7-one-4.8" \

    --path http://marketplace.c12g.com/appliance/53e7bf928fb81d6a69000002/download \

    --driver qcow2 \

    -d default

$ onetemplate create --name "CentOS-7" \

    --cpu 1 --vcpu 1 --memory 512 --arch x86_64 \

    --disk "CentOS-7-one-4.8" \

    --nic "private" \

    --vnc --ssh --net_context

Note
If the oneimage create command complains that there is not enough space available in the datastore, you can disable the datastore capacity check in OpenNebula by setting DATASTORE_CAPACITY_CHECK = "no" in /etc/one/oned.conf. You need to restart OpenNebula after changing this.
You will need to wait until the image is ready to be used. Monitor its state by running oneimage list.

In order to dynamically add SSH keys to Virtual Machines, we must add our SSH key to the user template by editing it:

$ EDITOR=vi oneuser update oneadmin

Add a new line like the following to the template:

SSH_PUBLIC_KEY="ssh-dss AAAAB3NzaC1kc3MAAACBANBWTQmm4Gt..."

Substitute the value above with the output of:

$ cat ~/.ssh/id_dsa.pub
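As a convenience, you can generate the exact line to paste by running this shell sketch as oneadmin:

$ echo "SSH_PUBLIC_KEY=\"$(cat ~/.ssh/id_dsa.pub)\""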

Running a Virtual Machine
To run a Virtual Machine, you will need to instantiate a template:

$ onetemplate instantiate "CentOS-7"

Execute onevm list and watch the virtual machine go from PENDING to PROLOG to RUNNING. If the VM fails, check the reason in the log: /var/log/one/<VM_ID>/vm.log.

Note
If it stays too long in the PENDING state, you can check why by running: onevm show <vmid> | grep ^SCHED_MESSAGE. If it reports that no datastores have enough capacity for the VM, you can force a manual deployment by running: onevm deploy <vmid> <hostid>.

OpenVZ

This section describes the procedure for installing and using OpenVZ with OpenNebula.

Supported features in the current driver

  • ploop: deploy, suspend, poweroff, stop*, shutdown, undeploy, migrate*, migrate live, VM snapshots
  • simfs is not tested but may work – use at your own risk.

Features marked with * require the datastore location on the hosts to be the same as on the frontend. If they differ, you can create a symlink on the frontend that matches the hosts' datastore location and points to the actual frontend datastore, as shown below.
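A minimal sketch, assuming the frontend keeps its datastores in the default /var/lib/one/datastores and the hosts use /vz/one/datastores (the location configured later in oned.conf):

[root@FN]$ mkdir -p /vz/one

[root@FN]$ ln -s /var/lib/one/datastores /vz/one/datastores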

OpenVZ-specific template parameters

OSTEMPLATE corresponds to the OpenVZ "ostemplate" parameter. Make sure the value of the OSTEMPLATE parameter is written in the format <OS name>-<version>-<architecture>, e.g. OSTEMPLATE="sl-6-x86_64".
The VE_LAYOUT parameter sets the filesystem type of the VM. It can be ploop or simfs. If not specified, ploop is used by default.
OVZ_SIZE sets the required disk size for the VM. If it is not specified, the value from DEFAULT_CT_CONF is used. Example:

DISK=[

  IMAGE_ID="1",

  OVZ_SIZE="20480" ]

Installing current driver
Contextualization
In this version of the driver, contextualization is performed by copying the ISO file contents to a specified location in the VM file tree; the default location is configured via the CT_CONTEXT_DIR variable in the remotes/vmm/ovz/ovzrc file.

Frontend node installation and configuration

[root@FN]$ yum install git patch genisoimage epel-release

Download and install OpenNebula according to http://opennebula.org/documentation:rel4.2:ignc, i.e. download the OpenNebula tarball for CentOS-6.x from http://downloads.opennebula.org ("OpenNebula 4.2 CentOS 6.4 tarball"), unpack it and install the needed rpms on the FN. Alternatively, it can be installed from the ONE repository:

[root@FN]$ cat << EOT > /etc/yum.repos.d/opennebula.repo

[opennebula]

name=opennebula

baseurl=http://opennebula.org/repo/CentOS/6/stable/\$basearch

enabled=1

gpgcheck=0

EOT

[root@FN]$ yum install <packages>

Installation of the opennebula-* rpms may create the oneadmin user with UID 498 and a group with GID 499, which are reserved for the cgred group (it comes with the libcgroup library that ploop depends on). In that case it is easier to change the oneadmin UID and GID on the FN instead of changing the GID on all CNs (in any case, make sure that the UID and GID of the oneadmin user are the same on the FN and the CNs).
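You can check the current IDs with:

[root@FN]$ id oneadmin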

[root@FN]$ groupmod -g 1000 oneadmin

[root@FN]$ usermod -u 1000 -g 1000 oneadmin

[root@FN]$ chown oneadmin:oneadmin /var/run/one /var/lock/one /var/log/one

[root@FN]$ chgrp oneadmin -R /etc/one/

[root@FN]$ yum install ruby-devel

[root@FN]$ /usr/share/one/install_gems

[root@FN]$ git clone git@git.jinr.ru:cloud-team/one-ovz-driver.git

[root@FN]$ cd one-ovz-driver

 

Switch to the current branch and install the driver:

[root@FN]$ git checkout current

[root@FN]$ cd src/

[root@FN]$ bash install.sh

Now make sure that all permissions are correct and generate ssh keys:

[root@FN]$ chown oneadmin:oneadmin -R /var/lib/one/

[root@FN]$ cd ~

[root@FN]$ ssh-keygen -t rsa

 

Put id_rsa.pub in root@CN:~/.ssh/authorized_keys as well as id_rsa* in root@CN:~/.ssh/:

[root@FN]$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

[root@FN]$ scp -r ~/.ssh/ <CN>:~/

 

StrictHostKeyChecking needs to be disabled in /etc/ssh/ssh_config file on FN and CNs:

Host *

            StrictHostKeyChecking no

Don't forget to restart sshd on the host where the /etc/ssh/ssh_config file was modified.

[root@FN]$ service sshd restart

 

Make sure that root is able to log in on the CNs without being asked for a password.
Sunstone GUI

[root@FN]$ yum install opennebula-sunstone-4.2.x86_64.rpm

[root@FN]$ bash /usr/share/one/install_novnc.sh

 

MySQL

If MySQL is going to be used as OpenNebula DB backend then the following steps need to be performed.

[root@FN]$  yum install mysql-server

[root@FN]$ /etc/init.d/mysqld start

[root@FN]$ chkconfig mysqld on

[root@FN]$ mysql

mysql> USE mysql;

mysql> UPDATE user SET Password=PASSWORD('<password>') WHERE user='root';

mysql> FLUSH PRIVILEGES;

mysql> create database opennebula;

mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'one_db_user'@'localhost' IDENTIFIED BY 'one_db_user' WITH GRANT OPTION;

mysql> UPDATE user SET Password=PASSWORD('<password>') WHERE user='one_db_user';

mysql> FLUSH PRIVILEGES;

 

where <password> can either be taken from the ~oneadmin/.one/one_auth file or set to any other value.

Passwordless access across nodes for oneadmin user

OpenNebula generates DSA keys in ~oneadmin/.ssh/. If necessary, you can generate your own keys:

[root@FN]$ su - oneadmin

[oneadmin@FN]$ ssh-keygen -t rsa

[oneadmin@FN]$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

 

Put id_rsa.pub in oneadmin@CN:~/.ssh/authorized_keys file as well as id_rsa* in oneadmin@CN:~/.ssh/ folder.
StrictHostKeyChecking needs to be disabled in /etc/ssh/ssh_config file on FN and CNs:

Host *

            StrictHostKeyChecking no

 

Remember to restart sshd on the host where the /etc/ssh/ssh_config file was modified.

[root]$ service sshd restart

oned.conf

Edit the original configuration file /etc/one/oned.conf as shown below:

HOST_MONITORING_INTERVAL              = 60

VM_POLLING_INTERVAL                   = 60

SCRIPTS_REMOTE_DIR=/vz/one/scripts

PORT = 2633

DB = [ backend = "mysql",

            server  = "localhost",

            port     = 0,

            user     = "one_db_user",

            passwd  = "<password>",

            db_name = "opennebula" ]

VNC_BASE_PORT = 5900

DEBUG_LEVEL = 3

NETWORK_SIZE = 254

MAC_PREFIX   = "02:00"

DATASTORE_LOCATION = /vz/one/datastores

DEFAULT_IMAGE_TYPE   = "OS"

DEFAULT_DEVICE_PREFIX = "sd"

IM_MAD = [

            name   = "im_ovz",

            executable= "one_im_ssh",

            arguments  = "-r 0 -t 15 ovz" ]

VM_MAD = [

            name   = "vmm_ovz",

            executable = "one_vmm_exec",

            arguments  = "-t 15 -r 0 ovz",

            default = "vmm_exec/vmm_exec_ovz.conf",

            type     = "xml" ]

TM_MAD = [

            executable = "one_tm",

            arguments  = "-t 15 -d dummy,shared,ssh" ]

DATASTORE_MAD = [

            executable = "one_datastore",

            arguments  = "-t 15 -d fs"

]

HM_MAD = [

            executable = "one_hm" ]

AUTH_MAD = [

            executable = "one_auth_mad",

            authn = "ssh,x509,ldap,server_cipher,server_x509"

]

SESSION_EXPIRATION_TIME = 900

VM_RESTRICTED_ATTR = "CONTEXT/FILES"

VM_RESTRICTED_ATTR = "NIC/MAC"

VM_RESTRICTED_ATTR = "NIC/VLAN_ID"

VM_RESTRICTED_ATTR = "RANK"

IMAGE_RESTRICTED_ATTR = "SOURCE"

 

Leave the rest of the configuration parameters at their default values.

Set DEFAULT_CT_CONF in the /var/lib/one/remotes/vmm/ovz/ovzrc file to the needed value (e.g. /etc/vz/conf/ve-vswap-1g.conf-sample), as in the example below.
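A minimal sketch of the line in ovzrc, assuming shell-style variable assignment and using the sample path from above:

DEFAULT_CT_CONF=/etc/vz/conf/ve-vswap-1g.conf-sample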

Setting oneadmin password

If you installed from packages, you should have the ~/.one/one_auth file created with a randomly generated password. Otherwise, set oneadmin's OpenNebula credentials (username and password) by adding the following to ~/.one/one_auth (change <password> to the desired password):

[oneadmin@FN]$ mkdir ~/.one

[oneadmin@FN]$ echo "oneadmin:<password>" > ~/.one/one_auth

[oneadmin@FN]$ chmod 600 ~/.one/one_auth

 

This will set the oneadmin password on the first boot. From that point on, you must use the oneuser passwd command to change oneadmin's password.
Starting OpenNebula daemons

 

[oneadmin@FN]$ one start

Check logs (/var/log/one/oned.log) for any errors.
Datastore (on OpenNebula)
Currently only the ssh transfer manager driver is supported by the OVZ driver, so you need to change all the datastores to use the ssh driver. To change a datastore's transfer manager driver (e.g. from shared to ssh), run the following command:

[oneadmin@FN]$ env EDITOR=vim onedatastore update 1

and set the value of the TM_MAD parameter accordingly (e.g. TM_MAD="ssh").
Apart from that, you need to set NO_DECOMPRESS="yes" in the DS configuration, otherwise OpenNebula will try to decompress OpenVZ template archives and fail. The resulting attributes are sketched below.
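A sketch of the relevant attributes in the datastore template after the edit (the other attributes are left unchanged):

TM_MAD="ssh"

NO_DECOMPRESS="yes"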
Cluster nodes configuration (OpenVZ)
Install the OS on the CN in a minimal configuration and remove unnecessary rpms. E.g. on SL 6.x the following rpms can be removed:

[root@CN]$ yum remove qpid* matahari*

[root@CN]$ userdel -rf qpidd

[root@CN]$ groupdel qpidd

 

or with just one command:

[root@CN]$ yum remove qpid* matahari* && userdel -rf qpidd && groupdel qpidd

 

Disable SELinux in /etc/selinux/config:

[root@CN]$ setenforce 0

[root@CN]$ sestatus

[root@CN]$ wget -P /etc/yum.repos.d/ http://download.openvz.org/openvz.repo

[root@CN]$ rpm --import  http://download.openvz.org/RPM-GPG-Key-OpenVZ

[root@CN]$ yum install vzkernel vzkernel-firmware

[root@CN]$ mv /etc/sysctl.conf{,.orig}

[root@CN]$ scp <configured CN>:/etc/sysctl.conf /etc/

[root@CN]$ chkconfig ntpd on

[root@CN]$ chkconfig apcupsd on

[root@CN]$ yum install vzctl vzquota ploop

 

Edit /etc/vz/vz.conf according to the desired configuration. Edit vz.conf on the CNs as below:

$ diff /etc/vz/vz.conf.orig /etc/vz/vz.conf

45c45

< IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length"

---

> IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state"

50c50

< IPV6="yes"

< IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"

---

> IPV6="no"

> #IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"

 

Make sure that the xt_state and nf_conntrack kernel modules are loaded (see the check below).
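A quick way to check and, if needed, load them (a sketch):

[root@CN]$ lsmod | egrep 'xt_state|nf_conntrack'

[root@CN]$ modprobe xt_state

[root@CN]$ modprobe nf_conntrack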
Reboot. Make sure the vz and vzeventd daemons are running. If they are not, check whether they are set to start at boot. They can be started by executing the commands below:

[root@CN]$ /etc/init.d/vz start

[root@CN]$ /etc/init.d/vzeventd start

 

Default CT conf

Make sure to set proper values in the file referenced by $DEFAULT_CT_CONF (e.g. /etc/vz/conf/ve-vswap-1g.conf-sample), corresponding to sufficient resources (e.g. disk space), otherwise VM deployment may fail with errors like "Disk quota exceeded".

iptables

Copy the iptables rules from an already configured CN and restart the iptables service:

[root@CN]$ /etc/init.d/iptables restart

On the CNs execute the following commands:

[root@CN]$ iptables -P FORWARD ACCEPT && iptables -F FORWARD

 

iptables config example on VMs:

# Firewall configuration written by system-config-firewall

# Manual customization of this file is not recommended.

*filter

:INPUT ACCEPT [0:0]

:FORWARD ACCEPT [0:0]

:OUTPUT ACCEPT [0:0]

-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

-A INPUT -p icmp -j ACCEPT

-A INPUT -i lo -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp -s <IP/mask trusted_network> --dport 22 -j ACCEPT

-A INPUT -j REJECT --reject-with icmp-host-prohibited

-A FORWARD -j REJECT --reject-with icmp-host-prohibited

COMMIT

 

Install required RPMs on CN

[root@CN]$ yum install ruby rubygems file bind-utils

 

oneadmin user

Create oneadmin group and user on CNs with the same uid and gid as on FN:

[root@CN]$ groupadd --gid 1000 oneadmin

[root@CN]$ useradd --uid 1000 -g oneadmin -d /vz/one oneadmin

 

Edit /etc/sudoers file:

# Defaults       requiretty

%oneadmin  ALL=(ALL)                   NOPASSWD: ALL

Defaults:%oneadmin secure_path="/bin:/sbin:/usr/bin:/usr/sbin"

[root@CN]$ su - oneadmin

[oneadmin@CN]$ mkdir /vz/one/datastores

 

Make sure /vz/one/datastores is group-writable; otherwise make it so:

[oneadmin@CN]$ chmod g+w /vz/one/datastores

[oneadmin@FN]$ scp -r .ssh/ root@CN:~oneadmin/

[root@CN]$ chown oneadmin:oneadmin -R ~oneadmin/.ssh/

 

And again StrictHostKeyChecking needs to be disabled in /etc/ssh/ssh_config file on FN and CNs:

Host *

            StrictHostKeyChecking no

 

Remember to restart sshd on the host where the /etc/ssh/ssh_config file was modified.

[root]$ service sshd restart  

Make sure that the oneadmin user is able to log in on the CN from the FN without being asked for a password:

[oneadmin@FN]$ ssh <CN hostname>

[root@FN]$ ssh-copy-id <CN hostname>

[root@FN]$ scp -r .ssh/id_rsa* root@CN:~/.ssh/

 

Make sure that root is able to log in on the CN from the FN without being asked for a password:

[root@FN]$ ssh <CN hostname>

Some VMs operations examples
Network

[oneadmin@FN]$ cat public.net

NAME = "Public"

TYPE = FIXED

BRIDGE = venet0

LEASES = [IP=<IP1>]

LEASES = [IP=<IP2>]

GATEWAY = <gateway_IP>

DNS = <DNS_IP>

[oneadmin@FN]$ onevnet list

[oneadmin@FN]$ onevnet create public.net

 

One can add/remove/hold/release leases to/from a FIXED network:

[oneadmin@FN]$ onevnet addleases <network_id> <new_IP_address>

[oneadmin@FN]$ onevnet rmleases <network_id> <IP_address>

[oneadmin@FN]$ onevnet hold <network_id> <IP_address>

Cluster

[oneadmin@FN]$ onecluster create ovz_x64

[oneadmin@FN]$ onecluster addvnet 100 0

[oneadmin@FN]$ onecluster adddatastore 100 1

[oneadmin@FN]$ onehost create <CN hostname> --im im_ovz --vm vmm_ovz --cluster ovz_x64 --net dummy

[oneadmin@FN]$ oneimage create -d default --name "SL 6.3 x86_64 persistent" --path /tmp/sl-6-x86_64.tar.gz --prefix sd --type OS --description "Scientific linux 6.3 custom"

[oneadmin@FN]$ oneimage list

To make the image persistent, execute the following command:

[oneadmin@FN]$ oneimage persistent <IMAGE_ID>

Create template for VMs:

$ cat sl-6.3-x86_64.one.vm.tmpl

CONTEXT=[

  FILES="/var/lib/one/vm_files/rc.local /var/lib/one/vm_files/id_rsa.pub",

  NAMESERVER="$NETWORK[DNS, NETWORK_ID=0 ]" ]

CPU="0.01"

DISK=[

  IMAGE_ID="1",

  SIZE="20480" ]

DISK=[

  SIZE="2048",

  TYPE="swap" ]

LOOKUP_HOSTNAME="true"

MEMORY="4096"

NAME="SL6 x86_64"

NIC=[

  NETWORK_ID="0" ]

OS=[

  ARCH="x86_64",

  BOOT="sd" ]

OSTEMPLATE="sl-6-x86_64"

VE_LAYOUT="ploop"

RCLOCAL="rc.local"

Make sure the value of the OSTEMPLATE parameter is written in the format <OS name>-<version>-<architecture>, e.g. OSTEMPLATE="sl-6-x86_64".
The VE_LAYOUT parameter sets the filesystem type of the VM. It can be ploop or simfs. If it is not specified, ploop is used.
Due to the new datastore model in OpenNebula 4.4 you can't use the disk size attribute anymore; instead, specify the OVZ_SIZE attribute for the disk containing the VM image. If it is not specified, the value from DEFAULT_CT_CONF is used. Example:

DISK=[

  IMAGE_ID="1",

  OVZ_SIZE="20480" ]

 

You can also pass OpenVZ native parameters directly to the hypervisor using the RAW attribute. For example:

 

RAW = [

            FEATURES = "nfs:on",

            QUOTATIME = "0",

            .... ]

One can update a created template with the command:

env EDITOR=vim onetemplate update <TEMPLATE ID>

Create VM template in ONE:

[oneadmin@FN]$ onetemplate create sl-6.3-x86_64.one.vm.tmpl

Instantiate VM from existing template:

[oneadmin@FN]$ onetemplate instantiate 0 -n vps103
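After instantiation you can watch the VM state, as in the KVM section above (an optional check):

[oneadmin@FN]$ onevm list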