About

KVM is a virtualization solution provided by the Linux kernel. Below is a short howto explaining how to get it working on TLD Linux.
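Before installing anything it is worth checking that the CPU supports hardware virtualization and that the kvm kernel modules are available. A quick check, assuming an x86 host:

```shell
# A non-zero count means VT-x (vmx) or AMD-V (svm) is available
grep -Ec 'vmx|svm' /proc/cpuinfo

# After libvirtd/qemu are set up, kvm plus kvm_intel or kvm_amd
# should appear here
lsmod | grep kvm
```

If the first command prints 0, check your BIOS/UEFI settings, as virtualization extensions are often disabled by default.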

Installation

First we must install some base packages required for virtualization:

[root@tld]# poldek -ivh libvirt-daemon libvirt-daemon-qemu libvirt-client qemu qemu-img

For virtual machine management we will use the virt-manager GUI. TLD Linux is a server-oriented distribution and doesn't provide a desktop solution, so we will run a VNC / RDP server to get a minimalistic desktop (LXDE). Only one virt-manager instance is needed to manage all KVM hosts. Let's install the required packages:

[root@tld]# poldek -ivh virt-manager tigervnc-server xrdp openssh-gnome-askpass metapackage-lxde lxde-icon-theme

Other packages will be installed automatically as dependencies. You may be asked to choose some manually if more than one package offers a given functionality or if a package suggests installation of other packages.

Configuration

libvirtd

To be able to use virt-manager to manage our virtual machines we must make a few changes to the default libvirtd configuration. Uncomment/change the following settings in /etc/libvirt/libvirtd.conf:

unix_sock_group = "kvm"
unix_sock_ro_perms = "0770"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"

Warning! This gives full, unrestricted access to KVM hypervisor management through the local unix socket to any member of the kvm group! If that is not what you want, adjust libvirtd.conf to fit your needs.

Let's add user test to the kvm group:

[root@tld]# usermod -A kvm test

Start required system services:

[root@tld]# service libvirtd start
[root@tld]# service messagebus start
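With libvirtd running, you can verify that a member of the kvm group can actually reach the hypervisor over the local unix socket (assuming the user test added above):

```shell
# Run as user test (a member of the kvm group); lists all defined
# domains -- an empty table still confirms the socket permissions work
su - test -c "virsh -c qemu:///system list --all"
```

If this fails with a permission error, recheck the unix_sock_* settings and make sure libvirtd was restarted after editing its configuration.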

VNC / RDP server

For RDP access you don't need any additional configuration. Make sure your /etc/sysconfig/desktop file sets LXDE as the default window manager:

DEFAULTWM=startlxde
DISPLAYMANAGER=LXDE

and just connect to your server using an RDP client. You may also want to prevent vncserver from starting at boot:

chkconfig vncserver off

If you wish to use a VNC client, follow the instructions below to make the required changes to the server configuration.

Note: if you already have a VNC server configured and wish to start using RDP, please stop VNC, deconfigure it and delete the .vnc folders in the users' home directories.
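The cleanup before switching to RDP could look like this. A sketch, assuming user test and the home directory layout used elsewhere in this howto:

```shell
service vncserver stop            # stop the running VNC server
chkconfig vncserver off           # keep it from starting at boot
rm -rf /home/users/test/.vnc      # remove the per-user VNC session data
```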

To configure the VNC server you must edit the file /etc/sysconfig/vncserver. Let's say we want to run a 1024×768 16-bit desktop for user test at display 11 and bind the server to IP 192.168.5.6. Your vncserver file should look like this:

VNCSERVERS="11:test"
VNCSERVERARGS[11]="-geometry 1024x768 -depth 16 -interface 192.168.5.6"

Note: virtual machines using VNC to provide their console may collide with the VNC server if you use low display numbers.

Now log in as user test and configure their VNC session:

[root@tld]# su - test
[test@tld]$ vncserver

You will require a password to access your desktops.

Password:
Verify:
xauth:  file /home/users/test/.Xauthority does not exist
xauth: (stdin):1:  bad display name "tld:1" in "add" command

New 'tld:1 (test)' desktop is tld:1

Creating default startup script /home/users/test/.vnc/xstartup
Starting applications specified in /home/users/test/.vnc/xstartup
Log file is /home/users/test/.vnc/tld:1.log

We don't want the VNC process running yet, so let's kill it:

[test@tld ~]$ vncserver -kill :1
Killing Xvnc process ID 5076

Edit the VNC startup script for user test (/home/users/test/.vnc/xstartup) and put the following contents in it to get LXDE up and running in the VNC session:

#!/bin/sh

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources

xsetroot -solid grey
ck-launch-session dbus-launch lxsession -s LXDE -e LXDE &

Finally, let's start the VNC server:

[root@tld]# service vncserver start

You can now connect with your VNC client to 192.168.5.6:11, run virt-manager and start creating virtual machines.

Bridged networking

You will probably want to use bridged networking to connect your virtual machines directly to the network instead of using NAT. To create a bridge containing just the eth0 interface, create a configuration based on the examples below:

/etc/sysconfig/interfaces/ifcfg-eth0

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
QDISC=sfq

/etc/sysconfig/interfaces/ifcfg-br0

DEVICE=br0
IPADDR=192.168.5.6/24
GATEWAY=192.168.5.1
ONBOOT=yes
BRIDGE_DEVS="eth0"
SPANNING_TREE=yes

That's all. Restart the network and you are done:

[root@tld]# service network restart
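After the restart you can confirm that the bridge came up with the expected address and that eth0 was enslaved to it:

```shell
ip addr show br0        # should list 192.168.5.6/24 on br0
brctl show br0          # eth0 should appear in the interfaces column
```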

Storage

It is a good idea to keep virtual machine data on separate storage. For example, to use an LVM volume group as a storage pool, do the following in virt-manager:

  • Select the libvirt connection by clicking on it.
  • Go to the “Edit → Connection details” menu and click the “Storage” tab.
  • Click the “+” icon at the bottom left to add new storage. In the next dialog choose “logical: LVM Volume Group” as the type and enter the volume group name in the “Name” field. Click “Forward”. Enter the device name in the “Target path” field, e.g. “/dev/storage”, and click “Finish” if everything is OK.

You'll now be able to use your storage pool when creating virtual machines.
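The same pool can also be created from the command line with virsh, which is handy on hosts without GUI access. A sketch, assuming the volume group is named storage:

```shell
# Define an LVM-backed pool named "storage" from volume group "storage"
virsh pool-define-as storage logical --source-name storage --target /dev/storage

virsh pool-start storage          # activate the pool now
virsh pool-autostart storage      # activate it on every libvirtd start
```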

Managing multiple hosts

To manage remote hosts just add a new connection in the virt-manager GUI. Choose “File → Add new connection”. Choose “QEMU/KVM” as the “Hypervisor”, tick “Connect to remote host”, set “Method” to “SSH”, and enter the username and hostname. If your SSH server runs on a nonstandard port, append it to the hostname after a colon, e.g. 192.168.5.7:2222.
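You can verify the remote connection from the command line before adding it in the GUI (using the example host and port above):

```shell
# Same connection URI virt-manager will use; prints CPU and memory
# details of the remote host if SSH and libvirtd access work
virsh -c qemu+ssh://user@192.168.5.7:2222/system nodeinfo
```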

Live migration of virtual machine between hosts

KVM supports live migration of running virtual machines between hosts. There is a “Migrate” option in the context menu in virt-manager after right-clicking a virtual machine, but it doesn't allow migrating storage. We will use the virsh command to accomplish this task.

First, create exact copies of the storage devices on the new host. For example, if you are using LVM, just create logical volumes with the same names and sizes. If the volume group name differs on the new host, create a proper symlink in /dev. You may remove it after the migration finishes.
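For LVM the preparation could look like this. The volume and group names are just examples; match them to your source host:

```shell
# On the source host: note the exact size of the VM's volume
lvs --units b storage/VM-1-disk

# On the destination host: create a volume with the same name and size
lvcreate -L 10737418240b -n VM-1-disk storage

# If the destination volume group is named differently (e.g. vg0),
# fake the source path with a symlink; remove it after migration:
#   lvcreate -L 10737418240b -n VM-1-disk vg0
#   ln -s /dev/vg0 /dev/storage
```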

If you are using bridged networking, you must have a bridge interface with the same name on both hosts. If that is not possible, you will have to temporarily disable networking in the virtual machine, remove the network interface for the migration, and recreate it later on the new host.

To perform the actual migration, run the following command as root on the source host:

virsh migrate --live VM-1 qemu+ssh://user@192.168.5.7:2222/system --copy-storage-all --verbose --persistent --undefinesource

where:

  • VM-1 - full name of the virtual machine on the source host
  • user - username of an account with SSH access and full KVM management access on the destination host
  • 2222 - nonstandard SSH port on the destination host
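When the migration finishes you can confirm that the machine is running on the destination and no longer defined on the source:

```shell
# On the source host: VM-1 should no longer appear here
# (it was removed by --undefinesource)
virsh list --all

# On the destination host: VM-1 should be listed as running
virsh -c qemu+ssh://user@192.168.5.7:2222/system list --all
```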
© TLD Linux