The Kernel-based Virtual Machine (KVM) is the complete virtualization solution that Red Hat Enterprise Linux (RHEL) introduced in RHEL 5.4. This document explains how you can use Veritas Cluster Server (VCS) software in RHEL KVM-based virtualization environments to provide mission-critical clustering and failover capabilities. It also describes the set of supported clustering architectures that you can implement.
The KVM virtualization architecture represents the latest generation of virtualization hypervisors. It leverages the hardware-assisted virtualization features that Intel and AMD have developed within their CPU architectures. Even though Intel and AMD implement these features differently, both significantly reduce the CPU and hypervisor overhead required for virtualization.
KVM is available in the Linux kernel from version 2.6.20 onward. It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and turns the standard Linux kernel into a bare-metal hypervisor. Its processor-specific modules are kvm-intel.ko and kvm-amd.ko; KVM therefore requires processors with Intel VT-x or AMD-V enabled, and it leverages these features to
virtualize the CPU. KVM uses QEMU as an adjunct tool to handle device emulation, making it a complete virtualization suite. The KVM architecture benefits from using the same memory manager, process scheduler, and I/O network stack provided within the Linux kernel.
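Whether a given host can run KVM can be checked from the CPU flags before loading the modules. The following is a minimal sketch; the module names come from the text above, and the flag names are the standard ones Linux reports for Intel VT-x and AMD-V:

```shell
# 'vmx' is the Intel VT-x CPU flag; 'svm' is the AMD-V flag. A non-zero
# count means the CPU supports hardware-assisted virtualization.
grep -E -c '(vmx|svm)' /proc/cpuinfo

# Load the core module plus the processor-specific one (requires root;
# use kvm-amd on AMD hardware), then confirm both are loaded:
#   modprobe kvm kvm-intel
#   lsmod | grep kvm
```

If the flag count is zero, verify that virtualization support is enabled in the system BIOS, since many vendors ship it disabled by default.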
Each guest VM is implemented as a regular Linux process. The KVM module is used to start and run new guest operating systems, and to provide them with virtualized environments.
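Because each guest is an ordinary Linux process, standard process tools apply to it directly; a small sketch:

```shell
# Each running guest appears on the host as one qemu-kvm process, so
# ordinary tools such as ps, top, and kill apply to it directly.
# The [q] bracket trick stops grep from matching its own command line.
ps -ef | grep '[q]emu-kvm' || echo "no guests running"
```

The `-name` argument visible in the process command line identifies which guest a given qemu-kvm process backs.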
Since KVM leverages hardware-assisted virtualization, the guest VM kernel is a normal, unmodified kernel. The KVM kernel is thus a hypervisor that can also run any other application exactly like a regular Linux distribution, without requiring a dedicated Console OS (as VMware ESX does) or domain0 (as Xen does). CPU virtualization (a virtual processor within the guest) is simply provided as a separate Linux process. Memory virtualization is provided through the kernel memory manager, by a special device (/dev/kvm) that maps the guest operating system's physical addresses to virtual addresses on the hypervisor. I/O virtualization for the guest in KVM is provided by QEMU. A separate QEMU process runs for each guest OS and virtualizes (or emulates) the entire set of devices on the host, making them available to the guest. Any I/O that the guest performs on these devices is intercepted and re-routed to the device in user mode by the QEMU process. The flexibility of supporting a large set of devices is offset by the relatively small performance cost of re-routing I/O. RHEL-based KVM also provides para-virtualized (virtio) drivers for all supported operating systems.

RHEL-based KVM installation and usage

KVM is available as a part of RHEL 5.4 and later. You can manage KVM either through the Red Hat Enterprise Virtualization Manager (RHEV-M) or through separate RPMs that you can download into a standard RHEL 5.4 installation. The installation and usage information in this document focuses on KVM-based virtualization as provided through the RHEL 5.4 distribution. The standard installation does not yet install the virtualization tools.
The following additional RPMs must be installed to enable the virtualization capabilities:
• kvm-83-105.el5.x86_64.rpm
• virt-viewer-0.0.2-3.el5.x86_64.rpm
• virt-manager-0.6.1-8.el5.x86_64.rpm
• python-virtinst-0.400.3-5.el5.noarch.rpm
• libvirt-python-0.6.3-20.el5.x86_64.rpm
• libvirt-0.6.3-20.el5.x86_64.rpm
• kvm-qemu-img-83-105.el5.x86_64.rpm
• etherboot-zroms-kvm-5.4.4-10.el5.x86_64.rpm
• kmod-kvm-83-105.el5.x86_64.rpm
• celt051-0.5.1.3-0.el5.x86_64.rpm
• celt051-devel-0.5.1.3-0.el5.x86_64.rpm
• log4cpp-1.0-4.el5.x86_64.rpm
• log4cpp-devel-1.0-4.el5.x86_64.rpm
• qcairo-22.214.171.124-3.el5.x86_64.rpm
• qspice-0.3.0-39.el5.x86_64.rpm
• qspice-libs-0.3.0-39.el5.x86_64.rpm
• qspice-libs-devel-0.3.0-39.el5.x86_64.rpm
• qcairo-devel-126.96.36.199-3.el5.x86_64.rpm
• qffmpeg-devel-0.4.9-0.15.20080908.el5.x86_64.rpm
• qffmpeg-libs-0.4.9-0.15.20080908.el5.x86_64.rpm
• qpixman-0.13.3-4.el5.x86_64.rpm
• qpixman-devel-0.13.3-4.el5.x86_64.rpm

The above RPMs also have the following essential dependencies:
• /Server/xen-libs-3.0.3-94.el5.x86_64.rpm
• /Server/gnome-python2-gnomekeyring-2.16.0-3.el5.x86_64.rpm
• /Server/gtk-vnc-python-0.3.8-3.el5.x86_64.rpm
• /Server/cyrus-sasl-md5-2.1.22-5.el5.x86_64.rpm
• /Server/gtk-vnc-0.3.8-3.el5.x86_64.rpm

Alternatively, you can install all the RPMs through yum. First, verify that the KVM package group is available:
# yum grouplist | grep KVM
Subsequently, install the KVM group with the following command:
# yum groupinstall "KVM"
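After the packages are installed, the stack can be verified and guests managed from the command line. The following is a sketch using the virsh tool shipped with the libvirt RPM listed above; the guest name "myguest" is a placeholder, not a guest defined anywhere in this document:

```shell
# Confirm the kernel side is ready: KVM modules loaded and the control
# device present (either may be absent on a non-virtualization host).
lsmod | grep kvm || true
ls -l /dev/kvm 2>/dev/null || true

# Typical guest lifecycle commands ("myguest" is a placeholder):
#   virsh start myguest       # boot a guest
#   virsh shutdown myguest    # request a clean guest shutdown
# List all defined guests and their states, if virsh is installed:
command -v virsh >/dev/null && virsh list --all || true
```

The same operations are available graphically through virt-manager, which is also in the RPM list above.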
KVM Terminology used in this document
| Term | Definition |
|------|------------|
| KVM | Kernel-based Virtual Machine. |
| KVMGuest | KVM virtualized guest. |
| Host | The physical host on which KVM is installed. |
| PM | The physical machine running VCS. |
| KVM-KVM | VCS-supported configuration in which a cluster is formed between KVMGuests running on top of the same or different hosts. |
| KVM-PM | VCS-supported configuration in which a cluster is formed between KVMGuests and physical machines. |
| PM-PM | VCS-supported configuration in which a cluster is formed between hosts, and which is mainly used to manage KVMGuests running inside them. |
| Bridge | A device bound to a physical network interface on the host which enables any number of guests to connect to the local network on the host. It is mapped to a physical NIC which acts as a switch to the guests. |
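A bridge of the kind described above is typically defined through the RHEL 5 network scripts. The following is a minimal configuration sketch, assuming the physical NIC is eth0 and the bridge is named br0 (both names are assumptions, not values taken from this document):

```
# /etc/sysconfig/network-scripts/ifcfg-br0  (the bridge the guests attach to)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0  (physical NIC enslaved to br0)
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
```

After a `service network restart`, `brctl show` should list eth0 under br0, and guests attached to br0 appear on the host's local network.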
VCS setup checklist
System requirements for the KVM-supported configurations
| Requirement | Supported value |
|-------------|-----------------|
| VCS version | 5.1 Service Pack 1 |
| Supported OS version in host | RHEL 5.4 and 5.5 |
| Supported OS in KVMGuest | RHEL 5.4 and 5.5 |
| Hardware requirement | Full virtualization-enabled CPU |