This Blog is to share our knowledge and expertise on Linux System Administration and VMware Administration


Friday, December 22, 2017

Install KVM in RHEL7


By default, a RHEL 7 system does not come with KVM or libvirt preinstalled. They can be installed in three ways:

  •     Through the graphical setup during the system's setup
  •     Via a kickstart installation
  •     Through a manual installation from the command line


To install KVM, you will need at least 6 GB of free disk space, 2 GB of RAM, and an additional core or thread per guest.

Check whether your CPU supports a virtualization flag (such as SVM or VMX). Some hardware vendors disable this in the BIOS, so you may want to check your BIOS as well. Run the following command:

# grep -E 'svm|vmx' /proc/cpuinfo
flags    : ... svm ...
Check whether the hardware virtualization modules (such as kvm_intel and kvm) are loaded in the kernel using the following command:

# lsmod | grep kvm
kvm_intel             155648  0
kvm                      495616  1 kvm_intel

Manual installation
This way of installing KVM is generally used when the base system has already been installed by some other means.

Install the software needed to provide an environment to host virtualized guests with the following command:
# yum -y install qemu-kvm qemu-img libvirt

The installation of these packages will include quite a lot of dependencies.

Install additional utilities required to configure libvirt and install virtual machines by running this command:
# yum -y install virt-install libvirt-python python-virthost libvirt-client

By default, the libvirt daemon is marked to autostart on each boot. Check whether it is enabled by executing the following command:
# systemctl status libvirtd

libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: inactive
   Docs: man:libvirtd(8)
    http://libvirt.org

If for some reason this is not the case, mark it for autostart by executing the following:
# systemctl enable libvirtd
To manually stop/start/restart the libvirt daemon, this is what you'll need to execute:
# systemctl stop libvirtd
# systemctl start libvirtd
# systemctl restart libvirtd

Kickstart installation
Installing a KVM during kickstart offers you an easy way to automate the installation of KVM instances. 

Add the following package groups to your kickstart file in the %packages section:
@virtualization-hypervisor
@virtualization-client
@virtualization-platform
@virtualization-tools
Start the installation of your host with this kickstart file.
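
In context, the relevant fragment of the kickstart file would look like this minimal sketch (everything else in the file is assumed to already exist):

%packages
@virtualization-hypervisor
@virtualization-client
@virtualization-platform
@virtualization-tools
%end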

Graphical setup during the system's setup
This is probably the least common way of installing a KVM. The only time I used this was during the course of writing this recipe. Here's how you can do this:

Boot from the RHEL 7 installation media.
Complete all steps besides the Software Selection step.
Go to Software Selection to complete the KVM software selection.
Select the Virtualization Host radio button in Base Environment, and check the Virtualization Platform checkbox in Add-Ons for Selected Environment.
Back on the Installation Summary screen, complete any other steps and click on Begin Installation.
Finalize the installation.

Monday, December 18, 2017

How to reboot a Xen virtual machine that is in a hung state


If the console is not responding, open it from the host (Dom0) so that SysRq magic keys can be sent:
#xm console xenvm006

From another terminal on the node, run the commands below one by one.
#xm sysrq xenvm006  h
#xm sysrq xenvm006  m    #should show the amount of memory being used
#xm sysrq xenvm006  t    #should show the current tasks

In this case the console opened but was unresponsive, and the SysRq output showed that the VM was out of resources.
Check xentop; if it shows very high CPU usage for the guest (for example 200%), the VM is hung.
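
For example, run the following on the Dom0 and watch the CPU (%) column for the affected guest:

#xentop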

Then reboot the virtual machine.

Open one more terminal & run the below command.

#xm destroy  xenvm006

If the virtual machine is running under a cluster, first disable the VM in the cluster to avoid a failover:

#clusvcadm -d vm:xenvm006

Then destroy the virtual machine.

Because the VM is in a hung state, clusvcadm -d won't do a clean shutdown, so we need to destroy it manually.

#xm create xenvm006  (starts the VM)
#clusvcadm -e vm:xenvm006   (re-enables it in the cluster)

Thursday, December 14, 2017

Understanding Red Hat Virtualization - Log files


Red Hat Virtualization features the xend daemon and the qemu-dm process, two utilities that write multiple log files to the /var/log/xen/ directory:

xend.log is the log file that contains all the data collected by the xend daemon, whether it is a normal system event or an operator-initiated action. All virtual machine operations (such as create, shutdown, destroy, etc.) appear here. xend.log is usually the first place to look when you track down event or performance problems, as it contains detailed entries and the conditions of any error messages.

xend-debug.log is the log file that contains records of event errors from xend and the Virtualization subsystems (such as framebuffer, Python scripts, etc.)

xen-hotplug.log is the log file that contains data from hotplug events. If a device or a network script does not come online, the event appears here.

qemu-dm.[PID].log is the log file created by the qemu-dm process for each fully virtualized guest. To use this log file, you must first retrieve the PID of the given qemu-dm process by using the ps command to examine process arguments and isolate the qemu-dm process belonging to the virtual machine. Replace the [PID] symbol with the actual PID of the qemu-dm process.
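
For example, assuming the qemu-dm process for the guest turns out to have PID 12345 (a hypothetical value):

# ps aux | grep qemu-dm
# less /var/log/xen/qemu-dm.12345.log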

If you encounter any errors with the Virtual Machine Manager, you can review the generated data in the virt-manager.log file that resides in the ~/.virt-manager directory. Note that every time you start the Virtual Machine Manager, it overwrites the existing log file contents. Make sure to back up the virt-manager.log file before you restart the Virtual Machine Manager after a system error.

Monday, November 6, 2017

How to Avoid & Solve Disk Limitation Errors with KVM Guests?

There are some limitations specific to the virtio-blk driver that will be discussed in this kbase article. Please note, these are not general limitations of KVM, but rather relevant only to cases where virtio-blk is used.

Disks under KVM are para-virtualized block devices when used with the virtio-blk driver. All para-virtualized devices (e.g. disk, network, balloon, etc.) are PCI devices. Presently, guests are limited to a maximum of 32 PCI devices. Of the 32, 4 are required by the guest for minimal baseline functionality and are therefore reserved.


When adding a disk to a KVM guest, the default method assigns a separate virtual PCI controller for every disk to allow hot-plug support (i.e. the ability to add/remove disks from a running VM without downtime). Therefore, if no other PCI devices have been assigned, the max number of hot-pluggable disks is 28.


If a guest requires more disks than the available PCI slots allow, there are three possible workarounds.


1. Use PCI pass-through to assign a physical disk controller (i.e. FC HBA, SCSI controller, etc.) to the VM and subsequently use as many devices as that controller supports.
2. Forego the ability to hot-plug and assign the virtual disks using multi-function PCI addressing.
3. Use the virtio-scsi driver, which creates a virtual SCSI HBA that occupies a single PCI address and supports thousands of hot-plug disks.


Here I have used option 2 to correct this problem.

Multi-function PCI addressing allows up to 8 virtual disks per controller. Therefore you can have n * 8 possible disks, where n is the number of available PCI slots.

On a system with 28 free PCI slots, you can assign up to 224 virtual disks to that VM. However, as previously stated, you will not be able to add and/or remove the multi-function disks from the guest without a reboot of the guest.

Any disks assigned without multifunction can, however, continue to use hot-plug.

The XML config below demonstrates how to configure a multi-function PCI controller:

<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/rootvg/lvtest01'/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/rootvg/lvtest02'/>
<target dev='vdc' bus='virtio'/>
<alias name='virtio-disk2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1' multifunction='on'/>
</disk>


In the above, we defined a new multi-function controller in slot 7 and attached two disks (vdb, vdc) to it, so only one PCI slot is used. Since a multi-function controller holds up to 8 disks, we could add six more disks to the controller before having to create a new controller in slot 8 (assuming 8 is the next available slot).

You can check a guest's config from the virtualization host by using "virsh dumpxml <guest>". This will show you which slots are in use and therefore which are available.
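
For example, assuming a guest named vm1:

# virsh dumpxml vm1 | grep "slot="

This prints the PCI address line of every device, so you can see which slots are occupied.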


To add one or more multifunction controllers you would use "virsh edit <guest>" and then add the appropriate XML entries modeled after the example above. Remember, the guest must be rebooted before the changes to the XML config are applied.

Tuesday, October 17, 2017

When will the "Error: Driver 'pcspkr' is already registered" message appear in a virtual machine?


On virtual machines, if you observe the message 'Error: Driver 'pcspkr' is already registered' in the /var/log/messages file, you can get rid of it by adding 'blacklist snd-pcsp' to the /etc/modprobe.d/blacklist.conf file:

#echo 'blacklist snd-pcsp' >> /etc/modprobe.d/blacklist.conf

Thursday, March 3, 2016

How can I add more ethernet interfaces to a guest Linux server after installation?


Adding more ethernet interfaces to a guest after installation

Issue 

When a guest OS is created using virt-manager or virt-install, one ethernet interface is created for the guest. How do I create a second and third interface and attach them to the guest post installation?

Resolution
 

Xen

Using virt-manager. (Recommended)
  •     Right-click the guest in virt-manager, select "Open", and select the "Hardware" tab
  •     Click on "Add Hardware"
  •     Select "Network" as the "Hardware Type" and click Forward.
  •     Select "Virtual Network" or "Shared physical device" depending upon the requirement. Set a fixed MAC address if needed. Select "Hypervisor default" or another appropriate Device Model. Click Forward.
  •     Then Click "Finish".
  •     The new network interface will be attached to the guest on the next reboot.

By manually editing the guest configuration file. (Not recommended)

    Edit the configuration file for that guest at /etc/xen/guestname and add nic = 2 for two interfaces or nic = 3 for three interfaces.
    Change the vif = entry.

     vif = [ 'mac=xx:xx:xx:xx:xx:xx, bridge=xenbr0' ]

     To (for two interfaces):

     vif = [ 'mac=xx:xx:xx:xx:xx:xx, bridge=xenbr0', 'mac=xx:xx:xx:xx:xx:xx, bridge=xenbr0' ]

     Or to (for three interfaces):

     vif = [ 'mac=xx:xx:xx:xx:xx:xx, bridge=xenbr0', 'mac=xx:xx:xx:xx:xx:xx, bridge=xenbr0', 'mac=xx:xx:xx:xx:xx:xx, bridge=xenbr0' ]

    For fully virtualised guests it should be as below:

     vif = [ 'type=ioemu,mac=xx:xx:xx:xx:xx:xx, bridge=xenbr0', 'type=ioemu,mac=xx:xx:xx:xx:xx:xx, bridge=xenbr0', 'type=ioemu,mac=xx:xx:xx:xx:xx:xx, bridge=xenbr0' ]

    Configure eth1 and eth2 inside the guest OS as usual.

Note: xx:xx:xx:xx:xx:xx needs to be replaced by a unique MAC address.


KVM

Using virt-manager.
  •     Right-click the guest in virt-manager, select "Open", and select the "Hardware" tab
  •     Click on "Add Hardware"
  •     Select "Network" as the "Hardware Type" and click Forward.
  •     Select "Virtual Network" or "Shared physical device" depending upon the requirement. Set a fixed MAC address if needed. Select "Hypervisor default" or another appropriate Device Model. Click Forward.
  •     Then Click "Finish".
    The new network interface will be attached to the guest on the next reboot.
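
As a quick alternative on KVM hosts, you can also attach an interface from the command line with virsh; this is a sketch assuming a guest named vm1 and a bridge named br0:

# virsh attach-interface vm1 bridge br0 --model virtio --config

The --config flag makes the change persistent in the guest configuration, taking effect on the next boot.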

Wednesday, February 24, 2016

Why does VM creation in KVM fail with the error "libvirtError: Unable to read from monitor: Connection reset by peer" on Red Hat Enterprise Linux 6.5?

Issue

    While creating or starting a VM, the below error appears:

Unable to complete install: 'Unable to read from monitor: Connection reset by peer'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/create.py", line 1928, in do_install
    guest.start_install(False, meter=meter)
  File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1229, in start_install
    noboot)
  File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1297, in _create_guest
    dom = self.conn.createLinux(start_xml or final_xml, 0)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2686, in createLinux
    if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
libvirtError: Unable to read from monitor: Connection reset by peer

Resolution

    Add display/video drivers (spice, virtio, qxl) to the VM configuration.
    Set the loopback interface (lo) to up.

Root Cause

    Display hardware (virtio, spice, qxl) was not enabled or added in the VM configuration.
    The loopback interface was down, preventing the localhost connection.
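
To bring the loopback interface up, a minimal sketch (on RHEL 6 you can also use "ifconfig lo up"):

# ip link set lo up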

Wednesday, December 9, 2015

The SysRq command, and how to reboot a hung physical Linux server or Xen Linux VM

The magic SysRq key is a key combination in the Linux kernel which allows the user to perform various low level commands regardless of the system’s state.

It is often used to recover from freezes, or to reboot a computer without corrupting the filesystem. The key combination consists of Alt+SysRq+commandkey. On many systems the SysRq key is the Print Screen key.

First, you need to enable the SysRq key, as shown below.

echo "1" > /proc/sys/kernel/sysrq
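
To make the setting persistent across reboots, you can also add it to /etc/sysctl.conf:

echo "kernel.sysrq = 1" >> /etc/sysctl.conf
sysctl -p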

List of SysRq Command Keys

Following are the command keys available for Alt+SysRq+commandkey.

    ‘k’ – Kills all processes running on the current virtual console.
    ‘b’ – Immediately reboots the system, without unmounting partitions or syncing.
    ‘e’ – Sends SIGTERM to all processes except init.
    ‘m’ – Outputs current memory information to the console.
    ‘i’ – Sends SIGKILL to all processes except init.
    ‘r’ – Switches the keyboard from raw mode (the mode used by programs such as X11) to XLATE mode.
    ‘s’ – Attempts to sync all mounted filesystems.
    ‘t’ – Outputs a list of current tasks and their information to the console.
    ‘u’ – Remounts all mounted filesystems read-only.
    ‘o’ – Shuts down the system immediately.
    ‘p’ – Prints the current registers and flags to the console.
    ‘0-9’ – Sets the console log level, controlling which kernel messages will be printed to your console.
    ‘f’ – Calls oom_kill to kill the process using the most memory.
    ‘h’ – Displays help; in fact, any key other than those listed above will print help.

Perform a Safe reboot of Linux

To perform a safe reboot of a Linux computer which hangs up, do the following. This will avoid fsck during the next boot. Press Alt+SysRq+<key> for each of the keys highlighted below, in order:

  •     unRaw (take control of the keyboard back from X11),
  •     tErminate (send SIGTERM to all processes, allowing them to terminate gracefully),
  •     kIll (send SIGKILL to all processes, forcing them to terminate immediately),
  •     Sync (flush data to disk),
  •     Unmount (remount all filesystems read-only),
  •     reBoot.
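
If the server still responds over the network but the keyboard is unusable, you can trigger the same keys by writing them to /proc/sysrq-trigger, for example:

echo s > /proc/sysrq-trigger
echo u > /proc/sysrq-trigger
echo b > /proc/sysrq-trigger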
Perform a Safe Reboot of a Xen VM Server

 To perform a safe reboot of a Linux Xen virtual server which hangs up, do the following. This will avoid fsck during the next boot.

Run the below command in Xen Dom0.

#xm sysrq <domainid> s
#xm sysrq <domainid> u
#xm sysrq <domainid> b


Wednesday, November 4, 2015

Red Hat Enterprise Virtualization Manager (RHEVM) minimum hardware requirements.

Wednesday, November 04, 2015 0
Red Hat Enterprise Virtualization Manager servers must run Red Hat Enterprise Linux 6. A number of additional hardware requirements must also be met.

Item              Requirement
RAM               A minimum of 3 GB of RAM is required.
PCI Devices       At least one network controller with a minimum bandwidth of 1 Gbps (recommended).
Storage           A minimum of 3 GB of available local disk space is recommended.

Monday, November 2, 2015

Understanding the Virsh Command in Linux Virtualization

Connecting to a Hypervisor  (Unsupported now)
virsh connect <name>

Where <name> is the machine name of the hypervisor. If you want to initiate a read-only connection, append the above command with --readonly.

Creating a Virtual Machine
virsh create <path to XML configuration file>

Configuring an XML Dump
virsh dumpxml [domain-id | domain-name | domain-uuid]

This command outputs the domain information (in XML) to stdout. If you save the data to a file, you can use the create option to recreate the virtual machine.
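
For example, with a hypothetical guest named vm1:

virsh dumpxml vm1 > /tmp/vm1.xml
virsh create /tmp/vm1.xml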

Suspending a Virtual Machine
virsh suspend [domain-id | domain-name |domain-uuid]

When a domain is in a suspended state, it still consumes system RAM, but there is no disk or network I/O. This operation is immediate, and the virtual machine must be restarted with the resume option.

Resuming a Virtual Machine
virsh resume [domain-id | domain-name | domain-uuid]

This operation is immediate and the virtual machine parameters are preserved in a suspend and resume cycle.

Saving a Virtual Machine
virsh save [domain-id | domain-name | domain-uuid] [filename]

This stops the virtual machine you specify and saves its state to a file, which may take some time depending on the amount of memory in use by your virtual machine. You can restore the state of the virtual machine with the restore option.

Restoring a Virtual Machine
virsh restore [filename]

This restarts the saved virtual machine, which may take some time. The virtual machine's name and UUID are preserved, but it is allocated a new ID.
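
A quick save/restore cycle with a hypothetical guest named vm1:

virsh save vm1 /tmp/vm1.save
virsh restore /tmp/vm1.save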

Shutting Down a Virtual Machine
virsh shutdown [domain-id | domain-name | domain-uuid]

You can control the shutdown behavior of the virtual machine by modifying the on_shutdown parameter of the xmdomain.cfg file.

Rebooting a Virtual Machine
virsh reboot [domain-id | domain-name | domain-uuid]

 You can control the behavior of the rebooting virtual machine by modifying the on_reboot parameter of the xmdomain.cfg file.

Terminating a Domain
virsh destroy [domain-name | domain-id | domain-uuid]

This command does an immediate, ungraceful shutdown and stops any guest domain sessions (which could potentially lead to corrupted file systems still in use by the virtual machine). You should use the destroy option only when the virtual machine's operating system is non-responsive. For a paravirtualized virtual machine, use the shutdown option instead.

Converting a Domain Name to a Domain ID
virsh domid [domain-name | domain-uuid]

Converting a Domain ID to a Domain Name
virsh domname [domain-id | domain-uuid]

Converting a Domain Name to a UUID
virsh domuuid [domain-id | domain-name]

Displaying Virtual Machine Information
virsh dominfo [domain-id | domain-name | domain-uuid]

Displaying Node Information
virsh nodeinfo

The output displays something similar to:
CPU model:           x86_64
CPU(s):              8
CPU frequency:       2895 MHz
CPU socket(s):       2
Core(s) per socket:  2
Threads per core:    2
NUMA cell(s):        1
Memory size:         1046528 kB
This displays the node information and the machines that support the virtualization process.

Displaying the Virtual Machines
virsh list [--inactive | --all]


The --inactive option lists inactive domains (domains that have been defined but are not currently active).
The --all option lists all domains, whether active or not. Your output should resemble this example:
ID                 Name                 State
————————————————
0                   Domain0             running
1                   Domain202           paused
2                   Domain010           inactive
3                   Domain9600          crashed

Here are the six domain states:
running           lists domains currently active on the CPU
blocked           lists domains that are blocked
paused            lists domains that are suspended
shutdown          lists domains that are in process of shutting down
shutoff           lists domains that are completely down.
crashed           lists domains that are crashed

Displaying Virtual CPU Information
virsh vcpuinfo [domain-id | domain-name | domain-uuid]

Configuring Virtual CPU Affinity
virsh vcpupin [domain-id | domain-name | domain-uuid] [vcpu] [cpulist]

Where [vcpu] is the virtual CPU number and [cpulist] lists the physical CPU numbers to pin it to.
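
For example, to pin virtual CPU 0 of a hypothetical guest vm1 to physical CPUs 0 and 1:

virsh vcpupin vm1 0 0,1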

Configuring Virtual CPU Count
virsh setvcpus [domain-name | domain-id | domain-uuid] [count]

 Note that the new count cannot exceed the amount you specified when you created the Virtual Machine

Configuring Memory Allocation
virsh setmem [domain-id | domain-name]  [count]

You must specify the [count] in kilobytes. Note that the new count cannot exceed the amount you specified when you created the virtual machine. Values lower than 64 MB probably won't work. You can adjust the virtual machine memory as necessary.

Configuring Maximum Memory
virsh setmaxmem  [domain-name | domain-id | domain-uuid] [count]

You must specify the [count] in kilobytes. Note that the new count cannot exceed the amount you specified when you created the virtual machine. Values lower than 64 MB probably won't work. The maximum memory doesn't affect the current use of the virtual machine (unless the new value is lower, which should shrink memory usage).

BASIC MANAGEMENT OPTIONS


Resource Management Options

setmem     : changes the allocated memory.
setmaxmem  : changes the maximum memory limit.
setvcpus   : changes the number of virtual CPUs.
vcpuinfo   : displays domain vcpu information.
vcpupin    : controls the domain vcpu affinity.

Monitoring and troubleshooting Options

version    : shows the version.
dumpxml    : dumps domain information in XML.
nodeinfo   : shows node information.

virsh command output

The following are example outputs from common virsh commands:
the list command:
virsh # list

Id  Name                 State
----------------------------------
0   Domain-0             running
13  r5b2-mySQL01         blocked

the dominfo domain command:
virsh # dominfo r5b2-mySQL01

Id:             13
Name:           r5b2-mySQL01
UUID:           4a4c59a7-ee3f-c781-96e4-288f2862f011
OS Type:                 linux
State:          blocked
CPU(s):         1
CPU time:               11.0s
Max memory:     512000 kB
Used memory:    512000 kB

the domstate domain command:

virsh # domstate r5b2-mySQL01
blocked

the domuuid domain command:

virsh # domuuid r5b2-mySQL01
4a4c59a7-ee3f-c781-96e4-288f2862f011

the vcpuinfo domain command:

virsh # vcpuinfo r5b2-mySQL01
VCPU:           0
CPU:            0
State:          blocked
CPU time:       0.0s
CPU Affinity:   yy

the dumpxml domain command:

virsh # dumpxml r5b2-mySQL01
<domain type='xen' id='13'>
            <name>r5b2-mySQL01</name>
            <uuid>4a4c59a7ee3fc78196e4288f2862f011</uuid>
            <bootloader>/usr/bin/pygrub</bootloader>
            <os>
                                 <type>linux</type>
                                 <kernel>/var/lib/xen/vmlinuz.2dgnU_</kernel>
                                 <initrd>/var/lib/xen/initrd.UQafMw</initrd>
                                <cmdline>ro root=/dev/VolGroup00/LogVol00 rhgb quiet</cmdline>
            </os>
            <memory>512000</memory>
            <vcpu>1</vcpu>
            <on_poweroff>destroy</on_poweroff>
            <on_reboot>restart</on_reboot>
            <on_crash>restart</on_crash>
            <devices>
                                <interface type='bridge'>
                                                     <source bridge='xenbr0'/>
                                                    <mac address='00:16:3e:49:1d:11'/>
                                                     <script path='vif-bridge'/>
                                 </interface>
                                 <graphics type='vnc' port='5900'/>
                                 <console tty='/dev/pts/4'/>
            </devices>
</domain>

the version command:

virsh # version
Compiled against library: libvir 0.1.7
Using library: libvir 0.1.7
Using API: Xen 3.0.1
Running hypervisor: Xen 3.0.0

Monday, October 19, 2015

How to do KVM Clock Sync?

These are the instructions to fix a KVM guest whose clock jumps ahead a few hours after it is created/started.
The clock will eventually get corrected once ntpd is running, but the server may run up to half an hour on skewed time, which may cause issues with scheduled jobs.
Update the virtual guest's clock setting as follows. This will prevent the clock on the virtual guest from jumping forward.

FROM the DOM-0 (KVM Host Server)
#vi /Path/to/server/configuration_file.xml
replace line:
<clock offset='utc'/>
with:
<clock offset='localtime'/>
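
If the guest is managed by libvirt, you can make the same change with virsh edit rather than editing the file by hand (vm1 is a hypothetical guest name):

#virsh edit vm1

Then change <clock offset='utc'/> to <clock offset='localtime'/> and save.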

Thursday, October 15, 2015

How to Increase Memory in Xen Vm

For example, say you want to increase memory from 6 GB to 12 GB.

Login to the Xen host as the root user and check the VM's maximum memory setting. If the maximum memory setting is not greater than or equal to the target memory, follow the steps below.

[root@Xenhost ~]# virsh dumpxml xenvm100 | grep -i mem
  <memory>6291456</memory>    -----------------> Here Memory settings in KB
  <currentMemory>6291456</currentMemory>

This shows that 6 GB is currently in use and that max memory is also set to 6 GB. Therefore we need server downtime to increase it to 12 GB.

So, the procedure will be:

1. virsh setmem xenvm100 12582912
2. vi /etc/xen/xenvm100
   2a. change "memory = 6144" to "memory = 12288"  ------> Here in MB
   2b. save config
 
Reboot the VM and check the memory.
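
After the reboot you can confirm the new values from the host, for example:

[root@Xenhost ~]# virsh dominfo xenvm100 | grep -i memory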

Hope it will help.

Thursday, October 8, 2015

KVM Virtualization in RHEL 7 - Detailed document

Purpose of this document
 
This document describes how to quickly setup and manage a virtualized environment with KVM (Kernel-based Virtual Machine) in Red Hat Enterprise Linux 7. This is not an in-depth discussion of virtualization or KVM, but rather an easy-to-follow step-by-step description of how to install and manage Virtual Machines (VMs) on a physical server.

A very brief overview of KVM

KVM is a Linux kernel module that allows a user space program access to the hardware virtualization features of Intel and AMD processors. With the KVM kernel module, VMs run as ordinary user-space processes.

KVM uses QEMU for I/O hardware emulation. QEMU is a user-space emulator that can emulate a variety of guest processors on host processors with decent performance. Using the KVM kernel module allows it to approach native speeds.


KVM is managed via the libvirt API and tools. Some libvirt tools used in this article include virsh, virt-install and virt-clone.

Virtualization Technology

Verify that Virtualization Technology (VT) is enabled in your server’s BIOS.
Another item to check once your server boots up is whether your processors support VT; the KVM kernel module requires these hardware virtualization extensions, so you will need processors that support VT. Check for these CPU extensions:


# grep -E 'svm|vmx' /proc/cpuinfo
- vmx is for Intel processors
- svm is for AMD processors


Required packages
 

There are several packages to install that are not part of the base RHEL 7 installation. Assuming that you have a yum repository already defined, install the following:

# yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install

 
Enable and start the libvirtd service:


# systemctl enable libvirtd && systemctl start libvirtd

 
Verify that the following kernel modules are loaded, and if not, load them manually:
kvm
kvm_intel (only on Intel-based systems)
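
A quick check on an Intel-based host; if nothing is listed, load the module manually:

# lsmod | grep kvm
# modprobe kvm_intel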
OS Installation Source


You need to have an OS installation source ready for your VMs. You can either use an iso or a network installation source that can be accessed via http, ftp or nfs.

 
Disk Space

 
When a VM is created, image files are created in the default directory /var/lib/libvirt/images, but you can choose any directory you’d like. Regardless of what directory you choose, you will have to verify there is enough disk space available in that partition.  I use directory /vm-images.


KVM supports several types of VM image formats, which determine the amount of actual disk space each VM uses on the host. Here, we will only create VMs with raw file formats, which use the exact amount of disk space you specify. So for example if you specify that a VM will have 10 GB of disk space, the VM install tool will create a file image of exactly 10 GB on the host, regardless whether the VM uses all 10 GB or not.


Best practice here is to allocate more than enough disk space on the host to safely fit all your VMs. For example, if you want to create 4 VMs with 20 GB storage each, be sure you have at least 85-90 GB space available on your host
 

Networking
 
By default, VMs will only have network access to other VMs on the same server (and to the host itself) via private network 192.168.122.0. If you want the VMs to have access to your LAN, then you must create a network bridge on the host that is connected to the NIC that connects to your LAN. Follow these steps to create a network bridge:


1. We will create a bridge named ‘br0’. Add to your network controller configuration file (i.e. /etc/sysconfig/network-scripts/ifcfg-em1) this line:
BRIDGE=br0
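
As a minimal sketch, the resulting ifcfg-em1 might look like the following (the device name is an example; keep your existing HWADDR/UUID lines):

DEVICE="em1"
TYPE="Ethernet"
ONBOOT="yes"
BRIDGE="br0"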


2. Create /etc/sysconfig/network-scripts/ifcfg-br0 and add:
DEVICE="br0"
# BOOTPROTO is up to you. If you prefer “static”, you will need to
# specify the IP address, netmask, gateway and DNS information.
BOOTPROTO="dhcp"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"


3. Enable network forwarding. Add to /etc/sysctl.conf:
net.ipv4.ip_forward = 1


And read the file:
# sysctl -p /etc/sysctl.conf

 
4. Restart the ‘NetworkManager’ service so that the bridge you just created can get an IP address:
# systemctl restart NetworkManager


Firewalld
In RHEL 6, the default packet filtering and forwarding service is ‘iptables’. In RHEL 7, the default service is ‘firewalld’, which provides the same packet filtering and forwarding capabilities as iptables, but implements rules dynamically and has additional features such as network zones, which give you added flexibility when managing different networks.
Please note that the iptables tool is still available in RHEL 7, and in fact it is used by firewalld to talk to the kernel packet filter (it is the iptables service that has been replaced by firewalld). If you prefer, you can install the iptables-service package to use the iptables service instead of firewalld.


SELinux
If you are using SELinux in Enforcing mode, then there are some things to consider. The most common issue is when you use a non-default directory for your VM images. If you use a directory other than /var/lib/libvirt/images, then you must change the security context for that directory. For example, let’s say you select /vm-images to place your VM images:


1. Create the directory:
# mkdir /vm-images


2. Install the policycoreutils-python package (which contains the semanage SELinux utility):
# yum -y install policycoreutils-python


3. Set the security context for the directory and everything under it:
# semanage fcontext --add -t virt_image_t '/vm-images(/.*)?'


Verify it:

# semanage fcontext -l | grep virt_image_t
/var/lib/imagefactory/images(/.*)? all files system_u:object_r:virt_image_t:s0
/var/lib/libvirt/images(/.*)? all files system_u:object_r:virt_image_t:s0
/vm-images(/.*)? all files system_u:object_r:virt_image_t:s0


4. Restore the security context. This will effectively change the context to virt_image_t:
# restorecon -R -v /vm-images
Verify that the context was changed:
# ls -aZ /vm-images
drwxr-xr-x. root root system_u:object_r:virt_image_t:s0 .
dr-xr-xr-x. root root system_u:object_r:root_t:s0 ..


5. If you are going to export the directory /vm-images as a samba or NFS share, there are SELinux Booleans that need to be set as well:
# setsebool -P virt_use_samba 1
# setsebool -P virt_use_nfs 1


Creating VMs

 
Installation of VMs using the virt-install tool is very straight-forward. This tool can run in interactive or non-interactive mode. Let’s use virt-install in non-interactive mode to create a RHEL 7 x64 VM named vm1 with one virtual CPU, 1 GB memory and 10 GB of disk space:
# virt-install \
--network bridge:br0 \
--name vm1 \
--ram=1024 \
--vcpus=1 \
--disk path=/vm-images/vm1.img,size=10 \
--graphics none \
--location=http://my.server.com/pub/rhel7/install-x86_64/ \
--extra-args="console=tty0 console=ttyS0,115200"
Here is what each parameter means:

    --network bridge:br0

If you created a network bridge (as described in the Networking section above) and want to use it for full inbound and outbound connectivity, then you must specify it.
     --name vm1
No big mystery here, this is the name of the VM
     --ram=1024
This is the amount of memory in the VM in MBs
    --vcpus=1
You guessed it, this is the number of virtual CPUs
     --disk path=/vm-images/vm1.img,size=10
This is the image file for the VM, the size is specified in GBs.
     --graphics none


This tells the installer not to launch a VNC window to access the VM’s main console. Instead, it will use a text console on the VM’s serial port. If you’d rather use an X window with graphics to install the OS on the VM, omit this parameter.
     --location=http://my.server.com/pub/rhel7/install-x86_64/


This is the location of the RHEL 7 x64 installation directory, which of course will be different for you. If you don’t have a remote installation location for the OS, you can install from an iso instead. Instead of using the location parameter, use the cdrom parameter:
    --cdrom /root/RHEL-7.0-20140507.0-Server-x86_64-dvd1.iso
     --extra-args="console=tty0 console=ttyS0,115200"


The extra-args parameter is used to pass kernel boot parameters to the OS installer. In this case, since we are connecting to the VM’s serial port, we must use the proper kernel parameters to set it up, just like we would on any server, virtual or not.
The extra-args parameter can also be used to specify a kickstart file for non-interactive installations. So if we had a kickstart file we would use:


    --extra-args="ks=http://my.server.com/pub/ks.cfg console=tty0 console=ttyS0,115200"
The OS installation on the VM proceeds as with a physical server, where you provide information such as disk partitions, time zone, root password, etc.


Here is another example: install a RHEL 7 x64 VM with 2 VCPUs, 2 GB of memory, 15 GB disk space, using the default network (private VM network), install from a local iso on the host and use VNC to interact with the VM (must have an X server running):
# virt-install \
--name vm1 \
--ram=2048 \
--vcpus=2 \
--disk path=/vm-images/vm1.img,size=15 \
--cdrom /root/RHEL-7.0-20140507.0-Server-x86_64-dvd1.iso


For more information on all virt-install parameters, refer to the virt-install man page
Cloning VMs


If you want several VMs with the same OS and same configuration, I recommend cloning existing VMs rather than installing the OS on each one, which can quickly become a time-consuming & tedious task. In this example, we clone vm1 to create a new VM clone called vm1-clone:


1. Suspend the VM to be cloned. This is a requirement since it ensures that all data and network I/O on the VM is stopped.
# virsh suspend vm1


2. Run the virt-clone command:
# virt-clone \
--connect qemu:///system \
--original vm1 \
--name vm1-clone \
--file /vm-images/vm1-clone.img
This operation will take 2-3 minutes, depending on the size of the VM.


3. When done, you can resume the original VM:
# virsh resume vm1


4. The cloned VM is placed in shutdown mode. To start it:
# virsh start vm1-clone


The cloned VM is an exact copy of the original VM; all VM properties (VCPUs, memory, disk space) and disk contents will be the same. The virt-clone command takes care of generating a new MAC address for the VM clone, thus avoiding duplicate MAC addresses; note that you may still need to update the network controller configuration file inside the clone (i.e. /etc/sysconfig/network-scripts/ifcfg-em1) if it references the old MAC address.


These are some of the commands I use to administer my VMs. As always, for a list of all available commands, your best bet is the virsh man page


List all VMs on a host, running or otherwise:


# virsh list --all
Show VM information:


# virsh dominfo vm1
Show VCPU/memory usage for all running VMs:


# virt-top
Show VM disk partitions (will take a few moments):


# virt-df vm1
Stop a VM (shutdown the OS):


# virsh shutdown vm1
Start VM:


# virsh start vm1
Mark VM for autostart (VM will start automatically after host reboots):


# virsh autostart vm1
Mark VM for manual start (VM will not start automatically after host reboots):
# virsh autostart --disable vm1


Getting access to a VM’s console

 
If you do not have an X server running on your host, connecting to a VMs serial console might be the only way to login to a VM if networking is not available. Setting up access to a VM’s console is no different than in a physical server, where you simply add the proper kernel boot parameters to the VM. For example, for a RHEL VM, append the following parameters to the kernel boot line in /etc/grub.conf and then reboot the VM:
console=tty0 console=ttyS0,115200


Then, after the VM boots, run in the host:
# virsh console vm1


Attaching storage device to a VM
Say you have files on a USB key that you want to copy to your VM. Rather than copying the files to your VM via the network, you can directly attach the USB key (or any other storage device) to your VM, where it will then appear as an additional storage device. First identify the device name of your storage device after you plug it into the host. In this example, it will be /dev/sdb:


# virsh attach-disk vm1 /dev/sdb vdb --driver qemu --mode shareable
  •     vdb is the device name you want to map to inside the VM.
  •     You can mount your device on more than one VM at a time, but be careful as there is no write access control here.


You can now access the storage device directly from your VM at /dev/vdb. When you are done with it, simply detach it from your VM:
# virsh detach-disk vm1 vdb


GUI Tools


Up until now we’ve been working only with the CLI, but there are a couple of GUI tools that can be very useful when managing and interacting with VMs.


     virt-viewer – Launches a VNC window that gives you access to a VMs main console.
     virt-manager – Launches a window where you can manage all your VMs. Among other things, you can start, pause & shutdown VMs, display VM details (VCPUs, memory, disk space), add devices to VMs and even create new VMs. I don’t cover virt-manager here, but it is rather intuitive to use.
To install these tools, run:
# yum install virt-manager virt-viewer


Changing VM parameters


You can easily change VM parameters after creating them, such as memory, VCPUs and disk space


Memory

 
You can dynamically change the memory in a VM up to what its maximum memory setting is. Note that by default the maximum memory setting in a VM will always equal the amount of memory you specified when you created the VM with the ram parameter in virt-install.
For example if you created a VM with 1 GB of memory, you can dynamically reduce this amount without having to shut down the VM. If you want to increase the memory above 1 GB, you will have to first increase its maximum memory setting which requires shutting down the VM first.


In our first example, let’s reduce the amount of memory in vm1 from 1 GB to 512 MB


1. View the VM’s current memory settings:
# virsh dominfo vm1 | grep memory
Max memory: 1048576 kB
Used memory: 1048576 kB


2. To dynamically set to 512 MB, run:
# virsh setmem vm1 524288
Value must be specified in KB, so 512 MB x 1024 = 524288 KB


3. View memory settings:
# virsh dominfo vm1 | grep memory
Max memory: 1048576 kB
Used memory: 524288 kB


In our second example, let’s increase the amount of memory in vm1 above from 512 MB to 2 GB:


1. In this case we will first need to increase the maximum memory setting. The best way to do this is by editing the VM’s configuration file. Shut down the VM first or you might see unexpected results:
# virsh shutdown vm1


2. Edit the VM’s configuration file:
# virsh edit vm1
Change the value in the memory tag (the value is in KB):
<memory>2097152</memory>


3. Restart the VM from its updated configuration file:
# virsh create /etc/libvirt/qemu/vm1.xml


4. View memory settings:
# virsh dominfo vm1 | grep memory
Max memory: 2097152 kB
Used memory: 524288 kB


5. Now you can dynamically change the memory:
# virsh setmem vm1 2097152
Verify:
# virsh dominfo vm1 | grep memory
Max memory: 2097152 kB
Used memory: 2097152 kB


VCPUs

 
To change the number of virtual CPUs in a VM, change the number in the vcpu tag in the VM’s configuration file. For example, let’s change the number of virtual CPUs to 2:


# virsh shutdown vm1
# virsh edit vm1
<vcpu>2</vcpu>
# virsh create /etc/libvirt/qemu/vm1.xml


Disk capacity

 
You can always add additional ‘disks’ to your VMs by attaching additional file images. Say that you want to add an additional 10 GB of disk space in your VM, here is what you do:


1. Create a 10-GB non-sparse file:
# dd if=/dev/zero of=/vm-images/vm1-add.img bs=1M count=10240


2. Shutdown the VM:
# virsh shutdown vm1


3. Add an extra entry for ‘disk’ in the VM's XML file in /etc/libvirt/qemu. You can copy & paste the entry for your main storage device and just change the target and address tags.


For example:
# virsh edit vm1
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/vm-images/vm1.img'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
Add:
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/vm-images/vm1-add.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
Make sure that the name of the device (i.e. vdb) follows the first one in sequential order.
In the address tag, use a unique slot address (check the address tag of ALL devices, not just storage devices).


4. Restart the VM from the updated XML configuration file:
# virsh create /etc/libvirt/qemu/vm1.xml


Deleting VMs


When you no longer need a VM, it is best practice to remove it to free up its resources. A VM that is shutdown is not taking up a VCPU or memory, but its image file is still taking up disk space.


Deleting a VM is a lot faster than creating one, just a few quick commands. Let’s delete vm1-clone:


1. First, shutdown the VM:
# virsh shutdown vm1-clone
If the VM is not responding or fails to shut down, shut it down forcefully:
# virsh destroy vm1-clone
2. Undefine the VMs configuration:
# virsh undefine vm1-clone
3. Finally, remove the VM’s image file:
# rm /vm-images/vm1-clone.img