This kbase article discusses some limitations specific to the virtio-blk driver. Please note, these are not general limitations of KVM; they are relevant only to cases where virtio-blk is used.
Disks under KVM are para-virtualized block devices when used with the virtio-blk driver. All para-virtualized devices (e.g. disk, network, balloon, etc.) are PCI devices. Presently, guests are limited to a maximum of 32 PCI devices. Of the 32, 4 are required by the guest for minimal baseline functionality and are therefore reserved.
When adding a disk to a KVM guest, the default method assigns a separate virtual PCI controller for every disk to allow hot-plug support (i.e. the ability to add/remove disks from a running VM without downtime). Therefore, if no other PCI devices have been assigned, the max number of hot-pluggable disks is 28.
If a guest requires more disks than the available PCI slots allow, then there are three possible work-arounds.
1. Use PCI pass-through to assign a physical disk controller (i.e. FC HBA, SCSI controller, etc.) to the VM and subsequently use as many devices as that controller supports.
2. Forego the ability to hot-plug and assign the virtual disks using multi-function PCI addressing.
3. Use the virtio-scsi driver, which creates a virtual SCSI HBA that occupies a single PCI address and supports thousands of hot-plug disks.
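For reference, option 3 would look roughly like the fragment below: a virtio-scsi controller occupies a single PCI slot, and disks attach to it over the SCSI bus. This is a sketch; the slot number, device path, and target name are placeholders, not values from a real environment:

```xml
<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</controller>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/rootvg/lvtest10'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
```

Additional disks attach to the same controller by incrementing the unit number, so only the controller itself consumes a PCI slot.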
This article uses option 2 to work around the problem.
Multi-function PCI addressing allows up to 8 virtual disks (functions 0 through 7) per PCI slot. Therefore you can have n * 8 possible disks, where n is the number of available PCI slots.
On a system with 28 free PCI slots, you can assign up to 224 (28 * 8) virtual disks to that VM. However, as previously stated, you will not be able to add and/or remove the multi-function disks from the guest without a reboot of the guest. Any disks assigned without multifunction addressing can, however, continue to use hot-plug.
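For comparison, a hot-pluggable disk is simply defined without a fixed multifunction address; libvirt assigns it a dedicated PCI slot automatically. A minimal sketch (the device path and target name are placeholders):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/rootvg/lvtest11'/>
  <target dev='vde' bus='virtio'/>
</disk>
```

Such a disk can be attached to and detached from a running guest with "virsh attach-device" and "virsh detach-device", at the cost of one PCI slot per disk.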
The XML config below demonstrates how to attach two disks to a single PCI slot using multifunction addressing:
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/rootvg/lvtest01'/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/rootvg/lvtest02'/>
<target dev='vdc' bus='virtio'/>
<alias name='virtio-disk2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
</disk>
In the above, we attached 2 disks (vdb, vdc) to the same PCI slot: vdb occupies slot 7, function 0 (with multifunction='on' enabling the additional functions), and vdc occupies slot 7, function 1. So only one PCI slot is used, slot 7, with 2 disks attached to it. We could add 6 more disks (functions 2 through 7) before having to move on to a new slot, such as slot 8 (assuming 8 is the next available slot).
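A third disk on the same slot would take function 2, and so on up to function 7. A sketch, again with a placeholder device path:

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/rootvg/lvtest03'/>
  <target dev='vdd' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
</disk>
```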
You can check a guest's config from the virtualization host by using "virsh dumpxml <guest>". This will show you which PCI slots are in use and, therefore, which are available.
To add one or more multifunction devices, use "virsh edit <guest>" and then add the appropriate XML entries, modeled after the example above. Remember, the guest must be rebooted before the changes to the XML config take effect.