Archive for the 'kvm' Category

Nested Virt and Fedora 20 Virt Test Day

Introduction


I decided this year to take part in the Fedora Virtualization Test Day on October 8th. In order to take part I needed a system with Fedora 20 installed so that I could then create VMs on top of it. Since I like my current setup and didn't have a spare hard drive lying around that I wanted to wipe, I decided to give nested virtualization a shot.

Most of the documentation I have seen for nested virtualization has come from Kashyap Chamarthy. Relevant posts are here, here, and here. He has done a great job with these tutorials and this post is nothing more than my notes for what I found to work for me.

Steps


With nested virtualization the OS/Hypervisor that touches the physical hardware is known as L0. The first level of virtualized guest is known as L1. The second level of virtualized guest (the guest inside a guest) is known as L2. In my setup I ultimately wanted F19 (L0), F20 (L1), and F20 (L2).

First, in order to pass along the Intel VMX extensions to the guest I created a modprobe config file that instructs the kvm_intel kernel module to allow nested virtualization support:

[root@L0 ~]# echo "options kvm-intel nested=y" > /etc/modprobe.d/nestvirt.conf
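
If you would rather not reboot at all, unloading and reloading the module should also pick up the new option, assuming no guests are currently using it:

[root@L0 ~]# modprobe -r kvm_intel
[root@L0 ~]# modprobe kvm_intel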

After a reboot I can now confirm the kvm_intel module is configured for nested virt:

[root@L0 ~]# cat /sys/module/kvm_intel/parameters/nested
Y

Next I converted an existing Fedora 20 installation to use "host-passthrough" (see here) so that the L1 guest would see the same processor (with VMX extensions) as my L0 host. To do this I modified the cpu XML tags as follows in the libvirt XML definition (e.g. via virsh edit):

<cpu mode='host-passthrough'> </cpu>

After powering up the guest I now see that the processor that the L1 guest sees is indeed the same as the host:
[root@L1 ~]# cat /proc/cpuinfo | grep "model name"
model name : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
model name : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
model name : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
model name : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
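
To be extra sure the virtualization extensions actually made it through, you can also check for the vmx flag inside the L1 guest; you should see one match per virtual CPU (four here):

[root@L1 ~]# grep -c vmx /proc/cpuinfo
4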

Next I decided to enable nested virt in the L1 guest by adding the same modprobe config file as I did on L0. I did this based on a tip from Kashyap in the #fedora-test-day chat that it tends to give about a 10X performance improvement in the L2 guests.

[root@L1 ~]# echo "options kvm-intel nested=y" > /etc/modprobe.d/nestvirt.conf

After a reboot I could then create and install L2 guests using virt-install and virt-manager. This seemed to work fine, except that I would periodically see an unknown NMI in the guest:

[   14.324786] Uhhuh. NMI received for unknown reason 30 on CPU 0.
[   14.325046] Do you have a strange power saving mode enabled?
[   14.325046] Dazed and confused, but trying to continue

I believe the issue I was seeing may be documented in kernel BZ#58941. After asking about it in the chat I was informed that for the best experience with nested virt I should go to a 3.12 kernel. I decided to leave that exercise for another day :).

Have a great day!

Dusty

Find Guest IP address using QEMU Guest Agent


Ever needed to find the IP address of a particular guest? Of course, the answer is "yes". For the most part I have either resorted to going in through the console of the VM to find this information or used some nifty little script like the one described here by Richard Jones. However, if you have the QEMU guest agent set up (I covered this briefly in a previous post), then you can just query this information using the guest-network-get-interfaces qemu-ga command:

[root@host ~]# virsh qemu-agent-command Fedora19 '{"execute":"guest-network-get-interfaces"}' | python -mjson.tool
{
    "return": [
        {
            "hardware-address": "00:00:00:00:00:00",
            "ip-addresses": [
                {
                    "ip-address": "127.0.0.1",
                    "ip-address-type": "ipv4",
                    "prefix": 8
                },
                {
                    "ip-address": "::1",
                    "ip-address-type": "ipv6",
                    "prefix": 128
                }
            ],
            "name": "lo"
        },
        {
            "hardware-address": "52:54:00:ba:4d:ef",
            "ip-addresses": [
                {
                    "ip-address": "192.168.100.136",
                    "ip-address-type": "ipv4",
                    "prefix": 24
                },
                {
                    "ip-address": "fe80::5054:ff:feba:4def",
                    "ip-address-type": "ipv6",
                    "prefix": 64
                }
            ],
            "name": "eth0"
        }
    ]
}

This gives us all of the information related to each network interface of the VM. Notice that I ran the output through a JSON formatter to make it more readable.
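
If you only want the addresses themselves rather than the full JSON, you can filter the output with a bit of Python instead of just pretty-printing it. This is only a quick sketch; it skips the loopback interface and prints the IPv4 address of each remaining interface:

[root@host ~]# virsh qemu-agent-command Fedora19 '{"execute":"guest-network-get-interfaces"}' | python -c '
import json, sys
# Walk the "return" list from the guest agent and print name + IPv4 address
for iface in json.load(sys.stdin)["return"]:
    if iface["name"] == "lo":
        continue
    for addr in iface.get("ip-addresses", []):
        if addr["ip-address-type"] == "ipv4":
            print("%s %s" % (iface["name"], addr["ip-address"]))
'
eth0 192.168.100.136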

Dusty

Guest Discard/FSTRIM On Thin LVs


In my last post I showed how to recover space from disk images backed by sparse files. As a small addition I'd like to also show how to do the same with a guest disk image that is backed by a thinly provisioned Logical Volume.

First things first, I modified the /etc/lvm/lvm.conf file to have the issue_discards = 1 option set. I'm not 100% sure this is needed, but I did it at the time, so I wanted to include it here.
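
To double-check that the setting took, a quick grep (the exact indentation in the file may differ) should show something like:

[root@host ~]# grep "^\s*issue_discards" /etc/lvm/lvm.conf
    issue_discards = 1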

Next I created a new VG (vgthin) out of a spare partition and then created a thin LV pool (lvthinpool) inside the VG. Finally I created a thin LV within the pool (lvthin). This is all shown below:

[root@host ~]# vgcreate vgthin /dev/sda3
  Volume group "vgthin" successfully created
[root@host ~]# lvcreate --thinpool lvthinpool --size 20G vgthin
  Logical volume "lvthinpool" created
[root@host ~]#
[root@host ~]# lvcreate --name lvthin --virtualsize 10G --thin vgthin/lvthinpool
  Logical volume "lvthin" created

To observe the usage of the thin LV and the thin pool you can use the lvs command and take note of the Data% column:

[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool          0.00
  lvthinpool vgthin twi-a-tz- 20.00g                     0.00

Next I needed to add the disk to the guest. I did it using the following XML and virsh command. Note from my previous post that the SCSI controller inside my guest is a virtio-scsi controller and that I am adding the discard='unmap' option:

[root@host ~]# cat <<EOF > /tmp/thinLV.xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source dev='/dev/vgthin/lvthin'/>
  <target dev='sdb' bus='scsi'/>
</disk>
EOF
[root@host ~]#
[root@host ~]# virsh attach-device Fedora19 /tmp/thinLV.xml --config
...

After a quick power cycle of the guest I then created a filesystem on the new disk (sdb) and mounted it within the guest.

[root@guest ~]# mkfs.ext4 /dev/sdb
...
[root@guest ~]#
[root@guest ~]# mount /dev/sdb /mnt/

Same as last time, I then copied a large file into the guest. After I did so you can see from the lvs output that the thin LV is now using 11% of its allotted space within the pool.

[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool          1.34
  lvthinpool vgthin twi-a-tz- 20.00g
[root@host ~]#
[root@host ~]# scp /tmp/code.tar.gz root@192.168.100.136:/mnt/
root@192.168.100.136's password:
code.tar.gz                                100% 1134MB  29.8MB/s   00:38
[root@host ~]#
[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool         11.02
  lvthinpool vgthin twi-a-tz- 20.00g

It was then time for a little TRIM action:

[root@guest ~]# df -kh /mnt/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        9.8G  1.2G  8.1G  13% /mnt
[root@guest ~]#
[root@guest ~]# rm /mnt/code.tar.gz
rm: remove regular file ‘/mnt/code.tar.gz’? y
[root@guest ~]#
[root@guest ~]# df -kh /mnt/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        9.8G   23M  9.2G   1% /mnt
[root@guest ~]# fstrim -v /mnt/
/mnt/: 1.2 GiB (1329049600 bytes) trimmed

And from within the host we can see that the utilization of the thin LV has appropriately dwindled back down to ~2.85%:

[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool          2.85
  lvthinpool vgthin twi-a-tz- 20.00g                     1.42

Again I have posted my full guest libvirt XML here.

Dusty

PS See here for a more thorough example of creating thin LVs.

Recover Space From VM Disk Images By Using Discard/FSTRIM


Sparse guest disk image files are a dream. I can have many guests on a small amount of storage because they are only using what they need. Of course, if every guest were to suddenly use all of the space in its filesystems then the host filesystem containing the guest disk images would fill up as well. However, since filesystems grow over time rather than overnight, with proper monitoring you can foresee this event and add more storage as needed.

Sparse guest disk images aren't without their drawbacks, though. Over time, files are created and deleted within the filesystems on the disk images, and the images themselves are no longer as compact as they were in the past. There is good news though; we can recover the space from all of those deleted files!

A Little History


With the rise of SSDs has come a new low-level command known as TRIM that allows the filesystem to notify the underlying block device of blocks that are no longer in use by the filesystem. This allows for improved performance in SSDs because delete operations can be handled in advance of write operations, thus speeding up writes.

Fortunately for us this TRIM notification also has plenty of application with thinly provisioned block devices. If the filesystem can notify a thin LV or a sparse disk image of blocks that are no longer being used then the blocks can be released back to the pool of available space.

"So I should be able to recover space from my guest disk images, right?" The answer is "yes"! It is relatively new, but virtio-scsi devices (QEMU) support TRIM operations. This is available in QEMU 1.5.0 by adding discard=unmap to the -drive option. You can also bypass the QEMU command line by using Libvirt 1.0.6 and adding the discard=unmap option to disk XML.

Creating/Configuring Guest For Discard


To take advantage of discard/TRIM operations I needed a guest that utilizes virtio-scsi. I created a guest with a virtio-scsi backed device by using the following virt-install command.

[root@host ~]# virt-install --name Fedora19 \
    --disk path=/guests/Fedora19.img,size=30,bus=scsi \
    --controller scsi,model=virtio-scsi \
    --network=bridge:virbr0,model=virtio \
    --accelerate --ram 2048 -c /images/F19.iso

The XML that was generated clearly shows that scsi controller 0 is of model virtio-scsi and thus all scsi devices on that controller will be virtio-scsi devices.

<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>

The next step was to actually notify QEMU that we want to relay discard operations from the guest to the host. This is supported in QEMU 1.5.0 (since commit a9384aff5315e7568b6ebc171f4a482e01f06526 ). Fortunately libvirt also added support for this in version 1.0.6 (since commit a7c4202cdd12208dcd107fde3b79b2420d863370 ).
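
If you happen to be launching QEMU by hand rather than through libvirt, the relevant piece of the command line would look roughly like the following sketch (the drive/controller ids and image path are just placeholders):

qemu-system-x86_64 ... \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=/guests/Fedora19.img,format=raw,if=none,id=drive0,discard=unmap \
    -device scsi-hd,bus=scsi0.0,drive=drive0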

For libvirt, to make all discard/TRIM operations be passed from the guest back to the host I had to add the discard='unmap' attribute to the disk XML description. After adding the option the XML looked like the following block:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source file='/guests/Fedora19.img'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

Trimming The Fat


After a power cycle of the guest I was able to test it out. First I checked the disk image size and then copied a 1.2G file into the guest. Afterwards I confirmed the sparse disk image had increased in size on the host.

[root@host ~]# du -sh /guests/Fedora19.img
1.1G    /guests/Fedora19.img
[root@host ~]#
[root@host ~]# du -sh /tmp/code.tar.gz
1.2G    /tmp/code.tar.gz
[root@host ~]#
[root@host ~]# scp /tmp/code.tar.gz root@192.168.100.136:/root/
root@192.168.100.136's password:
code.tar.gz                                100% 1134MB  81.0MB/s   00:14
[root@host ~]#
[root@host ~]# du -sh /guests/Fedora19.img
2.1G    /guests/Fedora19.img

Within the guest I then deleted the file and executed the fstrim command in order to notify the block devices that the blocks for that file (and any other file that had been deleted) are no longer being used by the filesystem.

[root@guest ~]# rm /root/code.tar.gz
rm: remove regular file ‘/root/code.tar.gz’? y
[root@guest ~]#
[root@guest ~]# fstrim -v /
/: 1.3 GiB (1372569600 bytes) trimmed

As can be seen from the output of the fstrim command, approximately 1.3 GiB was trimmed. A final check of the guest disk image confirms that the space was recovered in the host filesystem.

[root@host ~]# du -sh /guests/Fedora19.img
1.1G    /guests/Fedora19.img
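
As an aside, if you would rather not run fstrim by hand, ext4 also supports an online discard mount option that issues the TRIM at delete time (with some performance cost). A quick example of turning it on for an already mounted filesystem:

[root@guest ~]# mount -o remount,discard /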

If anyone is interested I have posted my full guest libvirt XML here.

Until Next Time,
Dusty

NOTE: An easy way to tell if trim operations are supported in the guest is to cat out the /sys/block/sda/queue/discard_* files. On my system that supports trim operations it looks like:

[root@guest ~]# cat /sys/block/sda/queue/discard_*
4096
4294966784
0
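
If you want to know which value is which, printing the file names along with their contents makes it clearer; on this system the three files are discard_granularity, discard_max_bytes, and discard_zeroes_data:

[root@guest ~]# grep . /sys/block/sda/queue/discard_*
/sys/block/sda/queue/discard_granularity:4096
/sys/block/sda/queue/discard_max_bytes:4294966784
/sys/block/sda/queue/discard_zeroes_data:0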

TRIM/SSD Reference Material:
https://patrick-nagel.net/blog/archives/337
http://www.linux-kvm.org/wiki/images/7/77/2012-forum-thin-provisioning.pdf
http://www.outflux.net/blog/archives/2012/02/15/discard-hole-punching-and-trim/