Archive for the 'virsh' Category

Nested Virt and Fedora 20 Virt Test Day

Introduction


I decided this year to take part in the Fedora Virtualization Test Day on October 8th. To participate I needed a system with Fedora 20 installed so that I could then create VMs on top of it. Since I like my current setup, and I didn't have a spare hard drive lying around that I wanted to wipe, I decided to give nested virtualization a shot.

Most of the documentation I have seen for nested virtualization has come from Kashyap Chamarthy. Relevant posts are here, here, and here. He has done a great job with these tutorials, and this post is nothing more than my notes on what I found to work for me.

Steps


With nested virtualization the OS/Hypervisor that touches the physical hardware is known as L0. The first level of virtualized guest is known as L1. The second level of virtualized guest (the guest inside a guest) is known as L2. In my setup I ultimately wanted F19 (L0), F20 (L1), and F20 (L2).

First, in order to pass the Intel VMX extensions along to the guest, I created a modprobe config file that instructs the kvm_intel kernel module to allow nested virtualization support:

[root@L0 ~]# echo "options kvm-intel nested=y" > /etc/modprobe.d/nestvirt.conf
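A reboot picks up the new option, but if you'd rather not reboot, reloading the module should also work. This is just a sketch and assumes no VMs are currently running on L0, since the module can't be removed while it is in use:

[root@L0 ~]# modprobe -r kvm_intel
[root@L0 ~]# modprobe kvm_intel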

After a reboot I confirmed that the kvm_intel module is configured for nested virt:

[root@L0 ~]# cat /sys/module/kvm_intel/parameters/nested
Y

Next I converted an existing Fedora 20 installation to use "host-passthrough" (see here) so that the L1 guest would see the same processor (with VMX extensions) as my L0 host. To do this I modified the cpu xml tags as follows in the libvirt xml definition:

<cpu mode='host-passthrough'> </cpu>
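One way to make this edit is with virsh edit, which opens the guest definition in an editor. The domain name F20 below is just a placeholder for whatever your L1 guest happens to be called:

[root@L0 ~]# virsh edit F20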

After powering up the guest I verified that the processor the L1 guest sees is indeed the same as the host's:
[root@L1 ~]# cat /proc/cpuinfo | grep "model name"
model name    : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
model name    : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
model name    : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
model name    : Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
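As an extra sanity check, you can also verify that the vmx flag itself made it through to L1. A count greater than zero means the flag is present; on this four-vCPU guest I would expect it to match on all four flags lines (the output here is illustrative):

[root@L1 ~]# grep -c vmx /proc/cpuinfo
4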

Next I decided to enable nested virt in the L1 guest by adding the same modprobe config file as I did on L0. I did this based on a tip from Kashyap in the #fedora-test-day chat that it tends to give about a 10X performance improvement in the L2 guests.

[root@L1 ~]# echo "options kvm-intel nested=y" > /etc/modprobe.d/nestvirt.conf

After a reboot I could then create and install L2 guests using virt-install and virt-manager. This seemed to work fine, except that the guest would periodically log an NMI for an unknown reason:

[   14.324786] Uhhuh. NMI received for unknown reason 30 on CPU 0.
[   14.325046] Do you have a strange power saving mode enabled?
[   14.325046] Dazed and confused, but trying to continue

I believe the issue I was seeing may be documented in kernel BZ#58941. After asking about it in the chat I was informed that for the best experience with nested virt I should move to a 3.12 kernel. I decided to leave that exercise for another day :).

Have a great day!

Dusty

Find Guest IP address using QEMU Guest Agent


Ever needed to find the IP address of a particular guest? Of course, the answer is "yes". For the most part I have either resorted to going in through the console of the VM to find this information or used some nifty little script like the one described here by Richard Jones. However, if you have qemu Guest Agent set up (I covered this briefly in a previous post), then you can just query this information using the guest-network-get-interfaces qemu-ga command:

[root@host ~]# virsh qemu-agent-command Fedora19 '{"execute":"guest-network-get-interfaces"}' | python -mjson.tool
{
    "return": [
        {
            "hardware-address": "00:00:00:00:00:00",
            "ip-addresses": [
                {
                    "ip-address": "127.0.0.1",
                    "ip-address-type": "ipv4",
                    "prefix": 8
                },
                {
                    "ip-address": "::1",
                    "ip-address-type": "ipv6",
                    "prefix": 128
                }
            ],
            "name": "lo"
        },
        {
            "hardware-address": "52:54:00:ba:4d:ef",
            "ip-addresses": [
                {
                    "ip-address": "192.168.100.136",
                    "ip-address-type": "ipv4",
                    "prefix": 24
                },
                {
                    "ip-address": "fe80::5054:ff:feba:4def",
                    "ip-address-type": "ipv6",
                    "prefix": 64
                }
            ],
            "name": "eth0"
        }
    ]
}

This gives us all of the information related to each network interface of the VM. Notice that I ran the output through a JSON formatter to make it more readable.
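If you just want to pull out a single address rather than eyeball the whole structure, you can filter the JSON instead. Here's a rough sketch using jq (assuming jq is installed on the host; the Fedora19 domain matches the example above) that prints the IPv4 address of each non-loopback interface:

[root@host ~]# virsh qemu-agent-command Fedora19 '{"execute":"guest-network-get-interfaces"}' \
    | jq -r '.return[] | select(.name != "lo") | .["ip-addresses"][] | select(.["ip-address-type"] == "ipv4") | .["ip-address"]'
192.168.100.136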

Dusty

Enabling QEMU Guest Agent anddddd FSTRIM (AGAIN)


In an earlier post I walked through reclaiming disk space from guests using FSTRIM, and in a follow-up I showed how to do the same thing with thin Logical Volumes as the sparse backing storage for the disk images. In both of the previous posts I logged in to the guest first and then executed the fstrim command in order to release the free blocks back to the underlying block devices.

Thankfully, due to some recent work, this operation has now been exposed externally via qemu Guest Agent and can be executed remotely via libvirt. To enable qemu Guest Agent, I first added a virtio-serial device that the host and guest use for communication. I did this by adding the following to the guest's xml:

<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/Fedora19.agent'/>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>

After a power cycle of the guest I installed the qemu-guest-agent rpm inside the guest and then started the qemu-guest-agent service using systemctl, as shown below:

[root@guest ~]# yum install qemu-guest-agent
...
Installed:
  qemu-guest-agent.x86_64 2:1.4.2-2.fc19

Complete!
[root@guest ~]# systemctl start qemu-guest-agent.service
[root@guest ~]# systemctl status qemu-guest-agent.service
qemu-guest-agent.service - QEMU Guest Agent
   Loaded: loaded (/usr/lib/systemd/system/qemu-guest-agent.service; static)
   Active: active (running) since Sun 2013-06-02 16:38:18 EDT; 6s ago
 Main PID: 913 (qemu-ga)
   CGroup: name=systemd:/system/qemu-guest-agent.service
           └─913 /usr/bin/qemu-ga
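Back on the host, a quick way to verify that the agent is up and communicating over the channel is the guest-ping command, which should return an empty result on success:

[root@host ~]# virsh qemu-agent-command Fedora19 '{"execute":"guest-ping"}'
{"return":{}}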

Finally I could test out the fstrim functionality (again)! From the host I copied a large file into the guest:

[root@host ~]# du -sh /guests/Fedora19.img
1.3G    /guests/Fedora19.img
[root@host ~]#
[root@host ~]# scp /tmp/code.tar.gz root@192.168.100.136:/root/
root@192.168.100.136's password:
code.tar.gz                            100% 1134MB  81.0MB/s   00:14
[root@host ~]# du -sh /guests/Fedora19.img
2.4G    /guests/Fedora19.img

Then, inside the guest I deleted the file:

[root@guest ~]# rm /root/code.tar.gz
rm: remove regular file ‘/root/code.tar.gz’? y

And finally I can remotely execute the guest-fstrim command via virsh:

[root@host ~]#
[root@host ~]# virsh qemu-agent-command Fedora19 '{"execute":"guest-fstrim"}'
{"return":{}}
[root@host ~]# du -sh /guests/Fedora19.img
1.3G    /guests/Fedora19.img

This is powerful stuff because I can now remotely (via libvirt) direct all of my guests, whether there are 5 or 5,000, to give free space back to the underlying sparse storage devices.
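As a minimal sketch of what that could look like from the host, assuming every running guest has the agent channel configured and the qemu-guest-agent service running:

[root@host ~]# for dom in $(virsh list --name); do
>     virsh qemu-agent-command "$dom" '{"execute":"guest-fstrim"}'
> done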

The full guest libvirt XML from this post can be found here.

Hope everyone is having a great summer!

Dusty

Guest Discard/FSTRIM On Thin LVs


In my last post I showed how to recover space from disk images backed by sparse files. As a small addition I'd like to also show how to do the same with a guest disk image that is backed by a thinly provisioned Logical Volume.

First things first, I modified the /etc/lvm/lvm.conf file to have the issue_discards = 1 option set. I'm not 100% sure this is needed, but I did it at the time, so I wanted to include it here.
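For reference, a quick way to confirm the option is set is just to grep for it (the output shown is what I'd expect on this setup):

[root@host ~]# grep issue_discards /etc/lvm/lvm.conf
    issue_discards = 1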

Next I created a new VG (vgthin) out of a spare partition and then created a thin LV pool (lvthinpool) inside the VG. Finally I created a thin LV within the pool (lvthin). This is all shown below:

[root@host ~]# vgcreate vgthin /dev/sda3
  Volume group "vgthin" successfully created
[root@host ~]# lvcreate --thinpool lvthinpool --size 20G vgthin
  Logical volume "lvthinpool" created
[root@host ~]#
[root@host ~]# lvcreate --name lvthin --virtualsize 10G --thin vgthin/lvthinpool
  Logical volume "lvthin" created

To observe the usage of the thin LV and the thin pool you can use the lvs command and take note of the Data% column:

[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool          0.00
  lvthinpool vgthin twi-a-tz- 20.00g                     0.00

Next I needed to add the disk to the guest. I did it using the following xml and virsh command. Note from my previous post that the scsi controller inside my guest is a virtio-scsi controller, and that I am adding the discard='unmap' option:

[root@host ~]# cat <<EOF > /tmp/thinLV.xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source dev='/dev/vgthin/lvthin'/>
  <target dev='sdb' bus='scsi'/>
</disk>
EOF
[root@host ~]#
[root@host ~]# virsh attach-device Fedora19 /tmp/thinLV.xml --config
...

After a quick power cycle of the guest I then created a filesystem on the new disk (sdb) and mounted it within the guest.

[root@guest ~]# mkfs.ext4 /dev/sdb
...
[root@guest ~]#
[root@guest ~]# mount /dev/sdb /mnt/
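As an aside, ext4 also supports a discard mount option that issues TRIM requests continuously as files are deleted, which would make the manual fstrim step later in this post unnecessary. Batched fstrim is often preferred for performance, so treat this as an alternative rather than a recommendation:

[root@guest ~]# mount -o discard /dev/sdb /mnt/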

Same as last time, I then copied a large file into the guest. Afterwards, you can see from the lvs output that the thin LV is now using about 11% of its allotted space within the pool.

[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool          1.34
  lvthinpool vgthin twi-a-tz- 20.00g
[root@host ~]#
[root@host ~]# scp /tmp/code.tar.gz root@192.168.100.136:/mnt/
root@192.168.100.136's password:
code.tar.gz                            100% 1134MB  29.8MB/s   00:38
[root@host ~]#
[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool         11.02
  lvthinpool vgthin twi-a-tz- 20.00g

It was then time for a little TRIM action:

[root@guest ~]# df -kh /mnt/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        9.8G  1.2G  8.1G  13% /mnt
[root@guest ~]#
[root@guest ~]# rm /mnt/code.tar.gz
rm: remove regular file ‘/mnt/code.tar.gz’? y
[root@guest ~]#
[root@guest ~]# df -kh /mnt/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        9.8G   23M  9.2G   1% /mnt
[root@guest ~]# fstrim -v /mnt/
/mnt/: 1.2 GiB (1329049600 bytes) trimmed

And from within the host we can see that the utilization of the thin LV has appropriately dwindled back down to ~2.85%:

[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool          2.85
  lvthinpool vgthin twi-a-tz- 20.00g                     1.42

Again I have posted my full guest libvirt XML here.

Dusty

PS See here for a more thorough example of creating thin LVs.

Share a Folder Between KVM Host and Guest


I often find myself in situations where I need to share information or files between a KVM host and KVM guest. With libvirt version 0.8.5 and newer there is support for mounting a shared folder between a host and guest. I decided to try this out on my Fedora 17 host, with a Fedora 17 guest.

Using the libvirt <filesystem> xml tag I created the following xml that defines a filesystem device.

<filesystem type='mount' accessmode='mapped'>
  <source dir='/tmp/shared'/>
  <target dir='tag'/>
</filesystem>

Note that the target dir is not necessarily a mount point, but rather a tag string that is exported to the guest and used when mounting inside the guest.

In order to get this xml into the guest definition I used virsh edit F17, where F17 is the domain name of my guest. This opens the guest xml in the vi text editor. I then inserted the xml at the end of the <devices> section, closed vi, and started the guest.

Once the guest had booted I used the following command to mount the shared folder in the guest.

[root@F17 ~]# mount -t 9p -o trans=virtio,version=9p2000.L tag /mnt/shared/

And voila! I can now access the /tmp/shared directory of the host inside of the guest.
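To make the mount survive reboots, an fstab entry along these lines should also work (same tag and mount point as above; I haven't tested every option combination, so consider this a sketch):

tag  /mnt/shared  9p  trans=virtio,version=9p2000.L  0  0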

Note: I had some SELinux denials as a result of doing this. If I were using this as a long-term solution I would clean them up, but for now I just disabled SELinux temporarily by using sudo setenforce 0 in the host.

Resources:
http://www.linux-kvm.org/page/9p_virtio
http://wiki.qemu.org/Documentation/9psetup
http://libvirt.org/formatdomain.html#elementsFilesystems

Send Magic SysRq to a KVM guest using virsh


When a Linux computer is "hung" or "frozen" you can use a Magic SysRq key sequence to send various low-level requests to the kernel in order to try to recover from, or investigate, the problem. This is extremely useful when troubleshooting server lockups, but until recently libvirt did not expose this functionality for KVM guests.

In v0.9.3 (and newer) of libvirt you can send a Magic SysRq sequence to a guest by utilizing the send-key subcommand provided by virsh. In other words, sending the 'h' Magic SysRq command is as simple as:

dustymabe@media: ~>virsh send-key guest1 KEY_LEFTALT KEY_SYSRQ KEY_H

After executing the command a help message will be printed to the console of the guest known as 'guest1'. Any other character can be sent as well by substituting KEY_H with KEY_X, where X is the character.
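For example, if a guest is truly wedged, you can send a sequence along the lines of the classic "sync, remount read-only, reboot" SysRq dance one key at a time. Use this with care; KEY_B reboots the guest immediately without any further syncing:

dustymabe@media: ~>virsh send-key guest1 KEY_LEFTALT KEY_SYSRQ KEY_S
dustymabe@media: ~>virsh send-key guest1 KEY_LEFTALT KEY_SYSRQ KEY_U
dustymabe@media: ~>virsh send-key guest1 KEY_LEFTALT KEY_SYSRQ KEY_B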

Happy Investigating!

Dusty