Archive for the 'lvm' Category

Fedup 19 to 20 with a Thin LVM Configuration

Introduction


I have been running my home desktop on thin logical volumes for a while now. I have enjoyed the flexibility of this setup and I like taking a snapshot before making any big changes. Recently I decided to update from Fedora 19 to Fedora 20 and I hit some trouble along the way, because the Fedora 20 initramfs (images/pxeboot/upgrade.img) that fedup uses for the upgrade does not have support for thin logical volumes. After running fedup and rebooting you end up with a message on the screen that looks something like this:
[  OK  ] Started Show Plymouth Boot Screen.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Basic System.
[  191.023332] dracut-initqueue[363]: Warning: Could not boot.
[  191.028263] dracut-initqueue[363]: Warning: /dev/mapper/vg_root-thin_root does not exist
[  191.029689] dracut-initqueue[363]: Warning: /dev/vg_root/thin_root does not exist
Starting Dracut Emergency Shell...
Warning: /dev/mapper/vg_root-thin_root does not exist
Warning: /dev/vg_root/thin_root does not exist
Generating "/run/initramfs/rdsosreport.txt"
Entering emergency mode. Exit the shell to continue.

Working Around the Issue


First off, install and run fedup:
[root@localhost ~]# yum update -y fedup fedora-release &>/dev/null
[root@localhost ~]# fedup --network 20 &>/dev/null

After running fedup you would usually be able to reboot and go directly into the upgrade process. In our case we need to add a few helper utilities (thin_dump, thin_check, thin_restore) to the initramfs so that thin LVs will work. This can be done by appending more files, packed into a cpio archive, to the end of the initramfs that fedup downloaded. I learned about this technique by peeking at the initramfs_append_files() function within fedup's boot.py. Note that I also had to append a few libraries required by the utilities to the initramfs.

[root@localhost ~]# cpio -co >> /boot/initramfs-fedup.img << EOF
/lib64/libexpat.so.1
/lib64/libexpat.so.1.6.0
/lib64/libstdc++.so.6
/lib64/libstdc++.so.6.0.18
/usr/sbin/thin_dump
/usr/sbin/thin_check
/usr/sbin/thin_restore
EOF
4334 blocks
[root@localhost ~]#

And that's it. You are now able to reboot into the upgrade environment and watch the upgrade. If you'd like to watch a (rather lengthy) screencast of the entire process then you can download the screencast.log and the screencast.timing files and follow the instructions here.
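If you would rather replay the session locally instead of reading the raw files, scriptreplay from util-linux can play it back in a terminal. A minimal sketch, assuming the two files were recorded with script's timing option and are sitting in your current directory:

# replay the recorded session at normal speed
scriptreplay screencast.timing screencast.log

# or speed it up by passing a divisor (2 = twice as fast)
scriptreplay screencast.timing screencast.log 2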

Dusty

Excellent LVM Tutorial for Beginners or Experts


I ran across a great PDF from this year's Red Hat Summit in Boston. Hosted by Christoph Doerbech and Jonathan Brassow, the lab covers the following topics:
  • What is LVM? What are filesystems? etc..
  • Creating PVs, VGs, LVs.
  • LVM Striping and Mirroring.
  • LVM Raid.
  • LVM Snapshots (and reverting).
  • LVM Sparse Volumes (a snapshot of /dev/zero).
  • LVM Thin LVs and new snapshots.
Check out the PDF here. If that link ceases to work at some point, I have it hosted here as well.
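If you want a quick taste of the first couple of topics before working through the lab, the basic building blocks look roughly like the sketch below (the /dev/sdX disk and the vg_demo/lv_demo names are just placeholders for illustration):

# label a spare disk as an LVM physical volume
pvcreate /dev/sdX

# build a volume group on top of the PV
vgcreate vg_demo /dev/sdX

# carve a 5G logical volume out of the VG, then put a filesystem on it
lvcreate --name lv_demo --size 5G vg_demo
mkfs.ext4 /dev/vg_demo/lv_demo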

Hope everyone can use this as a great learning tool!

Dusty

Convert an Existing System to Use Thin LVs

Introduction


Want to take advantage of the efficiency and improved snapshotting of thin LVs on an existing system? It will take a little work but it is possible. The following steps will show how to convert a CentOS 6.4 basic installation to use thin logical volumes for the root device (containing the root filesystem).

Preparation


To kick things off there are a few preparation steps that seem a bit unrelated but will prove useful. First I enabled LVM to issue discards to underlying block devices (if you are interested in why this is needed you can check out my post here).

[root@Cent64 ~]# cat /etc/lvm/lvm.conf | grep issue_discards
    issue_discards = 0
[root@Cent64 ~]# sed -i -e 's/issue_discards = 0/issue_discards = 1/' /etc/lvm/lvm.conf
[root@Cent64 ~]# cat /etc/lvm/lvm.conf | grep issue_discards
    issue_discards = 1

Next, since we are converting the whole system to use thin LVs, we need to enable our initramfs to mount and switch root to a thin LV. By default dracut does not include the utilities needed to do this (see BZ#921235). This means we need to tell dracut to add thin_dump, thin_restore, and thin_check (provided by the device-mapper-persistent-data rpm) to the initramfs. We also want to make sure they get added for any future initramfs builds, so we will add it to a file within /usr/share/dracut/modules.d/.

[root@Cent64 ~]# mkdir /usr/share/dracut/modules.d/99thinlvm
[root@Cent64 ~]# cat << EOF > /usr/share/dracut/modules.d/99thinlvm/install
> #!/bin/bash
> dracut_install -o thin_dump thin_restore thin_check
> EOF
[root@Cent64 ~]# chmod +x /usr/share/dracut/modules.d/99thinlvm/install
[root@Cent64 ~]# dracut --force
[root@Cent64 ~]# lsinitrd /boot/initramfs-2.6.32-358.el6.x86_64.img | grep thin_
-rwxr-xr-x   1 root     root       351816 Sep  3 23:11 usr/sbin/thin_dump
-rwxr-xr-x   1 root     root       238072 Sep  3 23:11 usr/sbin/thin_check
-rwxr-xr-x   1 root     root       355968 Sep  3 23:11 usr/sbin/thin_restore

OK, so now that we have an adequate initramfs, the final step before the conversion is to make sure there is enough free space in the VG to move our data around (in the worst case scenario we will need twice the space we are currently using). On my system I just added a second disk (sdb) and extended the VG onto it:

[root@Cent64 ~]# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                           11:0    1 1024M  0 rom
sdb                            8:16   0   31G  0 disk
sda                            8:0    0   30G  0 disk
├─sda1                         8:1    0  500M  0 part /boot
└─sda2                         8:2    0 29.5G  0 part
  ├─vg_cent64-lv_root (dm-0) 253:0    0 25.6G  0 lvm  /
  └─vg_cent64-lv_swap (dm-1) 253:1    0    4G  0 lvm  [SWAP]
[root@Cent64 ~]#
[root@Cent64 ~]# vgextend vg_cent64 /dev/sdb
  Volume group "vg_cent64" successfully extended
[root@Cent64 ~]#
[root@Cent64 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  vg_cent64   2   2   0 wz--n- 60.50g 31.00g

Conversion


Now comes the main event! We need to create a thin LV pool and then move the root LV over to the pool. Since thin pools currently cannot be reduced in size (BZ#812731), I decided to make my thin pool exactly the size of the LV I wanted to put in the pool. Below I show creating the thin pool as well as thin_root, which will be our new "thin" root logical volume.

[root@Cent64 ~]# lvs --units=b /dev/vg_cent64/lv_root
  LV      VG        Attr      LSize        Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root vg_cent64 -wi-ao--- 27455913984B
[root@Cent64 ~]#
[root@Cent64 ~]# lvcreate -T vg_cent64/thinp --size=27455913984B
  Logical volume "thinp" created
[root@Cent64 ~]#
[root@Cent64 ~]# lvcreate -T vg_cent64/thinp -n thin_root -V 27455913984B
  Logical volume "thin_root" created
[root@Cent64 ~]#
[root@Cent64 ~]# lvs
  LV        VG        Attr      LSize  Pool  Origin Data%  Move Log Cpy%Sync Convert
  lv_root   vg_cent64 -wi-ao--- 25.57g
  lv_swap   vg_cent64 -wi-ao---  3.94g
  thin_root vg_cent64 Vwi-a-tz- 25.57g thinp           0.00
  thinp     vg_cent64 twi-a-tz- 25.57g                 0.00

Now we need to get all of the data from lv_root into thin_root. My original thought was to just dd all of the content from one to the other, but there is one problem: we are still mounted on lv_root. For safety I would probably recommend booting into rescue mode from a CD and then doing the dd without either filesystem mounted. Today, however, I just decided to make an LVM snapshot of the root LV, which gives us a consistent view of the block device for the duration of the copy.

[root@Cent64 ~]# lvcreate --snapshot -n snap_root --size=2g vg_cent64/lv_root
  Logical volume "snap_root" created
[root@Cent64 ~]#
[root@Cent64 ~]# dd if=/dev/vg_cent64/snap_root of=/dev/vg_cent64/thin_root
53624832+0 records in
53624832+0 records out
27455913984 bytes (27 GB) copied, 597.854 s, 45.9 MB/s
[root@Cent64 ~]#
[root@Cent64 ~]# lvs
  LV        VG        Attr      LSize  Pool  Origin  Data%  Move Log Cpy%Sync Convert
  lv_root   vg_cent64 owi-aos-- 25.57g
  lv_swap   vg_cent64 -wi-ao---  3.94g
  snap_root vg_cent64 swi-a-s--  2.00g       lv_root   0.07
  thin_root vg_cent64 Vwi-a-tz- 25.57g thinp         100.00
  thinp     vg_cent64 twi-a-tz- 25.57g               100.00
[root@Cent64 ~]#
[root@Cent64 ~]# lvremove /dev/vg_cent64/snap_root
Do you really want to remove active logical volume snap_root? [y/n]: y
  Logical volume "snap_root" successfully removed

So there we have it. All of the data has been copied to the thin_root LV. You can see from the output of lvs that the thin LV and the thin pool are both 100% full. 100% full? really? I thought these were "thin" LVs. :)

Let's recover that space! I'll do this by mounting thin_root and then running fstrim to release the unused blocks back to the pool. First I check the fs and clean up any dirt by running fsck.

[root@Cent64 ~]# fsck /dev/vg_cent64/thin_root
fsck from util-linux-ng 2.17.2
e2fsck 1.41.12 (17-May-2010)
Clearing orphaned inode 1047627 (uid=0, gid=0, mode=0100700, size=0)
Clearing orphaned inode 1182865 (uid=0, gid=0, mode=0100755, size=15296)
Clearing orphaned inode 1182869 (uid=0, gid=0, mode=0100755, size=24744)
Clearing orphaned inode 1444589 (uid=0, gid=0, mode=0100755, size=15256)
...
/dev/mapper/vg_cent64-thin_root: clean, 30776/1676080 files, 340024/6703104 blocks
[root@Cent64 ~]#
[root@Cent64 ~]# mount /dev/vg_cent64/thin_root /mnt/
[root@Cent64 ~]#
[root@Cent64 ~]# fstrim -v /mnt/
/mnt/: 26058436608 bytes were trimmed
[root@Cent64 ~]#
[root@Cent64 ~]# lvs
  LV        VG        Attr      LSize  Pool  Origin Data%  Move Log Cpy%Sync Convert
  lv_root   vg_cent64 -wi-ao--- 25.57g
  lv_swap   vg_cent64 -wi-ao---  3.94g
  thin_root vg_cent64 Vwi-aotz- 25.57g thinp           5.13
  thinp     vg_cent64 twi-a-tz- 25.57g                 5.13

Success! All the way from 100% back down to 5%.

Now let's update the grub.conf and the fstab to use the new thin_root LV.

NOTE: grub.conf is on the filesystem on sda1.
NOTE: fstab is on the filesystem on thin_root.

[root@Cent64 ~]# sed -i -e 's/lv_root/thin_root/g' /boot/grub/grub.conf
[root@Cent64 ~]# sed -i -e 's/lv_root/thin_root/g' /mnt/etc/fstab
[root@Cent64 ~]# umount /mnt/

Time for a reboot!

After the system comes back up we should now be able to delete the original lv_root.

[root@Cent64 ~]# lvremove /dev/vg_cent64/lv_root
Do you really want to remove active logical volume lv_root? [y/n]: y
  Logical volume "lv_root" successfully removed

Now we want to remove that extra disk (/dev/sdb) I added. However, there is a subtle difference between my system now and my system before: there is a metadata LV (thinp_tmeta) taking up a small amount of space, which prevents everything from fitting back onto the first disk (/dev/sda).

No biggie. We'll just steal that amount of space from lv_swap and then run pvmove to move all of the data back to /dev/sda.

[root@Cent64 ~]# lvs -a --units=b
  LV            VG        Attr      LSize        Pool  Origin Data%  Move Log Cpy%Sync Convert
  lv_swap       vg_cent64 -wi-ao---  4227858432B
  thin_root     vg_cent64 Vwi-aotz- 27455913984B thinp           5.13
  thinp         vg_cent64 twi-a-tz- 27455913984B                 5.13
  [thinp_tdata] vg_cent64 Twi-aot-- 27455913984B
  [thinp_tmeta] vg_cent64 ewi-aot--    29360128B
[root@Cent64 ~]#
[root@Cent64 ~]# swapoff /dev/vg_cent64/lv_swap
[root@Cent64 ~]#
[root@Cent64 ~]# lvresize --size=-29360128B /dev/vg_cent64/lv_swap
  WARNING: Reducing active logical volume to 3.91 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv_swap? [y/n]: y
  Reducing logical volume lv_swap to 3.91 GiB
  Logical volume lv_swap successfully resized
[root@Cent64 ~]#
[root@Cent64 ~]# mkswap /dev/vg_cent64/lv_swap
mkswap: /dev/vg_cent64/lv_swap: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 4100092 KiB
no label, UUID=7b023342-a9a9-4676-8bc6-1e60541010e4
[root@Cent64 ~]#
[root@Cent64 ~]# swapon -v /dev/vg_cent64/lv_swap
swapon on /dev/vg_cent64/lv_swap
swapon: /dev/mapper/vg_cent64-lv_swap: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/vg_cent64-lv_swap: pagesize=4096, swapsize=4198498304, devsize=4198498304

Now we can get rid of sdb by running pvmove and vgreduce.

[root@Cent64 ~]# pvmove /dev/sdb
  /dev/sdb: Moved: 0.1%
  /dev/sdb: Moved: 11.8%
  /dev/sdb: Moved: 21.0%
  /dev/sdb: Moved: 32.0%
  /dev/sdb: Moved: 45.6%
  /dev/sdb: Moved: 56.2%
  /dev/sdb: Moved: 68.7%
  /dev/sdb: Moved: 79.6%
  /dev/sdb: Moved: 90.7%
  /dev/sdb: Moved: 100.0%
[root@Cent64 ~]#
[root@Cent64 ~]# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/sda2  vg_cent64 lvm2 a--  29.51g      0
  /dev/sdb   vg_cent64 lvm2 a--  31.00g 31.00g
[root@Cent64 ~]#
[root@Cent64 ~]# vgreduce vg_cent64 /dev/sdb
  Removed "/dev/sdb" from volume group "vg_cent64"

Boom! You're done!

Dusty

Thin LVM Snapshots: Why Size Is Less Important


Traditionally with LVM snapshots you need to be especially careful when choosing how big to make your snapshot; if it is too small it will fill up and become invalid. If you are taking many snapshots with limited space, it becomes quite difficult to decide which snapshots need more space than others.

One approach has been to leave some extra space in the VG and let dmeventd periodically poll and lvextend the snapshot if necessary (I covered this in a previous post). However, as a reader of mine found out, this polling mechanism does not work very well for small snapshots.

Fortunately, with the addition of thin logical volume support within LVM (I believe initially in RHEL/CentOS 6.4 and/or Fedora 17), size is much less important to consider when taking a snapshot. If you create a thin LV and then "snapshot" the thin LV, what you actually end up with are two thin LVs. They both use extents from the same pool and the size will grow dynamically as needed.

As always, examples help. In my system I have a 20G sdb. I'll create a VG, vgthin, that uses sdb and then a 10G thin pool, lvpool, within vgthin.

[root@localhost ~]# lsblk /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16   0  20G  0 disk
[root@localhost ~]#
[root@localhost ~]# vgcreate vgthin /dev/sdb
  Volume group "vgthin" successfully created
[root@localhost ~]#
[root@localhost ~]# lvcreate --thinpool lvpool --size 10G vgthin
  Logical volume "lvpool" created

Next, I'll create a thin LV (lvthin), add a filesystem and mount it.

[root@localhost ~]# lvcreate --name lvthin --virtualsize 5G --thin vgthin/lvpool
  Logical volume "lvthin" created
[root@localhost ~]#
[root@localhost ~]# mkfs.ext4 /dev/vgthin/lvthin
...
[root@localhost ~]# mkdir /mnt/origin
[root@localhost ~]# mount /dev/vgthin/lvthin /mnt/origin
[root@localhost ~]#
[root@localhost ~]# lvs
  LV     VG     Attr      LSize  Pool   Origin Data%  Move Log Copy%  Convert
  lvpool vgthin twi-a-tz- 10.00g                 1.27
  lvthin vgthin Vwi-a-tz-  5.00g lvpool          2.54

I'll go ahead and create the snapshot now, but as a sanity check I'll first create a file, A, that exists before the snapshot. After the snapshot I'll create a file, B. File B should NOT be visible in the snapshot if everything is working properly.

[root@localhost ~]# touch /mnt/origin/A
[root@localhost ~]#
[root@localhost ~]# lvcreate --name lvsnap --snapshot vgthin/lvthin
  Logical volume "lvsnap" created
[root@localhost ~]#
[root@localhost ~]# mkdir /mnt/snapshot
[root@localhost ~]# mount /dev/vgthin/lvsnap /mnt/snapshot/
[root@localhost ~]#
[root@localhost ~]# touch /mnt/origin/B
[root@localhost ~]#
[root@localhost ~]# ls /mnt/origin/
A  B  lost+found
[root@localhost ~]# ls /mnt/snapshot/
A  lost+found

Perfect! Snapshotting is working as expected. What are our utilizations?

[root@localhost ~]# lvs
  LV     VG     Attr      LSize  Pool   Origin Data%  Move Log Copy%  Convert
  lvpool vgthin twi-a-tz- 10.00g                 2.05
  lvsnap vgthin Vwi-aotz-  5.00g lvpool lvthin   4.10
  lvthin vgthin Vwi-aotz-  5.00g lvpool          4.10

Since we just created the snapshot, the current utilization of lvthin and lvsnap is the same. Take note also that the overall data usage for the entire pool shows that lvthin and lvsnap are sharing the blocks that were present at the time the snapshot was taken. This will continue to be true as long as those blocks don't change.

A few more sanity checks. If we add a 1G file to the filesystem on lvthin, we should see only the usage of lvthin increase.

[root@localhost ~]# cp /root/1Gfile /mnt/origin/
[root@localhost ~]# lvs
  LV     VG     Attr      LSize  Pool   Origin Data%  Move Log Copy%  Convert
  lvpool vgthin twi-a-tz- 10.00g                12.06
  lvsnap vgthin Vwi-aotz-  5.00g lvpool lvthin   4.10
  lvthin vgthin Vwi-aotz-  5.00g lvpool         24.10

If we add a 512M file into the snapshot then we should see only the usage of lvsnap increase.

[root@localhost ~]# cp /root/512Mfile /mnt/snapshot/
[root@localhost ~]# lvs
  LV     VG     Attr      LSize  Pool   Origin Data%  Move Log Copy%  Convert
  lvpool vgthin twi-a-tz- 10.00g                17.06
  lvsnap vgthin Vwi-aotz-  5.00g lvpool lvthin  14.10
  lvthin vgthin Vwi-aotz-  5.00g lvpool         24.10

And that's it. Not that exciting, but it is dynamic allocation of snapshots (did I also mention there is support for snapshots of snapshots of snapshots?). As long as there is still space within the pool, the snapshots will grow dynamically.
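For the curious, a snapshot of a snapshot is just another lvcreate --snapshot pointed at the previous snapshot. A quick sketch continuing the example above (the lvsnap2 name is mine; depending on your LVM version the new snapshot may be created inactive and need lvchange -ay -K to activate before mounting):

# snapshot the snapshot; the result is simply a third thin LV in the same pool
lvcreate --name lvsnap2 --snapshot vgthin/lvsnap

# all three LVs now share their unchanged blocks within lvpool
lvs vgthin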

Cheers

Dusty

Guest Discard/FSTRIM On Thin LVs


In my last post I showed how to recover space from disk images backed by sparse files. As a small addition I'd like to also show how to do the same with a guest disk image that is backed by a thinly provisioned Logical Volume.

First things first, I modified the /etc/lvm/lvm.conf file to have the issue_discards = 1 option set. I'm not 100% sure this is needed but I did it at the time so I wanted to include it here.
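For reference, flipping that option is a one-line sed; a quick sketch:

# enable passing discards from LVM down to the underlying block devices
sed -i -e 's/issue_discards = 0/issue_discards = 1/' /etc/lvm/lvm.conf

# confirm the change took effect
grep issue_discards /etc/lvm/lvm.conf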

Next I created a new VG (vgthin) out of a spare partition and then created a thin LV pool (lvthinpool) inside the VG. Finally I created a thin LV within the pool (lvthin). This is all shown below:

[root@host ~]# vgcreate vgthin /dev/sda3
  Volume group "vgthin" successfully created
[root@host ~]# lvcreate --thinpool lvthinpool --size 20G vgthin
  Logical volume "lvthinpool" created
[root@host ~]#
[root@host ~]# lvcreate --name lvthin --virtualsize 10G --thin vgthin/lvthinpool
  Logical volume "lvthin" created

To observe the usages of the thin LV and the thin pool you can use the lvs command and take note of the Data% column:

[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool          0.00
  lvthinpool vgthin twi-a-tz- 20.00g                     0.00

Next I needed to add the disk to the guest. I did it using the following XML and virsh command. Note from my previous post that the SCSI controller inside my guest is a virtio-scsi controller and that I am adding the discard='unmap' option.

[root@host ~]# cat <<EOF > /tmp/thinLV.xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source dev='/dev/vgthin/lvthin'/>
  <target dev='sdb' bus='scsi'/>
</disk>
EOF
[root@host ~]#
[root@host ~]# virsh attach-device Fedora19 /tmp/thinLV.xml --config
...

After a quick power cycle of the guest I then created a filesystem on the new disk (sdb) and mounted it within the guest.

[root@guest ~]# mkfs.ext4 /dev/sdb
...
[root@guest ~]#
[root@guest ~]# mount /dev/sdb /mnt/

Same as last time, I then copied a large file into the guest. After I did so, you can see from the lvs output that the thin LV is now using about 11% of its allotted space within the pool.

[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool          1.34
  lvthinpool vgthin twi-a-tz- 20.00g
[root@host ~]#
[root@host ~]# scp /tmp/code.tar.gz root@192.168.100.136:/mnt/
root@192.168.100.136's password:
code.tar.gz                              100% 1134MB  29.8MB/s   00:38
[root@host ~]#
[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool         11.02
  lvthinpool vgthin twi-a-tz- 20.00g

It was then time for a little TRIM action:

[root@guest ~]# df -kh /mnt/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        9.8G  1.2G  8.1G  13% /mnt
[root@guest ~]#
[root@guest ~]# rm /mnt/code.tar.gz
rm: remove regular file ‘/mnt/code.tar.gz’? y
[root@guest ~]#
[root@guest ~]# df -kh /mnt/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        9.8G   23M  9.2G   1% /mnt
[root@guest ~]# fstrim -v /mnt/
/mnt/: 1.2 GiB (1329049600 bytes) trimmed

And from within the host we can see that the utilization of the thin LV has appropriately dwindled back down to ~2.85%:

[root@host ~]# lvs vgthin
  LV         VG     Attr      LSize  Pool       Origin Data%  Move Log Copy%  Convert
  lvthin     vgthin Vwi-aotz- 10.00g lvthinpool          2.85
  lvthinpool vgthin twi-a-tz- 20.00g                     1.42

Again I have posted my full guest libvirt XML here.

Dusty

PS See here for a more thorough example of creating thin LVs.

Easily Resize LVs and Underlying Filesystems


Part of the reason I use Logical Volumes for my block devices rather than standard partitions is because LVs are much more flexible when it comes to sizing/resizing.

For example, in a particular setup you might have a 1 TB hard drive that you want to be broken up into two block devices. You could either choose two 500 GB partitions, or two 500 GB LVs. If you use partitions and later find out that you really needed 300 GB for one and 700 GB for the other then resizing might get a little complicated. On the other hand, with LVs resizing is simple!

LVM has the ability to resize the LV and the underlying filesystem at the same time (it uses fsadm under the covers to resize the filesystem, which on my system supports resizing ext2/ext3/ext4/ReiserFS/XFS). To pull this off, simply use lvresize along with the --resizefs option. An example of this command is shown below:

dustymabe@fedorabook: tmp>sudo lvresize --size +1g --resizefs /dev/vg1/lv1
[sudo] password for dustymabe:
fsck from util-linux 2.19.1
/dev/mapper/vg1-lv1: clean, 11/262144 files, 51278/1048576 blocks
  Extending logical volume lv1 to 5.00 GiB
  Logical volume lv1 successfully resized
resize2fs 1.41.14 (22-Dec-2010)
Resizing the filesystem on /dev/mapper/vg1-lv1 to 1310720 (4k) blocks.
The filesystem on /dev/mapper/vg1-lv1 is now 1310720 blocks long.
dustymabe@fedorabook: tmp>

It should be noted that online resizing is only possible when you are making an LV larger. If you are making it smaller, the filesystem will most likely need to be unmounted first.
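For completeness, shrinking an ext4 LV would look roughly like the sketch below, reusing the vg1/lv1 names from the example above (the /mnt mount point is just an assumption for illustration):

# take the filesystem offline first; ext2/3/4 cannot be shrunk while mounted
umount /dev/vg1/lv1

# --resizefs shrinks the filesystem (via fsadm/resize2fs) before shrinking the LV
lvresize --size -1g --resizefs /dev/vg1/lv1

# mount it again once the resize completes
mount /dev/vg1/lv1 /mnt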

Happy Resizing!
Dusty Mabe

Automatically Extend LVM Snapshots


Snapshot logical volumes are a great way to save the state of an LV (a special block device) at a particular point in time. Essentially this provides the ability to snapshot block devices and then revert them back at a later date. In other words you can rest easy when that big upgrade comes along :)

This all seems fine and dandy until your snapshot runs out of space! Yep, the size of the snapshot does matter. Snapshot LVs are Copy-On-Write (COW) devices: old blocks from the origin LV get "copied" to the snapshot LV only when new blocks are "written" to the origin LV. In other words, only the blocks that actually change in the origin LV get copied over to the snapshot LV.

Thus, you can make a snapshot LV much smaller than the origin LV and as long as the snapshot never fills up then you are fine. If it does fill up, then the snapshot is invalid and you can no longer use it.

The problem with this is that it becomes quite tricky to determine how much space you actually need in your snapshot. If you notice that your snapshot is becoming full you can use lvextend to increase the size of the snapshot, but this is not very desirable since it is not automated and requires user intervention.
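A manual extension is just a quick lvextend; a sketch, assuming a snapshot named snap1 in vg1 (the same names used in the example later in this post):

# grow the snapshot's COW area by 200M before it fills up
lvextend --size +200M /dev/vg1/snap1

# check the new size and how full the snapshot is
lvs -o lv_name,lv_size,snap_percent vg1/snap1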

The good news is that there was recently an addition to LVM that allows for autoextension of snapshot LVs! Bugzilla report #427298 tracked the request, and the feature has now been released in lvm2-2.02.84-1. The lvm-devel email from when the patch came through contains some good details on how to use the new functionality.

To summarize, you edit /etc/lvm/lvm.conf and set snapshot_autoextend_threshold to something other than 100 (100 is the default value and also disables automatic extension). In addition, you set snapshot_autoextend_percent; this value is the percentage by which the snapshot LV will be extended.

To test this out I edited my /etc/lvm/lvm.conf file to have the following values:

snapshot_autoextend_threshold = 80
snapshot_autoextend_percent = 20

These values indicate that once the snapshot is 80% full, its size should be extended by 20%. To get the LVM monitoring to pick up the changes, the lvm2-monitor service needs to be restarted (this varies by platform).
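On my Fedora 17 test box that restart is a single systemctl call; a quick sketch (a SysV-style distro would use something like service lvm2-monitor restart instead):

# have the monitoring daemon re-read lvm.conf and pick up the autoextend settings
systemctl restart lvm2-monitor.service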

Now, let's test it out! We will create an LV, make a filesystem, mount it, and then snapshot the LV.

[root@F17 ~]# lvcreate --size=1G --name=lv1 --addtag @lv1 vg1
  Logical volume "lv1" created
[root@F17 ~]#
[root@F17 ~]# mkfs.ext4 /dev/vg1/lv1 > /dev/null
mke2fs 1.42 (29-Nov-2011)
[root@F17 ~]#
[root@F17 ~]# mount /dev/vg1/lv1 /mnt/
[root@F17 ~]#
[root@F17 ~]# lvcreate --snapshot --size=500M --name=snap1 --addtag @lv1 /dev/vg1/lv1
  Logical volume "snap1" created
[root@F17 ~]#

Verify the snapshot was created by using lvs.

[root@F17 ~]# lvs -o lv_name,vg_name,lv_size,origin,snap_percent @lv1
  LV    VG   LSize   Origin Snap%
  lv1   vg1    1.00g
  snap1 vg1  500.00m lv1     0.00

Finally, I can test the snapshot autoextension. Since my snapshot is 500M in size, let's create a ~420M file in the origin LV. That is just over 80% of the snapshot's size, so the snapshot should get resized.

[root@F17 ~]# dd if=/dev/zero of=/mnt/file bs=1M count=420
420+0 records in
420+0 records out
440401920 bytes (440 MB) copied, 134.326 s, 3.3 MB/s
[root@F17 ~]#
[root@F17 ~]# ls -lh /mnt/file
-rw-r--r--. 1 root root 420M Mar  4 11:36 /mnt/file

A quick run of lvs reveals that the underlying monitoring code did its job and extended the LV by 20% to 600M!

[root@F17 ~]# lvs -o lv_name,vg_name,lv_size,origin,snap_percent @lv1
  LV    VG   LSize   Origin Snap%
  lv1   vg1    1.00g
  snap1 vg1  600.00m lv1    70.29
[root@F17 ~]#


Dusty Mabe