Automatically Extend LVM Snapshots

Snapshot logical volumes are a great way to save the state of an LV (a special block device) at a particular point in time. Essentially this gives you the ability to snapshot block devices and then revert to them later. In other words, you can rest easy when that big upgrade comes along :)

This all seems fine and dandy until your snapshot runs out of space! Yep, the size of the snapshot does matter. Snapshot LVs are Copy-On-Write (COW) devices: old blocks from the origin LV get "Copied" to the snapshot LV only when new blocks are "Written" to the origin LV, so only the blocks that actually change in the origin consume space in the snapshot.

Thus, you can make a snapshot LV much smaller than the origin LV, and as long as the snapshot never fills up, you are fine. If it does fill up, the snapshot becomes invalid and you can no longer use it.

The problem is that it is quite tricky to determine how much space you actually need in your snapshot. If you notice that your snapshot is becoming full you can use lvextend to increase its size, but this is not very desirable since it is not automated and requires user intervention.
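Done by hand, that intervention looks something like the following sketch. The snapshot name, the 80% threshold, and the +100M step are all example values I picked for illustration, not anything LVM mandates:

```shell
#!/bin/sh
# Sketch of a manual (non-automated) check: extend the snapshot by hand
# once it passes a threshold. Names and sizes are example values.
SNAP="vg1/snap1"
THRESHOLD=80

# Strip "  81.25"-style lvs output down to the integer part.
parse_pct() {
    printf '%s\n' "$1" | tr -d ' ' | cut -d. -f1
}

# Only run the real commands where the LVM tools exist (needs root).
if command -v lvs >/dev/null 2>&1; then
    used=$(parse_pct "$(lvs --noheadings -o snap_percent "$SNAP")")
    if [ "$used" -ge "$THRESHOLD" ]; then
        # Grow the snapshot; requires free extents in the VG.
        lvextend -L +100M "/dev/$SNAP"
    fi
fi
```

You would have to run something like this from cron, which is exactly the kind of babysitting the autoextend feature removes.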

The good news is that recently there was an addition to lvm that allows for autoextension of snapshot LVs! Bugzilla report #427298 tracked the request, and the feature has now been released in lvm2-2.02.84-1. The lvm-devel email from when the patch landed contains some good details on how to use the new functionality.

To summarize, you edit /etc/lvm/lvm.conf and set snapshot_autoextend_threshold to something other than 100 (100 is the default value and also disables automatic extension). In addition, you set snapshot_autoextend_percent, which is the percentage by which the snapshot LV gets extended each time the threshold is crossed.

To test this out I edited my /etc/lvm/lvm.conf file to have the following values:

snapshot_autoextend_threshold = 80
snapshot_autoextend_percent = 20

These values indicate that once the snapshot is 80% full, its size should be extended by 20%. For the lvm monitoring to pick up the changes, the lvm2-monitor service needs to be restarted (this varies by platform).
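If you want to script the edit rather than do it by hand, a sed invocation along these lines works on the stock key = value format. Shown here against a scratch copy with a stand-in stanza (the surrounding lines in a real lvm.conf vary between versions); point CONF at /etc/lvm/lvm.conf as root to do it for real:

```shell
#!/bin/sh
# Set the autoextend knobs in a copy of lvm.conf.
CONF=./lvm.conf.test

# A stand-in for the stock file; real files use the same "key = value" form.
cat > "$CONF" <<'EOF'
activation {
    snapshot_autoextend_threshold = 100
    snapshot_autoextend_percent = 20
}
EOF

sed -i \
    -e 's/^\([[:space:]]*snapshot_autoextend_threshold\) = .*/\1 = 80/' \
    -e 's/^\([[:space:]]*snapshot_autoextend_percent\) = .*/\1 = 20/' \
    "$CONF"

grep snapshot_autoextend "$CONF"
```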

Now, let's test it out! We will create an LV, make a filesystem, mount it, and then snapshot the LV.

[root@F17 ~]# lvcreate --size=1G --name=lv1 --addtag @lv1 vg1
  Logical volume "lv1" created
[root@F17 ~]# mkfs.ext4 /dev/vg1/lv1 > /dev/null
mke2fs 1.42 (29-Nov-2011)
[root@F17 ~]# mount /dev/vg1/lv1 /mnt/
[root@F17 ~]# lvcreate --snapshot --size=500M --name=snap1 --addtag @lv1 /dev/vg1/lv1
  Logical volume "snap1" created

Verify the snapshot was created by using lvs.

[root@F17 ~]# lvs -o lv_name,vg_name,lv_size,origin,snap_percent @lv1
  LV    VG   LSize   Origin Snap%
  lv1   vg1    1.00g
  snap1 vg1  500.00m lv1     0.00

Finally, I can test the snapshot autoextension. Since my snapshot is 500M in size, let's create a file that is ~420M in the origin LV. This will be just over 80% of the snapshot size, so the snapshot should get resized.

[root@F17 ~]# dd if=/dev/zero of=/mnt/file bs=1M count=420
420+0 records in
420+0 records out
440401920 bytes (440 MB) copied, 134.326 s, 3.3 MB/s
[root@F17 ~]# ls -lh /mnt/file
-rw-r--r--. 1 root root 420M Mar  4 11:36 /mnt/file

A quick run of lvs reveals that the underlying monitoring code did its job and extended the LV by 20% to 600M!!

[root@F17 ~]# lvs -o lv_name,vg_name,lv_size,origin,snap_percent @lv1
  LV    VG   LSize   Origin Snap%
  lv1   vg1    1.00g
  snap1 vg1  600.00m lv1    70.29
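As a quick sanity check, the numbers line up with the configured percentages (integer math in megabytes):

```shell
#!/bin/sh
# Verify the resize arithmetic from the lvs output above.
ORIG_MB=500      # snapshot size before the extension
EXTEND_PCT=20    # snapshot_autoextend_percent

new_mb=$((ORIG_MB + ORIG_MB * EXTEND_PCT / 100))
echo "new size: ${new_mb}M"         # 600M, matching lvs

# ~420M of copied blocks inside a 600M snapshot:
echo "fill: $((420 * 100 / 600))%"  # ~70%, matching the 70.29 Snap%
```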

Dusty Mabe

15 Responses to “Automatically Extend LVM Snapshots”

  • Wow, this is a very nifty feature! Do you know if it is available in RHEL?


  • Hello!

    Thanks for the hint. But it seems that lvm2-monitor is not available for Ubuntu 12.10. Found nothing so far…
    Do you have any ideas how I can use the auto-extend feature for snapshots on Ubuntu 12.10?


    • Hi,

      I’m not sure if ubuntu has such an initscript/service (if I get a chance later in the week I’ll fire up a live-cd and see if I can find out). All that the lvm2-monitor script in RHEL/CentOS does is call vgchange --monitor y --poll y $vg for all VGs in the system. See the vgchange command in the following file for an example of what to run to get monitoring to work.
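      In other words, a rough equivalent of the initscript looks like this sketch. To keep it safe to paste, the function below just echoes the commands; pipe it to sh as root to actually run them:

```shell
#!/bin/sh
# Rough equivalent of the RHEL/CentOS lvm2-monitor initscript: turn on
# dmeventd monitoring (and polling) for every VG on the system.
# monitor_all_vgs takes VG names and echoes one vgchange command per VG.
monitor_all_vgs() {
    for vg in "$@"; do
        echo "vgchange --monitor y --poll y $vg"
    done
}

# On a real system (as root): monitor_all_vgs $(vgs --noheadings -o vg_name) | sh
monitor_all_vgs vg1
```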


      • Hi Dusty!

        Thank you sooo much 😀 I tried “vgchange --monitor=y --poll=y $vg” and it automatically extended my snapshot. Nice.
        But I still have some questions:

        1. What is the difference between “--poll=y” and “--poll=n”? I do not understand the description in the man page, and Googling was not really helpful either (I am not a native speaker). I have not noticed any difference between the two.

        2. Do you know why, even when I say “--monitor=y”, the monitoring switches off after auto-resizing?

        Apr 17 22:51:26 easystore lvm[2969]: Monitoring snapshot easystore-snap2
        Apr 17 23:00:26 easystore lvm[2969]: Snapshot easystore-snap2 is now 81% full.
        Apr 17 23:00:26 easystore lvm[2969]: Extending logical volume snap2 to 256.00 MiB
        Apr 17 23:00:28 easystore lvm[2969]: Logical volume snap2 successfully resized
        Apr 17 23:00:28 easystore lvm[2969]: No longer monitoring snapshot easystore-snap2

        So “--monitor=y” does not actually monitor the snapshots continuously? Or is this intended behaviour and I should run “vgchange --monitor=y” as a cronjob to keep the snapshots monitored permanently?

        Thank you very much!

        • Hi En,

          I had this exact same issue – what is happening is this
          1. You add the logical volume to dmeventd to monitor
           2. dmeventd detects a size-change event and starts lvextend
          3. lvextend reads the lvm config file which has monitoring turned off
          4. lvextend extends the LV as it should
          5. lvextend removes the volume from monitoring as per the config.

           The fix is to change “monitoring = 0” to “monitoring = 1” in /etc/lvm/lvm.conf (note that this also causes monitoring to resume after a restart, and is a better fix than the initscript hack RH are doing)

           This is amazingly counter-intuitive, but there you go

          • Matthew,

            Thanks for sharing. I see that it does work when changing monitoring=1 in the lvm.conf file. Note that even though it still says “No longer monitoring snapshot…” it seems to still be monitoring it and extending it when appropriate:

            Apr 22 04:07:33 ubuntu lvm[10871]: Extending logical volume snap1 to 48.00 MiB
            Apr 22 04:07:33 ubuntu lvm[10871]: Monitoring snapshot vg1-snap1
            Apr 22 04:07:33 ubuntu lvm[10871]: Logical volume snap1 successfully resized
            Apr 22 04:07:33 ubuntu lvm[10871]: No longer monitoring snapshot vg1-snap1
            Apr 22 04:09:23 ubuntu lvm[10871]: Extending logical volume snap1 to 60.00 MiB
            Apr 22 04:09:23 ubuntu lvm[10871]: Monitoring snapshot vg1-snap1
            Apr 22 04:09:23 ubuntu lvm[10871]: Logical volume snap1 successfully resized
            Apr 22 04:09:23 ubuntu lvm[10871]: No longer monitoring snapshot vg1-snap1


  • I’ve tried this, and may have found a problem. I’m curious if others have as well.

    I tested this on CentOS5 and CentOS6 by building a volume of 64M filling it with a single file:

    dd if=/dev/urandom of=/mnt/test/testsnapvol0/file0 bs=1048576 count=56

    I then created a snapshot of 32M, and repeated the above dd operation. This breaks the snapshot.

    If I do essentially the same thing, but with a larger number of smaller dd operations, it works. The conclusion I draw is that there’s some latency involved in the expansion. If a write operation takes the snapshot’s space consumption past the percentage stored in snapshot_autoextend_threshold sufficiently quickly, the snapshot fails.

    Have others experienced this? Is there some way to address this latency? Is this the result of the “dmeventd only checks the snapshot every 10 seconds” issue that Petr mentions in his email?

    • A. Gideon,

      Yes, unfortunately the monitoring is more of a polling mechanism (i.e. “every 10 seconds”) rather than an event-triggered mechanism. Usually this is not a problem because LVs are large enough that it would be hard to go from 80% to 100% in 10 seconds. If your LVs are very small, this polling doesn’t work as well, as you have found out.
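      Back-of-the-envelope, the snapshot can be overrun whenever sustained writes can cover the gap between the threshold and 100% within one poll interval. A rough check with example numbers (the 50 MB/s write rate is an assumption for illustration, not a measurement):

```shell
#!/bin/sh
# Can a sustained write outrun the poller? Example: a 32M snapshot with an
# 80% threshold leaves ~6M of headroom; at 50 MB/s that is gone in well
# under one 10-second poll interval.
SNAP_MB=32
THRESHOLD_PCT=80
WRITE_MB_S=50
POLL_S=10

headroom_mb=$((SNAP_MB * (100 - THRESHOLD_PCT) / 100))
# Deciseconds (x10 for one decimal place of integer math) to fill headroom.
fill_ds=$((headroom_mb * 10 / WRITE_MB_S))

echo "headroom: ${headroom_mb}M, fills in ~$((fill_ds / 10)).$((fill_ds % 10))s"
if [ "$fill_ds" -lt $((POLL_S * 10)) ]; then
    echo "snapshot can fill before the next poll"
fi
```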

      I may have an idea on what you can do to get around this type of behavior. It involves using thin logical volumes and snapshots of thin logical volumes. I’ll post back here within a day or two if I find anything out.


  • Hello
    Thanks for the tutorial.
    I have LVM on Ubuntu 14.04 and I can easily create a new LV with this command:
    #sudo lvcreate -n snap_`date +%Y%m%d_%H%M%S` -L 10G -s /dev/vg00/home;
    but the autoextend does not work even though I changed the lvm.conf settings as written above. Also, I have not been able to find the ‘lvm2-monitor’ service on Ubuntu.

    I finally ran this:
    #sudo vgchange --monitor=y
    But should I not monitor the LV instead, with
    #sudo lvchange --monitor=y ?
    I restarted the server.

    Then I created an LV snapshot while there was no data in my /home/efl.
    (#sudo lvcreate -n snap_`date +%Y%m%d_%H%M%S` -L 10G -s /dev/vg00/home;)
    Then I copied 6.7G of data to /home/efl/rob.
    I created a new LV snapshot with the same command.
    I displayed the LVs with the #lvdisplay command.
    All was OK.
    I copied another 6.7G of data to /home/efl/rob1.
    I created a new 10G LV snapshot with the same command.
    I displayed the LVs with #lvdisplay and everything looked fine.
    I copied another 6.7G of data to /home/efl/rob2.
    I created a new LV snapshot, and that is where the problems started.
    The third snapshot was created, but I got an error message for the first one because it had become inactive.
    When I run #sudo lvs, the first snapshot is 100% full.

    Why was the first snapshot not extended?
    Why is the first snapshot full when I created it while no data was in /home/efl?
    Each time I created a snapshot (except the first), about 67% of its size got used, so each snapshot should show 67% on the ‘Allocated to snapshot’ line in #lvdisplay. Why is the first snapshot full while the others are not?

    Can someone explain to me how the snapshot grows and why ‘snapshot_autoextend_threshold’ and ‘snapshot_autoextend_percent’ do not work?

    Many thanks

    • If you rebooted the host after you enabled monitoring using vgchange/lvchange then your changes got wiped and monitoring was no longer enabled. I suggest you set monitoring=1 in /etc/lvm/lvm.conf and try it again. Also keep in mind that even if the LVs are being monitored you can’t copy too much data into them too fast or you will fill up the snapshot before the polling mechanism has a chance to extend it.

      • Dear Dustymabe, thanks a lot for your answer. That is exactly what I noticed just before I read your comment. I enabled monitoring=1 and it works :o). The problem I noticed is that it only works twice.

        For example: 1) I created a snapshot of 10G, 2) I ran the command #watch sudo lvs, 3) I copied 6.7G of data.

        When 6G of the data had been copied, the LV grew by 40%. Nice! It seemed too good to be true, so when all 6.7G had been copied I tried again by copying another folder of 6.7G. And it worked again, great!

        But it really was too good to be true. I waited a couple of minutes and tried again, and this time the LV stayed at 19G, did not grow any further, and filled up to 88%.

        I also ran this command:
        sudo vgchange --monitor=y --poll=y
        but no luck, it stayed at 88%.

        Is there a reason it only grows twice?

        Many thanks for your great help.

  • For example, this time I changed the limits:

    snapshot_autoextend_threshold = 80
    snapshot_autoextend_percent = 40

    But it only extended once, when it reached 80% full.
    When I copied another folder of 6.7G, the data% grew to 95% and the LV was not extended any further.

    Why did it only automatically extend once this time?
    Many thanks

    • Is it possible you are hitting some upper limit on the space within your volume group? What is the size of the volume group? What is the amount of free space in the volume group? What is the size of the LV you are snapshotting?

      The snapshot should never grow larger than the origin LV because it won’t ever need that much space.

      These are just suggestions… hopefully they lead you to an answer.
