Fedora BTRFS+Snapper – The Fedora 24 Edition

History

In the past I have configured my personal computers to be able to snapshot and roll back the entire system. To do this I leverage the BTRFS filesystem, a tool called snapper, and a patched version of Fedora's grub2 package. The grub2 patches come from the SUSE guys and are well documented in this git repo.

This setup is not new. I fully documented the steps I took for my Fedora 22 systems in two blog posts: part1 and part2. This is a condensed continuation of those posts for Fedora 24.

NOTE: I'm using Fedora 24 alpha, but everything should be the same for the released version of Fedora 24.

Setting up System with LUKS + LVM + BTRFS

The manual steps for setting up the system are detailed in the part1 blog post from Fedora 22. This time around I have created a script that will quickly configure a system with LUKS + LVM + BTRFS. As with the manual steps from part1, the script needs to be run in an Anaconda environment.
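
For reference, the core layering the script builds looks roughly like this. This is a minimal sketch with assumed values: the /dev/vda2 partition and cryptroot mapping name are assumptions, while vgroot/lvroot match the names used in the fstab entry later in this post. The real script handles partitioning, sizes, and mount points.

# Sketch only: LUKS at the bottom, LVM in the middle, BTRFS on top
cryptsetup luksFormat /dev/vda2                # encrypt the partition (assumed device)
cryptsetup luksOpen /dev/vda2 cryptroot        # open it as /dev/mapper/cryptroot
pvcreate /dev/mapper/cryptroot                 # LVM physical volume on top of LUKS
vgcreate vgroot /dev/mapper/cryptroot          # volume group
lvcreate -l 100%FREE -n lvroot vgroot          # logical volume for the root filesystem
mkfs.btrfs /dev/vgroot/lvroot                  # BTRFS on top of LVM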

You can easily enable ssh access to your Anaconda-booted machine by adding inst.sshd to the kernel command line arguments. After booting up you can scp the script over and then execute it to build the system. Please read over the script and modify it to your liking.
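
From another machine that workflow looks something like this (the script name and guest IP here are hypothetical placeholders):

$ scp setup-luks-lvm-btrfs.sh root@192.168.122.50:/tmp/
$ ssh root@192.168.122.50 bash /tmp/setup-luks-lvm-btrfs.sh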

Alternatively, for an automated install, I have embedded that same script into a kickstart file that you can use. The kickstart file doesn't really leverage Anaconda at all: it simply runs a %pre script and then reboots the box. It's essentially a way of telling Anaconda to run a bash script in an automated fashion; none of the kickstart directives at the top of the file actually get used.
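
The rough shape of such a kickstart file looks like this (a sketch of the structure, not the actual file):

# kickstart sketch: the top-level directives are ignored, %pre does all the work
text
%pre --erroronfail
# ... the same LUKS + LVM + BTRFS setup script goes here ...
reboot
%end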

Installing and Configuring Snapper

After the system has booted for the first time, let's configure it for doing snapshots. I still want to be able to track how much space each snapshot consumes, so I'll go ahead and enable quota support on BTRFS. I covered how to do this in a previous post:

[root@localhost ~]# btrfs quota enable /
[root@localhost ~]# btrfs qgroup show /
qgroupid         rfer         excl 
--------         ----         ---- 
0/5           1.08GiB      1.08GiB

Next up is installing and configuring snapper. I am also going to install the dnf plugin for snapper so that rpm transactions will automatically get snapshotted:

[root@localhost ~]# dnf install -y snapper python3-dnf-plugins-extras-snapper
...
Complete!
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |         
[root@localhost ~]# snapper list-configs
Config | Subvolume
-------+----------
root   | /        
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 57 top level 5 path .snapshots

So we used the snapper command to create a configuration for the BTRFS filesystem mounted at /. As part of this process, we can see from the btrfs subvolume list / output that snapper also created a .snapshots subvolume. This subvolume will house the copy-on-write (COW) snapshots that are taken of the system.

Next, we'll work around a bug that causes snapper to set the wrong SELinux context on the .snapshots directory:

[root@localhost ~]# restorecon -v /.snapshots/
restorecon reset /.snapshots context system_u:object_r:unlabeled_t:s0->system_u:object_r:snapperd_data_t:s0

Finally, we'll add an entry to fstab so that, regardless of which subvolume we are actually booted into, we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab
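
As a quick sanity check that the entry works (snapper may have already mounted the subvolume for you, in which case mount -a is a no-op; output elided here):

[root@localhost ~]# mount -a
[root@localhost ~]# findmnt /.snapshots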

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab, we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description | Userdata
-------+---+-------+---------------------------------+------+---------+-------------+---------
single | 0 |       |                                 | root |         | current     |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang     |         
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 64 top level 5 path .snapshots
ID 260 gen 64 top level 259 path .snapshots/1/snapshot
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot, called BigBang, and then ran btrfs subvolume list / to verify that a new snapshot was actually created. Notice that at the top of the output we ran btrfs subvolume get-default /, which shows the currently set default subvolume for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide to use one of the snapshots for a rollback.

Since we took a snapshot, let's go ahead and make some changes to the system by updating the kernel:

[root@localhost ~]# dnf update -y kernel
...
Complete!
[root@localhost ~]# rpm -q kernel
kernel-4.5.0-0.rc7.git0.2.fc24.x86_64
kernel-4.5.2-300.fc24.x86_64
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
-------+---+-------+---------------------------------+------+---------+-------------------------------+---------
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang                       |         
single | 2 |       | Sat 23 Apr 2016 01:08:18 PM UTC | root | number  | /usr/bin/dnf update -y kernel |

So we updated the kernel, and the snapper dnf plugin automatically created a snapshot for us. Let's reboot the system and see if the new kernel boots properly:

[root@localhost ~]# reboot 
...
[dustymabe@media ~]$ ssh root@192.168.122.188 
Warning: Permanently added '192.168.122.188' (ECDSA) to the list of known hosts.
root@192.168.122.188's password: 
Last login: Sat Apr 23 12:18:55 2016 from 192.168.122.1
[root@localhost ~]# 
[root@localhost ~]# uname -r
4.5.2-300.fc24.x86_64

Rolling Back

Say we don't like that new kernel. Let's go back to the earlier snapshot we made:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 3.)
Creating read-write snapshot of snapshot 1. (Snapshot 4.)
Setting default subvolume to snapshot 4.
[root@localhost ~]# reboot

snapper created a read-only snapshot of the current system and then a new read-write subvolume based on the snapshot we wanted to go back to. It then set the default subvolume to the newly created read-write subvolume. After a reboot you'll be in that read-write subvolume, exactly back in the state your system was in at the time the snapshot was created.
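
For the curious, the rollback is conceptually similar to the following manual steps (illustrative only; snapper also records metadata for each snapshot, and the subvolume ID passed to set-default comes from btrfs subvolume list /):

# Roughly what the rollback amounts to under the hood
btrfs subvolume snapshot -r / /.snapshots/3/snapshot                      # read-only copy of the running system
btrfs subvolume snapshot /.snapshots/1/snapshot /.snapshots/4/snapshot    # read-write copy of the target snapshot
btrfs subvolume set-default 263 /                                         # boot into the new subvolume by ID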

In our case, after reboot we should now be booted into snapshot 4, as indicated by the output of the snapper rollback command above, and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 263 gen 87 top level 259 path .snapshots/4/snapshot
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
-------+---+-------+---------------------------------+------+---------+-------------------------------+---------
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang                       |         
single | 2 |       | Sat 23 Apr 2016 01:08:18 PM UTC | root | number  | /usr/bin/dnf update -y kernel |         
single | 3 |       | Sat 23 Apr 2016 01:17:43 PM UTC | root |         |                               |         
single | 4 |       | Sat 23 Apr 2016 01:17:43 PM UTC | root |         |                               |         
[root@localhost ~]# ls /.snapshots/
1  2  3  4
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 88 top level 5 path .snapshots
ID 260 gen 81 top level 259 path .snapshots/1/snapshot
ID 261 gen 70 top level 259 path .snapshots/2/snapshot
ID 262 gen 80 top level 259 path .snapshots/3/snapshot
ID 263 gen 88 top level 259 path .snapshots/4/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# uname -r
4.5.0-0.rc7.git0.2.fc24.x86_64
[root@localhost ~]# rpm -q kernel
kernel-4.5.0-0.rc7.git0.2.fc24.x86_64

Enjoy!

Dusty

Vagrant: Sharing Folders with vagrant-sshfs

Cross-posted from this Fedora Magazine post.

Introduction

We're trying to focus more on developer experience in the Red Hat ecosystem, and in the process we've started to incorporate Vagrant into our standard offerings. As part of that effort, we're seeking a shared folder solution that doesn't involve a bunch of if/else logic to figure out exactly which synced folder type you should use based on the OS and hypervisor underneath Vagrant.

The current options for Vagrant shared folder support can make you want to tear your hair out when you try to figure out which one to use in your environment. This led us to look for a better answer, so users no longer have to make these choices on their own based on their environment.

Current Synced Folder Solutions

"So what is the fuss about? Is it really that hard?" Well it's certainly doable, but we want it to be easier. Here are the currently available synced folder options within vagrant today:

  • virtualbox
    • This synced folder type uses a kernel module from the VirtualBox Guest Additions software to talk to the hypervisor. It requires you to be running on top of the VirtualBox hypervisor and to have the VirtualBox Guest Additions installed in the Vagrant box you launch. Licensing can also make distribution of the compiled Guest Additions problematic.
    • Hypervisor Limitation: VirtualBox
    • Host OS Limitation: None
  • nfs
    • This synced folder type uses NFS mounts. It requires you to be running on top of a Linux or Mac OS X host.
    • Hypervisor Limitation: None
    • Host OS Limitation: Linux, Mac
  • smb
    • This synced folder type uses Samba mounts. It requires you to be running on top of a Windows host and to have Samba client software in the guest.
    • Hypervisor Limitation: None
    • Host OS Limitation: Windows
  • 9p
    • This synced folder implementation uses 9p file sharing within the libvirt/KVM hypervisor. It requires the hypervisor to be libvirt/KVM and thus also requires Linux to be the host OS.
    • Hypervisor Limitation: Libvirt
    • Host OS Limitation: Linux
  • rsync
    • This synced folder implementation simply syncs folders between host and guest using rsync. Unfortunately this isn't truly a shared folder: the files are simply copied back and forth and can become out of sync.
    • Hypervisor Limitation: None
    • Host OS Limitation: None

So depending on your environment, you are rather limited in which options work. You have to choose carefully to get something working without much hassle.

What About SSHFS?

As part of this discovery process I had a simple question: "why not sshfs?" It turns out that Fabio Kreusch had a similar idea a while back and wrote a plugin to do mounts via SSHFS.

When I first found this I was excited; I thought I had the answer in my hands and someone had already written it! Unfortunately, the old implementation wasn't written as a synced folder plugin like the other synced folder plugins within Vagrant; in other words, it didn't inherit from the synced folder class and implement its functions. It also, by default, mounted a guest folder onto the host, rather than the other way around as most synced folder implementations do.

One goal I have is to make SSHFS a supported synced folder plugin within Vagrant, and possibly get it merged into Vagrant core one day. So I reached out to Fabio to find out if he would be willing to accept patches to bring things more in line with a traditional synced folder plugin. He kindly let me know that he didn't have much time to work on vagrant-sshfs these days and that he no longer used it, so I volunteered to take over.

The vagrant-sshfs Plugin

To make the plugin follow the traditional synced folder plugin model, I decided to rewrite it. I based most of the new code on the NFS synced folder plugin code. The new code repo is here on GitHub.

So now we have a plugin that will do SSHFS mounts of host folders into the guest. It works without any special setup on the host; the only requirement is that the sftp-server software exist there. sftp-server is usually provided by OpenSSH and is thus easily available on Windows, Mac, and Linux.
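
Conceptually, the result is the same as running a plain sshfs mount inside the guest, pointed back at the host, something like this (illustrative only; the user, host, and paths are placeholders, and the plugin handles the connection details for you):

# Inside the guest: mount a host directory over SSHFS
$ sshfs user@host.example.com:/path/on/host /path/on/guest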

To compare with the other implementations on environment restrictions, here is what the SSHFS implementation looks like:

  • sshfs
    • This synced folder implementation uses SSHFS to share folders between host and guest. The only requirement is that the sftp-server executable exist on the host.
    • Hypervisor Limitation: None
    • Host OS Limitation: None

Here are the overall benefits of using vagrant-sshfs:

  • Works on any host platform
    • Windows, Linux, Mac OS X
  • Works on any type-2 hypervisor
    • VirtualBox, Libvirt/KVM, Hyper-V, VMware
  • Seamlessly works on remote Vagrant solutions
    • Works with vagrant-aws, vagrant-openstack, etc.

Where To Get The Plugin

This plugin is hot off the presses, so it hasn't quite made it into Fedora yet. There are a few ways you can get it, though. First, you can use Vagrant itself to retrieve the plugin from RubyGems:

$ vagrant plugin install vagrant-sshfs

Alternatively, you can get the RPM package from my Copr:

$ sudo dnf copr enable dustymabe/vagrant-sshfs
$ sudo dnf install vagrant-sshfs

Your First vagrant-sshfs Mount

To use the plugin, you must tell Vagrant what folder you want mounted into the guest and where, by adding it to your Vagrantfile. An example Vagrantfile is below:

Vagrant.configure(2) do |config|
  config.vm.box = "fedora/23-cloud-base"
  config.vm.synced_folder "/path/on/host", "/path/on/guest", type: "sshfs"
end

This will start a Fedora 23 base cloud image and mount the /path/on/host directory from the host into the running Vagrant box under the /path/on/guest directory.
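
After a vagrant up, you can confirm the mount from the guest:

$ vagrant up
...
$ vagrant ssh -c 'findmnt /path/on/guest'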

Conclusion

We've tried to find the option that is easiest for the user to configure. While SSHFS may have some drawbacks compared to the others, such as speed, we believe it solves most people's use cases and is dead simple to configure out of the box.

Please give it a try and let us know how it works for you! Drop me a mail or open an issue on GitHub.

Cheers!
Dusty