Archive for the 'GRUB' Category

Fedora BTRFS+Snapper – The Fedora 25 Edition

History

I'm back again with the Fedora 25 edition of my Fedora BTRFS+Snapper series. As you know, in the past I have configured my computers to be able to snapshot and roll back the entire system by leveraging BTRFS snapshots, a tool called snapper, and a patched version of Fedora's grub2 package. I have updated the patchset (patches taken from SUSE) for Fedora 25's version of grub, and the results are available in this git repo.

This setup is not new. I have fully documented the steps I took in the past for my Fedora 22 systems in two blog posts: part1 and part2. This is a condensed continuation of those posts for Fedora 25.

Setting up System with LUKS + LVM + BTRFS

The manual steps for setting up the system are detailed in the part1 blog post from Fedora 22. This time around I have created a script that will quickly configure the system with LUKS + LVM + BTRFS. The script will need to be run in an Anaconda environment just like the manual steps were done in part1 last time.

You can easily enable ssh access to your Anaconda-booted machine by adding inst.sshd to the kernel command line arguments. After booting up you can scp the script over and then execute it to build the system. Please read over the script and modify it to your liking.

Alternatively, for an automated install I have embedded that same script into a kickstart file that you can use. The kickstart file doesn't really leverage Anaconda at all because it simply runs a %pre script and then reboots the box. It's basically just telling Anaconda to run a bash script, but in an automated way. None of the kickstart directives at the top of the kickstart file actually get used.
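The shape of such a kickstart file can be sketched as follows. This is a minimal, hypothetical skeleton, not the actual file from the repo; the script path and %pre options are placeholders:

```
# Minimal sketch of the kickstart structure described above.
# The directives up top are effectively ignored because all of the
# real work happens in %pre, after which the box simply reboots.
text
reboot

%pre --erroronfail --log=/tmp/pre-install.log
#!/bin/bash
# hypothetical path: the real setup script would be embedded or fetched here
bash /tmp/setup-luks-lvm-btrfs.sh
%end
```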

Installing and Configuring Snapper

After the system has booted for the first time, let's configure the system for doing snapshots. I still want to be able to track how much space each snapshot takes, so I'll go ahead and enable quota support on BTRFS. I covered how to do this in a previous post:

[root@localhost ~]# btrfs quota enable /
[root@localhost ~]# btrfs qgroup show /
qgroupid         rfer         excl 
--------         ----         ---- 
0/5         999.80MiB    999.80MiB

Next up is installing/configuring snapper. I am also going to install the dnf plugin for snapper so that rpm transactions will automatically get snapshotted:

[root@localhost ~]# dnf install -y snapper python3-dnf-plugins-extras-snapper
...
Complete!
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |         
[root@localhost ~]# snapper list-configs
Config | Subvolume
-------+----------
root   | /        
[root@localhost ~]# btrfs subvolume list /
ID 260 gen 44 top level 5 path .snapshots

So we used the snapper command to create a configuration for the BTRFS filesystem mounted at /. As part of this process, we can see from the btrfs subvolume list / command that snapper also created a .snapshots subvolume. This subvolume will be used to house the COW snapshots that are taken of the system.

Next, we'll add an entry to fstab so that, regardless of what subvolume we are actually booted into, we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab
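As a quick sanity check, the entry we just appended breaks down into the six standard fstab fields. This sketch parses a copy of the string rather than the live /etc/fstab:

```shell
# Split the fstab entry into its six fields (device, mount point,
# filesystem type, mount options, dump flag, fsck pass order).
line='/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0'
set -- $line
echo "device=$1 mount=$2 type=$3 opts=$4 dump=$5 pass=$6"
# → device=/dev/vgroot/lvroot mount=/.snapshots type=btrfs opts=subvol=.snapshots dump=0 pass=0
```

The subvol=.snapshots option is what pins this mount to the .snapshots subvolume no matter which subvolume is the current default.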

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description | Userdata
-------+---+-------+---------------------------------+------+---------+-------------+---------
single | 0 |       |                                 | root |         | current     |         
single | 1 |       | Mon 13 Feb 2017 12:50:51 AM UTC | root |         | BigBang     |         
[root@localhost ~]# btrfs subvolume list /
ID 260 gen 48 top level 5 path .snapshots
ID 261 gen 48 top level 260 path .snapshots/1/snapshot
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot called BigBang and then ran a btrfs subvolume list / to verify that a new snapshot was actually created. Notice at the top of the output that we ran a btrfs subvolume get-default /. This outputs the currently set default subvolume for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide we want to use one of the snapshots for rollback.

Since we took a snapshot let's go ahead and make some changes to the system by updating the kernel:

[root@localhost ~]# dnf update -y kernel
...
Complete!
[root@localhost ~]# rpm -q kernel
kernel-4.8.6-300.fc25.x86_64
kernel-4.9.8-201.fc25.x86_64
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
-------+---+-------+---------------------------------+------+---------+-------------------------------+---------
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Mon 13 Feb 2017 12:50:51 AM UTC | root |         | BigBang                       |         
single | 2 |       | Mon 13 Feb 2017 12:52:38 AM UTC | root | number  | /usr/bin/dnf update -y kernel |

So we updated the kernel and the snapper dnf plugin automatically created a snapshot for us. Let's reboot the system and see if the new kernel boots properly:

[root@localhost ~]# reboot 
...
[dustymabe@media ~]$ ssh root@192.168.122.177
Warning: Permanently added '192.168.122.177' (ECDSA) to the list of known hosts.
root@192.168.122.177's password: 
Last login: Mon Feb 13 00:41:40 2017 from 192.168.122.1
[root@localhost ~]# 
[root@localhost ~]# uname -r
4.9.8-201.fc25.x86_64
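A version-aware sort is a handy way to confirm which of the installed kernels is newest; the input below mirrors the rpm -q kernel output from the session above (on a live system you would pipe rpm -q kernel in directly):

```shell
# sort -V compares version strings numerically field-by-field,
# so 4.9.8 correctly sorts after 4.8.6; tail picks the newest.
printf '%s\n' \
  kernel-4.8.6-300.fc25.x86_64 \
  kernel-4.9.8-201.fc25.x86_64 | sort -V | tail -n1
# → kernel-4.9.8-201.fc25.x86_64
```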

Rolling Back

Say we don't like that new kernel. Let's go back to the earlier snapshot we made:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 3.)
Creating read-write snapshot of snapshot 1. (Snapshot 4.)
Setting default subvolume to snapshot 4.
[root@localhost ~]# reboot

snapper created a read-only snapshot of the current system and then a new read-write subvolume based on the snapshot we wanted to go back to. It then set the default subvolume to the newly created read-write subvolume. After reboot you'll be in the newly created read-write subvolume and exactly back in the state your system was in at the time the snapshot was created.
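Under the hood, the rollback is roughly equivalent to the following raw btrfs operations. This is only a sketch: the snapshot paths and subvolume ID are illustrative, taken from the session below, and snapper additionally records its own metadata, which this skips:

```shell
# Approximate raw-btrfs view of `snapper rollback 1` (illustrative paths/IDs):
btrfs subvolume snapshot -r / /.snapshots/3/snapshot                     # read-only copy of the running system
btrfs subvolume snapshot /.snapshots/1/snapshot /.snapshots/4/snapshot   # read-write copy of snapshot 1
btrfs subvolume set-default 264 /                                        # boot from snapshot 4 next time
```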

In our case, after reboot we should now be booted into snapshot 4 as indicated by the output of the snapper rollback command above and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 264 gen 66 top level 260 path .snapshots/4/snapshot
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description                   | Userdata
-------+---+-------+--------------------------+------+---------+-------------------------------+---------
single | 0 |       |                          | root |         | current                       |         
single | 1 |       | Mon Feb 13 00:50:51 2017 | root |         | BigBang                       |         
single | 2 |       | Mon Feb 13 00:52:38 2017 | root | number  | /usr/bin/dnf update -y kernel |         
single | 3 |       | Mon Feb 13 00:56:13 2017 | root |         |                               |         
single | 4 |       | Mon Feb 13 00:56:13 2017 | root |         |                               |         
[root@localhost ~]# ls /.snapshots/
1  2  3  4
[root@localhost ~]# btrfs subvolume list /
ID 260 gen 67 top level 5 path .snapshots
ID 261 gen 61 top level 260 path .snapshots/1/snapshot
ID 262 gen 53 top level 260 path .snapshots/2/snapshot
ID 263 gen 60 top level 260 path .snapshots/3/snapshot
ID 264 gen 67 top level 260 path .snapshots/4/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# uname -r 
4.8.6-300.fc25.x86_64
[root@localhost ~]# rpm -q kernel
kernel-4.8.6-300.fc25.x86_64

Enjoy!

Dusty

Fedora BTRFS+Snapper – The Fedora 24 Edition

History

In the past I have configured my personal computers to be able to snapshot and roll back the entire system. To do this I am leveraging the BTRFS filesystem, a tool called snapper, and a patched version of Fedora's grub2 package. The patches needed for grub2 come from the SUSE guys and are well documented in this git repo.

This setup is not new. I have fully documented the steps I took in the past for my Fedora 22 systems in two blog posts: part1 and part2. This is a condensed continuation of those posts for Fedora 24.

NOTE: I'm using Fedora 24 alpha, but everything should be the same for the released version of Fedora 24.

Setting up System with LUKS + LVM + BTRFS

The manual steps for setting up the system are detailed in the part1 blog post from Fedora 22. This time around I have created a script that will quickly configure the system with LUKS + LVM + BTRFS. The script will need to be run in an Anaconda environment just like the manual steps were done in part1 last time.

You can easily enable ssh access to your Anaconda-booted machine by adding inst.sshd to the kernel command line arguments. After booting up you can scp the script over and then execute it to build the system. Please read over the script and modify it to your liking.

Alternatively, for an automated install I have embedded that same script into a kickstart file that you can use. The kickstart file doesn't really leverage Anaconda at all because it simply runs a %pre script and then reboots the box. It's basically just telling Anaconda to run a bash script, but in an automated way. None of the kickstart directives at the top of the kickstart file actually get used.

Installing and Configuring Snapper

After the system has booted for the first time, let's configure the system for doing snapshots. I still want to be able to track how much space each snapshot takes, so I'll go ahead and enable quota support on BTRFS. I covered how to do this in a previous post:

[root@localhost ~]# btrfs quota enable /
[root@localhost ~]# btrfs qgroup show /
qgroupid         rfer         excl 
--------         ----         ---- 
0/5           1.08GiB      1.08GiB

Next up is installing/configuring snapper. I am also going to install the dnf plugin for snapper so that rpm transactions will automatically get snapshotted:

[root@localhost ~]# dnf install -y snapper python3-dnf-plugins-extras-snapper
...
Complete!
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |         
[root@localhost ~]# snapper list-configs
Config | Subvolume
-------+----------
root   | /        
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 57 top level 5 path .snapshots

So we used the snapper command to create a configuration for the BTRFS filesystem mounted at /. As part of this process, we can see from the btrfs subvolume list / command that snapper also created a .snapshots subvolume. This subvolume will be used to house the COW snapshots that are taken of the system.

Next, we'll work around a bug that is causing snapper to set the wrong SELinux context on the .snapshots directory:

[root@localhost ~]# restorecon -v /.snapshots/
restorecon reset /.snapshots context system_u:object_r:unlabeled_t:s0->system_u:object_r:snapperd_data_t:s0

Finally, we'll add an entry to fstab so that, regardless of what subvolume we are actually booted into, we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description | Userdata
-------+---+-------+---------------------------------+------+---------+-------------+---------
single | 0 |       |                                 | root |         | current     |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang     |         
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 64 top level 5 path .snapshots
ID 260 gen 64 top level 259 path .snapshots/1/snapshot
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot called BigBang and then ran a btrfs subvolume list / to verify that a new snapshot was actually created. Notice at the top of the output that we ran a btrfs subvolume get-default /. This outputs the currently set default subvolume for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide we want to use one of the snapshots for rollback.

Since we took a snapshot let's go ahead and make some changes to the system by updating the kernel:

[root@localhost ~]# dnf update -y kernel
...
Complete!
[root@localhost ~]# rpm -q kernel
kernel-4.5.0-0.rc7.git0.2.fc24.x86_64
kernel-4.5.2-300.fc24.x86_64
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
-------+---+-------+---------------------------------+------+---------+-------------------------------+---------
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang                       |         
single | 2 |       | Sat 23 Apr 2016 01:08:18 PM UTC | root | number  | /usr/bin/dnf update -y kernel |

So we updated the kernel and the snapper dnf plugin automatically created a snapshot for us. Let's reboot the system and see if the new kernel boots properly:

[root@localhost ~]# reboot 
...
[dustymabe@media ~]$ ssh root@192.168.122.188 
Warning: Permanently added '192.168.122.188' (ECDSA) to the list of known hosts.
root@192.168.122.188's password: 
Last login: Sat Apr 23 12:18:55 2016 from 192.168.122.1
[root@localhost ~]# 
[root@localhost ~]# uname -r
4.5.2-300.fc24.x86_64

Rolling Back

Say we don't like that new kernel. Let's go back to the earlier snapshot we made:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 3.)
Creating read-write snapshot of snapshot 1. (Snapshot 4.)
Setting default subvolume to snapshot 4.
[root@localhost ~]# reboot

snapper created a read-only snapshot of the current system and then a new read-write subvolume based on the snapshot we wanted to go back to. It then set the default subvolume to the newly created read-write subvolume. After reboot you'll be in the newly created read-write subvolume and exactly back in the state your system was in at the time the snapshot was created.

In our case, after reboot we should now be booted into snapshot 4 as indicated by the output of the snapper rollback command above and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 263 gen 87 top level 259 path .snapshots/4/snapshot
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
-------+---+-------+---------------------------------+------+---------+-------------------------------+---------
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang                       |         
single | 2 |       | Sat 23 Apr 2016 01:08:18 PM UTC | root | number  | /usr/bin/dnf update -y kernel |         
single | 3 |       | Sat 23 Apr 2016 01:17:43 PM UTC | root |         |                               |         
single | 4 |       | Sat 23 Apr 2016 01:17:43 PM UTC | root |         |                               |         
[root@localhost ~]# ls /.snapshots/
1  2  3  4
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 88 top level 5 path .snapshots
ID 260 gen 81 top level 259 path .snapshots/1/snapshot
ID 261 gen 70 top level 259 path .snapshots/2/snapshot
ID 262 gen 80 top level 259 path .snapshots/3/snapshot
ID 263 gen 88 top level 259 path .snapshots/4/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# uname -r
4.5.0-0.rc7.git0.2.fc24.x86_64
[root@localhost ~]# rpm -q kernel
kernel-4.5.0-0.rc7.git0.2.fc24.x86_64

Enjoy!

Dusty

Fedora BTRFS+Snapper PART 2: Full System Snapshot/Rollback

History

In part 1 of this series I discussed why I desired a computer setup where I could do full system snapshots and seamlessly roll back at will. I also gave an overview of how I went about setting up a system so it could take advantage of BTRFS and snapper to do full system snapshotting and recovery. In this final post of the series I will give an overview of how to get snapper installed and configured on the system and walk through using it to do a rollback.

Installing and Configuring Snapper

First things first, as part of this whole setup I want to be able to tell how much space each one of my snapshots is taking up. I covered how to do this in a previous post, but the way you do it is by enabling quota on the BTRFS filesystem:

[root@localhost ~]# btrfs quota enable /
[root@localhost ~]#
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
[root@localhost ~]# btrfs qgroup show /
WARNING: Rescan is running, qgroup data may be incorrect
qgroupid         rfer         excl
--------         ----         ----
0/5         975.90MiB    975.90MiB
0/258        16.00KiB     16.00KiB

You can see from the output that we currently have two subvolumes. One of them is the root subvolume while the other is a subvolume automatically created by systemd for systemd-nspawn container images.

Now that we have quota enabled let's get snapper installed and configured:

[root@localhost ~]# dnf install -y snapper
...
Complete!
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |
[root@localhost ~]# snapper list-configs
Config | Subvolume
-------+----------
root   | /
[root@localhost ~]#
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
ID 260 gen 83 top level 5 path .snapshots

So we used the snapper command to create a configuration for the BTRFS filesystem mounted at /. As part of this process, we can see from the btrfs subvolume list / command that snapper also created a .snapshots subvolume. This subvolume will be used to house the COW snapshots that are taken of the system.

The next thing we want to do is add an entry to fstab so that, regardless of what subvolume we are actually booted into, we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description | Userdata
-------+---+-------+--------------------------+------+---------+-------------+---------
single | 0 |       |                          | root |         | current     |
single | 1 |       | Tue Jul 14 23:07:42 2015 | root |         | BigBang     |
[root@localhost ~]#
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
ID 260 gen 90 top level 5 path .snapshots
ID 261 gen 88 top level 260 path .snapshots/1/snapshot
[root@localhost ~]#
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot called BigBang and then ran a btrfs subvolume list / to verify that a new snapshot was actually created. Notice at the top of the output that we ran a btrfs subvolume get-default /. This outputs the currently set default subvolume for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide we want to use one of the snapshots for rollback.

Since we took a snapshot let's go ahead and make some changes to the system:

[root@localhost ~]# dnf install -y htop
[root@localhost ~]# rpm -q htop
htop-1.0.3-4.fc22.x86_64
[root@localhost ~]#
[root@localhost ~]# snapper status 1..0  | grep htop
+..... /usr/bin/htop
+..... /usr/share/doc/htop
+..... /usr/share/doc/htop/AUTHORS
+..... /usr/share/doc/htop/COPYING
+..... /usr/share/doc/htop/ChangeLog
+..... /usr/share/doc/htop/README
+..... /usr/share/man/man1/htop.1.gz
+..... /usr/share/pixmaps/htop.png
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/checksum_data
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/checksum_type
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/command_line
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/from_repo
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/installed_by
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/reason
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/releasever

So we installed htop and then compared the current running system (0) with snapshot 1.

Rolling Back

Now that we have taken a previous snapshot and have since made a change to the system we can use the snapper rollback functionality to get back to the state the system was in before we made the change. Let's do the rollback to get back to the snapshot 1 BigBang state:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 2.)
Creating read-write snapshot of snapshot 1. (Snapshot 3.)
Setting default subvolume to snapshot 3.
[root@localhost ~]# reboot

As part of the rollback process you specify to snapper which snapshot you want to go back to. It then creates a read-only snapshot of the current system (in case you change your mind and want to get back to where you currently are) and then a new read-write subvolume based on the snapshot you specified. It then sets the default subvolume to the newly created read-write subvolume. After a reboot you will be booted into the new read-write subvolume and your state should be exactly as it was at the time you made the original snapshot.

In our case, after reboot we should now be booted into snapshot 3 as indicated by the output of the snapper rollback command above and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 263 gen 104 top level 260 path .snapshots/3/snapshot
[root@localhost ~]#
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description | Userdata
-------+---+-------+--------------------------+------+---------+-------------+---------
single | 0 |       |                          | root |         | current     |
single | 1 |       | Tue Jul 14 23:07:42 2015 | root |         | BigBang     |
single | 2 |       | Tue Jul 14 23:14:12 2015 | root |         |             |
single | 3 |       | Tue Jul 14 23:14:12 2015 | root |         |             |
[root@localhost ~]#
[root@localhost ~]# ls /.snapshots/
1  2  3
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
ID 260 gen 100 top level 5 path .snapshots
ID 261 gen 98 top level 260 path .snapshots/1/snapshot
ID 262 gen 97 top level 260 path .snapshots/2/snapshot
ID 263 gen 108 top level 260 path .snapshots/3/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# rpm -q htop
package htop is not installed

Bliss!!

Now in my case I like to have more descriptive notes on my snapshots so I'll go back now and give some notes for snapshots 2 and 3:

[root@localhost ~]# snapper modify --description "installed htop" 2
[root@localhost ~]# snapper modify --description "rollback to 1 - read/write" 3
[root@localhost ~]#
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description                | Userdata
-------+---+-------+--------------------------+------+---------+----------------------------+---------
single | 0 |       |                          | root |         | current                    |
single | 1 |       | Tue Jul 14 23:07:42 2015 | root |         | BigBang                    |
single | 2 |       | Tue Jul 14 23:14:12 2015 | root |         | installed htop             |
single | 3 |       | Tue Jul 14 23:14:12 2015 | root |         | rollback to 1 - read/write |

We can also see how much space (shared and exclusive) each of the snapshots is taking up:

[root@localhost ~]# btrfs qgroup show /
WARNING: Qgroup data inconsistent, rescan recommended
qgroupid         rfer         excl
--------         ----         ----
0/5           1.08GiB      7.53MiB
0/258        16.00KiB     16.00KiB
0/260        16.00KiB     16.00KiB
0/261         1.07GiB      2.60MiB
0/262         1.07GiB    740.00KiB
0/263         1.08GiB     18.91MiB

Now that is useful info: it tells you how much space you will recover when you delete snapshots in the future.
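To pull a particular number out of that output programmatically, one option is a tiny awk filter. This is a sketch: qgroup_excl is a hypothetical helper name, shown here against a pasted copy of the output above (on a live system you would pipe btrfs qgroup show / into it):

```shell
# Hypothetical helper: print the exclusive ("excl") column for a given
# qgroup id, i.e. the space you would get back by deleting that snapshot.
qgroup_excl() {
  awk -v id="$1" '$1 == id {print $3}'
}

printf '%s\n' \
  'qgroupid         rfer         excl' \
  '--------         ----         ----' \
  '0/262         1.07GiB    740.00KiB' \
  '0/263         1.08GiB     18.91MiB' | qgroup_excl 0/263
# → 18.91MiB
```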

Updating The Kernel

I mentioned in part 1 that I had to get a special rebuild of GRUB with some patches from the SUSE guys in order to get booting from the default subvolume to work. This was all needed so that I can update the kernel as normal and have the GRUB files that get used be the ones that are in the actual subvolume I am currently using. So let's test it out by doing a full system update (including a kernel update):

[root@localhost ~]# dnf update -y
...
Install    8 Packages
Upgrade  173 Packages
...
Complete!
[root@localhost ~]# rpm -q kernel
kernel-4.0.4-301.fc22.x86_64
kernel-4.0.7-300.fc22.x86_64
[root@localhost ~]#
[root@localhost ~]# btrfs qgroup show /
WARNING: Qgroup data inconsistent, rescan recommended
qgroupid         rfer         excl
--------         ----         ----
0/5           1.08GiB      7.53MiB
0/258        16.00KiB     16.00KiB
0/260        16.00KiB     16.00KiB
0/261         1.07GiB     11.96MiB
0/262         1.07GiB    740.00KiB
0/263         1.19GiB    444.35MiB

So we did a full system upgrade that upgraded 173 packages and installed a few others. We can see now that the current subvolume (snapshot 3 with ID 263) now has 444MiB of exclusive data. This makes sense since all of the other snapshots were from before the full system update.

Let's create a new snapshot that represents the state of the system right after we did the full system update and then reboot:

[root@localhost ~]# snapper create --description "full system upgrade"
[root@localhost ~]# reboot

After reboot we can now check to see if we have properly booted the recently installed kernel:

[root@localhost ~]# rpm -q kernel
kernel-4.0.4-301.fc22.x86_64
kernel-4.0.7-300.fc22.x86_64
[root@localhost ~]# uname -r
4.0.7-300.fc22.x86_64

Bliss again. Yay! And I'm done.

Enjoy!

Dusty

Fedora BTRFS+Snapper PART 1: System Preparation

The Problem

For some time now I have wanted a Linux desktop setup where I could run updates automatically and not worry about losing productivity if my system gets hosed from the update. My desired setup to achieve this has been a combination of snapper and BTRFS, but unfortunately the support on Fedora for full rollback isn't quite there.

In Fedora 22 support for rollback was added, but one final piece of the puzzle was missing that I needed in order to have a fully working setup: I needed GRUB to respect the default subvolume that is set on the BTRFS filesystem. In the past GRUB did use the default subvolume, but this behavior was removed in 82591fa (link).

With GRUB respecting the default subvolume I can include /boot/ just as a directory on my system (not as a separate subvolume) and it will be included in all of the snapshots that are created by snapper of the root filesystem.

In order to get this functionality I grabbed some of the patches from the SUSE guys and applied them to the Fedora GRUB rpm. All of the work and the resulting rpms can be found here.

System Preparation

So now I had a GRUB rpm that would work for me. The first step was to get my system up and running in a setup that I could then use snapper on top of. I mentioned before that I wanted /boot/ to be just a directory on the BTRFS filesystem. I also wanted it to be encrypted, as I have done in the past.

This means I have yet another funky setup that I'll basically need to install from scratch using Anaconda and a chroot environment.

After getting up and running in anaconda I then switched to a different virtual terminal and formatted my hard disk, set up an encrypted LUKS device, created a VG and two LVs, and finally a BTRFS filesystem:

[anaconda root@localhost ~]# fdisk /dev/sda <<EOF
o
n
p
1
2048

w
EOF
[anaconda root@localhost ~]# lsblk /dev/sda
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
`-sda1   8:1    0 465.8G  0 part
[anaconda root@localhost ~]# cryptsetup luksFormat /dev/sda1
[anaconda root@localhost ~]# cryptsetup luksOpen /dev/sda1 cryptodisk
[anaconda root@localhost ~]# vgcreate vgroot /dev/mapper/cryptodisk
[anaconda root@localhost ~]# lvcreate --size=4G --name lvswap vgroot
[anaconda root@localhost ~]# mkswap /dev/vgroot/lvswap
[anaconda root@localhost ~]# lvcreate -l 100%FREE --name lvroot vgroot
[anaconda root@localhost ~]# mkfs.btrfs /dev/vgroot/lvroot

NOTE: Most of the commands run above have truncated output for brevity.

The next step was to mount the filesystem and install software into it in a chrooted environment. Since the dnf binary isn't actually installed in the Anaconda environment by default, we first need to install it:

[anaconda root@localhost ~]# rpm -ivh --nodeps /run/install/repo/Packages/d/dnf-1.0.0-1.fc22.noarch.rpm
warning: /run/install/repo/Packages/d/dnf-1.0.0-1.fc22.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 8e1431d5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:dnf-1.0.0-1.fc22                 ################################# [100%]

Now we can "create" a repo file from the repo that is on the media and install the bare minimum (the filesystem rpm):

[anaconda root@localhost ~]# mount /dev/vgroot/lvroot /mnt/sysimage/
[anaconda root@localhost ~]# mkdir /etc/yum.repos.d
[anaconda root@localhost ~]# cat <<EOF > /etc/yum.repos.d/dvd.repo
[dvd]
name=dvd
baseurl=file:///run/install/repo
enabled=1
gpgcheck=0
EOF
[anaconda root@localhost ~]# dnf install -y --releasever=22 --installroot=/mnt/sysimage filesystem
...
Complete!

We installed only the filesystem rpm because many of the other rpms we are going to install will fail if some of the "special" filesystems aren't mounted. We'll go ahead and mount them now:

[anaconda root@localhost ~]# mount -v -o bind /dev /mnt/sysimage/dev/
mount: /dev bound on /mnt/sysimage/dev.
[anaconda root@localhost ~]# mount -v -o bind /run /mnt/sysimage/run/
mount: /run bound on /mnt/sysimage/run.
[anaconda root@localhost ~]# mount -v -t proc proc /mnt/sysimage/proc/
mount: proc mounted on /mnt/sysimage/proc.
[anaconda root@localhost ~]# mount -v -t sysfs sys /mnt/sysimage/sys/
mount: sys mounted on /mnt/sysimage/sys.
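
The four mounts above follow a fixed pattern, so they can be generated from a short helper. This is just an illustrative sketch (the `chroot_mount_cmds` name is made up; it prints the commands rather than running them, so you can eyeball the list before executing it):

```shell
# Hypothetical helper: print the mount commands needed to prepare a
# chroot target. Printing (rather than executing) doubles as a dry run.
chroot_mount_cmds() {
    target="$1"
    # Bind mounts share the host's /dev and /run with the chroot.
    for fs in dev run; do
        printf 'mount -o bind /%s %s/%s\n' "$fs" "$target" "$fs"
    done
    # proc and sysfs get fresh instances rather than bind mounts.
    printf 'mount -t proc proc %s/proc\n' "$target"
    printf 'mount -t sysfs sys %s/sys\n' "$target"
}

chroot_mount_cmds /mnt/sysimage
```

Piping the output to `sh` would perform the same four mounts shown above.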

Now we can install the rest of the software into the chroot environment:

[anaconda root@localhost ~]# cp /etc/yum.repos.d/dvd.repo /mnt/sysimage/etc/yum.repos.d/
[anaconda root@localhost ~]# dnf install -y --installroot=/mnt/sysimage --disablerepo=* --enablerepo=dvd @core @standard kernel btrfs-progs lvm2
...
Complete!

We can also install the "special" GRUB packages that I created and then get rid of the repo file because we won't need it any longer:

[anaconda root@localhost ~]# dnf install -y --installroot=/mnt/sysimage --disablerepo=* --enablerepo=dvd \
https://github.com/dustymabe/fedora-grub-boot-btrfs-default-subvolume/raw/master/rpmbuild/RPMS/x86_64/grub2-2.02-0.16.fc22.dusty.x86_64.rpm \
https://github.com/dustymabe/fedora-grub-boot-btrfs-default-subvolume/raw/master/rpmbuild/RPMS/x86_64/grub2-tools-2.02-0.16.fc22.dusty.x86_64.rpm
...
Complete!
[anaconda root@localhost ~]# rm /mnt/sysimage/etc/yum.repos.d/dvd.repo

Now we can do some minimal system configuration by chrooting into the system: setting up crypttab, setting up fstab, setting the root password, and setting the system up to relabel on boot:

[anaconda root@localhost ~]# chroot /mnt/sysimage
[anaconda root@localhost /]# ls -l /dev/disk/by-uuid/f0d889d8-5225-4d9d-9a89-edd387e65ab7
lrwxrwxrwx. 1 root root 10 Jul 14 02:24 /dev/disk/by-uuid/f0d889d8-5225-4d9d-9a89-edd387e65ab7 -> ../../sda1
[anaconda root@localhost /]# cat <<EOF > /etc/crypttab
cryptodisk /dev/disk/by-uuid/f0d889d8-5225-4d9d-9a89-edd387e65ab7 -
EOF
[anaconda root@localhost /]# cat <<EOF > /etc/fstab
/dev/vgroot/lvroot / btrfs defaults 1 1
/dev/vgroot/lvswap swap swap defaults 0 0
EOF
[anaconda root@localhost /]# passwd --stdin root <<< "password"
Changing password for user root.
passwd: all authentication tokens updated successfully.
[anaconda root@localhost /]# touch /.autorelabel
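
Rather than hand-copying the UUID from the `ls -l` output above, the crypttab line can be generated directly. A minimal sketch (the `crypttab_line` helper is made up for illustration; `blkid -s UUID -o value` is standard util-linux usage):

```shell
# Hypothetical helper: format a crypttab entry for a given LUKS UUID.
# Referring to the device by UUID keeps the entry valid even if the
# kernel enumerates disks in a different order on a later boot.
crypttab_line() {
    printf 'cryptodisk UUID=%s -\n' "$1"
}

# On the real system the UUID comes straight from blkid, e.g.:
#   crypttab_line "$(blkid -s UUID -o value /dev/sda1)" > /etc/crypttab
crypttab_line "f0d889d8-5225-4d9d-9a89-edd387e65ab7"
```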

Finally, we configure and install GRUB on sda and use dracut to generate a ramdisk that has all the required modules:

[anaconda root@localhost /]# echo GRUB_ENABLE_CRYPTODISK=y >> /etc/default/grub
[anaconda root@localhost /]# echo SUSE_BTRFS_SNAPSHOT_BOOTING=true >> /etc/default/grub
[anaconda root@localhost /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29465: /usr/sbin/grub2-probe
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29465: /usr/sbin/grub2-probe
Found linux image: /boot/vmlinuz-4.0.4-301.fc22.x86_64
Found initrd image: /boot/initramfs-4.0.4-301.fc22.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-225efda374c043e3886d349ef724c79e
Found initrd image: /boot/initramfs-0-rescue-225efda374c043e3886d349ef724c79e.img
done
[anaconda root@localhost /]# grub2-install /dev/sda
Installing for i386-pc platform.
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 7 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 8 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
Installation finished. No error reported.
[anaconda root@localhost /]# dracut --kver 4.0.4-301.fc22.x86_64 --force
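
Before rebooting it's worth confirming that the regenerated ramdisk actually picked up what we depend on at boot. A sketch of such a check (the `check_initrd_modules` function is made up; the module names are the standard dracut ones, so verify against your own `lsinitrd` output):

```shell
# Hypothetical check: scan an initramfs listing (as produced by
# `lsinitrd <image>`) for the pieces needed to unlock LUKS, activate
# LVM, and mount BTRFS at boot. Reads the listing on stdin.
check_initrd_modules() {
    listing=$(cat)
    for mod in crypt lvm btrfs; do
        case "$listing" in
            *"$mod"*) ;;  # found, keep going
            *) echo "missing: $mod"; return 1 ;;
        esac
    done
    echo "all modules present"
}

# Real usage would be something like:
#   lsinitrd /boot/initramfs-4.0.4-301.fc22.x86_64.img | check_initrd_modules
```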

Now we can exit the chroot, unmount all filesystems and reboot into our new system:

[anaconda root@localhost /]# exit
exit
[anaconda root@localhost ~]# umount /mnt/sysimage/{dev,run,sys,proc}
[anaconda root@localhost ~]# umount /mnt/sysimage/
[anaconda root@localhost ~]# reboot

To Be Continued

So we have set up the system to have a single BTRFS filesystem (no subvolumes) on top of LVM on top of LUKS and with a custom GRUB that respects the configured default subvolume on the BTRFS filesystem. Here is what an lsblk shows:

[root@localhost ~]# lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sda
NAME                TYPE  FSTYPE      MOUNTPOINT
sda                 disk
`-sda1              part  crypto_LUKS
  `-cryptodisk      crypt LVM2_member
    |-vgroot-lvswap lvm   swap        [SWAP]
    `-vgroot-lvroot lvm   btrfs       /

In a later post I will configure snapper on this system and show how rollbacks can be used to simply revert changes that have been made.

Dusty

Encrypting More: /boot Joins The Party

The installers for most major Linux distros make it easy to select encryption and end up with encrypted block devices. This is great! The not-so-great part is that the Linux kernel and the initial ramdisk typically aren't invited to the party; they are left sitting in a separate, unencrypted /boot partition. Historically it has been necessary to leave /boot unencrypted because bootloaders didn't support decrypting block devices. However, there are some dangers to leaving the bootloader and ramdisks unencrypted (see this post).

Newer versions of GRUB do support booting from encrypted block devices (a reference here). This means that we can theoretically boot from a device that is encrypted. And the theory is right!

While the installers don't make it easy to actually install in this setup (without a separate boot partition), it is pretty easy to convert an existing system to use it. I'll step through doing this on a Fedora 22 system (I have done this on Fedora 21 in the past).

The typical disk configuration (with crypto selected) from a vanilla install of Fedora 22 looks like this:

[root@localhost ~]# lsblk -i -o NAME,TYPE,MOUNTPOINT
NAME                                          TYPE  MOUNTPOINT
sda                                           disk
|-sda1                                        part  /boot
`-sda2                                        part
  `-luks-cb85c654-7561-48a3-9806-f8bbceaf3973 crypt
    |-fedora-swap                             lvm   [SWAP]
    `-fedora-root                             lvm   /

What we need to do is copy the files from the /boot partition into the /boot directory on the root filesystem. Since the mounted /boot partition hides that directory, we expose it with a bind mount and copy like so:

[root@localhost ~]# mount --bind / /mnt/
[root@localhost ~]# cp -a /boot/* /mnt/boot/
[root@localhost ~]# cp -a /boot/.vmlinuz-* /mnt/boot/
[root@localhost ~]# diff -ur /boot/ /mnt/boot/
[root@localhost ~]# umount /mnt

This copied the files over and verified the contents matched. The next step is to unmount the partition and remove its entry from /etc/fstab. Since we'll no longer be using that partition, we don't want kernel updates to be written to the wrong place:

[root@localhost ~]# umount /boot
[root@localhost ~]# sed -i -e '/\/boot/d' /etc/fstab
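
If you'd rather preview the edit before touching /etc/fstab, the same filter can be wrapped in a function and run against the file contents first. A small sketch (the `strip_boot_entry` name is made up; the sed expression simply deletes any line mentioning /boot):

```shell
# Hypothetical helper: drop any /boot entry from fstab content on stdin.
strip_boot_entry() {
    sed '/\/boot/d'
}

# Preview against the real file with: strip_boot_entry < /etc/fstab
printf 'UUID=abcd /boot ext4 defaults 1 2\n/dev/mapper/fedora-root / ext4 defaults 1 1\n' | strip_boot_entry
```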

The next step is to write out a new grub.cfg that loads the appropriate modules for loading from the encrypted disk:

[root@localhost ~]# cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.backup
[root@localhost ~]# grub2-mkconfig > /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.0.4-301.fc22.x86_64
Found initrd image: /boot/initramfs-4.0.4-301.fc22.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-3f9d22f02d854d9a857066570127584a
Found initrd image: /boot/initramfs-0-rescue-3f9d22f02d854d9a857066570127584a.img
done
[root@localhost ~]# cat /boot/grub2/grub.cfg | grep cryptodisk
        insmod cryptodisk
        insmod cryptodisk

And finally we need to reinstall the GRUB bootloader with GRUB_ENABLE_CRYPTODISK=y set in /etc/default/grub:

[root@localhost ~]# echo GRUB_ENABLE_CRYPTODISK=y >> /etc/default/grub
[root@localhost ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/swap rd.lvm.lv=fedora/root rd.luks.uuid=luks-cb85c654-7561-48a3-9806-f8bbceaf3973 rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_CRYPTODISK=y
[root@localhost ~]# grub2-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.

After a reboot you now get your grub prompt:

[screenshot: the GRUB passphrase prompt at boot]

Unfortunately this does mean that you have to type your password twice on boot, but at least your system is more encrypted than it was before. This may not completely get rid of the attack vector described in this post, as part of the bootloader is still unencrypted, but now the GRUB stage 2 and the kernel/ramdisk are encrypted, which should make an attack much harder.

Happy Encrypting!

Dusty