F23 Cloud Base Test Day September 8th!

cross posted from this fedora magazine post

Hey everyone! Fedora 23 has been baking in the oven. The Fedora Cloud WG has elected to do a temperature check on September 8th.

For this test day we are going to concentrate on the base image. We will have vagrant boxes (see this page for how to set up your machine), qcow images, raw images, and AWS EC2 images. In a later test day we will focus on the Atomic images and Docker images.

The landing page for the Fedora Cloud Base test day is here. If you're available to test on the test day (or any other time) please go there and fill out your name and test results. Also, don't forget that you can use some of our new projects testcloud (copr link) and/or Tunir to aid in testing.

Happy testing and we hope to see you on test day!

Dusty

Installing/Starting Systemd Services Using Cloud-Init

Intro

Using cloud-init to bootstrap cloud instances and install custom software/services is common practice today. One thing you often want to do is install the software, enable it to start on boot, and then start it immediately so that you can begin using it without a reboot.

The Problem

Actually starting a service can be tricky, though, because when cloud-init executes your configuration/scripts you are already running inside a systemd unit while trying to start another systemd unit.

To illustrate this I decided to start a Fedora 22 cloud instance and install/start docker as part of bringup. The instance I started had the following user-data:

#cloud-config
packages:
  - docker
runcmd:
  - [ systemctl, daemon-reload ]
  - [ systemctl, enable, docker.service ]
  - [ systemctl, start, docker.service ]
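
If you want to try user-data like this without a real cloud, one option (just a sketch, not part of the original test) is cloud-init's NoCloud datasource: put the user-data above next to a small meta-data file, pack both into an ISO labeled cidata, and attach that ISO to the VM as a CD-ROM. The instance-id value below is arbitrary; only the file names and the volume label matter:

$ echo 'instance-id: f22-cloudinit-test' > meta-data
$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data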

After the system came up and some time had passed (it takes a minute for the package to get installed), here is what we are left with:

[root@f22 ~]# pstree -asp 925
systemd,1 --switched-root --system --deserialize 21
  `-cloud-init,895 /usr/bin/cloud-init modules --mode=final
      `-runcmd,898 /var/lib/cloud/instance/scripts/runcmd
          `-systemctl,925 start docker.service
[root@f22 ~]# systemctl status | head -n 5
● f22
    State: starting
     Jobs: 5 queued
   Failed: 0 units
    Since: Tue 2015-08-04 00:49:13 UTC; 30min ago

Basically the systemctl start docker.service command has been run, but it blocks until the start job finishes, and that job never finishes. As can be seen from the output above, it's been 30 minutes and the system is still starting, with 5 jobs queued.

I suspect this is because the start command queues the start of the docker service which then waits to be scheduled. It doesn't ever get scheduled, though, because the cloud-final.service unit needs to complete first.
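
If you want to see the deadlock for yourself, you can log in on another terminal (SSH still works) while the boot is hanging and look at the queued jobs. This is just a way to poke at it, not part of the original debugging session; expect to see docker.service sitting in a waiting start job while cloud-final.service is still running:

[root@f22 ~]# systemctl list-jobs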

The Solution

Is there a way to get the desired behavior? There is an option to systemctl that will cause it to not block during an operation, but rather just queue the action and exit. This is the --no-block option. From the systemctl man page:

--no-block
    Do not synchronously wait for the requested operation
    to finish. If this is not specified, the job will be
    verified, enqueued and systemctl will wait until it is
    completed. By passing this argument, it is only
    verified and enqueued.

To test this out I just added --no-block to the user-data file that was used previously:

#cloud-config
packages:
  - docker
runcmd:
  - [ systemctl, daemon-reload ]
  - [ systemctl, enable, docker.service ]
  - [ systemctl, start, --no-block, docker.service ]

And... after booting the instance we get a running service:

[root@f22 ~]# systemctl is-active docker
active

Cheers!

Dusty

Fedora BTRFS+Snapper PART 2: Full System Snapshot/Rollback

History

In part 1 of this series I discussed why I desired a computer setup where I can do full system snapshots so I could seamlessly roll back at will. I also gave an overview of how I went about setting up a system so it could take advantage of BTRFS and snapper to do full system snapshotting and recovery. In this final post of the series I will give an overview of how to get snapper installed and configured on the system and walk through using it to do a rollback.

Installing and Configuring Snapper

First things first: as part of this whole setup I want to be able to tell how much space each one of my snapshots is taking up. I covered how to do this in a previous post, but the way you do it is by enabling quota on the BTRFS filesystem:

[root@localhost ~]# btrfs quota enable /      
[root@localhost ~]# 
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
[root@localhost ~]# btrfs qgroup show /
WARNING: Rescan is running, qgroup data may be incorrect
qgroupid         rfer         excl 
--------         ----         ---- 
0/5         975.90MiB    975.90MiB 
0/258        16.00KiB     16.00KiB

You can see from the output that we currently have two subvolumes. One of them is the root subvolume while the other is a subvolume automatically created by systemd for systemd-nspawn container images.

Now that we have quota enabled let's get snapper installed and configured:

[root@localhost ~]# dnf install -y snapper
...
Complete!
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |         
[root@localhost ~]# snapper list-configs
Config | Subvolume
-------+----------
root   | /        
[root@localhost ~]#
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
ID 260 gen 83 top level 5 path .snapshots

So we used the snapper command to create a configuration for the BTRFS filesystem mounted at /. As part of this process we can see from the btrfs subvolume list / output that snapper also created a .snapshots subvolume. This subvolume will house the COW snapshots that are taken of the system.
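
If you are curious what defaults create-config chose, the generated configuration can be inspected (and tweaked, for example to adjust the automatic cleanup limits). The exact keys and values depend on your snapper version, so treat this as a sketch:

[root@localhost ~]# snapper -c root get-config
[root@localhost ~]# cat /etc/snapper/configs/root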

The next thing we want to do is add an entry to fstab to make it so that regardless of what subvolume we are actually booted into we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description | Userdata
-------+---+-------+--------------------------+------+---------+-------------+---------
single | 0 |       |                          | root |         | current     |         
single | 1 |       | Tue Jul 14 23:07:42 2015 | root |         | BigBang     |
[root@localhost ~]# 
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
ID 260 gen 90 top level 5 path .snapshots
ID 261 gen 88 top level 260 path .snapshots/1/snapshot
[root@localhost ~]# 
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot, called BigBang, and then ran btrfs subvolume list / to verify that a new snapshot subvolume was actually created. Notice at the top of the output that we also ran btrfs subvolume get-default /. This shows which subvolume is currently set as the default for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide to use one of the snapshots for a rollback.

Since we took a snapshot let's go ahead and make some changes to the system:

[root@localhost ~]# dnf install -y htop
[root@localhost ~]# rpm -q htop
htop-1.0.3-4.fc22.x86_64
[root@localhost ~]# 
[root@localhost ~]# snapper status 1..0  | grep htop
+..... /usr/bin/htop
+..... /usr/share/doc/htop
+..... /usr/share/doc/htop/AUTHORS
+..... /usr/share/doc/htop/COPYING
+..... /usr/share/doc/htop/ChangeLog
+..... /usr/share/doc/htop/README
+..... /usr/share/man/man1/htop.1.gz
+..... /usr/share/pixmaps/htop.png
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/checksum_data
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/checksum_type
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/command_line
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/from_repo
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/installed_by
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/reason
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/releasever

So we installed htop and then compared the current running system (0) with snapshot 1.
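
snapper status only tells us which files changed. If you want to see the actual content differences you can also use snapper diff between the same pair of snapshots, either for everything (the output can get long) or for a single path. This is just an illustrative aside; the file shown is one of the htop files from above:

[root@localhost ~]# snapper diff 1..0
[root@localhost ~]# snapper diff 1..0 /usr/share/doc/htop/README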

Rolling Back

Now that we have taken a snapshot and have since made a change to the system, we can use snapper's rollback functionality to get back to the state the system was in before we made the change. Let's do the rollback to get back to the snapshot 1 BigBang state:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 2.)
Creating read-write snapshot of snapshot 1. (Snapshot 3.)
Setting default subvolume to snapshot 3.
[root@localhost ~]# reboot

As part of the rollback process you specify to snapper which snapshot you want to go back to. It then creates a read-only snapshot of the current system (in case you change your mind and want to get back to where you are now) and a new read-write subvolume based on the snapshot you specified. Finally, it sets that new read-write subvolume as the default subvolume. After a reboot you will be booted into the new read-write subvolume and your state should be exactly as it was at the time you took the original snapshot.
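
If you're paranoid (like me) you can double check that the default subvolume really was switched before pulling the trigger on the reboot. It should already point at .snapshots/3/snapshot even though we are still running from the old root:

[root@localhost ~]# btrfs subvolume get-default /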

In our case, after reboot we should now be booted into snapshot 3 as indicated by the output of the snapper rollback command above and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 263 gen 104 top level 260 path .snapshots/3/snapshot
[root@localhost ~]# 
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description | Userdata
-------+---+-------+--------------------------+------+---------+-------------+---------
single | 0 |       |                          | root |         | current     |         
single | 1 |       | Tue Jul 14 23:07:42 2015 | root |         | BigBang     |         
single | 2 |       | Tue Jul 14 23:14:12 2015 | root |         |             |         
single | 3 |       | Tue Jul 14 23:14:12 2015 | root |         |             |         
[root@localhost ~]# 
[root@localhost ~]# ls /.snapshots/
1  2  3
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
ID 260 gen 100 top level 5 path .snapshots
ID 261 gen 98 top level 260 path .snapshots/1/snapshot
ID 262 gen 97 top level 260 path .snapshots/2/snapshot
ID 263 gen 108 top level 260 path .snapshots/3/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# rpm -q htop
package htop is not installed

Bliss!!

Now in my case I like to have more descriptive notes on my snapshots so I'll go back now and give some notes for snapshots 2 and 3:

[root@localhost ~]# snapper modify --description "installed htop" 2
[root@localhost ~]# snapper modify --description "rollback to 1 - read/write" 3 
[root@localhost ~]# 
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description                | Userdata
-------+---+-------+--------------------------+------+---------+----------------------------+---------
single | 0 |       |                          | root |         | current                    |         
single | 1 |       | Tue Jul 14 23:07:42 2015 | root |         | BigBang                    |         
single | 2 |       | Tue Jul 14 23:14:12 2015 | root |         | installed htop             |         
single | 3 |       | Tue Jul 14 23:14:12 2015 | root |         | rollback to 1 - read/write |

We can also see how much space (shared and exclusive) each of the snapshots is taking up:

[root@localhost ~]# btrfs qgroup show / 
WARNING: Qgroup data inconsistent, rescan recommended
qgroupid         rfer         excl 
--------         ----         ---- 
0/5           1.08GiB      7.53MiB 
0/258        16.00KiB     16.00KiB 
0/260        16.00KiB     16.00KiB 
0/261         1.07GiB      2.60MiB 
0/262         1.07GiB    740.00KiB 
0/263         1.08GiB     18.91MiB

Now that is useful info so you can know how much space you will be recovering when you delete snapshots in the future.
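
When you decide a snapshot has outlived its usefulness, deleting it (and reclaiming its exclusive space) is a one-liner. For example, if we no longer cared about the pre-rollback state captured in snapshot 2 we could do something like the following and then re-check the qgroup numbers (don't run this if you still want that snapshot; it's only a sketch):

[root@localhost ~]# snapper delete 2
[root@localhost ~]# btrfs qgroup show /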

Updating The Kernel

I mentioned in part 1 that I had to get a special rebuild of GRUB with some patches from the SUSE guys in order to get booting from the default subvolume to work. This was all needed so that I can update the kernel as normal and have the GRUB files that get used be the ones that are in the actual subvolume I am currently using. So let's test it out by doing a full system update (including a kernel update):

[root@localhost ~]# dnf update -y
...
Install    8 Packages
Upgrade  173 Packages
...
Complete!
[root@localhost ~]# rpm -q kernel
kernel-4.0.4-301.fc22.x86_64
kernel-4.0.7-300.fc22.x86_64
[root@localhost ~]# 
[root@localhost ~]# btrfs qgroup show /
WARNING: Qgroup data inconsistent, rescan recommended
qgroupid         rfer         excl 
--------         ----         ---- 
0/5           1.08GiB      7.53MiB 
0/258        16.00KiB     16.00KiB 
0/260        16.00KiB     16.00KiB 
0/261         1.07GiB     11.96MiB 
0/262         1.07GiB    740.00KiB 
0/263         1.19GiB    444.35MiB

So we did a full system upgrade that upgraded 173 packages and installed a few others. We can see that the current subvolume (snapshot 3 with ID 263) now has 444MiB of exclusive data. This makes sense, since all of the other snapshots were taken before the full system update.

Let's create a new snapshot that represents the state of the system right after we did the full system update and then reboot:

[root@localhost ~]# snapper create --description "full system upgrade"
[root@localhost ~]# reboot

After reboot we can now check to see if we have properly booted the recently installed kernel:

[root@localhost ~]# rpm -q kernel
kernel-4.0.4-301.fc22.x86_64
kernel-4.0.7-300.fc22.x86_64
[root@localhost ~]# uname -r
4.0.7-300.fc22.x86_64

Bliss again. Yay! And I'm Done.

Enjoy!

Dusty

Fedora BTRFS+Snapper PART 1: System Preparation

The Problem

For some time now I have wanted a linux desktop setup where I could run updates automatically and not worry about losing productivity if my system gets hosed from the update. My desired setup to achieve this has been a combination of snapper and BTRFS, but unfortunately the support on Fedora for full rollback isn't quite there.

In Fedora 22 the support for rollback was added, but there is one final piece of the puzzle missing that I need in order to have a fully working setup: GRUB needs to respect the default subvolume that is set on the BTRFS filesystem. In the past GRUB did use the default subvolume, but this behavior was removed in 82591fa (link).

With GRUB respecting the default subvolume I can include /boot/ as just a directory on my system (not as a separate subvolume), and it will be included in all of the snapshots that snapper creates of the root filesystem.

In order to get this functionality I grabbed some of the patches from the SUSE guys and applied them to the Fedora GRUB rpm. All of the work and the resulting rpms can be found here.

System Preparation

So now I had a GRUB rpm that would work for me. The next step was to get my system up and running in a setup that I could then use snapper on top of. I mentioned before that I wanted to put /boot/ in as just a directory on the BTRFS filesystem. I also wanted it to be encrypted, as I have done in the past.

This means I have yet another funky setup, and I'll need to basically install it from scratch using Anaconda and a chroot environment.

After getting up and running in Anaconda I switched to a different virtual terminal and formatted my hard disk, set up an encrypted LUKS device, created a VG and two LVs, and finally created a BTRFS filesystem:

[anaconda root@localhost ~]# fdisk /dev/sda <<EOF
o
n
p
1
2048

w
EOF
[anaconda root@localhost ~]# lsblk /dev/sda
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk 
`-sda1   8:1    0 465.8G  0 part
[anaconda root@localhost ~]# cryptsetup luksFormat /dev/sda1           
[anaconda root@localhost ~]# cryptsetup luksOpen /dev/sda1 cryptodisk
[anaconda root@localhost ~]# vgcreate vgroot /dev/mapper/cryptodisk
[anaconda root@localhost ~]# lvcreate --size=4G --name lvswap vgroot
[anaconda root@localhost ~]# mkswap /dev/vgroot/lvswap
[anaconda root@localhost ~]# lvcreate -l 100%FREE --name lvroot vgroot
[anaconda root@localhost ~]# mkfs.btrfs /dev/vgroot/lvroot

NOTE: Most of the commands run above have truncated output for brevity.

The next step was to mount the filesystem and install software into the filesystem in a chrooted environment. Since the dnf binary isn't actually installed in the anaconda environment by default we first need to install it:

[anaconda root@localhost ~]# rpm -ivh --nodeps /run/install/repo/Packages/d/dnf-1.0.0-1.fc22.noarch.rpm
warning: /run/install/repo/Packages/d/dnf-1.0.0-1.fc22.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 8e1431d5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:dnf-1.0.0-1.fc22                 ################################# [100%]

Now we can "create" a repo file from the repo that is on the media and install the bare minimum (the filesystem rpm):

[anaconda root@localhost ~]# mount /dev/vgroot/lvroot /mnt/sysimage/
[anaconda root@localhost ~]# mkdir /etc/yum.repos.d
[anaconda root@localhost ~]# cat <<EOF > /etc/yum.repos.d/dvd.repo
[dvd]
name=dvd
baseurl=file:///run/install/repo
enabled=1
gpgcheck=0
EOF
[anaconda root@localhost ~]# dnf install -y --releasever=22 --installroot=/mnt/sysimage filesystem
...
Complete!

The reason we only installed the filesystem rpm is because a lot of the other rpms we are going to install will fail if some of the "special" directories aren't mounted. We'll go ahead and mount them now:

[anaconda root@localhost ~]# mount -v -o bind /dev /mnt/sysimage/dev/
mount: /dev bound on /mnt/sysimage/dev.
[anaconda root@localhost ~]# mount -v -o bind /run /mnt/sysimage/run/
mount: /run bound on /mnt/sysimage/run.
[anaconda root@localhost ~]# mount -v -t proc proc /mnt/sysimage/proc/ 
mount: proc mounted on /mnt/sysimage/proc.
[anaconda root@localhost ~]# mount -v -t sysfs sys /mnt/sysimage/sys/
mount: sys mounted on /mnt/sysimage/sys.

Now we can install the rest of the software into the chroot environment:

[anaconda root@localhost ~]# cp /etc/yum.repos.d/dvd.repo /mnt/sysimage/etc/yum.repos.d/
[anaconda root@localhost ~]# dnf install -y --installroot=/mnt/sysimage --disablerepo=* --enablerepo=dvd @core @standard kernel btrfs-progs lvm2
...
Complete!

We can also install the "special" GRUB packages that I created and then get rid of the repo file because we won't need it any longer:

[anaconda root@localhost ~]# dnf install -y --installroot=/mnt/sysimage --disablerepo=* --enablerepo=dvd \
https://github.com/dustymabe/fedora-grub-boot-btrfs-default-subvolume/raw/master/rpmbuild/RPMS/x86_64/grub2-2.02-0.16.fc22.dusty.x86_64.rpm \
https://github.com/dustymabe/fedora-grub-boot-btrfs-default-subvolume/raw/master/rpmbuild/RPMS/x86_64/grub2-tools-2.02-0.16.fc22.dusty.x86_64.rpm
...
Complete!
[anaconda root@localhost ~]# rm /mnt/sysimage/etc/yum.repos.d/dvd.repo

Now we can do some minimal system configuration by chrooting into the system: setting up crypttab, setting up fstab, setting the root password, and setting up the system to do an SELinux relabel on boot:

[anaconda root@localhost ~]# chroot /mnt/sysimage
[anaconda root@localhost /]# ls -l /dev/disk/by-uuid/f0d889d8-5225-4d9d-9a89-edd387e65ab7 
lrwxrwxrwx. 1 root root 10 Jul 14 02:24 /dev/disk/by-uuid/f0d889d8-5225-4d9d-9a89-edd387e65ab7 -> ../../sda1
[anaconda root@localhost /]# cat <<EOF > /etc/crypttab
cryptodisk /dev/disk/by-uuid/f0d889d8-5225-4d9d-9a89-edd387e65ab7 -
EOF
[anaconda root@localhost /]# cat <<EOF > /etc/fstab
/dev/vgroot/lvroot / btrfs defaults 1 1
/dev/vgroot/lvswap swap swap defaults 0 0
EOF
[anaconda root@localhost /]# passwd --stdin root <<< "password"
Changing password for user root.
passwd: all authentication tokens updated successfully.
[anaconda root@localhost /]# touch /.autorelabel

Finally, configure and install GRUB on sda and use dracut to generate a ramdisk that has all the required modules:

[anaconda root@localhost /]# echo GRUB_ENABLE_CRYPTODISK=y >> /etc/default/grub
[anaconda root@localhost /]# echo SUSE_BTRFS_SNAPSHOT_BOOTING=true >> /etc/default/grub
[anaconda root@localhost /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29465: /usr/sbin/grub2-probe
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29465: /usr/sbin/grub2-probe
Found linux image: /boot/vmlinuz-4.0.4-301.fc22.x86_64
Found initrd image: /boot/initramfs-4.0.4-301.fc22.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-225efda374c043e3886d349ef724c79e
Found initrd image: /boot/initramfs-0-rescue-225efda374c043e3886d349ef724c79e.img
done
[anaconda root@localhost /]# grub2-install /dev/sda
Installing for i386-pc platform.
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 7 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 8 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
Installation finished. No error reported.
[anaconda root@localhost /]# dracut --kver 4.0.4-301.fc22.x86_64 --force

Now we can exit the chroot, unmount all filesystems and reboot into our new system:

[anaconda root@localhost /]# exit
exit
[anaconda root@localhost ~]# umount /mnt/sysimage/{dev,run,sys,proc}
[anaconda root@localhost ~]# umount /mnt/sysimage/
[anaconda root@localhost ~]# reboot

To Be Continued

So we have set up the system with a single BTRFS filesystem (no subvolumes) on top of LVM on top of LUKS, and with a custom GRUB that respects the configured default subvolume on the BTRFS filesystem. Here is what lsblk shows:

[root@localhost ~]# lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sda
NAME                TYPE  FSTYPE      MOUNTPOINT
sda                 disk              
`-sda1              part  crypto_LUKS 
  `-cryptodisk      crypt LVM2_member 
    |-vgroot-lvswap lvm   swap        [SWAP]
    `-vgroot-lvroot lvm   btrfs       /

In a later post I will configure snapper on this system and show how rollbacks can be used to simply revert changes that have been made.

Dusty

Encrypting More: /boot Joins The Party

Typically the installers for the major linux distros make it easy to select encryption as an option so that you end up with encrypted block devices. This is great! The not so great part is that the linux kernel and the initial ramdisk aren't typically invited to the party; they are left sitting in a separate and unencrypted /boot partition. Historically it has been necessary to leave /boot unencrypted because bootloaders didn't support decrypting block devices. However, there are some dangers to leaving the bootloader and ramdisks unencrypted (see this post).

Newer versions of GRUB do support booting from encrypted block devices (a reference here). This means that we can theoretically boot from a device that is encrypted. And the theory is right!

While the installers don't make it easy to actually install in this setup (without a separate boot partition), it is pretty easy to convert an existing system to use it. I'll step through doing this on a Fedora 22 system (I have done this on Fedora 21 in the past).

The typical disk configuration (with crypto selected) from a vanilla install of Fedora 22 looks like this:

[root@localhost ~]# lsblk -i -o NAME,TYPE,MOUNTPOINT
NAME                                          TYPE  MOUNTPOINT
sda                                           disk  
|-sda1                                        part  /boot
`-sda2                                        part  
  `-luks-cb85c654-7561-48a3-9806-f8bbceaf3973 crypt 
    |-fedora-swap                             lvm   [SWAP]
    `-fedora-root                             lvm   /

What we need to do is copy the files from the /boot partition into the /boot directory on the root filesystem. We can do this easily with a bind mount like so:

[root@localhost ~]# mount --bind / /mnt/
[root@localhost ~]# cp -a /boot/* /mnt/boot/
[root@localhost ~]# cp -a /boot/.vmlinuz-* /mnt/boot/
[root@localhost ~]# diff -ur /boot/ /mnt/boot/
[root@localhost ~]# umount /mnt 

This copied the files over and verified the contents matched. The next step is to unmount the partition and remove its entry from /etc/fstab. Since we'll no longer be using that partition, we don't want kernel updates to be written to the wrong place:

[root@localhost ~]# umount /boot
[root@localhost ~]# sed -i -e '/\/boot/d' /etc/fstab

The next step is to write out a new grub.cfg that loads the appropriate modules for loading from the encrypted disk:

[root@localhost ~]# cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.backup
[root@localhost ~]# grub2-mkconfig > /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.0.4-301.fc22.x86_64
Found initrd image: /boot/initramfs-4.0.4-301.fc22.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-3f9d22f02d854d9a857066570127584a
Found initrd image: /boot/initramfs-0-rescue-3f9d22f02d854d9a857066570127584a.img
done
[root@localhost ~]# cat /boot/grub2/grub.cfg | grep cryptodisk
        insmod cryptodisk
        insmod cryptodisk

And finally we need to reinstall the GRUB bootloader with GRUB_ENABLE_CRYPTODISK=y set in /etc/default/grub:

[root@localhost ~]# echo GRUB_ENABLE_CRYPTODISK=y >> /etc/default/grub
[root@localhost ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/swap rd.lvm.lv=fedora/root rd.luks.uuid=luks-cb85c654-7561-48a3-9806-f8bbceaf3973 rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_CRYPTODISK=y
[root@localhost ~]# grub2-install /dev/sda 
Installing for i386-pc platform.
Installation finished. No error reported.

After a reboot you now get your grub prompt:

[screenshot: GRUB prompting for the disk passphrase at boot]

Unfortunately this does mean that you have to type your password twice on boot, but at least your system is more encrypted than it was before. This may not completely get rid of the attack vector described in this post, since part of the bootloader is still unencrypted, but at least the GRUB stage2 and the kernel/ramdisk are now encrypted, which should make an attack much harder.

Happy Encrypting!

Dusty

Atomic Host Red Hat Summit Lab

Red Hat Summit was a blast this year. I participated in several Hands On Labs to help the community learn about the new tools that are available in the ecosystem. For one of the labs I wrote up a section on Atomic Host, but more specifically on rpm-ostree. I have copied a portion of the lab here as well as added example text to the code blocks.

Lab Intro

Atomic Host is a minimalistic operating system designed to contain a very small subset of tools, just what is needed for running container-based applications. A few of its features are shown below:

  • It is Lightweight
    • a small base means fewer potential issues.
  • Provides Atomic Upgrades and Rollbacks
    • upgrades/rollbacks are staged and take effect on reboot
  • Static and Dynamic
    • software/binaries in /usr and other similar directories are read-only
      • this guarantees no changes have been made to the software
    • configuration and temporary directories are read/write
      • you can still make important configuration changes and have them propagate forward

We will explore some of these features as we illustrate a bit of the lifecycle of managing a RHEL Atomic Host.

Hello rpm-ostree World

In an rpm-ostree world you can't install new software on the system or even touch most of the software that exists. Go ahead and try:

-bash-4.2# echo 'Crazy Talk' > /usr/bin/docker
-bash: /usr/bin/docker: Read-only file system

What we can do is configure the existing software on the system using the provided mechanisms for configuration. We can illustrate this by writing to motd and then logging in to see the message:

-bash-4.2# echo 'Lab 1 is fun' > /etc/motd
-bash-4.2# ssh root@localhost
Last login: Fri Jun  5 02:26:59 2015 from localhost
Lab 1 is fun
-bash-4.2# exit
logout
Connection to localhost closed.

Even though you can't install new software, your Atomic Host operating system isn't just a black box. The rpm command is there and we can run queries just the same as if we were on a traditional system. This is quite useful because we can use the tools we are familiar with to investigate the system. Try out a few rpm queries on the Atomic Host:

-bash-4.2# rpm -q kernel
kernel-3.10.0-229.4.2.el7.x86_64
-bash-4.2# rpm -qf /usr/bin/vi
vim-minimal-7.4.160-1.el7.x86_64
-bash-4.2# rpm -q --changelog util-linux | wc -l
1832

Another nice thing about Atomic, or rather the underlying ostree software, is that it is like git for your OS. At any point in time you can see what has changed between what was delivered in the tree and what is on the system. That means that for the few directories that are read/write, you can easily view what changes have been made to them.

Let's take a look at the existing differences between what we have and what was delivered in the tree:

-bash-4.2# ostree admin config-diff | head -n 5
M    adjtime
M    motd
M    group
M    hosts
M    gshadow

You can see right in the middle the motd file we just modified.

As a final step before we do an upgrade let's run a container and verify all is working:

-bash-4.2# docker run -d -p 80:80 --name=test repo.atomic.lab:5000/apache
e18a5f7d54c8dbe0d352e2c2854af16d27f166d11b95bc37a3b4267cfcd39cd6
-bash-4.2# curl http://localhost
Apache
-bash-4.2# docker rm -f test
test

Performing an Upgrade

OK, now that we have taken a little tour, let's actually perform an upgrade in which we move from one version of the tree to a newer version. First, let's check the current status of the system:

-bash-4.2# atomic host status
  TIMESTAMP (UTC)         VERSION   ID             OSNAME               REFSPEC
* 2015-05-30 04:10:40               d306dcf255     rhel-atomic-host     lab:labtree
  2015-05-07 19:00:48     7.1.2     203dd666d3     rhel-atomic-host     rhel-atomic-host-ostree:rhel-...

Note that the * indicates which tree is currently booted. The ID is a short commit ID for that commit in the tree. The REFSPEC for the latest tree specifies the remote we are using (lab) and the ref that we are tracking (labtree). Quite a lot of information!
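
If you want more than the short commit ID, the underlying ostree tooling can show the commit metadata for the ref we are tracking. Something like the following should work here (assuming the lab remote from this environment; only the commits present in the local repo will be shown):

-bash-4.2# ostree log lab:labtree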

A fun fact is that the atomic host command is just a frontend for the rpm-ostree utility. It has some of the functionality of the rpm-ostree utility that is suitable for most daily use. Let's use rpm-ostree now to check the status:

-bash-4.2# rpm-ostree status
  TIMESTAMP (UTC)         VERSION   ID             OSNAME               REFSPEC
* 2015-05-30 04:10:40               d306dcf255     rhel-atomic-host     lab:labtree
  2015-05-07 19:00:48     7.1.2     203dd666d3     rhel-atomic-host     rhel-atomic-host-ostree:rhel-...

The next step is to actually move to a new tree. For the purposes of this lab, and to illustrate Atomic's usefulness, we are going to upgrade to a tree that has some bad software in it. If we were to run an atomic host upgrade command it would take us to the newest commit in the repo. In this case we want to go to an intermediate (bad) commit, so we are going to run a special command to get there:

-bash-4.2# rpm-ostree rebase lab:badtree

26 metadata, 37 content objects fetched; 101802 KiB transferred in 7 seconds
Copying /etc changes: 26 modified, 8 removed, 70 added
Transaction complete; bootconfig swap: yes deployment count change: 0
Freed objects: 180.1 MB
Deleting ref 'lab:labtree'
Changed:
  etcd-2.0.11-2.el7.x86_64
  kubernetes-0.17.1-1.el7.x86_64
Removed:
  setools-console-3.3.7-46.el7.x86_64

What we did there was rebase to another ref (badtree), but we kept with the same remote (lab).
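
In case you are wondering where the lab remote itself is defined, ostree keeps remote definitions either in /etc/ostree/remotes.d/ or directly in the repo config, depending on how the remote was added. A quick way to poke around (newer versions of ostree also have a remote list subcommand):

-bash-4.2# ostree remote list
-bash-4.2# cat /ostree/repo/config /etc/ostree/remotes.d/*.conf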

So we have rebased to a new tree, but we aren't yet using it. During an upgrade the new environment is staged for the next boot but not yet used; this is what allows the upgrade to be atomic. Before we reboot we can check the status. You will see the new tree as well as the old tree listed. The * should still be next to the old tree, since that is the tree that is currently booted and running:

-bash-4.2# atomic host status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
  2015-05-30 04:39:22     146b72d9d7     rhel-atomic-host     lab:badtree
* 2015-05-30 04:10:40     d306dcf255     rhel-atomic-host     lab:labtree

After checking the status reboot the machine in order to boot into the new tree.

Rolling Back

So why would you ever need to roll back? It's a perfect world and nothing ever breaks, right? No! Sometimes problems arise, and it is always nice to have an undo button to fix them. In the case of Atomic, that undo button is atomic host rollback. Do we need to use it now? Let's see if everything is OK on the system:

-bash-4.2# atomic host status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2015-05-30 04:39:22     146b72d9d7     rhel-atomic-host     lab:badtree
  2015-05-30 04:10:40     d306dcf255     rhel-atomic-host     lab:labtree
-bash-4.2# 
-bash-4.2# docker run -d -p 80:80 --name=test repo.atomic.lab:5000/apache
ERROR
-bash-4.2# curl http://localhost
curl: (7) Failed connect to localhost:80; Connection refused
-bash-4.2# systemctl --failed | head -n 3
UNIT           LOAD   ACTIVE SUB    DESCRIPTION
docker.service loaded failed failed Docker Application Container Engine

Did anything fail? Of course it did. So let's press the eject button and get ourselves back to safety:

-bash-4.2# atomic host rollback
Moving 'd306dcf255b370e5702206d064f2ca2e24d1ebf648924d52a2e00229d5b08365.0' to be first deployment
Transaction complete; bootconfig swap: yes deployment count change: 0
Changed:
  etcd-2.0.9-2.el7.x86_64
  kubernetes-0.15.0-0.4.git0ea87e4.el7.x86_64
Added:
  setools-console-3.3.7-46.el7.x86_64
Sucessfully reset deployment order; run "systemctl reboot" to start a reboot
-bash-4.2# reboot

Now, let's check to see if we are back to a good state:

-bash-4.2# atomic host status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2015-05-30 04:10:40     d306dcf255     rhel-atomic-host     lab:labtree
  2015-05-30 04:39:22     146b72d9d7     rhel-atomic-host     lab:badtree
-bash-4.2# docker run -d -p 80:80 --name=test repo.atomic.lab:5000/apache
a28a5f80bc2d1da9d405199f88951a62a7c4c125484d30fbb6eb2c4c032ef7f3
-bash-4.2# curl http://localhost
Apache
-bash-4.2# docker rm -f test
test

All dandy!

Final Upgrade

Since the badtree was released, the developers have fixed the bug and put out a new tree. Now we can upgrade to that newest tree, and as part of this upgrade let's explore a few more rpm-ostree features.

First, create a file in /etc/ and show that ostree knows that it has been created and differs from the tree that was delivered:

-bash-4.2# echo "Before Upgrade d306dcf255" > /etc/before-upgrade.txt
-bash-4.2# ostree admin config-diff | grep before-upgrade
A    before-upgrade.txt

Now we can do the upgrade:

-bash-4.2# atomic host upgrade --reboot
Updating from: lab:labtree

48 metadata, 54 content objects fetched; 109056 KiB transferred in 9 seconds
Copying /etc changes: 26 modified, 8 removed, 74 added
Transaction complete; bootconfig swap: yes deployment count change: 0

After the upgrade let's run a few commands to see what the actual difference is (in terms of rpms) between the two trees:

-bash-4.2# atomic host status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2015-05-30 05:12:55     ec89f90273     rhel-atomic-host     lab:labtree
  2015-05-30 04:10:40     d306dcf255     rhel-atomic-host     lab:labtree
-bash-4.2# rpm-ostree db diff -F diff d306dcf255 ec89f90273
ostree diff commit old: d306dcf255 (d306dcf255b370e5702206d064f2ca2e24d1ebf648924d52a2e00229d5b08365)
ostree diff commit new: ec89f90273 (ec89f902734e70b4e8fbe5000e87dd944a3c95ffdb04ef92f364e5aaab049813)
!atomic-0-0.22.git5b2fa8d.el7.x86_64
=atomic-0-0.26.gitcc9aed4.el7.x86_64
!docker-1.6.0-11.el7.x86_64
=docker-1.6.0-15.el7.x86_64
!docker-python-1.0.0-35.el7.x86_64
=docker-python-1.0.0-39.el7.x86_64
!docker-selinux-1.6.0-11.el7.x86_64
=docker-selinux-1.6.0-15.el7.x86_64
!docker-storage-setup-0.0.4-2.el7.noarch
=docker-storage-setup-0.5-2.el7.x86_64
!etcd-2.0.9-2.el7.x86_64
=etcd-2.0.11-2.el7.x86_64
!kubernetes-0.15.0-0.4.git0ea87e4.el7.x86_64
=kubernetes-0.17.1-4.el7.x86_64
+kubernetes-master-0.17.1-4.el7.x86_64
+kubernetes-node-0.17.1-4.el7.x86_64
!python-websocket-client-0.14.1-78.el7.noarch
=python-websocket-client-0.14.1-82.el7.noarch
-setools-console-3.3.7-46.el7.x86_64

This shows the rpms that differ between the two trees: lines starting with + were added, lines starting with - were removed, and the !/= pairs show the old and new versions of packages that changed.
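
If instead of a diff you want the complete package list baked into one of the commits, rpm-ostree can query that from the commit metadata as well. A sketch (the output is long, so it's trimmed with head here):

-bash-4.2# rpm-ostree db list ec89f90273 | head -n 10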

Now remember that file we created before the upgrade? Is it still there? Let's check and also create a new file that represents the after upgrade state:

-bash-4.2# cat /etc/before-upgrade.txt
Before Upgrade d306dcf255
-bash-4.2# echo "After Upgrade ec89f90273" > /etc/after-upgrade.txt
-bash-4.2# cat /etc/after-upgrade.txt
After Upgrade ec89f90273

Now, which of the files do you think will exist after a rollback? Only you can find out:

-bash-4.2# rpm-ostree rollback --reboot 
Moving 'd306dcf255b370e5702206d064f2ca2e24d1ebf648924d52a2e00229d5b08365.0' to be first deployment
Transaction complete; bootconfig swap: yes deployment count change: 0

After rollback:

-bash-4.2# atomic host status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC         
* 2015-05-30 04:10:40     d306dcf255     rhel-atomic-host     lab:labtree     
  2015-05-30 05:12:55     ec89f90273     rhel-atomic-host     lab:labtree     
-bash-4.2# ls -l /etc/*.txt
-rw-r--r--. 1 root root 26 Jun  5 03:35 /etc/before-upgrade.txt

Fin!

Now you know quite a bit about upgrading, rolling back, and querying information from your Atomic Host. Have fun exploring!

Dusty

Fedora 22 Updates-Testing Atomic Tree

It has generally been difficult to test new updates to the rpm-ostree or ostree packages for Atomic Host, because in the past you had to build your own tree in order to test them. Now, however, Fedora has started building a tree based off the updates-testing yum repositories. This means that you can easily test updates by simply running Fedora Atomic Host and rebasing to the fedora-atomic/f22/x86_64/testing/docker-host ref:

# rpm-ostree rebase fedora-atomic:fedora-atomic/f22/x86_64/testing/docker-host
# reboot

After reboot you are now (hopefully) booted into the tree with updates baked in. You can do your tests and report your results back upstream. If you ever want to go back to following the stable branch then you can do that by running:

# rpm-ostree rebase fedora-atomic:fedora-atomic/f22/x86_64/docker-host
# reboot

Testing updates this way can apply to any of the packages within Atomic Host. Since Atomic Host has a small footprint the package you want to test might not be included, but if it is, this is a great way to test things out.
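
As a concrete example, if the update you care about is to rpm-ostree or ostree themselves, after the rebase and reboot you can confirm what you ended up with before reporting your test results (the package names here are just the obvious candidates):

# rpm -q ostree rpm-ostree
# rpm-ostree status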

Dusty

Fedora 22 Now Swimming in DigitalOcean

cross posted from this fedora magazine post

DigitalOcean is a cloud provider that provides a one-click deployment of a Fedora Cloud instance to an all-SSD server in under a minute. After some quick work by the DigitalOcean and Fedora Cloud teams we are pleased to announce that you can now make it rain Fedora 22 droplets!

One significant change over previous Fedora droplets is that this is the first release to support managing your kernel from within the droplet. That means if you dnf update kernel-core and reboot, you'll actually be running the kernel you updated to. Win!
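
As a quick illustration (versions will obviously differ on your droplet), updating the kernel and confirming you are running it looks something like this:

# dnf update -y kernel-core
# reboot
... log back in after the reboot ...
# uname -r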

Here are a couple more tips for Fedora 22 Droplets:

  • Like with other DigitalOcean images, you will log in with your ssh key as root rather than the typical fedora user that you may be familiar with when logging in to a Fedora cloud image.
  • Similar to Fedora 21, Fedora 22 also has SELinux enabled by default.
  • Fedora 22 should be available in all the newest datacenters in each region, but some legacy datacenters aren't supported. If you have a problem you think is Fedora specific then drop us an email at , ping us in #fedora-cloud on freenode, or visit the Fedora cloud trac to see if it is already being worked on.

Visit the DigitalOcean Fedora landing page and spin one up today!

Happy Developing!
Dusty

F22 Cloud/Atomic Test Day May 7th!

Hey everyone! Fedora 22 is on the cusp of being released and the Fedora Cloud Working Group has elected to organize a test day for May 7th in order to work out some bugs before shipping it off to the rest of the world.

With a new release comes some new features and tools. We are working on Vagrant images as well as a testing tool called Tunir. Joe Brockmeier has a nice writeup about Vagrant and Kushal Das maintains some docs on Tunir.

On the test day we will be testing both the Cloud Base Image and the Fedora Atomic Host cloud image. The landing pages where we are organizing instructions and information are here (for Cloud Base) and here (for Atomic). If you're available to test on the test day (or any other time) please go there and fill out your name and test results.

Happy Testing!

Dusty

Crisis Averted.. I’m using Atomic Host

This blog has been running on Docker on Fedora 21 Atomic Host since early January. Occasionally I log in, run rpm-ostree upgrade, inspect a few things, and then reboot. Today I happened to do just that and what did I come up with?? A bunch of 404s. Digging through the logs for the systemd unit file I use to start my wordpress container, I found this:

systemd[1]: wordpress-server.service: main process exited, code=exited, status=1/FAILURE
docker[2321]: time="2015-01-31T19:09:24-05:00" level="fatal" msg="Error response from daemon: Cannot start container 51a2b8c45bbee564a61bcbffaee5bc78357de97cdd38918418026c26ae40fb09: write /sys/fs/cgroup/memory/system.slice/docker-51a2b8c45bbee564a61bcbffaee5bc78357de97cdd38918418026c26ae40fb09.scope/memory.memsw.limit_in_bytes: invalid argument"

Hmmm.. So that means I have updated to the latest atomic and docker doesn't work?? What am I to do?

Well, the nice thing about Atomic Host is that in moments like these you can easily go back to the state you were in before you upgraded. A quick rpm-ostree rollback and my blog was back up and running in minutes.

Whew! Crisis averted.. But now what? Well, the other nice thing about Atomic Host is that I can easily go to another (non-production) system and test out exactly the same upgrade that I performed in production. Some quick googling led me to this github issue, which looks like it has to do with setting memory limits when you start a container using later versions of systemd.

Let's test out that theory by recreating this failure.

Recreating the Failure

To recreate I decided to start with the Fedora 21 atomic cloud image that was released in December. Here is what I have:

-bash-4.3# ostree admin status
* fedora-atomic ba7ee9475c462c9265517ab1e5fb548524c01a71709539bbe744e5fdccf6288b.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
* 2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm -q docker-io systemd
docker-io-1.3.2-2.fc21.x86_64
systemd-216-12.fc21.x86_64
-bash-4.3#
-bash-4.3# docker run --rm --memory 500M busybox echo "I'm Alive"
Unable to find image 'busybox' locally
Pulling repository busybox
4986bf8c1536: Download complete 
511136ea3c5a: Download complete 
df7546f9f060: Download complete 
ea13149945cb: Download complete 
Status: Downloaded newer image for busybox:latest
I'm Alive

So the system is up and running and able to run a container with the --memory option set. Now let's upgrade to the same commit that I upgraded to when I saw the failure, and reboot:

-bash-4.3# ostree pull fedora-atomic 153f577dc4b039e53abebd7c13de6dfafe0fb64b4fdc2f5382bdf59214ba7acb

778 metadata, 4374 content objects fetched; 174535 KiB transferred in 156 seconds
-bash-4.3#
-bash-4.3# echo 153f577dc4b039e53abebd7c13de6dfafe0fb64b4fdc2f5382bdf59214ba7acb > /ostree/repo/refs/remotes/fedora-atomic/fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# ostree admin deploy fedora-atomic:fedora-atomic/f21/x86_64/docker-host
Copying /etc changes: 26 modified, 4 removed, 36 added
Transaction complete; bootconfig swap: yes deployment count change: 1
-bash-4.3#
-bash-4.3# ostree admin status
  fedora-atomic 153f577dc4b039e53abebd7c13de6dfafe0fb64b4fdc2f5382bdf59214ba7acb.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* fedora-atomic ba7ee9475c462c9265517ab1e5fb548524c01a71709539bbe744e5fdccf6288b.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3# 
-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
  2015-01-31 21:08:35     153f577dc4     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3# reboot

Note that I had to manually update the ref to point to the commit I downloaded in order to get this to work. I'm not sure why this is necessary, but it wouldn't work otherwise.

OK, now I have a system using the same tree that I was on when I saw the failure. Let's check to see if it still happens:

-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
* 2015-01-31 21:08:35     153f577dc4     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm -q docker-io systemd
docker-io-1.4.1-5.fc21.x86_64
systemd-216-17.fc21.x86_64
-bash-4.3#
-bash-4.3# docker run --rm --memory 500M busybox echo "I'm Alive"
FATA[0003] Error response from daemon: Cannot start container d79629bfddc7833497b612e2b6d4cc2542ce9a8c2253d39ace4434bbd385185b: write /sys/fs/cgroup/memory/system.slice/docker-d79629bfddc7833497b612e2b6d4cc2542ce9a8c2253d39ace4434bbd385185b.scope/memory.memsw.limit_in_bytes: invalid argument

Yep! Looks like it consistently happens. This is good, because it means anyone can now use this reproducer to verify the problem on their own. For completeness I'll go ahead and roll the system back to show that the problem goes away when we're back in the old state:

-bash-4.3# rpm-ostree rollback 
Moving 'ba7ee9475c462c9265517ab1e5fb548524c01a71709539bbe744e5fdccf6288b.0' to be first deployment
Transaction complete; bootconfig swap: yes deployment count change: 0
Changed:
  NetworkManager-1:0.9.10.0-13.git20140704.fc21.x86_64
  NetworkManager-glib-1:0.9.10.0-13.git20140704.fc21.x86_64
  ...
  ...
Removed:
  flannel-0.2.0-1.fc21.x86_64
Sucessfully reset deployment order; run "systemctl reboot" to start a reboot
-bash-4.3# reboot

And the final test:

-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
* 2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2015-01-31 21:08:35     153f577dc4     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3# docker run --rm --memory 500M busybox echo "I'm Alive"
I'm Alive

Bliss! And you can thank Atomic Host for that.

Dusty