Archive for the 'Fedora' Category

Getting Ansible Working on Fedora 23

Cross posted from this fedora magazine post

Inspired mostly by a post from Lars Kellogg-Stedman.

Intro

Ansible is a simple IT automation platform, written in Python, that makes your applications and systems easier to deploy. It has become quite popular over the past few years, but you may hit some trouble when trying to run it on Fedora 23.

Fedora 23 uses Python 3 as the default installed Python version (see changes), but Ansible still requires Python 2. As a result, Ansible errors out when you try to run it, because it assumes Python 2 by default:

GATHERING FACTS *
failed: [f23] => {"failed": true, "parsed": false}
/bin/sh: /usr/bin/python: No such file or directory

Fortunately there are a few steps you can add to your playbooks to fully work around this problem. You can apply them either in a single play or in multiple plays, as shown below.

Workaround - Single All-in-One Play

In the case of a single play, which is something I use often when applying configuration to Vagrant boxes, you can work around this problem by taking the following steps:

  • Explicitly disable the gathering of facts on initialization
  • Use Ansible's raw module to install python2
  • Explicitly call the setup module to gather facts again

The gathering of facts that happens by default on Ansible execution will try to use Python 2. We must disable it, or the run will fail before the raw SSH commands that install Python 2 ever execute. Fortunately we can still use facts in our single play by explicitly calling the setup module after Python 2 is installed.

So with these minor changes applied, a simple all-in-one play might look like:

- hosts: f23
  remote_user: fedora
  gather_facts: false
  become_user: root
  become: yes
  tasks:
    - name: install python and deps for ansible modules
      raw: dnf install -y python2 python2-dnf libselinux-python
    - name: gather facts
      setup:
    - name: use facts
      lineinfile: dest=/etc/some-cfg-file line="myip={{ ansible_eth0.ipv4.address }}" create=true
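
If you want to try the play out, a minimal invocation might look like this; the inventory file and playbook name below are hypothetical:

$ cat inventory
f23 ansible_ssh_host=203.0.113.10
$ ansible-playbook -i inventory f23-play.yml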

And the output of running the play should be successful:

PLAY [f23] ****************************************************************

TASK: [install python and deps for ansible modules] ***************************
ok: [f23]

TASK: [gather facts] **********************************************************
ok: [f23]

TASK: [use facts] *************************************************************
changed: [f23]

PLAY RECAP ********************************************************************
f23                        : ok=3    changed=1    unreachable=0    failed=0

Workaround - Multiple Plays

If you use multiple plays in your playbooks then you can simply have one of them do the python 2 install in raw mode while the others remain unchanged; you don't have to explicitly gather facts, because python 2 is installed by the time the later plays run. So the first play would look something like:

- hosts: f23
  remote_user: fedora
  gather_facts: false
  become_user: root
  become: yes
  tasks:
    - name: install python and deps for ansible modules
      raw: dnf install -y python2 python2-dnf libselinux-python

And, re-using the code from the sample above, the second play would look like:

- hosts: f23
  remote_user: fedora
  become_user: root
  become: yes
  tasks:
    - name: use facts
      lineinfile: dest=/etc/some-cfg-file line="myip={{ ansible_eth0.ipv4.address }}" create=true

Conclusion

So, using these small changes, you should be back up and running until Ansible adds first-class support for Python 3.

Enjoy!
Dusty

F23 Cloud Base Test Day September 8th!

cross posted from this fedora magazine post

Hey everyone! Fedora 23 has been baking in the oven. The Fedora Cloud WG has elected to do a temperature check on September 8th.

For this test day we are going to concentrate on the base image. We will have vagrant boxes (see this page for how to set up your machine), qcow images, raw images, and AWS EC2 images. In a later test day we will focus on the Atomic images and Docker images.

The landing page for the Fedora Cloud Base test day is here. If you're available to test on the test day (or any other time) please go there and fill out your name and test results. Also, don't forget that you can use some of our new projects testcloud (copr link) and/or Tunir to aid in testing.

Happy testing and we hope to see you on test day!

Dusty

Fedora BTRFS+Snapper PART 2: Full System Snapshot/Rollback

History

In part 1 of this series I discussed why I wanted a computer setup where I could take full system snapshots and seamlessly roll back at will. I also gave an overview of how I set up a system so it could take advantage of BTRFS and snapper to do full system snapshotting and recovery. In this final post of the series I will give an overview of how to get snapper installed and configured on the system and walk through using it to do a rollback.

Installing and Configuring Snapper

First things first: as part of this whole setup I want to be able to tell how much space each one of my snapshots is taking up. I covered how to do this in a previous post; the way you do it is by enabling quota on the BTRFS filesystem:

[root@localhost ~]# btrfs quota enable /
[root@localhost ~]#
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
[root@localhost ~]# btrfs qgroup show /
WARNING: Rescan is running, qgroup data may be incorrect
qgroupid         rfer         excl
--------         ----         ----
0/5         975.90MiB    975.90MiB
0/258        16.00KiB     16.00KiB

You can see from the output that we currently have two subvolumes. One of them is the root subvolume while the other is a subvolume automatically created by systemd for systemd-nspawn container images.

Now that we have quota enabled let's get snapper installed and configured:

[root@localhost ~]# dnf install -y snapper
...
Complete!
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |
[root@localhost ~]# snapper list-configs
Config | Subvolume
-------+----------
root   | /
[root@localhost ~]#
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
ID 260 gen 83 top level 5 path .snapshots

So we used the snapper command to create a configuration for the BTRFS filesystem mounted at /. As part of this process, as the btrfs subvolume list / output shows, snapper also created a .snapshots subvolume. This subvolume will be used to house the COW snapshots that are taken of the system.

The next thing we want to do is add an entry to fstab so that, regardless of which subvolume we are actually booted into, we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab
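
To pick up the new entry without a reboot you can mount it right away (a quick sketch, assuming the fstab line above):

[root@localhost ~]# mount /.snapshots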

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description | Userdata
-------+---+-------+--------------------------+------+---------+-------------+---------
single | 0 |       |                          | root |         | current     |
single | 1 |       | Tue Jul 14 23:07:42 2015 | root |         | BigBang     |
[root@localhost ~]#
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
ID 260 gen 90 top level 5 path .snapshots
ID 261 gen 88 top level 260 path .snapshots/1/snapshot
[root@localhost ~]#
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot, called BigBang, and then ran btrfs subvolume list / to verify that a new snapshot was actually created. Notice at the top of the output that we also ran btrfs subvolume get-default /. This shows the currently set default subvolume for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide we want to use one of the snapshots for rollback.

Since we took a snapshot let's go ahead and make some changes to the system:

[root@localhost ~]# dnf install -y htop
[root@localhost ~]# rpm -q htop
htop-1.0.3-4.fc22.x86_64
[root@localhost ~]#
[root@localhost ~]# snapper status 1..0  | grep htop
+..... /usr/bin/htop
+..... /usr/share/doc/htop
+..... /usr/share/doc/htop/AUTHORS
+..... /usr/share/doc/htop/COPYING
+..... /usr/share/doc/htop/ChangeLog
+..... /usr/share/doc/htop/README
+..... /usr/share/man/man1/htop.1.gz
+..... /usr/share/pixmaps/htop.png
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/checksum_data
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/checksum_type
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/command_line
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/from_repo
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/installed_by
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/reason
+..... /var/lib/dnf/yumdb/h/2cd64300c204b0e1ecc9ad185259826852226561-htop-1.0.3-4.fc22-x86_64/releasever

So we installed htop and then compared the current running system (0) with snapshot 1.

Rolling Back

Now that we have taken a previous snapshot and have since made a change to the system we can use the snapper rollback functionality to get back to the state the system was in before we made the change. Let's do the rollback to get back to the snapshot 1 BigBang state:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 2.)
Creating read-write snapshot of snapshot 1. (Snapshot 3.)
Setting default subvolume to snapshot 3.
[root@localhost ~]# reboot

As part of the rollback process you tell snapper which snapshot you want to go back to. It then creates a read-only snapshot of the current system (in case you change your mind and want to get back to where you currently are) and a new read-write subvolume based on the snapshot you specified. Finally, it sets the default subvolume to the newly created read-write subvolume. After a reboot you will be booted into the new read-write subvolume, and your state should be exactly as it was at the time you made the original snapshot.

In our case, after reboot we should now be booted into snapshot 3 as indicated by the output of the snapper rollback command above and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 263 gen 104 top level 260 path .snapshots/3/snapshot
[root@localhost ~]#
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description | Userdata
-------+---+-------+--------------------------+------+---------+-------------+---------
single | 0 |       |                          | root |         | current     |
single | 1 |       | Tue Jul 14 23:07:42 2015 | root |         | BigBang     |
single | 2 |       | Tue Jul 14 23:14:12 2015 | root |         |             |
single | 3 |       | Tue Jul 14 23:14:12 2015 | root |         |             |
[root@localhost ~]#
[root@localhost ~]# ls /.snapshots/
1  2  3
[root@localhost ~]# btrfs subvolume list /
ID 258 gen 50 top level 5 path var/lib/machines
ID 260 gen 100 top level 5 path .snapshots
ID 261 gen 98 top level 260 path .snapshots/1/snapshot
ID 262 gen 97 top level 260 path .snapshots/2/snapshot
ID 263 gen 108 top level 260 path .snapshots/3/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# rpm -q htop
package htop is not installed

Bliss!!

Now, in my case I like to have more descriptive notes on my snapshots, so I'll go back and add descriptions for snapshots 2 and 3:

[root@localhost ~]# snapper modify --description "installed htop" 2
[root@localhost ~]# snapper modify --description "rollback to 1 - read/write" 3
[root@localhost ~]#
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description                | Userdata
-------+---+-------+--------------------------+------+---------+----------------------------+---------
single | 0 |       |                          | root |         | current                    |
single | 1 |       | Tue Jul 14 23:07:42 2015 | root |         | BigBang                    |
single | 2 |       | Tue Jul 14 23:14:12 2015 | root |         | installed htop             |
single | 3 |       | Tue Jul 14 23:14:12 2015 | root |         | rollback to 1 - read/write |

We can also see how much space (shared and exclusive) each of the snapshots is taking up:

[root@localhost ~]# btrfs qgroup show /
WARNING: Qgroup data inconsistent, rescan recommended
qgroupid         rfer         excl
--------         ----         ----
0/5           1.08GiB      7.53MiB
0/258        16.00KiB     16.00KiB
0/260        16.00KiB     16.00KiB
0/261         1.07GiB      2.60MiB
0/262         1.07GiB    740.00KiB
0/263         1.08GiB     18.91MiB

That is useful info, since it lets you know how much space you will recover when you delete snapshots in the future.
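
For example, when a snapshot has outlived its usefulness you can have snapper remove it and reclaim its exclusive space; a sketch (deleting snapshot 2 here is purely illustrative):

[root@localhost ~]# snapper delete 2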

Updating The Kernel

I mentioned in part 1 that I had to get a special rebuild of GRUB with some patches from the SUSE guys in order to get booting from the default subvolume to work. This was all needed so that I can update the kernel as normal and have the GRUB files that get used be the ones in the actual subvolume I am currently using. So let's test it out by doing a full system update (including a kernel update):

[root@localhost ~]# dnf update -y
...
Install    8 Packages
Upgrade  173 Packages
...
Complete!
[root@localhost ~]# rpm -q kernel
kernel-4.0.4-301.fc22.x86_64
kernel-4.0.7-300.fc22.x86_64
[root@localhost ~]#
[root@localhost ~]# btrfs qgroup show /
WARNING: Qgroup data inconsistent, rescan recommended
qgroupid         rfer         excl
--------         ----         ----
0/5           1.08GiB      7.53MiB
0/258        16.00KiB     16.00KiB
0/260        16.00KiB     16.00KiB
0/261         1.07GiB     11.96MiB
0/262         1.07GiB    740.00KiB
0/263         1.19GiB    444.35MiB

So we did a full system upgrade that upgraded 173 packages and installed a few others. We can see that the current subvolume (snapshot 3, ID 263) now has 444MiB of exclusive data. This makes sense, since all of the other snapshots were taken before the full system update.

Let's create a new snapshot that represents the state of the system right after we did the full system update and then reboot:

[root@localhost ~]# snapper create --description "full system upgrade"
[root@localhost ~]# reboot

After reboot we can now check to see if we have properly booted the recently installed kernel:

[root@localhost ~]# rpm -q kernel
kernel-4.0.4-301.fc22.x86_64
kernel-4.0.7-300.fc22.x86_64
[root@localhost ~]# uname -r
4.0.7-300.fc22.x86_64

Bliss again. Yay! And I'm done.

Enjoy!

Dusty

Fedora BTRFS+Snapper PART 1: System Preparation

The Problem

For some time now I have wanted a Linux desktop setup where I could run updates automatically and not worry about losing productivity if my system gets hosed by an update. My desired setup to achieve this has been a combination of snapper and BTRFS, but unfortunately the support on Fedora for full rollback isn't quite there.

In Fedora 22 support for rollback was added, but one final piece of the puzzle is still missing for a fully working setup: I need GRUB to respect the default subvolume that is set on the BTRFS filesystem. In the past GRUB did use the default subvolume, but this behavior was removed in 82591fa (link).

With GRUB respecting the default subvolume I can include /boot/ simply as a directory on my system (not as a separate subvolume), and it will be included in all of the snapshots that snapper creates of the root filesystem.
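
For reference, the default subvolume in question is the one you can inspect and change with the btrfs tool; a quick sketch (the subvolume ID here is just an example):

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# btrfs subvolume set-default 261 /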

In order to get this functionality I grabbed some of the patches from the SUSE guys and applied them to the Fedora GRUB rpm. All of the work and the resulting rpms can be found here.

System Preparation

So now I had a GRUB rpm that would work for me. The first step was to get my system up and running in a setup that I could then use snapper on top of. I mentioned before that I wanted to put /boot/ just as a directory on the BTRFS filesystem. I also wanted it to be encrypted, as I have done in the past.

This means I have yet another funky setup, and I'll need to basically install it from scratch using Anaconda and a chroot environment.

After getting up and running in Anaconda I switched to a different virtual terminal and formatted my hard disk, set up an encrypted LUKS device, created a VG and two LVs, and finally made a BTRFS filesystem:

[anaconda root@localhost ~]# fdisk /dev/sda <<EOF
o
n
p
1
2048

w
EOF
[anaconda root@localhost ~]# lsblk /dev/sda
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
`-sda1   8:1    0 465.8G  0 part
[anaconda root@localhost ~]# cryptsetup luksFormat /dev/sda1
[anaconda root@localhost ~]# cryptsetup luksOpen /dev/sda1 cryptodisk
[anaconda root@localhost ~]# vgcreate vgroot /dev/mapper/cryptodisk
[anaconda root@localhost ~]# lvcreate --size=4G --name lvswap vgroot
[anaconda root@localhost ~]# mkswap /dev/vgroot/lvswap
[anaconda root@localhost ~]# lvcreate -l 100%FREE --name lvroot vgroot
[anaconda root@localhost ~]# mkfs.btrfs /dev/vgroot/lvroot

NOTE: Most of the commands run above have truncated output for brevity.

The next step was to mount the filesystem and install software into the filesystem in a chrooted environment. Since the dnf binary isn't actually installed in the anaconda environment by default we first need to install it:

[anaconda root@localhost ~]# rpm -ivh --nodeps /run/install/repo/Packages/d/dnf-1.0.0-1.fc22.noarch.rpm
warning: /run/install/repo/Packages/d/dnf-1.0.0-1.fc22.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 8e1431d5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:dnf-1.0.0-1.fc22                 ################################# [100%]

Now we can "create" a repo file from the repo that is on the media and install the bare minimum (the filesystem rpm):

[anaconda root@localhost ~]# mount /dev/vgroot/lvroot /mnt/sysimage/
[anaconda root@localhost ~]# mkdir /etc/yum.repos.d
[anaconda root@localhost ~]# cat <<EOF > /etc/yum.repos.d/dvd.repo
[dvd]
name=dvd
baseurl=file:///run/install/repo
enabled=1
gpgcheck=0
EOF
[anaconda root@localhost ~]# dnf install -y --releasever=22 --installroot=/mnt/sysimage filesystem
...
Complete!

The reason we installed only the filesystem rpm is that many of the other rpms we are going to install will fail if some of the "special" directories aren't mounted. We'll go ahead and mount them now:

[anaconda root@localhost ~]# mount -v -o bind /dev /mnt/sysimage/dev/
mount: /dev bound on /mnt/sysimage/dev.
[anaconda root@localhost ~]# mount -v -o bind /run /mnt/sysimage/run/
mount: /run bound on /mnt/sysimage/run.
[anaconda root@localhost ~]# mount -v -t proc proc /mnt/sysimage/proc/
mount: proc mounted on /mnt/sysimage/proc.
[anaconda root@localhost ~]# mount -v -t sysfs sys /mnt/sysimage/sys/
mount: sys mounted on /mnt/sysimage/sys.

Now we can install the rest of the software into the chroot environment:

[anaconda root@localhost ~]# cp /etc/yum.repos.d/dvd.repo /mnt/sysimage/etc/yum.repos.d/
[anaconda root@localhost ~]# dnf install -y --installroot=/mnt/sysimage --disablerepo=* --enablerepo=dvd @core @standard kernel btrfs-progs lvm2
...
Complete!

We can also install the "special" GRUB packages that I created and then get rid of the repo file because we won't need it any longer:

[anaconda root@localhost ~]# dnf install -y --installroot=/mnt/sysimage --disablerepo=* --enablerepo=dvd \
https://github.com/dustymabe/fedora-grub-boot-btrfs-default-subvolume/raw/master/rpmbuild/RPMS/x86_64/grub2-2.02-0.16.fc22.dusty.x86_64.rpm \
https://github.com/dustymabe/fedora-grub-boot-btrfs-default-subvolume/raw/master/rpmbuild/RPMS/x86_64/grub2-tools-2.02-0.16.fc22.dusty.x86_64.rpm
...
Complete!
[anaconda root@localhost ~]# rm /mnt/sysimage/etc/yum.repos.d/dvd.repo

Now we can do some minimal system configuration by chrooting into the system: set up crypttab and fstab, set the root password, and set up the system to do a relabel on boot:

[anaconda root@localhost ~]# chroot /mnt/sysimage
[anaconda root@localhost /]# ls -l /dev/disk/by-uuid/f0d889d8-5225-4d9d-9a89-edd387e65ab7
lrwxrwxrwx. 1 root root 10 Jul 14 02:24 /dev/disk/by-uuid/f0d889d8-5225-4d9d-9a89-edd387e65ab7 -> ../../sda1
[anaconda root@localhost /]# cat <<EOF > /etc/crypttab
cryptodisk /dev/disk/by-uuid/f0d889d8-5225-4d9d-9a89-edd387e65ab7 -
EOF
[anaconda root@localhost /]# cat <<EOF > /etc/fstab
/dev/vgroot/lvroot / btrfs defaults 1 1
/dev/vgroot/lvswap swap swap defaults 0 0
EOF
[anaconda root@localhost /]# passwd --stdin root <<< "password"
Changing password for user root.
passwd: all authentication tokens updated successfully.
[anaconda root@localhost /]# touch /.autorelabel

Finally, configure and install GRUB on sda and use dracut to generate a ramdisk that has all the required modules:

[anaconda root@localhost /]# echo GRUB_ENABLE_CRYPTODISK=y >> /etc/default/grub
[anaconda root@localhost /]# echo SUSE_BTRFS_SNAPSHOT_BOOTING=true >> /etc/default/grub
[anaconda root@localhost /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29465: /usr/sbin/grub2-probe
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29465: /usr/sbin/grub2-probe
Found linux image: /boot/vmlinuz-4.0.4-301.fc22.x86_64
Found initrd image: /boot/initramfs-4.0.4-301.fc22.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-225efda374c043e3886d349ef724c79e
Found initrd image: /boot/initramfs-0-rescue-225efda374c043e3886d349ef724c79e.img
done
[anaconda root@localhost /]# grub2-install /dev/sda
Installing for i386-pc platform.
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 4 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 7 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
File descriptor 8 (/) leaked on vgs invocation. Parent PID 29866: grub2-install
Installation finished. No error reported.
[anaconda root@localhost /]# dracut --kver 4.0.4-301.fc22.x86_64 --force

Now we can exit the chroot, unmount all filesystems and reboot into our new system:

[anaconda root@localhost /]# exit
exit
[anaconda root@localhost ~]# umount /mnt/sysimage/{dev,run,sys,proc}
[anaconda root@localhost ~]# umount /mnt/sysimage/
[anaconda root@localhost ~]# reboot

To Be Continued

So we have set up the system with a single BTRFS filesystem (no separate subvolumes) on top of LVM, on top of LUKS, and with a custom GRUB that respects the configured default subvolume on the BTRFS filesystem. Here is what lsblk shows:

[root@localhost ~]# lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sda
NAME                TYPE  FSTYPE      MOUNTPOINT
sda                 disk
`-sda1              part  crypto_LUKS
  `-cryptodisk      crypt LVM2_member
    |-vgroot-lvswap lvm   swap        [SWAP]
    `-vgroot-lvroot lvm   btrfs       /

In a later post I will configure snapper on this system and show how rollbacks can be used to simply revert changes that have been made.

Dusty

Fedora 22 Now Swimming in DigitalOcean

cross posted from this fedora magazine post

DigitalOcean is a cloud provider that offers one-click deployment of a Fedora Cloud instance onto an all-SSD server in under a minute. After some quick work by the DigitalOcean and Fedora Cloud teams we are pleased to announce that you can now make it rain Fedora 22 droplets!

One significant change over previous Fedora droplets is that this is the first release to support managing your kernel internally. That means if you dnf update kernel-core and reboot, you'll actually be running the kernel you updated to. Win!
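
A quick way to see this in action (a sketch; the kernel versions you see will differ):

# dnf update -y kernel-core
# systemctl reboot
...
# uname -r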

Here are a couple more tips for Fedora 22 Droplets:

  • Like with other DigitalOcean images, you will log in with your ssh key as root rather than the typical fedora user that you may be familiar with when logging in to a Fedora cloud image.
  • Similar to Fedora 21, Fedora 22 also has SELinux enabled by default.
  • Fedora 22 should be available in all the newest datacenters in each region, but some legacy datacenters aren't supported. If you have a problem you think is Fedora specific then drop us an email, ping us in #fedora-cloud on freenode, or visit the Fedora cloud trac to see if it is already being worked on.

Visit the DigitalOcean Fedora landing page and spin one up today!

Happy Developing!
Dusty

F22 Cloud/Atomic Test Day May 7th!

Hey everyone! Fedora 22 is on the cusp of being released and the Fedora Cloud Working Group has elected to organize a test day for May 7th in order to work out some bugs before shipping it off to the rest of the world.

With a new release comes some new features and tools. We are working on Vagrant images as well as a testing tool called Tunir. Joe Brockmeier has a nice writeup about Vagrant and Kushal Das maintains some docs on Tunir.

On the test day we will be testing both the Cloud Base Image and the Fedora Atomic Host cloud image. The landing pages where we are organizing instructions and information are here (for Cloud Base) and here (for Atomic). If you're available to test on the test day (or any other time) please go there and fill out your name and test results.

Happy Testing!

Dusty

Crisis Averted.. I'm using Atomic Host

This blog has been running on Docker on Fedora 21 Atomic Host since early January. Occasionally I log in, run rpm-ostree upgrade, and then reboot (usually after I inspect a few things). Today I happened to do just that, and what did I come up with?? A bunch of 404s. Digging through the logs for the systemd unit file I use to start my wordpress container, I found this:

systemd[1]: wordpress-server.service: main process exited, code=exited, status=1/FAILURE
docker[2321]: time="2015-01-31T19:09:24-05:00" level="fatal" msg="Error response from daemon: Cannot start container 51a2b8c45bbee564a61bcbffaee5bc78357de97cdd38918418026c26ae40fb09: write /sys/fs/cgroup/memory/system.slice/docker-51a2b8c45bbee564a61bcbffaee5bc78357de97cdd38918418026c26ae40fb09.scope/memory.memsw.limit_in_bytes: invalid argument"

Hmmm.. So that means I have updated to the latest atomic and docker doesn't work?? What am I to do?

Well, the nice thing about Atomic Host is that in moments like these you can easily go back to the state you were in before you upgraded. A quick rpm-ostree rollback and my blog was back up and running in minutes.
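
In concrete terms the recovery amounted to just two commands (a sketch):

# rpm-ostree rollback
# systemctl reboot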

Whew! Crisis averted.. But now what? Well, the other nice thing about Atomic Host is that I can easily go to another (non-production) system and test out exactly the same upgrade scenario that I performed in production. Some quick googling led me to this github issue, which looks like it has to do with setting memory limits when starting a container under later versions of systemd.

Let's test out that theory by recreating this failure.

Recreating the Failure

To recreate I decided to start with the Fedora 21 atomic cloud image that was released in December. Here is what I have:

-bash-4.3# ostree admin status
* fedora-atomic ba7ee9475c462c9265517ab1e5fb548524c01a71709539bbe744e5fdccf6288b.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
* 2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm -q docker-io systemd
docker-io-1.3.2-2.fc21.x86_64
systemd-216-12.fc21.x86_64
-bash-4.3#
-bash-4.3# docker run --rm --memory 500M busybox echo "I'm Alive"
Unable to find image 'busybox' locally
Pulling repository busybox
4986bf8c1536: Download complete
511136ea3c5a: Download complete
df7546f9f060: Download complete
ea13149945cb: Download complete
Status: Downloaded newer image for busybox:latest
I'm Alive

So the system is up and running and able to run a container with the --memory option set. Now let's upgrade to the same commit that I did when I saw the failure earlier, and reboot:

-bash-4.3# ostree pull fedora-atomic 153f577dc4b039e53abebd7c13de6dfafe0fb64b4fdc2f5382bdf59214ba7acb

778 metadata, 4374 content objects fetched; 174535 KiB transferred in 156 seconds
-bash-4.3#
-bash-4.3# echo 153f577dc4b039e53abebd7c13de6dfafe0fb64b4fdc2f5382bdf59214ba7acb > /ostree/repo/refs/remotes/fedora-atomic/fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# ostree admin deploy fedora-atomic:fedora-atomic/f21/x86_64/docker-host
Copying /etc changes: 26 modified, 4 removed, 36 added
Transaction complete; bootconfig swap: yes deployment count change: 1
-bash-4.3#
-bash-4.3# ostree admin status
  fedora-atomic 153f577dc4b039e53abebd7c13de6dfafe0fb64b4fdc2f5382bdf59214ba7acb.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* fedora-atomic ba7ee9475c462c9265517ab1e5fb548524c01a71709539bbe744e5fdccf6288b.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
  2015-01-31 21:08:35     153f577dc4     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3# reboot

Note that I had to manually update the ref to point to the commit I downloaded in order to get this to work. I'm not sure why this is necessary, but it wouldn't work otherwise.

OK, now I had a system using the same tree that I was on when I saw the failure. Let's check to see if it still happens:

-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
* 2015-01-31 21:08:35     153f577dc4     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm -q docker-io systemd
docker-io-1.4.1-5.fc21.x86_64
systemd-216-17.fc21.x86_64
-bash-4.3#
-bash-4.3# docker run --rm --memory 500M busybox echo "I'm Alive"
FATA[0003] Error response from daemon: Cannot start container d79629bfddc7833497b612e2b6d4cc2542ce9a8c2253d39ace4434bbd385185b: write /sys/fs/cgroup/memory/system.slice/docker-d79629bfddc7833497b612e2b6d4cc2542ce9a8c2253d39ace4434bbd385185b.scope/memory.memsw.limit_in_bytes: invalid argument

Yep! Looks like it consistently happens. This is good, because it means anyone can use this reproducer to verify the problem on their own. For completeness I'll go ahead and roll back the system to show that the problem goes away when back in the old state:

-bash-4.3# rpm-ostree rollback
Moving 'ba7ee9475c462c9265517ab1e5fb548524c01a71709539bbe744e5fdccf6288b.0' to be first deployment
Transaction complete; bootconfig swap: yes deployment count change: 0
Changed:
  NetworkManager-1:0.9.10.0-13.git20140704.fc21.x86_64
  NetworkManager-glib-1:0.9.10.0-13.git20140704.fc21.x86_64
  ...
  ...
Removed:
  flannel-0.2.0-1.fc21.x86_64
Sucessfully reset deployment order; run "systemctl reboot" to start a reboot
-bash-4.3# reboot

And the final test:

-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
* 2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2015-01-31 21:08:35     153f577dc4     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3# docker run --rm --memory 500M busybox echo "I'm Alive"
I'm Alive

Bliss! And you can thank Atomic Host for that.

Dusty

Fedora 21 now available on Digital Ocean

cross posted from this fedora magazine post

It's raining Droplets! Fedora 21 has landed in Digital Ocean's cloud hosting. Fedora 21 offers a fantastic cloud image for developers, and it's now easy for Digital Ocean users to spin it up and get started! Here are a couple of tips:

  • Like with other Digital Ocean images, you will log in with your ssh key as root rather than the typical fedora user that you may be familiar with when logging in to a Fedora cloud image.
  • This is the first time Digital Ocean has SELinux enabled by default (yay for security). If you want or need to you can still easily switch back to permissive mode; Red Hat's Dan Walsh may have a "shame on you" or two for you though.
  • Fedora 21 should be available in all the newest datacenters in each region, but some legacy datacenters aren't supported. If you have a problem you think is Fedora specific then drop us an email, ping us in #fedora-cloud on freenode, or visit the Fedora cloud trac to see if it is already being worked on.

Happy Developing!
Dusty

PS If anyone wants a $10 credit for creating a new account you can use my referral link

F21 Atomic Test Day && Test steps for Atomic Host

Test Day on Thursday 11/20

The F21 test day for atomic is this Thursday, November 20th. If anyone can participate please do drop into #atomic on freenode as it will be great to have more people involved in helping build/test this new technology.

In anticipation of the test day I have put together some test notes for other people to follow in hopes that it will help smooth things along.

Booting with cloud-init

The first step is to start an atomic host using any method/cloud provider you like. I decided to use OpenStack, since I have Juno running on F21 here in my apartment. I used this user-data for the atomic host:

#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
  - [ sh, -c, 'echo -e "ROOT_SIZE=4G\nDATA_SIZE=10G" > /etc/sysconfig/docker-storage-setup']

Note that the build of atomic I used for this testing resides here.

Verifying docker-storage-setup

docker-storage-setup is a service that can be used to configure docker's storage in different ways on instance bringup. Notice in the user-data above that I set config variables for docker-storage-setup. They basically mean that I want my atomicos/root LV resized to 4G and an atomicos/docker-data LV created at 10G in size.
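
If the service ran successfully, the runcmd from the user-data above should have left the following behind (a sketch of the expected contents):

# cat /etc/sysconfig/docker-storage-setup
ROOT_SIZE=4G
DATA_SIZE=10G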

To verify the storage was set up successfully, log in (as the fedora user) and become root (using sudo su -). Now you can check whether docker-storage-setup worked by checking the logs as well as looking at the output of lsblk:

# journalctl -o cat --unit docker-storage-setup.service
CHANGED: partition=2 start=411648 old: size=12171264 end=12582912 new: size=41531232,end=41942880
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Size of logical volume atomicos/root changed from 1.95 GiB (500 extents) to 4.00 GiB (1024 extents).
Logical volume root successfully resized
Rounding up size to full physical extent 24.00 MiB
Logical volume "docker-meta" created
Logical volume "docker-data" created
#
# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                       252:0    0   20G  0 disk
├─vda1                    252:1    0  200M  0 part /boot
└─vda2                    252:2    0 19.8G  0 part
  ├─atomicos-root         253:0    0    4G  0 lvm  /sysroot
  ├─atomicos-docker--meta 253:1    0   24M  0 lvm
  └─atomicos-docker--data 253:2    0   10G  0 lvm

Verifying Docker Lifecycle

To verify Docker runs fine on the atomic host we will perform a simple run of the busybox docker image. This will contact the docker hub, pull down the image, and run /bin/true:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
Unable to find image 'busybox' locally
Pulling repository busybox
e72ac664f4f0: Download complete
511136ea3c5a: Download complete
df7546f9f060: Download complete
e433a6c5b276: Download complete
PASS

After the Docker daemon has started the LVs that were created by docker-storage-setup will be used by device mapper as shown in the lsblk output below:

# lsblk
NAME                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                               252:0    0   20G  0 disk
├─vda1                            252:1    0  200M  0 part /boot
└─vda2                            252:2    0 19.8G  0 part
  ├─atomicos-root                 253:0    0    4G  0 lvm  /sysroot
  ├─atomicos-docker--meta         253:1    0   24M  0 lvm
  │ └─docker-253:0-6298462-pool   253:3    0   10G  0 dm
  │   └─docker-253:0-6298462-base 253:4    0   10G  0 dm
  └─atomicos-docker--data         253:2    0   10G  0 lvm
    └─docker-253:0-6298462-pool   253:3    0   10G  0 dm
      └─docker-253:0-6298462-base 253:4    0   10G  0 dm

Atomic Host: Upgrade

Now on to an atomic upgrade. First let's check what commit we are currently on and save it in /etc/file1 for later:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME                REFSPEC
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host    fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
* fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /ostree/repo/refs/heads/ostree/0/1/0
1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84
#
# cat /ostree/repo/refs/heads/ostree/0/1/0 > /etc/file1

Now run an upgrade to the latest atomic compose:

# rpm-ostree upgrade
Updating from: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
14 metadata, 19 content objects fetched; 33027 KiB transferred in 16 seconds
Copying /etc changes: 26 modified, 4 removed, 39 added
Transaction complete; bootconfig swap: yes deployment count change: 1
Updates prepared for next boot; run "systemctl reboot" to start a reboot

And do a bit of poking around right before we reboot:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME                REFSPEC
  2014-11-13 10:52:06     18e02c4166     fedora-atomic-host    fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host    fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
  fedora-atomic-host 18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# reboot

Note that the * in the above output indicates which tree is currently booted.

After the reboot the new tree should be booted. Let's check things out and make /etc/file2 with our new commit hash in it:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME                REFSPEC
* 2014-11-13 10:52:06     18e02c4166     fedora-atomic-host    fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host    fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
* fedora-atomic-host 18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /ostree/repo/refs/heads/ostree/1/1/0
18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0
#
# cat /ostree/repo/refs/heads/ostree/1/1/0 > /etc/file2

As one final item let's boot up a docker container to make sure things still work there:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
PASS

Atomic Host: Rollback

Atomic host provides the ability to revert to the previous working tree if things go awry with the new tree. Let's revert our upgrade now and make sure things still work:

# rpm-ostree rollback
Moving '1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0' to be first deployment
Transaction complete; bootconfig swap: yes deployment count change: 0
Sucessfully reset deployment order; run "systemctl reboot" to start a reboot
#
# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME                REFSPEC
  2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host    fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-11-13 10:52:06     18e02c4166     fedora-atomic-host    fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# reboot

After reboot:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME                REFSPEC
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host    fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-11-13 10:52:06     18e02c4166     fedora-atomic-host    fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /etc/file1
1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84
# cat /etc/file2
cat: /etc/file2: No such file or directory

Notice that /etc/file2 did not exist until after the upgrade, so it did not persist through the rollback.

And the final item on the list is to make sure Docker still works:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
PASS

Anddd Boom.. You have just put atomic through some paces.