F21 Atomic Test Day && Test steps for Atomic Host

Test Day on Thursday 11/20

The F21 test day for atomic is this Thursday, November 20th. If you can participate, please drop into #atomic on freenode; it will be great to have more people involved in helping build/test this new technology.

In anticipation of the test day I have put together some test notes for others to follow, in the hope that they will help smooth things along.

Booting with cloud-init

The first step is to start an atomic host using any method/cloud provider you like. I decided to use OpenStack since I have Juno running on F21 here in my apartment. I used this user-data for the atomic host:

#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
 - [ sh, -c, 'echo -e "ROOT_SIZE=4G\nDATA_SIZE=10G" > /etc/sysconfig/docker-storage-setup']

Note that the build of atomic I used for this testing resides here.

Verifying docker-storage-setup

docker-storage-setup is a service that configures docker's storage in different ways on instance bring-up. Notice in the user-data above that I set config variables for docker-storage-setup. They mean that I want to resize my atomicos/root LV to 4G and create an atomicos/docker-data LV that is 10G in size.
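
In other words, once cloud-init has run, /etc/sysconfig/docker-storage-setup should contain something like this (a sketch based on the user-data above, not captured from a live instance):

# /etc/sysconfig/docker-storage-setup
ROOT_SIZE=4G
DATA_SIZE=10G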

To verify the storage was set up successfully, log in (as the fedora user) and become root (using sudo su -). Now you can verify that docker-storage-setup worked by checking the logs and by looking at the output from lsblk:

# journalctl -o cat --unit docker-storage-setup.service
CHANGED: partition=2 start=411648 old: size=12171264 end=12582912 new: size=41531232,end=41942880
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Size of logical volume atomicos/root changed from 1.95 GiB (500 extents) to 4.00 GiB (1024 extents).
Logical volume root successfully resized
Rounding up size to full physical extent 24.00 MiB
Logical volume "docker-meta" created
Logical volume "docker-data" created
#
# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                       252:0    0   20G  0 disk
├─vda1                    252:1    0  200M  0 part /boot
└─vda2                    252:2    0 19.8G  0 part
  ├─atomicos-root         253:0    0    4G  0 lvm  /sysroot
  ├─atomicos-docker--meta 253:1    0   24M  0 lvm
  └─atomicos-docker--data 253:2    0   10G  0 lvm

Verifying Docker Lifecycle

To verify Docker runs fine on the atomic host we will perform a simple run of the busybox docker image. This will contact the docker hub, pull down the image, and run /bin/true:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
Unable to find image 'busybox' locally
Pulling repository busybox
e72ac664f4f0: Download complete
511136ea3c5a: Download complete
df7546f9f060: Download complete
e433a6c5b276: Download complete
PASS

After the Docker daemon has started, the LVs that were created by docker-storage-setup will be used by device mapper, as shown in the lsblk output below:

# lsblk
NAME                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                               252:0    0   20G  0 disk
├─vda1                            252:1    0  200M  0 part /boot
└─vda2                            252:2    0 19.8G  0 part
  ├─atomicos-root                 253:0    0    4G  0 lvm  /sysroot
  ├─atomicos-docker--meta         253:1    0   24M  0 lvm
  │ └─docker-253:0-6298462-pool   253:3    0   10G  0 dm
  │   └─docker-253:0-6298462-base 253:4    0   10G  0 dm
  └─atomicos-docker--data         253:2    0   10G  0 lvm
    └─docker-253:0-6298462-pool   253:3    0   10G  0 dm
      └─docker-253:0-6298462-base 253:4    0   10G  0 dm

Atomic Host: Upgrade

Now on to an atomic upgrade. First let's check what commit we are currently on and store its hash in /etc/file1 so we can refer to it later:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
* fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /ostree/repo/refs/heads/ostree/0/1/0
1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84
#
# cat /ostree/repo/refs/heads/ostree/0/1/0 > /etc/file1

Now run an upgrade to the latest atomic compose:

# rpm-ostree upgrade
Updating from: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
14 metadata, 19 content objects fetched; 33027 KiB transferred in 16 seconds
Copying /etc changes: 26 modified, 4 removed, 39 added
Transaction complete; bootconfig swap: yes deployment count change: 1)
Updates prepared for next boot; run "systemctl reboot" to start a reboot

And do a bit of poking around right before we reboot:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
  2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
  fedora-atomic-host 18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# reboot

Note that the * in the above output indicates which tree is currently booted.

After the reboot the new tree should be booted. Let's check things out and create /etc/file2 containing the new commit hash:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
* fedora-atomic-host 18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /ostree/repo/refs/heads/ostree/1/1/0
18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0
#
# cat /ostree/repo/refs/heads/ostree/1/1/0 > /etc/file2

As one final item, let's run a docker container to make sure things still work there:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
PASS

Atomic Host: Rollback

Atomic host provides the ability to revert to the previous working tree if things go awry with the new tree. Let's revert our upgrade now and make sure things still work:

# rpm-ostree rollback
Moving '1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0' to be first deployment
Transaction complete; bootconfig swap: yes deployment count change: 0)
Sucessfully reset deployment order; run "systemctl reboot" to start a reboot
#
# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
  2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# reboot

After reboot:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /etc/file1
1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84
# cat /etc/file2
cat: /etc/file2: No such file or directory

Notice that /etc/file2 was not created until after the upgrade, so it did not survive the rollback; each deployment carries its own copy of /etc.

And the final item on the list is to make sure Docker still works:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
PASS

And boom! You have just put atomic through some paces.

Capture Elusive cloud-init Debug Output With journalctl


Recently I have been trying to debug some problems with cloud-init in the alpha versions of cloud images for CentOS 7 and Fedora 21. What I have found is that it's not so straightforward to figure out how to set up debug logging.

The defaults (defined in /etc/cloud/cloud.cfg.d/05_logging.cfg) for some reason don't really capture the debug output in /var/log/cloud-init.log. Luckily, though, on systemd based systems we can get most of that output by using journalctl. There are several services related to cloud-init, and if you want to get the output from all of them you can just use wildcard matching in journalctl (freshly added in ea18a4b) like so:

[root@f21test ~]# journalctl --unit cloud-*
...debug...debug...blah...blah

This worked great on Fedora 21, but on CentOS/RHEL 7 it won't work because the systemd there predates wildcard matching. So I found another way to get the same output. It just so happens that all of the services use the same executable (/usr/bin/cloud-init), so I was able to use that as a trigger:

[root@c7test ~]# journalctl /usr/bin/cloud-init
...debug...debug...blah...blah

I hope others can find this useful when debugging cloud-init.

Cheers,
Dusty

Docker: Copy Into A Container Volume


I needed to copy a few files into my docker container. Should be easy, right? Turns out it's not so trivial. In Docker 1.0.0 and earlier the docker cp command can be used to copy files from a container to the host, but not the other way around...

Most of the time you can work around this by using an ADD statement in the Dockerfile, but I often need to populate some data within data-only volume containers before I start other containers that use the data. To copy data into the volume you can use tar and pipe the contents into the volume within a new container, like so:
[root@localhost ~]# docker run -d -i -t -v /volumes/wpdata --name wpdata mybusybox sh
416ea2a877267f566ef8b054a836e8b6b2550b347143c4fe8ed2616e11140226
[root@localhost ~]#
[root@localhost ~]# tar -c files/ | docker run -i --rm -w /volumes/wpdata/ --volumes-from wpdata mybusybox tar -xv
files/
files/file8.txt
files/file9.txt
files/file4.txt
files/file7.txt
files/file1.txt
files/file6.txt
files/file2.txt
files/file5.txt
files/file10.txt
files/file3.txt

So, in the example I created a new data-only volume container named wpdata and then ran tar to pipe the contents of a directory into a new container that used the same volumes as the original container. Not so tough, but not as easy as docker cp. I think docker cp should gain this functionality sometime in the future (issue tracker here).
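
For completeness, the same trick works in the other direction as well. This is just a sketch reusing the wpdata container from above; it streams the files back out of the volume into the current directory on the host:

[root@localhost ~]# docker run --rm -w /volumes/wpdata --volumes-from wpdata mybusybox tar -c files/ | tar -xv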

Enjoy

Dusty

Creating Your Own Minimal Docker Image in Fedora


Sometimes it can be useful to have a docker image with just the bare essentials. Maybe you want a container with just enough to run your app, or you are using something like data volume containers and want just enough to browse the filesystem. Either way, you can create your own minimalist busybox image on Fedora with a pretty simple script.

The script below was inspired in part by Marek Goldmann's post about creating a minimal image for wildfly, and in part by the busybox website.

# cd to a temporary directory
tmpdir=$(mktemp -d)
pushd $tmpdir

# Get and extract busybox
yumdownloader busybox
rpm2cpio busybox*rpm | cpio -imd
rm -f busybox*rpm

# Create symbolic links back to busybox
for i in $(./sbin/busybox --list); do
    ln -s /sbin/busybox ./sbin/$i
done

# Create container
tar -c . | docker import - mybusybox

# Go back to old pwd
popd

After running the script there is a new image on your system with the mybusybox tag. You can run it and take a look around like so:
[root@localhost ~]# docker images mybusybox
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
mybusybox           latest              f526db9e0d80        12 minutes ago      1.309 MB
[root@localhost ~]#
[root@localhost ~]# docker run -i -t mybusybox /sbin/busybox sh
# ls -l /sbin/ls
lrwxrwxrwx    1 0        0               13 Jul  8 02:15 /sbin/ls -> /sbin/busybox
#
# ls /
dev   etc   proc  sbin  sys   usr
#
# df -kh .
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-394094-addac9507205082fbd49c8f45bbd0316fd6b3efbb373bb1d717a3ccf44b8a97e
                          9.7G     23.8M      9.2G   0% /

Enjoy!

Dusty

Manual Linux Installs with Funky Storage Configurations

Introduction


I often find that my taste in hard drive configurations for my installed systems is a bit outside the norm. I like playing with thin LVs, BTRFS snapshots, or whatever new thing is around the corner. The Anaconda UI has been adding support for these fringe cases, but I still find it hard to get Anaconda to do what I want in certain situations.

An example of this happened most recently when I went to reformat and install Fedora 20 on my laptop. Ultimately what I wanted was encrypted root and swap devices and btrfs filesystems on root and boot. One other requirement was that I needed to leave sda4 (a Windows partition) completely intact. In the end the configuration should look something like:

[root@lintop ~]# lsblk /dev/sda
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 465.8G  0 disk
├─sda1           8:1    0     1G  0 part  /boot
├─sda2           8:2    0     4G  0 part
│ └─cryptoswap 253:1    0     4G  0 crypt [SWAP]
├─sda3           8:3    0 299.2G  0 part
│ └─cryptoroot 253:0    0 299.2G  0 crypt /
└─sda4           8:4    0 161.6G  0 part

After a few failed attempts with Anaconda I decided to do a custom install instead.

Custom Install


I used the Fedora 20 install DVD (and thus the Anaconda environment) to do the install, but I performed all the steps manually by switching to a different terminal with a prompt. First off I used fdisk to partition the disk the way I wanted. The results looked like:
[anaconda root@localhost ~]# fdisk -l /dev/sda
Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xcfe1cf72

Device    Boot     Start       End    Blocks  Id System
/dev/sda1 *         2048   2099199   1048576  83 Linux
/dev/sda2        2099200  10487807   4194304  82 Linux swap / Solaris
/dev/sda3       10487808 637945855 313729024  83 Linux
/dev/sda4      637945856 976773119 169413632   7 HPFS/NTFS/exFAT

Next I set up the encrypted root device (/dev/sda3) and created btrfs filesystems on both boot (/dev/sda1) and the encrypted root device (/dev/mapper/cryptoroot):

[anaconda root@localhost ~]# cryptsetup luksFormat /dev/sda3
...
[anaconda root@localhost ~]# cryptsetup luksOpen /dev/sda3 cryptoroot
Enter passphrase for /dev/sda3:
[anaconda root@localhost ~]#
[anaconda root@localhost ~]# mkfs.btrfs --force --label=root /dev/mapper/cryptoroot
...
fs created label root on /dev/mapper/cryptoroot
...
[anaconda root@localhost ~]# mkfs.btrfs --force --label=boot --mixed /dev/sda1
...
fs created label boot on /dev/sda1
...

Next, if you want to use the yum CLI then you need to install it, because some of its files are left out of the environment by default. Below I show the error you get and how to fix it:

[anaconda root@localhost ~]# yum list
Traceback (most recent call last):
  File "/bin/yum", line 28, in <module>
    import yummain
ImportError: No module named yummain
[anaconda root@localhost ~]# rpm -ivh --nodeps /run/install/repo/Packages/y/yum-3.4.3-106.fc20.noarch.rpm
...

I needed to set up a repo that used the DVD as the source:

[anaconda root@localhost ~]# cat <<EOF > /etc/yum.repos.d/repo.repo
[dvd]
name=dvd
baseurl=file:///run/install/repo
enabled=1
gpgcheck=0
EOF

Now I could mount my root device on /mnt/sysimage and then lay down the basic filesystem tree by installing the filesystem package into it:

[anaconda root@localhost ~]# mount /dev/mapper/cryptoroot /mnt/sysimage/
[anaconda root@localhost ~]# yum install -y --installroot=/mnt/sysimage filesystem
...
Complete!

Now I can mount boot and other filesystems into the /mnt/sysimage tree:

[anaconda root@localhost ~]# mount /dev/sda1 /mnt/sysimage/boot/
[anaconda root@localhost ~]# mount -v -o bind /dev /mnt/sysimage/dev/
mount: /dev bound on /mnt/sysimage/dev.
[anaconda root@localhost ~]# mount -v -o bind /run /mnt/sysimage/run/
mount: /run bound on /mnt/sysimage/run.
[anaconda root@localhost ~]# mount -v -t proc proc /mnt/sysimage/proc/
mount: proc mounted on /mnt/sysimage/proc.
[anaconda root@localhost ~]# mount -v -t sysfs sys /mnt/sysimage/sys/
mount: sys mounted on /mnt/sysimage/sys.

Now I was ready for the actual install. For this install I went with a small set of packages (I'll use yum to add what I want later, once the system is up):

[anaconda root@localhost ~]# yum install -y --installroot=/mnt/sysimage @core @standard kernel grub2 grub2-tools btrfs-progs
...
Complete!

After the install there are a few housekeeping items to take care of. I started by populating crypttab, populating fstab, changing the root password, and touching /.autorelabel to trigger an SELinux relabel on first boot:

[anaconda root@localhost ~]# chroot /mnt/sysimage/
[anaconda root@localhost /]# cat <<EOF > /etc/crypttab
cryptoswap /dev/sda2 /dev/urandom swap
cryptoroot /dev/sda3 -
EOF
[anaconda root@localhost /]# cat <<EOF > /etc/fstab
LABEL=boot             /boot btrfs defaults 1 2
/dev/mapper/cryptoswap swap  swap  defaults 0 0
/dev/mapper/cryptoroot /     btrfs defaults 1 1
EOF
[anaconda root@localhost /]# passwd --stdin root <<< "password"
Changing password for user root.
passwd: all authentication tokens updated successfully.
[anaconda root@localhost /]# touch /.autorelabel

Next I needed to install grub and make a config file. I set the grub kernel command line arguments and then generated a config. The config needed some fixing up (I am not using EFI on my system, but grub2-mkconfig thought I was because I had booted through EFI off of the install CD).

[anaconda root@localhost /]# echo 'GRUB_CMDLINE_LINUX="ro root=/dev/mapper/cryptoroot"' > /etc/default/grub
[anaconda root@localhost /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.11.10-301.fc20.x86_64
Found initrd image: /boot/initramfs-3.11.10-301.fc20.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-81c04e9030594ef6a5265a95f58ccf98
Found initrd image: /boot/initramfs-0-rescue-81c04e9030594ef6a5265a95f58ccf98.img
done
[anaconda root@localhost /]# sed -i s/linuxefi/linux/ /boot/grub2/grub.cfg
[anaconda root@localhost /]# sed -i s/initrdefi/initrd/ /boot/grub2/grub.cfg
[anaconda root@localhost /]# grub2-install -d /usr/lib/grub/i386-pc/ /dev/sda
Installation finished. No error reported.

NOTE: grub2-mkconfig didn't find my windows partition until I rebooted into the system and ran it again.

Finally I re-executed dracut to pick up the crypttab, exited the chroot, unmounted the filesystems, and rebooted into my new system:
[anaconda root@localhost /]# dracut --kver 3.11.10-301.fc20.x86_64 --force
[anaconda root@localhost /]# exit
[anaconda root@localhost ~]# umount /mnt/sysimage/{boot,dev,run,sys,proc}
[anaconda root@localhost ~]# reboot

After booting into Fedora I was then able to run grub2-mkconfig again and get it to recognize my (untouched) Windows partition:

[root@localhost /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.11.10-301.fc20.x86_64
Found initrd image: /boot/initramfs-3.11.10-301.fc20.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-375c7019484a45838666c572d241249a
Found initrd image: /boot/initramfs-0-rescue-375c7019484a45838666c572d241249a.img
Found Windows 7 (loader) on /dev/sda4
done

And that's pretty much it. Using this method you can have virtually any hard drive setup you desire. I hope someone else finds this useful.

Dusty

P.S. You can start sshd in anaconda by running systemctl start anaconda-sshd.service.

TermRecord: Terminal Screencast in a Self-Contained HTML File

Introduction


Some time ago I wrote a few posts (1, 2) on how to use script to record a terminal session and then scriptreplay to play it back. This can be very useful because it gives you the power to show others exactly what happens when you do <insert anything here>.
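
As a quick refresher, that script/scriptreplay workflow looks roughly like this (a minimal sketch; script writes its timing data to stderr):

[root@localhost ~]# script -t 2> screencast.timing screencast.log
... do interesting things ...
[root@localhost ~]# exit
[root@localhost ~]# scriptreplay -t screencast.timing screencast.log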

I was happy with this solution for a while, until one day Wolfgang Richter commented on my original post and shared a project he has been working on, known as TermRecord.

I gave it a spin and have been using it quite a bit. Sharing a terminal recording becomes much easier: you can simply email the .html file, or host it yourself and share links. As long as the people you are sharing with have a browser, they can watch the playback. Thus it is not tied to a system with a particular piece of software, and clicking a link to view is very easy to do :)

Basics of TermRecord


Before anything else we need to install TermRecord. Currently TermRecord is available in the Python package index (hopefully it will be packaged in some major distributions soon) and can be installed using pip.
[root@localhost ~]# pip install TermRecord
Downloading/unpacking TermRecord
  Downloading TermRecord-1.1.3.tar.gz (49kB): 49kB downloaded
  Running setup.py egg_info for package TermRecord
...
...
Successfully installed TermRecord Jinja2 markupsafe
Cleaning up...
Now you can make a self-contained html file for sharing in a couple of ways.

First, you can convert existing timing and log files (created with the script command) by specifying them as inputs to TermRecord:
[root@localhost ~]# TermRecord -o screencast.html -t screencast.timing -s screencast.log

The other option is to create a new recording using TermRecord like so:
[root@localhost ~]# TermRecord -o screencast.html
Script started, file is /tmp/tmp5I4SYq
[root@localhost ~]#
[root@localhost ~]# #This is a screencast.
[root@localhost ~]# exit
exit
Script done, file is /tmp/tmp5I4SYq

And... done. Now you can email or share the html file any way you like. If you would like to see some examples of terminal recordings you can check out the TermRecord github page, or here is one from my previous post on wordpress/docker.

Cheers,
Dusty

Zero to WordPress on Docker in 5 Minutes

Introduction


Docker is an emerging technology that has garnered a lot of momentum in the past year. I have been busy with a move to NYC and a job change (now officially a Red Hatter), so I am just now getting around to getting my feet wet with Docker.

Last night I sat down and decided to bang out some steps for installing wordpress in a docker container. Eventually I plan to move this site into a container so I figured this would be a good first step.

DockerPress


There are a few bits and pieces that need to be put in place to configure wordpress. For simplicity I decided to make this wordpress instance use sqlite rather than mysql. With all of that in mind, here is the basic recipe for wordpress:
  • Install apache and php.
  • Download wordpress and extract to appropriate folder.
  • Download the sqlite-integration plugin and extract.
  • Modify a few files...and DONE.
This is easily automated by creating a Dockerfile and using docker. The minimal Dockerfile (with comments) is shown below:
FROM goldmann/f20
MAINTAINER Dusty Mabe

# Install httpd and update openssl
RUN yum install -y httpd openssl unzip php php-pdo

# Download and extract wordpress
RUN curl -o wordpress.tar.gz http://wordpress.org/latest.tar.gz
RUN tar -xzvf wordpress.tar.gz --strip-components=1 --directory /var/www/html/
RUN rm wordpress.tar.gz

# Download plugin to allow WP to use sqlite
# http://wordpress.org/plugins/sqlite-integration/installation/
# - Move sqlite-integration folder to wordpress/wp-content/plugins folder.
# - Copy db.php file in sqlite-integration folder to wordpress/wp-content folder.
# - Rename wordpress/wp-config-sample.php to wordpress/wp-config.php.
#
RUN curl -o sqlite-plugin.zip http://downloads.wordpress.org/plugin/sqlite-integration.1.6.3.zip
RUN unzip sqlite-plugin.zip -d /var/www/html/wp-content/plugins/
RUN rm sqlite-plugin.zip
RUN cp /var/www/html/wp-content/{plugins/sqlite-integration/db.php,}
RUN cp /var/www/html/{wp-config-sample.php,wp-config.php}
#
# Fix permissions on all of the files
RUN chown -R apache /var/www/html/
RUN chgrp -R apache /var/www/html/
#
# Update keys/salts in wp-config for security
RUN RE='put your unique phrase here'; for i in {1..8}; do KEY=$(openssl rand -base64 40); sed -i "0,/$RE/s|$RE|$KEY|" /var/www/html/wp-config.php; done;
#
# Expose port 80 and set httpd as our entrypoint
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D", "FOREGROUND"]

With the power of the Dockerfile you can now build a new image using docker build and then run the new container with the docker run command. An example of these two commands is shown below:
[root@localhost ~]# ls Dockerfile
Dockerfile
[root@localhost ~]# docker build -t "wordpress" .
...
Successfully built 0b388013905e
...
[root@localhost ~]#
[root@localhost ~]# docker run -d -p 8080:80 -t wordpress
6da59c864d35bb0bb6043c09eb8b1128b2c1cb91f7fa456156df4a0a22f271b0

The docker build command will build an image from the Dockerfile and then tag the new image with the "wordpress" tag. The docker run command will run a new container based on the "wordpress" image and bind port 8080 from the host machine to port 80 within the container.
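
Before opening a browser you can sanity-check that the container is answering; here is a quick (hypothetical) probe of the forwarded port from the host:

[root@localhost ~]# curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080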

Now you can happily point your browser to http://localhost:8080 and see the wordpress 5 minute installation screen:

[Screenshot: the WordPress five-minute installation screen]
See a full screencast of the "zero to wordpress" process using docker here.
Download the Dockerfile here.

Cheers!
Dusty

NOTE: This was done on Fedora 20 with docker-io-0.9.1-1.fc20.x86_64.

Fedup 19 to 20 with a Thin LVM Configuration

Introduction


I have been running my home desktop on thin logical volumes for a while now. I have enjoyed the flexibility of this setup and I like taking a snapshot before making any big changes. Recently I decided to update to Fedora 20 from Fedora 19, and I hit some trouble along the way because the Fedora 20 initramfs (images/pxeboot/upgrade.img) that fedup uses for the upgrade does not have support for thin logical volumes. After running fedup and rebooting, you end up with a message on the screen that looks something like this:
[  OK  ] Started Show Plymouth Boot Screen.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Basic System.
[  191.023332] dracut-initqueue[363]: Warning: Could not boot.
[  191.028263] dracut-initqueue[363]: Warning: /dev/mapper/vg_root-thin_root does not exist
[  191.029689] dracut-initqueue[363]: Warning: /dev/vg_root/thin_root does not exist
Starting Dracut Emergency Shell...
Warning: /dev/mapper/vg_root-thin_root does not exist
Warning: /dev/vg_root/thin_root does not exist

Generating "/run/initramfs/rdsosreport.txt"

Entering emergency mode. Exit the shell to continue.

Working Around the Issue


First off, install and run fedup:
[root@localhost ~]# yum update -y fedup fedora-release &>/dev/null
[root@localhost ~]# fedup --network 20 &>/dev/null

After running fedup you would usually be able to reboot and go directly into the upgrade process. In our case we need to add a few helper utilities (thin_dump, thin_check, thin_restore) to the initramfs so that thin LVs will work. This can be done by appending more files, in a cpio archive, to the end of the initramfs that fedup downloaded. I learned about this technique by peeking at the initramfs_append_files() function within fedup's boot.py. Note that I also had to append a few libraries that the utilities require.

[root@localhost ~]# cpio -co >> /boot/initramfs-fedup.img << EOF
/lib64/libexpat.so.1
/lib64/libexpat.so.1.6.0
/lib64/libstdc++.so.6
/lib64/libstdc++.so.6.0.18
/usr/sbin/thin_dump
/usr/sbin/thin_check
/usr/sbin/thin_restore
EOF
4334 blocks
[root@localhost ~]#

And that's it. You are now able to reboot into the upgrade environment and watch the upgrade. If you'd like to watch a (rather lengthy) screencast of the entire process then you can download the screencast.log and screencast.timing files and follow the instructions here.

Dusty

Nested Virt and Fedora 20 Virt Test Day

Introduction


I decided this year to take part in the Fedora Virtualization Test Day on October 8th. In order to take part I needed a system with Fedora 20 installed so that I could create VMs on top of it. Since I like my current setup and didn't have a hard drive lying around that I wanted to wipe, I decided to give nested virtualization a shot.

Most of the documentation I have seen for nested virtualization comes from Kashyap Chamarthy. Relevant posts are here, here, and here. He has done a great job with these tutorials, and this post is nothing more than my notes on what worked for me.

Steps


With nested virtualization, the OS/hypervisor that touches the physical hardware is known as L0. The first level of virtualized guest is known as L1, and the second level (the guest inside a guest) is known as L2. In my setup I ultimately wanted F19 (L0), F20 (L1), and F20 (L2).

First, in order to pass the Intel VMX extensions along to the guest, I created a modprobe config file that instructs the kvm_intel kernel module to allow nested virtualization support:

[root@L0 ~]# echo "options kvm-intel nested=y" > /etc/modprobe.d/nestvirt.conf
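
If a reboot is inconvenient, unloading and reloading the module should also pick up the new option (a sketch, assuming no guests are running on L0 yet):

[root@L0 ~]# modprobe -r kvm_intel
[root@L0 ~]# modprobe kvm_intel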

After a reboot I can now confirm the kvm_intel module is configured for nested virt:

[root@L0 ~]# cat /sys/module/kvm_intel/parameters/nested
Y

Next I converted an existing Fedora 20 installation to use "host-passthrough" (see here) so that the L1 guest would see the same processor (with VMX extensions) as my L0 host. To do this I modified the cpu xml tags as follows in the libvirt xml definition:

<cpu mode='host-passthrough'> </cpu>
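
One way to make that edit is with virsh edit, which opens the domain XML in an editor (f20-guest here is a hypothetical domain name):

[root@L0 ~]# virsh edit f20-guest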

After powering up the guest I can see that the processor the L1 guest sees is indeed the same as the host's:
[root@L1 ~]# cat /proc/cpuinfo | grep "model name"
model name	: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
model name	: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
model name	: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
model name	: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz

Next I enabled nested virt in the L1 guest by adding the same modprobe config file as I did on L0. I did this based on a tip from Kashyap in the #fedora-test-day chat that it tends to give about a 10X performance improvement in the L2 guests.

[root@L1 ~]# echo "options kvm-intel nested=y" > /etc/modprobe.d/nestvirt.conf

After a reboot I could then create and install L2 guests using virt-install and virt-manager. This seemed to work fine, except that I would periodically see an unknown NMI in the guest:

[   14.324786] Uhhuh. NMI received for unknown reason 30 on CPU 0.
[   14.325046] Do you have a strange power saving mode enabled?
[   14.325046] Dazed and confused, but trying to continue

I believe the issue I was seeing may be documented in kernel BZ#58941. After asking about it in the chat I was informed that for the best experience with nested virt I should move to a 3.12 kernel. I decided to leave that exercise for another day :)

Have a great day!

Dusty

BTRFS: How big are my snapshots?

Introduction


I have been using BTRFS snapshots for a while now on my laptop to incrementally save the state of my machine before I perform system updates or run some harebrained test. I quickly ran into a problem, though: on a smaller filesystem I was running out of space. I wanted to be able to look at each snapshot and easily determine how much space I could recover by deleting it. Surprisingly, this information was not readily available. Of course you could determine the total size of each snapshot by using du, but that only tells you how big the entire snapshot is, not how much of it is exclusive to that snapshot alone.

Enter filesystem quota and qgroups, added in git commit 89fe5b5f666c247aa3173745fb87c710f3a71a4a. With quota and qgroups (see an overview here) we can now see how big each of those snapshots is, including exclusive usage.

Steps


The system I am using for this example is Fedora 19 with btrfs-progs-0.20.rc1.20130308git704a08c-1.fc19.x86_64 installed. I have a 2nd disk attached (/dev/sdb) that I will use for the BTRFS filesystem.

First things first, let's create a BTRFS filesystem on sdb, mount it, and then create a .snapshots directory.

[root@localhost ~]# mkfs.btrfs /dev/sdb

WARNING! - Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/sdb
	nodesize 4096 leafsize 4096 sectorsize 4096 size 10.00GB
Btrfs v0.20-rc1
[root@localhost ~]#
[root@localhost ~]# mount /dev/sdb /btrfs
[root@localhost ~]# mkdir /btrfs/.snapshots

Next let's copy some files into the filesystem. I will copy in a 50M file and then create a snapshot (snap1). Then I will copy in a 4151M file and take another snapshot (snap2). Finally, a 279M file and another snapshot (snap3).

[root@localhost ~]# cp /root/50M_File /btrfs/
[root@localhost ~]# btrfs subvolume snapshot /btrfs /btrfs/.snapshots/snap1
Create a snapshot of '/btrfs' in '/btrfs/.snapshots/snap1'
[root@localhost ~]#
[root@localhost ~]# cp /root/4151M_File /btrfs/
[root@localhost ~]# btrfs subvolume snapshot /btrfs /btrfs/.snapshots/snap2
Create a snapshot of '/btrfs' in '/btrfs/.snapshots/snap2'
[root@localhost ~]#
[root@localhost ~]# cp /root/279M_File /btrfs/
[root@localhost ~]# btrfs subvolume snapshot /btrfs /btrfs/.snapshots/snap3
Create a snapshot of '/btrfs' in '/btrfs/.snapshots/snap3'
[root@localhost ~]#
[root@localhost ~]# df -kh /btrfs/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         10G  4.4G  3.6G  55% /btrfs

Now, how much is each one of those snapshots taking up? We can see this information by enabling quota and then printing out the qgroup information:

[root@localhost ~]# btrfs quota enable /btrfs/
[root@localhost ~]#
[root@localhost ~]# btrfs qgroup show /btrfs/
0/5   4698025984 8192
0/257 52432896   4096
0/263 4405821440 12288
0/264 4698025984 8192

The first number on each line is the qgroup (subvolume) id. The second number is the amount of space contained within each subvolume (in bytes), and the last number is the amount of space that is exclusive to that subvolume (in bytes). For some reason when I see such large numbers I go brain dead and fail to comprehend how much space is actually being used, so I wrote a little perl script to convert the numbers to MB.
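
The script itself isn't shown in this post; a rough shell equivalent using awk (an assumption on my part, not the original perl) would be:

#!/bin/bash
# Hypothetical stand-in for /root/convert: reads `btrfs qgroup show` output
# on stdin and prints the referenced and exclusive columns in MB.
awk '{ printf "%-6s %dM %dM\n", $1, $2/1024/1024, $3/1024/1024 }'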

[root@localhost ~]# btrfs qgroup show /btrfs/ | /root/convert
0/5   4480M 0M
0/257 50M   0M
0/263 4201M 0M
0/264 4480M 0M

So that makes sense. The 1st snapshot (the 2nd line) contains 50M. The 2nd snapshot contains 50M+4151M, and the 3rd snapshot contains 50M+4151M+279M. We can also see that at the moment none of them have any exclusive content, because all of the data is shared among them.

We can fix that by deleting some of the files.

[root@localhost ~]# rm /btrfs/279M_File
rm: remove regular file ‘/btrfs/279M_File’? y
[root@localhost ~]# btrfs qgroup show /btrfs/ | /root/convert
0/5   4201M 0M
0/257 50M   0M
0/263 4201M 0M
0/264 4480M 278M

Now if we delete all of the files and view the qgroup info, what do we see?

[root@localhost ~]# rm -f /btrfs/4151M_File
[root@localhost ~]# rm -f /btrfs/50M_File
[root@localhost ~]# btrfs qgroup show /btrfs/ | /root/convert
0/5   0M    0M
0/257 50M   0M
0/263 4201M 0M
0/264 4480M 278M

We can see from the first line that the files have been removed from the root subvolume, but the exclusive counts didn't go up for snap1 and snap2. Why not?

This is because the files are still shared with snap3. If we remove snap3 then we'll see the exclusive number go up for snap2:

[root@localhost ~]# btrfs subvolume delete /btrfs/.snapshots/snap3
Delete subvolume '/btrfs/.snapshots/snap3'
[root@localhost ~]#
[root@localhost ~]# btrfs qgroup show /btrfs/ | /root/convert
0/5   -4480M -278M
0/257 50M    0M
0/263 4201M  4151M
0/264 4480M  278M

As expected the 2nd snapshot now shows 4151M as exclusive. However, unexpectedly the qgroup for the 3rd snapshot still exists and the root subvolume qgroup now shows negative numbers.

Finally, let's delete snap2 and observe that its exclusive space (4151M) is actually released back to the pool of free space:

[root@localhost ~]# df -kh /btrfs/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         10G  4.2G  3.9G  52% /btrfs
[root@localhost ~]#
[root@localhost ~]# btrfs subvolume delete /btrfs/.snapshots/snap2
Delete subvolume '/btrfs/.snapshots/snap2'
[root@localhost ~]#
[root@localhost ~]# btrfs qgroup show /btrfs/ | /root/convert
0/5   -8682M -4430M
0/257 50M    50M
0/263 4201M  4151M
0/264 4480M  278M
[root@localhost ~]#
[root@localhost ~]# df -kh /btrfs/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         10G   52M  8.0G   1% /btrfs

So we can see that the space is in fact released and is now counted as free space. Again, the negative numbers and the fact that qgroups persist for deleted subvolumes are a bit odd.

Cheers!

Dusty Mabe

Bonus: it seems there is a patch floating around to enhance the output of qgroup show. Check it out here.