Quick Audit Rules for a Sanity Check

Most of the time, when I really want to figure out what is going on deep within a piece of software, I break out strace and capture all the gory detail. Unfortunately, it isn't always easy to manipulate and run something from the command line, but I have found that some simple uses of the audit daemon can give you great insight without having to dig too deep.

Example Problem

I have a script, switch.py, that I want to call via a bound key sequence from the i3 window manager. However, I notice that nothing happens when I press the key sequence. Is the script failing, or is the script not getting called at all? auditd and auditctl can help us figure this out.
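For context, the i3 side of this is just a bindsym line that execs the script; a hypothetical snippet (the actual key combination here is made up, not from my real config):

# ~/.i3/config -- hypothetical binding; $mod+Shift+s is just an example
bindsym $mod+Shift+s exec /home/dustymabe/.i3/switch.py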

Using audit

To take advantage of system auditing the daemon must be up and running:

# systemctl status auditd.service | grep active
       Active: active (running) since Sun 2015-01-25 13:56:27 EST; 1 day 9h ago

You can then add a watch for read/write/execute/attribute accesses on the file:

# auditctl -w /home/dustymabe/.i3/switch.py -p rwxa -k 'switchtest'
# auditctl -l
-w /home/dustymabe/.i3/switch.py -p rwxa -k switchtest

Notice the use of the -k option to add a key to the rule. Any events that match the rule will be tagged with this key and can be easily found. Any accesses will be logged and can be viewed later using ausearch and aureport. After putting the rule in place, I accessed the file as a normal user from another terminal:

$ pwd
/home/dustymabe
$ cat  .i3/switch.py
... contents of file ...
$ ls .i3/switch.py
.i3/switch.py

Then I was able to use a combination of ausearch and aureport to easily see who accessed the file and how it was accessed:

# ausearch -k switchtest --raw | aureport --file

File Report
===============================================
# date time file syscall success exe auid event
===============================================
1. 01/26/15 22:59:26 .i3/switch.py 2 yes /usr/bin/cat 1000 1299
2. 01/26/15 23:00:19 .i3/switch.py 191 no /usr/bin/ls 1000 1300

Awesome.. So with auditing in place, all I have to do now is press the key sequence and see if my script is getting called. It turns out it was being called:

# ausearch -k switchtest --raw | aureport --file

File Report
===============================================
# date time file syscall success exe auid event
===============================================
1. 01/26/15 22:59:26 .i3/switch.py 2 yes /usr/bin/cat 1000 1299
2. 01/26/15 23:00:19 .i3/switch.py 191 no /usr/bin/ls 1000 1300
10. 01/26/15 23:38:15 /home/dustymabe/.i3/switch.py 59 yes /usr/bin/python2.7 1000 1326
11. 01/26/15 23:38:15 /home/dustymabe/.i3/switch.py 89 no /usr/bin/python2.7 1000 1327
12. 01/26/15 23:38:15 /home/dustymabe/.i3/switch.py 2 yes /usr/bin/python2.7 1000 1328
13. 01/26/15 23:38:15 /home/dustymabe/.i3/switch.py 2 yes /usr/bin/python2.7 1000 1329
14. 01/26/15 23:38:15 /home/dustymabe/.i3/switch.py 2 yes /usr/bin/python2.7 1000 1330
15. 01/26/15 23:38:15 /home/dustymabe/.i3/switch.py 2 yes /usr/bin/python2.7 1000 1331

So that enabled me to concentrate on my script and find the bug that was lurking within :)
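Once the debugging is done, it's a good idea to remove the watch. A quick sketch using auditctl's delete options (-W is the delete counterpart of -w):

# auditctl -W /home/dustymabe/.i3/switch.py -p rwxa -k switchtest
# auditctl -l
No rules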

Have fun auditing!
Dusty

Fedora 21 now available on Digital Ocean

cross posted from this fedora magazine post

It's raining Droplets! Fedora 21 has landed in Digital Ocean's cloud hosting. Fedora 21 offers a fantastic cloud image for developers, and it's now easy for Digital Ocean users to spin it up and get started! Here are a few tips:

  • Like with other Digital Ocean images, you will log in with your ssh key as root rather than as the typical fedora user that you may be familiar with when logging in to a Fedora cloud image.
  • This is the first time Digital Ocean has had SELinux enabled by default (yay for security). If you want or need to, you can still easily switch back to permissive mode (see the sketch after this list); Red Hat's Dan Walsh may have a "shame on you" or two for you though.
  • Fedora 21 should be available in all of the newest datacenters in each region, but some legacy datacenters aren't supported. If you have a problem you think is Fedora-specific, then drop us an email, ping us in #fedora-cloud on freenode, or visit the Fedora cloud trac to see if it is already being worked on.
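Here is a minimal sketch of switching to permissive mode; setenforce only affects the running system, while the config file edit makes it persist across reboots:

# getenforce
Enforcing
# setenforce 0       # permissive until the next reboot
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    # persistent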

Happy Developing!
Dusty

PS If anyone wants a $10 credit for creating a new account you can use my referral link

qemu-img Backing Files: A Poor Man’s Snapshot/Rollback

I often like to formulate detailed steps when trying to reproduce a bug or a working setup. VMs are great for this because they can be manipulated easily. To manipulate their disk images I use qemu-img to create new disk images that use other disk images as a backing store. This is what I like to call a "poor man's" way to do snapshots because the snapshotting process is a bit manual, but that is also why I like it; I don't touch the original disk image at all so I have full confidence I haven't compromised it.

NOTE: I use QEMU/KVM/libvirt, so those are the tools used in this example.

Taking A Snapshot

In order to take a snapshot you should first shut down the VM and then simply create a new disk image that uses the original disk image as a backing store:

$ sudo virsh shutdown F21server
Domain F21server is being shutdown
$ sudo qemu-img create -f qcow2 -b /guests/F21server.img /guests/F21server.qcow2.snap
Formatting '/guests/F21server.qcow2.snap', fmt=qcow2 size=21474836480 backing_file='/guests/F21server.img' encryption=off cluster_size=65536 lazy_refcounts=off

This new disk image is a COW snapshot of the original image, which means any writes will go into the new image but any reads of non-modified blocks will be read from the original image. A benefit of this is that the size of the new file will start off at 0 and increase only as modifications are made.
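You can verify the relationship between the two files at any point with qemu-img info; on a freshly created snapshot the output should look roughly like this (the tiny disk size is expected since nothing has been written yet):

$ sudo qemu-img info /guests/F21server.qcow2.snap
image: /guests/F21server.qcow2.snap
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 196K
cluster_size: 65536
backing file: /guests/F21server.img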

To get the virtual machine to pick up and start using the new COW disk image we will need to modify the libvirt XML to point it at the new file:

$ sudo virt-xml F21server --edit target=vda --disk driver_type=qcow2,path=/guests/F21server.qcow2.snap --print-diff
--- Original XML
+++ Altered XML
@@ -27,8 +27,8 @@
   <devices>
     <emulator>/usr/bin/qemu-kvm</emulator>
     <disk type="file" device="disk">
-      <driver name="qemu" type="raw"/>
-      <source file="/guests/F21server.img"/>
+      <driver name="qemu" type="qcow2"/>
+      <source file="/guests/F21server.qcow2.snap"/>
       <target dev="vda" bus="virtio"/>
       <address type="pci" domain="0x0000" bus="0x00" slot="0x07" function="0x0"/>
     </disk>
$ 
$ sudo virt-xml F21server --edit target=vda --disk driver_type=qcow2,path=/guests/F21server.qcow2.snap
Domain 'F21server' defined successfully.

You can now start your VM and make changes as you wish. Be destructive if you like; the original disk image hasn't been touched.
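Starting the VM back up is just the reverse of the shutdown step:

$ sudo virsh start F21server
Domain F21server started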

After making a few changes I had around 15M of differences between the original image and the snapshot:

$ du -sh /guests/F21server.img 
21G     /guests/F21server.img
$ du -sh /guests/F21server.qcow2.snap 
15M     /guests/F21server.qcow2.snap

Going Back

To go back to the point where you started, you must first delete the file that you created (/guests/F21server.qcow2.snap). Then you have two options:

  • Create another disk image that again uses the original as a backing file.
  • Go back to using the original image.

If you want to continue testing and returning to your starting point, then delete and recreate the COW snapshot disk image:

$ sudo rm /guests/F21server.qcow2.snap 
$ sudo qemu-img create -f qcow2 -b /guests/F21server.img /guests/F21server.qcow2.snap
Formatting '/guests/F21server.qcow2.snap', fmt=qcow2 size=21474836480 backing_file='/guests/F21server.img' encryption=off cluster_size=65536 lazy_refcounts=off 

If you want to go back to your original setup, then you'll also need to change the XML back to what it was before:

$ sudo rm /guests/F21server.qcow2.snap 
$ sudo virt-xml F21server --edit target=vda --disk driver_type=raw,path=/guests/F21server.img
Domain 'F21server' defined successfully.

Committing Changes

If you decide that the changes you have made are ones you want to carry forward, then you can commit the changes in the COW disk image back into the backing disk image. In the case below I have 15M worth of changes that get committed back into the original image. I then edit the XML accordingly and can start the guest with all of the changes baked back into the original disk image:

$ sudo qemu-img info /guests/F21server.qcow2.snap
image: /guests/F21server.qcow2.snap
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 15M
cluster_size: 65536
backing file: /guests/F21server.img
$ sudo qemu-img commit /guests/F21server.qcow2.snap
Image committed.
$ sudo rm /guests/F21server.qcow2.snap
$ sudo virt-xml F21server --edit target=vda --disk driver_type=raw,path=/guests/F21server.img
Domain 'F21server' defined successfully.

Fin

This backing file approach is useful because it's much more convenient than making multiple copies of huge disk image files, but it can be used for much more than just snapshotting and reverting changes. For example, it can also be used to start 100 virtual machines from a common backing image, saving a great deal of disk space. Go ahead and try it!
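As a rough sketch of that many-VMs idea (the overlay file names here are made up), you could stamp out per-VM overlays from the one backing image like so; each overlay starts out near-empty and only grows as its VM diverges:

# create 100 qcow2 overlays that all share the same read-only backing image
for i in $(seq 1 100); do
    sudo qemu-img create -f qcow2 -b /guests/F21server.img /guests/vm${i}.qcow2
done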

Happy Snapshotting!
Dusty

F21 Atomic Test Day && Test steps for Atomic Host

Test Day on Thursday 11/20

The F21 test day for atomic is this Thursday, November 20th. If anyone can participate, please do drop into #atomic on freenode; it will be great to have more people involved in helping build and test this new technology.

In anticipation of the test day I have put together some test notes for other people to follow in hopes that it will help smooth things along.

Booting with cloud-init

The first step is to start an atomic host using any method/cloud provider you like. I decided to use OpenStack, since I have Juno running on F21 here in my apartment. I used this user-data for the atomic host:

#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
  - [ sh, -c, 'echo -e "ROOT_SIZE=4G\nDATA_SIZE=10G" > /etc/sysconfig/docker-storage-setup']

Note that the build of atomic I used for this testing resides here.

Verifying docker-storage-setup

docker-storage-setup is a service that can be used to configure docker's storage in different ways on instance bring-up. Notice in the user-data above that I set config variables for docker-storage-setup. They mean that I want to resize my atomicos/root LV to 4G and that I want to create an atomicos/docker-data LV that is 10G in size.
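If everything worked, the runcmd from the user-data above should have left the config file behind, and it should look like this:

# cat /etc/sysconfig/docker-storage-setup
ROOT_SIZE=4G
DATA_SIZE=10G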

To verify the storage was set up successfully, log in (as the fedora user) and become root (using sudo su -). Now you can check whether docker-storage-setup worked by checking the logs as well as looking at the output from lsblk:

# journalctl -o cat --unit docker-storage-setup.service
CHANGED: partition=2 start=411648 old: size=12171264 end=12582912 new: size=41531232,end=41942880
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Size of logical volume atomicos/root changed from 1.95 GiB (500 extents) to 4.00 GiB (1024 extents).
Logical volume root successfully resized
Rounding up size to full physical extent 24.00 MiB
Logical volume "docker-meta" created
Logical volume "docker-data" created
#
# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                       252:0    0   20G  0 disk
├─vda1                    252:1    0  200M  0 part /boot
└─vda2                    252:2    0 19.8G  0 part
  ├─atomicos-root         253:0    0    4G  0 lvm  /sysroot
  ├─atomicos-docker--meta 253:1    0   24M  0 lvm
  └─atomicos-docker--data 253:2    0   10G  0 lvm

Verifying Docker Lifecycle

To verify Docker runs fine on the atomic host we will perform a simple run of the busybox docker image. This will contact the docker hub, pull down the image, and run /bin/true:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
Unable to find image 'busybox' locally
Pulling repository busybox
e72ac664f4f0: Download complete
511136ea3c5a: Download complete
df7546f9f060: Download complete
e433a6c5b276: Download complete
PASS

After the Docker daemon has started, the LVs that were created by docker-storage-setup will be used by device mapper, as shown in the lsblk output below:

# lsblk
NAME                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                                 252:0    0   20G  0 disk
├─vda1                              252:1    0  200M  0 part /boot
└─vda2                              252:2    0 19.8G  0 part
  ├─atomicos-root                   253:0    0    4G  0 lvm  /sysroot
  ├─atomicos-docker--meta           253:1    0   24M  0 lvm
  │ └─docker-253:0-6298462-pool     253:3    0   10G  0 dm
  │   └─docker-253:0-6298462-base   253:4    0   10G  0 dm
  └─atomicos-docker--data           253:2    0   10G  0 lvm
    └─docker-253:0-6298462-pool     253:3    0   10G  0 dm
      └─docker-253:0-6298462-base   253:4    0   10G  0 dm

Atomic Host: Upgrade

Now on to an atomic upgrade. First, let's check what commit we are currently on and save the commit hash into /etc/file1 for later:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
* fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /ostree/repo/refs/heads/ostree/0/1/0
1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84
#
# cat /ostree/repo/refs/heads/ostree/0/1/0 > /etc/file1

Now run an upgrade to the latest atomic compose:

# rpm-ostree upgrade
Updating from: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
14 metadata, 19 content objects fetched; 33027 KiB transferred in 16 seconds
Copying /etc changes: 26 modified, 4 removed, 39 added
Transaction complete; bootconfig swap: yes deployment count change: 1)
Updates prepared for next boot; run "systemctl reboot" to start a reboot

And do a bit of poking around right before we reboot:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
  2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
  fedora-atomic-host 18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# reboot

Note that the * in the above output indicates which tree is currently booted.

After reboot, the new tree should be booted. Let's check things out and make /etc/file2 with our new commit hash in it:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
* fedora-atomic-host 18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /ostree/repo/refs/heads/ostree/1/1/0
18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0
#
# cat /ostree/repo/refs/heads/ostree/1/1/0 > /etc/file2

As one final item let's boot up a docker container to make sure things still work there:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
PASS

Atomic Host: Rollback

Atomic host provides the ability to revert to the previous working tree if things go awry with the new tree. Let's revert our upgrade now and make sure things still work:

# rpm-ostree rollback
Moving '1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0' to be first deployment
Transaction complete; bootconfig swap: yes deployment count change: 0)
Sucessfully reset deployment order; run "systemctl reboot" to start a reboot
#
# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
  2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# reboot

After reboot:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /etc/file1
1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84
# cat /etc/file2
cat: /etc/file2: No such file or directory

Notice that /etc/file2 did not exist until after the upgrade, so it did not persist through the rollback.

And the final item on the list is to make sure Docker still works:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
PASS

Anddd Boom.. You have just put atomic through its paces.

Capture Elusive cloud-init Debug Output With journalctl


Recently I have been trying to debug some problems with cloud-init in the alpha versions of cloud images for CentOS 7 and Fedora 21. What I have found is that it's not so straightforward to figure out how to set up debug logging.

The defaults (defined in /etc/cloud/cloud.cfg.d/05_logging.cfg) for some reason don't really capture the debug output in /var/log/cloud-init.log. Luckily, though, on systemd-based systems we can get most of that output by using journalctl. There are several services related to cloud-init, and if you want to get the output from all of them you can just use wildcard matching in journalctl (freshly added in ea18a4b) like so:

[root@f21test ~]# journalctl --unit cloud-*
...debug...debug...blah...blah

This worked great in Fedora 21, but in CentOS/RHEL 7 it won't work because the journalctl there predates wildcard matching. As a result I found another way to get the same output: it just so happens that the services all use the same executable (/usr/bin/cloud-init), so I was able to use that as a trigger:

[root@c7test ~]# journalctl /usr/bin/cloud-init
...debug...debug...blah...blah

I hope others can find this useful when debugging cloud-init.

Cheers,
Dusty

Docker: Copy Into A Container Volume


I need to copy a few files into my docker container.. should be easy, right? It turns out it's not so trivial. In Docker 1.0.0 and earlier, the docker cp command can be used to copy files from a container to the host, but not the other way around...
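For reference, the direction that does work looks something like this (the container name and file path here are hypothetical):

[root@localhost ~]# docker cp wpdata:/volumes/wpdata/file1.txt /tmp/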

Most of the time you can work around this by using an ADD statement in the Dockerfile, but I often need to populate some data within data-only volume containers before I start other containers that use the data. To copy data into the volume, you can simply use tar and pipe the contents into the volume within a new container, like so:
[root@localhost ~]# docker run -d -i -t -v /volumes/wpdata --name wpdata mybusybox sh
416ea2a877267f566ef8b054a836e8b6b2550b347143c4fe8ed2616e11140226
[root@localhost ~]#
[root@localhost ~]# tar -c files/ | docker run -i --rm -w /volumes/wpdata/ --volumes-from wpdata mybusybox tar -xv
files/
files/file8.txt
files/file9.txt
files/file4.txt
files/file7.txt
files/file1.txt
files/file6.txt
files/file2.txt
files/file5.txt
files/file10.txt
files/file3.txt

So.. In the example I created a new data-only volume container named wpdata and then ran tar to pipe the contents of a directory into a new container that used the same volumes as the original container. Not so tough, but not as easy as docker cp. I think docker cp should gain this functionality sometime in the future (issue tracker here).
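If the tar pipe feels clunky, another option along the same lines is to bind mount a host directory read-only into a throwaway container and cp from it into the volume. A sketch, assuming busybox's cp (which supports -r) and a hypothetical /root/files directory on the host:

[root@localhost ~]# docker run --rm -v /root/files:/input:ro --volumes-from wpdata mybusybox cp -r /input/. /volumes/wpdata/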

Enjoy

Dusty

Creating Your Own Minimal Docker Image in Fedora


Sometimes it can be useful to have a docker image with just the bare essentials. Maybe you want a container with just enough to run your app, or you are using something like data volume containers and want just enough to browse the filesystem. Either way, you can create your own minimalist busybox image on Fedora with a pretty simple script.

The script below was inspired a little by Marek Goldmann's post about creating a minimal image for wildfly, and a little by the busybox website.

# cd to a temporary directory
tmpdir=$(mktemp -d)
pushd $tmpdir

# Get and extract busybox
yumdownloader busybox
rpm2cpio busybox*rpm | cpio -imd
rm -f busybox*rpm

# Create symbolic links back to busybox
for i in $(./sbin/busybox --list); do
    ln -s /sbin/busybox ./sbin/$i
done

# Create container
tar -c . | docker import - mybusybox

# Go back to old pwd
popd

After running the script there is a new image on your system with the mybusybox tag. You can run it and take a look around like so:
[root@localhost ~]# docker images mybusybox
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
mybusybox           latest              f526db9e0d80        12 minutes ago      1.309 MB
[root@localhost ~]#
[root@localhost ~]# docker run -i -t mybusybox /sbin/busybox sh
# ls -l /sbin/ls
lrwxrwxrwx    1 0        0               13 Jul  8 02:15 /sbin/ls -> /sbin/busybox
#
# ls /
dev   etc   proc  sbin  sys   usr
#
# df -kh .
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-394094-addac9507205082fbd49c8f45bbd0316fd6b3efbb373bb1d717a3ccf44b8a97e
                          9.7G     23.8M      9.2G   0% /

Enjoy!

Dusty

Manual Linux Installs with Funky Storage Configurations

Introduction


I often find that my tastes for hard drive configurations on my installed systems are a bit outside of the norm. I like playing with thin LVs, BTRFS snapshots, or whatever new thing is around the corner. The Anaconda UI has been adding support for these fringe cases, but I still find it hard to get Anaconda to do what I want in certain cases.

An example of this happened most recently when I went to reformat and install Fedora 20 on my laptop. Ultimately what I wanted was encrypted root and swap devices and btrfs filesystems on root and boot. One other requirement was that I needed to leave sda4 (a Windows partition) completely intact. In the end the configuration should look something like this:

[root@lintop ~]# lsblk /dev/sda
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 465.8G  0 disk
├─sda1           8:1    0     1G  0 part  /boot
├─sda2           8:2    0     4G  0 part
│ └─cryptoswap 253:1    0     4G  0 crypt [SWAP]
├─sda3           8:3    0 299.2G  0 part
│ └─cryptoroot 253:0    0 299.2G  0 crypt /
└─sda4           8:4    0 161.6G  0 part

After a few failed attempts with Anaconda I decided to do a custom install instead.

Custom Install


I used the Fedora 20 install DVD (and thus the Anaconda environment) to do the install, but I performed all the steps manually by switching to a different terminal with a prompt. First off, I used fdisk to partition the disk the way I wanted. The result looked like this:
[anaconda root@localhost ~]# fdisk -l /dev/sda
Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xcfe1cf72

Device    Boot     Start       End    Blocks  Id System
/dev/sda1 *         2048   2099199   1048576  83 Linux
/dev/sda2        2099200  10487807   4194304  82 Linux swap / Solaris
/dev/sda3       10487808 637945855 313729024  83 Linux
/dev/sda4      637945856 976773119 169413632   7 HPFS/NTFS/exFAT

Next I set up the encrypted root (/dev/sda3) device and created btrfs filesystems on both boot (/dev/sda1) and the encrypted root device (/dev/mapper/cryptoroot):

[anaconda root@localhost ~]# cryptsetup luksFormat /dev/sda3
...
[anaconda root@localhost ~]# cryptsetup luksOpen /dev/sda3 cryptoroot
Enter passphrase for /dev/sda3:
[anaconda root@localhost ~]#
[anaconda root@localhost ~]# mkfs.btrfs --force --label=root /dev/mapper/cryptoroot
...
fs created label root on /dev/mapper/cryptoroot
...
[anaconda root@localhost ~]# mkfs.btrfs --force --label=boot --mixed /dev/sda1
...
fs created label boot on /dev/sda1
...

Next, if you want to use the yum CLI you need to install it, because some of its files are left out of the environment by default. Below I show the error you get and then how to fix it:

[anaconda root@localhost ~]# yum list
Traceback (most recent call last):
  File "/bin/yum", line 28, in <module>
    import yummain
ImportError: No module named yummain
[anaconda root@localhost ~]# rpm -ivh --nodeps /run/install/repo/Packages/y/yum-3.4.3-106.fc20.noarch.rpm
...

I needed to set up a repo that used the DVD as the source:

[anaconda root@localhost ~]# cat <<EOF > /etc/yum.repos.d/repo.repo
[dvd]
name=dvd
baseurl=file:///run/install/repo
enabled=1
gpgcheck=0
EOF

Now I could mount my root device on /mnt/sysimage and then lay down the basic filesystem tree by installing the filesystem package into it:

[anaconda root@localhost ~]# mount /dev/mapper/cryptoroot /mnt/sysimage/
[anaconda root@localhost ~]# yum install -y --installroot=/mnt/sysimage filesystem
...
Complete!

Next I mounted boot and the other supporting filesystems into the /mnt/sysimage tree:

[anaconda root@localhost ~]# mount /dev/sda1 /mnt/sysimage/boot/
[anaconda root@localhost ~]# mount -v -o bind /dev /mnt/sysimage/dev/
mount: /dev bound on /mnt/sysimage/dev.
[anaconda root@localhost ~]# mount -v -o bind /run /mnt/sysimage/run/
mount: /run bound on /mnt/sysimage/run.
[anaconda root@localhost ~]# mount -v -t proc proc /mnt/sysimage/proc/
mount: proc mounted on /mnt/sysimage/proc.
[anaconda root@localhost ~]# mount -v -t sysfs sys /mnt/sysimage/sys/
mount: sys mounted on /mnt/sysimage/sys.

Now I was ready for the actual install. For this install I just went with a small set of packages (I'll use yum to add on what I want later, once the system is up).

[anaconda root@localhost ~]# yum install -y --installroot=/mnt/sysimage @core @standard kernel grub2 grub2-tools btrfs-progs
...
Complete!

After the install there are a few housekeeping items to take care of. I started with populating crypttab, populating fstab, changing the root password, and touching /.autorelabel to trigger an SELinux relabel on first boot:

[anaconda root@localhost ~]# chroot /mnt/sysimage/
[anaconda root@localhost /]# cat <<EOF > /etc/crypttab
cryptoswap /dev/sda2 /dev/urandom swap
cryptoroot /dev/sda3 -
EOF
[anaconda root@localhost /]# cat <<EOF > /etc/fstab
LABEL=boot             /boot btrfs defaults 1 2
/dev/mapper/cryptoswap swap  swap  defaults 0 0
/dev/mapper/cryptoroot /     btrfs defaults 1 1
EOF
[anaconda root@localhost /]# passwd --stdin root <<< "password"
Changing password for user root.
passwd: all authentication tokens updated successfully.
[anaconda root@localhost /]# touch /.autorelabel

Next I needed to install grub and make a config file. I set the grub kernel command line arguments and then generated a config. The config needed some fixing up (I am not using EFI on my system, but grub2-mkconfig thought I was because I had booted the install CD through EFI).

[anaconda root@localhost /]# echo 'GRUB_CMDLINE_LINUX="ro root=/dev/mapper/cryptoroot"' > /etc/default/grub
[anaconda root@localhost /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.11.10-301.fc20.x86_64
Found initrd image: /boot/initramfs-3.11.10-301.fc20.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-81c04e9030594ef6a5265a95f58ccf98
Found initrd image: /boot/initramfs-0-rescue-81c04e9030594ef6a5265a95f58ccf98.img
done
[anaconda root@localhost /]# sed -i s/linuxefi/linux/ /boot/grub2/grub.cfg
[anaconda root@localhost /]# sed -i s/initrdefi/initrd/ /boot/grub2/grub.cfg
[anaconda root@localhost /]# grub2-install -d /usr/lib/grub/i386-pc/ /dev/sda
Installation finished. No error reported.

NOTE: grub2-mkconfig didn't find my windows partition until I rebooted into the system and ran it again.

Finally, I re-executed dracut to pick up the crypttab, exited the chroot, unmounted the filesystems, and rebooted into my new system:
[anaconda root@localhost /]# dracut --kver 3.11.10-301.fc20.x86_64 --force
[anaconda root@localhost /]# exit
[anaconda root@localhost ~]# umount /mnt/sysimage/{boot,dev,run,sys,proc}
[anaconda root@localhost ~]# reboot

After booting into Fedora I was then able to run grub2-mkconfig again and get it to recognize my (untouched) Windows partition:

[root@localhost /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.11.10-301.fc20.x86_64
Found initrd image: /boot/initramfs-3.11.10-301.fc20.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-375c7019484a45838666c572d241249a
Found initrd image: /boot/initramfs-0-rescue-375c7019484a45838666c572d241249a.img
Found Windows 7 (loader) on /dev/sda4
done

And that's pretty much it. Using this method you can have virtually any hard drive setup that you desire. I hope someone else finds this useful.

Dusty

P.S. You can start sshd in anaconda by running systemctl start anaconda-sshd.service.

TermRecord: Terminal Screencast in a Self-Contained HTML File

Introduction


Some time ago I wrote a few posts (1, 2) on how to use script to record a terminal session and scriptreplay to play it back. This functionality can be very useful because it gives you the power to show others exactly what happens when you do insert anything here.
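As a quick recap, that workflow looks something like this (the file names are arbitrary; they match the ones fed to TermRecord below):

$ script -t 2> screencast.timing screencast.log    # record; -t sends timing data to stderr
$ exit                                             # end the recording session
$ scriptreplay screencast.timing screencast.log    # play it back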

I had been happy with this solution for a while, until one day Wolfgang Richter commented on my original post and shared a project he has been working on known as TermRecord.

I gave it a spin and have been using it quite a bit. Sharing a terminal recording becomes much easier, as you can simply email the .html file, or host it yourself and share links. As long as the people you are sharing with have a browser, they can watch the playback. Thus it is not tied to a system with a particular piece of software, and clicking a link to view is very easy to do :)

Basics of TermRecord


Before anything else we need to install TermRecord. Currently TermRecord is available in the Python Package Index (hopefully it will be packaged in some major distributions soon) and can be installed using pip.
[root@localhost ~]# pip install TermRecord
Downloading/unpacking TermRecord
  Downloading TermRecord-1.1.3.tar.gz (49kB): 49kB downloaded
  Running setup.py egg_info for package TermRecord
...
Successfully installed TermRecord Jinja2 markupsafe
Cleaning up...
Now you can make a self-contained html file for sharing in a couple of ways.

First, you can use TermRecord to convert already existing timing and log files that were created using the script command by specifying them as inputs to TermRecord:
[root@localhost ~]# TermRecord -o screencast.html -t screencast.timing -s screencast.log

The other option is to create a new recording using TermRecord like so:
[root@localhost ~]# TermRecord -o screencast.html
Script started, file is /tmp/tmp5I4SYq
[root@localhost ~]#
[root@localhost ~]# #This is a screencast.
[root@localhost ~]# exit
exit
Script done, file is /tmp/tmp5I4SYq

And.. done. Now you can email or share the html file any way you like. If you would like to see some examples of terminal recordings, check out the TermRecord github page, or here is one from my previous post on wordpress/docker.

Cheers,
Dusty

Zero to WordPress on Docker in 5 Minutes

Introduction


Docker is an emerging technology that has garnered a lot of momentum in the past year. I have been busy with a move to NYC and a job change (now officially a Red Hatter), so I am just now getting around to getting my feet wet with Docker.

Last night I sat down and decided to bang out some steps for installing wordpress in a docker container. Eventually I plan to move this site into a container so I figured this would be a good first step.

DockerPress


There are a few bits and pieces that need to be done to configure wordpress. For simplicity I decided to make this wordpress instance use sqlite rather than mysql. With all of this in mind, here is the basic recipe for wordpress:
  • Install apache and php.
  • Download wordpress and extract to appropriate folder.
  • Download the sqlite-integration plugin and extract.
  • Modify a few files...and DONE.
This is easily automated by creating a Dockerfile and using docker. The minimal Dockerfile (with comments) is shown below:
FROM goldmann/f20
MAINTAINER Dusty Mabe

# Install httpd and update openssl
RUN yum install -y httpd openssl unzip php php-pdo

# Download and extract wordpress
RUN curl -o wordpress.tar.gz http://wordpress.org/latest.tar.gz
RUN tar -xzvf wordpress.tar.gz --strip-components=1 --directory /var/www/html/
RUN rm wordpress.tar.gz

# Download plugin to allow WP to use sqlite
# http://wordpress.org/plugins/sqlite-integration/installation/
# - Move sqlite-integration folder to wordpress/wp-content/plugins folder.
# - Copy db.php file in sqlite-integration folder to wordpress/wp-content folder.
# - Rename wordpress/wp-config-sample.php to wordpress/wp-config.php.
#
RUN curl -o sqlite-plugin.zip http://downloads.wordpress.org/plugin/sqlite-integration.1.6.3.zip
RUN unzip sqlite-plugin.zip -d /var/www/html/wp-content/plugins/
RUN rm sqlite-plugin.zip
RUN cp /var/www/html/wp-content/{plugins/sqlite-integration/db.php,}
RUN cp /var/www/html/{wp-config-sample.php,wp-config.php}

# Fix permissions on all of the files
RUN chown -R apache /var/www/html/
RUN chgrp -R apache /var/www/html/

# Update keys/salts in wp-config for security
RUN RE='put your unique phrase here'; for i in {1..8}; do KEY=$(openssl rand -base64 40); sed -i "0,/$RE/s|$RE|$KEY|" /var/www/html/wp-config.php; done;

# Expose port 80 and set httpd as our entrypoint
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D", "FOREGROUND"]

With the power of the Dockerfile you can now build a new image using docker build and then run the new container with the docker run command. An example of these two commands is shown below:
[root@localhost ~]# ls Dockerfile
Dockerfile
[root@localhost ~]# docker build -t "wordpress" .
...
Successfully built 0b388013905e
...
[root@localhost ~]#
[root@localhost ~]# docker run -d -p 8080:80 -t wordpress
6da59c864d35bb0bb6043c09eb8b1128b2c1cb91f7fa456156df4a0a22f271b0

The docker build command will build an image from the Dockerfile and then tag the new image with the "wordpress" tag. The docker run command will run a new container based on the "wordpress" image and bind port 8080 from the host machine to port 80 within the container.

Now you can happily point your browser to http://localhost:8080 and see the wordpress 5 minute installation screen.



See a full screencast of the "zero to wordpress" process using docker here.
Download the Dockerfile here.

Cheers!
Dusty

NOTE: This was done on Fedora 20 with docker-io-0.9.1-1.fc20.x86_64.