Archive for the 'Docker' Category

Non-Deterministic Docker Networking and Source-Based IP Routing

Introduction

Docker 1.9 introduced a new networking model to the open source docker engine that enables the creation of separate "networks" for containers to attach to. This, however, can lead to a nasty little problem where a port that is supposed to be exposed on the host isn't accessible from the outside. There are a few bug reports related to this issue.
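You don't need the full application below to land in this situation; attaching one container with a published port to two user-defined networks is enough. A minimal sketch (the image and names here are hypothetical):

# docker network create front-tier
# docker network create back-tier
# docker run -d -p 5000:80 --net front-tier --name web nginx
# docker network connect back-tier web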

Cause

This problem happens because docker wires up all of these containers to each other and the various "networks" using port forwarding/NAT via iptables. Let's take a popular example application which exhibits the problem, the Docker 3rd Birthday Application, and show what the problem is and why it happens.

We'll clone the git repo first and then check out the latest commit as of 2016-05-25:

# git clone https://github.com/docker/docker-birthday-3
# cd docker-birthday-3/
# git checkout 'master@{2016-05-25}'
...
HEAD is now at 4f2f1c9... Update Dockerfile

Next we'll bring up the application:

# cd example-voting-app/
# docker-compose up -d 
Creating network "examplevotingapp_front-tier" with the default driver
Creating network "examplevotingapp_back-tier" with the default driver
Creating db
Creating redis
Creating examplevotingapp_voting-app_1
Creating examplevotingapp_worker_1
Creating examplevotingapp_result-app_1

So this created two networks and brought up several containers to host our application. Let's poke around to see what's there:

# docker network ls
NETWORK ID          NAME                          DRIVER
23c96b2e1fe7        bridge                        bridge              
cd8ecb4c0556        examplevotingapp_front-tier   bridge              
5760e64b9176        examplevotingapp_back-tier    bridge              
bce0f814fab1        none                          null                
1b7e62bcc37d        host                          host
#
# docker ps -a --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}"
NAMES                           IMAGE                         PORTS
examplevotingapp_result-app_1   examplevotingapp_result-app   0.0.0.0:5001->80/tcp
examplevotingapp_voting-app_1   examplevotingapp_voting-app   0.0.0.0:5000->80/tcp
redis                           redis:alpine                  0.0.0.0:32773->6379/tcp
db                              postgres:9.4                  5432/tcp
examplevotingapp_worker_1       manomarks/worker              

It looks like we should be able to connect to the examplevotingapp_voting-app_1 application on host port 5000, which is bound to all interfaces. Does it work?:

# ip -4 -o a
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 192.168.121.98/24 brd 192.168.121.255 scope global dynamic eth0\       valid_lft 2921sec preferred_lft 2921sec
3: docker0    inet 172.17.0.1/16 scope global docker0\       valid_lft forever preferred_lft forever
106: br-cd8ecb4c0556    inet 172.18.0.1/16 scope global br-cd8ecb4c0556\       valid_lft forever preferred_lft forever
107: br-5760e64b9176    inet 172.19.0.1/16 scope global br-5760e64b9176\       valid_lft forever preferred_lft forever
#
# curl --connect-timeout 5 192.168.121.98:5000 &>/dev/null && echo success || echo failure
failure
# curl --connect-timeout 5 127.0.0.1:5000 &>/dev/null && echo success || echo failure
success

So does it work? Yes and no.

That's right: something complicated is going on with the networking here. We can connect from localhost but not from the public IP of the host. Docker wires things up in iptables so that traffic can go into and out of containers following a strict set of rules; see the iptables output if you are interested. This works fine if you only have one network interface per container but can break down when a container has multiple interfaces attached.
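If you want to inspect that wiring yourself, the relevant rules live in the DOCKER chain of the nat table on the host. Something like the following should reveal the DNAT rule for port 5000 (the exact output will vary from host to host):

# iptables -t nat -L DOCKER -n
...
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:5000 to:172.18.0.2:80
...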

Let's jump into the examplevotingapp_voting-app_1 container and check out some of the networking:

# docker exec -it examplevotingapp_voting-app_1 /bin/sh
/app # ip -4 -o a
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
112: eth1    inet 172.18.0.2/16 scope global eth1\       valid_lft forever preferred_lft forever
114: eth0    inet 172.19.0.4/16 scope global eth0\       valid_lft forever preferred_lft forever
/app # 
/app # ip route show
default via 172.19.0.1 dev eth0 
172.18.0.0/16 dev eth1  src 172.18.0.2 
172.19.0.0/16 dev eth0  src 172.19.0.4

So there's our clue. We have two interfaces, but our default route goes out of eth0 on the 172.19.0.0/16 network. It just so happens that our iptables rules (see the linked iptables output from above) performed DNAT for tcp dpt:5000 to:172.18.0.2:80. So traffic from the outside comes into this container on the eth1 interface but leaves via eth0, which doesn't play nicely with the iptables rules docker has set up.

We can prove this by asking the kernel which route a packet will take when it leaves the machine:

/app # ip route get 10.10.10.10 from 172.18.0.2
10.10.10.10 from 172.18.0.2 via 172.19.0.1 dev eth0

This means the packet will leave via eth0 even though it came in on eth1. The Docker documentation was updated in this git commit to try to explain the behavior when multiple interfaces are attached to a container.

Testing the Theory Using Source-Based IP Routing

To test the theory we can use source-based IP routing (some reading on that here). The idea is to create policy rules that make IP traffic leave via the same interface it came in on.

To perform the test we'll need our container to be privileged so we can add routes. Modify the docker-compose.yml to add privileged: true to the voting-app:

services:
  voting-app:
    build: ./voting-app/.
    volumes:
     - ./voting-app:/app
    ports:
      - "5000:80"
    networks:
      - front-tier
      - back-tier
    privileged: true

Take down and bring up the application:

# docker-compose down
...
# docker-compose up -d
...

Exec into the container and create a new policy rule for packets originating from the 172.18.0.0/16 network. Tell packets matching this rule to look up routing table 200:

# docker exec -it examplevotingapp_voting-app_1 /bin/sh
/app # ip rule add from 172.18.0.0/16 table 200

Now add a default route via 172.18.0.1 to routing table 200. Then show that routing table and the rule list:

/app # ip route add default via 172.18.0.1 dev eth1 table 200
/app # ip route show table 200
default via 172.18.0.1 dev eth1
/app # ip rule show
0:      from all lookup local 
32765:  from 172.18.0.0/16 lookup 200 
32766:  from all lookup main 
32767:  from all lookup default

Now ask the kernel where a packet originating from our 172.18.0.2 address will get sent:

/app # ip route get 10.10.10.10 from 172.18.0.2
10.10.10.10 from 172.18.0.2 via 172.18.0.1 dev eth1

And finally, go back to the host and check to see if everything works now:

# curl --connect-timeout 5 192.168.121.98:5000 &>/dev/null && echo success || echo failure
success
# curl --connect-timeout 5 127.0.0.1:5000 &>/dev/null && echo success || echo failure
success

Success!!

I don't know if source-based routing can be incorporated into docker to fix this problem, or if there is a better solution. I guess we'll have to wait and find out.

Enjoy!

Dusty

NOTE: I used the following versions of software for this blog post:

# rpm -q docker docker-compose kernel-core
docker-1.10.3-10.git8ecd47f.fc24.x86_64
docker-compose-1.7.0-1.fc24.noarch
kernel-core-4.5.4-300.fc24.x86_64

Crisis Averted.. I'm using Atomic Host

This blog has been running on Docker on Fedora 21 Atomic Host since early January. Occasionally I log in, run rpm-ostree upgrade, and reboot (usually after I inspect a few things). Today I happened to do just that and what did I come up with?? A bunch of 404s. Digging through the logs of the systemd unit I use to start my wordpress container, I found this:

systemd[1]: wordpress-server.service: main process exited, code=exited, status=1/FAILURE
docker[2321]: time="2015-01-31T19:09:24-05:00" level="fatal" msg="Error response from daemon: Cannot start container 51a2b8c45bbee564a61bcbffaee5bc78357de97cdd38918418026c26ae40fb09: write /sys/fs/cgroup/memory/system.slice/docker-51a2b8c45bbee564a61bcbffaee5bc78357de97cdd38918418026c26ae40fb09.scope/memory.memsw.limit_in_bytes: invalid argument"

Hmmm.. So that means I have updated to the latest atomic and docker doesn't work?? What am I to do?

Well, the nice thing about atomic host is that in moments like these you can easily go back to the state you were in before you upgraded. A quick rpm-ostree rollback and my blog was back up and running in minutes.
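For the record, the entire recovery was essentially just:

# rpm-ostree rollback
...
# systemctl reboot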

Whew! Crisis averted.. But now what? Another benefit of atomic host is that I can easily go to another (non-production) system and test exactly the same upgrade that I performed in production. Some quick googling led me to this github issue, which looks like it has to do with setting memory limits when starting a container under later versions of systemd.

Let's test out that theory by recreating this failure.

Recreating the Failure

To recreate I decided to start with the Fedora 21 atomic cloud image that was released in December. Here is what I have:

-bash-4.3# ostree admin status
* fedora-atomic ba7ee9475c462c9265517ab1e5fb548524c01a71709539bbe744e5fdccf6288b.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
* 2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm -q docker-io systemd
docker-io-1.3.2-2.fc21.x86_64
systemd-216-12.fc21.x86_64
-bash-4.3#
-bash-4.3# docker run --rm --memory 500M busybox echo "I'm Alive"
Unable to find image 'busybox' locally
Pulling repository busybox
4986bf8c1536: Download complete
511136ea3c5a: Download complete
df7546f9f060: Download complete
ea13149945cb: Download complete
Status: Downloaded newer image for busybox:latest
I'm Alive

So the system is up and running and able to run a container with the --memory option set. Now let's upgrade to the same commit that I did when I saw the failure earlier, and reboot:

-bash-4.3# ostree pull fedora-atomic 153f577dc4b039e53abebd7c13de6dfafe0fb64b4fdc2f5382bdf59214ba7acb

778 metadata, 4374 content objects fetched; 174535 KiB transferred in 156 seconds
-bash-4.3#
-bash-4.3# echo 153f577dc4b039e53abebd7c13de6dfafe0fb64b4fdc2f5382bdf59214ba7acb > /ostree/repo/refs/remotes/fedora-atomic/fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# ostree admin deploy fedora-atomic:fedora-atomic/f21/x86_64/docker-host
Copying /etc changes: 26 modified, 4 removed, 36 added
Transaction complete; bootconfig swap: yes deployment count change: 1
-bash-4.3#
-bash-4.3# ostree admin status
  fedora-atomic 153f577dc4b039e53abebd7c13de6dfafe0fb64b4fdc2f5382bdf59214ba7acb.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* fedora-atomic ba7ee9475c462c9265517ab1e5fb548524c01a71709539bbe744e5fdccf6288b.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
  2015-01-31 21:08:35     153f577dc4     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3# reboot

Note that I had to manually update the ref to point to the commit I downloaded in order to get this to work. I'm not sure why, but it wouldn't work otherwise.

OK, now I have a system running the same tree as when I saw the failure. Let's check to see if it still happens:

-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
* 2015-01-31 21:08:35     153f577dc4     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3#
-bash-4.3# rpm -q docker-io systemd
docker-io-1.4.1-5.fc21.x86_64
systemd-216-17.fc21.x86_64
-bash-4.3#
-bash-4.3# docker run --rm --memory 500M busybox echo "I'm Alive"
FATA[0003] Error response from daemon: Cannot start container d79629bfddc7833497b612e2b6d4cc2542ce9a8c2253d39ace4434bbd385185b: write /sys/fs/cgroup/memory/system.slice/docker-d79629bfddc7833497b612e2b6d4cc2542ce9a8c2253d39ace4434bbd385185b.scope/memory.memsw.limit_in_bytes: invalid argument

Yep! It happens consistently. This is good, because anyone can now use this recreator to verify the problem on their own. For completeness I'll go ahead and roll back the system to show that the problem goes away when back in the old state:

-bash-4.3# rpm-ostree rollback
Moving 'ba7ee9475c462c9265517ab1e5fb548524c01a71709539bbe744e5fdccf6288b.0' to be first deployment
Transaction complete; bootconfig swap: yes deployment count change: 0
Changed:
  NetworkManager-1:0.9.10.0-13.git20140704.fc21.x86_64
  NetworkManager-glib-1:0.9.10.0-13.git20140704.fc21.x86_64
  ...
  ...
Removed:
  flannel-0.2.0-1.fc21.x86_64
Sucessfully reset deployment order; run "systemctl reboot" to start a reboot
-bash-4.3# reboot

And the final test:

-bash-4.3# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME            REFSPEC
* 2014-12-03 01:30:09     ba7ee9475c     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2015-01-31 21:08:35     153f577dc4     fedora-atomic     fedora-atomic:fedora-atomic/f21/x86_64/docker-host
-bash-4.3# docker run --rm --memory 500M busybox echo "I'm Alive"
I'm Alive

Bliss! And you can thank Atomic Host for that.

Dusty

F21 Atomic Test Day && Test steps for Atomic Host

Test Day on Thursday 11/20

The F21 test day for atomic is this Thursday, November 20th. If anyone can participate, please drop into #atomic on freenode; it will be great to have more people involved in helping build and test this new technology.

In anticipation of the test day I have put together some test notes for other people to follow in hopes that it will help smooth things along.

Booting with cloud-init

The first step is to start an atomic host using any method/cloud provider you like. I decided to use OpenStack, since I have Juno running on F21 here in my apartment. I used the following user-data for the atomic host:

#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
 - [ sh, -c, 'echo -e "ROOT_SIZE=4G\nDATA_SIZE=10G" > /etc/sysconfig/docker-storage-setup']

Note that the build of atomic I used for this testing resides here.

Verifying docker-storage-setup

docker-storage-setup is a service that can be used to configure docker storage in different ways on instance bringup. Notice in the user-data above that I set config variables for docker-storage-setup. They mean that I want to resize my atomicos/root LV to 4G and create a 10G atomicos/docker-data LV.
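The runcmd in the user-data above simply writes those variables to the config file, so after boot /etc/sysconfig/docker-storage-setup should contain:

ROOT_SIZE=4G
DATA_SIZE=10G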

To verify the storage was set up successfully, log in (as the fedora user) and become root (using sudo su -). Now you can check if docker-storage-setup worked by checking the logs as well as looking at the output from lsblk:

# journalctl -o cat --unit docker-storage-setup.service
CHANGED: partition=2 start=411648 old: size=12171264 end=12582912 new: size=41531232,end=41942880
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Size of logical volume atomicos/root changed from 1.95 GiB (500 extents) to 4.00 GiB (1024 extents).
Logical volume root successfully resized
Rounding up size to full physical extent 24.00 MiB
Logical volume "docker-meta" created
Logical volume "docker-data" created
#
# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                       252:0    0   20G  0 disk
├─vda1                    252:1    0  200M  0 part /boot
└─vda2                    252:2    0 19.8G  0 part
  ├─atomicos-root         253:0    0    4G  0 lvm  /sysroot
  ├─atomicos-docker--meta 253:1    0   24M  0 lvm
  └─atomicos-docker--data 253:2    0   10G  0 lvm

Verifying Docker Lifecycle

To verify Docker runs fine on the atomic host we will perform a simple run of the busybox docker image. This will contact the docker hub, pull down the image, and run /bin/true:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
Unable to find image 'busybox' locally
Pulling repository busybox
e72ac664f4f0: Download complete
511136ea3c5a: Download complete
df7546f9f060: Download complete
e433a6c5b276: Download complete
PASS

After the Docker daemon has started the LVs that were created by docker-storage-setup will be used by device mapper as shown in the lsblk output below:

# lsblk
NAME                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                               252:0    0   20G  0 disk
├─vda1                            252:1    0  200M  0 part /boot
└─vda2                            252:2    0 19.8G  0 part
  ├─atomicos-root                 253:0    0    4G  0 lvm  /sysroot
  ├─atomicos-docker--meta         253:1    0   24M  0 lvm
  │ └─docker-253:0-6298462-pool   253:3    0   10G  0 dm
  │   └─docker-253:0-6298462-base 253:4    0   10G  0 dm
  └─atomicos-docker--data         253:2    0   10G  0 lvm
    └─docker-253:0-6298462-pool   253:3    0   10G  0 dm
      └─docker-253:0-6298462-base 253:4    0   10G  0 dm

Atomic Host: Upgrade

Now on to an atomic upgrade. First let's check what commit we are currently on and save it to /etc/file1 for later:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
* fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /ostree/repo/refs/heads/ostree/0/1/0
1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84
#
# cat /ostree/repo/refs/heads/ostree/0/1/0 > /etc/file1

Now run an upgrade to the latest atomic compose:

# rpm-ostree upgrade
Updating from: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
14 metadata, 19 content objects fetched; 33027 KiB transferred in 16 seconds
Copying /etc changes: 26 modified, 4 removed, 39 added
Transaction complete; bootconfig swap: yes deployment count change: 1
Updates prepared for next boot; run "systemctl reboot" to start a reboot

And do a bit of poking around right before we reboot:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
  2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
  fedora-atomic-host 18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# reboot

Note that the * in the above output indicates which tree is currently booted.

After the reboot the new tree should be booted. Let's check things out and make /etc/file2 with our new commit hash in it:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# ostree admin status
* fedora-atomic-host 18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  fedora-atomic-host 1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0
    origin refspec: fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /ostree/repo/refs/heads/ostree/1/1/0
18e02c41666ef5f426bc43d01c4ce1b7ffc0611e993876cf332600e2ad8aa7c0
#
# cat /ostree/repo/refs/heads/ostree/1/1/0 > /etc/file2

As one final item let's boot up a docker container to make sure things still work there:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
PASS

Atomic Host: Rollback

Atomic host provides the ability to revert to the previous working tree if things go awry with the new tree. Let's revert our upgrade now and make sure things still work:

# rpm-ostree rollback
Moving '1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84.0' to be first deployment
Transaction complete; bootconfig swap: yes deployment count change: 0
Sucessfully reset deployment order; run "systemctl reboot" to start a reboot
#
# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
  2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
* 2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# reboot

After reboot:

# rpm-ostree status
  TIMESTAMP (UTC)         ID             OSNAME               REFSPEC
* 2014-11-12 22:28:04     1877f1fa64     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
  2014-11-13 10:52:06     18e02c4166     fedora-atomic-host   fedora-atomic:fedora-atomic/f21/x86_64/docker-host
#
# cat /etc/file1
1877f1fa64be8bec8adcd43de6bd4b5c39849ec7842c07a6d4c2c2033651cd84
# cat /etc/file2
cat: /etc/file2: No such file or directory

Notice that /etc/file2 did not exist until after the upgrade, so it did not persist through the rollback.

And the final item on the list is to make sure Docker still works:

# docker run -it --rm busybox true && echo "PASS" || echo "FAIL"
PASS

Anddd Boom.. You have just put atomic through some paces.

Docker: Copy Into A Container Volume


I need to copy a few files into my docker container.. Should be easy, right? It turns out it's not so trivial. In Docker 1.0.0 and earlier the docker cp command can be used to copy files from a container to the host, but not the other way around...

Most of the time you can work around this by using an ADD statement in the Dockerfile, but I often need to populate data within data-only volume containers before I start other containers that use the data. To copy data into the volume you can use tar and pipe the contents into a new container that shares the volume, like so:
[root@localhost ~]# docker run -d -i -t -v /volumes/wpdata --name wpdata mybusybox sh
416ea2a877267f566ef8b054a836e8b6b2550b347143c4fe8ed2616e11140226
[root@localhost ~]#
[root@localhost ~]# tar -c files/ | docker run -i --rm -w /volumes/wpdata/ --volumes-from wpdata mybusybox tar -xv
files/
files/file8.txt
files/file9.txt
files/file4.txt
files/file7.txt
files/file1.txt
files/file6.txt
files/file2.txt
files/file5.txt
files/file10.txt
files/file3.txt

So.. In the example I created a new data-only volume container named wpdata and then ran tar to pipe the contents of a directory to a new container that shared the volumes of the original container. Not so tough, but not as easy as docker cp. I think docker cp should gain this functionality sometime in the future (issue tracker here).
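Another way to get the same effect, if the data already lives in a directory on the host, is to bind mount that directory into a throwaway container and cp the contents into the volume. A sketch, assuming the files are in /root/files on the host:

[root@localhost ~]# docker run --rm -v /root/files:/src --volumes-from wpdata mybusybox cp -r /src /volumes/wpdata/files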

Enjoy

Dusty

Creating Your Own Minimal Docker Image in Fedora


Sometimes it can be useful to have a docker image with just the bare essentials. Maybe you want a container with just enough to run your app, or you are using something like data volume containers and want just enough to browse the filesystem. Either way you can create your own minimalist busybox image on Fedora with a pretty simple script.

The script below was inspired in part by Marek Goldmann's post about creating a minimal image for wildfly and in part by the busybox website.

# cd to a temporary directory
tmpdir=$(mktemp -d)
pushd $tmpdir

# Get and extract busybox
yumdownloader busybox
rpm2cpio busybox*rpm | cpio -imd
rm -f busybox*rpm

# Create symbolic links back to busybox
for i in $(./sbin/busybox --list); do
    ln -s /sbin/busybox ./sbin/$i
done

# Create container
tar -c . | docker import - mybusybox

# Go back to old pwd
popd

After running the script there is a new image on your system with the mybusybox tag. You can run it and take a look around like so:
[root@localhost ~]# docker images mybusybox
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
mybusybox           latest              f526db9e0d80        12 minutes ago      1.309 MB
[root@localhost ~]#
[root@localhost ~]# docker run -i -t mybusybox /sbin/busybox sh
# ls -l /sbin/ls
lrwxrwxrwx    1 0        0               13 Jul  8 02:15 /sbin/ls -> /sbin/busybox
#
# ls /
dev   etc   proc  sbin  sys   usr
#
# df -kh .
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-394094-addac9507205082fbd49c8f45bbd0316fd6b3efbb373bb1d717a3ccf44b8a97e
                          9.7G     23.8M      9.2G   0% /

Enjoy!

Dusty

Zero to WordPress on Docker in 5 Minutes

Introduction


Docker is an emerging technology that has garnered a lot of momentum in the past year. I have been busy with a move to NYC and a job change (now officially a Red Hatter), so I am just now getting around to getting my feet wet with Docker.

Last night I sat down and decided to bang out some steps for installing wordpress in a docker container. Eventually I plan to move this site into a container so I figured this would be a good first step.

DockerPress


There are a few bits and pieces that need to be done to configure wordpress. For simplicity I decided to make this wordpress instance use sqlite rather than mysql. With all of that in mind, here is the basic recipe for wordpress:
  • Install apache and php.
  • Download wordpress and extract to appropriate folder.
  • Download the sqlite-integration plugin and extract.
  • Modify a few files...and DONE.
This is easily automated by creating a Dockerfile and using docker. The minimal Dockerfile (with comments) is shown below:
FROM goldmann/f20
MAINTAINER Dusty Mabe

# Install httpd and update openssl
RUN yum install -y httpd openssl unzip php php-pdo

# Download and extract wordpress
RUN curl -o wordpress.tar.gz http://wordpress.org/latest.tar.gz
RUN tar -xzvf wordpress.tar.gz --strip-components=1 --directory /var/www/html/
RUN rm wordpress.tar.gz

# Download plugin to allow WP to use sqlite
# http://wordpress.org/plugins/sqlite-integration/installation/
# - Move sqlite-integration folder to wordpress/wp-content/plugins folder.
# - Copy db.php file in sqlite-integration folder to wordpress/wp-content folder.
# - Rename wordpress/wp-config-sample.php to wordpress/wp-config.php.
#
RUN curl -o sqlite-plugin.zip http://downloads.wordpress.org/plugin/sqlite-integration.1.6.3.zip
RUN unzip sqlite-plugin.zip -d /var/www/html/wp-content/plugins/
RUN rm sqlite-plugin.zip
RUN cp /var/www/html/wp-content/{plugins/sqlite-integration/db.php,}
RUN cp /var/www/html/{wp-config-sample.php,wp-config.php}

# Fix permissions on all of the files
RUN chown -R apache /var/www/html/
RUN chgrp -R apache /var/www/html/

# Update keys/salts in wp-config for security
RUN RE='put your unique phrase here'; for i in {1..8}; do KEY=$(openssl rand -base64 40); sed -i "0,/$RE/s|$RE|$KEY|" /var/www/html/wp-config.php; done;

# Expose port 80 and set httpd as our entrypoint
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D", "FOREGROUND"]

With the power of the Dockerfile you can now build a new image using docker build and then run the new container with the docker run command. An example of these two commands is shown below:
[root@localhost ~]# ls Dockerfile
Dockerfile
[root@localhost ~]# docker build -t "wordpress" .
...
Successfully built 0b388013905e
...
[root@localhost ~]#
[root@localhost ~]# docker run -d -p 8080:80 -t wordpress
6da59c864d35bb0bb6043c09eb8b1128b2c1cb91f7fa456156df4a0a22f271b0

The docker build command builds an image from the Dockerfile and tags it "wordpress". The docker run command runs a new container based on the "wordpress" image and binds port 8080 on the host machine to port 80 within the container.
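Before pointing a browser at it you can double check the mapping with docker port; given the container started above it should print something like:

[root@localhost ~]# docker port 6da59c864d35 80
0.0.0.0:8080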

Now you can happily point your browser to http://localhost:8080 and see the wordpress 5 minute installation screen.
See a full screencast of the "zero to wordpress" process using docker here.
Download the Dockerfile here.

Cheers!
Dusty

NOTE: This was done on Fedora 20 with docker-io-0.9.1-1.fc20.x86_64.