Installing an OpenShift Origin Cluster on Fedora 25 Atomic Host: Part 2

Cross posted with this Project Atomic Blog post


In part 1 of this series we used the OpenShift Ansible Installer to install OpenShift Origin on three servers running Fedora 25 Atomic Host. The three machines we'll be using have the following roles and IP address configurations:

|     Role    |   Public IPv4  | Private IPv4 |
| master,etcd |    | |
|    worker   |   |  |
|    worker   | | |

In this blog, we'll explore the installed Origin cluster and then launch an application to see if everything works.

The Installed Origin Cluster

With the cluster up and running, we can log in as admin to the master node via the oc command. To install the oc CLI on your machine, you can follow these instructions or, on Fedora, you can install it via dnf install origin-clients. For this demo, we have the origin-clients-1.3.1-1.fc25.x86_64 rpm installed:

$ oc login --insecure-skip-tls-verify -u admin -p OriginAdmin
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default

Using project "default".
Welcome! See 'oc help' to get started.

NOTE: --insecure-skip-tls-verify was added because we do not have properly signed certificates. See the docs for installing a custom signed certificate.

After we log in we can see that we are using the default namespace. Let's see what nodes exist:

$ oc get nodes
NAME           STATUS                     AGE
               Ready                      9h
               Ready,SchedulingDisabled   9h
               Ready                      9h

The nodes represent each of the servers that are a part of the Origin cluster. The name of each node corresponds with its private IPv4 address. Note also that the node whose status contains SchedulingDisabled is the master,etcd node. This is because we specified openshift_schedulable=false for this node when we did the install in part 1.
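If you ever want to flip that decision after the install, the oadm client can toggle schedulability directly. Here is a sketch; the node name is a placeholder (substitute the NAME shown by oc get nodes), and it assumes you are logged in as a cluster admin:

```shell
# Hypothetical node name; substitute the NAME shown by 'oc get nodes'
NODE=master-node-name

# Allow pods to land on the master again...
oadm manage-node "$NODE" --schedulable=true

# ...or mark it unschedulable once more (the installed state)
oadm manage-node "$NODE" --schedulable=false
```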

Now let's check the pods, services, and routes that are running in the default namespace:

$ oc get pods -o wide 
NAME                       READY     STATUS    RESTARTS   AGE       IP             NODE
docker-registry-3-hgwfr    1/1       Running   0          9h
registry-console-1-q48xn   1/1       Running   0          9h
router-1-nwjyj             1/1       Running   0          9h
router-1-o6n4a             1/1       Running   0          9h
$ oc get svc
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry      <none>        5000/TCP                  9h
kubernetes       <none>        443/TCP,53/UDP,53/TCP     9h
registry-console   <none>        9000/TCP                  9h
router      <none>        80/TCP,443/TCP,1936/TCP   9h
$ oc get routes
NAME               HOST/PORT                                        PATH      SERVICES           PORT               TERMINATION
docker-registry              docker-registry    5000-tcp           passthrough
registry-console             registry-console   registry-console   passthrough

NOTE: If there are any pods that have failed to run, you can try to debug with the oc status -v and oc describe pod/<podname> commands. You can retry any failed deployments with the oc deploy <deploymentname> --retry command.

We can see that we have a pod, service, and route for both a docker-registry and a registry-console. The docker registry is where any container builds within OpenShift will be pushed and the registry console is a web frontend interface for the registry.

Notice that there are two router pods and that they are running on two different nodes: the worker nodes. We can send traffic to either of these nodes and it will get routed appropriately. For our install we elected to set the openshift_master_default_subdomain based on the address of one of the worker nodes, so we are only directing traffic to that node. Alternatively, we could have configured this as a hostname that was load balanced and/or performed round robin across both worker nodes.
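One quick way to convince yourself that both routers answer is to aim a request at each worker node directly and supply the Host header by hand. This is only a sketch: the IPs and route hostname below are placeholders, not values from the install above.

```shell
# Placeholder worker IPs and route hostname -- substitute your own
for ip in 203.0.113.20 203.0.113.30; do
  # HAProxy on each worker picks the backend from the Host header
  curl -s -o /dev/null -w "$ip -> %{http_code}\n" \
       -H 'Host: registry-console-default.example.com' "http://$ip/"
done
```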

Now that we have explored the install, let's try logging in as admin to the OpenShift web console:


And after we've logged in, we see the list of projects that the admin user has access to:


We then select the default project and can view the same applications that we looked at before using the oc command:


At the top, there is the registry console. Let's try accessing the registry console by clicking the link in the top right. Note that this is the link from the exposed route:


We can log in with the same admin/OriginAdmin credentials that we used to log in to the OpenShift web console.


After logging in, there are links to each project so we can see images that belong to each project, and we see recently pushed images.

And... we're done! We have poked around the infrastructure of the installed Origin cluster a bit. We've seen registry pods, router pods, and accessed the registry web console frontend. Next we'll get fancy and throw an example application onto the platform for the user user.

Running an Application as a Normal User

Now that we've observed some of the more admin like items using the admin user's account, we'll give the normal user a spin. First, we'll log in:

$ oc login --insecure-skip-tls-verify -u user -p OriginUser                                                                                        
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

After we log in as a normal user, the CLI tools recognize pretty quickly that this user has no projects and no applications running. The CLI tools give us some helpful clues as to what we should do next: create a new project. Let's create a new project called myproject:

$ oc new-project myproject
Now using project "myproject" on server "".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~

to build a new example application in Ruby.

After creating the new project, the CLI tools again give us some helpful text showing us how to get started with a new application on the platform. It is telling us to try out the example ruby application, building it on top of the Source-to-Image (or S2I) builder image known as centos/ruby-22-centos7. Might as well give it a spin:

$ oc new-app centos/ruby-22-centos7~
--> Found Docker image ecd5025 (10 hours old) from Docker Hub for "centos/ruby-22-centos7"

    Ruby 2.2 
    Platform for building and running Ruby 2.2 applications

    Tags: builder, ruby, ruby22

    * An image stream will be created as "ruby-22-centos7:latest" that will track the source image
    * A source build using source code from will be created
      * The resulting image will be pushed to image stream "ruby-ex:latest"
      * Every time "ruby-22-centos7:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "ruby-ex"
    * Port 8080/tcp will be load balanced by service "ruby-ex"
      * Other containers can access this service through the hostname "ruby-ex"

--> Creating resources with label app=ruby-ex ...
    imagestream "ruby-22-centos7" created
    imagestream "ruby-ex" created
    buildconfig "ruby-ex" created
    deploymentconfig "ruby-ex" created
    service "ruby-ex" created
--> Success
    Build scheduled, use 'oc logs -f bc/ruby-ex' to track its progress.
    Run 'oc status' to view your app.

Let's take a moment to digest that. A new image stream was created to track the upstream ruby-22-centos7:latest image. A ruby-ex buildconfig was created that will perform an S2I build that will bake the source code into the image from the ruby-22-centos7 image stream. The resulting image will be the source for another image stream known as ruby-ex. A deploymentconfig was created to deploy the application into pods once the build is done. Finally, a ruby-ex service was created so the application can be load balanced and discoverable.
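Each object in that chain can be inspected on its own. A few commands worth knowing (a sketch, using the names oc new-app printed above):

```shell
oc get is                  # image streams: ruby-22-centos7 and ruby-ex
oc logs -f bc/ruby-ex      # follow the S2I build as it bakes in the source
oc get dc/ruby-ex          # deployment config that rolls out the built image
oc get endpoints ruby-ex   # pod IPs currently backing the ruby-ex service
```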

After a short time, we check the status of the application:

$ oc status 
In project myproject on server

svc/ruby-ex -
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds on istag/ruby-22-centos7:latest 
      build #1 running for 26 seconds
    deployment #1 waiting on image or update

1 warning identified, use 'oc status -v' to see details.

NOTE: The warning referred to in the output is a warning about there being no healthcheck defined for this service. You can view the text of this warning by running oc status -v.

We can see here that there is a svc (service) that is associated with a dc (deploymentconfig) that is associated with a bc (buildconfig) that has a build that has been running for 26 seconds. The deployment is waiting for the build to finish before attempting to run.

After some more time:

$ oc status 
In project myproject on server

svc/ruby-ex -
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds on istag/ruby-22-centos7:latest 
    deployment #1 running for 6 seconds

1 warning identified, use 'oc status -v' to see details.

The build is now done and the deployment is running.

And after more time:

$ oc status 
In project myproject on server

svc/ruby-ex -
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds on istag/ruby-22-centos7:latest 
    deployment #1 deployed about a minute ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

We have an app! What are the running pods in this project?:

$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
ruby-ex-1-build   0/1       Completed   0          13m
ruby-ex-1-mo3lb   1/1       Running     0          11m

The build has Completed and the ruby-ex-1-mo3lb pod is Running. The only thing we have left to do is expose the service so that it can be accessed via the router from the outside world:

$ oc expose svc/ruby-ex
route "ruby-ex" exposed
$ oc get route/ruby-ex
NAME      HOST/PORT                                 PATH      SERVICES   PORT       TERMINATION
ruby-ex             ruby-ex    8080-tcp   

With the route exposed we should now be able to access the application from the outside world. Before we do that, we'll log in to the OpenShift console as the user user and view the running pods in project myproject:


And pointing the browser to we see:




We have explored the basic OpenShift Origin cluster that we set up in part 1 of this two part blog series. We viewed the infrastructure docker registry and router components and discussed how they are set up. We also ran through an example application that was suggested to us by the command line tools, and were able to define that application, monitor its progress, and eventually access it from our web browser. Hopefully this blog gives the reader an idea or two about how they can get started with setting up and using an Origin cluster on Fedora 25 Atomic Host.


Installing an OpenShift Origin Cluster on Fedora 25 Atomic Host: Part 1

Cross posted with this Project Atomic Blog post


OpenShift Origin is the upstream project that builds on top of the Kubernetes platform and feeds into the OpenShift Container Platform product that is available from Red Hat today. Origin is a great way to get started with Kubernetes, and what better place to run a container orchestration layer than on top of Fedora Atomic Host?

We recently released Fedora 25, along with the first biweekly release of Fedora 25 Atomic Host. This blog post will show you the basics for getting a production installation of Origin running on Fedora 25 Atomic Host using the OpenShift Ansible Installer. The OpenShift Ansible installer will allow you to install a production-worthy OpenShift cluster. If you'd like to just try out OpenShift on a single node instead, you can set up OpenShift with the oc cluster up command, which we will detail in a later blog post.

This first post will cover just the installation. In a later blog post we'll take the system we just installed for a spin and make sure everything is working as expected.


We've tried to make this setup as generic as possible. In this case we will be targeting three generic servers that are running Fedora 25 Atomic Host. As is common with cloud environments these servers each have an "internal" private address that can't be accessed from the internet, and a public NATed address that can be accessed from the outside. Here is the identifying information for the three servers:

|     Role    |   Public IPv4  | Private IPv4 |
| master,etcd |    | |
|    worker   |   |  |
|    worker   | | |

NOTE In a real production setup we would want multiple master nodes and multiple etcd nodes, closer to what is shown in the installation docs.

As you can see from the table we've marked one of the nodes as the master and the other two as what we're calling worker nodes. The master node will run the API server, scheduler, and controller manager. We'll also run etcd on it. Since we want to make sure we don't starve the node running etcd, we'll mark the master node as unschedulable so that application containers don't get scheduled to run on it.

The other two nodes, the worker nodes, will have the proxy and the kubelet running on them; this is where the containers (inside of pods) will get scheduled to run. We'll also tell the installer to run a registry and an HAProxy router on the two worker nodes so that we can perform builds as well as access our services from the outside world via HAProxy.

The Installer

OpenShift Origin uses Ansible to manage the installation of different nodes in a cluster. The code for this is aggregated in the OpenShift Ansible Installer on GitHub. Additionally, to run the installer we'll need Ansible installed on our workstation or laptop.

NOTE At this time Ansible 2.2 or greater is REQUIRED.

We already have Ansible 2.2 installed so we can skip to cloning the repo:

$ git clone &>/dev/null
$ cd openshift-ansible/
$ git checkout 734b9ae199bd585d24c5131f3403345fe88fe5e6
Previous HEAD position was 6d2a272... Merge pull request #2884 from sdodson/image-stream-sync
HEAD is now at 734b9ae... Merge pull request #2876 from dustymabe/dusty-fix-etcd-selinux

So that this blog post gives reproducible results, we are specifically checking out commit 734b9ae199bd585d24c5131f3403345fe88fe5e6, since the OpenShift Ansible project is fast-moving. These instructions will probably work on the latest master, but you may hit a bug, in which case you should open an issue.

Now that we have the installer we can create an inventory file called myinventory in the same directory as the git repo. This inventory file can be anywhere, but for this install we'll place it there.

Using the IP information from the table above we create the following inventory file:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=fedora
ansible_become=true
deployment_type=origin
containerized=true
openshift_release=v1.3.1
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
openshift_master_default_subdomain=

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}

# host group for masters
[masters]
 openshift_public_hostname= openshift_hostname=

# host group for etcd, should run on a node that is not schedulable
[etcd]

# host group for worker nodes, we list master node here so that
# openshift-sdn gets installed. We mark the master node as not
# schedulable.
[nodes]
 openshift_hostname= openshift_schedulable=false
 openshift_hostname= openshift_node_labels="{'router':'true','registry':'true'}"
 openshift_hostname= openshift_node_labels="{'router':'true','registry':'true'}"

Well that is quite a bit to digest, isn't it? Don't worry, we'll break down this file in detail.

Details of the Inventory File

OK, so how did we create this inventory file? We started with the docs and copied one of the examples from there. This type of install we are doing is called a BYO (Bring Your Own) install because we are bringing our own servers and not having the installer contact a cloud provider to bring up the infrastructure for us. For reference there is also a much more detailed BYO inventory file you can look study.

So let's break down our inventory file. First we have the OSEv3 group and list the hosts in the masters, nodes, and etcd groups as children of that group:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

Then we set a bunch of variables for that group:

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=fedora
ansible_become=true
deployment_type=origin
containerized=true
openshift_release=v1.3.1
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
openshift_master_default_subdomain=

Let's run through each of them:

  • ansible_user=fedora - fedora is the user that you use to connect to Fedora 25 Atomic Host.
  • ansible_become=true - We want the installer to sudo when running commands.
  • deployment_type=origin - Run OpenShift Origin.
  • containerized=true - Run Origin from containers.
  • openshift_release=v1.3.1 - The version of Origin to run.
  • openshift_router_selector='router=true' - Set it so that any nodes that have this label applied to them will run a router by default.
  • openshift_registry_selector='registry=true' - Set it so that any nodes that have this label applied to them will run a registry by default.
  • openshift_master_default_subdomain - This setting is used to tell OpenShift what subdomain to apply to routes that are created when exposing services to the outside world.

Whew ... quite a bit to run through there! Most of them are relatively self-explanatory, but openshift_master_default_subdomain might need a little more explanation. Basically, its value needs to be a Wildcard DNS Record so that any domain can be prefixed onto the front of the record and it will still resolve to the same IP address. We have decided to use a free service called xip.io so that we don't have to set up wildcard DNS just for this example.

So for our example, a domain like will resolve to IP address . A domain like will also resolve to that same address. These requests will come in to that node, which is one of our worker nodes where a router (HAProxy) is running. HAProxy will route the traffic based on the domain used (app1 vs app2, etc.) to the appropriate service within OpenShift.
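You can verify the wildcard behavior from any machine with dig. This sketch uses a documentation-range IP rather than our real worker address, and assumes the xip.io service is reachable from your network:

```shell
# Any label prefixed to a xip.io name resolves to the IP embedded in it,
# so both of these lookups should return 203.0.113.10
dig +short app1.203.0.113.10.xip.io
dig +short app2.203.0.113.10.xip.io
```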

OK, next up in our inventory file we have some auth settings:

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}

You can use a multitude of authentication providers with OpenShift. The above statements say that we want to use htpasswd for authentication and we want to create two users. The password for the admin user is OriginAdmin, while the password for the user user is OriginUser. We generated these passwords by running htpasswd on the command line like so:

$ htpasswd -bc /dev/stdout admin OriginAdmin
Adding password for admin user
$ htpasswd -bc /dev/stdout user OriginUser
Adding password for user user
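Since the apr1 scheme is deterministic for a given salt, you can sanity-check an inventory entry without htpasswd by recomputing it with openssl (a sketch; it assumes openssl passwd supports -apr1, as modern OpenSSL builds do):

```shell
# Recompute the admin hash using the salt embedded in the stored value
# ($apr1$<salt>$<digest>); the output should match the inventory entry
openssl passwd -apr1 -salt zgSjCrLt OriginAdmin
```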

OK, now on to the host groups. First up, our master nodes:

# host group for masters
[masters]
 openshift_public_hostname= openshift_hostname=

We have set openshift_public_hostname so that certificates will use that hostname rather than a detected hostname. We're also setting openshift_hostname because there is a bug where the golang resolver can't resolve *.ec2.internal addresses. This is also documented as an issue against Origin. Once this bug is resolved, you won't have to set openshift_hostname.

Next up we have the etcd host group. We're simply re-using the master node for a single etcd node. In a production deployment, we'd have several:

# host group for etcd, should run on a node that is not schedulable
[etcd]

Finally, we have our worker nodes:

# host group for worker nodes, we list master node here so that
# openshift-sdn gets installed. We mark the master node as not
# schedulable.
[nodes]
 openshift_hostname= openshift_schedulable=false
 openshift_hostname= openshift_node_labels="{'router':'true','registry':'true'}"
 openshift_hostname= openshift_node_labels="{'router':'true','registry':'true'}"

We include the master node in this group so that the openshift-sdn will get installed and run there. However, we do set the master node as openshift_schedulable=false because it is running etcd. The last two nodes are our worker nodes and we have also added the router=true and registry=true node labels to them so that the registry and the router will run on them.
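Once the install finishes, it's easy to confirm that those labels actually landed where we expect (a sketch; it assumes you're logged in as a cluster admin):

```shell
# The workers should carry both labels; the master should carry neither
oc get nodes --show-labels | grep -E 'registry=true|router=true'
```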

Executing the Installer

Now that we have the installer code and the inventory file named myinventory in the same directory, let's see if we can ping our hosts and check their state:

$ ansible -i myinventory nodes -a '/usr/bin/rpm-ostree status' | SUCCESS | rc=0 >>
State: idle
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.42 (2016-11-16 10:26:30)
        Commit: c91f4c671a6a1f6770a0f186398f256abf40b2a91562bb2880285df4f574cde4
        OSName: fedora-atomic | SUCCESS | rc=0 >>
State: idle
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.42 (2016-11-16 10:26:30)
        Commit: c91f4c671a6a1f6770a0f186398f256abf40b2a91562bb2880285df4f574cde4
        OSName: fedora-atomic | SUCCESS | rc=0 >>
State: idle
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.42 (2016-11-16 10:26:30)
        Commit: c91f4c671a6a1f6770a0f186398f256abf40b2a91562bb2880285df4f574cde4
        OSName: fedora-atomic

Looks like they are up and all at the same state. The next step is to unleash the installer. Before we do, we should note that Fedora has moved to python3 by default. While Atomic Host still has python2 installed for legacy package support, not all of the modules needed by the installer are supported in python2 on Atomic Host. Thus, we'll forge ahead and use python3 as the interpreter for Ansible by specifying -e 'ansible_python_interpreter=/usr/bin/python3' on the command line:

$ ansible-playbook -i myinventory playbooks/byo/config.yml -e 'ansible_python_interpreter=/usr/bin/python3'
Using /etc/ansible/ansible.cfg as config file
PLAY RECAP *********************************************************************
               : ok=162  changed=49   unreachable=0    failed=0
               : ok=540  changed=150  unreachable=0    failed=0
               : ok=159  changed=49   unreachable=0    failed=0
localhost                  : ok=15   changed=9    unreachable=0    failed=0

We snipped pretty much all of the output. You can download the log file in its entirety from here.
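Incidentally, if you'd rather not pass the interpreter on every ansible-playbook run, the same setting can live in the inventory's variable section. This config fragment is equivalent to the -e flag used above:

```ini
# In the myinventory file, alongside the other OSEv3 variables
[OSEv3:vars]
ansible_python_interpreter=/usr/bin/python3
```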

So now the installer has run, and our systems should be up and running. There is only one more thing we have to do before we can take this system for a spin.

We created two users, user and admin. Currently there is no way to have the installer associate one of these users with the cluster-admin role in OpenShift (we opened a request for that), so we must run a command ourselves to grant the admin user that role. The command is oadm policy add-cluster-role-to-user cluster-admin admin.

We'll go ahead and run that command now on the master node via ansible:

$ ansible -i myinventory masters -a '/usr/local/bin/oadm policy add-cluster-role-to-user cluster-admin admin' | SUCCESS | rc=0 >>

And now we are ready to log in as either the admin or user user, using oc login from the command line or by visiting the web frontend.

NOTE To install the oc CLI tool follow these instructions.

To Be Continued

In this blog we brought up an OpenShift Origin cluster on three servers that were running Fedora 25 Atomic Host. We reviewed the inventory file in detail to explain exactly what options were used and why. In a future blog post we'll take the system for a spin, inspect some of the running system that was generated from the installer, and spin up an application that will run on and be hosted by the Origin cluster.

If you run into issues following these installation instructions, please report them in one of the following places:


Kompose Up for OpenShift and Kubernetes

Cross posted with this Red Hat Developer Blog post


Kompose is a tool to convert from higher level abstractions of application definitions into more detailed Kubernetes artifacts. These artifacts can then be used to bring up the application in a Kubernetes cluster. What higher level application abstraction should kompose use?

One of the most popular application definition formats for developers is the docker-compose.yml format for use with docker-compose that communicates with the docker daemon to bring up the application. Since this format has gained some traction we decided to make it the initial focus of Kompose to support converting this format to Kubernetes. So, where you would choose docker-compose to bring up the application in docker, you can use kompose to bring up the same application in Kubernetes, if that is your preferred platform.

How Did We Get Here?

At Red Hat, we had initially started on a project similar to Kompose, called Henge. We soon found Kompose and realized we had a lot of overlap in our goals so we decided to jump on board with the folks at Skippbox and Google who were already working on it.

TL;DR We have been working hard with the Kompose and Kubernetes communities. Kompose is now a part of the Kubernetes Incubator and we have also added support in Kompose for getting up and running in your target environment in one command:

$ kompose up 

In this blog I'll run you through a simple application example and use kompose up to bring up the application on Kubernetes and OpenShift.

Getting an Environment

It is now easier than ever to get up and running with Kubernetes and OpenShift. If you want a hosted option, you can spin up clusters in many cloud environments, including Google Container Engine and OpenShift Online (with the developer preview). If you want a local experience for trying out Kubernetes/OpenShift on your laptop, there is the RHEL-based CDK (and the ADB for upstream components), oc cluster up, minikube, and the list goes on!

Any way you look at it, there are many options for trying out Kubernetes and OpenShift these days. For this blog I'll choose to run on OpenShift Online, but the steps should work on any OpenShift or Kubernetes environment.

Once I had logged in to the OpenShift console, I was able to grab a token by visiting the token request page and clicking Request another token. It will then show you the oc command you can run to log your local machine into OpenShift Online.

I'll log in below and create a new project for this example blog:

$ oc login --token=xxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --server=
Logged into "" as "dustymabe" using the token provided.

You don't have any projects. You can try to create a new project, by running

    $ oc new-project <projectname>

$ oc new-project blogpost
Now using project "blogpost" on server "".

You can add applications to this project with the 'new-app' command. For example, try:

    $ oc new-app centos/ruby-22-centos7~

to build a new hello-world application in Ruby.

Example Application

Now that I have an environment to run my app in, I need to give it an app to run! I took the example mlbparks application that we have been using for OpenShift for some time and converted the template to a more simplified definition of the application using the docker-compose.yml format:

$ cat docker-compose.yml
version: "2"
services:
  mongodb:
    image: centos/mongodb-26-centos7
    ports:
      - '27017'
    volumes:
      - /var/lib/mongodb/data
    environment:
      MONGODB_USER: user
      MONGODB_PASSWORD: mypass
      MONGODB_ADMIN_PASSWORD: myrootpass
  mlbparks:
    image: dustymabe/mlbparks
    ports:
      - '8080'
    environment:
      MONGODB_USER: user
      MONGODB_PASSWORD: mypass
      MONGODB_ADMIN_PASSWORD: myrootpass

Basically we have the mongodb service, and then the mlbparks service, which is backed by the dustymabe/mlbparks image. I generated this image from the openshift3mlbparks source code using s2i with the following command:

$ s2i build openshift/wildfly-100-centos7 dustymabe/mlbparks 

Now that we have our compose yaml file we can use kompose to bring it up. I am using kompose version v0.1.2 here:

$ kompose --version
kompose version 0.1.2 (92ea047)
$ kompose --provider openshift up
We are going to create OpenShift DeploymentConfigs, Services and PersistentVolumeClaims for your Dockerized application. 
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead. 

INFO[0000] Successfully created Service: mlbparks       
INFO[0000] Successfully created Service: mongodb        
INFO[0000] Successfully created DeploymentConfig: mlbparks 
INFO[0000] Successfully created ImageStream: mlbparks   
INFO[0000] Successfully created DeploymentConfig: mongodb 
INFO[0000] Successfully created ImageStream: mongodb    
INFO[0000] Successfully created PersistentVolumeClaim: mongodb-claim0 

Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is,pvc' for details.

OK, what happened here? We created an mlbparks Service, DeploymentConfig, and ImageStream, as well as a mongodb Service, DeploymentConfig, and ImageStream. We also created a PersistentVolumeClaim named mongodb-claim0 for the /var/lib/mongodb/data volume.

Note: If you don't have Persistent Volumes, the application will never come up because the claim will never be satisfied. If you want to deploy somewhere without Persistent Volumes, add --emptyvols to your command, like kompose --provider openshift up --emptyvols.
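If you'd rather review the artifacts before anything is created, the convert workflow mentioned in kompose's own output works like this (a sketch; the generated file name below is hypothetical, since names vary by kompose version):

```shell
# Write the OpenShift artifacts to disk instead of creating them live
kompose --provider openshift convert

# ...inspect or edit them, then create each generated artifact, e.g.:
oc create -f mlbparks-deploymentconfig.json   # hypothetical file name
```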

So let's see what is going on in OpenShift by querying from the CLI:

$ oc get dc,svc,is,pvc
NAME             REVISION                               REPLICAS       TRIGGERED BY
mlbparks         1                                      1              config,image(mlbparks:latest)
mongodb          1                                      1              config,image(mongodb:latest)
NAME             CLUSTER-IP                             EXTERNAL-IP    PORT(S)     AGE
mlbparks                           <none>         8080/TCP    4m
mongodb                          <none>         27017/TCP   4m
NAME             DOCKER REPO                            TAGS           UPDATED
mlbparks   latest         4 minutes ago
mongodb    latest         4 minutes ago
NAME             STATUS                                 VOLUME         CAPACITY   ACCESSMODES   AGE
mongodb-claim0   Bound                                  pv-aws-adbb5   100Mi      RWO           4m

and the web console looks like:


One final thing we have to do is set it up so that we can connect to the service (i.e. the service is exposed to the outside world). On OpenShift, we need to expose a route. This will be done for us automatically in the future (follow along at #140), but for now the following command will suffice:

$ oc expose svc/mlbparks
route "mlbparks" exposed
$ oc get route mlbparks 
NAME       HOST/PORT                                          PATH      SERVICE         TERMINATION   LABELS
mlbparks             mlbparks:8080                 service=mlbparks

For me this means I can now access the mlbparks application by pointing my web browser to

Let's try it out:



Fedora 25 available in DigitalOcean

Cross posted with this fedora magazine post

Last week the Fedora Project released Fedora 25. This week Fedora Project Community members have worked with the DigitalOcean team to make Fedora 25 available on their platform. If you're not familiar with DigitalOcean already, it's a dead simple cloud hosting platform that's great for developers.

Important Notes

The image has some specific differences from others that Fedora ships. You may need to know about these differences before you use the image.

  • Usually Fedora Cloud images have you log in as user fedora. But as with other DigitalOcean images, log into the Fedora 25 DigitalOcean cloud image with your ssh key as root.
  • Similar to our last few Fedora releases, Fedora 25 also has SELinux enabled by default. Not familiar with SELinux yet? No problem. Read more about it here
  • In these images there's no firewall on by default. There's also no cloud provided firewall solution. We recommend that you secure your system immediately after you log in.
  • Fedora 25 should be available in all datacenters across the globe.
  • If you have a problem you think is Fedora specific then drop us an email at , or ping us in #fedora-cloud on Freenode. You can also let the team know if you just want to say you're enjoying using the F25 image.

Visit the DigitalOcean Fedora landing page and spin one up today!

Happy Developing!

Sharing a Go library to Python using CFFI


I am not a Go expert so I may not be able to answer questions you may have about this process. I am simply trying to reproduce and document what I saw. Additionally, it was actually recommended to not do this for long running processes because you'll have two runtimes that may work against each other eventually (garbage collection, etc). Be wary.


Back in July I went to Go Camp that was a part of Open Camps in NYC. It was an informal setting, which actually made it much more comfortable for someone relatively new to the Go programming language.

At the conference Filippo Valsorda (@FiloSottile) did an off the cuff session where he took a Go function and then created a C shared library (.so) out of it (added in Go 1.5). He then took the C shared library and generated a Python shared object that can be directly imported in Python and executed.

Filippo actually has an excellent blog post on this process already. The one difference between his existing blog post and what he presented at Go Camp is that he used CFFI (as opposed to defining CPython extensions) to generate the shared object files that could then be imported directly into Python.

In an attempt to recreate what Filippo did, I created a git repo and I'll use it to demonstrate the process that was shown to us at Go Camp. If desired, there is also an archive of that git repo here.

System Set Up

I ran this on a Fedora 24 system. To set a Fedora 24 system up from scratch I ran:

$ sudo dnf install golang git python2 python2-cffi python2-devel redhat-rpm-config

I then set my GOPATH. On your system you may already have your GOPATH set:

$ export GOPATH=~/go

Then I cloned the git repo with the example code for this blog post and changed into that directory:

$ go get
$ cd $GOPATH/src/

Hello From Go

Now we can look at the file that contains the function we want to export to Python:

$ cd _Go
$ cat hello.go 
package main

import "C"
import "fmt"
import "math"

//export Hello
func Hello() {
   fmt.Printf("Hello! The square root of 4 is: %g\n", math.Sqrt(4))
}

func main() {}

In this file you can see that we are importing a few standard libraries and then defining a function that prints some text to the terminal.

The import "C" is part of cgo and the //export Hello comment right before the function declaration is where we tell go that we want to export the Hello function into the shared library.

Before we generate a shared library, let's test to see if the code compiles and runs:

$ go build -o hello .
$ ./hello 
Hello! The square root of 4 is: 2

Now let's generate a shared library:

$ go build -buildmode=c-shared -o .
$ ls
hello hello.go  hello.h  vendor
$ file ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=ecf1770f0897ca064aab8dacbcb5f7c2f688f34d, not stripped

The shared object and hello.h files were generated by Go. The .so is the shared library that we can now use in C.

Hello From C

We could jump directly to Python now, but we are going to take a detour and just make sure we can get the shared object we just created to work in a simple C program. Here we go:

$ cd ../_C
$ tree .
├── hello.c
├── hello.h -> ../_Go/hello.h
├── -> ../_Go/

0 directories, 4 files
$ cat hello.c
#include "hello.h"

void main() {
    Hello();
}
$ gcc hello.c 
$ LD_LIBRARY_PATH=$(pwd) ./a.out                                                                                                                                                           
Hello! The square root of 4 is: 2

What did we just do there? Well, we can see that hello.h and the shared library are symlinked to the files that were just created by the Go compiler. Then we have the simple C program that just includes the hello.h header file and calls the Hello() function. We then compile that C program and run it.

We also set LD_LIBRARY_PATH to $(pwd) so that the runtime shared library loader can find the shared object at runtime, and then we ran the program.

So... It worked! Everything looks good in C land.

Hello From Python

For Python we'll first generate the shared object that can be imported directly into Python (just like any .py file). To do this we are using CFFI. A good example that is close to what we are doing here is in the CFFI API Mode documentation.

Here is the file we are using:

$ cd ../_Python/
$ tree .
├── hello.h -> ../_Go/hello.h
├── -> ../_Go/

0 directories, 5 files
$ cat 
#!/usr/bin/env python
from cffi import FFI
ffibuilder = FFI()

ffibuilder.set_source("pyhello",
    """ //passed to the real C compiler
        #include "hello.h"
    """,
    libraries=["hello"], library_dirs=["."])

ffibuilder.cdef("""
    extern void Hello();
""")

if __name__ == "__main__":
    ffibuilder.compile(verbose=True)

The ffibuilder.set_source("pyhello", ...) call sets the name of the extension module that will get created, defines the code that gets passed to the real C compiler, and specifies other objects to load (the shared library we built with Go). The ffibuilder.cdef defines the declarations we are exporting into the shared object; in this case extern void Hello();, so we are just reusing what was declared in hello.h.

Let's run it and see what happens:

$ ./ 
running build_ext
building 'pyhello' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.7 -c pyhello.c -o ./pyhello.o
gcc -pthread -shared -Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ./pyhello.o -L/usr/lib64 -lpython2.7 -o ./
$ ls pyhello.* 
pyhello.c  pyhello.o
$ file ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=9a2670b5d287fe80180b158a61ea3e35086e89d7, not stripped

OK. A program (pyhello.c) was generated and then compiled into a shared object. Can we use it?

Here is the file that imports the generated module and then runs the Hello() function:

$ cat 

#!/usr/bin/env python
from pyhello import ffi, lib
lib.Hello()

Does it work?:

$ LD_LIBRARY_PATH=$(pwd) ./                                                                                                                                                   
Hello! The square root of 4 is: 2

You bet!

- Dusty

Booting Lenovo T460s after Fedora 24 Updates


I recently picked up a new Lenovo T460s work laptop. It is fairly light and has 20G of memory, which is great for running Virtual Machines. One of the first things I did on this new laptop was install Fedora 24 onto the system. After installing from the install media I was up and running and humming along.

Soon after installing I updated the system to get all bugfixes and security patches. I updated everything and rebooted and:

Probing EDD (edd=off to disable)... OK

Yep, the system wouldn't even boot. The GRUB menu would appear and you could select different entries, but the system would not boot.

The Problem

It was time to hit the internet. After some futile attempts on Google searches I decided to ask on Fedora's devel list. I got a quick response that led to two bug reports (1351943, 1353103) about the issue. It turns out that the newer kernel + microcode that we had updated to had some sort of incompatibility with the existing system firmware. According to the comments from 1353103 there was most likely an assumption the microcode was making that wasn't true.

The Solution

The TL;DR on the fix for this is that you need to update the BIOS on the T460s. I did this by going to the T460s support page (at the time of this writing the link is here) and downloading the ISO image with the BIOS update (n1cur06w.iso).

Now what was I going to do with an ISO image? The laptop doesn't have a cd-rom drive, so burning a cd wouldn't help us one bit. There is a helpful article about how to take this ISO image and update the BIOS.

Here are the steps that I took.

  • First, boot the laptop into the older kernel so you can actually get to a Linux environment.
  • Next, install the geteltorito software so that we can create an image to dd onto the USB key
$ sudo dnf install geteltorito 
  • Next use the software to create the image and write it to a flash drive. Note: /dev/sdg was the USB key on my system. Please be sure to change that to the device corresponding to your USB key.
$ geteltorito -o bios.img n1cur06w.iso 
Booting catalog starts at sector: 20 
Manufacturer of CD: NERO BURNING ROM
Image architecture: x86
Boot media type is: harddisk
El Torito image starts at sector 27 and has 47104 sector(s) of 512 Bytes

Image has been written to file "bios.img".

$ sudo dd if=bios.img of=/dev/sdg bs=1M
23+0 records in
23+0 records out
24117248 bytes (24 MB, 23 MiB) copied, 1.14576 s, 21.0 MB/s
  • Next, reboot the laptop.
  • After the Lenovo logo appears press ENTER.
  • Press F12 to make your laptop boot from something other than your HDD.
  • Select the USB stick.
  • Make sure your laptop has its power supply plugged in. (It will refuse to update otherwise.)
  • Follow the instructions.
  • Select number 2: Update system program
  • Once finished reboot the laptop and remove the USB key.

Done! I was now happily booting Fedora again. I hope this helps others who hit the same problem!

- Dusty

Non Deterministic docker Networking and Source Based IP Routing


In the open source docker engine a new networking model was introduced in docker 1.9 which enabled the creation of separate "networks" for containers to be attached to. This, however, can lead to a nasty little problem where a port that is supposed to be exposed on the host isn't accessible from the outside. There are a few bug reports that are related to this issue.


This problem happens because docker wires up all of these containers to each other and the various "networks" using port forwarding/NAT via iptables. Let's take a popular example application which exhibits the problem, the Docker 3rd Birthday Application, and show what the problem is and why it happens.

We'll clone the git repo first and then check out the latest commit as of 2016-05-25:

# git clone
# cd docker-birthday-3/
# git checkout 'master@{2016-05-25}'
HEAD is now at 4f2f1c9... Update Dockerfile

Next we'll bring up the application:

# cd example-voting-app/
# docker-compose up -d 
Creating network "examplevotingapp_front-tier" with the default driver
Creating network "examplevotingapp_back-tier" with the default driver
Creating db
Creating redis
Creating examplevotingapp_voting-app_1
Creating examplevotingapp_worker_1
Creating examplevotingapp_result-app_1

So this created two networks and brought up several containers to host our application. Let's poke around to see what's there:

# docker network ls
NETWORK ID          NAME                          DRIVER
23c96b2e1fe7        bridge                        bridge              
cd8ecb4c0556        examplevotingapp_front-tier   bridge              
5760e64b9176        examplevotingapp_back-tier    bridge              
bce0f814fab1        none                          null                
1b7e62bcc37d        host                          host
# docker ps -a --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}"
NAMES                           IMAGE                         PORTS
examplevotingapp_result-app_1   examplevotingapp_result-app>80/tcp
examplevotingapp_voting-app_1   examplevotingapp_voting-app>80/tcp
redis                           redis:alpine        >6379/tcp
db                              postgres:9.4                  5432/tcp
examplevotingapp_worker_1       manomarks/worker              

So two networks were created and the containers running the application were brought up. Looks like we should be able to connect to the examplevotingapp_voting-app_1 application on the host port 5000 that is bound to all interfaces. Does it work?:

# ip -4 -o a
1: lo    inet scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet brd scope global dynamic eth0\       valid_lft 2921sec preferred_lft 2921sec
3: docker0    inet scope global docker0\       valid_lft forever preferred_lft forever
106: br-cd8ecb4c0556    inet scope global br-cd8ecb4c0556\       valid_lft forever preferred_lft forever
107: br-5760e64b9176    inet scope global br-5760e64b9176\       valid_lft forever preferred_lft forever
# curl --connect-timeout 5 &>/dev/null && echo success || echo failure
# curl --connect-timeout 5 &>/dev/null && echo success || echo failure

Does it work? Yes and no?

That's right. There is something complicated going on with the networking here. I can connect from localhost but can't connect to the public IP of the host. Docker wires things up in iptables so that things can go into and out of containers following a strict set of rules; see the iptables output if you are interested. This works fine if you only have one network interface per container but can break down when you have multiple interfaces attached to a container.

Let's jump in to the examplevotingapp_voting-app_1 container and check out some of the networking:

# docker exec -it examplevotingapp_voting-app_1 /bin/sh
/app # ip -4 -o a
1: lo    inet scope host lo\       valid_lft forever preferred_lft forever
112: eth1    inet scope global eth1\       valid_lft forever preferred_lft forever
114: eth0    inet scope global eth0\       valid_lft forever preferred_lft forever
/app # 
/app # ip route show
default via dev eth0 dev eth1  src dev eth0  src

So there is a clue. We have two interfaces, but our default route is going to go out of the eth0 on the network. It just so happens that our iptables rules (see linked iptables output from above) performed DNAT for tcp dpt:5000 to: So traffic from the outside is going to come in to this container on the eth1 interface but leave it on the eth0 interface, which doesn't play nice with the iptables rules docker has set up.

We can prove that here by asking what route we will take when a packet leaves the machine:

/app # ip route get from from via dev eth0

Which basically means it will leave from eth0 even though it came in on eth1. The Docker documentation was updated to try to explain the behavior when multiple interfaces are attached to a container in this git commit.

Test Out Theory Using Source Based IP Routing

To test out the theory on this we can use source based IP routing (some reading on that here). Basically the idea is that we create policy rules that make IP traffic leave on the same interface it came in on.

To perform the test we'll need our container to be privileged so we can add routes. Modify the docker-compose.yml to add privileged: true to the voting-app:

  voting-app:
    build: ./voting-app/.
    volumes:
      - ./voting-app:/app
    ports:
      - "5000:80"
    networks:
      - front-tier
      - back-tier
    privileged: true

Take down and bring up the application:

# docker-compose down
# docker-compose up -d

Exec into the container and create a new policy rule for packets originating from the network. Tell packets matching this rule to look up routing table 200:

# docker exec -it examplevotingapp_voting-app_1 /bin/sh
/app # ip rule add from table 200

Now add a default route for to routing table 200. Show the routing table after that and the rules as well:

/app # ip route add default via dev eth1 table 200
/app # ip route show table 200
default via dev eth1
/app # ip rule show
0:      from all lookup local 
32765:  from lookup 200 
32766:  from all lookup main 
32767:  from all lookup default

Now ask the kernel where a packet originating from our address will get sent:

/app # ip route get from from via dev eth1

And finally, go back to the host and check to see if everything works now:

# curl --connect-timeout 5 &>/dev/null && echo success || echo failure
# curl --connect-timeout 5 &>/dev/null && echo success || echo failure


I don't know if source based routing can be incorporated into docker to fix this problem or if there is a better solution. I guess we'll have to wait and find out.



NOTE I used the following versions of software for this blog post:

# rpm -q docker docker-compose kernel-core

Fedora BTRFS+Snapper – The Fedora 24 Edition


In the past I have configured my personal computers to be able to snapshot and rollback the entire system. To do this I am leveraging the BTRFS filesystem, a tool called snapper, and a patched version of Fedora's grub2 package. The patches needed from grub2 come from the SUSE guys and are documented well in this git repo.

This setup is not new. I have fully documented the steps I took in the past for my Fedora 22 systems in two blog posts: part1 and part2. This is a condensed continuation of those posts for Fedora 24.

NOTE: I'm using Fedora 24 alpha, but everything should be the same for the released version of Fedora 24.

Setting up System with LUKS + LVM + BTRFS

The manual steps for setting up the system are detailed in the part1 blog post from Fedora 22. This time around I have created a script that will quickly configure the system with LUKS + LVM + BTRFS. The script will need to be run in an Anaconda environment just like the manual steps were done in part1 last time.

You can easily enable ssh access to your Anaconda booted machine by adding inst.sshd to the kernel command line arguments. After booting up you can scp the script over and then execute it to build the system. Please read over the script and modify it to your liking.

Alternatively, for an automated install I have embedded that same script into a kickstart file that you can use. The kickstart file doesn't really leverage Anaconda at all because it simply runs a %pre script and then reboots the box. It's basically like just telling Anaconda to run a bash script, but allows you to do it in an automated way. None of the kickstart directives at the top of the kickstart file actually get used.

Installing and Configuring Snapper

After the system has booted for the first time, let's configure the system for doing snapshots. I still want to be able to track how much size each snapshot has taken so I'll go ahead and enable quota support on BTRFS. I covered how to do this in a previous post:

[root@localhost ~]# btrfs quota enable /
[root@localhost ~]# btrfs qgroup show /
qgroupid         rfer         excl 
--------         ----         ---- 
0/5           1.08GiB      1.08GiB

Next up is installing/configuring snapper. I am also going to install the dnf plugin for snapper so that rpm transactions will automatically get snapshotted:

[root@localhost ~]# dnf install -y snapper python3-dnf-plugins-extras-snapper
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
single | 0 |       |      | root |         | current     |         
[root@localhost ~]# snapper list-configs
Config | Subvolume
root   | /        
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 57 top level 5 path .snapshots

So we used the snapper command to create a configuration for the BTRFS filesystem mounted at /. As part of this process we can see from the btrfs subvolume list / command that snapper also created a .snapshots subvolume. This subvolume will be used to house the COW snapshots that are taken of the system.

Next, we'll workaround a bug that is causing snapper to have the wrong SELinux context on the .snapshots directory:

[root@localhost ~]# restorecon -v /.snapshots/
restorecon reset /.snapshots context system_u:object_r:unlabeled_t:s0->system_u:object_r:snapperd_data_t:s0

Finally, we'll add an entry to fstab so that regardless of what subvolume we are actually booted in we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description | Userdata
single | 0 |       |                                 | root |         | current     |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang     |         
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 64 top level 5 path .snapshots
ID 260 gen 64 top level 259 path .snapshots/1/snapshot
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot, called BigBang, and then ran btrfs subvolume list / to verify that a new snapshot was actually created. Notice that at the top of the section we ran btrfs subvolume get-default /. This outputs the currently set default subvolume for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide we want to use one of the snapshots for rollback.

Since we took a snapshot let's go ahead and make some changes to the system by updating the kernel:

[root@localhost ~]# dnf update -y kernel
[root@localhost ~]# rpm -q kernel
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang                       |         
single | 2 |       | Sat 23 Apr 2016 01:08:18 PM UTC | root | number  | /usr/bin/dnf update -y kernel |

So we updated the kernel and the snapper dnf plugin automatically created a snapshot for us. Let's reboot the system and see if the new kernel boots properly:

[root@localhost ~]# reboot 
[dustymabe@media ~]$ ssh root@ 
Warning: Permanently added '' (ECDSA) to the list of known hosts.
root@'s password: 
Last login: Sat Apr 23 12:18:55 2016 from
[root@localhost ~]# 
[root@localhost ~]# uname -r

Rolling Back

Say we don't like that new kernel. Let's go back to the earlier snapshot we made:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 3.)
Creating read-write snapshot of snapshot 1. (Snapshot 4.)
Setting default subvolume to snapshot 4.
[root@localhost ~]# reboot

snapper created a read-only snapshot of the current system and then a new read-write subvolume based on the snapshot we wanted to go back to. It then set the default subvolume to be the newly created read-write subvolume. After reboot you'll be in that read-write subvolume, exactly in the state your system was in at the time the snapshot was created.

In our case, after reboot we should now be booted into snapshot 4 as indicated by the output of the snapper rollback command above and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 263 gen 87 top level 259 path .snapshots/4/snapshot
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Sat 23 Apr 2016 01:04:51 PM UTC | root |         | BigBang                       |         
single | 2 |       | Sat 23 Apr 2016 01:08:18 PM UTC | root | number  | /usr/bin/dnf update -y kernel |         
single | 3 |       | Sat 23 Apr 2016 01:17:43 PM UTC | root |         |                               |         
single | 4 |       | Sat 23 Apr 2016 01:17:43 PM UTC | root |         |                               |         
[root@localhost ~]# ls /.snapshots/
1  2  3  4
[root@localhost ~]# btrfs subvolume list /
ID 259 gen 88 top level 5 path .snapshots
ID 260 gen 81 top level 259 path .snapshots/1/snapshot
ID 261 gen 70 top level 259 path .snapshots/2/snapshot
ID 262 gen 80 top level 259 path .snapshots/3/snapshot
ID 263 gen 88 top level 259 path .snapshots/4/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# uname -r
[root@localhost ~]# rpm -q kernel



Vagrant: Sharing Folders with vagrant-sshfs

cross posted from this fedora magazine post


We're trying to focus more on developer experience in the Red Hat ecosystem, and as part of that we've started to incorporate Vagrant into our standard offerings. In that effort, we're seeking a shared folder solution that doesn't include a bunch of if/else logic to figure out exactly which one you should use based on the OS/hypervisor you run under Vagrant.

The current options for Vagrant shared folder support can make you want to tear your hair out when you try to figure out which one you should use in your environment. This led us to look for a better answer for the user, so they no longer have to make these choices on their own based on their environment.

Current Synced Folder Solutions

"So what is the fuss about? Is it really that hard?" Well, it's certainly doable, but we want it to be easier. Here are the currently available synced folder options within Vagrant today:

  • virtualbox
    • This synced folder type uses a kernel module from the VirtualBox Guest Additions software to talk to the hypervisor. It requires you to be running on top of the Virtualbox hypervisor, and that the VirtualBox Guest Additions are installed in the Vagrant Box you launch. Licensing can also make distribution of the compiled Guest Additions problematic.
    • Hypervisor Limitation: VirtualBox
    • Host OS Limitation: None
  • nfs
    • This synced folder type uses NFS mounts. It requires you to be running on top of a Linux or Mac OS X host.
    • Hypervisor Limitation: None
    • Host OS Limitation: Linux, Mac
  • smb
    • This synced folder type uses Samba mounts. It requires you to be running on top of a Windows host and to have Samba client software in the guest.
    • Hypervisor Limitation: None
    • Host OS Limitation: Windows
  • 9p
    • This synced folder implementation uses 9p file sharing within the libvirt/KVM hypervisor. It requires the hypervisor to be libvirt/KVM and thus also requires Linux to be the host OS.
    • Hypervisor Limitation: Libvirt
    • Host OS Limitation: Linux
  • rsync
    • This synced folder implementation simply syncs folders between host and guest using rsync. Unfortunately this isn't actually shared folders, because the files are simply copied back and forth and can become out of sync.
    • Hypervisor Limitation: None
    • Host OS Limitation: None

So depending on your environment you are rather limited in which options work. You have to choose carefully to get something working without much hassle.
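Whichever option you land on shows up as the type: argument on the synced folder declaration in your Vagrantfile. A sketch (the box name and paths are placeholders), here choosing nfs:

```ruby
Vagrant.configure(2) do |config| = "fedora/23-cloud-base"

  # Pick one of the synced folder types above; which ones work
  # depends on your host OS and hypervisor.
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```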

What About SSHFS?

As part of this discovery process I had a simple question: "why not sshfs?" It turns out that Fabio Kreusch had a similar idea a while back and wrote a plugin to do mounts via SSHFS.

When I first found this I was excited because I thought I had the answer in my hands and someone had already written it! Unfortunately the old implementation didn't implement a synced folder plugin like all of the other synced folder plugins within Vagrant. In other words, it didn't inherit the synced folder class and implement the functions. It also, by default, mounted a guest folder onto the host rather than the other way around like most synced folder implementations do.

One goal I have is to actually have SSHFS be a supported synced folder plugin within Vagrant and possibly get it committed back up into Vagrant core one day. So I reached out to Fabio to find out if he would be willing to accept patches to get things working more along the lines of a traditional synced folder plugin. He kindly let me know he didn't have much time to work on vagrant-sshfs these days, and he no longer used it. I volunteered to take over.

The vagrant-sshfs Plugin

To make the plugin follow along the traditional synced folder plugin model I decided to rewrite the plugin. I based most of the original code off of the NFS synced folder plugin code. The new code repo is here on Github.

So now we have a plugin that will do SSHFS mounts of host folders into the guest. It works without any setup on the host, but it requires that the sftp-server software exist on the host. sftp-server is usually provided by OpenSSH and thus is easily available on Windows/Mac/Linux.

To compare with the other implementations on environment restrictions here is what the SSHFS implementation looks like:

  • sshfs
    • This synced folder implementation uses SSHFS to share folders between host and guest. The only requirement is that the sftp-server executable exist on the host.
    • Hypervisor Limitation: None
    • Host OS Limitation: None

Here are the overall benefits of using vagrant-sshfs:

  • Works on any host platform
    • Windows, Linux, Mac OS X
  • Works on any type-2 hypervisor
    • VirtualBox, Libvirt/KVM, Hyper-V, VMWare
  • Seamlessly Works on Remote Vagrant Solutions
    • Works with vagrant-aws, vagrant-openstack, etc..

Where To Get The Plugin

This plugin is hot off the presses, so it hasn't quite made it into Fedora yet. There are a few ways you can get it though. First, you can use Vagrant itself to retrieve the plugin from rubygems:

$ vagrant plugin install vagrant-sshfs

Alternatively you can get the RPM package from my copr:

$ sudo dnf copr enable dustymabe/vagrant-sshfs
$ sudo dnf install vagrant-sshfs

Your First vagrant-sshfs Mount

To use the plugin, you must tell Vagrant what folder you want mounted into the guest and where, by adding it to your Vagrantfile. An example Vagrantfile is below:

Vagrant.configure(2) do |config| = "fedora/23-cloud-base"
  config.vm.synced_folder "/path/on/host", "/path/on/guest", type: "sshfs"
end

This will start a Fedora 23 base cloud image and will mount the /path/on/host directory from the host into the running vagrant box under the /path/on/guest directory.


We've tried to find the option that is easiest for the user to configure. While SSHFS may have some drawbacks as compared to the others, such as speed, we believe it solves most people's use cases and is dead simple to configure out of the box.

Please give it a try and let us know how it works for you! Drop a mail to or open an issue on Github.


802.11ac on Linux With NetGear A6100 (RTL8811AU) USB Adapter

NOTE: Most of the content from this post comes from a blog post I found that concentrated on getting the driver set up on Fedora 21. I did mostly the same steps with a few tweaks.


Driver support for 802.11ac in Linux is spotty especially if you are using a USB adapter. I picked up the NetGear A6100 that has the Realtek RTL8811AU chip inside of it. Of course, when I plug it in I can see the device, but no support in the kernel I'm using (kernel-4.2.8-200.fc22.x86_64).

On my system I currently have the built in wireless adapter, as well as the USB plugged in. From the output below you can see only one wireless NIC shows up:

# lspci | grep Network
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
03:00.0 Network controller: Intel Corporation Centrino Ultimate-N 6300 (rev 3e)
$ lsusb | grep NetGear
Bus 001 Device 002: ID 0846:9052 NetGear, Inc. A6100 AC600 DB Wireless Adapter [Realtek RTL8811AU]
# ip -o link | grep wlp | cut -d ' ' -f 2
wlp3s0

The wlp3s0 device is my built-in wifi adapter.

The Realtek Driver

You can find the Realtek driver for the RTL8811AU on the websites of several of the vendors that incorporate the chip.

These drivers often vary in version and frequently don't compile on newer kernels, which can be frustrating when you just want something to work.

Getting The RTL8811AU Driver Working

Luckily some people in the community unofficially manage code repositories with fixes to the Realtek driver to allow it to compile on newer kernels. From the blog post I mentioned earlier there is a linked GitHub repository where the Realtek driver is reproduced with some patches on top. This repository makes it pretty easy to get set up and get the USB adapters working on Linux.

NOTE: In case the git repo moves in the future I have copied an archive of it here for posterity. The commit id at HEAD at the time of writing is 9fc227c2987f23a6b2eeedf222526973ed7a9d63.

The first step is to set up DKMS so that you don't have to recompile the kernel module by hand every time a new kernel is installed. To do this, install the following packages and set the dkms service to start on boot:

# dnf install -y dkms kernel-devel-$(uname -r)
# systemctl enable dkms

Next, clone the git repository and observe the version of the driver:

# mkdir /tmp/rtldriver && cd /tmp/rtldriver
# git clone
# cat rtl8812au_rtl8821au/include/rtw_version.h 
#define DRIVERVERSION   "v4.3.14_13455.20150212_BTCOEX20150128-51"
#define BTCOEXVERSION   "BTCOEX20150128-51"

From the output we can see this is the 4.3.14_13455.20150212 version of the driver, which is fairly recent.

Next let's create a directory under /usr/src for the source code to live and copy it into place:

# mkdir /usr/src/8812au-4.3.14_13455.20150212
# cp -R  ./rtl8812au_rtl8821au/* /usr/src/8812au-4.3.14_13455.20150212/
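The versioned directory name simply mirrors the DRIVERVERSION string from rtw_version.h. If you script this step, you could derive the name instead of typing it out. A hypothetical helper, assuming the "v<version>_BTCOEX..." format shown above:

```shell
# Hypothetical helper: derive the /usr/src directory name from the
# DRIVERVERSION line in the driver's include/rtw_version.h, e.g.
#   "v4.3.14_13455.20150212_BTCOEX20150128-51" -> "8812au-4.3.14_13455.20150212"
driver_src_dir() {
    header="$1"
    ver=$(sed -n 's/.*DRIVERVERSION[[:space:]]*"v\([0-9._]*\)_BTCOEX.*/\1/p' "$header")
    echo "8812au-${ver}"
}

# usage: mkdir "/usr/src/$(driver_src_dir rtl8812au_rtl8821au/include/rtw_version.h)"
```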

Next we'll create a dkms.conf file which will tell DKMS how to manage building this module when builds/installs are requested; run man dkms to view more information on these settings:

# cat <<'EOF' > /usr/src/8812au-4.3.14_13455.20150212/dkms.conf
PACKAGE_NAME="8812au"
PACKAGE_VERSION="4.3.14_13455.20150212"
BUILT_MODULE_NAME[0]="8812au"
DEST_MODULE_LOCATION[0]="/kernel/drivers/net/wireless"
MAKE[0]="'make' all KVER=${kernelver}"
CLEAN="'make' clean"
AUTOINSTALL="yes"
EOF

Note one change from the earlier blog post, which is that I include KVER=${kernelver} in the make line. If you don't do this then the Makefile will incorrectly detect the kernel by calling uname, which is wrong when run during a new kernel installation because the new kernel is not yet running. If we didn't do this then every time a new kernel was installed the driver would get compiled for the previous kernel (the one that was running at the time of installation).
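A tiny shell illustration of the pitfall (the new kernel version string here is hypothetical):

```shell
# During a kernel RPM install the *new* kernel is not yet running, so
# `uname -r` still reports the old one. DKMS passes the target version
# in ${kernelver}; forwarding it as KVER builds against the right tree.
running="$(uname -r)"                 # the kernel running right now
kernelver="4.3.3-200.fc22.x86_64"     # hypothetical: kernel DKMS is building for

# What our dkms.conf MAKE line expands to -- targets the new kernel:
echo "make all KVER=${kernelver}"

# What the Makefile would do on its own via uname -- the old kernel:
echo "make all KVER=${running}"
```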

The next step is to add the module to the DKMS system and go ahead and build it:

# dkms add -m 8812au -v 4.3.14_13455.20150212

Creating symlink /var/lib/dkms/8812au/4.3.14_13455.20150212/source -> /usr/src/8812au-4.3.14_13455.20150212

DKMS: add completed.
# dkms build -m 8812au -v 4.3.14_13455.20150212

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
'make' all KVER=4.2.8-200.fc22.x86_64......................
cleaning build area...

DKMS: build completed.

And finally install it:

# dkms install -m 8812au -v 4.3.14_13455.20150212

Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.2.8-200.fc22.x86_64/extra/
Adding any weak-modules


DKMS: install completed.

Now we can load the module and see information about it:

# modprobe 8812au
# modinfo 8812au | head -n 3
filename:       /lib/modules/4.2.8-200.fc22.x86_64/extra/8812au.ko
version:        v4.3.14_13455.20150212_BTCOEX20150128-51
author:         Realtek Semiconductor Corp.

Does the wireless NIC work now? After connecting to an AC-only network, here are the results:

# ip -o link | grep wlp | cut -d ' ' -f 2
wlp3s0
wlp0s20u2
# iwconfig wlp0s20u2
wlp0s20u2  IEEE 802.11AC  ESSID:"random"  Nickname:"<WIFI@REALTEK>"
          Mode:Managed  Frequency:5.26 GHz  Access Point: A8:BB:B7:EE:B6:8D   
          Bit Rate:87 Mb/s   Sensitivity:0/0  
          Retry:off   RTS thr:off   Fragment thr:off
          Encryption key:****-****-****-****-****-****-****-****   Security mode:open
          Power Management:off
          Link Quality=95/100  Signal level=100/100  Noise level=0/100
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:0   Missed beacon:0


Keeping it Working After Kernel Updates

Let's test whether updating the kernel leaves us with a driver built for the new kernel. Before the kernel update:

# tree /var/lib/dkms/8812au/4.3.14_13455.20150212/
/var/lib/dkms/8812au/4.3.14_13455.20150212/
├── 4.2.8-200.fc22.x86_64
│   └── x86_64
│       ├── log
│       │   └── make.log
│       └── module
│           └── 8812au.ko
└── source -> /usr/src/8812au-4.3.14_13455.20150212

5 directories, 2 files

Now let's update the kernel and view the tree again afterwards:

# dnf -y update kernel kernel-devel --enablerepo=updates-testing
  kernel.x86_64 4.3.3-200.fc22
  kernel-core.x86_64 4.3.3-200.fc22
  kernel-devel.x86_64 4.3.3-200.fc22
  kernel-modules.x86_64 4.3.3-200.fc22

# tree /var/lib/dkms/8812au/4.3.14_13455.20150212/
/var/lib/dkms/8812au/4.3.14_13455.20150212/
├── 4.2.8-200.fc22.x86_64
│   └── x86_64
│       ├── log
│       │   └── make.log
│       └── module
│           └── 8812au.ko
├── 4.3.3-200.fc22.x86_64
│   └── x86_64
│       ├── log
│       │   └── make.log
│       └── module
│           └── 8812au.ko
└── source -> /usr/src/8812au-4.3.14_13455.20150212

9 directories, 4 files

And from the log we can verify that the module was built against the right kernel:

# head -n 4 /var/lib/dkms/8812au/4.3.14_13455.20150212/4.3.3-200.fc22.x86_64/x86_64/log/make.log
DKMS make.log for 8812au-4.3.14_13455.20150212 for kernel 4.3.3-200.fc22.x86_64 (x86_64)
Sun Jan 24 19:40:51 EST 2016
make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/4.3.3-200.fc22.x86_64/build M=/var/lib/dkms/8812au/4.3.14_13455.20150212/build  modules
make[1]: Entering directory '/usr/src/kernels/4.3.3-200.fc22.x86_64'
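As a final sanity check after any kernel update, you can confirm that a built copy of the module exists for every installed kernel. A minimal sketch, assuming Fedora's convention of installing DKMS-built modules under /lib/modules/<kernel>/extra/:

```shell
# Report any installed kernels that lack a built copy of the module.
# Returns non-zero if at least one kernel is missing it.
check_dkms_coverage() {
    mod="$1"
    modules_dir="${2:-/lib/modules}"   # overridable for testing
    missing=0
    for k in "$modules_dir"/*/; do
        if [ ! -e "${k}extra/${mod}.ko" ]; then
            echo "missing: ${k}extra/${mod}.ko"
            missing=1
        fi
    done
    return $missing
}

# e.g.: check_dkms_coverage 8812au || echo "try: dkms autoinstall"
```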