Monthly Archive for January, 2016

802.11ac on Linux With NetGear A6100 (RTL8811AU) USB Adapter

NOTE: Most of the content from this post comes from a blog post I found that concentrated on getting the driver set up on Fedora 21. I did mostly the same steps with a few tweaks.


Driver support for 802.11ac in Linux is spotty, especially if you are using a USB adapter. I picked up the NetGear A6100, which has the Realtek RTL8811AU chip inside. When I plug it in I can see the device, but there is no support for it in the kernel I'm running (kernel-4.2.8-200.fc22.x86_64).

On my system I currently have the built-in wireless adapter as well as the USB adapter plugged in. From the output below you can see that only one wireless NIC shows up:

# lspci | grep Network
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
03:00.0 Network controller: Intel Corporation Centrino Ultimate-N 6300 (rev 3e)
$ lsusb | grep NetGear
Bus 001 Device 002: ID 0846:9052 NetGear, Inc. A6100 AC600 DB Wireless Adapter [Realtek RTL8811AU]
# ip -o link | grep wlp | cut -d ' ' -f 2
wlp3s0

wlp3s0 is my built-in wifi device; the USB adapter has no network interface yet.

The Realtek Driver

You can find the Realtek driver for the RTL8811AU on the support sites of several of the vendors that incorporate the chip.

These drivers vary in version and often don't compile on newer kernels, which is frustrating when you just want something to work.

Getting The RTL8811AU Driver Working

Luckily, some people in the community unofficially maintain code repositories with fixes to the Realtek driver that allow it to compile on newer kernels. The blog post I mentioned earlier links to a GitHub repository where the Realtek driver is reproduced with some patches on top. This repository makes it pretty easy to get set up and get the USB adapters working on Linux.

NOTE: In case the git repo moves in the future I have copied an archive of it here for posterity. The commit id at HEAD at the time of this writing is 9fc227c2987f23a6b2eeedf222526973ed7a9d63.

The first step is to set up your system for DKMS so that you don't have to recompile the kernel module every time you install a new kernel. To do this, install the following packages and enable the dkms service to start on boot:

# dnf install -y dkms kernel-devel-$(uname -r)
# systemctl enable dkms

Next, clone the git repository and observe the version of the driver:

# mkdir /tmp/rtldriver && cd /tmp/rtldriver
# git clone
# cat rtl8812au_rtl8821au/include/rtw_version.h 
#define DRIVERVERSION   "v4.3.14_13455.20150212_BTCOEX20150128-51"
#define BTCOEXVERSION   "BTCOEX20150128-51"

From the output we can see this is the 4.3.14_13455.20150212 version of the driver, which is fairly recent.

Next let's create a directory under /usr/src for the source code to live in, and copy the code into place:

# mkdir /usr/src/8812au-4.3.14_13455.20150212
# cp -R  ./rtl8812au_rtl8821au/* /usr/src/8812au-4.3.14_13455.20150212/

Next we'll create a dkms.conf file which will tell DKMS how to manage building this module when builds/installs are requested; run man dkms to view more information on these settings:

# cat <<'EOF' > /usr/src/8812au-4.3.14_13455.20150212/dkms.conf
PACKAGE_NAME="8812au"
PACKAGE_VERSION="4.3.14_13455.20150212"
BUILT_MODULE_NAME[0]="8812au"
DEST_MODULE_LOCATION[0]="/kernel/drivers/net/wireless"
MAKE[0]="'make' all KVER=${kernelver}"
CLEAN="'make' clean"
AUTOINSTALL="yes"
EOF

Note one change from the earlier blog post: I include KVER=${kernelver} on the make line. Without it, the Makefile detects the kernel by calling uname -r, which is wrong when run during a new kernel installation because the new kernel is not yet running. The result would be that every time a new kernel was installed, the driver would get compiled for the previous kernel (the one running at install time).
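
The failure mode is easy to see in isolation. The sketch below (with made-up version strings) mimics the usual driver-Makefile behavior of falling back to uname -r when no KVER is passed on the command line:

```shell
# Mimic a driver Makefile's kernel-version selection: an explicit KVER
# (what DKMS passes via ${kernelver}) must win over the fallback of
# asking uname -r, which reports the *running* kernel.
pick_kver() {
    # $1: the KVER given on the make command line, possibly empty
    if [ -n "$1" ]; then
        echo "$1"
    else
        uname -r
    fi
}

# No override: the build silently targets the running (old) kernel.
pick_kver ""
# DKMS override: the build targets the kernel that was just installed.
pick_kver "4.3.3-200.fc22.x86_64"
```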

The next step is to add the module to the DKMS system and go ahead and build it:

# dkms add -m 8812au -v 4.3.14_13455.20150212

Creating symlink /var/lib/dkms/8812au/4.3.14_13455.20150212/source -> /usr/src/8812au-4.3.14_13455.20150212

DKMS: add completed.
# dkms build -m 8812au -v 4.3.14_13455.20150212

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area...
'make' all KVER=4.2.8-200.fc22.x86_64......................
cleaning build area...

DKMS: build completed.

And finally install it:

# dkms install -m 8812au -v 4.3.14_13455.20150212

Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.2.8-200.fc22.x86_64/extra/
Adding any weak-modules


DKMS: install completed.

Now we can load the module and see information about it:

# modprobe 8812au
# modinfo 8812au | head -n 3
filename:       /lib/modules/4.2.8-200.fc22.x86_64/extra/8812au.ko
version:        v4.3.14_13455.20150212_BTCOEX20150128-51
author:         Realtek Semiconductor Corp.

Does the wireless NIC work now? After connecting to an AC-only network, here are the results:

# ip -o link | grep wlp | cut -d ' ' -f 2
wlp3s0
wlp0s20u2
# iwconfig wlp0s20u2
wlp0s20u2  IEEE 802.11AC  ESSID:"random"  Nickname:"<WIFI@REALTEK>"
          Mode:Managed  Frequency:5.26 GHz  Access Point: A8:BB:B7:EE:B6:8D   
          Bit Rate:87 Mb/s   Sensitivity:0/0  
          Retry:off   RTS thr:off   Fragment thr:off
          Encryption key:****-****-****-****-****-****-****-****   Security mode:open
          Power Management:off
          Link Quality=95/100  Signal level=100/100  Noise level=0/100
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:0   Missed beacon:0


Keeping it Working After Kernel Updates

Let's test whether a kernel update leaves us with an updated driver or not. Before the kernel update:

# tree /var/lib/dkms/8812au/4.3.14_13455.20150212/
├── 4.2.8-200.fc22.x86_64
│   └── x86_64
│       ├── log
│       │   └── make.log
│       └── module
│           └── 8812au.ko
└── source -> /usr/src/8812au-4.3.14_13455.20150212

5 directories, 2 files

Now update the kernel and view the tree afterward:

# dnf -y update kernel kernel-devel --enablerepo=updates-testing
  kernel.x86_64 4.3.3-200.fc22
  kernel-core.x86_64 4.3.3-200.fc22
  kernel-devel.x86_64 4.3.3-200.fc22
  kernel-modules.x86_64 4.3.3-200.fc22

# tree /var/lib/dkms/8812au/4.3.14_13455.20150212/
├── 4.2.8-200.fc22.x86_64
│   └── x86_64
│       ├── log
│       │   └── make.log
│       └── module
│           └── 8812au.ko
├── 4.3.3-200.fc22.x86_64
│   └── x86_64
│       ├── log
│       │   └── make.log
│       └── module
│           └── 8812au.ko
└── source -> /usr/src/8812au-4.3.14_13455.20150212

9 directories, 4 files

And from the log we can verify that the module was built against the right kernel:

# head -n 4 /var/lib/dkms/8812au/4.3.14_13455.20150212/4.3.3-200.fc22.x86_64/x86_64/log/make.log
DKMS make.log for 8812au-4.3.14_13455.20150212 for kernel 4.3.3-200.fc22.x86_64 (x86_64)
Sun Jan 24 19:40:51 EST 2016
make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/4.3.3-200.fc22.x86_64/build M=/var/lib/dkms/8812au/4.3.14_13455.20150212/build  modules
make[1]: Entering directory '/usr/src/kernels/4.3.3-200.fc22.x86_64'
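
As a final spot-check, you can walk /lib/modules and confirm that each installed kernel has a copy of the module; a small sketch (the extra/ location comes from the dkms install output above):

```shell
# For each kernel in a modules tree, report whether the 8812au module
# is present in its extra/ directory (where dkms install puts it).
check_kernels() {
    moddir=$1    # normally /lib/modules
    for d in "$moddir"/*/; do
        [ -d "$d" ] || continue
        k=$(basename "$d")
        if [ -e "${d}extra/8812au.ko" ]; then
            echo "$k: ok"
        else
            echo "$k: MISSING 8812au.ko"
        fi
    done
}

check_kernels /lib/modules
```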


The CentOS CI Infrastructure: A Getting Started Guide


The CentOS community is trying to build an ecosystem that fosters and encourages upstream communities to continuously perform integration testing of their code on the CentOS platform. To that end, the CentOS community has built out an infrastructure that currently contains 256 "bare metal" servers, pooled together to run tests that are orchestrated by a frontend Jenkins instance.

Who Can Use the CentOS CI?

The CentOS CI is primarily targeted at open source projects that use CentOS as a platform in some way. If your project meets those two requirements (it's open source, and it uses CentOS), then check out our Getting Started page and look at the "Asking for your project to be added" section.

What Is Unique About the CentOS CI?

With many of the test infrastructures that exist today, you are given a virtual machine. With the CentOS CI, a test machine is actually a "bare metal" machine, which allows for testing workloads that may not have been possible otherwise. One specific example is virtualization: the RDO and libvirt projects both use the CentOS CI to do testing that wouldn't be possible on an infrastructure that didn't provide bare metal.

The CentOS CI also offers early access to content that will be in an upcoming release of CentOS. If there is a pending release, its content will be available for testing in the CI infrastructure, which allows projects to find bugs early (before release).

I Have Access. Now What?


Now that you have access to the CentOS CI you should have a few things:

  • Username/Password for the Jenkins frontend
  • An API key to use with Duffy
  • A target slave type to be used for your testing

The 2nd item from above is unique. In order to provision the bare metal machines and present them for testing, the CentOS CI uses a service known as Duffy (a REST API). The Jenkins jobs that run must provision machines using Duffy and then execute tests on those machines; in the future there may be a Jenkins plugin that takes care of this for you.

The 3rd item is specific to your project. The slave machines that are contacted from Jenkins have a workspace set up (like a home directory) for your project. These slaves are accessible via SSH, and you can put whatever files you need there to orchestrate your tests. When a command is executed in a Jenkins job, it runs on these slave machines.

What you really want, however, is to run tests on the Duffy instances. For that reason the slave is typically just used to request an instance from Duffy and then ssh into the instance to execute tests.
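
The Duffy exchange itself is plain HTTP. Here's a sketch of how a job might build the request URLs; the host and parameter names are assumptions based on how the Duffy API has historically been documented, so verify them against the current Getting Started docs:

```shell
# Build Duffy REST request URLs (endpoint and parameter names assumed).
DUFFY="http://admin.ci.centos.org:8080"
API_KEY="replace-with-your-key"

duffy_url() {
    # $1: action ("get" requests nodes, "done" returns them);
    # remaining arguments are appended as query parameters.
    action=$1; shift
    url="${DUFFY}/Node/${action}?key=${API_KEY}"
    for p in "$@"; do
        url="${url}&${p}"
    done
    echo "$url"
}

# Request a CentOS 7 x86_64 bare metal node:
#   curl -s "$(duffy_url get ver=7 arch=x86_64)"
# Return the session's nodes once testing is finished:
#   curl -s "$(duffy_url done ssid=SESSION_ID)"
duffy_url get ver=7 arch=x86_64
```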

A Test To Run

Even though we've brought the infrastructure together, you still need to write the tests! The basic requirement is a git repo that can be cloned on the Duffy instance, plus a command to run to kick off the tests.
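
Put another way, once a Duffy node is up, the slave only needs to compose one remote command: clone the repo, then run its test entry point. A network-free sketch (the repo URL is real, but the script name is hypothetical):

```shell
# Compose the command a Jenkins slave would hand to ssh for execution
# on a freshly provisioned Duffy node (test script name hypothetical).
remote_cmd() {
    repo=$1 script=$2
    echo "git clone $repo test && cd test && ./$script"
}

# On the slave this would be run roughly as:
#   ssh -o StrictHostKeyChecking=no root@$DUFFY_NODE "$(remote_cmd ...)"
remote_cmd https://github.com/dustymabe/centos-ci-example run_tests.sh
```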

A very simple example of this is my centos-ci-example repo on GitHub. In this repo the script executes the tests. So for our case we will use the following environment variables when defining our Jenkins job below:


Your First Job: Web Interface

So you have access and you have a git repo that contains a test to run. With your username/password you can log in to the Jenkins frontend and create a new job. To do so, select New Item from the menu on the left-hand side of the screen, then enter a name for your job and select Freestyle Project as shown below:


After clicking OK, the next page that appears is the page for configuring your job. The following items need to be filled in:

  • Check Restrict where this project can be run
    • Enter the label that applies to the environment set up for you

As you can see below, for me this was the atomicapp-shared label.


  • Check Inject environment variables to the build process under Build Environment
    • Populate the environment variables as shown below:


  • Click on the Add Build Step dropdown and select Execute Python Script


  • Populate a python script in the text box
    • This script will be executed on the Jenkins slaves
    • It will provision new machines using Duffy and then execute the test(s)
    • This script can be found on GitHub or here


Now you are all done configuring your job for the first time. Jenkins gives you plenty more options, but for now click Save and then run the job by clicking Build Now. You can view the output by selecting Console Output as shown in the screenshot below:


Your Next Job: Jenkins Job Builder

All of those steps can be done in a more automated fashion with Jenkins Job Builder. For this you must install the jenkins-jobs executable and create a config file that holds the credentials for interfacing with Jenkins:

# yum install -y /usr/bin/jenkins-jobs
# cat <<EOF > jenkins_jobs.ini
[jenkins]
user=username
password=password
url=https://ci.centos.org
EOF

Update the file to have the real user/password in it.

Next you must create a job description:

# cat <<EOF >job.yaml
- job:
    name: dusty-ci-example
    node: atomicapp-shared
    builders:
        - inject:
            properties-content: |
                API_KEY=XXXXXXXX
        - centos-ci-bootstrap

- builder:
    name: centos-ci-bootstrap
    builders:
        - python:
            !include-raw: './'
EOF

Update the file to have the real API_KEY.

The last component is the python script we pasted in before:

# curl >

Now you can run jenkins-jobs and update the job:

# jenkins-jobs --conf jenkins_jobs.ini update job.yaml
INFO:root:Updating jobs in ['job.yaml'] ([])
INFO:jenkins_jobs.local_yaml:Including file './' from path '.'
INFO:jenkins_jobs.builder:Number of jobs generated:  1
INFO:jenkins_jobs.builder:Reconfiguring jenkins job dusty-ci-example
INFO:root:Number of jobs updated: 1
INFO:jenkins_jobs.builder:Cache saved

NOTE: This is all reproduced in the centos-ci-example jjb directory. Cloning the repo and executing the files from there may be a little easier than running the commands above.

After executing all of the steps you should be able to run Build Now on the job, just as before. Take Jenkins Job Builder for a spin and consider it a useful tool for managing your Jenkins jobs.


Hopefully by now you can set up and execute a basic test on the CentOS CI. Come join our community and help us build out the infrastructure and the feature set. Check out the CI Wiki, send us a mail on the mailing list, or ping us in #centos-devel on Freenode.

Happy Testing!

Running Nulecules in OpenShift via oc new-app


As part of the Container Tools team at Red Hat, I'd like to highlight a feature of Atomic App: support for execution via OpenShift's CLI command oc new-app.

The native support for launching Nulecules means that OpenShift users can easily pull from a library of Atomic Apps (Nuleculized applications) that exist in a Docker registry and launch them into OpenShift. Applications packaged as a Nulecule benefit both the packager and the deployer: the packager can deliver one Nulecule to all users that supports many different platforms, and the deployer gets a simplified delivery mechanism, since deploying a Nulecule via Atomic App is easier than managing provider definitions.


OK, let's do a demo. I'll choose the guestbook example application developed by Kubernetes; the Nulecule definition for this example lives here. To start the container in OpenShift via oc new-app, simply run the following command:

# oc new-app projectatomic/guestbookgo-atomicapp --grant-install-rights

This will run a pod using the container image projectatomic/guestbookgo-atomicapp. The Atomic App software inside the image evaluates the Nulecule and communicates with OpenShift in order to bring up the guestbook application, which also leverages redis.

You should now be able to see the redis and guestbook replication controllers and services:

# oc get rc
CONTROLLER     CONTAINER(S)   IMAGE(S)                  SELECTOR                REPLICAS   AGE
guestbook      guestbook      kubernetes/guestbook:v2   app=guestbook           3          3m
redis-master   redis-master   centos/redis              app=redis,role=master   1          3m
redis-slave    redis-slave    centos/redis              app=redis,role=slave    2          3m
# oc get svc
NAME           CLUSTER_IP      EXTERNAL_IP   PORT(S)    SELECTOR                AGE
guestbook                 3000/TCP   app=guestbook           3m
redis-master   <none>        6379/TCP   app=redis,role=master   3m
redis-slave    <none>        6379/TCP   app=redis,role=slave    3m

As well as the pods that are started as a result of the replication controllers:

# oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
guestbook-6gujf      1/1       Running   0          3m
guestbook-m61vq      1/1       Running   0          3m
guestbook-otoz4      1/1       Running   0          3m
redis-master-wdl80   1/1       Running   0          3m
redis-slave-fbapw    1/1       Running   0          3m
redis-slave-oizwb    1/1       Running   0          3m
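
If you're scripting a check like this, the pod state can be counted straight from the oc get pods output; here's a small parsing sketch (the sample text is a trimmed copy of the listing above):

```shell
# Count pods whose name starts with a prefix and whose status is
# Running, reading `oc get pods`-style output on stdin.
count_running() {
    prefix=$1
    grep "^${prefix}" | grep -c 'Running'
}

# Sample input in the same shape as the listing above:
pods='NAME                 READY     STATUS    RESTARTS   AGE
guestbook-6gujf      1/1       Running   0          3m
guestbook-m61vq      1/1       Running   0          3m
redis-master-wdl80   1/1       Running   0          3m'

# Typical real use: oc get pods | count_running guestbook
printf '%s\n' "$pods" | count_running guestbook
```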

If you have access to the instance where the pods are running, you can reach the application via a node IP and NodePort; however, this is not common in a hosted environment. There you need to expose the service via an OpenShift route:

# oc expose service guestbook
route "guestbook" exposed
# oc get route guestbook
NAME        HOST/PORT                                       PATH      SERVICE     LABELS          INSECURE POLICY   TLS TERMINATION
guestbook             guestbook   app=guestbook

Now you should be able to access the guestbook service via the hostname provided in the route. A quick visit with Firefox gives us:


And we have a guestbook up and running!

Give oc new-app on an Atomic App a spin and send us some feedback.


Archived-At Email Header From Mailman 3 Lists

By now most Fedora email lists have been migrated to Mailman 3. One little (but killer) new feature I recently discovered is that Mailman 3 includes the RFC 5064 Archived-At header in the emails it sends.

This is a feature I have wanted for a really long time. Being able to find an email in your inbox and paste a link to it for anyone, without having to hunt down the message in the online archive, saves a lot of time and cuts latency when chatting on IRC or some other form of real-time communication.

OK, how do I easily get this information out of the header and use it without having to view the email source every time?

I use Thunderbird for most of my email viewing (sometimes mutt for composing), and Thunderbird has an Archived-At plugin. After installing it you can right-click on any message from a capable mailing list and select "Copy Archived-At URL".
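
If you'd rather stay on the command line, the header is trivial to pull out of a saved message; here's a sed sketch on a made-up raw message (the angle-bracket form around the URL is what RFC 5064 specifies):

```shell
# Extract the Archived-At URL from a raw message. The message below is
# fabricated; the angle-bracket form is defined by RFC 5064.
msg='From: someone@example.com
Subject: test message
Archived-At: <https://lists.example.org/archives/list/msg/ABCDEF/>

Hello, list!'

printf '%s\n' "$msg" | sed -n 's/^Archived-At: <\(.*\)>$/\1/p'
```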

Now you are off to share that URL with friends :)

Happy New Year!