Archive for the 'Fedora' Category

Matching Fedora OSTree Released Content With Each 2 Week Atomic Release

Cross posted with this Project Atomic Blog post

TL;DR: The default Fedora cadence for updates in the RPM streams is once a day. Until now, the OSTree-based updates cadence has matched this, but we're changing the default OSTree update stream to match the Fedora Atomic Host image release cadence (once every two weeks).

---

In Fedora we release a new Atomic Host approximately every two weeks. In the past this has meant that we bless and ship new ISO, QCOW, and Vagrant images that can then be used to install and/or start a new Atomic Host server. But what if you already have an Atomic Host server up and running?

Servers that are already running are configured to get their updates directly from the OSTree repo that sits on Fedora Infrastructure servers. The client asks, "What is the newest commit for my branch/ref?", and the server kindly replies with the most recent commit. If the client is at an older version, it pulls the newer commit and applies the update.
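You can see this conversation for yourself: ostree can list the refs a configured remote offers. A hedged example run from an Atomic Host (the ref list shown is illustrative):

-bash-4.3# ostree remote refs fedora-atomic
fedora-atomic:fedora-atomic/25/x86_64/docker-host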

This is exactly how the client is supposed to behave, but one problem with the way we have been doing things is that we have been updating everyone's branch/ref every night when we do our update runs in Fedora.

This has the benefit that users get content as soon as it has been created, but it also means that the two week release process, where we perform testing and validation, means nothing for these users: they get the content before we have ever tested it.

We have decided to slow down the cadence of the fedora-atomic/25/x86_64/docker-host ref within the OSTree repo to match the exact releases that we do for the two week release process. Users will be able to track this ref like they always have, but it will only update when we do a release, approximately every two weeks.

We have also decided to create a new ref that will get updated every night, so that we can still do our testing. This ref will be called fedora-atomic/25/x86_64/updates/docker-host. If you want to keep following the content as soon as it is created you can rebase to this branch/ref at any time using:

# rpm-ostree rebase fedora-atomic/25/x86_64/updates/docker-host

As an example, let's say that we have a Fedora Atomic Host which is on the default ref. That ref will now be updated every two weeks, and only every two weeks:

-bash-4.3# date
Fri Feb 10 21:05:27 UTC 2017

-bash-4.3# rpm-ostree status
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.51 (2017-01-30 20:09:59)
        Commit: f294635a1dc62d9ae52151a5fa897085cac8eaa601c52e9a4bc376e9ecee11dd
        OSName: fedora-atomic

-bash-4.3# rpm-ostree upgrade
Updating from: fedora-atomic:fedora-atomic/25/x86_64/docker-host
1 metadata, 0 content objects fetched; 569 B transferred in 1 seconds
No upgrade available.

If you want the daily OSTree updates instead, as you previously had, you need to switch to the _updates_ ref:

-bash-4.3# rpm-ostree rebase --reboot fedora-atomic/25/x86_64/updates/docker-host

812 metadata, 3580 content objects fetched; 205114 KiB transferred in 151 seconds
Copying /etc changes: 24 modified, 0 removed, 54 added
Connection to 192.168.121.128 closed by remote host.
Connection to 192.168.121.128 closed.

[laptop]$ ssh fedora@192.168.121.128
[fedora@cloudhost ~]$ sudo su -
-bash-4.3# rpm-ostree status
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/updates/docker-host
       Version: 25.55 (2017-02-10 13:59:37)
        Commit: 38934958d9654721238947458adf3e44ea1ac1384a5f208b26e37e18b28ec7cf
        OSName: fedora-atomic

  fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.51 (2017-01-30 20:09:59)
        Commit: f294635a1dc62d9ae52151a5fa897085cac8eaa601c52e9a4bc376e9ecee11dd
        OSName: fedora-atomic

We hope you are enjoying using Fedora Atomic Host. Please share your success or horror stories with us on the mailing lists or in IRC: #atomic or #fedora-cloud on Freenode.

Cheers!

The Fedora Atomic Team

Fedora BTRFS+Snapper – The Fedora 25 Edition

History

I'm back again with the Fedora 25 edition of my Fedora BTRFS+Snapper series. As you know, in the past I have configured my computers to be able to snapshot and rollback the entire system by leveraging BTRFS snapshots, a tool called snapper, and a patched version of Fedora's grub2 package. I have updated the patchset (patches taken from SUSE) for Fedora 25's version of grub and the results are available in this git repo.

This setup is not new. I have fully documented the steps I took in the past for my Fedora 22 systems in two blog posts: part1 and part2. This is a condensed continuation of those posts for Fedora 25.

Setting up System with LUKS + LVM + BTRFS

The manual steps for setting up the system are detailed in the part1 blog post from Fedora 22. This time around I have created a script that will quickly configure the system with LUKS + LVM + BTRFS. The script will need to be run in an Anaconda environment just like the manual steps were done in part1 last time.

You can easily enable ssh access to your Anaconda booted machine by adding inst.sshd to the kernel command line arguments. After booting up you can scp the script over and then execute it to build the system. Please read over the script and modify it to your liking.
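Here's a hedged sketch of that flow; the IP address and script name below are placeholders, not the real ones:

[laptop]$ scp btrfs-setup.sh root@192.168.122.50:/tmp/
[laptop]$ ssh root@192.168.122.50 /bin/bash /tmp/btrfs-setup.sh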

Alternatively, for an automated install I have embedded that same script into a kickstart file that you can use. The kickstart file doesn't really leverage Anaconda at all because it simply runs a %pre script and then reboots the box. It's essentially telling Anaconda to run a bash script, just in an automated way. None of the kickstart directives at the top of the kickstart file actually get used.

Installing and Configuring Snapper

After the system has booted for the first time, let's configure it for doing snapshots. I still want to be able to track how much space each snapshot takes, so I'll go ahead and enable quota support on BTRFS. I covered how to do this in a previous post:

[root@localhost ~]# btrfs quota enable /
[root@localhost ~]# btrfs qgroup show /
qgroupid         rfer         excl 
--------         ----         ---- 
0/5         999.80MiB    999.80MiB

Next up is installing/configuring snapper. I am also going to install the dnf plugin for snapper so that rpm transactions will automatically get snapshotted:

[root@localhost ~]# dnf install -y snapper python3-dnf-plugins-extras-snapper
...
Complete!
[root@localhost ~]# snapper --config=root create-config /
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |         
[root@localhost ~]# snapper list-configs
Config | Subvolume
-------+----------
root   | /        
[root@localhost ~]# btrfs subvolume list /
ID 260 gen 44 top level 5 path .snapshots

So we used the snapper command to create a configuration for the BTRFS filesystem mounted at /. As part of this process, we can see from the btrfs subvolume list / command that snapper also created a .snapshots subvolume. This subvolume will be used to house the COW snapshots that are taken of the system.

Next, we'll add an entry to fstab so that regardless of what subvolume we are actually booted in we will always be able to view the .snapshots subvolume and all nested subvolumes (snapshots):

[root@localhost ~]# echo '/dev/vgroot/lvroot /.snapshots btrfs subvol=.snapshots 0 0' >> /etc/fstab
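With that entry in place we can mount it right away too (it will be mounted automatically on future boots); a small hedged step:

[root@localhost ~]# mount /.snapshots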

Taking Snapshots

OK, now that we have snapper installed and the .snapshots subvolume in /etc/fstab we can start creating snapshots:

[root@localhost ~]# btrfs subvolume get-default /
ID 5 (FS_TREE)
[root@localhost ~]# snapper create --description "BigBang"
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description | Userdata
-------+---+-------+---------------------------------+------+---------+-------------+---------
single | 0 |       |                                 | root |         | current     |         
single | 1 |       | Mon 13 Feb 2017 12:50:51 AM UTC | root |         | BigBang     |         
[root@localhost ~]# btrfs subvolume list /
ID 260 gen 48 top level 5 path .snapshots
ID 261 gen 48 top level 260 path .snapshots/1/snapshot
[root@localhost ~]# ls /.snapshots/1/snapshot/
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We made our first snapshot called BigBang and then ran btrfs subvolume list / to verify that a new snapshot was actually created. Notice that at the top of this section we ran btrfs subvolume get-default /, which outputs the currently set default subvolume for the BTRFS filesystem. Right now we are booted into the root subvolume, but that will change as soon as we decide we want to use one of the snapshots for rollback.

Since we took a snapshot let's go ahead and make some changes to the system by updating the kernel:

[root@localhost ~]# dnf update -y kernel
...
Complete!
[root@localhost ~]# rpm -q kernel
kernel-4.8.6-300.fc25.x86_64
kernel-4.9.8-201.fc25.x86_64
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                            | User | Cleanup | Description                   | Userdata
-------+---+-------+---------------------------------+------+---------+-------------------------------+---------
single | 0 |       |                                 | root |         | current                       |         
single | 1 |       | Mon 13 Feb 2017 12:50:51 AM UTC | root |         | BigBang                       |         
single | 2 |       | Mon 13 Feb 2017 12:52:38 AM UTC | root | number  | /usr/bin/dnf update -y kernel |

So we updated the kernel and the snapper dnf plugin automatically created a snapshot for us. Let's reboot the system and see if the new kernel boots properly:

[root@localhost ~]# reboot 
...
[dustymabe@media ~]$ ssh root@192.168.122.177
Warning: Permanently added '192.168.122.177' (ECDSA) to the list of known hosts.
root@192.168.122.177's password: 
Last login: Mon Feb 13 00:41:40 2017 from 192.168.122.1
[root@localhost ~]# 
[root@localhost ~]# uname -r
4.9.8-201.fc25.x86_64

Rolling Back

Say we don't like that new kernel. Let's go back to the earlier snapshot we made:

[root@localhost ~]# snapper rollback 1
Creating read-only snapshot of current system. (Snapshot 3.)
Creating read-write snapshot of snapshot 1. (Snapshot 4.)
Setting default subvolume to snapshot 4.
[root@localhost ~]# reboot

snapper created a read-only snapshot of the current system and then a new read-write subvolume based on the snapshot we wanted to go back to. It then set the default subvolume to the newly created read-write subvolume. After reboot you'll be in the newly created read-write subvolume and exactly back in the state your system was in at the time the snapshot was created.

In our case, after reboot we should now be booted into snapshot 4 as indicated by the output of the snapper rollback command above and we should be able to inspect information about all of the snapshots on the system:

[root@localhost ~]# btrfs subvolume get-default /
ID 264 gen 66 top level 260 path .snapshots/4/snapshot
[root@localhost ~]# snapper ls
Type   | # | Pre # | Date                     | User | Cleanup | Description                   | Userdata
-------+---+-------+--------------------------+------+---------+-------------------------------+---------
single | 0 |       |                          | root |         | current                       |         
single | 1 |       | Mon Feb 13 00:50:51 2017 | root |         | BigBang                       |         
single | 2 |       | Mon Feb 13 00:52:38 2017 | root | number  | /usr/bin/dnf update -y kernel |         
single | 3 |       | Mon Feb 13 00:56:13 2017 | root |         |                               |         
single | 4 |       | Mon Feb 13 00:56:13 2017 | root |         |                               |         
[root@localhost ~]# ls /.snapshots/
1  2  3  4
[root@localhost ~]# btrfs subvolume list /
ID 260 gen 67 top level 5 path .snapshots
ID 261 gen 61 top level 260 path .snapshots/1/snapshot
ID 262 gen 53 top level 260 path .snapshots/2/snapshot
ID 263 gen 60 top level 260 path .snapshots/3/snapshot
ID 264 gen 67 top level 260 path .snapshots/4/snapshot

And the big test is to see if the change we made to the system was actually reverted:

[root@localhost ~]# uname -r 
4.8.6-300.fc25.x86_64
[root@localhost ~]# rpm -q kernel
kernel-4.8.6-300.fc25.x86_64
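One last hedged tip: once you're satisfied with the rolled-back state, snapper can delete the snapshots you no longer need to reclaim space (double-check snapper ls first; the numbers below are from this walkthrough):

[root@localhost ~]# snapper delete 2 3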

Enjoy!

Dusty

Installing an OpenShift Origin Cluster on Fedora 25 Atomic Host: Part 2

Cross posted with this Project Atomic Blog post

Introduction

In part 1 of this series we used the OpenShift Ansible Installer to install OpenShift Origin on three servers that were running Fedora 25 Atomic Host. The three machines we'll be using have the following roles and IP address configurations:

+-------------+----------------+--------------+
|     Role    |   Public IPv4  | Private IPv4 |
+=============+================+==============+
| master,etcd | 54.175.0.44    | 10.0.173.101 |
+-------------+----------------+--------------+
|    worker   | 52.91.115.81   | 10.0.156.20  |
+-------------+----------------+--------------+
|    worker   | 54.204.208.138 | 10.0.251.101 |
+-------------+----------------+--------------+

In this blog, we'll explore the installed Origin cluster and then launch an application to see if everything works.

The Installed Origin Cluster

With the cluster up and running, we can log in as admin to the master node via the oc command. To install the oc CLI on your machine, you can follow these instructions or, on Fedora, you can install it via dnf install origin-clients. For this demo, we have the origin-clients-1.3.1-1.fc25.x86_64 rpm installed:

$ oc login --insecure-skip-tls-verify -u admin -p OriginAdmin https://54.175.0.44:8443
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-system
    logging
    management-infra
    openshift
    openshift-infra

Using project "default".
Welcome! See 'oc help' to get started.

NOTE: --insecure-skip-tls-verify was added because we do not have properly signed certificates. See the docs for installing a custom signed certificate.

After we log in we can see that we are using the default namespace. Let's see what nodes exist:

$ oc get nodes
NAME           STATUS                     AGE
10.0.156.20    Ready                      9h
10.0.173.101   Ready,SchedulingDisabled   9h
10.0.251.101   Ready                      9h

The nodes represent each of the servers that are part of the Origin cluster. The name of each node corresponds with its private IPv4 address. Also note that 10.0.173.101 is the private IP address of the master,etcd node and that its status contains SchedulingDisabled. This is because we specified openshift_schedulable=false for this node when we did the install in part 1.
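If you want to double-check that from the API side, the node object carries an unschedulable flag in its spec; a hedged way to look at it (output illustrative):

$ oc get node 10.0.173.101 -o yaml | grep unschedulable
  unschedulable: true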

Now let's check the pods, services, and routes that are running in the default namespace:

$ oc get pods -o wide 
NAME                       READY     STATUS    RESTARTS   AGE       IP             NODE
docker-registry-3-hgwfr    1/1       Running   0          9h        10.129.0.3     10.0.156.20
registry-console-1-q48xn   1/1       Running   0          9h        10.129.0.2     10.0.156.20
router-1-nwjyj             1/1       Running   0          9h        10.0.156.20    10.0.156.20
router-1-o6n4a             1/1       Running   0          9h        10.0.251.101   10.0.251.101
$ 
$ oc get svc
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry    172.30.2.89      <none>        5000/TCP                  9h
kubernetes         172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     9h
registry-console   172.30.147.190   <none>        9000/TCP                  9h
router             172.30.217.187   <none>        80/TCP,443/TCP,1936/TCP   9h
$ 
$ oc get routes
NAME               HOST/PORT                                        PATH      SERVICES           PORT               TERMINATION
docker-registry    docker-registry-default.54.204.208.138.xip.io              docker-registry    5000-tcp           passthrough
registry-console   registry-console-default.54.204.208.138.xip.io             registry-console   registry-console   passthrough

NOTE: If there are any pods that have failed to run you can try to debug them with the oc status -v and oc describe pod/<podname> commands. You can retry any failed deployments with the oc deploy <deploymentname> --retry command.

We can see that we have a pod, service, and route for both a docker-registry and a registry-console. The docker registry is where any container builds within OpenShift will be pushed and the registry console is a web frontend interface for the registry.

Notice that there are two router pods and they are running on two different nodes: the worker nodes. We can effectively send traffic to either of these nodes and it will get routed appropriately. For our install we elected to set openshift_master_default_subdomain to 54.204.208.138.xip.io. With that setting we are only directing traffic to one of the worker nodes. Alternatively, we could have configured this as a hostname that was load balanced and/or performed round robin to either worker node.

Now that we have explored the install, let's try logging in as admin to the OpenShift web console at https://54.175.0.44:8443:

image

And after we've logged in, we see the list of projects that the admin user has access to:

image

We then select the default project and can view the same applications that we looked at before using the oc command:

image

At the top, there is the registry console. Let's try out accessing the registry console by clicking the https://registry-console-default.54.204.208.138.xip.io/ link in the top right. Note that this is the link from the exposed route:

image

We can log in with the same admin/OriginAdmin credentials that we used to log in to the OpenShift web console.

image

After logging in, there are links to each project so we can see images that belong to each project, and we see recently pushed images.

And... we're done! We have poked around the infrastructure of the installed Origin cluster a bit. We've seen registry pods, router pods, and accessed the registry web console frontend. Next we'll get fancy and throw an example application onto the platform as the user user.

Running an Application as a Normal User

Now that we've observed some of the more admin like items using the admin user's account, we'll give the normal user a spin. First, we'll log in:

$ oc login --insecure-skip-tls-verify -u user -p OriginUser https://54.175.0.44:8443                                                                                        
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

After we log in as a normal user, the CLI tools recognize pretty quickly that this user has no projects and no applications running. The CLI tools give us some helpful clues as to what we should do next: create a new project. Let's create a new project called myproject:

$ oc new-project myproject
Now using project "myproject" on server "https://54.175.0.44:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

After creating the new project the CLI tools again give us some helpful text showing us how to get started with a new application on the platform. It is telling us to try out the ruby application with source code at github.com/openshift/ruby-ex.git and build it on top of the Source-to-Image (or S2I) image known as centos/ruby-22-centos7. Might as well give it a spin:

$ oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
--> Found Docker image ecd5025 (10 hours old) from Docker Hub for "centos/ruby-22-centos7"

    Ruby 2.2 
    -------- 
    Platform for building and running Ruby 2.2 applications

    Tags: builder, ruby, ruby22

    * An image stream will be created as "ruby-22-centos7:latest" that will track the source image
    * A source build using source code from https://github.com/openshift/ruby-ex.git will be created
      * The resulting image will be pushed to image stream "ruby-ex:latest"
      * Every time "ruby-22-centos7:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "ruby-ex"
    * Port 8080/tcp will be load balanced by service "ruby-ex"
      * Other containers can access this service through the hostname "ruby-ex"

--> Creating resources with label app=ruby-ex ...
    imagestream "ruby-22-centos7" created
    imagestream "ruby-ex" created
    buildconfig "ruby-ex" created
    deploymentconfig "ruby-ex" created
    service "ruby-ex" created
--> Success
    Build scheduled, use 'oc logs -f bc/ruby-ex' to track its progress.
    Run 'oc status' to view your app.

Let's take a moment to digest that. A new image stream was created to track the upstream ruby-22-centos7:latest image. A ruby-ex buildconfig was created that will perform an S2I build that will bake the source code into the image from the ruby-22-centos7 image stream. The resulting image will be the source for another image stream known as ruby-ex. A deploymentconfig was created to deploy the application into pods once the build is done. Finally, a ruby-ex service was created so the application can be load balanced and discoverable.
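If you want to watch the build while it runs, the CLI output above already suggested the command; following the build logs looks like this:

$ oc logs -f bc/ruby-ex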

After a short time, we check the status of the application:

$ oc status 
In project myproject on server https://54.175.0.44:8443

svc/ruby-ex - 172.30.213.94:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest 
      build #1 running for 26 seconds
    deployment #1 waiting on image or update

1 warning identified, use 'oc status -v' to see details.

NOTE: The warning referred to in the output is a warning about there being no healthcheck defined for this service. You can view the text of this warning by running oc status -v.

We can see here that there is a svc (service) that is associated with a dc (deploymentconfig) that is associated with a bc (buildconfig) that has a build that has been running for 26 seconds. The deployment is waiting for the build to finish before attempting to run.

After some more time:

$ oc status 
In project myproject on server https://54.175.0.44:8443

svc/ruby-ex - 172.30.213.94:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest 
    deployment #1 running for 6 seconds

1 warning identified, use 'oc status -v' to see details.

The build is now done and the deployment is running.

And after more time:

$ oc status 
In project myproject on server https://54.175.0.44:8443

svc/ruby-ex - 172.30.213.94:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest 
    deployment #1 deployed about a minute ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

We have an app! What are the running pods in this project?:

$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
ruby-ex-1-build   0/1       Completed   0          13m
ruby-ex-1-mo3lb   1/1       Running     0          11m

The build has Completed and the ruby-ex-1-mo3lb pod is Running. The only thing we have left to do is expose the service so that it can be accessed via the router from the outside world:

$ oc expose svc/ruby-ex
route "ruby-ex" exposed
$ oc get route/ruby-ex
NAME      HOST/PORT                                 PATH      SERVICES   PORT       TERMINATION
ruby-ex   ruby-ex-myproject.54.204.208.138.xip.io             ruby-ex    8080-tcp   
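As a quick sanity check before opening a browser, you could curl the exposed route; a hedged sketch (status line illustrative):

$ curl -sI http://ruby-ex-myproject.54.204.208.138.xip.io | head -1
HTTP/1.1 200 OK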

With the route exposed we should now be able to access the application at ruby-ex-myproject.54.204.208.138.xip.io. Before we do that we'll log in to the OpenShift console as the user user and view the running pods in project myproject:

image

And pointing the browser to ruby-ex-myproject.54.204.208.138.xip.io we see:

image

Woot!

Conclusion

We have explored the basic OpenShift Origin cluster that we set up in part 1 of this two part blog series. We viewed the infrastructure docker registry and router components and discussed how they are set up. We also ran through an example application that was suggested to us by the command line tools, and were able to define that application, monitor its progress, and eventually access it from our web browser. Hopefully this blog gives the reader an idea or two about how they can get started with setting up and using an Origin cluster on Fedora 25 Atomic Host.

Enjoy!
Dusty

Installing an OpenShift Origin Cluster on Fedora 25 Atomic Host: Part 1

Cross posted with this Project Atomic Blog post

Introduction

OpenShift Origin is the upstream project that builds on top of the Kubernetes platform and feeds into the OpenShift Container Platform product that is available from Red Hat today. Origin is a great way to get started with Kubernetes, and what better place to run a container orchestration layer than on top of Fedora Atomic Host?

We recently released Fedora 25, along with the first biweekly release of Fedora 25 Atomic Host. This blog post will show you the basics for getting a production installation of Origin running on Fedora 25 Atomic Host using the OpenShift Ansible Installer. The OpenShift Ansible installer will allow you to install a production-worthy OpenShift cluster. If you'd like to just try out OpenShift on a single node instead, you can set up OpenShift with the oc cluster up command, which we will detail in a later blog post.

This first post will cover just the installation. In a later blog post we'll take the system we just installed for a spin and make sure everything is working as expected.

Environment

We've tried to make this setup as generic as possible. In this case we will be targeting three generic servers that are running Fedora 25 Atomic Host. As is common with cloud environments these servers each have an "internal" private address that can't be accessed from the internet, and a public NATed address that can be accessed from the outside. Here is the identifying information for the three servers:

+-------------+----------------+--------------+
|     Role    |   Public IPv4  | Private IPv4 |
+=============+================+==============+
| master,etcd | 54.175.0.44    | 10.0.173.101 |
+-------------+----------------+--------------+
|    worker   | 52.91.115.81   | 10.0.156.20  |
+-------------+----------------+--------------+
|    worker   | 54.204.208.138 | 10.0.251.101 |
+-------------+----------------+--------------+

NOTE In a real production setup we would want multiple master nodes and multiple etcd nodes, closer to what is shown in the installation docs.

As you can see from the table we've marked one of the nodes as the master and the other two as what we're calling worker nodes. The master node will run the api server, scheduler, and controller manager. We'll also run etcd on it. Since we want to make sure we don't starve the node running etcd, we'll mark the master node as unschedulable so that application containers don't get scheduled to run on it.

The other two nodes, the worker nodes, will have the proxy and the kubelet running on them; this is where the containers (inside of pods) will get scheduled to run. We'll also tell the installer to run a registry and an HAProxy router on the two worker nodes so that we can perform builds as well as access our services from the outside world via HAProxy.

The Installer

OpenShift Origin uses Ansible to manage the installation of different nodes in a cluster. The code for this is aggregated in the OpenShift Ansible Installer on GitHub. Additionally, to run the installer we'll need Ansible installed on our workstation or laptop.

NOTE At this time Ansible 2.2 or greater is REQUIRED.

We already have Ansible 2.2 installed so we can skip to cloning the repo:

$ git clone https://github.com/openshift/openshift-ansible.git &>/dev/null
$ cd openshift-ansible/
$ git checkout 734b9ae199bd585d24c5131f3403345fe88fe5e6
Previous HEAD position was 6d2a272... Merge pull request #2884 from sdodson/image-stream-sync
HEAD is now at 734b9ae... Merge pull request #2876 from dustymabe/dusty-fix-etcd-selinux

To make this blog post reproducible, we are specifically checking out commit 734b9ae199bd585d24c5131f3403345fe88fe5e6, since the OpenShift Ansible project is fast-moving. These instructions will probably work on the latest master, but you may hit a bug, in which case you should open an issue.

Now that we have the installer we can create an inventory file called myinventory in the same directory as the git repo. This inventory file can be anywhere, but for this install we'll place it there.

Using the IP information from the table above we create the following inventory file:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=fedora
ansible_become=true
deployment_type=origin
containerized=true
openshift_release=v1.3.1
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
openshift_master_default_subdomain=54.204.208.138.xip.io

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}

# host group for masters
[masters]
54.175.0.44 openshift_public_hostname=54.175.0.44 openshift_hostname=10.0.173.101

# host group for etcd, should run on a node that is not schedulable
[etcd]
54.175.0.44

# host group for worker nodes, we list master node here so that
# openshift-sdn gets installed. We mark the master node as not
# schedulable.
[nodes]
54.175.0.44    openshift_hostname=10.0.173.101 openshift_schedulable=false
52.91.115.81   openshift_hostname=10.0.156.20  openshift_node_labels="{'router':'true','registry':'true'}"
54.204.208.138 openshift_hostname=10.0.251.101 openshift_node_labels="{'router':'true','registry':'true'}"

Well that is quite a bit to digest, isn't it? Don't worry, we'll break down this file in detail.

Details of the Inventory File

OK, so how did we create this inventory file? We started with the docs and copied one of the examples from there. The type of install we are doing is called a BYO (Bring Your Own) install because we are bringing our own servers rather than having the installer contact a cloud provider to bring up the infrastructure for us. For reference there is also a much more detailed BYO inventory file that you can study.

So let's break down our inventory file. First we have the OSEv3 group and list the hosts in the masters, nodes, and etcd groups as children of that group:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

Then we set a bunch of variables for that group:

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=fedora
ansible_become=true
deployment_type=origin
containerized=true
openshift_release=v1.3.1
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
openshift_master_default_subdomain=54.204.208.138.xip.io

Let's run through each of them:

  • ansible_user=fedora - fedora is the user that you use to connect to Fedora 25 Atomic Host.
  • ansible_become=true - We want the installer to sudo when running commands.
  • deployment_type=origin - Run OpenShift Origin.
  • containerized=true - Run Origin from containers.
  • openshift_release=v1.3.1 - The version of Origin to run.
  • openshift_router_selector='router=true' - Set it so that any nodes that have this label applied to them will run a router by default.
  • openshift_registry_selector='registry=true' - Set it so that any nodes that have this label applied to them will run a registry by default.
  • openshift_master_default_subdomain=54.204.208.138.xip.io - This setting is used to tell OpenShift what subdomain to apply to routes that are created when exposing services to the outside world.

Whew ... quite a bit to run through there! Most of them are relatively self-explanatory, but openshift_master_default_subdomain might need a little more explanation. Basically, its value needs to be a Wildcard DNS Record so that any domain can be prefixed onto the front of the record and it will still resolve to the same IP address. We have decided to use a free service called xip.io so that we don't have to set up wildcard DNS just for this example.

So for our example, a domain like app1.54.204.208.138.xip.io will resolve to IP address 54.204.208.138. A domain like app2.54.204.208.138.xip.io will also resolve to that same address. These requests will come in to node 54.204.208.138, which is one of our worker nodes where a router (HAProxy) is running. HAProxy will route the traffic based on the domain used (app1 vs app2, etc) to the appropriate service within OpenShift.
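You can verify the wildcard behavior yourself with dig; any prefix should resolve to the same worker node address (output illustrative):

$ dig +short app1.54.204.208.138.xip.io
54.204.208.138
$ dig +short app2.54.204.208.138.xip.io
54.204.208.138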

OK, next up in our inventory file we have some auth settings:

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}

You can use a multitude of authentication providers with OpenShift. The above statements say that we want to use htpasswd for authentication and we want to create two users. The password for the admin user is OriginAdmin, while the password for the user user is OriginUser. We generated these passwords by running htpasswd on the command line like so:

$ htpasswd -bc /dev/stdout admin OriginAdmin
Adding password for admin user
admin:$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1
$ htpasswd -bc /dev/stdout user OriginUser
Adding password for user user
user:$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50

OK, now on to the host groups. First up, our master nodes:

# host group for masters
[masters]
54.175.0.44 openshift_public_hostname=54.175.0.44 openshift_hostname=10.0.173.101

We have used 54.175.0.44 as the hostname and also set openshift_public_hostname to this same value so that certificates will use that hostname rather than a detected hostname. We're also setting openshift_hostname=10.0.173.101 because there is a bug where the golang resolver can't resolve *.ec2.internal addresses. This is also documented as an issue against Origin. Once this bug is resolved, you won't have to set openshift_hostname.

Next up we have the etcd host group. We're simply re-using the master node for a single etcd node. In a production deployment, we'd have several:

# host group for etcd, should run on a node that is not schedulable
[etcd]
54.175.0.44

Finally, we have our worker nodes:

# host group for worker nodes, we list master node here so that
# openshift-sdn gets installed. We mark the master node as not
# schedulable.
[nodes]
54.175.0.44    openshift_hostname=10.0.173.101 openshift_schedulable=false
52.91.115.81   openshift_hostname=10.0.156.20  openshift_node_labels="{'router':'true','registry':'true'}"
54.204.208.138 openshift_hostname=10.0.251.101 openshift_node_labels="{'router':'true','registry':'true'}"

We include the master node in this group so that the openshift-sdn will get installed and run there. However, we do set the master node as openshift_schedulable=false because it is running etcd. The last two nodes are our worker nodes and we have also added the router=true and registry=true node labels to them so that the registry and the router will run on them.
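Once the install finishes, a quick hedged way to confirm those labels landed is to list the nodes along with their labels (run this after the cluster is up):

$ oc get nodes --show-labels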

Executing the Installer

Now that we have the installer code and the inventory file named myinventory in the same directory, let's see if we can ping our hosts and check their state:

$ ansible -i myinventory nodes -a '/usr/bin/rpm-ostree status'
54.175.0.44 | SUCCESS | rc=0 >>
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.42 (2016-11-16 10:26:30)
        Commit: c91f4c671a6a1f6770a0f186398f256abf40b2a91562bb2880285df4f574cde4
        OSName: fedora-atomic

54.204.208.138 | SUCCESS | rc=0 >>
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.42 (2016-11-16 10:26:30)
        Commit: c91f4c671a6a1f6770a0f186398f256abf40b2a91562bb2880285df4f574cde4
        OSName: fedora-atomic

52.91.115.81 | SUCCESS | rc=0 >>
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.42 (2016-11-16 10:26:30)
        Commit: c91f4c671a6a1f6770a0f186398f256abf40b2a91562bb2880285df4f574cde4
        OSName: fedora-atomic

Looks like they are up and all at the same state. The next step is to unleash the installer. Before we do, we should note that Fedora has moved to python3 by default. While Atomic Host still has python2 installed for legacy package support, not all of the modules needed by the installer are available for python2 on Atomic Host. Thus, we'll forge ahead and use python3 as the interpreter for ansible by specifying -e 'ansible_python_interpreter=/usr/bin/python3' on the command line:

$ ansible-playbook -i myinventory playbooks/byo/config.yml -e 'ansible_python_interpreter=/usr/bin/python3'
Using /etc/ansible/ansible.cfg as config file
....
....
PLAY RECAP *********************************************************************
52.91.115.81               : ok=162  changed=49   unreachable=0    failed=0   
54.175.0.44                : ok=540  changed=150  unreachable=0    failed=0   
54.204.208.138             : ok=159  changed=49   unreachable=0    failed=0   
localhost                  : ok=15   changed=9    unreachable=0    failed=0

We snipped pretty much all of the output. You can download the log file in its entirety from here.
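As an aside, instead of passing the interpreter with -e on every run, the same setting could live in the inventory itself; a minimal sketch (this is a standard Ansible inventory variable, not something specific to OpenShift):

# added to the [OSEv3:vars] section of myinventory
ansible_python_interpreter=/usr/bin/python3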

So now the installer has run, and our systems should be up and running. There is only one more thing we have to do before we can take this system for a spin.

We created two users, user and admin. Currently there is no way to have the installer associate one of these users with the cluster admin role in OpenShift (we opened a request for that), so we must run a command to associate the admin user we created with the cluster admin role for the cluster. The command is oadm policy add-cluster-role-to-user cluster-admin admin.

We'll go ahead and run that command now on the master node via ansible:

$ ansible -i myinventory masters -a '/usr/local/bin/oadm policy add-cluster-role-to-user cluster-admin admin'
54.175.0.44 | SUCCESS | rc=0 >>

And now we are ready to log in as either the admin or user users using oc login https://54.175.0.44:8443 from the command line or visiting the web frontend at https://54.175.0.44:8443.

NOTE To install the oc CLI tool follow these instructions.

To Be Continued

In this blog we brought up an OpenShift Origin cluster on three servers that were running Fedora 25 Atomic Host. We reviewed the inventory file in detail to explain exactly what options were used and why. In a future blog post we'll take the system for a spin, inspect some of the running system that was generated from the installer, and spin up an application that will run on and be hosted by the Origin cluster.

If you run into issues following these installation instructions, please report them in one of the following places:

Cheers!
Dusty

Kompose Up for OpenShift and Kubernetes

Cross posted with this Red Hat Developer Blog post

Introduction

Kompose is a tool to convert from higher level abstractions of application definitions into more detailed Kubernetes artifacts. These artifacts can then be used to bring up the application in a Kubernetes cluster. What higher level application abstraction should kompose use?

One of the most popular application definition formats for developers is the docker-compose.yml format, used with docker-compose to communicate with the docker daemon and bring up an application. Since this format has gained traction, we decided to make converting it to Kubernetes artifacts the initial focus of Kompose. So, where you would use docker-compose to bring up an application in docker, you can use kompose to bring up the same application in Kubernetes, if that is your preferred platform.
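As an aside, if you'd rather inspect the generated Kubernetes artifacts before creating anything, kompose also has a convert subcommand (you'll see it referenced in the kompose up output later in this post); a hedged sketch, since flags can differ between versions:

$ kompose convert
$ kubectl create -f <generated artifact files>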

How Did We Get Here?

At Red Hat, we had initially started on a project similar to Kompose, called Henge. We soon found Kompose and realized we had a lot of overlap in our goals so we decided to jump on board with the folks at Skippbox and Google who were already working on it.

TL;DR We have been working hard with the Kompose and Kubernetes communities. Kompose is now a part of the Kubernetes Incubator, and we have also added support in Kompose for getting up and running in your target environment with one command:

$ kompose up 

In this blog I'll run you through a simple application example and use kompose up to bring up the application on Kubernetes and OpenShift.

Getting an Environment

It is now easier than ever to get up and running with Kubernetes and OpenShift. If you want a hosted option, you can spin up clusters in many cloud environments including Google Container Engine and OpenShift Online (with the developer preview). If you want a local experience for trying out Kubernetes/OpenShift on your laptop, there is the RHEL-based CDK (and the ADB for upstream components), oc cluster up, minikube, and the list goes on!

Any way you look at it, there are many options for trying out Kubernetes and OpenShift these days. For this blog I'll choose to run on OpenShift Online, but the steps should work on any OpenShift or Kubernetes environment.

Once I had logged in to the OpenShift console at api.preview.openshift.com I was able to grab a token by visiting https://api.preview.openshift.com/oauth/token/request and clicking Request another token. It then shows you the oc command you can run to log your local machine into OpenShift Online.

I'll log in below and create a new project for this example blog:

$ oc login --token=xxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --server=https://api.preview.openshift.com
Logged into "https://api.preview.openshift.com:443" as "dustymabe" using the token provided.

You don't have any projects. You can try to create a new project, by running

    $ oc new-project <projectname>

$ oc new-project blogpost
Now using project "blogpost" on server "https://api.preview.openshift.com:443".

You can add applications to this project with the 'new-app' command. For example, try:

    $ oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git

to build a new hello-world application in Ruby.
$

Example Application

Now that I have an environment to run my app in, I need to give it an app to run! I took the example mlbparks application that we have been using for OpenShift for some time and converted the template to a more simplified definition of the application using the docker-compose.yml format:

$ cat docker-compose.yml
version: "2"
services:
  mongodb:
    image: centos/mongodb-26-centos7
    ports:
      - '27017'
    volumes:
      - /var/lib/mongodb/data
    environment:
      MONGODB_USER: user
      MONGODB_PASSWORD: mypass
      MONGODB_DATABASE: mydb
      MONGODB_ADMIN_PASSWORD: myrootpass
  mlbparks:
    image: dustymabe/mlbparks
    ports:
      - '8080'
    environment:
      MONGODB_USER: user
      MONGODB_PASSWORD: mypass
      MONGODB_DATABASE: mydb
      MONGODB_ADMIN_PASSWORD: myrootpass

Basically we have the mongodb service and then the mlbparks service which is backed by the dustymabe/mlbparks image. I simply generated this image from the openshift3mlbparks source code using s2i with the following command:

$ s2i build https://github.com/gshipley/openshift3mlbparks openshift/wildfly-100-centos7 dustymabe/mlbparks 

Now that we have our compose yaml file we can use kompose to bring it up. I am using kompose version v0.1.2 here:

$ kompose --version
kompose version 0.1.2 (92ea047)
$ kompose --provider openshift up
We are going to create OpenShift DeploymentConfigs, Services and PersistentVolumeClaims for your Dockerized application. 
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead. 

INFO[0000] Successfully created Service: mlbparks       
INFO[0000] Successfully created Service: mongodb        
INFO[0000] Successfully created DeploymentConfig: mlbparks 
INFO[0000] Successfully created ImageStream: mlbparks   
INFO[0000] Successfully created DeploymentConfig: mongodb 
INFO[0000] Successfully created ImageStream: mongodb    
INFO[0000] Successfully created PersistentVolumeClaim: mongodb-claim0 

Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is,pvc' for details.

OK, what happened here? We created an mlbparks Service, DeploymentConfig, and ImageStream, as well as a mongodb Service, DeploymentConfig, and ImageStream. We also created a PersistentVolumeClaim named mongodb-claim0 for /var/lib/mongodb/data.

Note: If you don't have Persistent Volumes the application will never come up because the claim will never get satisfied. If you want to deploy somewhere without Persistent Volumes, then add --emptyvols to your command, e.g. kompose --provider openshift up --emptyvols.

So let's see what is going on in OpenShift by querying from the CLI:

$ oc get dc,svc,is,pvc
NAME             REVISION                               REPLICAS       TRIGGERED BY
mlbparks         1                                      1              config,image(mlbparks:latest)
mongodb          1                                      1              config,image(mongodb:latest)
NAME             CLUSTER-IP                             EXTERNAL-IP    PORT(S)     AGE
mlbparks         172.30.67.72                           <none>         8080/TCP    4m
mongodb          172.30.111.51                          <none>         27017/TCP   4m
NAME             DOCKER REPO                            TAGS           UPDATED
mlbparks         172.30.47.227:5000/blogpost/mlbparks   latest         4 minutes ago
mongodb          172.30.47.227:5000/blogpost/mongodb    latest         4 minutes ago
NAME             STATUS                                 VOLUME         CAPACITY   ACCESSMODES   AGE
mongodb-claim0   Bound                                  pv-aws-adbb5   100Mi      RWO           4m

and the web console looks like:

image

One final thing we have to do is set it up so that we can connect to the service (i.e., expose the service to the outside world). On OpenShift, we need to expose a route. This will be done for us automatically in the future (follow along at #140), but for now the following command will suffice:

$ oc expose svc/mlbparks
route "mlbparks" exposed
$ oc get route mlbparks 
NAME       HOST/PORT                                          PATH      SERVICE         TERMINATION   LABELS
mlbparks   mlbparks-blogpost.44fs.preview.openshiftapps.com             mlbparks:8080                 service=mlbparks

For me this means I can now access the mlbparks application by pointing my web browser to mlbparks-blogpost.44fs.preview.openshiftapps.com.

Let's try it out:

image

Success!
Dusty

Fedora 25 available in DigitalOcean

Cross posted with this fedora magazine post

Last week the Fedora Project released Fedora 25. This week Fedora Project Community members have worked with the DigitalOcean team to make Fedora 25 available on their platform. If you're not familiar with DigitalOcean already, it's a dead simple cloud hosting platform that's great for developers.

Important Notes

The image has some specific differences from others that Fedora ships. You may need to know about these differences before you use the image.

  • Usually Fedora Cloud images have you log in as user fedora. But as with other DigitalOcean images, log into the Fedora 25 DigitalOcean cloud image with your ssh key as root.
  • Similar to our last few Fedora releases, Fedora 25 also has SELinux enabled by default. Not familiar with SELinux yet? No problem. Read more about it here.
  • In these images there's no firewall on by default. There's also no cloud-provided firewall solution. We recommend that you secure your system immediately after you log in (see the sketch after this list).
  • Fedora 25 should be available in all datacenters across the globe.
  • If you have a problem you think is Fedora specific, then drop us an email, or ping us in #fedora-cloud on Freenode. You can also let the team know if you just want to say you're enjoying using the F25 image.
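Since there's no firewall on by default, here's a minimal hedged starting point using firewalld (adjust the services and zones to your needs):

# dnf install -y firewalld
# systemctl enable --now firewalld
# firewall-cmd --permanent --add-service=ssh
# firewall-cmd --reload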

Visit the DigitalOcean Fedora landing page and spin one up today!

Happy Developing!
Dusty

Booting Lenovo T460s after Fedora 24 Updates

Introduction

I recently picked up a new Lenovo T460s work laptop. It is fairly light and has 20G of memory, which is great for running Virtual Machines. One of the first things I did on this new laptop was install Fedora 24 onto the system. After installing from the install media I was up and running and humming along.

Soon after installing I updated the system to get all bugfixes and security patches. I updated everything and rebooted and:

Probing EDD (edd=off to disable)... OK

Yep, the system wouldn't even boot. The GRUB menu would appear and you could select different entries, but the system would never finish booting.

The Problem

It was time to hit the internet. After some futile Google searches I decided to ask on Fedora's devel list. I got a quick response that led to two bug reports (1351943, 1353103) about the issue. It turns out that the newer kernel + microcode that we had updated to had some sort of incompatibility with the existing system firmware. According to the comments in 1353103, there was most likely an assumption the microcode was making that wasn't true.

The Solution

The TL;DR on the fix for this is that you need to update the BIOS on the T460s. I did this by going to the T460s support page (at the time of this writing the link is here) and downloading the ISO image with the BIOS update (n1cur06w.iso).

Now what was I going to do with an ISO image? The laptop doesn't have a CD-ROM drive, so burning a CD wouldn't help us one bit. Fortunately, there is a helpful article about how to take this ISO image and use it to update the BIOS.

Here are the steps that I took.

  • First, boot the laptop into the older kernel so you can actually get to a Linux environment.
  • Next, install the geteltorito software so that we can create an image to dd onto the USB key:
$ sudo dnf install geteltorito 
  • Next, use the software to create the image and write it to a flash drive. Note: /dev/sdg was the USB key on my system. Please be sure to change that to the device corresponding to your USB key.
$ geteltorito -o bios.img n1cur06w.iso 
Booting catalog starts at sector: 20 
Manufacturer of CD: NERO BURNING ROM
Image architecture: x86
Boot media type is: harddisk
El Torito image starts at sector 27 and has 47104 sector(s) of 512 Bytes

Image has been written to file "bios.img".

$
$ sudo dd if=bios.img of=/dev/sdg bs=1M
23+0 records in
23+0 records out
24117248 bytes (24 MB, 23 MiB) copied, 1.14576 s, 21.0 MB/s
  • Next, reboot the laptop.
  • After the Lenovo logo appears press ENTER.
  • Press F12 to make your laptop boot from something other than your HDD.
  • Select the USB stick.
  • Make sure your laptop has its power supply plugged in. (It will refuse to update otherwise.)
  • Follow the instructions.
  • Select number 2: Update system program
  • Once finished reboot the laptop and remove the USB key.

Done! I was now happily booting Fedora again. I hope this helps others who hit the same problem!

- Dusty

Archived-At Email Header From Mailman 3 Lists

By now most Fedora email lists have been migrated to Mailman3. One little (but killer) new feature that I recently discovered was that Mailman3 includes the RFC 5064 Archived-At header in the emails.

This is a feature I have wanted for a really long time: being able to find an email in your inbox and paste a link to it for anyone, without having to hunt the message down in the online archive, saves a lot of time and cuts latency when chatting on IRC or some other form of real-time communication.

OK, how do I easily get this information out of the header and use it without having to view the email source every time?

I use Thunderbird for most of my email viewing (sometimes mutt for composing). In Thunderbird there is the Archived-At plugin. After installing this plugin you can right click on any message from a capable mailing list and select "Copy Archived-At URL".
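If you prefer mutt or just have the raw message on disk, the header is also trivial to pull out with grep; a minimal sketch assuming the message is saved as message.eml:

$ grep -i '^Archived-At:' message.eml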

Now you are off to share that URL with friends :)

Happy New Year!
Dusty

Fedora Cloud Vagrant Boxes in Atlas

Cross posted with this fedora magazine post

Since the release of Fedora 22, Fedora began creating Vagrant boxes for cloud images in order to make it easier to set up a local environment for development or testing. In the Fedora 22 release cycle we worked out quite a few kinks and we are again releasing libvirt and virtualbox Vagrant boxes for Fedora 23.

Additionally, for Fedora 23, we are making it easier for users to grab these boxes by having them indexed in HashiCorp's Atlas. Atlas is essentially an index of Vagrant boxes that makes it easy to distribute them (think of it like a Docker registry for virtual machine images). By indexing the Fedora boxes in Atlas, users now have the option of using the vagrant software to download and add the boxes automatically, rather than having to grab the boxes directly from the mirrors first (although this is still an option).

In order to get started with the Fedora cloud base image, run the following command:

# vagrant init fedora/23-cloud-base && vagrant up

Alternatively, to get started with Fedora Atomic host, run this command:

# vagrant init fedora/23-atomic-host && vagrant up

The above commands will grab the latest indexed images in Atlas and start a virtual machine without the user having to go download the image first. This will make it easier for Fedora users to develop and test on Fedora! If you haven't delved into Vagrant yet then you can get started by visiting the Vagrant page on the Fedora Developer Portal. Let us know on the Fedora Cloud mailing list if you have any trouble.
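If you have more than one provider installed, you can also pick one explicitly with the --provider flag; for example:

# vagrant init fedora/23-atomic-host && vagrant up --provider=libvirt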

Dusty

Fedora 23: In the Ocean Again

Cross posted with this fedora magazine post

This week was the release week for Fedora 23, and the Fedora Project has again worked together with the DigitalOcean team to make Fedora 23 available in their service. If you're not familiar with DigitalOcean already, it is a dead simple cloud hosting platform which is great for developers.

Using Fedora on DigitalOcean

There are a couple of things to note if you are planning on using Fedora on DigitalOcean services and machines.

  • Like with other DigitalOcean images, you will log in with your ssh key as root, rather than the typical fedora user that you may be familiar with when logging in to a Fedora cloud image.
  • Similar to Fedora 21 and Fedora 22, Fedora 23 also has SELinux enabled by default.
  • In DigitalOcean images there is no firewall on by default, and there is no cloud provided firewall solution. It is highly recommended that you secure your system after you log in.
  • Fedora 23 should be available in all the newest datacenters in each region, but some legacy datacenters aren't supported.
  • If you have a problem you think is Fedora specific, then drop us an email, ping us in #fedora-cloud on Freenode, or visit the Fedora cloud Trac to see if it is already being worked on.

Visit the DigitalOcean Fedora landing page and spin one up today!

Happy Developing!
Dusty