Archive for the 'kubernetes' Category

Installing an OpenShift Origin Cluster on Fedora 25 Atomic Host: Part 2

Cross posted with this Project Atomic Blog post

Introduction

In part 1 of this series we used the OpenShift Ansible Installer to install OpenShift Origin on three servers that were running Fedora 25 Atomic Host. The three machines we'll be using have the following roles and IP address configurations:

+-------------+----------------+--------------+
|     Role    |   Public IPv4  | Private IPv4 |
+=============+================+==============+
| master,etcd | 54.175.0.44    | 10.0.173.101 |
+-------------+----------------+--------------+
|    worker   | 52.91.115.81   | 10.0.156.20  |
+-------------+----------------+--------------+
|    worker   | 54.204.208.138 | 10.0.251.101 |
+-------------+----------------+--------------+

In this blog, we'll explore the installed Origin cluster and then launch an application to see if everything works.

The Installed Origin Cluster

With the cluster up and running, we can log in as admin to the master node via the oc command. To install the oc CLI on your machine, you can follow these instructions or, on Fedora, you can install it via dnf install origin-clients. For this demo, we have the origin-clients-1.3.1-1.fc25.x86_64 rpm installed:

$ oc login --insecure-skip-tls-verify -u admin -p OriginAdmin https://54.175.0.44:8443
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-system
    logging
    management-infra
    openshift
    openshift-infra

Using project "default".
Welcome! See 'oc help' to get started.

NOTE: --insecure-skip-tls-verify was added because we do not have properly signed certificates. See the docs for installing a custom signed certificate.

After we log in we can see that we are using the default namespace. Let's see what nodes exist:

$ oc get nodes
NAME           STATUS                     AGE
10.0.156.20    Ready                      9h
10.0.173.101   Ready,SchedulingDisabled   9h
10.0.251.101   Ready                      9h

The nodes represent each of the servers that are a part of the Origin cluster. The name of each node corresponds with its private IPv4 address. Also note that 10.0.173.101 is the private IP address of the master,etcd node and that its status contains SchedulingDisabled. This is because we specified openshift_schedulable=false for this node when we did the install in part 1.
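If you ever decide to allow regular workloads on the master after the fact, that flag can be toggled at runtime. A minimal sketch, assuming the oc adm manage-node subcommand is available in this Origin release (oadm manage-node is the older spelling of the same command):

$ # allow pods to be scheduled on the master node
$ oc adm manage-node 10.0.173.101 --schedulable=true
$ # and flip it back to match the original install
$ oc adm manage-node 10.0.173.101 --schedulable=false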

Now let's check the pods, services, and routes that are running in the default namespace:

$ oc get pods -o wide 
NAME                       READY     STATUS    RESTARTS   AGE       IP             NODE
docker-registry-3-hgwfr    1/1       Running   0          9h        10.129.0.3     10.0.156.20
registry-console-1-q48xn   1/1       Running   0          9h        10.129.0.2     10.0.156.20
router-1-nwjyj             1/1       Running   0          9h        10.0.156.20    10.0.156.20
router-1-o6n4a             1/1       Running   0          9h        10.0.251.101   10.0.251.101
$ 
$ oc get svc
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry    172.30.2.89      <none>        5000/TCP                  9h
kubernetes         172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     9h
registry-console   172.30.147.190   <none>        9000/TCP                  9h
router             172.30.217.187   <none>        80/TCP,443/TCP,1936/TCP   9h
$ 
$ oc get routes
NAME               HOST/PORT                                        PATH      SERVICES           PORT               TERMINATION
docker-registry    docker-registry-default.54.204.208.138.xip.io              docker-registry    5000-tcp           passthrough
registry-console   registry-console-default.54.204.208.138.xip.io             registry-console   registry-console   passthrough

NOTE: If there are any pods that have failed to run you can try to debug with the oc status -v and oc describe pod/<podname> commands. You can retry any failed deployments with the oc deploy <deploymentname> --retry command.
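For example, a typical debugging pass might look something like the following, substituting your own pod and deploymentconfig names; oc logs is another handy command for pulling a pod's container output:

$ # overview of anything unhealthy in the current project
$ oc status -v
$ # events and state for a specific pod
$ oc describe pod/<podname>
$ # container logs for that pod
$ oc logs pod/<podname>
$ # retry a deployment that failed
$ oc deploy <deploymentname> --retry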

We can see that we have a pod, service, and route for both a docker-registry and a registry-console. The docker registry is where any container builds within OpenShift will be pushed, and the registry console is a web frontend for the registry.

Notice that there are two router pods and that they are running on two different nodes: the worker nodes. We can effectively send traffic to either of these nodes and it will get routed appropriately. For our install we elected to set the openshift_master_default_subdomain to 54.204.208.138.xip.io. With that setting we are only directing traffic to one of the worker nodes. Alternatively, we could have configured this as a load balanced hostname and/or one that round robins across both worker nodes.
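The xip.io service is simply wildcard DNS: any hostname with an IP address embedded in it resolves to that address, which is what makes the zero-configuration subdomain above work. A quick check, assuming dig is installed on your workstation:

$ # both of these should resolve to 54.204.208.138
$ dig +short docker-registry-default.54.204.208.138.xip.io
$ dig +short registry-console-default.54.204.208.138.xip.io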

Now that we have explored the install, let's try out logging in as admin to the openshift web console at https://54.175.0.44:8443:

image

And after we've logged in, we see the list of projects that the admin user has access to:

image

We then select the default project and can view the same applications that we looked at before using the oc command:

image

At the top, there is the registry console. Let's try accessing it by clicking the https://registry-console-default.54.204.208.138.xip.io/ link in the top right. Note that this is the link from the exposed route:

image

We can log in with the same admin/OriginAdmin credentials that we used to log in to the OpenShift web console.

image

After logging in, we see links to each project, the images that belong to each of them, and the most recently pushed images.

And... we're done! We have poked around the infrastructure of the installed Origin cluster a bit. We've seen registry pods, router pods, and accessed the registry web console frontend. Next we'll get fancy and throw an example application onto the platform for the user user.

Running an Application as a Normal User

Now that we've observed some of the more admin-like items using the admin user's account, we'll give the normal user a spin. First, we'll log in:

$ oc login --insecure-skip-tls-verify -u user -p OriginUser https://54.175.0.44:8443                                                                                        
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

After we log in as a normal user, the CLI tools recognize pretty quickly that this user has no projects and no running applications, and they give us a helpful clue about what to do next: create a new project. Let's create a new project called myproject:

$ oc new-project myproject
Now using project "myproject" on server "https://54.175.0.44:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

After creating the new project, the CLI tools again give us some helpful text showing us how to get started with a new application on the platform. They suggest trying out the ruby application with source code at github.com/openshift/ruby-ex.git, building it on top of the Source-to-Image (or S2I) builder image known as centos/ruby-22-centos7. Might as well give it a spin:

$ oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
--> Found Docker image ecd5025 (10 hours old) from Docker Hub for "centos/ruby-22-centos7"

    Ruby 2.2 
    -------- 
    Platform for building and running Ruby 2.2 applications

    Tags: builder, ruby, ruby22

    * An image stream will be created as "ruby-22-centos7:latest" that will track the source image
    * A source build using source code from https://github.com/openshift/ruby-ex.git will be created
      * The resulting image will be pushed to image stream "ruby-ex:latest"
      * Every time "ruby-22-centos7:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "ruby-ex"
    * Port 8080/tcp will be load balanced by service "ruby-ex"
      * Other containers can access this service through the hostname "ruby-ex"

--> Creating resources with label app=ruby-ex ...
    imagestream "ruby-22-centos7" created
    imagestream "ruby-ex" created
    buildconfig "ruby-ex" created
    deploymentconfig "ruby-ex" created
    service "ruby-ex" created
--> Success
    Build scheduled, use 'oc logs -f bc/ruby-ex' to track its progress.
    Run 'oc status' to view your app.

Let's take a moment to digest that. A new image stream was created to track the upstream ruby-22-centos7:latest image. A ruby-ex buildconfig was created that will perform an S2I build, baking the source code into the image from the ruby-22-centos7 image stream. The resulting image will feed another image stream known as ruby-ex. A deploymentconfig was created to deploy the application into pods once the build is done. Finally, a ruby-ex service was created so the application can be load balanced and discoverable.
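If you want to see those objects for yourself, they can all be listed in one shot using the resource abbreviations from the output, and the build can be followed while it runs; a quick sketch:

$ # the image streams, buildconfig, deploymentconfig, and service created by new-app
$ oc get is,bc,dc,svc
$ # follow the S2I build output as it runs
$ oc logs -f bc/ruby-ex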

After a short time, we check the status of the application:

$ oc status 
In project myproject on server https://54.175.0.44:8443

svc/ruby-ex - 172.30.213.94:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest 
      build #1 running for 26 seconds
    deployment #1 waiting on image or update

1 warning identified, use 'oc status -v' to see details.

NOTE: The warning referred to in the output is a warning about there being no healthcheck defined for this service. You can view the text of this warning by running oc status -v.

We can see here that there is a svc (service) associated with a dc (deploymentconfig), which is in turn associated with a bc (buildconfig) whose first build has been running for 26 seconds. The deployment is waiting for the build to finish before attempting to run.

After some more time:

$ oc status 
In project myproject on server https://54.175.0.44:8443

svc/ruby-ex - 172.30.213.94:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest 
    deployment #1 running for 6 seconds

1 warning identified, use 'oc status -v' to see details.

The build is now done and the deployment is running.

And after more time:

$ oc status 
In project myproject on server https://54.175.0.44:8443

svc/ruby-ex - 172.30.213.94:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest 
    deployment #1 deployed about a minute ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

We have an app! What are the running pods in this project?:

$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
ruby-ex-1-build   0/1       Completed   0          13m
ruby-ex-1-mo3lb   1/1       Running     0          11m

The build has Completed and the ruby-ex-1-mo3lb pod is Running. The only thing we have left to do is expose the service so that it can be accessed via the router from the outside world:

$ oc expose svc/ruby-ex
route "ruby-ex" exposed
$ oc get route/ruby-ex
NAME      HOST/PORT                                 PATH      SERVICES   PORT       TERMINATION
ruby-ex   ruby-ex-myproject.54.204.208.138.xip.io             ruby-ex    8080-tcp   
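As a quick sanity check from the command line (assuming curl is available on your workstation), we can verify the route responds before ever opening a browser:

$ # expect an HTTP 200 once the router picks up the new route
$ curl -I http://ruby-ex-myproject.54.204.208.138.xip.io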

With the route exposed we should now be able to access the application at ruby-ex-myproject.54.204.208.138.xip.io. Before we head to the browser, we'll log in to the openshift console as the user user and view the running pods in project myproject:

image

And pointing the browser to ruby-ex-myproject.54.204.208.138.xip.io we see:

image

Woot!

Conclusion

We have explored the basic OpenShift Origin cluster that we set up in part 1 of this two part blog series. We viewed the infrastructure docker registry and router components and discussed how they are set up. We also ran through an example application that was suggested to us by the command line tools and were able to define that application, monitor its progress, and eventually access it from our web browser. Hopefully this blog gives the reader an idea or two about how to get started with setting up and using an Origin cluster on Fedora 25 Atomic Host.

Enjoy!
Dusty

Kompose Up for OpenShift and Kubernetes

Cross posted with this Red Hat Developer Blog post

Introduction

Kompose is a tool that converts higher-level abstractions of application definitions into more detailed Kubernetes artifacts. These artifacts can then be used to bring up the application in a Kubernetes cluster. What higher-level application abstraction should kompose use?

One of the most popular application definition formats for developers is the docker-compose.yml format, used by docker-compose to talk to the docker daemon and bring up an application. Since this format has gained some traction, we decided to make converting it to Kubernetes the initial focus of Kompose. So, where you would use docker-compose to bring up an application on docker, you can use kompose to bring up the same application on Kubernetes, if that is your preferred platform.
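In other words, the intended parallel looks roughly like this (a sketch; both commands assume a docker-compose.yml in the current directory):

$ # bring the application up on the local docker daemon
$ docker-compose up -d
$ # bring the same application up on a Kubernetes cluster
$ kompose up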

How Did We Get Here?

At Red Hat, we had initially started on a project similar to Kompose, called Henge. We soon found Kompose and realized we had a lot of overlap in our goals, so we decided to jump on board with the folks at Skippbox and Google who were already working on it.

TL;DR We have been working hard with the Kompose and Kubernetes communities. Kompose is now a part of the Kubernetes Incubator and we have also added support in Kompose for getting up and running in your target environment in one command:

$ kompose up 

In this blog I'll run you through a simple application example and use kompose up to bring up the application on Kubernetes and OpenShift.

Getting an Environment

It is now easier than ever to get up and running with Kubernetes and OpenShift. If you want a hosted solution, you can spin up clusters in many cloud environments, including Google Container Engine and OpenShift Online (with the developer preview). If you want a local experience for trying out Kubernetes/OpenShift on your laptop, there is the RHEL-based CDK (and the ADB for upstream components), oc cluster up, minikube, and the list goes on!
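For the local options, getting a cluster usually boils down to a single command; a sketch, assuming the respective tools are already installed:

$ # spin up a local single-node OpenShift cluster
$ oc cluster up
$ # or a local single-node Kubernetes cluster
$ minikube start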

Any way you look at it, there are many options for trying out Kubernetes and OpenShift these days. For this blog I'll choose to run on OpenShift Online, but the steps should work on any OpenShift or Kubernetes environment.

Once I had logged in to the OpenShift console at api.preview.openshift.com I was able to grab a token by visiting https://api.preview.openshift.com/oauth/token/request and clicking Request another token. It will then show you the oc command you can run to log your local machine in to OpenShift Online.

I'll log in below and create a new project for this example blog:

$ oc login --token=xxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --server=https://api.preview.openshift.com
Logged into "https://api.preview.openshift.com:443" as "dustymabe" using the token provided.

You don't have any projects. You can try to create a new project, by running

    $ oc new-project <projectname>

$ oc new-project blogpost
Now using project "blogpost" on server "https://api.preview.openshift.com:443".

You can add applications to this project with the 'new-app' command. For example, try:

    $ oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git

to build a new hello-world application in Ruby.
$

Example Application

Now that I have an environment to run my app in, I need to give it an app to run! I took the example mlbparks application that we have been using for OpenShift for some time and converted the template to a simplified definition of the application using the docker-compose.yml format:

$ cat docker-compose.yml
version: "2"
services:
  mongodb:
    image: centos/mongodb-26-centos7
    ports:
      - '27017'
    volumes:
      - /var/lib/mongodb/data
    environment:
      MONGODB_USER: user
      MONGODB_PASSWORD: mypass
      MONGODB_DATABASE: mydb
      MONGODB_ADMIN_PASSWORD: myrootpass
  mlbparks:
    image: dustymabe/mlbparks
    ports:
      - '8080'
    environment:
      MONGODB_USER: user
      MONGODB_PASSWORD: mypass
      MONGODB_DATABASE: mydb
      MONGODB_ADMIN_PASSWORD: myrootpass

Basically we have the mongodb service and then the mlbparks service, which is backed by the dustymabe/mlbparks image. I simply generated this image from the openshift3mlbparks source code using s2i with the following command:

$ s2i build https://github.com/gshipley/openshift3mlbparks openshift/wildfly-100-centos7 dustymabe/mlbparks 

Now that we have our compose yaml file we can use kompose to bring it up. I am using kompose version v0.1.2 here:

$ kompose --version
kompose version 0.1.2 (92ea047)
$ kompose --provider openshift up
We are going to create OpenShift DeploymentConfigs, Services and PersistentVolumeClaims for your Dockerized application. 
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead. 

INFO[0000] Successfully created Service: mlbparks       
INFO[0000] Successfully created Service: mongodb        
INFO[0000] Successfully created DeploymentConfig: mlbparks 
INFO[0000] Successfully created ImageStream: mlbparks   
INFO[0000] Successfully created DeploymentConfig: mongodb 
INFO[0000] Successfully created ImageStream: mongodb    
INFO[0000] Successfully created PersistentVolumeClaim: mongodb-claim0 

Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is,pvc' for details.

OK, what happened here... We created an mlbparks Service, DeploymentConfig, and ImageStream as well as a mongodb Service, DeploymentConfig, and ImageStream. We also created a PersistentVolumeClaim named mongodb-claim0 for the /var/lib/mongodb/data volume.

Note: If you don't have Persistent Volumes the application will never come up because the claim will never get satisfied. If you want to deploy somewhere without Persistent Volumes, add --emptyvols to your command, like kompose --provider openshift up --emptyvols.
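If you would rather inspect the artifacts before anything gets created (the alternative the tool itself suggests above), the two-step flow looks roughly like this; a sketch, with the generated file name left as a placeholder:

$ # generate OpenShift artifacts on disk without creating them
$ kompose --provider openshift convert
$ # review the generated yaml, then create the resources by hand
$ oc create -f <generated-file>.yaml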

So let's see what is going on in OpenShift by querying from the CLI:

$ oc get dc,svc,is,pvc
NAME             REVISION                               REPLICAS       TRIGGERED BY
mlbparks         1                                      1              config,image(mlbparks:latest)
mongodb          1                                      1              config,image(mongodb:latest)
NAME             CLUSTER-IP                             EXTERNAL-IP    PORT(S)     AGE
mlbparks         172.30.67.72                           <none>         8080/TCP    4m
mongodb          172.30.111.51                          <none>         27017/TCP   4m
NAME             DOCKER REPO                            TAGS           UPDATED
mlbparks         172.30.47.227:5000/blogpost/mlbparks   latest         4 minutes ago
mongodb          172.30.47.227:5000/blogpost/mongodb    latest         4 minutes ago
NAME             STATUS                                 VOLUME         CAPACITY   ACCESSMODES   AGE
mongodb-claim0   Bound                                  pv-aws-adbb5   100Mi      RWO           4m

and the web console looks like:

image

One final thing we have to do is set it up so that we can connect to the service (i.e., expose the service to the outside world). On OpenShift, we need to expose a route. This will be done for us automatically in the future (follow along at #140), but for now the following command will suffice:

$ oc expose svc/mlbparks
route "mlbparks" exposed
$ oc get route mlbparks 
NAME       HOST/PORT                                          PATH      SERVICE         TERMINATION   LABELS
mlbparks   mlbparks-blogpost.44fs.preview.openshiftapps.com             mlbparks:8080                 service=mlbparks

For me this means I can now access the mlbparks application by pointing my web browser to mlbparks-blogpost.44fs.preview.openshiftapps.com.

Let's try it out:

image

Success!
Dusty

kubernetes skydns setup for testing on a single node

Intro

Kubernetes is (currently) missing an integrated dns solution for service discovery. In the future it will be integrated into kubernetes (see PR11599), but for now we have to set up skydns manually.

I have seen some tutorials on how to get skydns working, but almost all of them are fairly involved. However, if you just want a simple setup on a single node for testing, it is actually quite easy to get skydns going.

Setting it up

NOTE: This tutorial assumes that you already have a machine with docker and kubernetes set up and working. This has been tested on Fedora 22 and CentOS 7. It should work on other platforms but YMMV.

So the way kubernetes and skydns work together is via two components:

  • kube2sky - listens on the kubernetes api for new services and adds information into etcd
  • skydns - listens for dns requests and responds based on information in etcd

The easiest way to get kube2sky and skydns up and running is to just kick off a few docker containers. We'll start with kube2sky like so:

[root@f22 ~]$ docker run -d --net=host --restart=always \
                gcr.io/google_containers/kube2sky:1.11  \
                -v=10 -logtostderr=true -domain=kubernetes.local \
                -etcd-server="http://127.0.0.1:2379"

NOTE: We are re-using the same etcd that kubernetes is using.
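If you're curious what kube2sky is actually publishing, you can peek into that same etcd; a sketch, assuming the etcdctl client is installed and the skydns key layout used by this version (records live under /skydns as a reversed domain path):

[root@f22 ~]$ # list the dns records kube2sky has written
[root@f22 ~]$ etcdctl --peers http://127.0.0.1:2379 ls --recursive /skydns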

The next step is to start skydns to respond to dns queries:

[root@f22 ~]$ docker run -d --net=host --restart=always  \
                -e ETCD_MACHINES="http://127.0.0.1:2379" \
                -e SKYDNS_DOMAIN="kubernetes.local"      \
                -e SKYDNS_ADDR="0.0.0.0:53"              \
                -e SKYDNS_NAMESERVERS="8.8.8.8:53,8.8.4.4:53" \
                gcr.io/google_containers/skydns:2015-03-11-001
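At this point skydns should already be answering queries on port 53 of the host, so you can test it directly before touching the kubelet. A sketch, assuming dig is installed and that kube2sky publishes records in the <service>.<namespace>.<domain> form used by this version:

[root@f22 ~]$ # ask skydns directly for the kubernetes service record
[root@f22 ~]$ dig @127.0.0.1 kubernetes.default.kubernetes.local +short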

The final step is to modify your kubelet configuration to let it know where the dns for the cluster is. You can do this by adding --cluster_dns and --cluster_domain to KUBELET_ARGS in /etc/kubernetes/kubelet:

[root@f22 ~]$ grep KUBELET_ARGS /etc/kubernetes/kubelet
KUBELET_ARGS="--cluster_dns=192.168.121.174 --cluster_domain=kubernetes.local"
[root@f22 ~]$ systemctl restart kubelet.service

NOTE: I used the ip address of the machine that we are using for this single node cluster.

And finally we can see our two containers running:

[root@f22 ~]$ docker ps --format "table {{.ID}}\t{{.Status}}\t{{.Image}}"
CONTAINER ID        STATUS              IMAGE
d229442f533c        Up About a minute   gcr.io/google_containers/skydns:2015-03-11-001
76d51770b240        Up About a minute   gcr.io/google_containers/kube2sky:1.11

Testing it out

Now let's see if it works! Taking a page out of the kubernetes GitHub docs, we'll start a busybox container and then do an nslookup on the "kubernetes service":

[root@f22 ~]$ cat > /tmp/busybox.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
EOF
[root@f22 ~]$ kubectl create -f /tmp/busybox.yaml
pod "busybox" created
[root@f22 ~]$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          16s
[root@f22 ~]$ kubectl exec busybox -- nslookup kubernetes
Server:    192.168.121.174
Address 1: 192.168.121.174

Name:      kubernetes
Address 1: 10.254.0.1

NOTE: The "kubernetes service" is the one that is shown from the kubectl get services kubernetes command.

Now you have a single node k8s setup with dns. In the future PR11599 should satisfy this need, but for now this works. Enjoy!

Dusty