Glen Pitt-Pladdy :: Blog

Kubernetes to learn Part 1

There's been a lot of talk about Kubernetes, and in many respects it has the potential to solve a lot of long standing problems. It also has the potential to cause a whole lot of problems, since this is a massive change from what teams are used to, and many areas that would be needed for enterprise deployments are deficient or in a state of rapid change.

This will stabilise with time, and in preparation for when that time comes, it's worth learning about it now.

The case for doing things the hard way

I've looked through a bunch of options for setting up Kubernetes and there are plenty of apparently easy approaches where all the thinking is already done for you. The problem for me is that while that may avoid a load of time learning about the technical details, it also results in not learning the technical details. That completely defeats the point of this exercise.

Among the other things I've noticed is that many of the instructions I found are out of date and in some cases incompatible with current Linux distros. Many also disable important security mechanisms like GPG checks on packages. That may seem OK for experimenting, and in some cases it may be in a sufficiently sandboxed environment, but remember that malicious actors often exploit the poor security of lower priority systems to gain a foothold which they can further leverage for their activities against higher priority targets. This is often referred to as establishing a beachhead.

The fundamental thing here is that I want to build an environment that is aligned with a production style design and could be moved in that direction. Many of the examples move far away from anything that could relate to a production deployment and towards a non-redundant development environment that "runs on your laptop", which we already know is not ideal.

With this in mind, I'm looking for a stable platform, not to be constantly chasing a moving target. Especially in the enterprise space, few are in a position to have the operations teams and skills necessary to maintain a large stable environment in close alignment with a fast moving target. This is why I'll be using the standard packages already available in CentOS, which are potentially applicable to running a stable production environment.

My starting point is 1 master and 2 workers, then I'll expand that out to an HA solution.


Master Node

You will need to install:

# yum install kubernetes-master etcd

That should be sufficient for a basic master node.


etcd

This is the database that stores the data relating to the cluster, and there's documentation on the configuration details in the upstream etcd docs.

Initially the config in /etc/etcd/etcd.conf assumes a single self-contained node (everything on localhost), so we need to change it to allow other nodes to access etcd. The easiest way is simply to tell it to listen on all interfaces:
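As a sketch, with the etcd package this is done via the listen URL settings in /etc/etcd/etcd.conf (the exact set of options present varies with the packaged version):

```shell
# /etc/etcd/etcd.conf — listen for client traffic on all interfaces
# ( with an empty host part means "all addresses")
ETCD_LISTEN_CLIENT_URLS=""
```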


We also need to tell it to advertise itself on the address where it will be accessible to other nodes. In my case I have an isolated network that my Kubernetes nodes use to talk between themselves:
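A sketch of the advertise setting, where is a made-up address standing in for this node's address on the isolated cluster network:

```shell
# /etc/etcd/etcd.conf — advertise the address other nodes should use
# (substitute this node's address on your cluster network)
ETCD_ADVERTISE_CLIENT_URLS=""
```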


Since we'll be clustering later, it might be a good time to set the cluster name to something consistent with others that we will be adding:
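Assuming this refers to etcd's initial cluster token, a sketch with "etcd-k8s" as a made-up name:

```shell
# /etc/etcd/etcd.conf — must be identical on every member we add later
ETCD_INITIAL_CLUSTER_TOKEN="etcd-k8s"
```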


Then enable and start etcd with:

# systemctl enable etcd.service
# systemctl start etcd.service

You can then check it's happy with:

# etcdctl cluster-health
member 3afddc9561d18a43 is healthy: got healthy result from
cluster is healthy


Flannel

Flannel is a network overlay that is popular for Kubernetes and generally the starting point. It supports a number of different backend network types, including ones for the major Cloud platforms, but in our case we're going to keep things simple and use VXLAN.

This needs a network configured, and there are a bunch of options. For simplicity we only need one network, which I set aside in my lab for this, and I'm putting the config in a temporary file called flannel_network.json which we will use in a moment:

{
        "Network": "",
        "SubnetLen": 24,
        "Backend": {
                "Type": "vxlan",
                "VNI": 1
        }
}
Flannel will take the Network range and split it up into subnets based on SubnetLen, which in this case gives us 256 subnets. This should be fine for a small scale deployment, and if you're doing something bigger than that then you probably want to do a lot more planning than just reading this blog. You can control that in more detail with other parameters.
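The arithmetic behind that, assuming the Network range here is a /16:

```shell
# A /16 Network carved into /24 subnets (SubnetLen) gives 2^(24-16) of them
echo $(( 2 ** (24 - 16) ))   # prints 256
```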

The config is stored at a key under the configured Flannel prefix within etcd. We put this config in place with:

# etcdctl set / <flannel_network.json
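For reference, assuming the CentOS packaged default prefix of /atomic.io/network (check FLANNEL_ETCD_PREFIX in /etc/sysconfig/flanneld on your system), the full command would look like:

```shell
# The key path is the FLANNEL_ETCD_PREFIX with /config appended
etcdctl set /atomic.io/network/config < flannel_network.json
```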

Beware that many examples use the CoreOS configuration directory. This is not the default for CentOS (and I'm assuming Red Hat as well), though obviously you could set it to be the same as CoreOS.

More detail on configuration is available in the Flannel documentation, but beware the different configuration directory.

We're not actually going to run Flannel on the master (unless you want it to double as a worker node) but the config needs to be in etcd on the master for worker nodes to access.

Kubernetes Services

I ran into some problems that got logged as "No API token found", and after some research I found a solution which I'm going to adapt slightly. The cause seems to be that Admission Control is set in /etc/kubernetes/apiserver and includes ServiceAccount, so we need a corresponding key. The solution I found creates this in /tmp/, which will in time get cleaned out, so IMHO probably not a good idea if you want your cluster to last. Instead I'm going to create it in /etc/kubernetes/ where I know nothing should interfere with it:

# openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048

This creates the key world-readable, but we want only Kubernetes (and root) to have read access:

# chmod 640 /etc/kubernetes/serviceaccount.key
# chown root:kube /etc/kubernetes/serviceaccount.key

In /etc/kubernetes/apiserver add configuration for the service account key:
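A sketch of the setting; the flag name is from the Kubernetes version packaged at the time, so check it against your version:

```shell
# /etc/kubernetes/apiserver — point the API server at the service account key
KUBE_API_ARGS="--service-account-key-file=/etc/kubernetes/serviceaccount.key"
```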


Then we need to make sure the API server is listening on all interfaces, or else specify where you want it to listen; other places where this address is used will have to match:
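As a sketch, in the packaged /etc/kubernetes/apiserver this is the KUBE_API_ADDRESS setting; the exact flag name (--address or --insecure-bind-address) depends on the packaged version:

```shell
# /etc/kubernetes/apiserver — listen on all interfaces
KUBE_API_ADDRESS="--insecure-bind-address="
```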


The only other configuration you need to worry about for now is /etc/kubernetes/apiserver, which should have KUBE_ETCD_SERVERS set to a comma separated list of the etcd servers. The default is localhost, which is fine for a stand-alone host.
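If you do need to point at etcd elsewhere, a sketch with made-up addresses:

```shell
# /etc/kubernetes/apiserver — comma separated list of etcd endpoints
# (addresses here are placeholders for your own)
KUBE_ETCD_SERVERS="--etcd-servers=,"
```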

Then we need to set the same service account key in /etc/kubernetes/controller-manager with:
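A sketch, again with the flag name as used by the packaged version at the time:

```shell
# /etc/kubernetes/controller-manager — same key as the API server uses
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/serviceaccount.key"
```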


Enable and start these:

# systemctl enable kube-apiserver
# systemctl start kube-apiserver
# systemctl enable kube-controller-manager
# systemctl start kube-controller-manager
# systemctl enable kube-scheduler
# systemctl start kube-scheduler

At this point you should have a basic master node running.

What's listening where

In a production environment we'll want to have some control or at minimum awareness of what services are being used. The setup on the master node so far gives:

  • etcd on TCP 2379 and 2380
  • kube-apiserver on TCP 8080
  • kube-scheduler on TCP 10251
  • kube-apiserver on TCP 6443
  • kube-controller-manager on TCP 10252

If you simply want to allow everything for testing, you can disable the firewall:

# systemctl stop firewalld
# systemctl mask firewalld

Remember to enable it again or apply filtering as appropriate externally if you are running a service for real.

Worker Nodes (Minions)

You will need to install:

# yum install kubernetes-node docker flannel

That should be sufficient for a basic worker node.


Flannel

We need to ensure that Flannel is looking at our etcd to get its config. Edit /etc/sysconfig/flanneld and ensure the FLANNEL_ETCD_ENDPOINTS setting points to your etcd. For me it looks like:
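A sketch, where kube-master.example.com stands in for the master's address (an assumption; older packaged versions use FLANNEL_ETCD instead of FLANNEL_ETCD_ENDPOINTS):

```shell
# /etc/sysconfig/flanneld — point Flannel at etcd on the master
FLANNEL_ETCD_ENDPOINTS="http://kube-master.example.com:2379"
```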


Also note the FLANNEL_ETCD_PREFIX option which matches the directory of the config stored in etcd above.

# systemctl enable flanneld
# systemctl start flanneld

You should see a flannel.1 network appear after this:

# ip a
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether fe:a2:47:b9:c6:75 brd ff:ff:ff:ff:ff:ff
    inet scope global flannel.1
       valid_lft forever preferred_lft forever

By default Flannel communicates via the interface with the default route. If you want to change this (e.g. I have an isolated internal network between nodes) then you can set one of two options in /etc/sysconfig/flanneld:
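A sketch of the two forms; the IP and interface name here are placeholder examples:

```shell
# /etc/sysconfig/flanneld — pick ONE of these
FLANNEL_OPTIONS="--iface="   # first form: the IP directly
FLANNEL_OPTIONS="--iface=eth1"           # second form: the interface name
```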




Both of these do the same thing, but the second looks up the IP from the interface where the first uses the IP directly. If you are deploying in volume with consistent hardware configurations then the second will probably save you messing with individual configuration. After this, restart Flannel:

# systemctl restart flanneld

It's worth noting that if the IP has changed, a new configuration will be created in etcd and the IPs on the host will change. It's worth tidying up on the master by finding the old range:

# etcdctl ls /

And then delete the old configuration:

# etcdctl rm /


And now all we should need to do for a minimal configuration is point the node at the master in /etc/kubernetes/config with the KUBE_MASTER setting:
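A sketch, with kube-master.example.com standing in for your master's address:

```shell
# /etc/kubernetes/config — where the node finds the API server
KUBE_MASTER="--master=http://kube-master.example.com:8080"
```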


You will also need to do the same in /etc/kubernetes/kubelet, as well as specify a hostname (or leave this blank to use the node's hostname, assuming you made it unique and it resolves) and the address it listens on:
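A sketch of the kubelet settings, with all values being placeholder examples and the flag names from the packaged version at the time:

```shell
# /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address="                # listen on all interfaces
KUBELET_HOSTNAME="--hostname-override="            # blank = use the node's hostname
KUBELET_API_SERVER="--api-servers=http://kube-master.example.com:8080"
```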


Then enable and start the services:

# systemctl enable docker
# systemctl start docker
# systemctl enable kubelet
# systemctl start kubelet
# systemctl enable kube-proxy
# systemctl start kube-proxy

At this point the docker0 network should also be working via flannel and you can test this on worker nodes by pinging each other's address on this network:

# ping -n -c 3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.306 ms
64 bytes from icmp_seq=2 ttl=64 time=0.520 ms
64 bytes from icmp_seq=3 ttl=64 time=0.340 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.306/0.388/0.520/0.096 ms

What's listening where

In a production environment we'll want to have some control or at minimum awareness of what services are being used. The setup on the worker nodes so far gives:

  • kubelet on TCP 4194
  • flannel / VXLAN on UDP 8472
  • kube-proxy on dynamically allocated ports when NodePort configuration is used

Firewalling with iptables needs a bit more thought on worker nodes since kube-proxy is using iptables to expose services on the node. This means that having multiple things meddling with iptables might cause problems. It should be safe to apply firewalls externally, but I can't be sure at this stage about all the possible interactions with iptables based local firewalls.

For testing, just disabling the firewall is simplest:

# systemctl stop firewalld
# systemctl mask firewalld

Remember to revisit appropriate filtering if you are running a service for real.

Doing something useful (kind of)

As much fun as it might have been setting this up, it's not much use until we can actually run stuff on it.

For this I'm using the guestbook example in the Kubernetes repo, following the guidance on checking out only what we want:

# mkdir kubernetes
# cd kubernetes
# git init
# git remote add -f origin
# git config core.sparseCheckout true
# echo "examples/guestbook" >>.git/info/sparse-checkout
# git pull origin master

That should leave you with an examples/guestbook/ directory full of stuff.

Then to kick off the guestbook application:

# kubectl create -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml

Check that they're running:

# kubectl get services
frontend  <none>        80/TCP     3m
kubernetes      <none>        443/TCP    8h
redis-master   <none>        6379/TCP   3m
redis-slave  <none>        6379/TCP   3m

Now it would be nice to expose the frontend to allow us to reach the application. To do this we need to edit the frontend service and change "type: ClusterIP" to "type: NodePort":

# kubectl edit service frontend

  - port: 80
  type: NodePort

After this you should see the frontend service shows the port:

# kubectl get services
NAME           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend   <nodes>       80:32726/TCP   8m
kubernetes       <none>        443/TCP        23h
redis-master    <none>        6379/TCP       9m
redis-slave   <none>        6379/TCP       9m

You should be able to point your browser at a worker node on this port and get the application:

Kubernetes Guestbook exposed with NodePort

And finally clean up when you are done:

# kubectl delete -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml


This has been the very first stages of an example cluster and doesn't yet give us a production ready environment - that's still to come. What it does give us is a minimal cluster built from the ground up.

There are an enormous number of variations on this from different sources, and in many cases they can be misleading when applied in different scenarios. I've had to recover a lot of ground having tried to apply things that don't work with particular versions, deployment approaches or underlying platforms (eg. only work on GCP). This only covers CentOS 7 native packages on generic (KVM based) VMs, but it is likely to work on bare metal or any other VM platform without taking advantage of Kubernetes support baked into the underlying platform.

There's a whole bunch of things that I want to do in the next stages, including some HA for master nodes and getting services exposed better. At this point Ingress Controllers look most promising to control a load balancer, but they are not yet as mature as I think is needed for a production environment. I'll likely follow the usual approaches and build something based around HAProxy, Nginx or Vulcan.