Glen Pitt-Pladdy :: Blog

Kubernetes to learn Part 3

So far I've built a basic Kubernetes cluster with one master and two worker (minion) nodes, then turned etcd into an HA cluster. This time I'm going to build on that to get fully HA masters by making the remaining services redundant.

For this we are running 3 masters so that we can tolerate a single etcd failure.

HA API Server

The basic idea is that we need multiple API servers which can all happily coexist. There are different approaches to presenting these to the world, including load balancing behind a VIP, and the simple approach of using the built-in support for multiple API servers in the clients: you configure all the API servers and the clients will use whichever is available. The client-side approach should be sufficient for smaller clusters, but it doesn't spread load evenly across masters, so I'm going to go for a full load balanced approach with a VIP.

The process is pretty much the same as we discussed in Part 1 for building the initial master, plus tweaks for our etcd cluster from Part 2.

We start by installing the master components:

# yum install kubernetes-master

Then in /etc/kubernetes/apiserver we need to make it listen on an address we can reach and tell it about all the etcd servers:
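A sketch assuming the stock CentOS packaging of /etc/kubernetes/apiserver - the kube* hostnames are example etcd cluster members, and the exact flag name may differ between package versions:

```shell
# /etc/kubernetes/apiserver (fragment)
# Listen on all interfaces so the load balancer can reach us
# (older packages use --address instead of --insecure-bind-address)
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# All three etcd cluster members - kube0/1/2 are example hostnames
KUBE_ETCD_SERVERS="--etcd-servers=http://kube0:2379,http://kube1:2379,http://kube2:2379"
```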


We also need to copy the service account key (retaining permissions) from the existing master into /etc/kubernetes/ and then configure it in /etc/kubernetes/apiserver like before. We also need to tell API Server how many API Servers exist in total:

KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key --apiserver-count=3"

If you have a mismatch in --apiserver-count you will get messages logged like "Resetting endpoints for master service".

Then enable and start API Server:

# systemctl enable kube-apiserver
# systemctl start kube-apiserver

At this point we have 3 nodes running API Server, but we also need to be able to steer traffic between them.

Load Balancer (HA Proxy)

Sticking to standard CentOS packages, the simplest load balancing solution is HA Proxy, which we will need on all the nodes doing load balancing. I'm using all my master nodes:

# yum install haproxy

HA Proxy uses the concept of a frontend (listener) and a backend (connections out to the servers), both of which we need to configure. In /etc/haproxy/haproxy.cfg the example frontend and backend sections can be commented out and the ones we need added:

frontend  apiserver
        bind *:18080
        default_backend apiserver

And the corresponding backend config, pointing at the API Server on each master node - the kube* hostnames here are examples (substitute your own masters), and port 8080 assumes the default insecure API Server port:

backend apiserver
        balance roundrobin
        server api0 kube0:8080 check
        server api1 kube1:8080 check
        server api2 kube2:8080 check

We also need to set an SELinux boolean to allow haproxy to connect to all the different destinations, which could be tightened up later:

# setsebool -P haproxy_connect_any 1

Then enable and start haproxy:

# systemctl enable haproxy
# systemctl start haproxy

Once this is done for every node you should have the load balancing part working on port 18080, but any one of these load balancers can still fail. To deal with that we need a VIP (Virtual IP) that can move between the load balancer nodes.

To do the VIP trickery we'll use keepalived:

# yum install keepalived

The shipped config for this is messy and it can nearly all be removed to get down to just enough to monitor HA Proxy. Make /etc/keepalived/keepalived.conf look like the below, where the VIP 192.168.0.250 is an example - use a free address on your network:

global_defs {
   router_id LVS_KUBE
}

vrrp_script chk_haproxy {
    script "pidof haproxy"
    interval 2
    timeout 2
    fall 3
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass somecomplexsecret
    }
    virtual_ipaddress {
        192.168.0.250
    }
    track_script {
        chk_haproxy
    }
}

Then enable and start it with:

# systemctl enable keepalived
# systemctl start keepalived

You may quickly have some selinux troubles that can be solved with:

# ausearch -c 'pidof' --raw | audit2allow -M my-pidof
# semodule -i my-pidof.pp

For me this has not been completely consistent, so I can't be sure exactly what is going on with SELinux at this time.

One of your load balancer nodes should get the VIP, at which point we should have a working HA endpoint for the API Servers.

Using HA API Server

The two main clients on worker nodes (minions) that need knowledge of the new API Servers are kubelet and proxy.

For kubelet we edit /etc/kubernetes/kubelet and set the HA API Servers via the load balancer vip:
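A sketch assuming the stock CentOS packaging, where the VIP 192.168.0.250 is an example address (use your keepalived VIP) and 18080 is the HA Proxy frontend port:

```shell
# /etc/kubernetes/kubelet (fragment)
# Point kubelet at the load balanced API Servers via the VIP (example address)
KUBELET_API_SERVER="--api-servers=http://192.168.0.250:18080"
```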


Then we can restart kubelet:

# systemctl restart kubelet

Then we need to also set the other components, including proxy, to use this in /etc/kubernetes/config with:
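A sketch assuming the stock packaging of /etc/kubernetes/config, again with an example VIP address:

```shell
# /etc/kubernetes/config (fragment)
# All components read this; point them at the API Servers via the VIP (example address)
KUBE_MASTER="--master=http://192.168.0.250:18080"
```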


Then restart proxy:

# systemctl restart kube-proxy

At this point minions should be using the HA API Server via the load balancer.

You should also set the config in /etc/kubernetes/config on master nodes, but don't restart any components just yet, since these are not all enabled and need a little extra care.

Remaining Master Components

Not all master components should be active together, and scheduler and controller-manager are the key ones here. These need to elect a leader to ensure that only one is actively making changes.

For the scheduler we need to add an argument in /etc/kubernetes/scheduler on each master node:
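A sketch, assuming the packaging exposes KUBE_SCHEDULER_ARGS and mirroring the --leader-elect flag used for controller-manager below:

```shell
# /etc/kubernetes/scheduler (fragment)
# Enable leader election so only one scheduler is active at a time
KUBE_SCHEDULER_ARGS="--leader-elect"
```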


On the existing master node restart it:

# systemctl restart kube-scheduler

On new nodes enable and start it:

# systemctl enable kube-scheduler
# systemctl start kube-scheduler

Similarly for controller-manager add the argument in /etc/kubernetes/controller-manager:

KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/serviceaccount.key --leader-elect"

Again, on the existing master node restart it:

# systemctl restart kube-controller-manager

On new nodes enable and start it:

# systemctl enable kube-controller-manager
# systemctl start kube-controller-manager

Both of these seem to generate a lot of logging on the nodes that are not the current leader, and that is something you might want to quieten down in the longer term.

At this point you should have an HA Kubernetes master.

This is worth testing - shut down one master node at a time and make sure that everything continues working as expected.