Kubernetes to learn Part 3
So far I've built a basic Kubernetes cluster with one master and two worker (minion) nodes, then turned etcd into an HA cluster. This time I'm going to look at building on that to give us fully HA masters by making the other master services redundant. For this we are running 3 masters so that we can tolerate a single etcd failure.

HA API Server

The basic idea is that we need multiple API Servers which can all happily coexist. There are different approaches to presenting these to the world, including load balancing, a VIP, and the simple option of using the built in support in the clients for multiple API servers, where you configure them all and the clients use whichever are available. That last option should be sufficient for smaller clusters, but on a larger scale you might need something that balances the load across masters evenly, and not all of the approaches work properly, so I'm going to go for a full load balanced approach with a VIP.

The process is pretty much the same as building the initial master in Part 1, plus the tweaks for our etcd cluster from Part 2. We start by installing the master components:

    # yum install kubernetes-master

Then in /etc/kubernetes/apiserver we need to make it listen where we can get to it and tell it about all the etcd servers (a fuller sketch of this file follows at the end of this section):

    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

We also need to take the service account key from the existing master, put it in /etc/kubernetes/ (retaining permissions) and configure it in /etc/kubernetes/apiserver like before. We also need to tell API Server how many API Servers exist in total:

    KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key --apiserver-count=3"

If you have a mismatch in --apiserver-count you will get things logged like "Resetting endpoints for master service". Then enable and start API Server:

    # systemctl enable kube-apiserver
    # systemctl start kube-apiserver

At this point we have 3 nodes running API Server, but we also need to be able to steer traffic between them.

Load Balancer (HA Proxy)

On the basis of sticking to standard packages in CentOS, the simplest load balancing solution to use is HA Proxy, which we will need on all the nodes doing the load balancing. I'm using all my master nodes:

    # yum install haproxy

HA Proxy uses the concept of a frontend (the listener) and a backend (the client side connecting to the servers), both of which we need to configure. In /etc/haproxy/haproxy.cfg the example frontend and backend sections can be commented out and the ones we need added (again, a fuller sketch follows at the end of this section):

    frontend apiserver

And the corresponding backend config:

    backend apiserver

We also need to set selinux to allow haproxy to connect to all the different destinations, which could be tightened up later:

    # setsebool -P haproxy_connect_any 1

Then enable and start haproxy:

    # systemctl enable haproxy
    # systemctl start haproxy

Once this is done on every node you should have the load balancing part working on port 18080, but any one of these load balancers can still fail. To deal with that we need to pass around a VIP, for which I'm going to use 10.146.47.90. To do the VIP trickery we'll use keepalived:

    # yum install keepalived

The shipped config for this is messy and nearly all of it can be removed to get down to just enough to monitor HA Proxy. Make /etc/keepalived/keepalived.conf look like (a fuller sketch follows below):

    global_defs {

Then enable and start it with:

    # systemctl enable keepalived
    # systemctl start keepalived

You may quickly have some selinux troubles that can be solved with:

    # ausearch -c 'pidof' --raw | audit2allow -M my-pidof
    # semodule -i my-pidof.pp

For me this has not been completely consistent, so I can't be sure exactly what is going on with selinux at this time. One of your load balancer nodes should get the VIP, at which point we should have a working load balanced API Server endpoint.
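Pulling the API Server configuration together, here is a minimal sketch of what /etc/kubernetes/apiserver could end up looking like on each master. The etcd member addresses (10.146.47.11-13) and the etcd client port 2379 are placeholders for illustration; use the values from the cluster built in Part 2.

    # /etc/kubernetes/apiserver - minimal sketch, placeholder addresses
    # Listen on all interfaces so the load balancer can reach the API Server
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    # All etcd cluster members from Part 2 (placeholder addresses)
    KUBE_ETCD_SERVERS="--etcd-servers=http://10.146.47.11:2379,http://10.146.47.12:2379,http://10.146.47.13:2379"
    # Shared service account key and the total number of API Servers
    KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key --apiserver-count=3"

The key itself can be copied from the original master with something like scp -p master1:/etc/kubernetes/serviceaccount.key /etc/kubernetes/ (master1 being a placeholder hostname) so that the permissions are retained.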
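The frontend and backend were only named above, so here is a hedged sketch of what those sections of /etc/haproxy/haproxy.cfg might look like. The listen port 18080 matches what the clients use later; the master addresses and the API Server insecure port 8080 are assumptions based on the earlier parts.

    # /etc/haproxy/haproxy.cfg additions - sketch only, placeholder addresses
    frontend apiserver
        bind *:18080
        mode tcp
        option tcplog
        default_backend apiserver

    backend apiserver
        mode tcp
        balance roundrobin
        # one entry per master node running kube-apiserver
        server master1 10.146.47.11:8080 check
        server master2 10.146.47.12:8080 check
        server master3 10.146.47.13:8080 check

In TCP mode HA Proxy simply passes connections through, and the check on each server line is a basic connect check, so a dead kube-apiserver gets taken out of rotation without HA Proxy needing to understand the API itself.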
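Similarly, the keepalived.conf above was cut short, so here is a minimal sketch that does just enough to hold the VIP and track HA Proxy. The interface name (eth0), router ID, priority and authentication values are all placeholders; the pidof based check matches the selinux denial mentioned above.

    ! /etc/keepalived/keepalived.conf - minimal sketch, placeholder values
    global_defs {
        router_id lb1
    }

    ! drop relative priority if haproxy stops running so another node takes the VIP
    vrrp_script check_haproxy {
        script "pidof haproxy"
        interval 2
        weight 2
    }

    vrrp_instance apiserver_vip {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass apivip
        }
        virtual_ipaddress {
            10.146.47.90
        }
        track_script {
            check_haproxy
        }
    }

Giving each node a slightly different base priority (say 100, 99, 98) keeps the election deterministic; the weight on the check means a node whose haproxy has died effectively lowers itself and the VIP moves elsewhere.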
Using HA API Server

The two main clients on the worker nodes (minions) that need to know about the new API Servers are kubelet and proxy. For kubelet we edit /etc/kubernetes/kubelet and point it at the HA API Servers via the load balancer VIP:

    KUBELET_API_SERVER="--api-servers=http://10.146.47.90:18080"

Then we can restart kubelet:

    # systemctl restart kubelet

We also need to set the other components, including proxy, to use this in /etc/kubernetes/config with:

    KUBE_MASTER="--master=http://10.146.47.90:18080"

Then restart proxy:

    # systemctl restart kube-proxy

At this point the minions should be using the HA API Server via the load balancer. You should also set this config in /etc/kubernetes/config on the master nodes, but don't restart any components just yet, since these are not all enabled and need a little extra care.

Remaining Master Components

Not all master components should be active together, and scheduler and controller-manager are the key ones here. These need to elect a leader to ensure that only one of them is actively making changes. For the scheduler we need to add an argument in /etc/kubernetes/scheduler on each master node:

    KUBE_SCHEDULER_ARGS="--leader-elect"

On the existing master node restart it:

    # systemctl restart kube-scheduler

On the new nodes enable and start it:

    # systemctl enable kube-scheduler.service
    # systemctl start kube-scheduler.service

Similarly for controller-manager, add the argument in /etc/kubernetes/controller-manager:

    KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/serviceaccount.key --leader-elect"

Again, on the existing master node restart it:

    # systemctl restart kube-controller-manager

On the new nodes enable and start it:

    # systemctl enable kube-controller-manager
    # systemctl start kube-controller-manager

Both of these seem to generate a lot of logging on the nodes that are not currently the leader, which is something you might want to quiet down in the longer term. At this point you should have an HA Kubernetes master. This is worth testing - shut down one master node at a time and make sure that everything continues working as expected.
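If you want to confirm that leader election is actually doing its job, Kubernetes releases of roughly this vintage record the current scheduler and controller-manager leader as an annotation on an Endpoints object in the kube-system namespace. The exact annotation name and mechanism are an assumption that may vary between versions, so treat this as a rough check, run from a master node (or pointed at the VIP with -s as below):

    # Sketch: inspect the leader election records (annotation name may vary by version)
    kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep -A2 leader
    kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep -A2 leader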
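For that final testing step, a rough sequence along these lines (node names are placeholders, and it assumes kubectl is available somewhere that can reach the VIP) exercises both the VIP handover and the API Server redundancy:

    # Sketch of a failover test - placeholder names and addresses
    # 1. confirm the cluster answers on the load balanced VIP
    kubectl -s http://10.146.47.90:18080 get nodes

    # 2. find which master currently holds the VIP (run on each master)
    ip addr show | grep 10.146.47.90

    # 3. shut down the master holding the VIP, then check it moved
    #    and that the cluster still answers
    kubectl -s http://10.146.47.90:18080 get nodes
    kubectl -s http://10.146.47.90:18080 get componentstatuses

Repeating this for each master in turn, and confirming a restarted node rejoins the API Server pool and the leader elections without intervention, gives reasonable confidence in the setup.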