In this HowTo I will create a 3-node Kubernetes cluster where every node holds all Kubernetes roles (control plane, worker, and etcd). As the Linux distribution I used Fedora 38 Server Edition.

I created three VMs with about 8 GB of memory and an extra 50+ GB /data partition for later use.

In this HowTo I call my VMs

  • vkube-1 (
  • vkube-2 (
  • vkube-3 (

A fourth IP ( will be needed for the Kubernetes API endpoint that will be used e.g. by kubectl.

Make sure vkube-1 can SSH to the other nodes (including itself!); this VM will be the source for setting up the cluster.
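A key-based setup on vkube-1 could look roughly like this (a sketch, assuming the root user and the ~/.ssh/id_rsa path that the cluster config will reference later):

```shell
# Generate a key pair on vkube-1 (skip if one already exists)
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa

# Copy the public key to every node -- including vkube-1 itself
for node in vkube-1 vkube-2 vkube-3; do
  ssh-copy-id root@${node}
done
```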

KubeKey will add all needed DNS records to /etc/hosts.


The OS can be installed with a minimal setup; initially you only need:

  • SSH
  • conntrack
  • ebtables
  • ipset
  • ipvsadm

Disable the firewall, SELinux, and swap.
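On Fedora this could be done roughly as follows (a sketch; I put SELinux into permissive mode here, adapt the sed to SELINUX=disabled if you prefer a full disable):

```shell
# Stop and disable the firewall
sudo systemctl disable firewalld --now

# Put SELinux into permissive mode now and after reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Turn off swap now and comment out any swap entries in /etc/fstab
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```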

Additionally, we will need the following packages, as we want to create an HA cluster with multiple master nodes.

  • HAproxy
  • Keepalived

And we will install containerd ourselves.

Install containerd

I installed the latest available containerd via the docker-ce-stable repo.

Also, some additional configuration is needed, and two kernel modules must be loaded:

# Load the modules
modprobe overlay
modprobe br_netfilter

# Make sure they load after a reboot
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
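The additional configuration mentioned above is the usual set of sysctl parameters Kubernetes networking relies on; a sketch (the file name is my choice):

```shell
# Persist the networking-related kernel parameters Kubernetes needs
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply them without a reboot
sudo sysctl --system
```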

Then prepare a config file for containerd:

mv /etc/containerd/config.toml /etc/containerd/config.toml.orig
containerd config default > /etc/containerd/config.toml

Configure config.toml to use the systemd cgroup driver:

    SystemdCgroup = true
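In the default config generated above, the key sits under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options], so a sed one-liner can flip it:

```shell
# Flip SystemdCgroup from the default "false" to "true"
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
```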

After this start and enable the daemon:

systemctl enable containerd --now

Install HAproxy and Keepalived


Create a /etc/keepalived/ script (do not forget to make it executable):

#!/bin/sh

APISERVER_VIP=            # the VIP address (the fourth IP from above)
APISERVER_DEST_PORT=8443  # pointing to port of haproxy's "frontend kube-apiserver"

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi

Set up, for example, the following /etc/keepalived/keepalived.conf:

global_defs {
  notification_email {
  }
  router_id KUBE_API
  script_user nobody
}

vrrp_script check_apiserver {
  script "/etc/keepalived/"
  interval 3
  #weight -2  # commented so it goes into FAULT state if backend is offline
  fall 10
  rise 2
}

vrrp_instance haproxy-vip {
  state MASTER  # set to BACKUP on other nodes
  priority 200  # highest value on MASTER (e.g. set to 150 on second and 100 on third node)
  interface enp2s0  # network-card interface name on node
  virtual_router_id 60
  advert_int 1

  authentication {
    auth_type PASS
    auth_pass kubeapi1  # only a maximum of 8 characters is used afaik
  }

  unicast_src_ip  # The IP address of this machine

  unicast_peer {  # The IP addresses of peer machines
  }

  virtual_ipaddress {  # The VIP address
  }

  track_script {
    check_apiserver
  }
}
Do not forget to adapt the file on each node!


The /etc/haproxy/haproxy.cfg file is also kept pretty simple:

global
  log         /dev/log local0 warning
  chroot      /var/lib/haproxy
  pidfile     /var/run/
  maxconn     4000
  user        haproxy
  group       haproxy

  stats socket /var/lib/haproxy/stats

defaults
  log global
  option  httplog
  option  dontlognull
  timeout connect 5000
  timeout client 50000
  timeout server 50000

frontend kube-apiserver
  bind *:8443
  mode tcp
  option tcplog
  default_backend kube-apiserver

backend kube-apiserver
  option httpchk GET /healthz
  http-check expect status 200
  mode tcp
  option ssl-hello-chk
  balance roundrobin
  server k8s-master-1 check
  server k8s-master-2 check
  server k8s-master-3 check

You can now enable and start HAproxy and Keepalived:

systemctl enable haproxy --now
systemctl enable keepalived --now

Install KubeKey

More details at

They supply a script that will download a tar.gz file and unpack it in the folder where you execute the command. This is done simply by executing:

mkdir kk
cd kk
curl -sfL | sh -

BUT – at the time of my installation, the installed version was v3.0.13. A lot of Kubernetes versions were not supported, and upgrading from, for example, v1.26.5 to v1.27.2 did not work, even though it was listed in the supported versions. So I suggest building kk on your own! How to do it is described here. It seems you only need to install make and golang.

Cluster Setup

First we check which versions are supported by KubeKey:

./kk version --show-supported-k8s

Then we create a cluster config and edit it to our needs. I chose v1.26.5 so I can later try an upgrade to a newer version and check out how upgrades work.

./kk create config --with-kubernetes v1.26.5 --filename vkube-cluster.yaml

We will create a cluster where each node holds all roles.

The file in the end may look like this:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: vkube-cluster
spec:
  hosts:
  - {name: vkube-1, address:, internalAddress:, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: vkube-2, address:, internalAddress:, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: vkube-3, address:, internalAddress:, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - vkube-1
    - vkube-2
    - vkube-3
    control-plane:
    - vkube-1
    - vkube-2
    - vkube-3
    worker:
    - vkube-1
    - vkube-2
    - vkube-3
  controlPlaneEndpoint:
    domain: vkube-cluster
    address: ""
    port: 8443  # pointing to port of haproxy's "frontend kube-apiserver"
  kubernetes:
    version: v1.26.5
    clusterName: vkube-cluster
    autoRenewCerts: true
    containerManager: containerd
    proxyMode: iptables  # I prefer this mode instead of "ipvs"
  etcd:
    type: kubeadm  # Also here I prefer a different mode instead of the default "kubekey", which installs an extra etcd cluster outside of the Kubernetes cluster
  network:
    plugin: calico
    ## multus support
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

Now we initialize the cluster via the command:

./kk create cluster -f vkube-cluster.yaml

The setup will first check all prerequisites and then download everything that is needed. It will create a file structure in the same folder where kk is placed, looking like:

├── cni
│   ├── v1.2.0
│   │   └── amd64
│   │       └── cni-plugins-linux-amd64-v1.2.0.tgz
│   └── v3.26.1
│       └── amd64
│           └── calicoctl
├── containerd
│   └── 1.6.4
│       └── amd64
│           └── containerd-1.6.4-linux-amd64.tar.gz
├── crictl
│   └── v1.24.0
│       └── amd64
│           └── crictl-v1.24.0-linux-amd64.tar.gz
├── etcd
│   └── v3.4.13
│       └── amd64
│           └── etcd-v3.4.13-linux-amd64.tar.gz
├── helm
│   └── v3.9.0
│       └── amd64
│           └── helm
├── kube
│   └── v1.27.2
│       └── amd64
│           ├── kubeadm
│           ├── kubectl
│           └── kubelet
├── logs
│   ├── kubekey.log -> kubekey.log.20231104
│   └── kubekey.log.20231104
├── runc
│   └── v1.1.1
│       └── amd64
│           └── runc.amd64
├── vkube-1
│   └──
├── vkube-2
│   └──
└── vkube-3

If something goes wrong and you want to restart the setup, you'll have to execute ./kk delete cluster -f vkube-cluster.yaml first to clean up any mess.

Cluster Upgrade

As long as kk supports it, you can simply upgrade to a later version, for example with:

./kk upgrade --with-kubernetes "v1.27.7" -f vkube-cluster.yaml

As mentioned earlier, you might need to build kk on your own to get this working 😐


KubeKey is quite nice for setting up clusters, with the exception that the supplied release packages sometimes seem buggy (see the workaround described above of building kk on my own). What I also quite dislike is that all Kubernetes images, like kube-apiserver, kube-controller-manager, etc., are pushed 1:1 by KubeSphere to Docker Hub/ instead of just using the So this could be a problem for larger environments if you do not have a registry mirror/cache.

TODO: Have a look at KubeSphere itself; the UI also looks promising and better than Rancher's.

Last edited: November 9, 2023


