In this HowTo I will create a 3-node Kubernetes cluster where all nodes hold all Kubernetes roles (control plane, worker and etcd). As the Linux distribution I used Fedora 38 Server Edition.
I created three VMs with about 8 GB memory and an extra 50+ GB /data partition for later usage.
I call my VMs in this HowTo:
- vkube-1 (192.168.122.51)
- vkube-2 (192.168.122.52)
- vkube-3 (192.168.122.53)
A fourth IP (192.168.122.54) will be needed for the Kubernetes API endpoint that will be used e.g. by kubectl.
Make sure vkube-1 can SSH to the other nodes (including itself!), as this VM will be the source for setting up the cluster.
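If that is not yet the case, something along these lines does the trick on vkube-1 (a sketch; the cluster config further below assumes the root user and ~/.ssh/id_rsa):
# create a key pair (if none exists yet) and distribute it to all nodes, including vkube-1 itself
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
for ip in 192.168.122.51 192.168.122.52 192.168.122.53; do
    ssh-copy-id root@${ip}
done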
KubeKey will add all needed hostname entries to /etc/hosts.
Prerequisites
The OS can be installed normally with a minimal setup; initially you only need:
- SSH
- conntrack
- ebtables
- ipset
- ipvsadm
Disable the firewall, SELinux and swap.
Additionally, we will need the following packages, as we want to create an HA cluster with multiple master nodes:
- HAproxy
- Keepalived
And we will install containerd ourselves.
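On Fedora 38 the whole preparation might look roughly like this (a sketch; the exact package names and the zram-based swap handling are assumptions for a default Fedora Server install):
# packages needed for Kubernetes plus the HA components
dnf install -y conntrack-tools ebtables ipset ipvsadm haproxy keepalived

# disable firewalld and SELinux enforcement
systemctl disable firewalld --now
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# disable swap (Fedora Server uses zram-based swap by default)
swapoff -a
dnf remove -y zram-generator-defaults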
Install containerd
I installed the latest available containerd via the docker-ce-stable repo.
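That boils down to something like the following (the repository URL and the containerd.io package name are the standard Docker CE ones for Fedora):
# add the Docker CE repository and install containerd from it
dnf install -y dnf-plugins-core
dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
dnf install -y containerd.io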
Some additional configuration is needed and two kernel modules must be loaded:
# Load the modules
modprobe overlay
modprobe br_netfilter
# Make sure they load after a reboot
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
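The matching sysctl settings for bridged traffic and IP forwarding are part of that configuration as well (a standard Kubernetes prerequisite; KubeKey may also set them itself during node init):
# enable bridged traffic filtering and IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system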
Then prepare a config file for containerd:
mv /etc/containerd/config.toml /etc/containerd/config.toml.orig
containerd config default > /etc/containerd/config.toml
Configure config.toml to use the systemd cgroup driver:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
After this start and enable the daemon:
systemctl enable containerd --now
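A quick sanity check that the daemon is actually running:
systemctl is-active containerd
ctr version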
Install HAproxy and Keepalived
Keepalived
Create a /etc/keepalived/check_apiserver.sh script (do not forget to make it executable):
#!/bin/sh

# set correct VIP!
APISERVER_VIP=192.168.122.54
APISERVER_VIP_DEST_PORT=8443
APISERVER_DEST_PORT=6443

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

# is the local kubernetes api-server running?
curl --silent --max-time 1 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"

# is the local haproxy running?
pgrep -x haproxy > /dev/null || errorExit "haproxy not running"

# if this node currently holds the VIP: is the backend reachable through it?
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 1 --insecure https://${APISERVER_VIP}:${APISERVER_VIP_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_VIP_DEST_PORT}/"
fi
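Assuming you write the script on vkube-1 first, making it executable and distributing it could look like this:
chmod +x /etc/keepalived/check_apiserver.sh

# copy it to the other two nodes (-p preserves the executable bit)
for ip in 192.168.122.52 192.168.122.53; do
    scp -p /etc/keepalived/check_apiserver.sh root@${ip}:/etc/keepalived/
done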
Set up, for example, the following /etc/keepalived/keepalived.conf:
global_defs {
    notification_email {
    }
    router_id KUBE_API
    script_user nobody
    enable_script_security
}

########### VRRP CHECK SCRIPT
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    #weight -2 # commented so it goes into FAULT state if the backend is offline
    fall 10
    rise 2
}

vrrp_instance haproxy-vip {
    state MASTER       # set to BACKUP on the other nodes
    priority 200       # highest value on the MASTER (e.g. set to 150 on the second and 100 on the third node)
    interface enp2s0   # network interface name on this node
    virtual_router_id 60
    advert_int 1

    authentication {
        auth_type PASS
        auth_pass kubeapi1 # only a maximum of 8 characters is used, afaik
    }

    unicast_src_ip 192.168.122.51 # the IP address of this machine
    unicast_peer {
        192.168.122.52 # the IP addresses of the peer machines
        192.168.122.53
    }

    virtual_ipaddress {
        192.168.122.54/24 # the VIP address
    }

    track_script {
        check_apiserver
    }
}
Do not forget to adapt the file on each node!
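For example, on vkube-2 only these lines would differ (following the comments above):
    state BACKUP
    priority 150
    unicast_src_ip 192.168.122.52
    unicast_peer {
        192.168.122.51
        192.168.122.53
    }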
HAproxy
The /etc/haproxy/haproxy.cfg file is also kept pretty simple:
global
    log /dev/log local0 warning
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    log global
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend kube-apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server k8s-master-1 192.168.122.51:6443 check
    server k8s-master-2 192.168.122.52:6443 check
    server k8s-master-3 192.168.122.53:6443 check
You can now enable and start HAproxy and Keepalived:
systemctl enable haproxy --now
systemctl enable keepalived --now
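To verify, you can syntax-check the HAproxy configuration and watch what Keepalived is doing. Note that because weight is commented out in the check script definition, all nodes will sit in FAULT state until the Kubernetes API server actually answers, so the VIP only shows up once the cluster is running:
# validate the haproxy configuration
haproxy -c -f /etc/haproxy/haproxy.cfg

# follow keepalived's state transitions
journalctl -u keepalived -f

# once the cluster is up, the VIP should appear on the current MASTER
ip -4 addr show enp2s0 | grep 192.168.122.54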
Install KubeKey
More details at https://github.com/kubesphere/kubekey
They supply a script that will download a tar.gz file and unpack it in the folder where you execute the command. This is done simply by executing:
mkdir kk
cd kk
curl -sfL https://get-kk.kubesphere.io | sh -
BUT: at the time of my installation, the installed version was v3.0.13. A lot of Kubernetes versions were not supported, and upgrading from, for example, v1.26.5 to v1.27.2 did not work, even though it was listed in the supported versions. So I suggest building kk on your own! How to do that is described here. You only need to install make and golang, it seems.
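Roughly, building it looks like this (a sketch based on the KubeKey repository; check its README for the authoritative steps):
dnf install -y git make golang
git clone https://github.com/kubesphere/kubekey.git
cd kubekey
make kk
# the resulting binary should end up under ./output/
./output/kk version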
Cluster Setup
First we check which versions are supported by KubeKey:
./kk version --show-supported-k8s
Then we create a cluster config and edit it for our needs. I've chosen v1.26.5 so that I can later try an upgrade to a newer version and check out how upgrades work.
./kk create config --with-kubernetes v1.26.5 --filename vkube-cluster.yaml
We will create a cluster where each node holds all roles.
The file in the end may look like this:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: vkube-cluster
spec:
  hosts:
  - {name: vkube-1, address: 192.168.122.51, internalAddress: 192.168.122.51, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: vkube-2, address: 192.168.122.52, internalAddress: 192.168.122.52, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: vkube-3, address: 192.168.122.53, internalAddress: 192.168.122.53, user: root, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - vkube-1
    - vkube-2
    - vkube-3
    control-plane:
    - vkube-1
    - vkube-2
    - vkube-3
    worker:
    - vkube-1
    - vkube-2
    - vkube-3
  controlPlaneEndpoint:
    domain: vkube-cluster
    address: "192.168.122.54"
    port: 8443 # pointing to port of haproxy's "frontend kube-apiserver"
  kubernetes:
    version: v1.26.5
    clusterName: vkube-cluster
    autoRenewCerts: true
    containerManager: containerd
    proxyMode: iptables # I prefer this mode instead of "ipvs"
  etcd:
    type: kubeadm # Also here I prefer a different mode instead of the default "kubekey", which installs an extra etcd-cluster outside of the Kubernetes Cluster
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
Now we initialize the cluster via the command:
./kk create cluster -f vkube-cluster.yaml
The setup will first check all prerequisites and then download all the needed stuff. It will create a file structure in the same folder where kk is placed, looking like:
kubekey
├── cni
│   ├── v1.2.0
│   │   └── amd64
│   │       └── cni-plugins-linux-amd64-v1.2.0.tgz
│   └── v3.26.1
│       └── amd64
│           └── calicoctl
├── containerd
│   └── 1.6.4
│       └── amd64
│           └── containerd-1.6.4-linux-amd64.tar.gz
├── crictl
│   └── v1.24.0
│       └── amd64
│           └── crictl-v1.24.0-linux-amd64.tar.gz
├── etcd
│   └── v3.4.13
│       └── amd64
│           └── etcd-v3.4.13-linux-amd64.tar.gz
├── helm
│   └── v3.9.0
│       └── amd64
│           └── helm
├── kube
│   └── v1.27.2
│       └── amd64
│           ├── kubeadm
│           ├── kubectl
│           └── kubelet
├── logs
│   ├── kubekey.log -> kubekey.log.20231104
│   └── kubekey.log.20231104
├── runc
│   └── v1.1.1
│       └── amd64
│           └── runc.amd64
├── vkube-1
│   └── initOS.sh
├── vkube-2
│   └── initOS.sh
└── vkube-3
    └── initOS.sh
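Once the setup finishes successfully, the admin kubeconfig should be in place (KubeKey normally copies it to ~/.kube/config on the control-plane nodes), so a first check is simply:
kubectl get nodes -o wide
kubectl get pods -A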
If something goes wrong and you want to restart the setup, you'll have to execute
./kk delete cluster -f vkube-cluster.yaml
first to clean up any mess.
Cluster Upgrade
As long as kk supports it, you can simply upgrade to a later version, for example with:
./kk upgrade --with-kubernetes "v1.27.7" -f vkube-cluster.yaml
As mentioned earlier, you might need to build kk on your own to get this working 😐
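Afterwards a quick look at the nodes confirms the new version:
kubectl get nodes   # the VERSION column should now show v1.27.7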
Conclusion
KubeKey is quite nice for setting up clusters, with the exception that the supplied release packages sometimes seem buggy (see the workaround described above: building kk on my own). What I also quite dislike is that all Kubernetes images, like kube-apiserver, kube-controller-manager, etc., are pushed 1:1 by KubeSphere to Docker Hub (docker.io) instead of just using registry.k8s.io. This could be a problem for larger environments if you do not have a registry mirror/cache.
TODO: Have a look at KubeSphere itself; the UI also looks promising and better than Rancher.