
Setting Up A Home-Lab Elastic ECK

Again I used Fedora Server (33) to set up a two-node Kubernetes cluster with enough resources on two KVM machines. For storing Elasticsearch data I use an extra partition /data. This setup creates the possibility for a nice 2-zone setup. You can find out more about ECK at https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html

Kubernetes 1.19 was used.

What To Expect

THIS is absolutely only a testing platform – you must not expect a super awesome powerhouse, and because it is only a two-node cluster it is not really failsafe (you will see strange things happening if you turn off your VMs for a few days and then turn them back on later 😉)! 😀 With https://github.com/jyundt/syslog-benchmark, 2000-3000 messages/s seem doable without stressing the buffer too much. Without much tuning, executing e.g. syslog-benchmark.go -host 192.168.100.120 -port 31514 -proto=tcp -tag="syslog-bench" -msgs=1000000 resulted in RabbitMQ having nearly 400k messages ready at peak. But they were worked off pretty quickly after the benchmark showed me:

2020/12/06 22:35:32 Starting sending messages
2020/12/06 22:37:54 Total messages sent = 1000000
2020/12/06 22:37:54 Total time = 2m22.307431046s
2020/12/06 22:37:54 Throughput = 7027.039927920251 message per second

The sent messages are in a VERY simple format – so there is not much grokking going on with the filters used in this HowTo. But anyway, a pretty decent value.

Prerequisites – Preparing Kubernetes

Moby-Engine Docker Config

Additionally I added /etc/docker/daemon.json with the following settings:

{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}

To get Docker to start up you also have to edit the start parameters – edit /etc/sysconfig/docker:

OPTIONS="--live-restore \
  --default-ulimit nofile=65535:65535 \
  --init-path /usr/libexec/docker/docker-init \
  --userland-proxy-path /usr/libexec/docker/docker-proxy \
"

You should now be able to start Docker and enable it via systemctl (see the sketch below).
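
A minimal sketch of that, assuming the service unit is simply called docker (which it is for moby-engine on Fedora):

# pick up daemon.json and /etc/sysconfig/docker changes
sudo systemctl daemon-reload
# start Docker now and enable it on boot
sudo systemctl enable --now docker
# sanity check: storage and logging driver should match daemon.json
docker info | grep -iE 'storage driver|logging driver'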

Kubernetes Setup

You can find a HowTo for setting up a three-node HA Kubernetes cluster here.

As it is just for development and testing, the master can be a worker node too. Install Kubernetes with moby-engine and by adding the official Kubernetes repo. My two nodes have the hostnames vkube-001 (192.168.100.120) and vkube-002 (192.168.100.121), both added to the hosts file on each node.
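
For reference, the /etc/hosts entries on both nodes look like this:

192.168.100.120   vkube-001
192.168.100.121   vkube-002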

Add the Kubernetes repo:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

General Configuration

Disable swap, firewalld, SELinux and cgroup v2 (grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0") and set vm.max_map_count to 262144 via sysctl – and permanently via e.g. /etc/sysctl.d/01-max_map_count.conf. A command sketch follows below.
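
A sketch of those steps on Fedora – the exact file names and commands are assumptions, adjust them to your setup:

# disable swap now and on boot (Fedora 33 uses swap-on-zram by default)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
sudo dnf remove zram-generator-defaults
# disable firewalld and SELinux (testing lab only!)
sudo systemctl disable --now firewalld
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# switch back to cgroup v1 (takes effect after a reboot)
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
# raise vm.max_map_count for Elasticsearch
echo 'vm.max_map_count = 262144' | sudo tee /etc/sysctl.d/01-max_map_count.conf
sudo sysctl --system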

Install everything on both nodes, even if not all of it is needed (maybe for later): dnf install kubeadm kubectl kubelet.

On the first node: kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.100.120 and save the important output for later ;). Do not forget to enable kubelet: systemctl enable kubelet.service.
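
Besides the join command, the init output also reminds you to set up the kubeconfig for your regular user – roughly:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config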

I used Calico for the Kubernetes pod network stuff – get the manifest (wget https://docs.projectcalico.org/manifests/calico.yaml), find CALICO_IPV4POOL_CIDR, edit it so it matches the used pod-network-cidr and apply the yaml (see the snippet below).
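
The relevant part of calico.yaml then looks roughly like this (in recent manifests the variable is commented out by default, so the surrounding lines may differ slightly):

            # The default IPv4 pool to create on startup if none exists.
            # Pod IPs will be chosen from this range.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

Then apply it:

kubectl apply -f calico.yaml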

As we only have two nodes, enable your master as a worker too: kubectl taint nodes --all node-role.kubernetes.io/master- and add the worker label to the node to be pretty: kubectl label node vkube-001 node-role.kubernetes.io/worker=''. Also enable kubelet on your next worker node and join it to the Kubernetes cluster by executing (command from the init output before): kubeadm join 192.168.100.120:6443 --token dqmckw.gojoqddvqc7nzglj --discovery-token-ca-cert-hash sha256:USE-YOUR-HASH

Add the worker label to the second node as well: kubectl label node vkube-002 node-role.kubernetes.io/worker=''
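
To check that both nodes joined, are Ready and carry the expected roles:

kubectl get nodes -o wide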

Nginx-Ingress Configuration

Add Nginx as ingress controller: download the current bare-metal yaml file, find the type: NodePort line in the service definition and edit the yaml file a little:

[...]
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.10.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.41.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  # this should let us see the IP of the host who sent events
  # BUT this has one disadvantage! If there is no pod running on
  # the node the packet was sent to, it will just get dropped
  #
  # we try to mitigate this a little with the podAntiAffinity in the
  # Deployment section later (e.g. Kibana)
  #
  # if all pods are running on one node, the traffic is load-balanced –
  # at least I observed that
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      nodePort: 30080
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
[...]

Then find the Deployment definition and change Deployment to DaemonSet so that it looks similar to:

[...]
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
#kind: Deployment
kind: DaemonSet
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.10.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.41.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
      annotations:
        co.elastic.logs/module: nginx
        co.elastic.logs/fileset.stdout: ingress_controller
        co.elastic.logs/fileset.stderr: error
[...]

Also find the "configMap" ingress-nginx-controller definition and edit it to look like:

# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.10.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.41.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # default format: '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id'
  # we need the $proxy_add_x_forwarded_for maybe for later
  log-format-upstream: '$proxy_add_x_forwarded_for - - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id'

Apply the nginx-ingress yaml.
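
Assuming you saved the edited bare-metal manifest as deploy.yaml (the upstream file name), that boils down to:

kubectl apply -f deploy.yaml
# watch the controller DaemonSet pods come up on both nodes
kubectl get pods -n ingress-nginx -w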

Now Kubernetes should be prepared and ready.

Proceed to PART-2
