PART-2 - ECK setup
ECK is only available with a Basic or an Enterprise/Trial license!
Deploy ECK
Just run the following command to install the custom resource definitions and the operator - or download the YAML and apply it manually:
kubectl apply -f https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml
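If you want to verify that the operator came up cleanly, you can check its pod and follow its logs (assuming the default elastic-system namespace created by the all-in-one manifest):
# the all-in-one manifest installs the operator into the elastic-system namespace
kubectl -n elastic-system get pods
# follow the operator logs while you deploy resources
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator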
Setting Up The Elasticsearch-Cluster
Create a folder where you save your YAML files. In this example we assume that vkube-001 is in zone-1 and vkube-002 is in zone-2. Elasticsearch data will be stored on persistent-volumes that are bound to specific pods on those nodes. The setup imitates a hot-warm-cold architecture with two Elasticsearch nodes per tier - for the hot nodes one will run in zone-1 and the other in zone-2, and the same applies to the warm and cold nodes.
The cluster will run in the namespace elk and will also be named elk.
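The manifests below all reference the elk namespace; if it does not exist yet in your cluster, create it first (a standard kubectl step, not shown in the original files):
kubectl create namespace elk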
Why So Complicated?
Yes, this setup may be a little over the top. But if you have, for example, two Kubernetes nodes in each zone and one dies or goes into maintenance, the pod would probably be rescheduled to another available Kubernetes node (in a default setup). That would create another volume and copy all shards, which can take a very long time in large environments. With this setup we simply wait until the node is back online, and Elasticsearch will reuse the existing data and recover from there.
Prepare Static Local Persistent-Volumes
Create a subfolder, e.g. called persistent-volumes - this is where we place all YAML files describing the volumes. Create the file holding the storage-class we will use, e.g. storage-class.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Create subfolders hot, warm and cold, each with subfolders zone-1 and zone-2 - this will make it easier to keep track of the files.
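A minimal way to create the whole tree in one go (assuming a bash shell and that you are inside the persistent-volumes folder):
mkdir -p {hot,warm,cold}/{zone-1,zone-2}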
Hot-Volumes Configuration
Create the hot persistent-volumes - in the hot/zone-1 folder create the file pv-001-elk-zone-1-hot-es-pod-0.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-001-elk-zone-1-hot-es-pod-0
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
# Delete does not work in this case!
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
# the folder has to be created before applying this yaml
path: /data/pv-001-elk-zone-1-hot-es-pod-0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- vkube-001
claimRef:
# we want the first pod/es-node to always be on this node
# the pattern is: elasticsearch-data-CLUSTERNAME-es-NODESETNAME-PODNUMBER
name: elasticsearch-data-elk-es-zone-1-hot-0
namespace: elk
Create another file in the hot/zone-2 folder named pv-002-elk-zone-2-hot-es-pod-0.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-002-elk-zone-2-hot-es-pod-0
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
# Delete does not work in this case!
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
# the folder has to be created before applying this yaml
path: /data/pv-002-elk-zone-2-hot-es-pod-0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- vkube-002
claimRef:
# we want the first pod/es-node to always be on this node
# the pattern is: elasticsearch-data-CLUSTERNAME-es-NODESETNAME-PODNUMBER
name: elasticsearch-data-elk-es-zone-2-hot-0
namespace: elk
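The nodeAffinity sections above match on the kubernetes.io/hostname label, so it can be worth double-checking that your nodes really report vkube-001 and vkube-002 there:
kubectl get nodes -L kubernetes.io/hostname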
Warm-Volumes Configuration
Create the needed warm files in the corresponding folders (in our case pv-001 in zone-1, pv-002 in zone-2). First pv-001-elk-zone-1-warm-es-pod-0.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-001-elk-zone-1-warm-es-pod-0
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
# Delete does not work in this case!
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/pv-001-elk-zone-1-warm-es-pod-0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- vkube-001
claimRef:
# we want the first pod/es-node to always be on this node
# the pattern is: elasticsearch-data-CLUSTERNAME-es-NODESETNAME-PODNUMBER
name: elasticsearch-data-elk-es-zone-1-warm-0
namespace: elk
pv-002-elk-zone-2-warm-es-pod-0.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-002-elk-zone-2-warm-es-pod-0
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
# Delete does not work in this case!
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
# the folder has to be created before applying this yaml
path: /data/pv-002-elk-zone-2-warm-es-pod-0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- vkube-002
claimRef:
# we want the first pod/es-node to always be on this node
# the pattern is: elasticsearch-data-CLUSTERNAME-es-NODESETNAME-PODNUMBER
name: elasticsearch-data-elk-es-zone-2-warm-0
namespace: elk
Cold-Volumes Configuration
Repeat the same for the cold files 😉
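For reference, the zone-1 cold volume could look like this, following exactly the same pattern as the hot and warm ones (the zone-2 file is analogous, with pv-002, vkube-002 and zone-2-cold):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-001-elk-zone-1-cold-es-pod-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  # Delete does not work in this case!
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    # the folder has to be created before applying this yaml
    path: /data/pv-001-elk-zone-1-cold-es-pod-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - vkube-001
  claimRef:
    # the pattern is: elasticsearch-data-CLUSTERNAME-es-NODESETNAME-PODNUMBER
    name: elasticsearch-data-elk-es-zone-1-cold-0
    namespace: elk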
Create Folders On Each Node
As mentioned in the YAML comments, the folders have to be created on the defined Kubernetes nodes before applying the manifests.
# on vkube-001
mkdir -p /data/pv-001-elk-zone-1-hot-es-pod-0 /data/pv-001-elk-zone-1-warm-es-pod-0 /data/pv-001-elk-zone-1-cold-es-pod-0
# on vkube-002
mkdir -p /data/pv-002-elk-zone-2-hot-es-pod-0 /data/pv-002-elk-zone-2-warm-es-pod-0 /data/pv-002-elk-zone-2-cold-es-pod-0
You should now be able to apply the YAML files in the persistent-volumes folder: kubectl apply -R -f persistent-volumes
With kubectl get pv you should see all defined persistent-volumes show up as Available.
About "storage.capacity": The parameter is mandatory and has to be set BUT unless you use the "CSIStorageCapacity" feature gate and the "CSIDriver" it seems to not have no effect. The data in the folders can grow larger than the set limit, which is in this case OK - because later when we set up the "Stack Monitoring" you will see that Elasticsearch sees all available space where the persistent-volume is placed.
Prepare Elasticsearch
Note: the node.roles parameter is still missing - the elasticsearch.yaml will have to be updated soon™
Create the file elasticsearch.yaml in your main folder where you save your YAML configs:
# this should theoretically install 6 Elasticsearch nodes (hot-warm-cold)
# each k8s node represents a zone
# 2 as hot -- one pod on each k8s node
# 2 as warm -- one pod on each k8s node
# 2 as cold -- one pod on each k8s node
# a persistent local volume is available - see the pv-files
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
# change it if needed
name: elk
namespace: elk
spec:
version: 7.10.0
# this will inject the secret into elasticsearch's keystore. its content is the password for
# the ActiveDirectory example you'll find later in the first nodeset-zone-definition
# the secret's data holds a base64-encoded password (make sure it's one line without a newline) under the key
# xpack.security.authc.realms.active_directory.ad1.secure_bind_password
# for base64-encoding the password you can use something like: echo -n 'the-password' | base64 -w 0
#secureSettings:
#- secretName: ldap-elasticsearch-ad
nodeSets:
- name: zone-1-hot
count: 1
config:
# needed if you have not set the vm.max_map_count kernel parameter via other means
# node.store.allow_mmap: false
node.attr.data: hot
node.attr.zone: zone-1
cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
# https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-metricbeat.html
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false
# for audit logging - make sure to enable it only after setting up filebeat, otherwise you will have a lot of logs
# and finding errors will be hard
xpack.security.audit.enabled: true
# if you have ActiveDirectory and want users to be able to authenticate (role-mappings needed)
# this is only an example - these sections are missing in the other nodeset-zone-definitions
#xpack.security.authc.realms:
# # we need native to be able to fall back to built-in users, in case something goes wrong
# native:
# native1:
# order: 0
# active_directory:
# ad1:
# order: 1
# domain_name: home.local
# url: ldaps://ldap1.home.local:3269, ldaps://ldap2.home.local:3269
# # for debugging we can disable cert verification
# # ssl.verification_mode: none
# ssl.certificate_authorities: /usr/share/elasticsearch/config/certs/home-local-ca.pem
# bind_dn: ldap_elasticsearch@home.local
# # password set via secure_bind_password
# user_search.base_dn: OU=USR,DC=home,DC=local
# group_search.base_dn: OU=GRP,DC=home,DC=local
podTemplate:
metadata:
labels:
# change it if needed
cluster: elk
stackmonitoring: elasticsearch
spec:
# needed for the AD ldaps ca-cert
#securityContext:
# runAsUser: 1000
# runAsGroup: 1000
# fsGroup: 1000
containers:
- name: elasticsearch
# needed for pointing ssl.certificate_authorities above to the correct cert
#volumeMounts:
#- name: home-local-ca
# mountPath: /usr/share/elasticsearch/config/certs
# readOnly: true
env:
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
# to be able to do the volume mount above - save the secret in the correct namespace
# make sure it's base64 encoded (e.g. `cat home-local-ca.pem | base64 -w 0`)
#volumes:
#- name: home-local-ca
# secret:
# secretName: home-local-ca
# to reclaim an existing persistent volume - first check its state: kubectl get pv
# it should either not exist yet or, if you want to reclaim it, you have to remove its claimRef uid
# e.g. kubectl patch persistentvolume/pv-001-elk-zone-1-hot-es-pod-0 --type json -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
- name: zone-2-hot
count: 1
config:
node.attr.data: hot
node.attr.zone: zone-2
cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
# https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-metricbeat.html
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false
# for audit logging
xpack.security.audit.enabled: true
podTemplate:
metadata:
labels:
# change it if needed
cluster: elk
stackmonitoring: elasticsearch
spec:
containers:
- name: elasticsearch
env:
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
- name: zone-1-warm
count: 1
config:
node.attr.data: warm
node.attr.zone: zone-1
cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
# https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-metricbeat.html
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false
# for audit logging
xpack.security.audit.enabled: true
podTemplate:
metadata:
labels:
# change it if needed
cluster: elk
stackmonitoring: elasticsearch
spec:
containers:
- name: elasticsearch
env:
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
- name: zone-2-warm
count: 1
config:
node.attr.data: warm
node.attr.zone: zone-2
cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
# https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-metricbeat.html
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false
# for audit logging
xpack.security.audit.enabled: true
podTemplate:
metadata:
labels:
# change it if needed
cluster: elk
stackmonitoring: elasticsearch
spec:
containers:
- name: elasticsearch
env:
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
- name: zone-1-cold
count: 1
config:
node.attr.data: cold
node.attr.zone: zone-1
cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
# https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-metricbeat.html
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false
# for audit logging
xpack.security.audit.enabled: true
podTemplate:
metadata:
labels:
# change it if needed
cluster: elk
stackmonitoring: elasticsearch
spec:
containers:
- name: elasticsearch
env:
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
- name: zone-2-cold
count: 1
config:
node.attr.data: cold
node.attr.zone: zone-2
cluster.routing.allocation.awareness.attributes: k8s_node_name,zone
# https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-metricbeat.html
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false
# for audit logging
xpack.security.audit.enabled: true
podTemplate:
metadata:
labels:
# change it if needed
cluster: elk
stackmonitoring: elasticsearch
spec:
containers:
- name: elasticsearch
env:
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
---
# the ingress definition
# the operator seems to create a service CLUSTERNAME-es-http - you should find it with kubectl get services -n elk
# if you see a Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
# you can have a look at https://github.com/kubernetes/kubernetes/pull/89778 if you want to use networking.k8s.io/v1
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: elasticsearch
# change it if needed
namespace: elk
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
rules:
# change it if needed
- host: elasticsearch.home.local
http:
paths:
- backend:
# change it if needed
serviceName: elk-es-http
servicePort: 9200
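At this point you can apply the manifest and watch the cluster come up. kubectl get elasticsearch is provided by the ECK CRDs and shows the cluster health, node count and phase:
kubectl apply -f elasticsearch.yaml
# HEALTH should eventually turn green
kubectl get elasticsearch -n elk
# the pods created by the operator
kubectl get pods -n elk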
Add elasticsearch.home.local to your hosts file, for example: 192.168.100.120 elasticsearch.home.local
Get your elastic credentials: kubectl get secret elk-es-elastic-user -o=jsonpath='{.data.elastic}' -n elk | base64 --decode; echo
You can save them temporarily in a variable: export ELASTIC_PW=$(kubectl get secret elk-es-elastic-user -o=jsonpath='{.data.elastic}' -n elk | base64 --decode; echo)
Try curl -k https://elastic:${ELASTIC_PW}@elasticsearch.home.local:30443/?pretty - save the cluster_uuid somewhere for later 😉
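If you have jq installed, you can also pull the cluster_uuid out directly (a small convenience, not part of the original steps):
curl -sk -u "elastic:${ELASTIC_PW}" https://elasticsearch.home.local:30443 | jq -r .cluster_uuid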
Congratulations, you have your Elasticsearch cluster up and running!
Prepare Kibana
As we want to make sure that it is scalable, we first have to create a secret with Kibana's encryptionKey inside. We also have to set another annotation in the ingress definition. For the secret, create a folder secrets and in it a file kibana-secret-settings.yaml:
# the key is base64 encoded
# create some key with 32 characters: HnWzJhQFxNuoUMPj8ufFxJPBcXFOh46m
# echo -n 'HnWzJhQFxNuoUMPj8ufFxJPBcXFOh46m' | base64 -w 0
# this will output: SG5XekpoUUZ4TnVvVU1Qajh1ZkZ4SlBCY1hGT2g0Nm0=
apiVersion: v1
kind: Secret
metadata:
name: kibana-secret-settings
namespace: elk
type: Opaque
data:
xpack.security.encryptionKey: SG5XekpoUUZ4TnVvVU1Qajh1ZkZ4SlBCY1hGT2g0Nm0=
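Alternatively, instead of hand-crafting the base64 value, you could generate a random key and let kubectl do the encoding (a sketch - the secret name and key must match what kibana.yaml references below):
# generate a random 32-character key
KIBANA_KEY=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
# create the secret in the elk namespace
kubectl create secret generic kibana-secret-settings -n elk \
  --from-literal=xpack.security.encryptionKey="${KIBANA_KEY}"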
In the main folder create a kibana.yaml:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
# change if needed
name: kibana
namespace: elk
spec:
version: 7.10.0
count: 2
# if you want to scale up, the encryptionKey must be the same everywhere
secureSettings:
- secretName: kibana-secret-settings
elasticsearchRef:
# change if needed
name: elk
config:
# For Elasticsearch clusters that are running in containers, this setting changes the Node Listing to display the CPU
# utilization based on the reported Cgroup statistics. It also adds the calculated Cgroup CPU utilization to the Node
# Overview page instead of the overall operating system’s CPU utilization. Defaults to false.
xpack.monitoring.ui.container.elasticsearch.enabled: true
# we want to disable "self-monitoring" - the documentation and examples mention it with the xpack prefix, but it seems
# the setting without xpack is the one that actually works.
xpack.monitoring.kibana.collection.enabled: false
monitoring.kibana.collection.enabled: false
# we need this for collecting data later with metricbeats
xpack.monitoring.collection.enabled: true
# enable audit-log (trial or enterprise license needed)
xpack.security.audit.enabled: true
# disable telemetry
telemetry.enabled: false
# this enables the license management ui - probably not allowed ;o)
# for reverting to a basic-license, call in dev-console: POST /_license/start_basic?acknowledge=true
xpack.license_management.ui.enabled: true
podTemplate:
metadata:
labels:
stackmonitoring: kibana
spec:
# this should make sure that a pod is running on every node
# TODO: sometimes this does not work as expected - it may be a better option to have a look at https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- logstash-buffer
topologyKey: "kubernetes.io/hostname"
containers:
- name: kibana
---
# the ingress definition
# if you see a Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
# you can have a look at https://github.com/kubernetes/kubernetes/pull/89778 if you want to use networking.k8s.io/v1 already
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: kibana
namespace: elk
annotations:
# if you want to scale up, the connections must be sticky
nginx.ingress.kubernetes.io/affinity: "cookie"
# because Kibana is running with HTTPS ;)
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
rules:
- host: kibana.home.local
http:
paths:
- backend:
# thankfully the orchestrator also creates a service we can use
serviceName: kibana-kb-http
servicePort: 5601
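Apply the manifest and check the status; kubectl get kibana comes with the ECK CRDs, and the stackmonitoring label used in the selector below is the one defined in the podTemplate above:
kubectl apply -f kibana.yaml
# HEALTH should turn green once both pods are ready
kubectl get kibana -n elk
kubectl get pods -n elk -l stackmonitoring=kibana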
You should now be able to access Kibana via https://kibana.home.local:30443
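As the comments in both ingress definitions note, networking.k8s.io/v1beta1 is deprecated. If your cluster already offers networking.k8s.io/v1, the Kibana ingress could be written roughly like this (a sketch only - the Elasticsearch one is analogous with its own name, host, service name and port 9200):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: elk
  annotations:
    # if you want to scale up, the connections must be sticky
    nginx.ingress.kubernetes.io/affinity: "cookie"
    # because Kibana is running with HTTPS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: kibana.home.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana-kb-http
                port:
                  number: 5601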