PART-7 – Miscellaneous, Ideas And ToDos

A few things that help or make life easier.

Upload The YAML-Files

e.g. to GitHub, or provide a zip-file here (this will take some time – too lazy after finishing this HowTo atm 😉).

Useful Commands

See What’s Going On

kubectl get es,kb,deployments,sts,pods,svc,pv,ingress -o wide -A

Execute it with watch to always see what’s going on in the Kubernetes cluster (small font-size and 4K display recommended 😉).
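
For example, refreshing every 2 seconds:

watch -n 2 'kubectl get es,kb,deployments,sts,pods,svc,pv,ingress -o wide -A'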

See ALL Logs From A Resource

In this example we want to see ALL logs from all available ingress-nginx pods (including those running on other Kubernetes nodes).

kubectl logs -f --selector=app.kubernetes.io/name=ingress-nginx -n ingress-nginx

Get Elastic Password

Always good to have that command ready.

kubectl get secret elk-es-elastic-user -o=jsonpath='{.data.elastic}' -n elk | base64 --decode; echo
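
To use the password directly, e.g. for a quick curl against the cluster, a sketch assuming a port-forward to the elk-es-http service in a second terminal:

kubectl port-forward -n elk service/elk-es-http 9200:9200
# in another terminal: store the password and query the cluster health
PW=$(kubectl get secret elk-es-elastic-user -o=jsonpath='{.data.elastic}' -n elk | base64 --decode)
curl -k -u "elastic:$PW" 'https://localhost:9200/_cluster/health?pretty'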

Open Shell In Pod

To have a look inside a running pod.

kubectl exec -n elk --stdin --tty pod/elk-es-zone-1-cold-0 -- /bin/bash
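
If a pod runs more than one container, pick one with -c (the ECK-managed Elasticsearch pods name their main container elasticsearch):

kubectl exec -n elk --stdin --tty pod/elk-es-zone-1-cold-0 -c elasticsearch -- /bin/bash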

Restart Pods

Some pods do not restart automatically after a ConfigMap is updated.

kubectl rollout restart -n elk deployment logstash-filter
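
To follow the restart until all pods have been replaced:

kubectl rollout status -n elk deployment logstash-filter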

Testing Syslog-Filters

The date sent with the logs should be in UTC (date -u); Kibana will then do the rest to show you the correct @timestamp.

# logger - util-linux 2.35.1 ships the --sd-id and --rfc3164 options; with older versions skip the commands that use them
logger --udp --server 192.168.100.120 --port 31514 --sd-id zoo@123 --sd-param tiger=\"hungry\" --sd-param zebra=\"running\" --sd-id manager@123 --sd-param onMeeting=\"yes\"  "this is a rfc5424 message"
logger --udp --rfc3164 --server 192.168.100.120 --port 31514 "some rfc3164 test message"
logger --udp --server 192.168.100.120 --port 31514 "some rfc5424 test message with newer logger version - older versions have some different format"
echo "<17>$(date -u +'%b %e %T') 192.168.123.123 service_bibaboo[123]: also some syslog conform message" | nc -u 192.168.100.120 31514
echo "<18>$(date -u +'%Y-%m-%dT%H:%M:%S.%3N%:z') 192.168.123.123 service_bibaboo[123]: also some syslog conform message with +04:00 timezone" | nc -u 192.168.100.120 31514
echo "<19>$(date -u +'%Y-%m-%dT%H:%M:%S.%3N') 192.168.123.123 service_bibaboo[123]: also some syslog conform message without timezone info" | nc -u 192.168.100.120 31514
echo "<20>$(date -u +'%s') 192.168.123.123 service_bibaboo[123]: also some syslog conform message with epochtime" | nc -u 192.168.100.120 31514
echo "<21>$(date -u +'%s.%3N') 192.168.123.123 service_bibaboo[123]: also some syslog conform message with epochtime with milliseconds" | nc -u 192.168.100.120 31514
echo "<13>1 $(date -u +'%Y-%m-%dT%H:%M:%S.%3N%:z') amachine administrator - - [timeQuality tzKnown=\"1\" isSynced=\"1\" syncAccuracy=\"103261\"] some rfc5424 test message simulating newer logger version message" | nc -u 192.168.100.120 31514
echo "<13>1 $(date -u +'%Y-%m-%dT%H:%M:%S.%3N%:z') amachine administrator - - [timeQuality tzKnown=\"1\" isSynced=\"1\" syncAccuracy=\"87912\"][zoo@123 tiger=\"hungry\" zebra=\"running\"][manager@123 onMeeting=\"yes\"] this is a rfc5424 message simulating newer logger version message with additional sd-params and sd-id" | nc -u 192.168.100.120 31514
echo "no parsing with this message" | nc -u 192.168.100.120 31514
# a logtest for apache filtering - not implemented in this config ;)
echo "<17>$(date -u +'%b %e %T') 192.168.123.123 service_bibaboo[123]: 1.252.3.4 - - [10/Jul/2020:08:17:41 +0200] \"GET /blog HTTP/1.1\" 200 22 \"-\" \"Amazon CloudFront\" 0 203213 386 276"  | nc -u 192.168.100.120 31514

Date-Format Examples

Each date command below produces the sample output shown above it.

     2020-06-02T04:59:48.166Z
date -u +'%Y-%m-%dT%H:%M:%S.%3NZ'

     Jun  3 08:15:01
date -u +'%b %e %T'

     2020-06-02T04:59:48.166+04:00
date -u +'%Y-%m-%dT%H:%M:%S.%3N%:z'

     2020-06-02T04:59:48.166
date -u +'%Y-%m-%dT%H:%M:%S.%3N'

     1591165959
date -u +'%s'

Ideas And ToDos For Later

Add Healthchecks Where Missing

Filebeat seems to have no healthcheck, for example.
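
A possible workaround could be an exec probe using Filebeat’s own test command in the Beat’s podTemplate (a sketch, untested here; the interval is kept generous since the test opens a real connection to the output):

livenessProbe:
  exec:
    command:
    - sh
    - -c
    - filebeat test output
  initialDelaySeconds: 60
  periodSeconds: 60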

More Security By Enabling SSL Everywhere

meh, SSL is overrated 😉

Add A Cerebro Pod

For a better and easier overview of the Elasticsearch cluster: https://github.com/lmenezes/cerebro

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cerebro
  # change if needed
  namespace: elk
  labels:
    app: cerebro
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cerebro
  template:
    metadata:
      labels:
        app: cerebro
    spec:
      # preferred anti-affinity: spread the pods so that ideally no two cerebro pods run on the same node
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - cerebro
              topologyKey: "kubernetes.io/hostname"
      containers:
      - image: lmenezes/cerebro:0.9.2
        name: cerebro
        ports:
        - containerPort: 9000
          name: cerebro-http
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: 9000
          initialDelaySeconds: 60
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 9000
          initialDelaySeconds: 60
          periodSeconds: 5
        env:
        - name: ELASTICSEARCH_SERVICE
          # change if needed
          value: "https://elk-es-http:9200"
          # for the love of god - it's only a test-environment ;)
        - name: AUTH_TYPE
          value: "basic"
        - name: BASIC_AUTH_USER
          value: "admin"
        - name: BASIC_AUTH_PWD
          value: "admin"
        - name: ES_USERNAME
          value: "elastic"
        - name: ES_PASSWORD
          valueFrom:
            secretKeyRef:
              key: elastic
              # change it accordingly to your cluster name
              name: elk-es-elastic-user
          # https://github.com/lmenezes/cerebro/issues/441 :|
        - name: "CEREBRO_PORT"
          value: "9000"
        volumeMounts:
        # the main config file for cerebro
        - name: config-volume
          mountPath: /opt/cerebro/conf/application.conf
          subPath: application.conf
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: cerebro-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cerebro-configmap
  namespace: elk
data:
  application.conf: |
    # Secret will be used to sign session cookies, CSRF tokens and for other encryption utilities.
    # It is highly recommended to change this value before running cerebro in production.
    secret = "ki:s:[adfasdfghjkAg?QadfkY:eqvasdfasdfqoJyi2usdfkasdZvOv^/Kavvbnmv?6YY4[N"

    # Application base path
    basePath = "/"

    # Defaults to RUNNING_PID at the root directory of the app.
    # To avoid creating a PID file set this value to /dev/null
    #pidfile.path = "/var/run/cerebro.pid"
    pidfile.path=/dev/null

    # Rest request history max size per user
    rest.history.size = 50 // defaults to 50 if not specified

    # Path of local database file
    #data.path: "/var/lib/cerebro/cerebro.db"
    data.path = "./cerebro.db"

    play {
      # Cerebro port, by default it's 9000 (play's default)
      server.http.port = ${?CEREBRO_PORT}

      # we disable certificate validation ftm - https://github.com/lmenezes/cerebro/issues/127
      ws.ssl.loose.acceptAnyCertificate = true
    }

    es = {
      gzip = true
    }

    # Authentication
    auth = {
      # either basic or ldap
      type: ${?AUTH_TYPE}
      settings {
        # LDAP
        url = ${?LDAP_URL}
        # OpenLDAP might be something like "ou=People,dc=domain,dc=com"
        base-dn = ${?LDAP_BASE_DN}
        # Usually method should be "simple"; otherwise, set it to the SASL mechanisms to try
        method = ${?LDAP_METHOD}
        # user-template executes a string.format() operation where
        # username is passed in first, followed by base-dn. Some examples
        #  - %s => leave user untouched
        #  - %s@domain.com => append "@domain.com" to username
        #  - uid=%s,%s => usual case of OpenLDAP
        user-template = ${?LDAP_USER_TEMPLATE}
        // User identifier that can perform searches
        bind-dn = ${?LDAP_BIND_DN}
        bind-pw = ${?LDAP_BIND_PWD}
        group-search {
          // If left unset parent's base-dn will be used
          base-dn = ${?LDAP_GROUP_BASE_DN}
          // Attribute that represent the user, for example uid or mail
          user-attr = ${?LDAP_USER_ATTR}
          // Define a separate template for user-attr
          // If left unset parent's user-template will be used
          user-attr-template = ${?LDAP_USER_ATTR_TEMPLATE}
          // Filter that tests membership of the group. If this property is empty then there is no group membership check
          // AD example => memberOf=CN=mygroup,ou=ouofthegroup,DC=domain,DC=com
          // OpenLDAP example => CN=mygroup
          group = ${?LDAP_GROUP}
        }

        # Basic auth
        username = ${?BASIC_AUTH_USER}
        password = ${?BASIC_AUTH_PWD}
      }
    }

    # A list of known hosts
    hosts = [
      #{
      #  host = "http://localhost:9200"
      #  name = "Localhost cluster"
      #  headers-whitelist = [ "x-proxy-user", "x-proxy-roles", "X-Forwarded-For" ]
      #}
      # Example of host with authentication
      {
        host = ${?ELASTICSEARCH_SERVICE}
        name = "elasticsearch-cluster"
        auth = {
          username = ${?ES_USERNAME}
          password = ${?ES_PASSWORD}
        }
      }
    ]
---
apiVersion: v1
kind: Service
metadata:
  name: cerebro-http
  namespace: elk
  labels:
    app: cerebro
spec:
  selector:
    app: cerebro
  ports:
  - name: cerebro-http
    port: 9000
    targetPort: 9000
    protocol: TCP
---
# the ingress definition
# if you see a Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
# you can have a look at https://github.com/kubernetes/kubernetes/pull/89778 if you want to use networking.k8s.io/v1 already
apiVersion: networking.k8s.io/v1beta1 
kind: Ingress 
metadata: 
  name: cerebro-http
  namespace: elk
  labels:
    app: cerebro
  annotations:
    # if you want to scale up, the connections must be sticky
    nginx.ingress.kubernetes.io/affinity: "cookie"
    # if it is running with HTTPS ;)
    #nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec: 
  rules: 
  - host: cerebro.home.local
    http: 
      paths: 
      - backend:
          # pointing at the cerebro-http service we defined above
          serviceName: cerebro-http
          servicePort: 9000
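
For reference, with networking.k8s.io/v1 the spec part changes roughly like this (a sketch, untested here; the annotations stay the same):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cerebro-http
  namespace: elk
spec:
  rules:
  - host: cerebro.home.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cerebro-http
            port:
              number: 9000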

Prepare Keepalived As Poor Man’s Loadbalancer

2 VIPs, one on each node, with healthchecks watching the needed ports (31514, 30044, 30080, 30443). With this, if one node goes down, its VIP should land on the still-running node. Combined with DNS round-robin this could work pretty well, as the sketch below shows.
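
A minimal keepalived.conf for the first node could look like this (a sketch; the interface, virtual_router_id, the VIP 192.168.100.200 and the priorities are assumptions to adjust – mirror it on the second node with a second VIP and swapped priorities):

# check one of the NodePorts; keepalived drops the VIP if the check fails
vrrp_script chk_nodeports {
    script "/usr/bin/nc -z 127.0.0.1 30080"
    interval 2
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER            # BACKUP on the second node
    interface eth0
    virtual_router_id 51
    priority 100            # lower on the second node, e.g. 90
    advert_int 1
    virtual_ipaddress {
        192.168.100.200/24
    }
    track_script {
        chk_nodeports
    }
}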

A more sophisticated setup could be an extra Keepalived loadbalancer of at least 2 nodes with "Direct Routing" enabled. But this needs another shitload of configuration on all nodes involved. Maybe another interesting topic, and I’m wondering if it would even work with all the juggling Kubernetes does with the network settings.

Additional Info On How To Implement ActiveDirectory Authentication

In the elasticsearch.yaml you can see, in the nodeSet definition for zone-1-hot, how to configure ActiveDirectory authentication. For features like that you can either enable the 30-day trial or you need a costly enterprise license.
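
For reference, the realm part in the nodeSet config could look roughly like this (a sketch following the Elasticsearch 7.x realm syntax; the realm name my_ad and the domain-controller URL are placeholders):

nodeSets:
- name: zone-1-hot
  config:
    xpack.security.authc.realms.active_directory.my_ad:
      order: 0
      domain_name: home.local
      url: ldaps://dc.home.local:636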

The audit-logging part also needs a higher license tier to work.

Re-Setup The Whole Thing Without Docker In Between

As you have probably also read, Kubernetes is dropping Docker support soon™.
