PART-4 – Prepare RabbitMQ For Buffering Events

You may have a look at Elasticsearch As Log-Buffer

The first thought may be: What? Why the hell RabbitMQ and not the persistent queue feature of Logstash?

Because, as the Logstash documentation on persistent queues states:

Input plugins that do not use a request-response protocol cannot be protected from data loss. For example: tcp, udp, zeromq push+pull, and many other inputs do not have a mechanism to acknowledge receipt to the sender. Plugins such as beats and http, which do have an acknowledgement capability, are well protected by this queue.

For installing RabbitMQ on Kubernetes we use the RabbitMQ Cluster Kubernetes Operator – see https://github.com/rabbitmq/cluster-operator

It is currently stated to be in beta, but there is already a GitHub issue open where they discuss whether it is ready to leave beta. AFAIK only downscaling does not work properly yet.

Installation of the operator is simply done by executing:

kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
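The manifest creates its own rabbitmq-system namespace and a deployment named rabbitmq-cluster-operator inside it; a quick check that the operator came up could look like this (names taken from the upstream manifest):

```shell
# the operator lives in its own namespace created by the manifest
kubectl -n rabbitmq-system get deployment rabbitmq-cluster-operator
# block until the operator deployment reports ready
kubectl -n rabbitmq-system rollout status deployment/rabbitmq-cluster-operator
```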

Prerequisites for the rmq-cluster

Set up the persistent volumes; this works like the Elasticsearch volumes. Create a rabbitmq sub-folder in persistent-volumes and then create the YAML files, looking similar to:

apiVersion: v1
kind: PersistentVolume
metadata:
  # change accordingly
  name: pv-001-elk-rmq-cluster-server-pod-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  # Delete is not working in this case!
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    # the folder has to be created before applying this yaml
    path: /data/pv-001-elk-rmq-cluster-server-pod-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - vkube-001
  claimRef:
    # we want the first pod/rmq-node to always be on this node
    # the pattern is: persistence-CLUSTERNAME-server-PODNUMBER
    name: persistence-rmq-cluster-server-0
    namespace: elk

Remember to create the appropriate folders on each Kubernetes node.
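Creating those folders can be scripted over SSH — a sketch; vkube-001 is taken from the nodeAffinity above, while the second hostname and PV path are assumptions for your other node:

```shell
# the path must match "local.path" of the PersistentVolume pinned to that node
ssh vkube-001 'sudo mkdir -p /data/pv-001-elk-rmq-cluster-server-pod-0'
# repeat for every PV/node pairing, e.g. (hostname and path assumed):
ssh vkube-002 'sudo mkdir -p /data/pv-002-elk-rmq-cluster-server-pod-1'
```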

Setting up the yaml-file

apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rmq-cluster
  namespace: elk
  # pods do not inherit these labels and/or annotations.
  labels:
    app: rabbitmq
spec:
  # the operator uses the community management image by default - we will not use the latest tag
  image: rabbitmq:3.8.9-management
  # should be an odd number (1,3,5,...) - yes I know, on a 2 node k8s this is a little strange ;)
  replicas: 3
  # this will create a NodePort service
  service:
    type: NodePort
  override:
    statefulSet:
      spec:
        template:
          metadata:
            labels:
              app: rabbitmq
              stackmonitoring: rabbitmq
            annotations:
              co.elastic.logs/module: rabbitmq
    service:
      spec:
        ports:
        - name: management
          nodePort: 31672
          port: 15672
          protocol: TCP
          targetPort: 15672
        - name: amqp
          nodePort: 30672
          port: 5672
          protocol: TCP
          targetPort: 5672
  persistence:
    # see pv-yamls for rabbitmq
    storageClassName: local-storage
    storage: 5Gi
  resources:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  rabbitmq:
    # we install a few plugins already - especially the sharding module is needed, as we want to use x-modulus-hash exchanges
    additionalPlugins:
      - rabbitmq_top
      - rabbitmq_shovel
      - rabbitmq_sharding
---
# the ingress definition
# if you see a Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
# you can have a look at https://github.com/kubernetes/kubernetes/pull/89778 if you want to use networking.k8s.io/v1 already
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rmq-cluster
  # change it if needed
  namespace: elk
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    # if management interface is configured for HTTPS
    #nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  # change it if needed
  - host: rabbitmq-management.home.local
    http:
      paths:
      - backend:
          # change it if needed
          serviceName: rmq-cluster
          servicePort: 15672

Now apply the file and a three-node RabbitMQ cluster will pop up.
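Assuming the manifest above was saved as rmq-cluster.yaml (the filename is an assumption), applying and verifying could look like:

```shell
kubectl apply -f rmq-cluster.yaml
# watch the three server pods come up - the app=rabbitmq label
# comes from the statefulSet override above
kubectl -n elk get pods -l app=rabbitmq -w
# once all pods are running, ask RabbitMQ itself about its members
kubectl -n elk exec rmq-cluster-server-0 -- rabbitmqctl cluster_status
```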

Get Initial RabbitMQ Credentials

Username and password are just randomly generated sequences and are saved in a secret.

# Username
kubectl get secrets -n elk rmq-cluster-default-user -o=jsonpath='{.data.username}' | base64 -d && echo ""
# Password
kubectl get secrets -n elk rmq-cluster-default-user -o=jsonpath='{.data.password}' | base64 -d && echo ""

Log in with those credentials, for example via browser at http://rabbitmq-management.home.local:30080, and create your own admin user plus a user called “logstash” with proper permissions to set up exchanges and queues.
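If you prefer the CLI over the management UI, the same users can be created with rabbitmqctl inside one of the server pods — a sketch, the passwords are placeholders:

```shell
# create the logstash user (replace CHANGEME with a real password)
kubectl -n elk exec rmq-cluster-server-0 -- rabbitmqctl add_user logstash 'CHANGEME'
# grant configure/write/read on the default vhost,
# so it can declare its exchanges and queues
kubectl -n elk exec rmq-cluster-server-0 -- rabbitmqctl set_permissions -p / logstash '.*' '.*' '.*'
# optionally create your own admin user the same way
kubectl -n elk exec rmq-cluster-server-0 -- rabbitmqctl add_user admin 'CHANGEME'
kubectl -n elk exec rmq-cluster-server-0 -- rabbitmqctl set_user_tags admin administrator
```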

You can save your credentials in a secret – like I did for the “logstash” user:

# the values are base64 encoded
# echo -n 'HnWzJhQFxNuoUMPj8ufFxJPBcXFOh46m' | base64 -w 0
# this will output: SG5XekpoUUZ4TnVvVU1Qajh1ZkZ4SlBCY1hGT2g0Nm0=
apiVersion: v1
kind: Secret
metadata:
  name: rmq-cluster-logstash-user
  namespace: elk
type: Opaque
data:
  username: bG9nc3Rhc2g=
  password: Ymx1YmJlcmhvbms=
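A quick sanity check that the values round-trip — note the -n on echo, otherwise a trailing newline gets encoded into the secret:

```shell
# encode without a trailing newline (-n is important)
echo -n 'logstash' | base64
# -> bG9nc3Rhc2g=
# decode a stored value to double-check
echo 'bG9nc3Rhc2g=' | base64 -d && echo ""
# -> logstash
```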

Proceed to PART-5

Last edited: August 5, 2021
