
How to declare multiple output.logstash in single filebeat DaemonSet in kubernetes?


It is not possible; Filebeat supports only a single output.

From the documentation:

Only a single output may be defined.

You will need to send all your logs to the same Logstash instance and route them to different outputs based on some field.

For example, assuming that the events sent to Logstash contain the field kubernetes.pod.name, you could use something like this:

output {
  if [kubernetes][pod][name] == "application1" {
    # your output for the application1 logs
  }
  if [kubernetes][pod][name] == "application2" {
    # your output for the application2 logs
  }
}
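As an illustration only, the placeholder outputs above could be filled in with per-application Elasticsearch indices; the host and index names here are assumptions, not values from the question:

```
output {
  if [kubernetes][pod][name] == "application1" {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]   # assumed Elasticsearch host
      index => "application1-%{+YYYY.MM.dd}"   # one index per application
    }
  }
  if [kubernetes][pod][name] == "application2" {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "application2-%{+YYYY.MM.dd}"
    }
  }
}
```

Any output plugin (file, kafka, etc.) can be substituted in each branch; the routing logic stays the same.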


I found a working approach for my problem. It may not be the ideal way, but it meets my requirements.

filebeat-kubernetes-whatsapp.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application1"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}*.log
            - condition:
                contains:
                  kubernetes.pod.name: "application2"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}*.log
    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    output.logstash:
      hosts: ["IP:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.10.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # The data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
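Assuming the manifest is saved under the filename shown above, it can be applied and checked with standard kubectl commands (the label selector matches the k8s-app: filebeat label in the manifest):

```
kubectl apply -f filebeat-kubernetes-whatsapp.yaml
kubectl -n logging get pods -l k8s-app=filebeat      # one pod per node
kubectl -n logging logs daemonset/filebeat           # check for connection errors to Logstash
```

Because it is a DaemonSet, one Filebeat pod is scheduled on every node, each reading that node's container logs from /var/log/containers.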

/etc/logstash/conf.d/config.conf

input {
  beats {
    port => 5044
  }
}

#filter {
#  ...
#}

output {
  if "application1" in [kubernetes][pod][name] {
    file {
      enable_metric => false
      gzip => false
      codec => line {
        format => "[%{[@timestamp]}] [%{[kubernetes][node][name]}/%{[kubernetes][pod][name]}/%{[kubernetes][pod][uid]}] [%{message}]"
      }
      path => "/abc/def/logs/application1%{+YYYY-MM-dd}.log"
    }
  }
  if "application2" in [kubernetes][pod][name] {
    file {
      enable_metric => false
      gzip => false
      codec => line {
        format => "[%{[@timestamp]}] [%{[kubernetes][node][name]}/%{[kubernetes][pod][name]}/%{[kubernetes][pod][uid]}] [%{message}]"
      }
      path => "/abc/def/logs/application2%{+YYYY-MM-dd}.log"
    }
  }
}
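Before restarting Logstash with this pipeline, the configuration syntax can be validated from the Logstash install directory (the binary path depends on how Logstash was installed):

```
bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/config.conf
```

Logstash parses the config and exits, reporting "Configuration OK" or the parse error, without starting the pipeline.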