Sending Kubernetes Logs and Events

Last updated on 13 March, 2023

LogicMonitor provides different methods for sending logs from a monitored Kubernetes cluster to LM Logs. The method to use depends on the type of logs that you want to send.

  • For Kubernetes logs, use the lm-logs Helm chart configuration which is provided as part of the LogicMonitor Kubernetes integration.
  • For Kubernetes events and Pod logs, configure the LogicMonitor Collector to collect and forward the logs from a monitored cluster or cluster group.

Sending Kubernetes Logs

You can install and configure the LogicMonitor Kubernetes integration to forward your Kubernetes logs to the LM Logs ingestion API.

Deploying

The Kubernetes configuration for LM Logs is deployed as a Helm chart.

1. Add the LogicMonitor Helm repository: 

helm repo add logicmonitor https://logicmonitor.github.io/k8s-helm-charts

If you have already added the LogicMonitor Helm repository, update it to get the latest charts:

helm repo update

2. Install the lm-logs chart, filling in the required values:

helm install -n <namespace> \
--set lm_company_name="<lm_company_name>" \
--set lm_access_id="<lm_access_id>" \
--set lm_access_key="<lm_access_key>" \
lm-logs logicmonitor/lm-logs
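
To confirm that the release deployed, you can check its status and the pods the chart created. This is an optional verification step; use the same namespace you installed into:

helm status lm-logs -n <namespace>
kubectl get pods -n <namespace>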

Configuring Deviceless Logs for Kubernetes

Logs can be viewed in LM Logs even if a log is “deviceless”, that is, not associated with an LM-monitored resource. Even without resource mapping, or when there are resource mapping issues, logs are still available to view, search, and analyze for anomalies.

For deviceless logs, log anomaly detection uses the “namespace” and “service” fields instead of “Device ID” when creating log profiles. To enable deviceless logs, set “fluent.device_less_logs” to “true” when configuring the lm-logs Helm chart. See Send Kubernetes Logs to LM Logs.
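
For example, assuming the chart was installed with the release name lm-logs as shown earlier, the value can be set with a Helm upgrade (a minimal sketch; all other values are kept as previously configured):

helm upgrade --reuse-values \
--set fluent.device_less_logs=true \
lm-logs logicmonitor/lm-logs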

Sending Kubernetes Events and Pod Logs

You can configure the LogicMonitor Collector to receive and forward Kubernetes cluster events and Pod logs from a monitored Kubernetes cluster or cluster group.

Adding Resources to the Collector for Monitoring

Note: This section applies only to clusters that are already in monitoring. You do not need to make this edit if the cluster was just added into monitoring with the latest version of Argus.

The collector ClusterRole needs access to the resources you want to monitor:

$ kubectl edit clusterrole collector

Under apiGroups > resources, add events and pods/log. For example:

- apiGroups:
  - ""   # core API group, which contains events and the pods/log subresource
  resources:
  - events
  - pods/log
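
To verify that the updated role grants this access, you can check permissions against the service account the Collector pods run as. The service account name and namespace below are assumptions; substitute the ones used in your cluster:

kubectl auth can-i list events --as=system:serviceaccount:<namespace>:collector
kubectl auth can-i get pods --subresource=log --as=system:serviceaccount:<namespace>:collector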

Enabling Events and Logs Collection

You have two options for enabling events and logs collection:

Option 1. (Recommended) Modify the Helm deployment for Argus to enable events and pod logs collection.

helm upgrade --reuse-values \
   --set device_group_props.cluster.name="lmlogs.k8sevent.enable" \
   --set device_group_props.cluster.value="true" \
   --set device_group_props.pods.name="lmlogs.k8spodlog.enable" \
   --set device_group_props.pods.value="true" \
argus logicmonitor/argus

Option 2. Manually add the following properties to the monitored Kubernetes cluster group (or individual resources) in LogicMonitor.

  • lmlogs.k8sevent.enable=true: Sends events from pods, deployments, services, nodes, and so on to LM Logs. When set to false, events are ignored.
  • lmlogs.k8spodlog.enable=true: Sends pod logs to LM Logs. When set to false, logs from pods are ignored.

Optional Configurations

In addition to enabling logs and events collection, you can add or edit the following entries in the Collector’s agent.conf (an example snippet follows the list):

  • lmlogs.k8sevent.polling.interval.min: Polling interval in minutes for Kubernetes events collection. Default is 1.
  • lmlogs.k8spodlog.polling.interval.min: Polling interval in minutes for Kubernetes pod logs collection. Default is 1.
  • lmlogs.thread.count.for.k8s.pod.log.collection: Number of threads for Kubernetes pod logs collection. The maximum value is 50. Default is 10.
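
For example, the corresponding agent.conf entries might look like the following (illustrative values; keep the defaults unless you need to tune collection):

lmlogs.k8sevent.polling.interval.min=1
lmlogs.k8spodlog.polling.interval.min=1
lmlogs.thread.count.for.k8s.pod.log.collection=20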

Configure Filters to Remove Logs

Ensure that you configure filters to remove log messages that contain sensitive information such as credit card numbers, phone numbers, or personal identifiers, so that this data is not sent to LogicMonitor. You can also use filters to reduce the volume of non-essential log messages sent to the logs ingestion API queue.

The filtering criteria for Kubernetes events are based on the “message”, “reason”, and “type” fields. For Kubernetes pod logs, you can filter on the “message” field. Filtering criteria can be defined using keywords, a regular expression pattern, specific field values, and so on. To configure filtering criteria, uncomment (to enable) and then edit the filtering entries in agent.conf. For example (a combined snippet follows the list):

  • To prevent INFO-level pod logs from being sent to LogicMonitor, uncomment or add the line: logsource.k8spodlog.filter.1.message.notcontain=INFO
  • To send Kubernetes events of type=Normal, comment out the line: logsource.k8sevent.filter.1.type.notequal=Normal
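
Putting these together, the relevant agent.conf entries might look like this (a sketch based on the two examples above; the first line drops pod log lines containing INFO, the second drops events of type Normal):

logsource.k8spodlog.filter.1.message.notcontain=INFO
logsource.k8sevent.filter.1.type.notequal=Normal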

Troubleshooting

Kubernetes Logs

1. If you are not seeing Kubernetes logs in your LM Portal after a few minutes, it may be a resource mapping issue. Resource mapping for Kubernetes is handled by the Fluentd plugin.

2. If the mapping is correct, verify that the log file path is mounted. If the log file path is not mounted, edit the /k8s-helm-charts/lm-logs/templates/daemonset.yaml file to add the file path and volume.

For example, if the path to mount is /mnt/ephemeral/docker/containers/, you would make the following edits:

  • Add the file path under volumeMounts:

- name: ephemeraldockercontainers
  mountPath: /mnt/ephemeral/docker/containers/
  readOnly: true

  • Add under volumes:

- name: ephemeraldockercontainers
  hostPath:
    path: /mnt/ephemeral/docker/containers/
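
After redeploying, you can confirm that the path is visible inside the log collection pod. The pod name below is a placeholder; use a pod created by the lm-logs DaemonSet in your namespace:

kubectl exec -n <namespace> <lm-logs-pod-name> -- ls /mnt/ephemeral/docker/containers/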

Kubernetes Pod Logs

If you have enabled pod logs collection and forwarding, but you are not receiving pod logs in LM Logs, restart the Collector and increase the polling interval to 3-5 minutes.
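
For example, the polling interval is controlled by the agent.conf entry described in Optional Configurations; setting it to 5 minutes would look like this:

lmlogs.k8spodlog.polling.interval.min=5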
