You can configure the LogicMonitor Collector to receive and forward Kubernetes Events and Pod logs to the LM Logs ingestion API.
Note: This section only applies to clusters that are already in monitoring. You do not need to make this edit if the cluster was just added to monitoring with the latest version of Argus.
The Cluster Role Collector needs to have access to the resources you want to monitor.
$ kubectl edit clusterrole collector
Under apiGroups > resources, add events and pods/log. For example:
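A minimal sketch of the edited rule is shown below; the other rules in your ClusterRole will differ, and the verbs listed here are an assumption — keep whatever verbs your existing rule already grants.

```yaml
# Excerpt of the collector ClusterRole after editing (other rules omitted).
rules:
  - apiGroups: [""]
    resources:
      - events     # Kubernetes Events
      - pods/log   # Pod log subresource
    verbs: ["get", "list", "watch"]
```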
You have two options for enabling logs collection.
1. (Recommended) Modify the Helm deployment for Argus to enable events and pod log collection.
helm upgrade --reuse-values \
--set device_group_props.cluster.name="lmlogs.k8sevent.enable" \
--set device_group_props.cluster.value="true" \
--set device_group_props.pods.name="lmlogs.k8spodlog.enable" \
--set device_group_props.pods.value="true" \
<release-name> logicmonitor/argus
2. Manually add the following properties to the monitored Kubernetes cluster group (or individual resources) in LogicMonitor.
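Based on the Helm values shown in option 1, the manually added properties would look like the following (set on the cluster resource group and the pods group, respectively):

```
lmlogs.k8sevent.enable = true     # on the Kubernetes cluster resource group
lmlogs.k8spodlog.enable = true    # on the pods group (or individual pod resources)
```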
You can add or edit the following entries in the Collector’s agent.conf:
We recommend that you configure filters to remove log messages that contain sensitive information (such as credit cards, phone numbers, or personal identifiers) so that they are not sent to LogicMonitor. Filters can also be used to reduce the volume of non-essential syslog log messages that are sent to the logs ingestion API queue.
The filtering criteria for Kubernetes Events are based on the fields message, reason, and type. For Kubernetes Pod logs, you can filter on the message field. Filtering criteria can be defined using keywords, regular expression patterns, specific field values, and so on. To configure filter criteria, uncomment the filtering entries in agent.conf to enable them, then edit them. For example:
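The fragment below is illustrative only — the actual entry names come from the commented filter section of your Collector's agent.conf and may differ from the names assumed here; what it shows is the shape of a field-based match and a regex-based match.

```
# Illustrative filter entries — entry names are assumptions; use the
# commented-out entries in your own agent.conf as the template.

# Drop Kubernetes Events whose reason field is exactly "BackOff":
#logsource.k8sevent.filter.1.reason=BackOff

# Drop pod log lines whose message matches a credit-card-like number:
#logsource.k8spodlog.filter.1.message=\b(?:\d[ -]?){13,16}\b
```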
If you are not receiving pod logs, restart the Collector and consider increasing the polling interval to 3-5 minutes.
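It can also help to confirm that the Collector's service account actually has the RBAC permissions added earlier. The service account name and namespace below are assumptions — substitute the ones used by your deployment:

```
# Check whether the service account may read pod logs and list events.
# "default:collector" is a placeholder for your namespace:serviceaccount.
kubectl auth can-i get pods --subresource=log \
  --as=system:serviceaccount:default:collector
kubectl auth can-i list events \
  --as=system:serviceaccount:default:collector
```

Both commands should print "yes"; a "no" means the ClusterRole edit has not taken effect for that service account.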