As more and more IT organizations move toward containerized workloads and services, it is more important than ever to have insight into those containers and the services running within them. Leading the container orchestration charge is Kubernetes (aka K8s – the 8 represents the letters omitted from the middle of the word). In fact, roughly two-thirds of IT engineers saw their Kubernetes adoption increase during the pandemic as the need for scaling and performance grew. With great power comes great responsibility, so what can help? In this blog we’ll recap LogicMonitor’s Kubernetes Monitoring solution and go into detail on how to configure LM Logs to view Kubernetes logs, Kubernetes events, and pod logs, providing a holistic view into K8s by correlating both metrics and logs.
LogicMonitor’s Kubernetes Monitoring integration relies on an application that can be installed via Helm. It runs as one pod in your cluster, which means there’s no need to install an agent on every node. This application listens to the Kubernetes event stream and uses LogicMonitor’s API to ensure that your monitoring is always up to date, even as the resources within the cluster change. Data is automatically collected for Kubernetes nodes, pods, containers, services, and master components (e.g. scheduler, api-server, controller-manager).
Pre-configured alert thresholds provide meaningful alerts out of the box. Applications are automatically detected and monitored based on best practices, using LogicMonitor’s extensive library of monitoring templates. This means that you’ll get instant visibility into your containerized applications without the hassle of editing configuration files, having to rely on matching container image names, or manually configuring monitoring. Additionally, LogicMonitor retains this granular performance data for up to two years. Combined, these benefits enable LogicMonitor to monitor your Kubernetes clusters with fewer required tools and processes than alternative solutions. Check out LM’s Kubernetes Best Practices blog for more info.
What Can Logs Do For You?
Now that we’ve seen how LogicMonitor helps monitor Kubernetes, let’s look at gaining even more insight with Kubernetes logs! With LM Logs, you can ingest Kubernetes logs, Kubernetes events, and pod logs to capture everything from the K8s logs themselves to pod events like pod creation and removal. The two collection methods differ, but both are simple to configure and take only a few minutes to get up and running.
Ingesting Kubernetes Logs
For Kubernetes logs, we recommend using the lm-logs Helm chart configuration (which is provided as part of the Kubernetes integration). You can install and configure the LogicMonitor Kubernetes integration to forward your Kubernetes logs to the LM Logs ingestion API.
To get started, you will need a LogicMonitor API Token to authenticate all requests to the log ingestion API. Also, be sure to have a LogicMonitor Collector installed and monitoring your Kubernetes Cluster.
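If you ever need to call the log ingestion API directly from a script (outside the Helm chart, which handles authentication for you), requests are signed with an LMv1 API token. Below is a minimal sketch of building the Authorization header; the `/log/ingest` resource path and the placeholder credentials are assumptions for illustration:

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(access_id: str, access_key: str,
                     http_verb: str, resource_path: str, data: str = "") -> str:
    """Build an LMv1 Authorization header for a LogicMonitor API request."""
    epoch_ms = str(int(time.time() * 1000))
    # Sign verb + timestamp + request body + resource path with the access key,
    # then base64-encode the hex HMAC-SHA256 digest.
    message = http_verb + epoch_ms + data + resource_path
    digest = hmac.new(access_key.encode(), message.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"

# Placeholder credentials for illustration only
header = lmv1_auth_header("<lm_access_id>", "<lm_access_key>", "POST", "/log/ingest")
print(header)
```

The header goes in the `Authorization` field of the request; the same token pair is what you pass to the Helm chart below.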
- Add the LogicMonitor Helm repository:
helm repo add logicmonitor https://logicmonitor.github.io/k8s-helm-charts
If you already have the LogicMonitor Helm repository, you should update it to get the latest charts:
helm repo update
- Install the lm-logs chart, filling in the required values:
helm install -n <namespace> \
  --set lm_company_name="<lm_company_name>" \
  --set lm_access_id="<lm_access_id>" \
  --set lm_access_key="<lm_access_key>" \
  lm-logs logicmonitor/lm-logs
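If you’d rather not pass credentials as --set flags, the same settings can live in a values file and be passed with -f. A sketch, assuming the key names from the command above:

```yaml
# values.yaml — keys mirror the --set flags above
lm_company_name: "<lm_company_name>"
lm_access_id: "<lm_access_id>"
lm_access_key: "<lm_access_key>"
```

Then install with: helm install -n <namespace> -f values.yaml lm-logs logicmonitor/lm-logs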
Collecting and Forwarding Kubernetes Events
You can configure the LogicMonitor Collector to receive and forward Kubernetes Cluster events and Pod logs from a monitored Kubernetes cluster or cluster group.
To get started, you will need LM EA Collector 30.100 or later installed, LogicMonitor’s Kubernetes Monitoring deployed, and access to the resources (events or pods) to collect logs from.
Enable Events and Logs Collection
There are two options for enabling events and logs collection:
- Modify the Helm deployment for Argus to enable events collection (the recommended method).
helm upgrade --reuse-values \
  --set device_group_props.cluster.name="lmlogs.k8sevent.enable" \
  --set device_group_props.cluster.value="true" \
  --set device_group_props.pods.name="lmlogs.k8spodlog.enable" \
  --set device_group_props.pods.value="true" \
  argus logicmonitor/argus
- Manually add the following properties to the monitored Kubernetes cluster group (or individual resources) in LogicMonitor.
| Property | Description |
| --- | --- |
| lmlogs.k8sevent.enable=true | Sends events from pods, deployments, services, nodes, and so on to LM Logs. When false, ignores events. |
| lmlogs.k8spodlog.enable=true | Sends pod logs to LM Logs. When false, ignores logs from pods. |
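If you manage Argus through a values file rather than --set flags, the recommended option above might look like the following sketch (the device_group_props layout is inferred from the flags and may differ by chart version):

```yaml
# values.yaml fragment — mirrors the helm upgrade flags above
device_group_props:
  cluster:
    name: "lmlogs.k8sevent.enable"
    value: "true"
  pods:
    name: "lmlogs.k8spodlog.enable"
    value: "true"
```

Apply it with: helm upgrade --reuse-values -f values.yaml argus logicmonitor/argus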
Filtering Out Logs
With Kubernetes events and pod logs being ingested into LM Logs, you may want to filter out entries that match certain criteria. To configure filtering, open the Collector’s agent.conf file, find the filter you want, uncomment the line to enable it, and then edit the filter entries.
Kubernetes event filters operate on the fields: message, reason, and type.
- For example, to send Kubernetes events of type=Normal, comment out the line: logsource.k8sevent.filter.1.type.notequal=Normal
Kubernetes pod log filters operate on the message field. Filtering criteria can be defined using keywords, a regular expression pattern, specific field values, and so on.
- For example, to filter out INFO-level pod logs, uncomment or add the line: logsource.k8spodlog.filter.1.message.notcontain=INFO
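Putting the examples together, the relevant section of agent.conf might look like the following sketch (property names are taken from the examples above; surrounding defaults vary by Collector version):

```
# Kubernetes event filters (fields: message, reason, type)
# Commented out, so events of type=Normal are forwarded:
#logsource.k8sevent.filter.1.type.notequal=Normal

# Kubernetes pod log filters (field: message)
# Enabled, so pod log lines containing INFO are dropped:
logsource.k8spodlog.filter.1.message.notcontain=INFO
```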
Lumber Bob’s Wrap Up
Once Kubernetes Monitoring and Logging are configured, you will have a full view of your Kubernetes clusters with both metrics and logs. LM’s out-of-the-box Kubernetes dashboards and visualizations provide visibility into cluster health, and the logs are associated with each resource. Simply click “View Logs” in the widget options to be brought to the LM Logs page with the filters and time range carried over, so you can continue viewing the data in context.
But what about alerts? Every LM Alert is enhanced with LM Logs’ Anomaly Detection to help surface potentially problematic logs and reduce troubleshooting time. Next time you receive a Kubernetes alert, wouldn’t you want a small subset of logs to review to help narrow down the root cause? LM Logs does all that, providing more insight and visibility into increasingly complex architectures.