

Adding your Kubernetes Cluster into Monitoring

This topic discusses how to add a Kubernetes cluster to your monitored resources.

Import the most recent suite of Kubernetes DataSources from LogicMonitor’s Repository: Settings | DataSources | Add | From LogicMonitor’s Repository

On the Resources page, select Add | Kubernetes Cluster. (If you don't see this option, contact your Customer Success Manager (CSM).)

The Add Kubernetes Cluster setup wizard launches. There are three steps: (1) General Information, (2) Install Argus, and (3) Add Services.

General Information

Fill in the following general information:

Cluster Name (Required): The display name for your monitored cluster in the LogicMonitor resource tree.

API Token: LogicMonitor API tokens with sufficient permissions to add Collectors, Collector groups, devices, device groups, and dashboards, and to remove devices. Because dynamic groups are used to represent the cluster, these tokens also need permission to manage the root device group. We recommend using a dedicated API-only user for the integration; you can create one as part of this setup wizard.

Kubernetes version (Required): Defaults to Kubernetes version 1.14 or higher (>=1.14.0), which prompts LogicMonitor to provide Helm 3 installation commands on the following wizard screen. If your cluster runs a Kubernetes version older than 1.14, select "< 1.14.0" from the dropdown and LogicMonitor will provide Helm 2 instructions instead. (If you aren't sure which version your cluster runs, see the commands after this table.)

Namespace: The cluster namespace where the monitored applications are running.

Resource Group: The group under which the monitored cluster will display in the Resource tree.

Collector Group (Required): The group the Collectors will be added to. Defaults to a new dedicated Collector group.

Dashboard Group (Required): The group the dashboards will be added to. Defaults to a new dedicated Dashboard group.

Kubernetes RBAC Enabled: Toggle on if RBAC is enabled in your cluster.

Monitor etcd Hosts: Toggle on if you have etcd running external to your cluster. You will be prompted to enter a discovery token.

Proxy Access: Enable this setting if cluster applications need a proxy to reach the LogicMonitor API when adding and removing monitored resources. You will be prompted to specify a proxy server, username, and password.
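
If you need to confirm your cluster's Kubernetes version or whether RBAC is enabled before filling in these fields, the standard kubectl checks below are a minimal sketch (they are not part of the wizard itself):

kubectl version
kubectl api-versions | grep rbac.authorization.k8s.io

The first command reports the client and server versions; if the second command returns the rbac.authorization.k8s.io API group, the RBAC API is available in your cluster.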

The Collector Information section prompts you to enter the number of Collector replicas (Required), a Collector size, and a Collector escalation chain. These fields control how many containerized Collectors run with the cluster, what size those Collectors are, and where “Collector down” alerts get routed.

Install Argus

The setup wizard provides the configuration and install commands for the two applications needed for monitoring: the CollectorSet-Controller and Argus.

Select Edit Configuration to customize the YAML configuration files for the CollectorSet-Controller and Argus directly in the setup wizard. You can also select Download File to edit the configuration files locally and install them using the Kubernetes CLI (if you don't want to use the Helm install commands provided in the next step).
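
If you take the Kubernetes CLI route, installing the downloaded configurations is a straightforward kubectl apply; the file names below are placeholders for whatever the wizard downloads, not fixed names:

kubectl apply -f collectorset-controller.yaml
kubectl apply -f argus.yaml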

Select Install to see the Helm commands for installing the CollectorSet-Controller and Argus from LogicMonitor's Helm Charts. You can copy and paste these commands into your terminal to install the integration into your cluster.
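
The exact commands shown in the wizard include values specific to your account, so the Helm 3 sketch below is illustrative only; the repository URL, chart names, and --set keys are assumptions and may not match what the wizard generates:

helm repo add logicmonitor https://logicmonitor.github.io/helm-charts
helm install collectorset-controller logicmonitor/collectorset-controller -n default --set accessID=<id> --set accessKey=<key> --set account=<account>
helm install argus logicmonitor/argus -n default --set accessID=<id> --set accessKey=<key> --set account=<account> --set clusterName=<cluster name>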

If you are using OpenShift: After installing the CollectorSet-Controller via Helm, you may need to elevate the permissions of the serviceaccount for the Collector to enable the Collector install. To do this, run the following command (assuming the default namespace):

oc adm policy add-scc-to-user anyuid system:serviceaccount:default:collector
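
If you installed into a namespace other than default, substitute that namespace in the service account reference, for example:

oc adm policy add-scc-to-user anyuid system:serviceaccount:<your-namespace>:collector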

After installation completes, you can use the “Verify Connection” button to ensure that cluster resources were properly added into monitoring and that Collectors were installed. This process may take up to a minute.
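
You can also confirm from the cluster side that the integration pods started; this assumes the default namespace used elsewhere in this topic, and actual pod names depend on your release names:

kubectl get pods -n default

Look for pods whose names include argus, collectorset-controller, and collector, all in a Running state.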

Add Services

This step of the setup wizard displays only if LM Service Insight is enabled in your account.

You have the option of configuring Services for specific Kubernetes label key-value pairs. Each key-value pair you add to the table will result in a new Service that groups together all Pods and Nodes with that label assigned. Metrics will be aggregated across these grouped Pods and Nodes to provide monitoring for overall health based on that label.

New Pods and Nodes will be automatically incorporated in the Service, and terminated Pods and Nodes will be automatically removed. The aggregated Service-level data will persist regardless of changes in underlying resources.
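
If you want to preview which resources a given key-value pair would group into a Service, you can query the cluster for that label before adding it to the table; tier=frontend below is only an example pair:

kubectl get pods --all-namespaces --show-labels
kubectl get pods --all-namespaces -l tier=frontend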

Where to find the Cluster

When setup completes, you’ll see a notice for a new Resource Group, Collector Group, and Dashboard Group.

The Resource Group representing your cluster dynamically groups Nodes based on worker role, and Pods and Services based on namespace.

Data is automatically collected from the Kubernetes API for Nodes, Pods, Containers (which are automatically discovered for each Pod), and Services. Additionally, standard applications are automatically detected and monitored using LogicMonitor's existing LogicModule library (based on Active Discovery for existing modules).
