By default, LM Container deletes resources immediately. To retain deleted resources, you can configure a retention period so that resources are deleted only after the set time passes.

Retaining Deleted Kubernetes Resources

You must configure the following parameters in the LM Configuration file:

  1. Navigate to the  LM Configuration file and do the following:
    1. Specify the deleted devices’ retention period in ISO-8601 duration format using the property kubernetes.resourcedeleteafter = P1DT0H0M0S.
      For more information, see ISO-8601 duration format.
    2. lm-container adds the retention period property to the cluster resource group to set the global retention period for all the resources in the cluster. You can modify this property in the child groups within the cluster group to set different retention periods for various resource types. In addition, you can modify the property for a particular resource.
               Note: lm-container configures different retention periods for lm-container and Collectorset-Controller Pods for troubleshooting. The retention period for these Pods cannot be modified and is set to 10 days.
  2. Enter the following parameters in the lm-container-configuration.yaml to set the retention period for the resources:
argus:
  lm:
    resource:
      # Set the global retention period (delete duration) for resources.
      # Adjust this ISO-8601 duration value according to your needs.
      globalDeleteAfterDuration: "P0DT0H1M0S"
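For reference, the following are a few ISO-8601 duration values that can be used for globalDeleteAfterDuration or the kubernetes.resourcedeleteafter property (illustrative values; adjust to your own retention needs):

P0DT0H1M0S   # 1 minute
P1DT0H0M0S   # 1 day
P7DT0H0M0S   # 7 days
P30DT0H0M0S  # 30 days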

When you add your Kubernetes cluster to monitoring, dynamic groups are used to group the cluster resources. For more information, see Adding a Kubernetes Cluster to Monitoring.

Non-admin users can add Kubernetes clusters to monitoring using API keys with more granular access. These API keys should have access to at least one resource group, which provides the necessary permissions to configure monitoring for Kubernetes clusters. This significantly improves access control, as the dynamic groups are now linked to the resource groups that the API keys can access, based on view permissions.

Before non-admin users can add Kubernetes clusters to monitoring, several prerequisites need to be set up:

Enabling non-admin users to add a Kubernetes cluster to monitoring

  1. Ensure that different device groups are created for non-admin users. For more information, see Adding Resource Groups.
  2. Navigate to Settings > User Access > User and Roles.
  3. Select the Roles tab.
  4. Select the required role group and select the manage icon.
  5. In the Permissions tab, assign the required access to the Kubernetes cluster static groups.
    Note: You can create multiple users with specific roles from the Manage Role dialog box.
    When the required permissions are provided, the non-admin users can add and monitor the Kubernetes clusters within the static groups.
    Permissions page for resources as non-admin
  6. To create the required dashboard groups, in the top left of the Dashboards page, select the Add icon > Add dashboard group. Enter the required details. For more information, see Adding Dashboard Groups.
  7. To create the required collector groups, navigate to Settings > Collectors.
  8. Under the Collectors tab, select the Add Collector Options dropdown. Enter the required details. For more information, see Adding Collector Groups.
  9. Select the User Profile in the Permissions setting and grant non-admin users access to create API tokens and manage their profiles.
    User profile settings page.

After a resource group is allocated, non-admin users can add Kubernetes clusters into monitoring.

Adding a Kubernetes Cluster into Monitoring as a Non-Admin User

  1. Navigate to Resource Tree > Resources.
  2. Select the allocated resource group to add to the cluster.
  3. Select the Add icon and select Kubernetes Cluster.
    Adding kubernetes clusters page
  4. On the Add Kubernetes Cluster page, add the following information:
    1. In the Cluster Name field, enter the cluster name.
    2. In the API token field, select the allocated resource group’s API token and Save.
      The other API Token field information populates automatically.
    3. In the Resource Group field, select the allocated resource group name.
    4. In the Collector Group and Dashboard Group fields, select the allocated Resource Group.
  5. Select Next.
  6. In the Install Instruction section, select the Argus tab.
  7. Select the resourceGroupID parameter and replace the default value with the system.deviceGroupId property value of the allocated resource group (see the sketch after this list).
  8. Select Verify Connection. When the connection is successful, your cluster is added.
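As an illustration only, the replacement might look similar to the following fragment. The exact snippet and key placement come from the Argus tab of the install instructions; the value shown here is a placeholder.

argus:
  resourceGroupID: "12345"  # hypothetical example - replace with the system.deviceGroupId value of the allocated resource group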

In scenarios such as a new installation or a cache refresh, the entire Kubernetes cluster gets synchronized with the portal. This process results in a large number of devices being added to the cluster unnecessarily, adding strain to the system.

During the default installation, all resources are monitored by LM Container. To address the above-mentioned issue, optimize system performance, and reduce unnecessary resource load, several default configurations are applied to filter or disable specific resources so that only the essential components are actively monitored.

Resources Disabled for Monitoring by Default

Below is a list of Kubernetes resources that will be disabled by default but can be enabled if you choose:

  1. ResourceQuotas
  2. LimitRanges
  3. Roles
  4. RoleBindings
  5. NetworkPolicies
  6. ConfigMaps
  7. ClusterRoleBindings
  8. ClusterRoles
  9. PriorityClasses
  10. StorageClasses
  11. CronJobs
  12. Jobs
  13. Endpoints
  14. Secrets
  15. ServiceAccounts

Changes from Upgrading Helm Chart

In case of an upgrade from an older version of the lm-container Helm chart to this version, the following applies:

LM Container allows you to configure the underlying collector through Helm chart configuration values. The collector is responsible for collecting metrics and logs from the cluster resources using the configuration specification format of the collector. For more information, see agent.conf.
You must use the Helm chart configuration to set up the collector. This ensures a permanent configuration, unlike the manual configuration on the collector pod. For example, a pod restart operation can erase the configured state and revert the configuration to the default state, making Helm chart configuration a more reliable option.

Requirements for Managing Properties on Docker Collector

Ensure you have LM Container Helm Charts 4.0.0 or later installed.

Adding agent.conf Properties on your Docker Collector

  1. Open and edit the lm-container-configuration.yaml file.
  2. Under the agentConf section, do the following:
    1. In the value or values parameter, enter the configuration value.
    2. (Optional) Set the dontOverride property to true to append the specified values to the existing list of property values. By default, the value is false.
    3. (Optional) In the coalesceFormat property, specify the CSV format.
    4. (Optional) Set the discrete property to true to pass a different value to each collector in the set, using the item in the values array that corresponds to each collector's index.
      The following is an example of these values.
argus:
  collector:
    collectorConf:
      agentConf:
        - key: <Property Key>
          value: <Property Value>
          values: <Property values list/map>
          dontOverride: true/false
          coalesceFormat: csv
          discrete: true/false
  3. Run the following Helm upgrade command:
helm upgrade \
    --reuse-values \
    --namespace=<namespace> \
    -f lm-container-configuration.yaml \
    lm-container logicmonitor/lm-container

Example of Adding Identical Configurations 

You can apply identical configurations to each collector in the set. The following are examples of input properties in the lm-container-configuration.yaml file:

key: EnforceLogicMonitorSSL
value: false
key: collector.defines
value:
  - ping
  - script
  - snmp
  - webpage
coalesceFormat: csv

If you have not set dontOverride to true, the resultant property in agent.conf displays as collector.defines=ping,script,snmp,webpage. If you have set dontOverride to true, the specified values are appended to the existing configuration values and the resultant property displays as collector.defines=ping,script,snmp,webpage,jdbc,perfmon,wmi,netapp,jmx,datapump,memcached,dns,esx,xen,udp,tcp,cim,awscloudwatch,awsdynamodb,awsbilling,awss3,awssqs,batchscript,sdkscript,openmetrics,syntheticsselenium.
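For context, the following sketch shows how these example entries could be placed in the lm-container-configuration.yaml file, using the agentConf structure shown earlier (values are illustrative):

argus:
  collector:
    collectorConf:
      agentConf:
        # Single value applied identically to every collector in the set
        - key: EnforceLogicMonitorSSL
          value: false
        # List of values coalesced into a CSV string; set dontOverride to true
        # to append to the existing collector.defines list instead of replacing it
        - key: collector.defines
          value:
            - ping
            - script
            - snmp
            - webpage
          coalesceFormat: csv
          dontOverride: false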

Example of Adding Discrete Configurations 

You can define different property values for each collector using the discrete flag. The following examples show how the properties display in the agent.conf file:

key: logger.watchdog
discrete: true
values:
  - debug
  - info
  - info

The above configuration enables debug logs on the first collector (index 0) of the three collectors and info logs on the remaining two.

key: ssh.preferredauthentications
discrete: true
values:
  - - publickey
    - keyboard-interactive
    - password
  - - password
    - keyboard-interactive
    - publickey
  - - publickey
    - keyboard-interactive
    - password
coalesceFormat: csv

Assuming you have 3 replicas, the resultant property for the individual collectors displays as follows:
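(A sketch of the expected values; the exact rendering in agent.conf may differ.)

# Collector 0
ssh.preferredauthentications=publickey,keyboard-interactive,password
# Collector 1
ssh.preferredauthentications=password,keyboard-interactive,publickey
# Collector 2
ssh.preferredauthentications=publickey,keyboard-interactive,password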

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Kubernetes Secrets allows you to configure the Kubernetes cluster to use sensitive data (such as passwords) without writing the password in plain text into the configuration files. For more information, see Secrets from Kubernetes documentation.

Note: If you are using secrets on your LM Container, granting manage permission might reveal your encoded configuration data.

Requirements for Configuring User Defined Secrets in LM Container

Ensure you have LM Container Helm Charts version 5.0.0 or later.

Configuring User Defined Secrets for your Kubernetes Clusters in LM Containers

Creating a Secret involves storing the data as key-value pairs. To create Secrets, do the following:

  1. Create the secrets.yaml file with the Opaque secret type and Base64-encoded values, similar to the following example.

Note: When you use the data field, the accessID, accessKey, and account field values must be Base64-encoded.

apiVersion: v1
data:
  accessID: NmdjRTNndEU2UjdlekZhOEp2M2Q=
  accessKey: bG1hX1JRS1MrNFUtMyhrVmUzLXE0Sms2Qzk0RUh7aytfajIzS1dDcUxQREFLezlRKW1KSChEYzR+dzV5KXo1UExNemxoT0RWa01XTXROVEF5TXkwME1UWmtMV0ZoT1dFdE5XUmpOemd6TlROaVl6Y3hMM2oyVGpo
  account: bG1zYWdhcm1hbWRhcHVyZQ==
  etcdDiscoveryToken: ""
kind: Secret
metadata:
  name: user-provided-secret
  namespace: default
type: Opaque
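If you need to produce the Base64-encoded values yourself, you can encode each plain-text value before adding it to the data field, for example (assuming a shell with the base64 utility; the sample value is the illustrative access ID used below):

echo -n "6gcE3gtE6R7ezFa8Jv3d" | base64
# NmdjRTNndEU2UjdlekZhOEp2M2Q=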

Alternatively, create the secrets.yaml file with the Opaque secret type using the stringData field (plain-text values), similar to the following example.
apiVersion: v1
stringData:
  accessID: "6gcE3gtE6R7ezFa8Jv3d"
  accessKey: "lma_RQKS+4U-3(kVe3-q4Jk6C94EH{k+_j23KWCqLPDAK{9Q)mJH(Dc4~w5y)z5PLMzlhODVkMWMtNTAyMy00MTZkLWFhOWEtNWRjNzgzNTNiYzcxL3j2Tjh"
  account: "lmadminuser"
  etcdDiscoveryToken: ""
kind: Secret
metadata:
  name: user-provided-secret
  namespace: default
type: Opaque
  2. Enter the accessID, accessKey, and account field values.

Note: If you have an existing cluster, enter the same values used while creating Kubernetes Cluster.

  3. Save the secrets.yaml file.
  4. Open and edit the lm-container-configuration.yaml file.
  5. Enter a new field userDefinedSecret with the required value similar to the following example.

Note: The value for userDefinedSecret must be the same as the newly created secret name.

argus:
  clusterName: secret-cluster
global:
  accessID: ""
  accessKey: ""
  account: ""
  userDefinedSecret: "user-provided-secret"
  6. Save the lm-container-configuration.yaml file.
  7. In your terminal, enter the following command:
kubectl apply -f secrets.yaml -n <namespace_where_lm_container_will_be_installed>

Note: Once you apply the secrets and install the LM Container, delete the accessID, accessKey, and account field values in the lm-container-configuration.yaml for security reasons.
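If you are updating an existing installation, you can then apply the updated configuration file to the release with a Helm upgrade similar to the command shown earlier (replace <namespace> with the namespace where LM Container is installed):

helm upgrade \
    --reuse-values \
    --namespace=<namespace> \
    -f lm-container-configuration.yaml \
    lm-container logicmonitor/lm-container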

The following table displays the Secrets fields:

Field Name                         Field Type    Description
accessID                           mandatory     LM access ID
accessKey                          mandatory     LM access key
account                            mandatory     LM account name
argusProxyPass                     optional      Argus proxy password
argusProxyUser                     optional      Argus proxy username
collectorProxyPass                 optional      Collector proxy password
collectorProxyUser                 optional      Collector proxy username
collectorSetControllerProxyPass    optional      Collectorset-Controller proxy password
collectorSetControllerProxyUser    optional      Collectorset-Controller proxy username
etcdDiscoveryToken                 optional      etcd discovery token
proxyPass                          optional      Global proxy password
proxyUser                          optional      Global proxy username

Example of Secrets with Proxy Details for Kubernetes Cluster

The following secrets.yaml file displays user-defined secrets with the proxy details:

apiVersion: v1
data:
  accessID:
  accessKey:
  account:
  etcdDiscoveryToken:
  proxyUser:
  proxyPass:
  argusProxyUser:
  argusProxyPass:
  cscProxyUser:
  cscProxyPass:
  collectorProxyUser:
  collectorProxyPass:
kind: Secret
metadata:
  name: user-provided-secret
  namespace: default
type: Opaque

There are two types of proxies: a global proxy and a component-level proxy. When you provide a global proxy, it applies to all components (Argus, Collectorset-Controller, and the collector). When you add both a component-level proxy and a global proxy, the component-level proxy takes precedence. For example, if you add a collector proxy and a global proxy, the collector proxy is applied to the collector, and the global proxy is applied to the other components (Argus and Collectorset-Controller).

The following is an example of the lm-container-configuration.yaml file:

global:
  accessID: ""
  accessKey: ""
  account: ""
  userDefinedSecret: <secret-name>
  proxy: 
    url: "proxy_url_here"

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that helps you run Kubernetes on AWS. Using Amazon EKS, you can run Kubernetes without installing and operating a Kubernetes control plane or worker nodes.

LogicMonitor helps you to monitor your Amazon EKS environments in real-time. For more information, see What is Amazon EKS from Amazon documentation. 
LogicMonitor officially supports running LM Container Kubernetes Monitoring on AWS Bottlerocket OS. For more information, see Bottlerocket from AWS documentation.

Requirements for Monitoring EKS Cluster

Setting up Amazon EKS Cluster

You don’t need separate installations on your server to monitor the Amazon EKS cluster, since LogicMonitor already integrates with Kubernetes and AWS. For more information on LM Container installation, see Installing the LM Container Helm Chart or Installing LM Container Chart using CLI.

Amazon EKS Cluster Dashboards

You don’t need to create any separate Amazon EKS cluster dashboards. If you have integrated LogicMonitor with Kubernetes and AWS, the Amazon EKS cluster data will display on the relevant dashboards.

Amazon EKS Cluster Dashboard

Kubernetes Monitoring Considerations

Kubernetes Monitoring Dependencies

The following image displays how LogicMonitor’s application runs in your cluster as a pod.

Kubernetes Dependency workflow diagram

Note: LogicMonitor’s Kubernetes monitoring integration is an add-on feature called LM Container. You may contact your Customer Success Manager (CSM) for more information.

LogicMonitor Portal Permissions

Kubernetes Cluster Permissions 

The following are the minimum permissions required to install LM Container.

To create the ClusterRole, do the following:

  1. Create and save a cluster-role.yaml file with the following configuration:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lm-container-min-permissions
rules:
- apiGroups:
  - ""
  resources:
  - "*"
  verbs:
  - get
  - list
  - create
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - "*"
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  - replicasets
  verbs:
  - get
  - list
  - create
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterroles
  - clusterrolebindings
  - roles
  - rolebindings
  verbs:
  - "*"
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - "*"
- apiGroups:
  - "*"
  resources:
  - collectorsets
  verbs:
  - "*"
  2. Enter the following commands:
kubectl apply -f cluster-role.yaml
kubectl create clusterrolebinding role-binding-view-only --clusterrole view  --user <user-name>
kubectl create clusterrolebinding role-binding-lm-container --clusterrole lm-container-min-permissions --user <user-name>
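Optionally, you can spot-check that the bindings grant the expected access using kubectl auth can-i (illustrative checks; substitute the same user name you bound above):

kubectl auth can-i create deployments --as <user-name>
kubectl auth can-i list configmaps --as <user-name>
kubectl auth can-i create customresourcedefinitions --as <user-name>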

For more information on LM Container installation, see Installing the LM Container Helm Chart or Installing LM Container Chart using CLI.

The following table provides guidelines on provisioning resources for LogicMonitor components to have optimum performance and reliable monitoring for the Kubernetes cluster:

Collector Size                                Medium                        Large
Maximum Resources with 1 collector replica    1300 resources                3600 resources
Argus and CSC Version                         Argus v7.1.2, CSC v3.1.2      Argus v7.1.2, CSC v3.1.2
Collector Version                             GD 33.001                     GD 33.002

Recommended Argus Limits & Requests
CPU Requests                                  0.256 core                    0.5 core
CPU Limits                                    0.5 core                      1 core
Memory Requests                               250MB                         500MB
Memory Limits                                 500MB                         1GB

Recommended Collectorset-Controller Limits & Requests
CPU Requests                                  0.02 core                     0.02 core
CPU Limits                                    0.05 core                     0.05 core
Memory Requests                               150MB                         150MB
Memory Limits                                 200MB                         200MB

Example of Collector Configuration for Resource Sizing

Let’s say you have about 3100 resources to monitor. You need a single replica of a large collector, with the compatible versions displayed in the above table, to monitor your resources. You can configure the collector size and replica count in the configuration.yaml file as follows:

argus:
  collector:
    size: large
    replicas: 1

Note: In the size field, enter the required collector size (large or medium), and in the replicas field, enter the number of required collector replicas.

Specifying Resource Limits for Collectorset-Controller and Argus Pod

You can enforce central processing unit (CPU) and memory constraints on your Collectorset-Controller, Argus Pod, and Collector. 
The following is an example of the collectorset-controller.resources parameter in the lm-container configuration YAML file:

collectorset-controller:
  resources:
    limits:
      cpu: "1000m"
      memory: "1Gi"
      ephemeral-storage: "100Mi"
    requests:
      cpu: "1000m"
      memory: "1Gi"
      ephemeral-storage: "100Mi"

The following is an example of the argus.resources parameter in the lm-container configuration YAML file:

argus:
  resources:
    limits:
      cpu: "1000m"
      memory: "1Gi"
      ephemeral-storage: "100Mi"
    requests:
      cpu: "1000m"
      memory: "1Gi"
      ephemeral-storage: "100Mi"

The API server is the front end of the Kubernetes Control Plane. It exposes the HTTP API interface, allowing you, other internal components of Kubernetes, and external components to establish communication. For more information, see Kubernetes API from Kubernetes documentation. 

The following are the benefits of using Kubernetes API Server: 

Use Case for Monitoring Kubernetes API Server

Consider a cluster consisting of two nodes, Node 1 and Node 2, that constitute the cluster’s Control Plane. The Kubernetes API Server plays a crucial role, consistently interacting with several services within the Control Plane. Its primary function is to schedule and monitor the status of workloads and execute the appropriate measures to maintain continuous operation and prevent downtime. If a network or system issue leads to Node 1’s failure, the system autonomously migrates the workloads to Node 2, while the affected Node 1 is promptly removed from the cluster. Given the Kubernetes API Server’s vital role in the cluster, the operational efficiency of the cluster relies heavily on robust monitoring of this component.

Requirements for Monitoring Kubernetes API Server

Setting up Kubernetes API Server Monitoring

Installation

You don’t need any separate installation on your server to monitor the Kubernetes API Server. For more information on LM Container installation, see Installing the LM Container Helm Chart or Installing LM Container Chart using CLI.

Configuration

The Kubernetes API Server is pre-configured for monitoring. No additional configurations are required.

Viewing Kubernetes API Server Details

Once Kubernetes API Server monitoring is installed and configured, you can view all the relevant data on the Resources page.

  1. In LogicMonitor, navigate to Resources and select the required DataSource resource.
    Resource tree API
  2. Select the Info tab to view the different properties of the Kubernetes API Server.
    API server Info tab screen
  3. Select the Alerts tab to view the alerts generated while checking the status of the Kubernetes API Server resource.
  4. Select the Graphs tab to view the status or the details of the Kubernetes API Server in graphical format.
    API Server Graph tab screen
  5. Select the Alert Tuning tab to view the datapoints on which the alerts are generated.
    API Server alert tuning tab screen
  6. Select the Raw Data tab to view all the data returned for the defined instances.
    API server raw data tab screen

Creating Kubernetes API Server Dashboards

You can create out-of-the-box dashboards for monitoring the status of the Kubernetes API Server.

Requirement

Download the Kubernetes_API_Server.JSON file.

Procedure

  1. Navigate to Dashboards > Add.
  2. From the Add drop-down list, select From File.
  3. Import the downloaded Kubernetes_API_Server.JSON file to add the Kubernetes API Server dashboard and select Submit.
  4. On the Add Dashboard from JSON File dialog box, enter values in the Name and the Dashboard Group fields.
    Add file API server dashboard screen
  5. Select Save. 
    On the Dashboards page, you can now see the newly created Kubernetes API Server dashboard.
    Kubernetes API Server dashboard

A Kubernetes cluster consists of machines (nodes) that are divided into worker nodes and control plane nodes. Worker nodes host your Pods and the applications within them, whereas the control plane manages the worker nodes and the Pods in the cluster. The Control Plane is an orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. For more information, see Kubernetes Components from Kubernetes documentation.

The Kubernetes Control Plane components consist of the following:

The following image displays the different Kubernetes Control Plane components and their connection to the Kubernetes Cluster.

Control Plane workflow diagram
