When you add your Kubernetes cluster to monitoring, dynamic groups are used to group the cluster resources. For more information, see Adding a Kubernetes Cluster to Monitoring.

Non-admin users can add Kubernetes clusters to monitoring using API keys with more granular access. These API keys should have access to at least one resource group, which provides the necessary permissions to configure monitoring for Kubernetes clusters. This significantly improves access control, as the dynamic groups are now linked to the resource groups that the API keys can access, based on view permissions.

Before non-admin users can add Kubernetes clusters to monitoring, several prerequisites need to be set up:

Enabling non-admin users to add a Kubernetes cluster to monitoring

  1. Ensure that dedicated resource groups are created for non-admin users. For more information, see Adding Resource Groups.
  2. Navigate to Settings > User Access > Users and Roles.
  3. Select the Roles tab.
  4. Select the required role group and select the Manage icon.
  5. In the Permissions tab, assign the required access to the Kubernetes cluster static groups.
    Note: You can create multiple users with specific roles from the Manage Role dialog box.
    Once the required permissions are granted, non-admin users can add and monitor Kubernetes clusters within the static groups.
    Permissions page for resources as non-admin
  6. To create the required dashboard groups, in the top left of the Dashboards page, select the Add icon > Add dashboard group. Enter the required details. For more information, see Adding Dashboard Groups.
  7. To create the required collector groups, navigate to Settings > Collectors.
  8. Under the Collectors tab, select the Add Collector Options dropdown. Enter the required details. For more information, see Adding Collector Groups.
  9. In the Permissions settings, select User Profile and grant non-admin users access to create API tokens and manage their profiles.
    User profile settings page.

After a resource group is allocated, non-admin users can add Kubernetes clusters into monitoring.

Adding a Kubernetes Cluster into Monitoring as a Non-Admin User

  1. Navigate to Resource Tree > Resources.
  2. Select the allocated resource group to which you want to add the cluster.
  3. Select the Add icon and select Kubernetes Cluster.
    Adding kubernetes clusters page
  4. On the Add Kubernetes Cluster page, add the following information:
    1. In the Cluster Name field, enter the cluster name.
    2. In the API Token field, select the allocated resource group’s API token and select Save.
      The other API Token field information populates automatically.
    3. In the Resource Group field, select the allocated resource group name.
    4. In the Collector Group and Dashboard Group fields, select the allocated collector group and dashboard group.
  5. Select Next.
  6. In the Install Instruction section, select the Argus tab.
  7. Select the resourceGroupID parameter and replace the default value with the system.deviceGroupId property value of the allocated resource group, as shown in the sketch after these steps.
  8. Select Verify Connection. When the connection is successful, your cluster is added.
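
For reference, the following is a minimal sketch of how that fragment of the Argus configuration might look. The nesting under the argus key is an assumption based on the Helm values shown later in this guide, and 1234 is a placeholder for your resource group's system.deviceGroupId value:

argus:
  # Replace the default with the system.deviceGroupId of the allocated resource group
  resourceGroupID: 1234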

In scenarios such as a new installation or a cache refresh, the entire Kubernetes cluster is synchronized with the portal. This process can add a large number of devices to the cluster unnecessarily, straining the system.

During the default installation, all resources are monitored by the lm-container. To address the above-mentioned issue and optimize system performance, several default configurations filter or disable specific resources from monitoring, ensuring that only the essential components are actively monitored.

Resources Disabled for Monitoring by Default

Below is a list of Kubernetes resources that are disabled by default but can be enabled if you choose (see the sketch after this list):

  1. ResourceQuotas
  2. LimitRanges
  3. Roles
  4. RoleBindings
  5. NetworkPolicies
  6. ConfigMaps
  7. ClusterRoleBindings
  8. ClusterRoles
  9. PriorityClasses
  10. StorageClasses
  11. CronJobs
  12. Jobs
  13. Endpoints
  14. Secrets
  15. ServiceAccounts
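
The Helm values that control these toggles vary by chart version; the following is a minimal sketch assuming a hypothetical argus.monitoring.disable list, so verify the actual key names against your chart's values.yaml:

argus:
  monitoring:
    # Hypothetical key: resource types excluded from monitoring by default.
    # Remove an entry (for example, configmaps) to re-enable monitoring for it.
    disable:
      - resourcequotas
      - limitranges
      - configmaps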

Changes from Upgrading Helm Chart

If you upgrade from an older version of the lm-container Helm chart to this version, the following applies:

LM Container allows you to configure the underlying collector through Helm chart configuration values. The collector is responsible for collecting metrics and logs from the cluster resources using the configuration specification format of the collector. For more information, see agent.conf.
You must use the Helm chart configuration to set up the collector. This ensures a persistent configuration, unlike manual configuration on the collector pod: a pod restart can erase the configured state and revert the configuration to the default state, making the Helm chart configuration the more reliable option.

Requirements for Managing Properties on Docker Collector

Ensure you have LM Container Helm Charts 4.0.0 or later installed.

Adding agent.conf Properties on your Docker Collector

  1. Open and edit the lm-container-configuration.yaml file.
  2. Under the agentConf section, do the following:
    1. In the value or values parameter, enter the configuration value.
    2. (Optional) Set the dontOverride property to true to add the property values to the existing list. By default, the value is false.
    3. (Optional) In the coalesceFormat property, specify csv to join the values list into a comma-separated string.
    4. (Optional) Set the discrete property to true to pass a separate entry from the values array to each collector in the set.
      The following is an example of these values.
argus:
  collector:
    collectorConf:
      agentConf:
        - key: <Property Key>
          value: <Property Value>
          values: <Property values list/map>
          dontOverride: true/false
          coalesceFormat: csv
          discrete: true/false
  3. Run the following Helm upgrade command:
helm upgrade \
    --reuse-values \
    --namespace=<namespace> \
    -f lm-container-configuration.yaml \
    lm-container logicmonitor/lm-container

Example of Adding Identical Configurations 

You can apply identical configurations to each collector in the set. The following examples show the input properties in the lm-container-configuration.yaml file:

key: EnforceLogicMonitorSSL
value: false

key: collector.defines
value:
  - ping
  - script
  - snmp
  - webpage
coalesceFormat: csv

The resultant property displays as collector.defines=ping,script,snmp,webpage in the agent.conf file when you have not set dontOverride to true. If you set dontOverride to true, the configured values are combined with the existing list, and the resultant property displays as collector.defines=ping,script,snmp,webpage,jdbc,perfmon,wmi,netapp,jmx,datapump,memcached,dns,esx,xen,udp,tcp,cim,awscloudwatch,awsdynamodb,awsbilling,awss3,awssqs,batchscript,sdkscript,openmetrics,syntheticsselenium.
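
For context, these fragments sit under the agentConf structure shown earlier. The following sketch combines both fragments into a complete lm-container-configuration.yaml excerpt, using the values key for the list form per the parameter descriptions above:

argus:
  collector:
    collectorConf:
      agentConf:
        # Overrides the property with a single value
        - key: EnforceLogicMonitorSSL
          value: false
        # Joins the list into a CSV string: collector.defines=ping,script,snmp,webpage
        - key: collector.defines
          values:
            - ping
            - script
            - snmp
            - webpage
          coalesceFormat: csv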

Example of Adding Discrete Configurations 

You can define different property values on each collector using the discrete flag. The following examples show the input properties and how the resulting values display in the agent.conf file:

key: logger.watchdog
discrete: true
values:
  - debug
  - info
  - info

The above configuration enables debug logs on the first collector of the three (index 0) and info logs on the remaining two.
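
With three collector replicas, the corresponding agent.conf entries would be:

Collector 0: logger.watchdog=debug
Collector 1: logger.watchdog=info
Collector 2: logger.watchdog=info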

key: ssh.preferredauthentications
discrete: true
values:
  - - publickey
    - keyboard-interactive
    - password
  - - password
    - keyboard-interactive
    - publickey
  - - publickey
    - keyboard-interactive
    - password
coalesceFormat: csv

Assuming you have 3 replicas, the resultant property for the individual collectors displays as follows:
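
Collector 0: ssh.preferredauthentications=publickey,keyboard-interactive,password
Collector 1: ssh.preferredauthentications=password,keyboard-interactive,publickey
Collector 2: ssh.preferredauthentications=publickey,keyboard-interactive,password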

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Kubernetes Secrets allows you to configure the Kubernetes cluster to use sensitive data (such as passwords) without writing the password in plain text into the configuration files. For more information, see Secrets from Kubernetes documentation.

Note: If you are using secrets on your LM Container, granting manage permission might reveal your encoded configuration data.

Requirements for Configuring User Defined Secrets in LM Container

Ensure you have LM Container Helm Charts version 5.0.0 or later.

Configuring User Defined Secrets for your Kubernetes Clusters in LM Containers

Creating a Secret involves using key-value pairs to store the data. To create Secrets, do the following:

  1. Create the secrets.yaml file with the Opaque secret type and data values encoded in Base64 format, similar to the following example.

Note: When you use the data field, the accessID, accessKey, and account field values must be Base64-encoded.

apiVersion: v1
data:
  accessID: NmdjRTNndEU2UjdlekZhOEp2M2Q=
  accessKey: bG1hX1JRS1MrNFUtMyhrVmUzLXE0Sms2Qzk0RUh7aytfajIzS1dDcUxQREFLezlRKW1KSChEYzR+dzV5KXo1UExNemxoT0RWa01XTXROVEF5TXkwME1UWmtMV0ZoT1dFdE5XUmpOemd6TlROaVl6Y3hMM2oyVGpo
  account: bG1zYWdhcm1hbWRhcHVyZQ==
  etcdDiscoveryToken: ""
kind: Secret
metadata:
  name: user-provided-secret
  namespace: default
type: Opaque
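
If you assemble the data-based Secret by hand, you can generate the Base64-encoded values with the standard base64 utility; echo -n prevents a trailing newline from being encoded (shown here with a sample access ID):

echo -n "6gcE3gtE6R7ezFa8Jv3d" | base64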

or

  1. Alternatively, create the secrets.yaml file with an Opaque secret that uses the stringData field with plain-text values, similar to the following example.
apiVersion: v1
stringData:
  accessID: "6gcE3gtE6R7ezFa8Jv3d"
  accessKey: "lma_RQKS+4U-3(kVe3-q4Jk6C94EH{k+_j23KWCqLPDAK{9Q)mJH(Dc4~w5y)z5PLMzlhODVkMWMtNTAyMy00MTZkLWFhOWEtNWRjNzgzNTNiYzcxL3j2Tjh"
  account: "lmadminuser"
  etcdDiscoveryToken: ""
kind: Secret
metadata:
  name: user-provided-secret
  namespace: default
type: Opaque
  2. Enter the accessID, accessKey, and account field values.

Note: If you have an existing cluster, enter the same values used while creating Kubernetes Cluster.

  3. Save the secrets.yaml file.
  4. Open and edit the lm-container-configuration.yaml file.
  5. Enter a new field userDefinedSecret with the required value similar to the following example.

Note: The value for userDefinedSecret must be the same as the newly created secret name.

argus:
  clusterName: secret-cluster
global:
  accessID: ""
  accessKey: ""
  account: ""
  userDefinedSecret: "user-provided-secret"
  6. Save the lm-container-configuration.yaml file.
  7. In your terminal, enter the following command:
kubectl apply -f secrets.yaml -n <namespace_where_lm_container_will_be_installed>

Note: Once you apply the secrets and install the LM Container, delete the accessID, accessKey, and account field values in the lm-container-configuration.yaml for security reasons.
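
To confirm that the Secret exists in the target namespace before installing the LM Container, you can use standard kubectl commands, for example:

kubectl get secret user-provided-secret -n <namespace>
kubectl describe secret user-provided-secret -n <namespace>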

The following table displays the Secrets fields:

Field Name | Field Type | Description
accessID | mandatory | LM access ID
accessKey | mandatory | LM access key
account | mandatory | LM account name
argusProxyPass | optional | Argus proxy password
argusProxyUser | optional | Argus proxy username
collectorProxyPass | optional | Collector proxy password
collectorProxyUser | optional | Collector proxy username
collectorSetControllerProxyPass | optional | Collectorset-Controller proxy password
collectorSetControllerProxyUser | optional | Collectorset-Controller proxy username
etcdDiscoveryToken | optional | etcd discovery token
proxyPass | optional | Global proxy password
proxyUser | optional | Global proxy username

Example of Secrets with Proxy Details for Kubernetes Cluster

The following secrets.yaml file displays user-defined secrets with the proxy details:

apiVersion: v1
data:
  accessID:
  accessKey:
  account:
  etcdDiscoveryToken:
  proxyUser:
  proxyPass:
  argusProxyUser:
  argusProxyPass:
  cscProxyUser:
  cscProxyPass:
  collectorProxyUser:
  collectorProxyPass:
kind: Secret
metadata:
  name: user-provided-secret
  namespace: default
type: Opaque

There are two types of proxies: global proxy and component-level proxy. When you provide a global proxy, it applies to the Argus, Collectorset-Controller, and collector components. When you add both a component-level proxy and a global proxy, the component-level proxy takes precedence. For example, if you add a collector proxy and a global proxy, the collector proxy is applied to the collector, and the global proxy is applied to the other components, Argus and Collectorset-Controller.

The following is an example of the lm-container-configuration.yaml file:

global:
  accessID: ""
  accessKey: ""
  account: ""
  userDefinedSecret: <secret-name>
  proxy: 
    url: "proxy_url_here"

kube-state-metrics (KSM) monitors and generates metrics about the state of the Kubernetes objects. KSM monitors the health of various Kubernetes objects such as Deployments, Nodes, and Pods. For more information, see kube-state-metrics (KSM) from GitHub.

You can use the kube-state-metrics-based modules available in LM Exchange in conjunction with the new LM Container Helm charts to gain better visibility of your Kubernetes cluster. The charts automatically install and configure KSM on your cluster. For more information on LM Container installation, see Installing the LM Container Helm Chart or Installing LM Container Chart using CLI.

Monitoring Workflow

The following diagram illustrates the monitoring workflow with the KSM:

Kubernetes KSM monitoring diagram

Sources of Data

Watchdog Component

The Watchdog component collects and stores the data for other LogicModules to consume. LogicMonitor uses kube-state-metrics (KSM) and Kubernetes Summary API to monitor Kubernetes cluster health.

There are two types of modules: the watchdog module and the consumer modules. The watchdog module fetches the data from the API and stores it in the local cache. The consumer modules use the stored data.

Module Name | AppliesTo
Kubernetes_KSM_Watchdog | system.devicetype == "8" && system.collector == "true" && hasCategory("KubernetesKSM")
Kubernetes_Summary_Watchdog | system.collector == "true" && hasCategory("KubernetesKSM")

For more information on AppliesTo, see AppliesTo Scripting Overview.

Consumer LogicModules

Kubernetes-related LogicModules use the data collected by the Kubernetes Watchdog. For example, the Kubernetes_KSM_Pods and Kubernetes_KSM_Nodes modules consume data collected by the Watchdog.

Requirements for Monitoring using KSM

Installing KSM 

You do not need any separate installation on your server to use kube-state-metrics (KSM). If the kube-state-metrics.enabled property is set to true in the lm-container Helm values.yaml file, KSM installs automatically. In addition, you can configure KSM while installing or upgrading LM Container Helm Charts, as shown in the following example.
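
The toggle in the chart values looks like the following; the top-level kube-state-metrics key follows the standard Helm subchart convention:

kube-state-metrics:
  enabled: true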


Installing and Upgrading KSM with LM Container Helm Charts

Install LM Container Charts. For more information, see Installing the LM Container Helm Chart.

To upgrade the existing cluster to the latest version, see Upgrading LM Container Charts.

Installing and Upgrading KSM without LM Container Helm Charts

Install Argus. For more information, see Argus Installation.

To upgrade the existing cluster to the latest version, see Upgrading Kubernetes Monitoring Applications.

Running the PropertySource

To set up an existing cluster for monitoring, you must run the addCategory_KubernetesKSM PropertySource by completing the following steps:

  1. In LogicMonitor, navigate to Settings > PropertySources > addCategory_KubernetesKSM.
  2. In the PropertySource window, click More options.
  3. Click Run PropertySource.

Updating Dashboards

Dashboards help you visualize the information retrieved by the modules in a meaningful manner. They do not affect the functionality of the modules.

Requirements

Download the Dashboards from the LogicMonitor repo.

Procedure

  1. Log into your LogicMonitor portal.
  2. On the left panel, navigate to Dashboards and click the expand icon.
  3. On the Dashboards panel, click Add.
  4. From the Add drop-down list, select the From File option.
  5. Click Browse to add the dashboard file downloaded from the repository.
  6. Click Submit.

You can see the required dashboard added to the Dashboard page.

etcd is a lightweight, highly available key-value store where Kubernetes stores the information about a cluster’s state. For more information, see Operating etcd clusters for Kubernetes from Kubernetes documentation.

LogicMonitor can only monitor etcd clusters that are deployed within a Kubernetes Cluster, and not those that are deployed outside of the cluster.

Important: We do not support Kubernetes etcd monitoring for managed Kubernetes services like OpenShift, Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS) because they do not expose the Kubernetes Control Plane components.

Requirements for Monitoring etcd

Ensure you have enabled the Kubernetes_etcd DataSource.

Note: This is a multi-instance datasource, with each instance indicating an etcd node. This DataSource is available for download from LM Exchange.

Setting up Kubernetes etcd Monitoring

Installation

You do not need any separate installation on your server to use the Kubernetes etcd.

Depending on your preference, you can install LM Container with the following two options:

  1. Installing through the user interface. For more information, see  Installing the LM Container Helm Chart.
  2. Installing through the command line interface. For more information, see Installing LM Container Chart using CLI.

Configuration

The Kubernetes etcd cluster is pre-configured for monitoring. No additional configurations are required. If you do not see any data for the Kubernetes etcd resource, do the following:

  1. In your terminal, navigate to the /etc/kubernetes/manifests directory.
  2. Open the etcd.yaml file for updating.
  3. In the etcd Pod under the kube-system namespace, change the value of --listen-metrics-urls at .spec.containers.command from http://127.0.0.1:2381 to http://0.0.0.0:2381.
    Note: Changing the value of --listen-metrics-urls allows the collector pod to scrape the metrics URL of the etcd pod within the cluster only.
  4. Save the etcd.yaml file.

Note: Ensure you disable SSL certificate verification. To do this, use http instead of https for the --listen-metrics-urls value.
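
For reference, the following is a minimal sketch of the relevant portion of etcd.yaml after the change; other flags and fields vary by cluster and are omitted here:

apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  containers:
  - name: etcd
    command:
    - etcd
    # Changed from http://127.0.0.1:2381 so the collector pod can scrape the metrics endpoint
    - --listen-metrics-urls=http://0.0.0.0:2381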

For monitoring custom etcd deployment, follow the instructions below:

  1. In LogicMonitor, navigate to Settings > DataSources and select the Kubernetes etcd DataSource.
  2. In the Kubernetes etcd DataSource page, expand Active Discovery.
  3. In the Parameters section, select the Embedded Groovy Script option. 
  4. In the Groovy Script field, enter the required component name for the etcd_label array.
    etcd Active Discovery page
  5. Expand Collector Attributes, and in the Groovy Script field, enter the required component name for the etcd_label array.
    etcd Collector Attribute
  6. Select Save to save the Kubernetes etcd DataSource. 

Viewing Kubernetes etcd Details

Once you have installed and configured the Kubernetes etcd on your server, you can view the etcd cluster properties and metrics on the Resources page.

  1. In LogicMonitor, navigate to Resources > select the required etcd DataSource resource.
    etcd resource page
  2. Select the Info tab to view the different properties of the Kubernetes etcd.
    etcd Info tab
  3. Select the Alerts tab to view the alerts generated while checking the status of the Kubernetes etcd resource.
    etcd Alerts tab
  4. Select the Graphs tab to view the status or the details of the Kubernetes etcd in the graphical format.
    etcd Graphs tab
  5. Select the Alert Tuning tab to view the datapoints on which the alerts are generated.
    etcd Alert Tuning tab
  6. Select the Raw Data tab to view all the data returned for the defined instances.
    etcd Raw Data tab

The Controller Manager runs control loops that continuously watch the state of your Kubernetes cluster. The Controller Manager monitors the current state of your cluster through the API Server and makes appropriate changes to keep applications running by ensuring a sufficient number of Pods are in a healthy state. For more information, see Controllers from Kubernetes documentation.

Important: We do not support Kubernetes Controller Manager monitoring for managed services like OpenShift, Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS) because they do not expose the Kubernetes Control Plane components.

Requirements for Monitoring Controller Manager

Ensure you have the Kubernetes_Controller_Manager datasource enabled.

Note: This is a multi-instance datasource, with each instance indicating a Controller Manager. This datasource is available for download from LM Exchange.

Setting up Kubernetes Controller Manager Monitoring

Installation

You do not need separate installation on your server to use the Kubernetes Controller Manager.

Depending on your preference, you can install LM Containers with the following two options:

  1. Installing through the user interface. For more information, see  Installing the LM Container Helm Chart.
  2. Installing through the command line interface. For more information, see Installing LM Container Chart using CLI.

Configuration

The Kubernetes Controller Manager is pre-configured for monitoring. If you do not see any data for the Kubernetes Controller Manager resource, do the following:

  1. In your terminal, navigate to the /etc/kubernetes/manifests directory.
  2. Open the kube-controller-manager.yaml file for updating.
  3. In the kube-controller-manager Pod under the kube-system namespace, change the --bind-address at .spec.containers.command from 127.0.0.1 to <Value of status.podIP present on the kube-controller-manager pod>.

Note: Run the kubectl get pod -n kube-system -o yaml | grep "podIP" command to get the value of status.podIP present on the kube-controller-manager pod.

  4. Save the kube-controller-manager.yaml file.

Note: If --bind-address is missing, the controller manager continues to run with its default value 0.0.0.0.
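
For reference, the following is a minimal sketch of the relevant portion of kube-controller-manager.yaml after the change; other flags vary by cluster and are omitted here:

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    # Replace 127.0.0.1 with the status.podIP of the kube-controller-manager pod
    - --bind-address=<status.podIP>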

For monitoring custom controller managers, do as follows:

  1. In LogicMonitor, navigate to Settings > DataSources and select the Kubernetes Controller Manager DataSource.
  2. In the Kubernetes Controller Manager DataSource page, expand Active Discovery.
  3. In the Parameters section, select the Embedded Groovy Script option. 
  4. In the Groovy Script field, enter the required component name for the controller_label array. 
    KCM Active Discovery page
  5. Expand Collector Attributes and in the Groovy Script field, enter the required component name for the controller_label array.
    Collector Attribute KCM page
  6. Select Save to save the Kubernetes Controller Manager DataSource. 

Viewing Kubernetes Controller Manager Details

Once you have installed and configured the Kubernetes Controller Manager on your server, you can view all the relevant data on the Resources page.

  1. In LogicMonitor, navigate to Resources > select the required Kubernetes Controller Manager resource.
    KCM Resource page
  2. Select the Info tab to view the different properties of the Kubernetes Controller Manager.
    KCM Info tab
  3. Select the Alerts tab to view the alerts generated while checking the status of the Kubernetes Controller Manager resource.
    KCM Alerts tab
  4. Select the Graphs tab to view the status or the details of the Kubernetes Controller Manager in the graphical format.
    KCM Graphs tab
  5. Select the Alert Tuning tab to view the datapoints on which the alerts are generated.
    KCM Alert tuning tab
  6. Select the Raw Data tab to view all the data returned for the defined instances.
    KCM Raw data tab

The Ingress is a Kubernetes resource that exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. The cluster must have an ingress controller running for the Ingress resource to work. For more information, see Ingress and Ingress Controllers from Kubernetes documentation.

NGINX Ingress Controller is a type of ingress controller that runs in a cluster and configures an HTTP load balancer according to traffic routing rules. For more information, see How NGINX Ingress Controller Works from NGINX documentation.

Important: NGINX Ingress Controller Monitoring is available with LM Container Helm Charts version 4.2.0 or later.

Requirements for Monitoring NGINX Ingress Controller

Setting up NGINX Ingress Controller Monitoring

Installation

You do not need any separate installation on your cluster to use the NGINX Ingress Controller.

Depending on your preference, you can install LM Containers with the following two options:

  1. Installing through the user interface. For more information, see  Installing the LM Container Helm Chart.
  2. Installing through the command line interface. For more information, see Installing LM Container Chart using CLI.

Configuration

The NGINX Ingress Controller is pre-configured for monitoring. No additional configurations are required.

Viewing NGINX Ingress Controller Details

Once you have installed and configured the NGINX Ingress Controller on your server, you can view all the relevant data on the Resources page.

  1. In LogicMonitor, navigate to Resources > select the required NGINX Ingress Controller service.
    NGINX resource tab
  2. Select the Info tab to view the different properties of the NGINX Ingress Controller.
    NGINX Info tab
  3. Select the Alerts tab to view the alerts generated while checking the status of the NGINX Ingress Controller resource.
    NGINX Alert tab
  4. Select the Graphs tab to view the status or the details of the NGINX Ingress Controller in the graphical format.
    NGINX Graphs tab
  5. Select the Alert Tuning tab to view the datapoints on which the alerts are generated. 
    NGINX Alerttuning tab
  6. Select the Raw Data tab to view all the data returned for the defined instances.
    NGINX rawdata tab

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service for running standard Kubernetes environments on AWS. Using Amazon EKS, you can run Kubernetes without installing and operating a Kubernetes control plane or worker nodes.

LogicMonitor helps you monitor your Amazon EKS environments in real time. For more information, see What is Amazon EKS from Amazon documentation.
LogicMonitor officially supports running LM Container Kubernetes Monitoring on AWS Bottlerocket OS. For more information, see Bottlerocket from AWS documentation.

Requirements for Monitoring EKS Cluster

Setting up Amazon EKS Cluster

You don’t need separate installations on your server to monitor the Amazon EKS cluster, since LogicMonitor already integrates with Kubernetes and AWS. For more information on LM Container installation, see Installing the LM Container Helm Chart or Installing LM Container Chart using CLI.

Amazon EKS Cluster Dashboards

You don’t need to create any separate Amazon EKS cluster dashboards. If you have integrated LogicMonitor with Kubernetes and AWS, the Amazon EKS cluster data will display on the relevant dashboards.

Amazon EKS Cluster Dashboard

Kubernetes Monitoring Considerations

Kubernetes Monitoring Dependencies

The following image displays how LogicMonitor’s application runs in your cluster as a pod.

Kubernetes Dependency workflow diagram

Note: LogicMonitor’s Kubernetes monitoring integration is an add-on feature called LM Container. You may contact your Customer Success Manager (CSM) for more information.

LogicMonitor Portal Permissions

Kubernetes Cluster Permissions 

The following minimum permissions are required to install the LM Container.

To create the ClusterRole, do the following:

  1. Create and save a cluster-role.yaml file with the following configuration:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lm-container-min-permissions
rules:
- apiGroups:
  - ""
  resources:
  - "*"
  verbs:
  - get
  - list
  - create
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - "*"
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  - replicasets
  verbs:
  - get
  - list
  - create
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterroles
  - clusterrolebindings
  - roles
  - rolebindings
  verbs:
  - "*"
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - "*"
- apiGroups:
  - "*"
  resources:
  - collectorsets
  verbs:
  - "*"
  2. Enter the following command:
kubectl apply -f cluster-role.yaml
kubectl create clusterrolebinding role-binding-view-only --clusterrole view  --user <user-name>
kubectl create clusterrolebinding role-binding-lm-container --clusterrole lm-container-min-permissions --user <user-name>

For more information on LM Container installation, see Installing the LM Container Helm Chart or Installing LM Container Chart using CLI.
