By default, LM Container deletes resources immediately. If you want to retain resources, you can configure the retention period to delete resources after the set time passes.
Retaining Deleted Kubernetes Resources
You must configure the following parameters in the LM Configuration file:
- Navigate to the LM Configuration file and do the following:
- Specify the deleted devices’ retention period in ISO-8601 duration format using the property kubernetes.resourcedeleteafter = P1DT0H0M0S. For more information, see ISO-8601 duration format.
lm-container adds the retention period property to the cluster resource group to set the global retention period for all the resources in the cluster. You can modify this property in the child groups within the cluster group to apply different retention periods to various resource types. In addition, you can modify the property for a particular resource.
Note: lm-container configures different retention periods for lm-container and Collectorset-Controller Pods for troubleshooting. The retention period for these Pods cannot be modified and is set to 10 days.
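For reference, an ISO-8601 duration packs days, hours, minutes, and seconds into one string. The property name below is from this document; the alternative duration values are illustrative only:

```
# kubernetes.resourcedeleteafter accepts an ISO-8601 duration (PnDTnHnMnS)
kubernetes.resourcedeleteafter = P1DT0H0M0S
# Other illustrative values:
#   P0DT12H30M0S  retains deleted resources for 12 hours 30 minutes
#   P7DT0H0M0S    retains deleted resources for 7 days
```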
- Enter the following parameters in the lm-container-configuration.yaml to set the retention period for the resources:
argus:
  lm:
    resource:
      # Sets the global delete duration for resources; adjust this value to your needs
      globalDeleteAfterDuration: "P0DT0H1M0S"
In scenarios such as a new installation or a cache refresh, the entire Kubernetes cluster gets synchronized with the portal. This process results in a large number of devices being added to the cluster unnecessarily, adding strain to the system.
During the default installation, all resources are monitored by lm-container. However, to address the above-mentioned issue, the following resources are either filtered or disabled from monitoring:
- Only critical resources will be enabled by default.
- Non-essential resources will be disabled or filtered, but customers can add them back to monitoring by updating the filter criteria in the Helm chart.
To optimize system performance and reduce unnecessary resource load, several default configurations will be applied to filter or disable specific resources, ensuring that only the essential components are actively monitored.
- Default Filtering of Ephemeral Resources: Ephemeral resources, such as Jobs, CronJobs, and pods created by CronJobs, will be filtered out by default to reduce unnecessary load. For example:
argus:
  disableBatchingPods: "true"
- Disabled Monitoring of Kubernetes Resources: Resources like ConfigMaps and Secrets will be added to a list of disabled resources, preventing them from being monitored by default during cluster setup. For example:
argus:
  monitoringMode: "Advanced"
  monitoring:
    disable:
      - configmaps
Resources Disabled for Monitoring by Default
Below is a list of Kubernetes resources that will be disabled by default but can be enabled if you choose:
- ResourceQuotas
- LimitRanges
- Roles
- RoleBindings
- NetworkPolicies
- ConfigMaps
- ClusterRoleBindings
- ClusterRoles
- PriorityClasses
- StorageClasses
- CronJobs
- Jobs
- Endpoints
- Secrets
- ServiceAccounts
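Re-enabling any of these resources follows the monitoring.disable structure shown earlier: keep in the list only what should stay disabled. A hedged sketch (verify the exact keys against your chart's values reference):

```yaml
argus:
  monitoringMode: "Advanced"
  monitoring:
    disable:        # resources listed here remain unmonitored
      - configmaps  # to re-enable a resource (for example, secrets), omit it from this list
```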
Changes from Upgrading Helm Chart
In case of an upgrade from an older version of the lm-container Helm chart to this version, the following applies:
- Default Filtering: If you have custom filters applied in the previous version, those filters continue to take priority. The default filtering does not override your settings.
- Default Disabled Monitoring: If you had resources set for disabled monitoring in the older version, those configurations remain in effect and are not overwritten by the new defaults.
- Cross-Configuration for Filtering and Disabled Monitoring:
- If you had custom filters but had not configured disabled monitoring resources, the default list for disabled monitoring is applied automatically.
- Conversely, if you had configured disabled monitoring resources but no custom filters, the default filtering for resources is applied.
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Kubernetes Secrets allows you to configure the Kubernetes cluster to use sensitive data (such as passwords) without writing the password in plain text into the configuration files. For more information, see Secrets from Kubernetes documentation.
Note: If you are using secrets on your LM Container, granting manage permission might reveal your encoded configuration data.
Requirements for Configuring User Defined Secrets in LM Container
Ensure you have LM Container Helm Charts version 5.0.0 or later.
Configuring User Defined Secrets for your Kubernetes Clusters in LM Containers
Creating a Secret involves using the key-value pair to store the data. To create Secrets, do as follows:
- Create the secrets.yaml file with the Opaque secret type, providing values encoded in Base64 format, similar to the following example.
Note: Values under the data field (accessID, accessKey, and account) must be provided in Base64-encoded form.
apiVersion: v1
data:
  accessID: NmdjRTNndEU2UjdlekZhOEp2M2Q=
  accessKey: bG1hX1JRS1MrNFUtMyhrVmUzLXE0Sms2Qzk0RUh7aytfajIzS1dDcUxQREFLezlRKW1KSChEYzR+dzV5KXo1UExNemxoT0RWa01XTXROVEF5TXkwME1UWmtMV0ZoT1dFdE5XUmpOemd6TlROaVl6Y3hMM2oyVGpo
  account: bG1zYWdhcm1hbWRhcHVyZQ==
  etcdDiscoveryToken: ""
kind: Secret
metadata:
  name: user-provided-secret
  namespace: default
type: Opaque
or
- Create the secrets.yaml file with the Opaque secret type using stringData (plain-text values), similar to the following example.
apiVersion: v1
stringData:
  accessID: "6gcE3gtE6R7ezFa8Jv3d"
  accessKey: "lma_RQKS+4U-3(kVe3-q4Jk6C94EH{k+_j23KWCqLPDAK{9Q)mJH(Dc4~w5y)z5PLMzlhODVkMWMtNTAyMy00MTZkLWFhOWEtNWRjNzgzNTNiYzcxL3j2Tjh"
  account: "lmadminuser"
  etcdDiscoveryToken: ""
kind: Secret
metadata:
  name: user-provided-secret
  namespace: default
type: Opaque
- Enter the accessID, accessKey, and account field values.
Note: If you have an existing cluster, enter the same values used while creating Kubernetes Cluster.
- Save the secrets.yaml file.
- Open and edit the lm-container-configuration.yaml file.
- Enter a new field userDefinedSecret with the required value similar to the following example.
Note: The value for userDefinedSecret must be the same as the newly created secret name.
argus:
  clusterName: secret-cluster
global:
  accessID: ""
  accessKey: ""
  account: ""
  userDefinedSecret: "user-provided-secret"
- Save the lm-container-configuration.yaml file.
- In your terminal, enter the following command:
kubectl apply -f secrets.yaml -n <namespace_where_lm_container_will_be_installed>
Note: Once you apply the secrets and install the LM Container, delete the accessID, accessKey, and account field values in the lm-container-configuration.yaml for security reasons.
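The Base64 values used in the data variant of secrets.yaml can be produced with the standard base64 utility. The value below is the illustrative accessID from the stringData example, not a real credential:

```shell
# Encode a plain-text value for the data section of secrets.yaml
printf '%s' "6gcE3gtE6R7ezFa8Jv3d" | base64
# prints: NmdjRTNndEU2UjdlekZhOEp2M2Q=

# Decode to verify the stored value round-trips
printf '%s' "NmdjRTNndEU2UjdlekZhOEp2M2Q=" | base64 --decode
# prints: 6gcE3gtE6R7ezFa8Jv3d
```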
The following table displays the Secrets fields:
Field Name | Field Type | Description |
accessID | mandatory | LM access ID |
accessKey | mandatory | LM access key |
account | mandatory | LM account name |
argusProxyPass | optional | Argus proxy password |
argusProxyUser | optional | Argus proxy username |
collectorProxyPass | optional | Collector proxy password |
collectorProxyUser | optional | Collector proxy username |
collectorSetControllerProxyPass | optional | Collectorset-Controller proxy password |
collectorSetControllerProxyUser | optional | Collectorset-Controller proxy username |
etcdDiscoveryToken | optional | etcd discovery token |
proxyPass | optional | Global proxy password |
proxyUser | optional | Global proxy username |
Example of Secrets with Proxy Details for Kubernetes Cluster
The following secrets.yaml file displays user-defined secrets with the proxy details:
apiVersion: v1
data:
  accessID:
  accessKey:
  account:
  etcdDiscoveryToken:
  proxyUser:
  proxyPass:
  argusProxyUser:
  argusProxyPass:
  cscProxyUser:
  cscProxyPass:
  collectorProxyUser:
  collectorProxyPass:
kind: Secret
metadata:
  name: user-provided-secret
  namespace: default
type: Opaque
There are two types of proxies: a global proxy and a component-level proxy. When you provide a global proxy, it applies to the Argus, Collectorset-Controller, and Collector components. When you add both a component-level proxy and a global proxy, the component-level proxy takes precedence. For example, if you add a collector proxy and a global proxy, the collector proxy is applied to the Collector, and the global proxy is applied to the Argus and Collectorset-Controller components.
The following is an example of the lm-container-configuration.yaml
file:
global:
  accessID: ""
  accessKey: ""
  account: ""
  userDefinedSecret: <secret-name>
  proxy:
    url: "proxy_url_here"
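The component-level precedence described above might look like the following sketch. The global.proxy.url key is taken from the example above, while the collector-level key placement is an assumption that should be checked against your chart's values reference:

```yaml
global:
  userDefinedSecret: <secret-name>
  proxy:
    url: "http://global-proxy.example.com:3128"      # applies to Argus and Collectorset-Controller
argus:
  collector:
    proxy:
      url: "http://collector-proxy.example.com:3128" # hypothetical key: overrides the global proxy for the Collector only
```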
kube-state-metrics (KSM) monitors and generates metrics about the state of the Kubernetes objects. KSM monitors the health of various Kubernetes objects such as Deployments, Nodes, and Pods. For more information, see kube-state-metrics (KSM) from GitHub.
You can use the kube-state-metrics-based modules available in LM Exchange in conjunction with the new LM Container Helm charts to gain better visibility of your Kubernetes cluster. The charts automatically install and configure KSM on your cluster. For more information on LM Container installation, see Installing the LM Container Helm Chart or Installing LM Container Chart using CLI.
Monitoring Workflow
The following diagram illustrates the monitoring workflow with the KSM:

Sources of Data
- kube-state-metrics (KSM)— Listens to the Kubernetes API server and generates metrics about the state of the objects. For more information, see kube-state-metrics from GitHub.
- Kubernetes Summary API (/stats/summary)— It is provided by the kubelet for discovering and retrieving per-node summarized stats available through the /stats endpoint. For more information, see Node metrics data from Kubernetes documentation.
Watchdog Component
The Watchdog component collects and stores the data for other LogicModules to consume. LogicMonitor uses kube-state-metrics (KSM) and Kubernetes Summary API to monitor Kubernetes cluster health.
There are two types of modules: watchdog modules and consumer modules. The watchdog fetches data from the API and stores it in a local cache; the consumer uses the stored data.
- KSM Watchdog— KSM Watchdog collects data from kube-state-metrics.
- Summary Watchdog— Summary Watchdog fetches metrics from the summary API and provides data to LogicModules.
Module Name | AppliesTo |
Kubernetes_KSM_Watchdog | system.devicetype == "8" && system.collector == "true" && hasCategory("KubernetesKSM") |
Kubernetes_Summary_Watchdog | system.collector == "true" && hasCategory("KubernetesKSM") |
For more information on AppliesTo, see AppliesTo Scripting Overview.
Consumer LogicModules
Kubernetes-related LogicModules utilize the data collected by the Kubernetes Watchdog. For example, Kubernetes_KSM_Pods and Kubernetes_KSM_Nodes consume data collected by the Watchdog.
Requirements for Monitoring using KSM
- Ensure that you add all the Kubernetes modules from the Kubernetes package under LM Exchange.
- Ensure that the Kubernetes_KSM_Watchdog and Kubernetes_Summary_Watchdog modules are installed.
Installing KSM
You do not need any separate installation on your server to use kube-state-metrics (KSM). If the value of the kube-state-metrics.enabled property is set to true in the lm-container Helm values.yaml file, KSM is installed automatically. In addition, you can configure KSM while installing or upgrading LM Container Helm Charts.
Note:
- If you already have KSM installed and intend to use the same, set the kube-state-metrics.enabled property to false.
- In case the existing KSM service is changed, you must restart the LM Collector Pod.
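The toggle described above is a one-line values change. This fragment assumes the default chart layout; only the kube-state-metrics.enabled key is taken from this document:

```yaml
# lm-container Helm values.yaml fragment
kube-state-metrics:
  enabled: true   # set to false to reuse an existing KSM deployment in the cluster
```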
Installing and Upgrading KSM with LM Container Helm Charts
Install LM Container Charts. For more information, see Installing the LM Container Helm Chart.
To upgrade the existing cluster to the latest version, see Upgrading LM Container Charts.
Installing and Upgrading KSM without LM Container Helm Charts
Install Argus. For more information, see Argus Installation.
To upgrade the existing cluster to the latest version, see Upgrading Kubernetes Monitoring Applications.
Running the PropertySource
To set up an existing cluster for monitoring, you must run the addCategory_KubernetesKSM PropertySource by completing the following steps:
- In LogicMonitor, navigate to Settings > PropertySources > addCategory_KubernetesKSM.
- In the PropertySource window, click More options.
- Select Run PropertySource.
Updating Dashboards
Dashboards help you visualize the information retrieved by the modules in a meaningful manner. Dashboards do not affect the usability of the modules.
Requirements
Download the Dashboards from the LogicMonitor repo.
Procedure
- Log into your LogicMonitor portal.
- On the left panel, navigate to Dashboards and click the expand icon.
- On the Dashboards panel, click Add.
- From the Add drop-down list, select From File option.
- Click Browse to add the downloaded dashboard file.
- Click Submit.
You can see the required dashboard added to the Dashboard page.
etcd is a lightweight, highly available key-value store where Kubernetes stores information about a cluster’s state. For more information, see Operating etcd clusters for Kubernetes from Kubernetes documentation.
LogicMonitor can only monitor etcd clusters that are deployed within a Kubernetes Cluster, and not those that are deployed outside of the cluster.
Important: We do not support Kubernetes etcd monitoring for managed Kubernetes services like OpenShift, Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS) because they do not expose the Kubernetes Control Plane components.
Requirements for Monitoring etcd
Ensure you have enabled the Kubernetes_etcd DataSource.
Note: This is a multi-instance datasource, with each instance indicating an etcd node. This DataSource is available for download from LM Exchange.
Setting up Kubernetes etcd Monitoring
Installation
You do not need any separate installation on your server to use the Kubernetes etcd.
Depending on your preference, you can install LM Container with the following two options:
- Installing through the user interface. For more information, see Installing the LM Container Helm Chart.
- Installing through the command line interface. For more information, see Installing LM Container Chart using CLI.
Configuration
The Kubernetes etcd cluster is pre-configured for monitoring. No additional configurations are required. If you do not see any data for the Kubernetes etcd resource, do the following:
- In your terminal, navigate to the /etc/kubernetes/manifests directory.
- Open the etcd.yaml file for updating.
- In the etcd Pod under the kube-system namespace, change the value of --listen-metrics-urls at .spec.containers.command from http://127.0.0.1:2381 to http://0.0.0.0:2381.
Note: Changing the value of --listen-metrics-urls allows the collector pod to scrape the metrics URL of the etcd pod within the cluster only.
- Save the etcd.yaml file.
Note: Ensure you disable the SSL certificate verification. To do this, use http instead of https for the --listen-metrics-urls value.
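The change above can be sketched as the following fragment of a kubeadm-style etcd manifest; the surrounding fields are elided and only the flag line changes:

```yaml
# /etc/kubernetes/manifests/etcd.yaml (fragment, illustrative)
spec:
  containers:
  - command:
    - etcd
    - --listen-metrics-urls=http://0.0.0.0:2381   # was http://127.0.0.1:2381
```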
For monitoring custom etcd deployment, follow the instructions below:
- In LogicMonitor, navigate to Settings > DataSources > select Kubernetes etcd DataSource.
- In the Kubernetes etcd DataSource page, expand Active Discovery.
- In the Parameters section, select the Embedded Groovy Script option.
- In the Groovy Script field, enter the required component name for the etcd_label array.
- Expand Collector Attributes, and in the Groovy Script field, enter the required component name for the etcd_label array.
- Select Save to save the Kubernetes etcd DataSource.
Viewing Kubernetes etcd Details
Once you have installed and configured the Kubernetes etcd on your server, you can view the etcd cluster properties and metrics on the Resources page.
- In LogicMonitor, navigate to Resources > select the required etcd DataSource resource.
- Select the Info tab to view the different properties of the Kubernetes etcd.
- Select the Alerts tab to view the alerts generated while checking the status of the Kubernetes etcd resource.
- Select the Graphs tab to view the status or the details of the Kubernetes etcd in the graphical format.
- Select the Alert Tuning tab to view the datapoints on which the alerts are generated.
- Select the Raw Data tab to view all the data returned for the defined instances.
The Controller Manager runs control loops that continuously watch the state of your Kubernetes cluster. It monitors the current state of the cluster through the API Server and makes appropriate changes to keep applications running, ensuring that sufficient Pods are in a healthy state. For more information, see Controllers from Kubernetes documentation.
Important: We do not support Kubernetes Controller Manager monitoring for managed services like OpenShift, Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Azure Kubernetes Service (AKS) because they do not expose the Kubernetes Control Plane components.
Requirements for Monitoring Controller Manager
Ensure you have the Kubernetes_Controller_Manager datasource enabled.
Note: This is a multi-instance datasource, with each instance indicating a Controller Manager. This datasource is available for download from LM Exchange.
Setting up Kubernetes Controller Manager Monitoring
Installation
You do not need separate installation on your server to use the Kubernetes Controller Manager.
Depending on your preference, you can install LM Containers with the following two options:
- Installing through the user interface. For more information, see Installing the LM Container Helm Chart.
- Installing through the command line interface. For more information, see Installing LM Container Chart using CLI.
Configuration
The Kubernetes Controller Manager is pre-configured for monitoring. If you do not see any data for the Kubernetes Controller Manager resource, do the following:
- In your terminal, navigate to the /etc/kubernetes/manifests directory.
- Open the kube-controller-manager.yaml file for updating.
- In the kube-controller-manager Pod under the kube-system namespace, change the --bind-address at .spec.containers.command from 127.0.0.1 to <Value of status.podIP present on kube-controller-manager pod>.
Note: Run the kubectl get pod command (for example, kubectl get pod -n kube-system <pod-name> -o yaml) to find the status.podIP value.
- Save the kube-controller-manager.yaml file.
Note: If --bind-address is missing, the controller manager continues to run with its default value 0.0.0.0.
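As with etcd, the change can be sketched as a manifest fragment; the pod IP shown is a placeholder and the surrounding fields are elided:

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (fragment, illustrative)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --bind-address=<status.podIP of the kube-controller-manager pod>   # was 127.0.0.1
```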
For monitoring custom controller managers, do as follows:
- In LogicMonitor, navigate to Settings > DataSources > select Kubernetes Controller Manager DataSource.
- In the Kubernetes Controller Manager DataSource page, expand Active Discovery.
- In the Parameters section, select the Embedded Groovy Script option.
- In the Groovy Script field, enter the required component name for the controller_label array.
- Expand Collector Attributes and in the Groovy Script field, enter the required component name for the controller_label array.
- Select Save to save the Kubernetes Controller Manager DataSource.
Viewing Kubernetes Controller Manager Details
Once you have installed and configured the Kubernetes Controller Manager on your server, you can view all the relevant data on the Resources page.
- In LogicMonitor, navigate to Resources > select the required Kubernetes Controller Manager resource.
- Select the Info tab to view the different properties of the Kubernetes Controller Manager.
- Select the Alerts tab to view the alerts generated while checking the status of the Kubernetes Controller Manager resource.
- Select the Graphs tab to view the status or the details of the Kubernetes Controller Manager in the graphical format.
- Select the Alert Tuning tab to view the datapoints on which the alerts are generated.
- Select the Raw Data tab to view all the data returned for the defined instances.
The Ingress is a Kubernetes resource that exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. The cluster must have an ingress controller running for the Ingress resource to work. For more information, see Ingress and Ingress Controllers from Kubernetes documentation.
NGINX Ingress Controller is a type of ingress controller that runs in a cluster and configures an HTTP load balancer according to traffic routing rules. For more information, see How NGINX Ingress Controller Works from NGINX documentation.
Important: NGINX Ingress Controller Monitoring is available with LM Container Helm Charts version 4.2.0 or later.
Requirements for Monitoring NGINX Ingress Controller
- Ensure that you enable Prometheus metrics in the NGINX Ingress Controller. For more information on exposing Prometheus metrics, see Prometheus from NGINX documentation.
- Ensure that the Kubernetes_Nginx_IngressController DataSource and the addCategory_NginxIngressController PropertySource are enabled.
Note: This is a multi-instance datasource, with each instance indicating a replica (Pod) of the NGINX Ingress Controller. This datasource is available for download from LM Exchange.
Setting up NGINX Ingress Controller Monitoring
Installation
You do not need any separate installation on your cluster to use the NGINX Ingress Controller.
Depending on your preference, you can install LM Containers with the following two options:
- Installing through the user interface. For more information, see Installing the LM Container Helm Chart.
- Installing through the command line interface. For more information, see Installing LM Container Chart using CLI.
Configuration
The NGINX Ingress Controller is pre-configured for monitoring. No additional configurations are required.
Viewing NGINX Ingress Controller Details
Once you have installed and configured the NGINX Ingress Controller on your server, you can view all the relevant data on the Resources page.
- In LogicMonitor, navigate to Resources > select the required NGINX Ingress Controller service.
- Select the Info tab to view the different properties of the NGINX Ingress Controller.
- Select the Alerts tab to view the alerts generated while checking the status of the NGINX Ingress Controller resource.
- Select the Graphs tab to view the status or the details of the NGINX Ingress Controller in the graphical format.
- Select the Alert Tuning tab to view the datapoints on which the alerts are generated.
- Select the Raw Data tab to view all the data returned for the defined instances.
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that helps you run Kubernetes on AWS. Using Amazon EKS, you can run Kubernetes without installing and operating your own Kubernetes control plane or worker nodes.
LogicMonitor helps you to monitor your Amazon EKS environments in real-time. For more information, see What is Amazon EKS from Amazon documentation.
LogicMonitor officially supports running LM Container Kubernetes Monitoring on AWS Bottlerocket OS. For more information, see Bottlerocket from AWS documentation.
Requirements for Monitoring EKS Cluster
- Ensure you have a valid and running cluster on Amazon EKS.
- Ensure that you run a supported Kubernetes cluster version on Amazon EKS. For more information, see the Support Matrix for Kubernetes Monitoring.
Setting up Amazon EKS Cluster
You don’t need separate installations on your server to monitor the Amazon EKS cluster, since LogicMonitor already integrates with Kubernetes and AWS. For more information on LM Container installation, see Installing the LM Container Helm Chart or Installing LM Container Chart using CLI.
Amazon EKS Cluster Dashboards
You don’t need to create any separate Amazon EKS cluster dashboards. If you have integrated LogicMonitor with Kubernetes and AWS, the Amazon EKS cluster data will display on the relevant dashboards.

Kubernetes Monitoring Considerations
- LM Container treats each Kubernetes object instance as a device.
- Our Kubernetes integration is Container Runtime Interface (CRI) agnostic. For more information, see Container Runtime Interface from Kubernetes documentation.
- LM Container officially supports the most recent five versions of Kubernetes at any given time and aims to offer support for new versions within 60 days of the official release. For more information, see Support Matrix for Kubernetes Monitoring.
- All Kubernetes Clusters added are displayed on the Resources page. For more information, see Adding Kubernetes Cluster Using LogicMonitor Web Portal.
Kubernetes Monitoring Dependencies
- LM Container Helm Chart — Unified LM Container Helm chart allows you to install all the services necessary to monitor your Kubernetes cluster, including Argus, Collectorset-Controller, and the kube-state-metrics (KSM) service.
- Argus— Uses LogicMonitor’s API to add Nodes, Pods, and Services into monitoring.
- Collectorset-Controller— Manages one or more Dockerized LogicMonitor Collectors for data collection. Once Kubernetes Cluster resources are added to LogicMonitor, data collection starts automatically. Data is collected for Nodes, Pods, Containers, and Services via the Kubernetes API. Additionally, standard containerized applications (e.g. Redis, MySQL, etc.) will be automatically detected and monitored.
- Dockerized Collector— An application used for data collection.
- Kube-state-metrics (KSM) Service— A simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.
The following image displays how LogicMonitor’s application runs in your cluster as a pod.

Note: LogicMonitor’s Kubernetes monitoring integration is an add-on feature called LM Container. You may contact your Customer Success Manager (CSM) for more information.
LogicMonitor Portal Permissions
- You should have manage permissions for the following:
- Settings:
- LogicModules
- A minimum of one dashboard group.
- A minimum of one resource group.
- A minimum of one collector group.
Resources are created if the hosts running in your clusters do not already exist in monitoring.
- You should have view permissions for all the collector groups.
- For creating API tokens for authentication purposes, ensure that the Allow Creation of API Token checkbox is selected under Settings > User Profile. Any user except an out-of-the-box administrator user role can create API tokens. For more information, see API Tokens.
- It is best to install the LM Container from the LM portal with the Administrator user role. For more information, see Roles.
Kubernetes Cluster Permissions
The following minimum permissions are required to install the LM Container.
To create the ClusterRole, do the following:
- Create and save a cluster-role.yaml file with the following configuration:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lm-container-min-permissions
rules:
  - apiGroups:
      - ""
    resources:
      - "*"
    verbs:
      - get
      - list
      - create
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - "*"
  - apiGroups:
      - apps
    resources:
      - deployments
      - statefulsets
      - replicasets
    verbs:
      - get
      - list
      - create
  - apiGroups:
      - rbac.authorization.k8s.io
    resources:
      - clusterroles
      - clusterrolebindings
      - roles
      - rolebindings
    verbs:
      - "*"
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - "*"
  - apiGroups:
      - "*"
    resources:
      - collectorsets
    verbs:
      - "*"
- Enter the following command:
kubectl apply -f cluster-role.yaml
- Run the following command to create a ClusterRoleBinding that grants a specific user view-only permissions on all resources:
kubectl create clusterrolebinding role-binding-view-only --clusterrole view --user <user-name>
- Run the following command to create a ClusterRoleBinding that grants a specific user permissions to install the LM Container components:
kubectl create clusterrolebinding role-binding-lm-container --clusterrole lm-container-min-permissions --user <user-name>
For more information on LM Container installation, see Installing the LM Container Helm Chart or Installing LM Container Chart using CLI.
The following table provides guidelines on provisioning resources for LogicMonitor components to have optimum performance and reliable monitoring for the Kubernetes cluster:
Collector Size | Medium | Large |
Maximum Resources with 1 Collector Replica | 1300 resources | 3600 resources |
Argus and CSC Version | Argus v7.1.2, CSC v3.1.2 | Argus v7.1.2, CSC v3.1.2 |
Collector Version | GD 33.001 | GD 33.002 |
Recommended Argus Limits and Requests | CPU Requests: 0.256 core, CPU Limits: 0.5 core, Memory Requests: 250MB, Memory Limits: 500MB | CPU Requests: 0.5 core, CPU Limits: 1 core, Memory Requests: 500MB, Memory Limits: 1GB |
Recommended Collectorset-Controller Limits and Requests | CPU Requests: 0.02 core, CPU Limits: 0.05 core, Memory Requests: 150MB, Memory Limits: 200MB | CPU Requests: 0.02 core, CPU Limits: 0.05 core, Memory Requests: 150MB, Memory Limits: 200MB |
Example of Collector Configuration for Resource Sizing
Let’s say you have about 3100 resources to monitor. You need a single large collector replica with the compatible versions displayed in the above table to monitor your resources. You can configure the collector size and replica count in the configuration.yaml file as follows:
argus:
  collector:
    size: large
    replicas: 1
Note: In the size field, you can add the required collector size (Large or Medium) and in the replicas field, you can add the number of required collector replicas.
Specifying Resource Limits for Collectorset-Controller and Argus Pod
You can enforce central processing unit (CPU) and memory constraints on your Collectorset-Controller, Argus Pod, and Collector.
The following lm-container configuration YAML file shows an example of the collectorset-controller.resources parameter:
collectorset-controller:
  resources:
    limits:
      cpu: "1000m"
      memory: "1Gi"
      ephemeral-storage: "100Mi"
    requests:
      cpu: "1000m"
      memory: "1Gi"
      ephemeral-storage: "100Mi"
The following lm-container configuration YAML file shows an example of the argus.resources parameter:
argus:
  resources:
    limits:
      cpu: "1000m"
      memory: "1Gi"
      ephemeral-storage: "100Mi"
    requests:
      cpu: "1000m"
      memory: "1Gi"
      ephemeral-storage: "100Mi"