You can restart a collector from the LogicMonitor platform or from the collector host. If the collector is up and running, you can restart it from the LogicMonitor platform. If the collector is down, you must restart it from the collector host.
Restarting from LogicMonitor Portal
To restart a collector from the LogicMonitor platform:
- Navigate to Settings > Collectors.
- Under the Collectors tab, select the collector that you want to restart.
- Select the More option and then select Restart Collector.
A message confirming the restart is displayed.
- Select Confirm. The collector restart begins.
Restarting from Collector Host
To restart a collector on a Windows host, use the Services control panel to restart the following services:
- LogicMonitor Collector
- LogicMonitor Collector Watchdog
To restart a collector on a Linux host, run the following commands:
- Stop LogicMonitor:
/usr/local/logicmonitor/agent/bin/sbshutdown
- Start the Watchdog service, which may run from init.d or systemd:
- From init.d – /etc/init.d/logicmonitor-watchdog start
- From systemd – systemctl start logicmonitor-watchdog
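To confirm the restart took effect on a systemd-based Linux host, you can check both services (service names as used elsewhere in this article); this is a quick sketch, not an official verification step:
systemctl status logicmonitor-agent logicmonitor-watchdog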
Collectors have the ability to cache collector data to disk. This enables collectors to store data in your environment during periods when your collector is unable to deliver data to your LogicMonitor account (for example, as a result of network issues). Once your collector reaches your account again, the buffered data is communicated to our servers, eliminating any gaps in data you would have otherwise seen. By default, collector caching is enabled and configured to cache up to 30 minutes of data.
Note: LogicMonitor only evaluates the most recent five minutes of cached data for alerts. In other words, the connection to LogicMonitor must be reestablished within five minutes of an alert condition in order for an alert to occur for that condition.
You can enable or disable collector caching by setting the reporter.persistent.enable property in the agent.conf file to true or false, respectively. By default, the property is set to true.
You can change how long the collector caches data by changing the value of the reporter.persistent.expire property in the agent.conf file. By default, the property is set to 30, which corresponds to up to 30 minutes of cached data. Do not set the value higher than 1440 minutes, which corresponds to 24 hours (collectors restart every 24 hours and caching cannot continue after a restart).

Note:
- Setting reporter.persistent.expire to a greater value will consume more disk space.
- If you set reporter.persistent.expire = 20, the Collector retains cached data from the last 20 minutes. However, in some cases, the Collector might also cache data older than the set expire time, such as when the Collector goes down or when the connection between the Collector and the LogicMonitor server is lost for more than 20 minutes.
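For example, to keep caching enabled but reduce the buffer to 20 minutes, the relevant agent.conf lines would be:
reporter.persistent.enable=true
reporter.persistent.expire=20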
Disk Space
The amount of disk space necessary on the collector server depends on how heavily the collector is loaded, and how long the data is cached. The following are estimates of disk space usage for various collectors, assuming 50 instances per device, an average collection interval of 2 minutes and 30 minutes of cached data:
Collector Load | Number of Devices | Disk Space Usage |
Light | 50 | 75MB |
Medium | 200 | 150MB |
High | 1000 | 400MB |
Storing Cached Data
The cached data is stored in the /usr/local/logicmonitor/agent/bin/queues/data directory. This path differs if you did not install the collector in the default /usr/local/logicmonitor directory.
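To check how much space the cache is currently using on a Linux collector host (assuming the default install path), you can run:
du -sh /usr/local/logicmonitor/agent/bin/queues/data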
Discarding Cached Data
If the collector continues to cache data after the limit configured in reporter.persistent.expire (30 minutes by default), the oldest data will be discarded.
The amount of data that a Collector can handle depends on the Collector’s configuration and resources. You can monitor the data collection load and performance of your Collector to minimize disruption and to be notified when a collector is down. See Monitoring your Collectors.
If you have a large environment, and are experiencing alerts on the Unavailable Task Rate datasource of your Collectors, you may need to tune your Collector to increase its monitoring capacity.
Device Capacity Limits
The following table describes the capacity of collectors of different sizes. Capacity is measured in requests per second (RPS), except for Syslog, which is measured in events per second (EPS).
Note:
- We have attached 50 instances to every device. Thus, to get the number of instances, multiply the number of devices by 50. For example, 211 devices x 50 = 10,550 instances.
- These measurements are estimates, and the actual capacity may vary depending on your production environment.
Protocol | Small Collector | Medium Collector | Large Collector | Extra Large (XL) Collector | Double Extra Large (XXL) Collector |
 | CPU: 1 Intel Xeon Family System Memory: 2GiB JVM maximum memory: 1GiB | CPU: 2 Intel Xeon E5-2680v2 2.8GHz System Memory: 4GiB JVM maximum memory: 2GiB | CPU: 4 Intel Xeon E5-2680v2 2.8GHz System Memory: 8GiB JVM maximum memory: 4GiB | CPU: 8 System Memory: 16GiB JVM maximum memory: 8GiB | CPU: 16 System Memory: 32GiB JVM maximum memory: 16GiB |
SNMP v2c (Linux) | 300 standard devices 76 RPS | 1000 standard devices 256 RPS | 4000 standard devices 1024 RPS | 8000 standard devices 2048 RPS | 15000 standard devices 3840 RPS |
SNMP v3 | 855 standard devices 220 RPS | 1087 standard devices 278 RPS | 1520 standard devices 390 RPS | 2660 standard devices 682 RPS | 4180 standard devices 1074 RPS |
HTTP | 320 standard devices 160 RPS | 1400 standard devices 735 RPS | 2400 standard devices 1260 RPS | 4500 standard devices 2000 RPS | 7500 standard devices 3740 RPS |
WMI | 211 standard devices 77 RPS | 287 standard devices 102 RPS | 760 standard devices 272 RPS | 1140 standard devices 409 RPS | 1330 standard devices 433 RPS |
BatchScript | 94 standard devices 5 RPS | 124 standard devices 7 RPS | 180 standard devices 11 RPS | 295 standard devices 17 RPS | 540 standard devices 32 RPS |
Perfmon | 200 standard devices 87 RPS | 400 standard devices 173 RPS | 800 standard devices 347 RPS | TBA | TBA |
JMX | 1000 standard devices 416 RPS | 2500 standard devices 1041 RPS | 5000 standard devices 2083 RPS | TBA | TBA |
Syslog | TBD | 500 EPS (assuming event size of 100-200 bytes) | 2500 EPS (assuming event size of 100-200 bytes) | 4000 EPS (assuming event size of 100-200 bytes) | 7000 EPS (assuming event size of 100-200 bytes) |
SNMP v2 Trap | TBD | 17 standard devices | 87 standard devices | 140 standard devices | 245 standard devices |
SNMP v3 Trap | TBD | 14 standard devices | 70 standard devices | 112 standard devices | 196 standard devices |
The capacity also depends on the number of instances that need to be discovered for each monitored device. For example, if each device is a load balancer with 10,000 instances, collector capacity will be lower. If each device is a switch with hundreds of interfaces, collector capacity may be lower because it is limited by discovery.
Note:
- For monitoring production-critical applications and infrastructure, we recommend using a medium or larger collector, as per your requirements. You can use a small collector for testing purposes.
- If a collector runs on an Amazon EC2 instance, we recommend that you use a fixed performance instance type (such as M5 or C5) instead of a credit based instance type (such as T2).
- The nano collector size is not included in this table. The nano collector is used for testing and hence, no recommended device-count capacity has been assigned to it.
- Collectors using JDK 11 (supported by collector version 28.400 and later) will see roughly 10% more memory and CPU usage than the previous JDK 8 collectors on the same hardware.
- The estimated collector performance numbers for the SNMP v2 and v3 Trap LogSources assume that each device generates 10 traps per second.
Collector Memory Requirements in a VM
Part of a Collector’s system memory allocation is devoted to Standalone Script Engine (SSE), which is enabled by default and used to execute script DataSources (Groovy scripts).
Collector Size | SSE Memory Requirements |
Small | 0.5GiB |
Medium | 1GiB |
Large | 2GiB |
Extra Large | 4GiB |
Double Extra Large | 8GiB |
In general, the SSE requires half of the amount of memory allotted to the JVM. The memory requirements are not shared, rather the SSE requirement is in addition to the JVM memory requirements. If the Collector does not have this memory available, the SSE will not start and you will see “Can’t find the SSE Collector Group” in the Collector Status dialog. The Collector will work without the SSE, but Groovy scripts will be executed from the Agent instead of the SSE.
If the Collector is executed in a VM, this safeguard can be overridden because the OS indicates there is free memory. This burst memory capacity in VMs can increase memory use above the system memory requirements listed previously. Although this can happen for Collector of any size, it is far more likely to happen to small Collectors.
To disable SSE and prevent additional memory use, edit the Collector’s agent.conf:
- If the configuration setting reads groovy.script.runner=sse, change it to groovy.script.runner=agent.
- If the previous setting is not present, update the following setting: collector.script.asynchronous=false.
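A minimal sketch of the relevant agent.conf lines after disabling the SSE (property names taken from the list above):
groovy.script.runner=agent
collector.script.asynchronous=false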
For more information, see Editing the Collector configuration files.
NetFlow Capacity
The following table describes the capacity of NetFlow collectors across different sizes and OS platforms. It is measured in flows per second (FPS).
Note:
- For optimum performance, we recommend that you use the NetFlow collector only for collecting and processing NetFlow data.
- All numbers mentioned below were captured in our in-house PSR lab under controlled conditions. The collector’s actual capacity may vary based on the nature of the NetFlow traffic in your environment.
- Processing NetFlow data is CPU intensive. In case of a CPU crunch, we recommend that you first increase the resources (CPU cores) on the Collector host to support more flows. You can then switch to a larger collector size if increasing the CPU capacity does not help.
OS Platform | Metric | Small Collector | Medium Collector | Large Collector | Extra Large (XL) Collector | Double Extra Large (XXL) Collector |
Windows 64 bit Linux 64 bit | Supported Flows/sec | 7800 | 13797 | 23166 | 37418 | 52817 |
Adjusting Collector Size
You can adjust the collector size from the LogicMonitor portal, for example to tune performance or to increase the collector’s capacity after installation.
- Navigate to Settings > Collectors.
- Under the Collectors tab, select the collector whose size you want to adjust.
- Select the More option and then select Collector Configuration.
On the Collector Configuration page, the Agent Config settings are displayed.
- Select the collector size from the dropdown menu.
- Select Save and Restart. LogicMonitor automatically verifies if your host has enough memory to support the new collector size.
Note:
- Older Collectors will display their current size as “Custom (xGiB)” in the dropdown, even if no parameters have been modified since installing. This is because our definition of size has changed since the Collector was installed. If you want to ensure the Collector configuration is up to date, simply select the size you want (or had installed originally) and select Save and Restart.
- Changing a Collector’s size has no effect on parameters unrelated to its size. The parameters listed in the section below, Configuration Details, are the only ones impacted by a change in the Collector’s size.
If you are manually changing the collector’s config parameters, LogicMonitor runs a validity check after you select Save and Restart to ensure that no errors were made in the new configuration. If errors are detected, the missing/duplicated lines are displayed so that they can be corrected.
Small Collector
Config File | Parameters | Description |
wrapper.conf | wrapper.java.initmemory=128 | Minimum Java Heap Size(MiB) for Collector |
wrapper.java.maxmemory=1024 | Maximum Java Heap Size(MiB) for Collector | |
sbproxy.conf | wmi.stage.threadpool.maxsize=100 | The maximum number of threads to handle WMI query/fetch data in sbwinproxy.exe |
wmi.connection.threadpool.maxsize=50 | The maximum number of threads for WMI to connect to remote machines in sbwinproxy.exe | |
agent.conf | sbproxy.connector.capacity=8192 | The maximum number of requests that the Collector can send in parallel to sbwinproxy and sblinuxproxy |
discover.workers=10 | Allocates resources to Active Discovery iterations | |
autoprops.workers=10 | The thread pool size for AP | |
reporter.persistent.queue.consume.rate=10 | The max count of data entries that will be reported for each API call. | |
reporter.persistent.queue.consumer=10 | The thread count used to read from buffer and execute reporting. | |
collector.script.threadpool=100 | The max thread count to run script tasks. | |
website.conf | sse.max.spawn.process.count=3 | N/A |
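For reference, the agent.conf portion of a Small Collector would contain lines like the following (values taken from the table above; this is not a complete file):
sbproxy.connector.capacity=8192
discover.workers=10
autoprops.workers=10
reporter.persistent.queue.consume.rate=10
reporter.persistent.queue.consumer=10
collector.script.threadpool=100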
Medium Collector
Config File | Parameters | Description |
wrapper.conf | wrapper.java.initmemory=512 | Minimum Java Heap Size(MiB) for Collector |
wrapper.java.maxmemory=2048 | Maximum Java Heap Size(MiB) for Collector | |
sbproxy.conf | wmi.stage.threadpool.maxsize=200 | The maximum number of threads to handle WMI query/fetch data in sbwinproxy.exe |
wmi.connection.threadpool.maxsize=100 | The maximum number of threads for WMI to connect to remote machines in sbwinproxy.exe | |
agent.conf | sbproxy.connector.capacity=8192 | The maximum number of requests that the Collector can send in parallel to sbwinproxy and sblinuxproxy |
discover.workers=40 | Allocates resources to Active Discovery iterations | |
autoprops.workers=10 | The thread pool size for AP | |
reporter.persistent.queue.consume.rate=12 | The max count of data entries that will be reported for each API call. | |
reporter.persistent.queue.consumer=10 | The thread count used to read from buffer and execute reporting. | |
collector.script.threadpool=200 | The max thread count to run script tasks. | |
website.conf | sse.max.spawn.process.count=5 | N/A |
Large Collector
Config File | Parameters | Description |
wrapper.conf | wrapper.java.initmemory=1024 | Minimum Java Heap Size(MiB) for Collector |
wrapper.java.maxmemory=4096 | Maximum Java Heap Size(MiB) for Collector | |
sbproxy.conf | wmi.stage.threadpool.maxsize=400 | The maximum number of threads to handle WMI query/fetch data in sbwinproxy.exe |
wmi.connection.threadpool.maxsize=200 | The maximum number of threads for WMI to connect to remote machines in sbwinproxy.exe | |
agent.conf | sbproxy.connector.capacity=16384 | The maximum number of requests that the Collector can send in parallel to sbwinproxy and sblinuxproxy |
discover.workers=80 | Allocates resources to Active Discovery iterations | |
autoprops.workers=15 | The thread pool size for AP | |
reporter.persistent.queue.consume.rate=12 | The max count of data entries that will be reported for each API call. | |
reporter.persistent.queue.consumer=15 | The thread count used to read from buffer and execute reporting. | |
collector.script.threadpool=300 | The max thread count to run script tasks. | |
website.conf | sse.max.spawn.process.count=5 | N/A |
XL Collector
Config File | Parameters | Description |
wrapper.conf | wrapper.java.initmemory=1024 | Minimum Java Heap Size(MiB) for Collector |
wrapper.java.maxmemory=8192 | Maximum Java Heap Size(MiB) for Collector | |
sbproxy.conf | wmi.stage.threadpool.maxsize=800 | The maximum number of threads to handle WMI query/fetch data in sbwinproxy.exe |
wmi.connection.threadpool.maxsize=400 | The maximum number of threads for WMI to connect to remote machines in sbwinproxy.exe | |
agent.conf | sbproxy.connector.capacity=32768 | The maximum number of requests that the Collector can send in parallel to sbwinproxy and sblinuxproxy |
discover.workers=160 | Allocates resources to Active Discovery iterations | |
autoprops.workers=20 | The thread pool size for AP | |
reporter.persistent.queue.consume.rate=15 | The max count of data entries that will be reported for each API call. | |
reporter.persistent.queue.consumer=20 | The thread count used to read from buffer and execute reporting. | |
collector.script.threadpool=400 | The max thread count to run script tasks. | |
website.conf | sse.max.spawn.process.count=10 | N/A |
XXL Collector
Config File | Parameters | Description |
wrapper.conf | wrapper.java.initmemory=2048 | Minimum Java Heap Size(MiB) for Collector |
wrapper.java.maxmemory=16384 | Maximum Java Heap Size(MiB) for Collector | |
sbproxy.conf | wmi.stage.threadpool.maxsize=1600 | The maximum number of threads to handle WMI query/fetch data in sbwinproxy.exe |
wmi.connection.threadpool.maxsize=800 | The maximum number of threads for WMI to connect to remote machines in sbwinproxy.exe | |
agent.conf | sbproxy.connector.capacity=65536 | The maximum number of requests that the Collector can send in parallel to sbwinproxy and sblinuxproxy |
discover.workers=320 | Allocates resources to Active Discovery iterations | |
autoprops.workers=30 | The thread pool size for AP | |
reporter.persistent.queue.consume.rate=20 | The max count of data entries that will be reported for each API call. | |
reporter.persistent.queue.consumer=30 | The thread count used to read from buffer and execute reporting. | |
collector.script.threadpool=600 | The max thread count to run script tasks. | |
website.conf | sse.max.spawn.process.count=15 | N/A |
Minimum Recommended Disk Space
Although the Collector operates in memory, operations such as caching require available disk space on its host. The exact amount of required storage varies and depends on factors such as Collector size, configuration, NetFlow usage, number of Collector logs, and so on.
These are examples of required disk space based on these factors:
- A brand-new Collector installation will use about 500MiB.
- At most, Collector logs will use 800MiB.
- Temporary files (for example, upgrade files) will use less than 1500MiB.
- Report cache data will use less than 500MiB by default (this figure represents 30 minutes of cached data for a Large Collector).
- If you are using NetFlow, disk usage is less than 30GiB.
In total, this means Collector disk usage will be less than 3.5GiB without NetFlow and up to 33.5GiB with NetFlow enabled.
You can control the behavior of LogicMonitor collectors using configuration files. Configuration files are located in the collector’s installation directory at the following default file path:
- Linux – /usr/local/logicmonitor/agent/conf
- Windows – C:\Program Files (x86)\LogicMonitor\Agent\conf
You can view and update the settings in the collector configuration files on a per-collector basis on the LogicMonitor user interface.
We recommend using the LogicMonitor user interface to update the settings in the collector configuration files rather than editing the files manually. If you modify the local collector configuration files manually, you do so at your own risk.
Note: You can only modify the agent.conf.local configuration file manually on the collector filesystem. Any configurations added to this file override the generic agent.conf configuration file. This enables you to configure settings such as debug.disable=false and remotesession.disable=true and ensures the settings cannot be changed on the user interface.
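For example, a minimal agent.conf.local containing only the overrides mentioned in the note above would look like this:
debug.disable=false
remotesession.disable=true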
Editing Collector Configuration
- Navigate to Settings > Collectors.
- Under the Collectors tab, select the collector you want to configure.
- Select the More option and then select Collector Configuration.
On the Collector Configuration page, settings under the Agent Config tab are displayed. You can select the WatchDog Config, Wrapper Config, Sbproxy Config, and Website Config tabs to access more settings.
- Manually edit the settings.
- Select Save and Restart to restart the collector and apply the changes.
Typically, collector events include errors related to data collection tasks and the stopping/starting/restarting of the collector services. You can look through these events to debug your collector issues.
To view collector events, follow these steps:
- Navigate to Settings > Collectors.
- Under the Collectors tab, select the collector whose events you want to view.
- Select the More option and then select Collector Events.
All the events associated with the selected collector are displayed on the Collector Events page.
You can select a date/time range to view the events.
Grouping your collectors into logical units can streamline account management, simplify end user permission settings, improve efficiency, and more. LogicMonitor supports two types of collector groups:
- Standard collector groups
- Auto-balanced collector groups (ABCG)
Standard Collector Groups
Standard collector groups primarily assist with collector organization. For example, you can organize collectors into groups based on any of the following shared characteristics:
- Physical location – If you have infrastructure across multiple data centers or offices, grouping collectors based on their locations can simplify assigning collectors throughout your account (e.g. when devices are added).
- Customer – If you’re an MSP, grouping collectors by customer can make it easier to quickly find a particular collector, and additionally simplifies the collectors page display when you have a large number of collectors in your account.
- Environment – You may want to group collectors based on whether they are in a development, QA, production, or other environment. This will enable you to set user role permissions per group, to ensure that your team members have the appropriate access.
Auto-balanced Collector Groups
Auto-Balanced Collector Groups (ABCGs) provide functionality beyond organization. The collectors in an ABCG share device load, allowing for dynamic device balancing, scaling, and failover.
Adding Collector Group
- Navigate to Settings > Collectors.
- Under the Collectors tab, select the Add Collector Options dropdown.
- Select Add Collector Group.
- Toggle the Auto Balanced Collector Group option to enable auto balancing for the collector group. Auto-balancing allows you to share device load among a group of collectors.
- Enter a name and description of your new collector group.
- Define the key and value pair to add properties on your collector that can then be tokenized in collector-related alert messages. This is particularly useful for routing collector down, failover, and failback alerts through an external ticketing system.
For example, your team in Austin is responsible for a specific subset of collectors. To ensure the Austin team is properly notified in the event one of their collectors goes down, you can assign these collectors a custom property. Once assigned, the property can be tokenized (##team##) and used to route alerts to the proper team via your organization’s ticketing system. The token is substituted with the property value at the time of generation so that the alert or integration delivery can include dynamic information. An illustration follows this list.
- If you designated your new group as an Auto-Balanced Collector Group, the Rebalance Threshold – Instance Count option is displayed. You can adjust the threshold instance count.
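As an illustration only (the property name, value, and alert message wording below are hypothetical), you could define the property on the group and then reference the token in a collector down alert message routed to your ticketing system:
team=austin
Collector down for team ##team## – route this ticket to the ##team## on-call queue.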
Moving a Collector to Collector Group
Collectors are assigned to collector groups at the time collectors are added. As your LogicMonitor deployment evolves over time, it is likely you’ll want to move collectors among groups to suit new organizational needs, create auto-balanced groups, and so on. To move collectors from one collector group to another, navigate to Settings > Collectors. From the Collectors page, either:
- To move collectors in bulk – Select the checkbox on the table header to select all the collectors, select Actions > Move selected items to Group, and then confirm your action.
- To move a single collector – Select the Manage icon of the collector you want to move and then select a new group in the Collector Group field.
Moving Collectors Between Standard and ABCG
Collectors function very differently depending upon whether they are members of a standard collector group or an Auto-Balanced Collector Group (ABCG). Collectors in a standard group operate independently of one another and each manually designates a static failover collector. Collectors in an ABCG dynamically fail over to other collectors in the ABCG, thus requiring no manually designated failover collector. This becomes an important distinction when moving collectors between these two collector group types. You must consider the following points:
- Moving from standard collector group to ABCG. When moving a collector out of a standard collector group to an ABCG, note the following:
- The collector’s Failover Collector designation will be discarded. ABCGs employ a dynamic rebalancing algorithm upon collector failover; they don’t rely on one-to-one manual failover designations between collectors.
- There are several characteristics collectors belonging to the same ABCG must share.
- You decide whether the devices currently monitored by the collector should be immediately enabled for auto balancing. Upon moving, you’ll be presented with the Do not auto balance monitored devices option. This option, when selected, leaves the devices assigned to their current collector, allowing you to manually enable the devices on a case-by-case basis for participation in auto balancing.
- Moving from ABCG to standard collector group. When moving a collector out of an ABCG to a standard collector group, it’s important to remember that the collector has no failover designation. You’ll need to open its settings and assign one from the Failover Collector field.
Managing Collector Groups
You can edit or delete collector groups from the collectors page.
- On the Collectors page, select the collector group that you want to edit or delete.
- Select the Manage icon. The Manage Collector Group page is displayed.
- After updating, save the changes.
- To delete the collector group, select Delete.
Note: A combination of collector group and device permissions impacts how individual users can interact with collectors within the account.
The LogicMonitor Collector monitors your infrastructure and collects data defined by LogicModules for each resource in that location. You do not need to install a Collector on every device; instead, one Collector installed on a server can monitor all the resources in that location. See About the LogicMonitor Collector.
Installing Collector in a Container
LogicMonitor also supports installing and running the Collector in a Docker container. Installation of a containerized Collector does not support all install options. For example, you can only run the full installation, not the bootstrap, and you will need to run the Collector process as root. See Installing the Collector in a Container.
Installation Settings
- Navigate to Settings > Collectors.
- Under the Collectors tab, select the Add Collector Options dropdown.
- Select Add Collector.
Follow the steps given on the Add Collector page to complete and verify the collector installation.
Selecting Device to Install Collector
The first step in adding a Collector is deciding which device will host the Collector.
For each location of your infrastructure, we recommend that you install a Collector on a Windows or Linux server that is physically close to or on the same network as the resources it will monitor. Most often, Collectors are installed on machines that function as syslog servers or DNS servers.
To ensure reliability, the Collector should not communicate across the internet to poll resources in another datacenter, through firewalls or network address translation (NAT) gateways.
Collector Server Requirements
The following table lists general requirements for choosing a server to host the Collector.
Requirement | Details |
Windows Server or Linux running on a physical or virtual server | LogicMonitor follows the Microsoft Lifecycle Policy for the “Extended Support End Date” and the Red Hat Enterprise Linux Life Cycle for the “End of Maintenance Support 2 (Product retirement)” date to determine which Windows Server and Linux operating systems are supported for Collector installation. We support the following Linux distributions:
Notes: |
Comprehensive port access | The server must be able to make outgoing HTTPS (port 443) connection to the LogicMonitor servers (proxies are supported). In addition, the ports for the monitoring protocols you intend to use (such as SNMP, WMI, JDBC, etc.) must be unrestricted between your Collector and the resources you want to monitor. For a detailed list of the ports, see About the LogicMonitor Collector. |
2GB of RAM | A minimum of 2GB of RAM. (More memory permits a Collector to collect data from more resources.) See Collector Capacity. |
Reliable time | The Collector should have reliable time, thus the server should have NTP setup or Windows Time Services to synchronize via NTP. If running on a VMware virtual machine, install VMware tools with VMware tools periodic Time Sync disabled. |
English-language support | LogicMonitor does not support non-English languages. |
Monitoring Collector Performance
We recommend that you select the Monitor the Device on which the collector is installed checkbox. This will allow you to keep track of the CPU utilization, disk usage and other metrics to ensure that the Collector is running and keeping up with its data collection load. See Monitoring Your Collectors.
You may also assign the Collector device into a Device Group. If you leave the device “Ungrouped”, LogicMonitor will automatically add it to the dynamic group “Collectors”. See Device Groups Overview.
Selecting Collector
The next step in adding a Collector is specifying the type, version, and the monitoring capacity (size) for the Collector you will install onto your server. You may also assign the new Collector to a Collector Group.
Selecting Collector Type
Select the appropriate Collector download file for your server: Linux or Windows. For both Windows and Linux, only 64-bit operating systems are supported. The type of Collector you choose to install depends on the resources it will monitor. For example, to collect data from Windows devices, you need to install the Collector on a Windows server.
Selecting Collector Version
Select from the available General Release and Early Release Collectors.
Version | Description |
General Release | General Release Collectors are our stable release versions. We recommend this version for most infrastructures. |
Early Release | Early Release Collectors offer new features and functionality which may still be under development. You may want to install this to test the new features. But if you have a large deployment we don’t recommend installing this version to monitor your entire infrastructure. |
You can always change the version by uninstalling and installing a new Collector.
Selecting Collector Size
The Collector size refers to the monitoring capacity for the Collector. The number of resources that a Collector can monitor depends on the data collection method that it uses (such as SNMP, JDBC, WMI, and so on). See Collector Capacity.
You can choose from the following Collector sizes:
Size | Description |
Nano | This Collector is intended for testing purposes and not recommended for production environments. It does not have a memory requirement as it will consume less than 1GB of system memory and will monitor a limited number of Resources. |
Small | This Collector will consume approximately 2GB of system memory and is capable of monitoring roughly 200 (Linux Collector) or 100 (Windows Collector) Resources. |
Medium | This Collector will consume approximately 4GB of system memory and is capable of monitoring roughly 1000 (Linux Collector) or 500 (Windows Collector) Resources. |
Large | This Collector will consume approximately 8GB of system memory and is capable of monitoring roughly 2000 (Linux Collector) or 750 (Windows Collector) Resources. |
Extra Large | This Collector will consume approximately 16GB of system memory. |
Double Extra Large | This Collector will consume approximately 32GB of system memory. |
Assigning Collector Group
You may assign the new Collector to an existing Collector Group or create a new group. Collector Groups pool your Collectors based on their physical location, environment (QA, Development, or Production), or customer (if you are an MSP), and streamline the configuration and management of multiple Collectors. See Collector Groups.
Downloading and Installing Collector
This step provides options for you to download the installer file for the collector you selected.
Selecting Installer Package
You have to choose between two installer packages:
- Bootstrap – Downloads a smaller installation package (~500kB) for a faster install using the LogicMonitor CDN.
- Full Package – Downloads the full installation package, which is approximately 200MB.
Installing Windows Collector
Starting with EA Collector 37.100, the default installation method for the Windows collector uses a non-admin user logicmonitor as the collector service user. This user is automatically created with all the necessary permissions.
Recommendation: Although you can choose the LocalSystem or Administrator user, it is recommended to use the default non-admin user for Windows collector installation to follow security best practices.
- Download the installer file directly to your Windows server or use one of the download command options.
For Windows, we provide options to download and install using PowerShell or a URL. Click the option to copy the download command to your clipboard and then run it on your server.
- After downloading, open the installer file to start the Install Shield Wizard.
The Install Shield Wizard will extract the binary and prompt you for credentials. These credentials will correspond to the account that the Collector will run under, which may be Local System or a domain account with local administrator permissions.
- If this Collector is not monitoring other Windows systems, run the service as Local System.
- If this Collector is monitoring other Windows systems in the same domain, run the service as a domain account with local administrator permissions.
- If this Collector is monitoring other Windows systems and they are not part of the same domain, run the service as a local administrator and connect to each resource with local administrator credentials. You may choose to set up the password so that it doesn’t expire, to reduce authentication issues between the Collector and its monitored resources. See Credentials for Accessing Remote Windows Computers.
The LogicMonitor Collector service must be granted “Log on as a service” under “Local Policy/User Rights Assignment” in the Windows server’s local security policy settings. See Troubleshooting Windows Collectors.
If the Windows server is running antivirus software, you will need to add a recursive exclusion for the LogicMonitor Collector application directory. See About the LogicMonitor Collector.
Installing Linux Collector
Prerequisites to install a Linux Collector are as follows:
- For Collectors running version 28.500 (or higher numbered versions), the Bourne shell is required for the Linux installation script. You may need to install the
vim-common
package to get thexxd
binary that the installer depends on. - For Collectors running version 28.100 (or higher numbered versions), the
sudo
package must be installed on Linux when running the Collector as a non-root user. The installer will also make additions to/etc/sudoers
to handle service restart and memory dumps. - In Linux environments with the Collector running in containers, the Collector must run as root: suid root is
/bin/ping
.
- Download the installer file directly to your server (if your server supports web browsing) or onto another server and use a file transfer option (such as scp) to copy it to the server where you will install the collector.
For Linux, we also provide options to download and install using cURL or Wget. Click the option to copy the download command to your clipboard and then run it on your server.
- After downloading the installer onto your Linux server, change the permissions to make the binary executable.
# chmod +x <installer-file>.bin
- Run the executable.
# ./<installer-file>.bin
When the installation is complete, you will see a message that it installed successfully. You can now start adding resources to be monitored.
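For reference, a minimal sketch of the transfer and install sequence, assuming the installer file is named <installer-file>.bin and the target server (collector-host) is reachable over SSH:
# copy the installer from your workstation to the target server
scp <installer-file>.bin admin@collector-host:/tmp/
# on the target server, make the installer executable and run it
chmod +x /tmp/<installer-file>.bin
/tmp/<installer-file>.bin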
Note: Installing the Collector on Linux creates a default user, called logicmonitor, to run the Collector as a user without root privileges. Although you can select a different user or run as root, LogicMonitor recommends using this logicmonitor user created by the install script.
If you want to install the Linux collector as root, contact the LogicMonitor product team or technical support.
If you have issues with your Linux collector, see Troubleshooting Linux Collectors.
Verifying Connection
After successfully installing the Collector on your Windows or Linux server, return to the Add Collector page in LogicMonitor and verify that the Collector is connected to your portal.
Collector Hostname
Each Collector has a name or ID that is registered with the LogicMonitor server when you download the Collector. The Collector’s hostname refers to the IP address or DNS name of the server that the Collector has been installed on.
- For Linux, the Collector will resolve the hostname by running the hostname -f or hostname commands. If both commands fail, the hostname defaults to localhost.localdomain.
- For Windows, the hostname is a combination of the domain and computername.
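To see what the Collector would register on a Linux host, you can run the same lookups yourself (a sketch of the fallback order described above; if both commands fail, the Collector uses localhost.localdomain):
hostname -f 2>/dev/null || hostname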
Initially, you could run Linux Collectors only using root credentials. Later, we extended this support to allow users with non-root credentials to install Collectors.
We have now enhanced the migration process to enable you to migrate Collectors running as root to run under non-root users without uninstalling the Collector or losing any data. You can use either the prompt-based or the silent migration process to migrate Collectors running as root to run under a non-root user.
You must run the updateToNonRoot.sh script. The default path is /usr/local/logicmonitor/agent/bin/updateToNonRoot.sh.
Note:
- When you upgrade Linux collectors that are running as root users to EA Collector 35.400, they are automatically migrated to non-root users. This might impact the collector status and device monitoring. If a collector stops monitoring devices or goes down after the upgrade, run the revertToRootUser.sh script to roll back to the root user.
- To migrate the Docker collector to a non-root user, see Running a Linux Collector in a Docker Container as a Non-Root User.
Requirements
- Users with root credentials can execute the script.
- Ensure that the Collector is installed as a root user.
Points to Consider
- In case of silent migration, you must place the parameters in the following sequence: -q -u -d.
- When migrating the Linux Collector using the silent migration method, you can access help by entering the -h parameter after the ./updateToNonRoot.sh script. The following parameters are displayed:
Parameter | Description |
-h | Provides help. |
-q | Indicates to the installer that the migration should be done in Silent mode. |
-u | Provides the name of the non-root user under whom you want to run the Collector service after migration. |
-d | Indicates the path where the Collector is installed. By default, the Collector is installed at /usr/local/logicmonitor. If the Collector is not installed at the default path, then enter the custom path where you have installed the Collector. |
Migrating Linux Collectors
You can migrate the Linux Collector from a root to a non-root user using the silent or prompt-based migration method. Note that when you install the Linux Collector using either installation method, LogicMonitor creates a default non-root user ‘logicmonitor’. When migrating the Linux Collector from root to a non-root user, if the non-root user that you specified for migration does not exist, the ./updateToNonRoot.sh script will create that non-root user.
Silent Migration
In the command prompt, run the following commands:
- Log in to the machine with root credentials.
- Navigate to the agent/bin folder of your Collector.
- Enter and run the ./updateToNonRoot.sh command followed by the parameters for silent migration. The format and sequence is -q -u [non-root username] -d [custom path, if any].

After you run the script, the Linux Collector is migrated from root to non-root.
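For example, to migrate to the default logicmonitor user with the Collector installed in the default location, the command run from the agent/bin folder would look like this:
./updateToNonRoot.sh -q -u logicmonitor -d /usr/local/logicmonitor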
Prompt based Migration
In the command prompt, run the following commands:
- Log in to the machine with root credentials.
- Navigate to the agent/bin folder of your Collector.
- Run the script
./updateToNonRoot.sh
.
The system will prompt you to specify the user to migrate the Collector to non-root. - The script will create a default non-root user ‘logicmonitor’ and use it. You can create and use your own non-root user account, if necessary.
- By default, the Collector is located at /usr/local/logicmonitor. If the Collector is located at some other directory, then specify that path.
After you run the script, the Linux Collector is migrated from root to non-root.

Verifying Migration
To verify if the Collector has successfully migrated from root to non-root, follow these steps 10 minutes after the migration is complete:
- In LogicMonitor, navigate to Collectors and search for the Collector ID which you migrated to non-root user.
- In the Manage column corresponding to the specific Collector, click the Settings icon. The Manage Collector dialog box is displayed.
- Click the Support drop-down and select Collector Status.

Rolling Back Migration
If the updateToNonRoot.sh script fails to migrate the Linux Collector from root to non-root, or if you face any issue after migration, you can run the revertToRootUser.sh script to roll back the migration. The script is available in the agent/bin folder.
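For example, assuming the default install path and that the shipped script accepts the same -d parameter as the reference script shown below, the rollback run as root would look like this:
cd /usr/local/logicmonitor/agent/bin
./revertToRootUser.sh -d /usr/local/logicmonitor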
Note:
- The destination path must be the path where the Collector is currently installed.
- The rollback script is available in EA Collector 32.400 and later.
- If you want to roll back the migration for Collector versions prior to 32.400, you can copy the script given below to create the script file.
#!/bin/sh
# get the name of init process
get_init_proc_name() {
file_name="/proc/1/stat"
cat $file_name|cut -f1 -d')'|cut -f2 -d'('
}
# get a string as answer from the stdin
get_input() {
prompt_msg=${1:?"prompt message is required"}
default_value=${2}
if [ "$default_value" != "" ];then
prompt_default_value=" [default: $default_value]"
fi
read -p "$prompt_msg$prompt_default_value:" value
if [ "$value" = "" ];then
value=$default_value
fi
echo $value
}
help() {
echo "Usage : [-h] [-y] [-u install user] [-d install path]
-h help - show this message
-y silent-update - update silently
-d install path - installation path of collector(default: /usr/local/logicmonitor)"
exit 1
}
OPTS_SILENT=false
DEST_USER="root"
DEST_DIR="/usr/local/logicmonitor"
DEST_GROUP="root"
while getopts "hqu:d:" current_opts; do
case "${current_opts}" in
h)
help
;;
q)
OPTS_SILENT=true
;;
d)
DEST_DIR=${OPTARG}
;;
*)
help
;;
esac
done
if [ "$OPTS_SILENT" != "true" ]; then
DEST_DIR=`get_input "Enter the directory under which collector is installed" "$DEST_DIR"`
fi
if [ -d "$DEST_DIR/agent" ]; then
service logicmonitor-watchdog stop
service logicmonitor-agent stop
systemctl disable logicmonitor-agent.service
systemctl disable logicmonitor-watchdog.service
CUR_USER=$(stat -c '%U' $DEST_DIR)
if [ "$CUR_USER" != "root" ]; then
LM_WATCHDOG_SERVICE="$DEST_DIR/agent/bin/logicmonitor-watchdog.service"
sed -i.bak "s#User=$CUR_USER#User=root#g" $LM_WATCHDOG_SERVICE
sed -i.bak "s#Group=$CUR_USER#Group=root#g" $LM_WATCHDOG_SERVICE
rm -f $LM_WATCHDOG_SERVICE.bak
LM_AGENT_SERVICE="$DEST_DIR/agent/bin/logicmonitor-agent.service"
sed -i.bak "s#User=$CUR_USER#User=root#g" $LM_AGENT_SERVICE
sed -i.bak "s#Group=$CUR_USER#Group=root#g" $LM_AGENT_SERVICE
rm -f $LM_AGENT_SERVICE.bak
fi
ldconfig
chown $DEST_USER:$DEST_GROUP $DEST_DIR/
chown -R $DEST_USER:$DEST_GROUP $DEST_DIR/agent
INIT_PROC=`get_init_proc_name`
if [ "$INIT_PROC" = "systemd" ];then
mkdir -p /etc/systemd/user
cp $DEST_DIR/agent/bin/logicmonitor-agent.service /etc/systemd/system
cp $DEST_DIR/agent/bin/logicmonitor-watchdog.service /etc/systemd/system
chown $DEST_USER:$DEST_GROUP /etc/systemd/system/logicmonitor-agent.service
chown $DEST_USER:$DEST_GROUP /etc/systemd/system/logicmonitor-watchdog.service
chmod 0644 /etc/systemd/system/logicmonitor-agent.service
chmod 0644 /etc/systemd/system/logicmonitor-watchdog.service
systemctl enable logicmonitor-agent.service
systemctl enable logicmonitor-watchdog.service
rm -f /etc/systemd/user/logicmonitor-watchdog.service
rm -f /etc/systemd/user/logicmonitor-agent.service
systemctl daemon-reload
echo "Succesfully reverted collector services to run under $DEST_USER"
else
ln -sf $DEST_DIR/agent/bin/logicmonitor-agent /etc/init.d/logicmonitor-agent
ln -sf $DEST_DIR/agent/bin/logicmonitor-watchdog /etc/init.d/logicmonitor-watchdog
chown $DEST_USER:$DEST_GROUP /etc/init.d/logicmonitor-agent
chown $DEST_USER:$DEST_GROUP /etc/init.d/logicmonitor-watchdog
/sbin/chkconfig --add logicmonitor-agent 2>/dev/null
/sbin/chkconfig --add logicmonitor-watchdog 2>/dev/null
#if update-rc.d exists, let's run it to install our services
if which update-rc.d 2> /dev/null;then
# We found update-rc.d, let's use it ...
update-rc.d logicmonitor-agent defaults 2>/dev/null
update-rc.d logicmonitor-watchdog defaults 2>/dev/null
fi
echo "Succesfully reverted collector services to run under $DEST_USER"
fi
$DEST_DIR/agent/bin/logicmonitor-watchdog start
else
echo "The agentPath is not $DEST_DIR or is not provided. Please provide correct path where collector is installed and run the script again."
fi
Disclaimer: This feature is currently in Beta. To become a Beta participant, contact Customer Success.
Previously, you could monitor Windows devices only through a Windows Collector. LogicMonitor has now developed the ability to monitor Windows devices using Linux Collectors. You can continue to perform WMI tasks on Linux Collectors; Linux Collectors can perform tasks for both operating systems.
This feature is cost-effective because you do not need to buy a licensed copy of Windows Server for the Collector, reducing your LogicMonitor onboarding and adoption cost.
Note: For now, Perfmon type DataSources will not work on Linux Collectors. If you want to use perfmon data in LogicMonitor, you should create a WMI based DataSource with the intended perfmon class mentioned as the WMI class within the DataSource definition.
Requirements
- OpenSSL version 1.0.x or 1.1.x.
- x64 (64-bit) Linux machine supported by OMI.
- CentOS 6, 7, and 8
- Debian 8, 9, and 10
- Oracle Linux 5, 6, 7, and 8
- Red Hat Enterprise Linux Server 5, 6, 7, and 8
- SUSE Linux Enterprise Server 11, 12, 12 ppc, and 15
- Ubuntu 14.04 LTS, 16.04 LTS and 18.04 LTS, and 20.04 LTS
- Collector version 10000.100. It is a special Beta version and should be used only for this Beta. It will not be supported in the production environment. To access this version, see Installing Collectors.
- Root permission to install Open Management Infrastructure (OMI) server during Collector installation.
Note: The root permission is only required during installation. Collector can run as non-root after the installation is complete.
Points to Consider
- As part of the Collector installation process, OMI will be installed on your Linux machine.
- SBProxy is replaced with OMI server installed on Linux Collector.
- The following table lists the OSS dependency and the license under which it is shipped:
Dependency | License |
OMI | MIT |
WMI Credentials
Provide the username and password as device-level properties.
- Username = wmi.user
- Password = wmi.pass
Enter the username in the username@domain format to pass the domain name as part of the username.
agent.conf Configuration Parameters
You should configure the following parameters:
Parameter | Type | Default | Description |
linux.collector.enable.windows.monitoring | Boolean | FALSE | Set it as ‘true’ to enable Linux Collectors to monitor Windows devices. |
omi.encryption.https.enable | Boolean | TRUE | By default, basic authentication over HTTPS is used. If you set it as ‘false’, it will use NTLM over HTTP. Note that we currently do not support NTLM authentication. |
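For example, to enable Windows monitoring on a Linux Collector while keeping the default HTTPS-based basic authentication, the relevant agent.conf lines would be:
linux.collector.enable.windows.monitoring=true
omi.encryption.https.enable=true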
Basic Authentication
By default, basic authentication (over HTTPS only, port 5986) is disabled in the WinRM configuration of the remote host. To enable it, perform the following steps on the remote monitored host in cmd_prompt:
- winrm quickconfig
- winrm set winrm/config/service/auth @{Basic="true"}
You need not install any additional packages for basic authentication. However, an HTTPS listener must be created/enabled on the Windows devices. If you do not have an HTTPS listener created for WinRM, follow the steps given below to set up Windows devices with basic authentication.
Create an HTTPS Listener using Self-Signed Certificate
To create an HTTPS listener for WinRM, perform the following steps:
- Run the command WinRM e winrm/config/listener in cmd_prompt to check if port 5986 is already enabled on WinRM service.
- Create a new self-signed certificate using PowerShell.
New-SelfSignedCertificate -DnsName "<YOUR_DNS_NAME>" -CertStoreLocation Cert:\LocalMachine\My
Note: The DnsName is your computer's full name. You can find it under Control Panel > System and Security > System > Full computer name.
- Copy the certificate thumbprint and run the following command in cmd_prompt.
winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="<YOUR_DNS_NAME>"; CertificateThumbprint="<COPIED_CERTIFICATE_THUMBPRINT>"}
- To verify that the listener is created, run the command WinRM e winrm/config/listener in cmd_prompt to print details of ports 5985 and 5986.
- To add a new firewall inbound rule to allow all connections for port 5986 (TCP), follow these steps:
- Search and select the Windows Firewall with Advanced Security option. The Windows Firewall with Advanced Security window is displayed.
- In the left navigation, right-click Inbound Rules and select New Rule. The New Inbound Rule Wizard window is displayed.
- Select the Port radio button and click Next.
- Select the TCP radio button.
- Select the Specific local ports radio button and enter 5986 in the blank field. Click Next.
- Select the Allow the connection radio button and click Next.
- Select the Domain, Private, and Public checkboxes and click Next.
- Enter a rule name in the Name field and click Finish.
Enable Windows Monitoring when Installing Linux Collectors
- Follow the instructions given in Installing Collectors to install the Linux Collector. Once the Collector is successfully installed, the system displays a message asking if you want to enable Windows monitoring. Note that by default, Windows monitoring is NOT enabled on the Linux Collector.
- Press Y to install the packages required to monitor Windows devices through the Linux Collector.
- To enable the Linux Collector to monitor Windows devices, in the agent.conf settings, manually set linux.collector.enable.windows.monitoring to ‘true’.
Note: To verify the installation of OMI packages, run the following query in Linux command line terminal of your Collector machine:
/opt/omi/bin/omicli --auth Basic --hostname hostname -u user -p password --port 5986 wql root/cimv2 "Select * from win32_UserAccount" --encryption https
You can integrate the LogicMonitor Collector with CyberArk Vault to store sensitive information such as login credentials, keys, and other sensitive data for hosts, devices, services, and more. In addition to a single account, CyberArk Vault also supports Dual Accounts to eliminate edge-case delays that can occur with a single account, such as data collection loss and account lockout during the password rotation process.
Requirements
To integrate LogicMonitor Collector with CyberArk for Dual Account, you must fulfil the following requirements:
- EA Collector version 32.200 or later.
- Two CyberArk accounts with similar privileges. The VirtualUsername for both accounts must be the same.
- One of the two CyberArk accounts must be marked as active in the CyberArk portal. CyberArk should handle this automatically during the password rotation.
CyberArk Application Authentication Methods
LogicMonitor Collector and CyberArk integration support the following methods for the application authentication:
- Allowed machines
- Path
- Hash
- Client certificates
Out of these four methods, we have explained the Client certificate method in the Configuring CyberArk Certificates section.
Authentication to Privileged Access Security (PAS) Solution
The CyberArk AIMWebService application is deployed on the IIS Server. LogicMonitor Collector and CyberArk integration uses Basic authentication to the IIS Server.
You should use the password provided to you to log in to the Vault. After logging in, we recommend that you change your password.
Collector Agent Configuration Settings for CyberArk Integration
The following table contains Collector agent configurations for the Dual Accounts:
Agent Configuration | Type | Default | Description |
vault.bypass | Boolean | TRUE | If the value is set as true, the Vault API is not called. If the value is set as false, the Vault API is called. |
vault.credentials.cache.expirationtime | Integer | 60 minutes | Expiration timeout (in minutes) for credentials in Vault cache. After this time, the credentials in the Vault cache expire and you have to re-fetch them from the Vault. |
vault.credentials.refresh.delay | Integer | 15 seconds | The amount of time delay (in seconds) after credentials cache expiration time. Refresh the task after the cache expiration time. Note: You may customise the amount of time delay while installing the Collector. |
vault.credentials.pair.enable | Boolean | FALSE | This property enables CyberArk Dual Account configuration on Collector. To enable it, set the value as ‘true’. |
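For example, to enable the CyberArk Dual Account integration while keeping the default cache timings, the agent.conf lines (values shown are the defaults from the table above, with the Vault API enabled) would be:
vault.bypass=false
vault.credentials.pair.enable=true
vault.credentials.cache.expirationtime=60
vault.credentials.refresh.delay=15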
Configuring Vault Properties
You must configure Vault properties that include Vault metadata and Vault keys for the Collector at the device or device group level.
Note: CyberArk does not allow use of special characters such as \ / : * ? ” in Safe names and object names.
Configuring Metadata Properties
You must configure the following Vault metadata properties.
Vault Metadata | Description |
vault.meta.url | URL of the Vault. This URL must contain the folder and application ID only. |
vault.meta.safe | Safe (Applicable only in case of CyberArk). A device can have only a single Safe. |
vault.meta.type | The type of the Vault. Currently, the “CyberArk” Vault integration is supported in the Collector. |
vault.meta.header | The headers required for HTTP Get Request. The value for this custom property would be the header separated with “&“ the header key value would be separated with “=” as shown in the below example: vault.meta.header – Content-Type=application/json&Accept-Encoding=gzip, deflate, br |
vault.meta.keystore.type | Type of the key store. If the key store type is not specified, the default type for the key store is JKS. |
vault.meta.keystore.path | Path of the Keystore file. |
vault.meta.keystore.pass | Password for the Keystore. |
Configuring Vault Keys
Vault keys need to be specified at the device level with the suffix .lmvault. For example, ssh.user information should have the key specified as ssh.user.lmvault. You must configure the following Vault keys.
Vault Key | Description |
Property suffixed with .lmvault (for a single account) | The custom property for which value must be retrieved from the Vault and must be specified at the device level by adding suffix .lmvault. The value of such property would be the path of the key in the Vault. For example: ssh.user.lmvault = ssh\ssh.user For ssh.user.lmvault, the property should be retrieved from the Vault. The value of this property “ssh\ssh.user” represents the path in the Vault where the credential is stored. |
Property suffixed with .lmvault : Multi-Safe | The multi-Safe approach allows fetching the values for lmvault properties from different safes within the Vault. The LM Vault property value should be specified in the format safe:path. For example, the property referring to the safe sshcreds and object path as ssh\ssh.user can be specified as: ssh.user.lmvault = sshcreds:ssh\ssh.user The Safe specified at the property level would override the value specified at the device level through the “vault.meta.safe” property. |
Property suffixed with .lmvault (for dual accounts) | The parameters such as safe, appid, and folder are used from the existing Vault metadata. The query is made against the VirtualUsername where the DualAccountStatus is active. An attribute name specified within <> is then parsed from the Vault API JSON response. The Dual Account API has the username and password within the same response, hence such a credential chain is formed at the Collector during the Collector startup. Note: The LM Vault property value must consist of the field/attribute along with the VirtualUsername. The field or attribute can be passed within <>. You can use a Dual Account with any property present in the respective template. The field/attribute will be parsed from the API response received from CyberArk. For example: jdbc.mysql.user.lmvault = VirtualUsername=Test<Username> jdbc.mysql.pass.lmvault = VirtualUsername=Test<Content> jdbc.mysql.port.lmvault = VirtualUsername=Test<Port> |
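Putting these pieces together, a device (or device group) might carry properties like the following; all values are illustrative, reuse the examples from the tables above, and the URL is a placeholder rather than a real endpoint:
vault.meta.type = CyberArk
vault.meta.url = <Vault URL containing the folder and application ID>
vault.meta.safe = sshcreds
ssh.user.lmvault = ssh\ssh.user
jdbc.mysql.user.lmvault = VirtualUsername=Test<Username>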

LogicMonitor Collector and CyberArk Vault Integration
- Multi-Safe support: Specify multiple Safes under a device to retrieve credentials from Multiple Safes.
- Multi-Vault support: You can use multiple Vaults for the devices under a Collector. Each of the devices can point to a single Vault.
- You can call Vault API over HTTP or HTTPS. However, if you call the Vault APIs over HTTPS, you must configure the RootCA cert. For more information, see RootCA cert.
- You must complete CyberArk authentication. For more information, see CyberArk Application Authentication Methods section.
Note: Device-specific cache is implemented at the Collector to avoid frequent requests to the Vault API.
- For CyberArk Dual Accounts, along with the existing parameters, the CyberArk API call requires two additional parameters: ‘DualAccountStatus’ and ‘VirtualUsername’. See the Dual Account Properties section on the CyberArk Dual Account page.
- For CyberArk Dual Accounts, the property can be specified at the device or device group level.
- You must set the Dual Account properties such as VirtualUsername at each property level on the device.
- When a Dual Account is enabled for the CyberArk Vault, the response can be parsed for various fields based on the account’s template.