You can schedule downtime (SDT) for your Collectors just as you can for your LogicMonitor devices. Creating SDTs for your Collector will suppress alert notifications for any Collector down alerts triggered during the SDT (these alerts will still be displayed in your LogicMonitor account). You may want to SDT your Collectors during maintenance windows or other periods of anticipated downtime. Note that if a Collector goes down while it is in SDT, it will still fail over all assigned devices to the backup Collector, if one is assigned, and you will still be notified of this failover.
We separate Collector SDTs from general host/group SDTs in order to prevent unintended alert suppression stemming from your SDT’d Collector.
You can add an SDT that applies to one Collector or multiple Collectors from Settings | Collectors.
Adding an SDT for multiple Collectors
Check the box to the left of the desired Collectors and then select ‘Put in SDT’:
You can schedule a one-time SDT, or schedule a daily, weekly or monthly SDT. Add a note to an SDT to provide context for other users in the account.
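In addition to the UI, Collector SDTs can be created programmatically through LogicMonitor's REST API, which is convenient for scripting recurring maintenance windows. The following is a minimal sketch only: it assumes a portal named "example", a Bearer API token stored in the LM_BEARER_TOKEN environment variable, Collector ID 42, and a two-hour window. The endpoint path and field names (type=CollectorSDT, sdtType, collectorId, and startDateTime/endDateTime in epoch milliseconds) should be verified against the REST API documentation for your portal version before use.

# Sketch: create a one-time, two-hour SDT for Collector 42 via the REST API.
# Portal name, token variable, Collector ID, and field names are assumptions to verify.
START=$(date +%s%3N)                  # SDT start: now, in epoch milliseconds (GNU date)
END=$((START + 2 * 60 * 60 * 1000))   # SDT end: two hours later
curl -s -X POST "https://example.logicmonitor.com/santaba/rest/sdt/sdts" \
  -H "Authorization: Bearer $LM_BEARER_TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-Version: 3" \
  -d "{\"type\": \"CollectorSDT\", \"sdtType\": \"oneTime\", \"collectorId\": 42, \"startDateTime\": $START, \"endDateTime\": $END, \"comment\": \"Scheduled maintenance window\"}"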
You can see a list of events for your LogicMonitor Collector from Settings | Collectors. Typically the displayed events include errors related to data collection tasks and the stopping/starting/restarting of the collector services. Looking through these events can be helpful for debugging issues with your Collector.
To view Collector events, navigate to Settings | Collectors and locate the desired Collector. Select Manage, and then select Collector Events from the Support dropdown menu in the Manage dialog:
When you need to restart a Collector, you can do so from within LogicMonitor or from the Collector host.
Note: You can only use LogicMonitor to restart the Collector while it is up and running. If the Collector is down or dead, you will need to restart it from the Collector host.
Restart from LogicMonitor
To remotely restart a Collector from within the LogicMonitor platform:
1. Navigate to Settings | Collectors.
2. In the table, find the Collector and click its Manage icon.
3. In the Manage Collector dialog, click Support and select “Restart Collector” from the menu.
Restart from Collector Host
On a Windows host, restart the following services using the Services control panel:
- LogicMonitor Collector
- LogicMonitor Collector Watchdog
On a Linux host, run the following commands to restart a Collector:
1. Stop LogicMonitor: /usr/local/logicmonitor/agent/bin/sbshutdown
2. Then start the watchdog service, which may be run from init.d or systemd.
- From init.d:
/etc/init.d/logicmonitor-watchdog start
- From systemd:
systemctl start logicmonitor-watchdog
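If you restart Linux Collectors often, the two steps above can be combined into a small shell script. This is only a sketch based on the commands shown above; it assumes the default /usr/local/logicmonitor install location and should be run as root (or with sudo). Use whichever start mechanism (init.d or systemd) applies to your installation.

# Sketch: restart a Linux Collector by stopping it and then starting the watchdog,
# which in turn restarts the Collector service. Assumes the default install path.
AGENT_BIN=/usr/local/logicmonitor/agent/bin
"$AGENT_BIN/sbshutdown"                         # stop the Collector and Watchdog services
if command -v systemctl >/dev/null 2>&1; then
  systemctl start logicmonitor-watchdog         # systemd-based installations
else
  /etc/init.d/logicmonitor-watchdog start       # init.d-based installations
fi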
Overview
Every Collector (that is not a member of an Auto-Balanced Collector Group) should have a failover Collector assigned to it. Failover Collectors eliminate the Collector as a single point of failure, ensuring monitoring continues should a Collector go down.
If a Collector is declared down (a Collector is declared down when LogicMonitor’s servers have not heard from it for three minutes), all devices monitored by the down Collector will automatically fail over to the failover Collector, assuming one is designated. Once the down Collector comes back online, failback can take place automatically (if automatic failback is enabled for the Collector) or manually.
Note: In addition to supporting the one-to-one Collector failover/failback method discussed throughout this support article, LogicMonitor also supports failover/failback within the context of Auto-Balanced Collector Groups (ABCGs). The Collectors in ABCGs share device load and support dynamic failover. For more information on ABCGs, see Auto-Balanced Collector Groups.
Designating a Failover Collector
Because the failover Collector will take over all monitoring for the down Collector, it’s important to ensure that the two Collectors (the original preferred Collector and the failover Collector) are well matched. In other words, the failover Collector must have the same data collection abilities and configurations as the original Collector. For example, both Collectors should be listed as exceptions for any firewalls restricting access to the monitored hosts; both Collectors must be permitted in any specific snmpd.conf, ntp.conf or other configuration settings that may limit monitoring access; and both Collectors must be on the same operating system (e.g. Linux or Windows).
For this reason, LogicMonitor recommends that you configure failover Collectors in pairs (i.e. Collector A fails over to Collector B and Collector B fails over to Collector A). As this recommendation implies, failover Collectors can also be assigned their own sets of monitoring tasks.
To designate a failover Collector:
- Install/identify a Collector residing on a different server that is capable of monitoring the same set of devices as the Collector for which you are designating a failover Collector.
- From the original Collector’s Manage Collector dialog (navigate to Settings | Collectors | Manage), select the failover Collector from the Failover Collector field’s dropdown menu.
- Once a failover Collector is designated, two options display:
- Resume using Preferred Collector when it becomes available again. If left checked, automatic failback to the Collector is enabled, as discussed in the Automatic Failback to Original Collector section of this support article. If unchecked, failback will need to be manually initiated, as discussed in the Manual Failback to Original Collector section of this support article.
- Exclude <resource name> from failover actions. If left checked (recommended), the Collector device is excluded from failover. Because Collectors monitor themselves, this is most likely desirable as it will preserve Collector metrics.
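If you manage a large number of Collectors, failover designation can also be scripted against the REST API instead of the Manage dialog steps above. The sketch below is illustrative only: the portal name and token are placeholders, and the field names (backupAgentId for the failover Collector's ID, enableFailBack for automatic failback) are assumptions drawn from the REST API's Collector resource that you should confirm in the API documentation for your portal.

# Sketch: designate Collector 7 as the failover Collector for Collector 3 and enable automatic failback.
# Field names (backupAgentId, enableFailBack) are assumptions; verify before use.
curl -s -X PATCH "https://example.logicmonitor.com/santaba/rest/setting/collector/collectors/3" \
  -H "Authorization: Bearer $LM_BEARER_TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-Version: 3" \
  -d '{"backupAgentId": 7, "enableFailBack": true}'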
Collector Failover
If a Collector is declared down, all devices monitored by the down Collector will automatically fail over to the failover Collector, assuming one is designated.
Note: A Collector is declared down and thus enters failover when LogicMonitor’s servers have not heard from it for three minutes. (The time window for initiating failover is governed by multiple, complex processes, and there may be slight differences in timing for different cases.)
Note: You will be notified of a Collector failover event even if the Collector is in SDT.
Collector Failback
Once the down Collector comes back online, failback can take place automatically (if automatic failback is enabled for the Collector) or manually.
Automatic Failback to Original Collector
To enable automatic device failback to the original Collector, navigate to the Collector’s Manage Collector dialog (Settings | Collectors | Manage) and check the Resume using Preferred Collector when it becomes available again option. As discussed in the Designating a Failover Collector section of this support article, this option is only available if a failover Collector is designated in the Failover Collector field.
Note: LogicMonitor will wait eight minutes after a Collector has resumed functioning to initiate automatic failback to it. (The time window for initiating failback is governed by multiple, complex processes, and there may be slight differences in timing for different cases.)
Manual Failback to Original Collector
If you choose not to enable automatic failback for a Collector, then you’ll need to manually reassign devices back to the original Collector once it is back online. This can be done by navigating to Settings | Collectors | Resources from either the original Collector that went down or the failover Collector.
When manually failing back from the original Collector’s resources list, you have the option to assign all devices back to the original Collector, or permanently assign them to the failover Collector.
When manually failing back from the failover Collector, you have more flexibility: you can fail back all or a subset of devices to the original preferred Collector, or assign all or a subset of devices to any new preferred Collector. Assigning a Collector’s devices to a new preferred Collector can be done at any time; it is not limited to the aftermath of a failover event.
Overview
You can use the Collector Debug Facility to remotely run debug commands on your Collector. This is helpful for troubleshooting issues with data collection and is typically used on the advice of LogicMonitor support.
Note: The history of Collector debug commands is preserved in the Audit Log.
Accessing the Collector Debug Facility
There are two places from which you can launch the Collector Debug Facility:
- From a Collector’s settings. As shown next, select Settings | Collectors, click the Settings icon for the Collector you would like to debug and, from the Support button’s dropdown, select the “Run Debug” command.
- From the Device Tree. Open the DataSource or DataSource instance that you would like to debug and from the Raw Data tab, click the Debug button.
Debug Command Syntax
The Collector Debug Facility launches in a new browser tab. A list of built-in debug commands and their descriptions display to assist with troubleshooting.

All debug commands should be preceded by a ‘!’. If you need syntax for a particular command, enter help !<commandname>, as shown next.

The following table highlights some of the most frequently used debug commands. For usage details (e.g. optional and mandatory arguments, parameters, etc.) and examples, enter help !<commandname> in the Collector Debug Facility.
Command | Description | Example |
!account | Displays the account information used by sbwinproxy. | !account |
!adlist | Displays a list of the Collector’s Active Discovery tasks. A taskID is returned for each task. | !adlist type=get !adlist method=ad_snmp |
!adetail <taskId> | Displays detailed information about a specific Active Discovery task, where taskId can be found with !adlist. Note that the “taskId” reference in the command specification will be labeled as “id” in the output of the !adlist command. | !adetail 142 |
!checkcredential | Enables, disables, and checks credential usage on the specified host to determine the source of unexpected login actions. | !checkcredential proto=snmp user=public !checkcredential proto=snmp user=public usage=AP |
!hostproperty | Adds, updates, or deletes system properties for a host. | !hostproperty action=del host=localhost property=virtualization !hostproperty action=add host=localhost property=ips value=127.0.0.1,192.168.1.1 |
!http | Sends an HTTP request (with optional username and password) and displays the response. | !http http://www.google.com/index.html |
!jdbc | Executes a SQL query against the specified host. | !jdbc 'url=jdbc:mysql://productrds.chqwqvn285rf.us-west-2.rds.amazonaws.com:3306 username=LogicMonitor password=MyPassword' select Name, ID from productDB.Employees |
!logsurf | Displays log file entries that are of the specified debug level. If included, logs will only be displayed for the specified seq and taskId if they are in the specified file, and only n number of logs will be displayed. taskId and seq can be found using !tlist, where taskId is the id of a data collection task and seq is the number of times the collector remembers having done the task. | !logsurf level=trace ../logs/wrapper.log taskid=833 seq=75 |
!ping | Pings the specified host. | !ping 10.36.11.240 !ping type=proxy 10.36.11.240 |
!restart | Restarts the specified Collector service | !restart watchdog !restart collector |
!shealthcheck func=<function> collector=<collector_id> | The !shealthcheck commands help you determine the health of a Collector: memory consumed, number of scheduled tasks, and more. Based on the result, you can debug and resolve issues. func=trigger triggers a healthcheck task for the specified Collector; Santaba then schedules the task. To view the result, run !shealthcheck again with func=detail or func=show. func=show returns a summary of the scheduled task, including the number of total, finished, and skipped healthcheck tasks (for example, "Latest run has finished. Scheduled checks (total=34, finished=27, skipped=7)"). If an issue is detected, it is summarized in the result (for example, "The disk has a low free space 728.57 MBytes."). Run !shealthcheck func=detail collector=<collector_id> for the details of the issue or for the full health status of a Collector. func=detail provides the status of all the scripts in detail (for example, "Collector exported jars and executables are not modified." "The Collector has 177 instances."). This command displays the result fetched by the most recent func=trigger run; for example, if you run func=trigger at 9:00 AM and func=detail at 11:00 AM, the health status fetched at 9:00 AM is displayed. | !shealthcheck func=show collector=123 !shealthcheck func=trigger collector=123 !shealthcheck func=detail collector=123 |
!tdetail <taskId> | Displays detailed information about a specific data collection task, where taskId can be found with !tlist. | !tdetail 12323209239991 |
!tlist | Lists the Collector’s data collection tasks, including DataSources, ConfigSources, and EventSources. A taskID is returned for each task. | !tlist c=wmi !tlist summary=collector !tlist summary=true lasttime=10 columns=5 |
!uptime | Displays the uptime of the Collector. | !uptime |
Debug Example: Troubleshooting Data Collection
One of the most common uses of the Collector Debug Facility is troubleshooting data collection for a particular DataSource or DataSource instance. Maybe you just wrote a script DataSource and are getting NaN values, or perhaps one instance out of ten is not reporting data. You can typically use the following steps to identify the issue (a consolidated example follows these steps):
- Identify the DataSource or DataSource instance. Find the name of the DataSource in the DataSource definition (this is NOT the same as the display name).
- Use the !tlist command in the Collector Debug Facility of the collector associated with the device the DataSource applies to. You can narrow down the results by using the h=<hostname> and c=<collection type> options.
- Identify the task for the desired DataSource. You’ll see the taskid, followed by an execution count, the collector type, a status, the device name, the DataSource name, an ival (the amount of time it took to execute the task the last time it ran), and finally a note about the execution.
- Use the !tdetail command with the taskid as the argument.
- If you need more information to diagnose the problem, increase the log level for the appropriate collection method of the Collector, as discussed in Collector Logging.
- Wait a polling cycle (or more) and then use the !logsurf command with taskid as an argument and ../logs/wrapper.log as the filename (if you know the latest execution count, you can also limit the results to one operation by including seq=n). You can also include a number argument to limit the results to a certain number of logs. You should see the log entries only for the task whose id is included in the command.
- If you still haven’t identified the issue, contact support.
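Putting these steps together, a typical session in the Collector Debug Facility might look like the following. The hostname, collection type, taskid, and seq values are illustrative only; substitute the values returned in your own output. The first command lists collection tasks for the host (narrowed to WMI collection), the second shows detail for the task of interest, and the third pulls trace-level log entries for that task after the log level has been raised.

!tlist h=prod-db01 c=wmi
!tdetail 833
!logsurf level=trace ../logs/wrapper.log taskid=833 seq=75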
From Settings | Collectors you can control how much information is logged by your collector and how long these log files are retained.
Adjusting log levels
You may want to adjust log levels to increase how much information is logged to debug an issue, or to decrease how much information is logged to save disk space. Select the Logs icon for the desired collector (from Settings | Collectors) and then select manage to see the log levels on a per component basis for that collector:
The log level for each collector component controls what information is logged for that component. Available log levels are:
- trace – this log level is the most verbose, and will log every action of the collector in full detail. Note that this can use a significant amount of disk space if your collector is actively monitoring a large number of devices, and as such is typically only recommended for debugging purposes.
- debug – detailed information about collector tasks will be logged (not as much information as trace log level). The debug log level can make it easier to identify an issue and track down the root cause.
- info – this is the default log level, and will log basic information about collector tasks.
- warn – information will only be logged when something isn’t quite right, but it may not be causing an issue yet.
- error – information will only be logged when something is wrong.
- disable – no information will be logged.
As an example, you might have written a script datasource for which your collector is returning no data, and you can’t figure out why. You could increase the log level for the collector.script component to debug or trace and then look at the logs (either using the collector debug facility or on the collector machine itself) to troubleshoot the issue.
Changing log file retention
Collector log files are rotated based on size, not date. By default, there are 3 log files of 64 MB each. If you’d like to change these values, you can do so in the wrapper.conf file (in the conf directory where the collector is installed). You can edit wrapper.conf on the collector machine itself, OR you can edit the file directly from your LogicMonitor account UI: navigate to Settings | Collectors, select manage for the desired collector, select Collector Configuration from the dropdown menu, and then select the Wrapper Config tab. Locate the Wrapper Logging Properties and change these values (make sure to override the wrapper config before saving and restarting):
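For reference, the rotation settings live in the Wrapper Logging Properties section of wrapper.conf and look similar to the following. This is a sketch assuming the standard Java Service Wrapper property names; confirm the exact property names and current values in your own wrapper.conf before editing.

# Wrapper Logging Properties (illustrative values)
wrapper.logfile.maxsize=64m    # maximum size of each log file before it is rotated
wrapper.logfile.maxfiles=3     # number of rotated log files to retain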
Sending logs to LogicMonitor
From the Manage dialog you can send your logs to LogicMonitor support. This might be useful if you are collaborating with our support team and would like them to be able to look through your collector log files. Select the manage gear icon for the desired collector and then select ‘Send logs to LogicMonitor’:
To avoid downtime when moving your collector to another machine, we recommend that you install a new collector on the new machine and then transfer the monitored devices from the old collector to the new collector.
Once you’ve installed a collector on the new machine, you can transfer monitored devices to the new collector from Settings | Collectors. Simply locate the collector you’d like to transfer devices from, select the Devices (#) icon, select all devices (or a portion, if you don’t want to transfer all devices) and then select ‘Change Preferred Collector’:
Note: You should ensure that the new collector will have the same privileges as the collector it is replacing. For example:
- Are devices and networking gear configured to allow SNMP access from the new collector device, or were they restricted to the old collector device’s IP address?
- Are database permissions set to allow the new collector device’s IP to query them with sufficient access?
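If you are transferring a large number of devices, the reassignment can also be scripted against the REST API rather than done through the UI steps above. The following is a sketch only: the portal name, token, device ID, and Collector ID are placeholders, and the preferredCollectorId field on the device resource should be confirmed against the REST API documentation for your portal.

# Sketch: reassign device 1001 to the new Collector (ID 25) via the REST API.
# The preferredCollectorId field name is an assumption to verify; loop over device IDs for bulk transfers.
curl -s -X PATCH "https://example.logicmonitor.com/santaba/rest/device/devices/1001" \
  -H "Authorization: Bearer $LM_BEARER_TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-Version: 3" \
  -d '{"preferredCollectorId": 25}'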
You can use the Collector Update Scheduler to perform a one-time update to your LogicMonitor Collectors or to automate receipt of the most recent Collector updates at desired times.
Collector Release Tracks
Collector releases are categorized into three release tracks:
- Early Access (EA) – EA releases are often the first to debut new functionality. We sometimes release a major feature in batches through EA releases, so EA is not recommended for your entire infrastructure. EA releases occur 9-10 times per year. If there are major bug fixes, we patch the EA; this is referred to as an EA patch release. A stable EA version is designated as an Optional General Release (GD).
- Optional General Releases (GD) – GD releases are stable collector updates that may include new features. It is not mandatory to update collectors with GD releases. They occur twice a year. If there are major bug fixes, we patch the GD; this is referred to as a GD patch release. A stable GD version is designated as a Required General Release (MGD).
- Required General Releases (MGD) – An MGD is released once a year. When we designate a GD as an MGD, we schedule and announce a date to auto-upgrade collectors to the MGD version. To let customers upgrade collectors at their convenience, we send a notification at least 30 days before the scheduled auto-upgrade date. On the auto-upgrade date, we upgrade only those collectors that are still below the MGD version. Thus, going forward, the MGD becomes the minimum required version. If there are major bug fixes, we patch the MGD; this is referred to as an MGD patch release.
Release Version Conventions
You can differentiate between an EA collector and a GD collector by observing the release version number.
- All EA versions start with NN.100, where NN is the major version number. In subsequent releases, we increment the decimal. For example, EA 32.100, EA 32.200, and so on. A patch to an EA is versioned accordingly; for example, a patch to EA 33.100 is versioned as EA 33.101.
- All GD versions start with NN.000, where NN is the major version number. A GD release is followed by EA releases (usually 4-5 EA releases), so in subsequent releases we increment the decimal. For example, after GD 32.000 is released, we will release EA 32.100, EA 32.200, EA 32.300, and so on. In the case of a patch to a GD, the decimal number will always be below “.100”; for example, a patch to GD 32.000 is versioned as GD 32.001.
- All MGD versions take the version number of the GD they are based on. For example, MGD 31.004 is based on the previously released GD 31.004.
To understand collector releases and their versions, refer to the following example. Note that the version numbers are used for representation only.
Release Version | Details |
EA 31.400 | EA release |
GD 32.000 | GD release |
EA 32.100 | EA release |
EA 32.200 | EA release |
GD 32.001 | Patch to GD 32.000 |
GD 32.002 | Patch to GD 32.001 |
EA 32.101 | Patch to EA 32.100 |
EA 32.300 | EA release |
MGD 32.002 | GD 32.002 is designated as the MGD |
EA 32.400 | EA release |
GD 33.000 | GD release |
Collector Releases
For a summary of the key features included in each Collector version, see the following table. For a detailed description of what is included in each Collector release, view its dedicated version page.
Active Collector Releases
Version | Type | Release Date | JRE Version | Highlights |
38.100 | Early Access | June 24, 2025 | jre_amazon_21.0.7.6.1 |
38.000 | Optional General Release | June 11, 2025 | jre_amazon_11.0.27.6.1 |
37.003 | Optional General Release | June 10, 2025 | jre_amazon_11.0.25.9.1 |
37.300 | Early Access | April 24, 2025 | jre_amazon_11.0.26.4.1 |
37.200 | Early Access | March 13, 2025 | jre_amazon_11.0.26.4.1 |
37.002 | Optional General Release | March 10, 2025 | jre_amazon_11.0.25.9.1 |
37.001 | Optional General Release | February 19, 2025 | jre_amazon_11.0.25.9.1 |
37.100 | Early Access | February 04, 2025 | jre_amazon_11.0.25.9.1 |
37.000 | Optional General Release | January 27, 2025 | jre_amazon_11.0.19.7.1 |
36.500 | Early Access | December 19, 2024 | jre_amazon_11.0.24.8.1 |
36.400 | Early Access | November 06, 2024 | jre_amazon_11.0.24.8.1 |
36.300 | Early Access | October 03, 2024 | jre_amazon_11.0.24.8.1 |
36.001 | Optional General Release | September 04, 2024 | jre_amazon_11.0.22.7.1 |
35.003 | Required General Release | August 12, 2024 | jre_amazon_11.0.19.7.1 |
36.200 | Early Access | August 07, 2024 | jre_amazon_11.0.23.9.1 |
36.100 | Early Access | July 01, 2024 | jre_amazon_11.0.23.9.1 |
36.000 | Optional General Release | June 05, 2024 | jre_amazon_11.0.22.7.1 |
Archived Collector Releases
Version | Type | Release Date | Highlights |
35.003 | Optional General Release | July 02, 2024 | jre_amazon_11.0.19.7.1 |
35.002 | Optional General Release | June 03, 2024 | jre_amazon_11.0.19.7.1 |
35.401 | Early Access | May 22, 2024 | jre_amazon_11.0.22.7.1 |
35.400 | Early Access | April 29, 2024 | jre_amazon_11.0.22.7.1 |
35.301 | Early Access | March 12, 2024 | jre_amazon_11.0.21.9.1 |
35.300 | Early Access | March 05, 2024 | jre_amazon_11.0.21.9.1 |
35.001 | Optional General Release | February 13, 2024 | jre_amazon_11.0.19.7.1 |
35.200 | Early Access | January 25, 2024 | jre_amazon_11.0.21.9.1 |
35.100 | Early Access | December 19, 2023 | jre_amazon_11.0.21.9.1 |
35.000 | Optional General Release | December 18, 2023 | jre_amazon_11.0.19.7.1 |
34.500 | Early Access | November 01, 2023 | jre_amazon_11.0.19.7.1 |
34.004 | Optional General Release | October 16, 2023 | jre_amazon_11.0.18.10.1 |
33.007 | Required General Release | October 06, 2023 | jre_amazon_11.0.16.8.1 |
34.400 | Early Access | October 05, 2023 | jre_amazon_11.0.19.7.1 |
34.003 | Optional General Release | September 18, 2023 | jre_amazon_11.0.18.10.1 |
34.300 | Early Access | August 07, 2023 | jre_amazon_11.0.19.7.1 |
34.002 | Optional General Release | June 29, 2023 | jre_amazon_11.0.18.10.1 |
34.200 | Early Access | June 29, 2023 | jre_amazon_11.0.19.7.1 |
34.001 | Optional General Release | June 06, 2023 | jre_amazon_11.0.18.10.1 |
34.100 | Early Access | June 01, 2023 | jre_amazon_11.0.18.10.1 |
33.401 | Early Access | May 23, 2023 | jre_amazon_11.0.18.10.1 |
34.000 | Optional General Release | May 17, 2023 | jre_amazon_11.0.18.10.1 |
33.400 | Early Access | April 06, 2023 | jre_amazon_11.0.18.10.1 |
33.301 | Early Access | March 06, 2023 | jre_amazon_11.0.17.8.1 |
33.300 | Early Access | February 22, 2023 | jre_amazon_11.0.17.8.1 |
33.200 | Early Access | January 16, 2023 | jre_amazon_11.0.17.8.1 |
33.101 | Early Access | November 24, 2022 | jre_amazon_11.0.16.8.1 |
33.100 | Early Access | November 15, 2022 | jre_amazon_11.0.16.8.1 |
33.007 | Optional General Release | September 04, 2023 |
33.006 | Optional General Release | July 03, 2023 |
33.005 | Optional General Release | June 06, 2023 |
33.004 | Optional General Release | May 02, 2023 |
33.003 | Optional General Release | April 12, 2023 |
33.002 | Optional General Release | February 20, 2023 |
33.001 | Optional General Release | November 23, 2022 |
33.000 | Optional General Release | November 10, 2022 |
32.004 | Optional General Release | November 01, 2022 |
32.400 | Early Access | September 22, 2022 |
32.003 | Optional General Release | September 20, 2022 |
31.004 | Required General Release | August 30, 2022 |
32.002 | Optional General Release | August 29, 2022 |
32.300 | Early Access | August 9, 2022 |
32.001 | Optional General Release | June 29, 2022 |
31.004 | Optional General Release | June 29, 2022 |
32.200 | Early Access | June 27, 2022 |
32.100 | Early Access | June 03, 2022 |
32.000 | Optional General Release | June 03, 2022 |
31.200 | Early Access | March 31, 2022 |
31.100 | Early Access | February 17, 2022 |
30.003 | Optional General Release | February 17, 2022 |
31.003 | Optional General Release | January 17, 2022 |
31.002 | Optional General Release | December 21, 2021 |
30.002 | Required General Release | December 17, 2021 |
31.001 | Optional General Release | December 16, 2021 |
31.000 | Optional General Release | November 26, 2021 |
30.104 | Early Access | October 19, 2021 |
30.001 | Optional General Release | August 23, 2021 |
30.102 | Early Access | July 20, 2021 |
30.101 | Early Access | June 9, 2021 |
30.000 | Optional General Release | April 29, 2021 | Incorporates all enhancements and fixes found in GD 29.003, as well as EA 29.xxx (29.101, 29.102, 29.104, 29.105, 29.106, 29.107, 29.108, and 29.109). See GD Collector – 30.000 for a complete list of enhancements and fixes. |
30.100 | Early Access | April 20, 2021 |
29.109 | Early Access | March 30, 2021 | Fixed an issue in EA 29.107 where upgrading failed if the Collector is running as root. See EA Collector – 29.109 for a complete list of enhancements and fixes. |
29.108 | Early Access | March 11, 2021 | Added integration support for external credential management using CyberArk Vault. See EA Collector – 29.108 for a complete list of enhancements and fixes. |
29.107 | Early Access | March 9, 2021 | See EA Collector – 29.107 for a complete list of enhancements and fixes. |
29.106 | Early Access | February 10, 2021 |
29.105 | Early Access | December 9, 2020 |
29.104 | Early Access | October 10, 2020 |
29.003 | Optional General Release | September 29, 2020 |
29.102 | Early Access | September 15, 2020 |
29.002 | Optional General Release | August 17, 2020 | Fixes an issue causing web page collection tasks to get held up and consume CPU. |
29.101 | Early Access | August 6, 2020 |
29.001 | Optional General Release | July 15, 2020 | Collector version GD 29.001 fixes an issue in GD 29.000 that prevented the successful upgrade to GD 29.000 from an EA 28 version (28.400 – 28.607) of a Linux Collector running as a non-root user. |
29.000 | Optional General Release | July 6, 2020 | Collector version GD 29.000 incorporates all enhancements and fixes found in M/GD 28.005, as well as EA 28.xxx (28.100, 28.200, 28.300, 28.400, 28.500, 28.501, 28.600, 28.601, 28.602, 28.603, 28.604, 28.605, 28.606, 28.607). Visit the individual release notes pages for more information. |
29.100 | Early Access | June 24, 2020 |
28.607 | Early Access | May 27, 2020 | Enhancements: Updated the Collector JRE to Amazon Corretto version 11.0.7.10 (April 2020 patch). Fixes: Fixed an issue where AES-256 SNMPv3 was not working with some Cisco devices. |
28.606 | Early Access | May 7, 2020 |
28.005 | Required General Release | May 5, 2020 |
28.605 | Early Access | March 19, 2020 |
28.604 | Early Access | January 8, 2020 |
28.004 | Optional General Release | October 25, 2019 |
28.003 | Optional General Release | October 15, 2019 |
28.603 | Early Access | September 5, 2019 |
28.602 | Early Access | August 8, 2019 |
28.601 | Early Access | July 24, 2019 |
28.600 | Early Access | July 5, 2019 |
28.501 | Early Access | June 21, 2019 |
28.500 | Early Access | June 14, 2019 |
28.400 | Early Access | May 23, 2019 |
28.300 | Early Access | May 7, 2019 |
28.200 | Early Access | April 10, 2019 |
28.002 | Optional General Release | April 2, 2019 |
28.001 | Optional General Release | March 21, 2019 |
28.100 | Early Access | March 20, 2019 |
28.000 | Optional General Release | February 27, 2019 | GD Collector 28.000 will be available February 27th, 2019. This version includes everything in GD 27.001, GD 27.002, GD 27.003, GD 27.004, GD 27.005, as well as EA 27.100, EA 27.200, EA 27.300, EA 27.400, EA 27.500, EA 27.600, EA 27.700, EA 27.750, EA 27.751, EA 27.800, EA 27.850, EA 27.900. For more detailed information on what’s included, see the individual EA release notes. |
27.900 | Early Access | February 21, 2019 |
27.850 | Early Access | January 23, 2019 |
27.800 | Early Access | January 3, 2019 |
27.751 | Early Access | December 25, 2018 |
27.005 | Required General Release | December 5, 2018 |
27.750 | Early Access | November 22, 2018 |
27.700 | Early Access | November 2, 2018 |
27.004 | Optional General Release | November 1, 2018 |
27.600 | Early Access | October 15, 2018 |
27.500 | Early Access | September 13, 2018 | Includes all fixes in 27.003. |
27.003 | Optional General Release | September 13, 2018 |
27.400 | Early Access | August 17, 2018 |
27.300 | Early Access | July 25, 2018 |
27.002 | Optional General Release | July 25, 2018 |
27.200 | Early Access | July 10, 2018 |
27.001 | Optional General Release | July 10, 2018 |
27.100 | Early Access | June 13, 2018 |
27.000 | Optional General Release | June 12, 2018 | GD Collector 27.000 will be available June 12, 2018. This version includes everything in GD 26.001, as well as EA 26.100, EA 26.200, EA 26.201, EA 26.300, EA 26.400, EA 26.500, EA 26.600 and 26.601. |
26.601 | Early Access | May 29, 2018 | We highly encourage everyone using EA Collectors 26.100 – 26.600 to upgrade to this version. This version will become our next Optional General Release Collector. |
26.600 | Early Access | May 17, 2018 |
26.500 | Early Access | April 12, 2018 |
26.400 | Early Access | March 22, 2018 |
26.300 | Early Access | March 4, 2018 |
26.201 | Early Access | January 29, 2018 |
26.200 | Early Access | January 26, 2018 |
26.001 | Optional General Release | January 23, 2018 |
26.100 | Early Access | January 16, 2018 |
26.0 | Optional General Release | January 4, 2018 |
25.400 | Early Access | December 18, 2017 |
25.300 | Early Access | November 9, 2017 |
25.001 | Optional General Release | October 25, 2017 |
25.200 | Early Access | October 19, 2017 |
25.0 | Optional General Release | September 8, 2017 |
24.300 | Early Access | August 25, 2017 |
24.126 | Early Access | July 14, 2017 |
24.002 | Required General Release | June 29, 2017 |
Overview
When you delete a Collector from your LogicMonitor account, the Collector and Watchdog services should stop and the Collector should uninstall itself. If your Collector does not correctly uninstall itself, you can manually stop the Collector and Watchdog services and uninstall the Collector from the device.
Removing the Collector from Your Account
- Re-assign the devices being monitored by that Collector to a different Collector – you can re-assign devices in bulk by selecting the devices icon for the Collector you’d like to remove.
- Select Delete for the Collector:
Manually Removing the Collector from a Host Resource
In some cases you might need to manually stop the Collector services and then remove/uninstall the Collector from your host resource.
Windows
Navigate to the services control panel for your Windows machine and stop the ‘LogicMonitor Collector’ and ‘LogicMonitor Collector Watchdog’ services. You can then uninstall the Collector using the standard Windows ‘Add or remove programs’ controls.
Linux
Navigate to [LogicMonitor Collector Directory]/agent/bin and execute the sbshutdown script to shut down both the Collector and Collector Watchdog services. For example:
# cd /usr/local/logicmonitor/agent/bin
# ./sbshutdown
Then you can uninstall the Collector by calling:
# ./uninstall.sh
Alternatively, you could do a recursive removal of the logicmonitor Collector directory and all its contents (there are symbolic links in /etc/init.d for logicmonitor.collector and logicmonitor.watchdog, and those should be removed to ensure the services do not keep running in memory). For example:
# rm -rf /usr/local/logicmonitor
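After uninstalling or removing the directory, you can verify that nothing was left behind. A quick check along these lines (run as root) confirms that no Collector processes remain, that the init.d entries mentioned above are gone, and that the install directory has been removed:

pgrep -f logicmonitor || echo "no Collector processes running"
ls /etc/init.d/ | grep -i logicmonitor || echo "no init.d entries remain"
ls -d /usr/local/logicmonitor 2>/dev/null || echo "install directory removed"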
Overview
The LogicMonitor Collector is the heart of your monitoring system. As such, it’s important that you monitor your Collectors to ensure that performance is keeping up with data collection load. Equally important is ensuring the least disruption possible when a Collector does go down. This includes making sure timely notifications are delivered to the appropriate recipient(s).
As best practice, LogicMonitor recommends that you (1) set up monitoring for your Collectors and (2) configure notification routing for Collector down alerts.
Adding the Collector Host into Monitoring
If it isn’t already part of your monitoring operations, add the device on which the Collector is installed into monitoring. This will allow you to keep tabs on CPU utilization, disk space and other metrics important to smooth Collector operation. For more information on adding devices into monitoring, see Adding Devices.
Enabling Collector DataSources on the Host
LogicMonitor provides a series of built-in Collector DataSources that provide insight into a Collector’s operations, performance, and workload. In most cases, these Collector DataSources will be automatically applied to the Collector device when you add it into monitoring. You can verify this is the case by expanding the device in the Resources tree and looking for the “Collector” DataSource group.

If the Collector DataSources were not automatically applied to the device, you can do so manually by adding the value of “collector” to the device’s system.categories property. For more information on setting properties, see Resource and Instance Properties.
LogicMonitor will now index this device as the host of a Collector, and automatically apply the Collector DataSources to it. Once Collector DataSources are in place, you can configure alerts to warn you when Collector performance is deficient.
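If you prefer to set the property through the REST API rather than the UI, the following sketch illustrates the idea. It assumes a Bearer API token and that system.categories can be written as a custom property on the device resource; the customProperties layout and the opType=add query parameter (intended to append rather than replace existing properties) are assumptions to verify against the REST API documentation for your portal.

# Sketch: add "collector" to system.categories on device 1001 via the REST API.
# The customProperties layout and opType parameter are assumptions; verify before use.
curl -s -X PATCH "https://example.logicmonitor.com/santaba/rest/device/devices/1001?opType=add" \
  -H "Authorization: Bearer $LM_BEARER_TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-Version: 3" \
  -d '{"customProperties": [{"name": "system.categories", "value": "collector"}]}'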
Note: Collector DataSources only monitor the device’s preferred Collector (as established in the device’s configurations). The preferred Collector should be the Collector that is installed on that device. Otherwise, the Collector’s metrics will display on the wrong host. For example, if you attempt to monitor Collector A using Collector B (installed on a separate host), then Collector B’s metrics will display in lieu of Collector A’s on Collector A’s host.
Collector DataSources
Migration from Legacy DataSources
In March of 2019, LogicMonitor released a new set of Collector DataSources. If you are currently monitoring Collector hosts using the legacy DataSources, you will not experience any data loss upon importing the newer DataSources in this package. This is because DataSource names have been changed to eliminate module overwriting.
However, you will collect duplicate data and receive duplicate alerts for as long as both sets of DataSources are active. For this reason, we recommend that you disable the legacy Collector DataSources. The legacy DataSources are any Collector DataSources whose names are NOT prefixed with “LogicMonitor_Collector”. If prefixed with “LogicMonitor_Collector”, it is a current Collector DataSource.
When a DataSource is disabled, it stops querying the host and generating alerts, but maintains all historical data. At some point in time, you may want to delete the legacy DataSources altogether, but consider this move carefully as all historical data will be lost upon deletion. For more information on disabling DataSources, see Disabling Monitoring for a DataSource or Instance.
DataSource Example Highlight: Collector Data Collecting Tasks
One of the Collector DataSources applied is the “Collector Data Collecting Tasks” DataSource. It monitors statistics for collection times, execution time, success/fail rates, and number of active collection tasks. One of the overview graphs available for this DataSource features the top 10 tasks contributing to your Collector’s load, which is extremely useful for identifying the source of CPU or memory usage.

Routing Collector Down Alerts
A Collector is declared down when LogicMonitor’s servers have not heard from it for three minutes. Even though you will likely have a backup Collector in place for when a Collector goes down, it’s never an ideal situation for a Collector to be unexpectedly offline. To minimize downtime and mitigate the risk of interrupted monitoring, ensure that “Collector down” alerts will actively be delivered (as email, text, and so on) to the appropriate individuals in your organization. (These alerts will also be displayed in the LogicMonitor interface.)
Important: When a Collector is declared down, alerts that were triggered by the devices monitored by that Collector before the Collector went down will remain active but new alerts will not be generated while the Collector is down. However, devices that do not fail over to another Collector will ignore the alert generation suppression and may generate Host Status alerts while the Collector status is down.
To route Collector down alerts, open the Collector’s configurations (Settings | Collectors | Manage) and specify the following:
- Collector Down Escalation Chain. As discussed in Escalation Chains, an escalation chain specifies what people, or groups of people, should be notified of the alert, how they should be notified, and in what order.
- Resend interval. From the Send Collector Down Notifications field, set the resend interval for Collector down notifications. You can indicate no resend (i.e. notification is only sent once) or you can indicate the amount of time that should pass before the Collector down alert notifications are escalated to the next stage in the escalation chain. If the alert has reached the final stage or there is only one stage specified in the escalation chain, then this interval, when set, determines how often the alert notification will be resent until it is acknowledged or cleared.
Note: By default, an “Alert clear” notification is automatically delivered to all escalation chain recipients when a downed Collector comes back online. You can override this default by expanding the Collector details and unchecking the Alert on Clear option, shown next. However, if the Collector’s designated escalation chain routes alert notifications to an LM Integration, we recommend that you do not disable this option. For more information, see Alert Rules and Escalation Chains.
