- Terminology and Syntax
In a typical multi-instance datasource, the LogicMonitor data ingest process periodically runs Active Discovery to discover the objects we can instrument, and then, at the specified polling interval, executes a data collection task for each instance object, one after another.
While this approach is reasonable for collection types such as SNMP and WMI, it is sometimes less practical for devices that expose data for all instances in a single query.
For example, on a device that speaks SNMP we can query it for instances via an SNMP "walk" and then do a successive SNMP "get" against each of those instances. But for many devices that expose metrics via an API or CLI, there's no ability to get data for a single instance at a time: each time we query the device, we get data across all of its instances. Because each data collection task is atomic, our standard model dictates that we'd have to run this exact same query over and over, once per instance, and then throw out the data for everything except the instance on which that collection task is based.
The BATCHSCRIPT collection mechanism solves this problem by allowing for the collection of all datasource instances within a single data collection task. Because data collection is much more efficient, this approach decreases the load on both the Collector and the target device, especially for devices with many instances.
Using BATCHSCRIPT vs. SCRIPT Collection
In our standard Script Data Collection mechanism, data values are typically output from the script as key/value pairs along the lines of:
key1: value1
key2: value2
key3: value3
In SCRIPT mode, the same script runs once per instance and uses the provided instance id (wildvalue) as the "foreign key" to identify the instance associated with a particular data collection task. We then create datapoints with the key/value post-processor, using "key1", "key2", and "key3" as the keys.
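As a minimal sketch of SCRIPT-mode collection (the key names and values below are placeholders, not from any real datasource), a per-instance Groovy script might look like:

```groovy
// SCRIPT mode: this script runs once per instance. LogicMonitor substitutes
// the instance id into the ##WILDVALUE## token before execution.
def wildvalue = "##WILDVALUE##"   // becomes e.g. "psu0" at runtime

// ...query the device for this one instance here (omitted)...
// Placeholder values standing in for the queried data:
def data = [key1: 10, key2: 20, key3: 30]

// Emit plain key/value pairs; the key/value post-processor matches on the keys.
data.each { k, v -> println "${k}: ${v}" }
return 0
```

Note that the per-instance query step is exactly what BATCHSCRIPT eliminates: in SCRIPT mode this whole script, including the device query, runs once for every instance.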
In BatchScript mode the key/value pairs need to be output as:
instance1.key1: value1
instance1.key2: value2
instance1.key3: value3
instance2.key1: value1
instance2.key2: value2
instance2.key3: value3
where the instance ids specified as the prefix correspond to the instance ids provided in Active Discovery. Again, you'd create three datapoints, each using the key/value post-processor, but with keys specified as ##WILDVALUE##.key1, ##WILDVALUE##.key2, and ##WILDVALUE##.key3.
Note that the key names, both in script output and in the datapoint definitions, must start with ##WILDVALUE##. Keys of the format key1.##WILDVALUE## are not supported.
Example – Using Groovy BatchScript for Datasource Data Collection
Consider the SCRIPT datasource presented in Groovy Data Collection Example, in which we query an HTTP API to instrument the power supplies in a Palo Alto firewall. If we were to rewrite that script to do BATCHSCRIPT collection, it would look like this:
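The original script is not reproduced here, but a minimal sketch of what the BATCHSCRIPT version could look like follows. The API path, XML element names, and the device property holding the API key are assumptions for illustration, not the actual Palo Alto API contract:

```groovy
// Hypothetical BATCHSCRIPT sketch: one query returns environmental data for
// every power supply at once. URL path, XML structure, and the
// "paloalto.apikey.pass" property name are assumed for illustration.
import groovy.util.XmlSlurper

def host   = hostProps.get("system.hostname")
def apiKey = hostProps.get("paloalto.apikey.pass")

// Single request covering all instances on the device.
def url = "https://${host}/api/?type=op&key=${apiKey}" +
          "&cmd=<show><system><environmentals></environmentals></system></show>"
def response = new XmlSlurper().parseText(new URL(url).text)

// Emit one line per instance/datapoint, prefixed with the instance id so the
// ##WILDVALUE##-based datapoint keys can route each value to its instance.
response.result.power.entry.each { entry ->
    def wildvalue = entry.description.text()
    println "${wildvalue}.cur_voltage: ${entry.Volts.text()}"
    println "${wildvalue}.min_voltage: ${entry.min.text()}"
    println "${wildvalue}.max_voltage: ${entry.max.text()}"
}
return 0
```

Unlike the SCRIPT version, this script runs once per collection cycle for the whole device; the instance id prefix on each output line is what associates each value with the correct instance.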
From here we create three datapoints: one each for max_voltage, min_voltage, and cur_voltage. We'll use the key/value post-processor to extract the values using the keys ##WILDVALUE##.max_voltage, ##WILDVALUE##.min_voltage, and ##WILDVALUE##.cur_voltage.