The BatchScript Data Collection method is ideal for DataSources that collect data via script across a large number of instances.
The Script Data Collection method can also be used to collect data via script; however, it polls data separately for each discovered instance. For DataSources that collect across a large number of instances, this can be inefficient and place excessive load on the device being monitored. For devices that don't support requests for data from a single instance, unnecessary complexity must be introduced to return data per instance. The BatchScript Data Collection method solves these issues by collecting data for multiple instances at once, instead of per instance.
Invalid characters in the WILDVALUE should be replaced with an underscore or dash in both Active Discovery and collection.
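As a sketch, sanitization might be done with a single regex substitution. The character class below is only a placeholder; substitute the invalid characters named in this article:

```python
import re

# Placeholder set of invalid WILDVALUE characters, for illustration only;
# replace with the characters named in this article.
INVALID_CHARS = re.compile(r"[=:\\#\s]")

def sanitize_wildvalue(raw: str) -> str:
    """Replace each invalid character with an underscore."""
    return INVALID_CHARS.sub("_", raw)
```

The same sanitization must be applied in the Active Discovery script and the collection script, so the WILDVALUE produced by each matches exactly.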
As when collecting data for a DataSource that uses the Script collector, the BatchScript collector executes the designated script (embedded or uploaded) and captures its output from the program's stdout. If the program finishes successfully (determined by an exit status code of 0), the post-processing methods are applied to the output to extract values for the DataSource's datapoints, the same as with other collectors.
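A minimal sketch of such a script: it makes one bulk query against the device, prints one line per instance datapoint to stdout, and exits 0 on success. The instance names, keys, and values below are hypothetical:

```python
#!/usr/bin/env python3
"""Hypothetical BatchScript sketch: one bulk query, line-based stdout output."""
import sys

def collect_all_instances():
    # Placeholder for a single bulk request to the monitored device
    # (e.g. one SNMP walk or one API call covering every instance).
    return {
        "disk1": {"IOPS": 100, "throughput": 400},
        "disk2": {"IOPS": 250, "throughput": 800},
    }

def format_lines(stats):
    # Render one wildvalue.key=value line per instance datapoint.
    return [
        f"{wildvalue}.{key}={value}"
        for wildvalue, datapoints in stats.items()
        for key, value in datapoints.items()
    ]

def main():
    for line in format_lines(collect_all_instances()):
        print(line)
    return 0  # exit code 0 signals success to the collector

if __name__ == "__main__":
    sys.exit(main())
```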
The output of the script should be either JSON or line-based.
Line-based output needs to be in the following format:
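Each line pairs an instance's wildvalue with a key and its value; the names below are placeholders:

```
instance1_wildvalue.key1=value11
instance1_wildvalue.key2=value12
instance2_wildvalue.key1=value21
instance2_wildvalue.key2=value22
```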
JSON output needs to be in the following format:
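Each instance's datapoints are nested under its wildvalue; this sketch assumes a top-level data wrapper, with placeholder names and illustrative numbers:

```json
{
  "data": {
    "instance1_wildvalue": {
      "key1": 10,
      "key2": 20
    },
    "instance2_wildvalue": {
      "key1": 30,
      "key2": 40
    }
  }
}
```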
Since the BatchScript Data Collection method collects datapoint information for multiple instances at once, the ##WILDVALUE## token needs to be used in each datapoint definition to pass the instance name. "NoData" will be returned if your WILDVALUE contains any of the invalid characters named earlier in this support article.
Using the line-based output above, the datapoint definitions should use the multi-line key-value pairs post-processing method, with the following Keys:
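For example, with placeholder datapoints named key1 and key2, each datapoint's Key field combines the ##WILDVALUE## token with the key name from the output:

```
##WILDVALUE##.key1
##WILDVALUE##.key2
```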
Using the JSON output above, the datapoint definitions should use the JSON/BSON object post processing method, with the following JSON paths:
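For example, assuming the instance objects are nested under a top-level data object and placeholder datapoints named key1 and key2, the JSON paths would be:

```
data.##WILDVALUE##.key1
data.##WILDVALUE##.key2
```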
If a script generates the following output:
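A representative example for two disk instances (values are illustrative):

```
disk1.IOPS=100
disk1.throughput=400
disk2.IOPS=250
disk2.throughput=800
```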
Then the IOPS datapoint definition may use the multi-line key-value pairs post-processing method like this:
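The Key field for the IOPS datapoint pairs the token with the datapoint name:

```
##WILDVALUE##.IOPS
```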
The ##WILDVALUE## token would be replaced with disk1 and then disk2, so this datapoint would return the IOPS values for each instance. The throughput datapoint definition would have ‘##WILDVALUE##.throughput’ in the Key field.