Normal Datapoints
Last updated - 19 January, 2026
Normal datapoints in LogicMonitor represent values collected directly from monitored systems. Unlike complex datapoints, which perform mathematical transformations on existing datapoints, normal datapoints capture and store the primary metrics gathered from the raw output.
The configuration of a normal datapoint depends on the DataSource’s collection method. For example, when collected using SNMP, the datapoint references an OID; for WMI it references a WMI class attribute; for JMX it uses MBean objects and attributes; and for script-based methods it draws from script output options. This ensures that the datapoint aligns with the correct source of raw data.
In addition, LogicMonitor provides post-processing interpretation methods that enable normal datapoints to extract or refine values from the raw output.
Normal Datapoint Metric Types
Normal datapoints can be assigned the following metric types:
- Gauge—The gauge metric type stores the reported value directly for the datapoint. For example, data with a raw value of 90 is stored as 90.
- Counter—Counter metrics calculate the rate of change between samples and correct for counter wraps. However, a counter reset can be misinterpreted as a wrap and produce large spikes in data unless maximum values are set. For this reason, counter metrics are typically reserved for datapoints that wrap frequently, such as gigabit interfaces using 32-bit counters.
- Derive—Derive metric types are similar to counters, with the exception that they do not correct for counter wraps. Derives can return negative values.
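The difference between the three metric types can be illustrated with a short sketch. This is not LogicMonitor's internal code; the function names and the 32-bit wrap assumption are for illustration only.

```python
# Illustrative sketch of how the three metric types interpret samples.
WRAP_32 = 2 ** 32  # a 32-bit counter rolls over at this value


def gauge(value):
    """Gauge: the reported value is stored as-is."""
    return value


def derive(prev, curr, interval_s):
    """Derive: rate of change between samples; no wrap correction,
    so a counter reset can yield a negative rate."""
    return (curr - prev) / interval_s


def counter(prev, curr, interval_s, wrap=WRAP_32):
    """Counter: rate of change, correcting for a wrap by assuming the
    counter rolled over once between samples."""
    delta = curr - prev
    if delta < 0:  # counter wrapped (or was reset)
        delta += wrap
    return delta / interval_s


print(gauge(90))                        # stored directly: 90
print(derive(1000, 1600, 60))           # 10.0 units per second
print(counter(WRAP_32 - 100, 500, 60))  # wrap corrected: 10.0
print(derive(WRAP_32 - 100, 500, 60))   # negative: no wrap correction
```

Note how the same pair of samples produces a sensible rate under Counter but a large negative value under Derive, which is why frequently wrapping datapoints favor the Counter type.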
Output Interpretation Methods
When adding a normal datapoint, you specify what raw output should be collected. Depending on the collection method, these options may include parsing multi-line key-value pairs, applying regular expressions, matching text, extracting fields from CSV or TSV data, or interpreting structured formats like XML and JSON. Specialized methods also exist for processing binary payloads with HexExpr. These interpretations make it possible to transform raw output into precise, actionable datapoint values.
Note: Output interpretation methods depend on the collection method used for the datapoint. Not all collection methods require an output interpretation method.
Depending on your collection method, you can leverage the following output interpretation methods for normal datapoints:
| Output Interpretation Method | Details | Example |
| --- | --- | --- |
| Multi-Line Key-Value Pairs | Treats a multi-line string raw measurement as a set of key-value pairs separated by an equal sign (=) or colon (:). Important: Key names must be unique, or values cannot be extracted reliably. If a raw measurement output contains two identical key names paired with different values, where the separating character (equal sign or colon) differs, the Key-Value Pairs post-processing method cannot extract the values. | If the key is defined as "Buffers" in the datapoint, then "11023" is extracted as the value from the following output: `Buffers=11023 BuffersAvailable=333 heapSize=245MB` Note: This format applies when using Script collection, where each datapoint evaluates its own output independently. When using BatchScript collection, all instances are collected in a single execution, so the output must include an instance identifier that LogicMonitor can use to determine which datapoint value belongs to which instance. |
| Regular Expressions | Uses regular expressions to extract data from more complex strings. The contents of the first capture group (text in parentheses) in the regex are assigned to the datapoint. | If the Apache DataSource uses a regular expression to extract counters from the server-status page, the raw output of the webpage collection method looks similar to the following: `Total Accesses: 8798099` |
| TextMatch | Checks whether a specific string (or regex pattern) is present in the raw output. Returns 1 if found, 0 if not found. | To check whether Tomcat is running on a host, you can include a script DataSource that periodically runs `ps -ef \| grep java`. If Tomcat is running, the output of this pipeline contains `org.apache.catalina.startup.Bootstrap`. You can configure the datapoint to check whether the raw measurement output contains this string; if it does, the datapoint returns 1. |
| CSV and TSV | Extracts values when the raw measurement is an array of comma-separated values (CSV) or tab-separated values (TSV). An index parameter identifies which column to extract. | A script DataSource executes `iostat \| grep 'sda' \| head -1` to get the statistics of hard disk "sda". The output is a TSV array: `sda 33.75 3.92 719.33 9145719 1679687042` The fourth column (719.33) is blocks written per second. To extract this value into a datapoint using the TSV interpretation method, select "a TSV string" as the method to interpret the output and enter "3" as the (zero-based) index. |
| HexExpr | Interprets the payload as a byte array. Specify offset:length (1, 2, 4, or 8 bytes) to extract numeric values. Supports both big-endian and little-endian formats. Useful for extracting values from binary packets (for example, DNS fields). Note: Applies to TCP and UDP collection methods only. | You can collect a 2-byte field starting at byte 4 from the following returned raw payload: `00 01 02 03 00 64 ff aa ...` Using the HexExpr value `4:2` when configuring the datapoint enables the datapoint to store 100 as the collected value. |
| XML | Uses XPath syntax to navigate through elements and attributes in an XML document. Any XPath expression that evaluates to a number can be used as a datapoint value. | You can extract the ID of the first item from an XML response containing multiple items by configuring the datapoint to interpret the output as an XML document and using the following XPath: `/Order/Manifest/Item[1]/ID` This XPath selects the first `<Item>` element and returns its `<ID>` value, which is stored as the datapoint. |
| JSON | Parses raw output returned as a JSON literal string. Specify a JSON path (a space-separated list of keys) to extract a value. | You can create a webpage DataSource that sends an HTTP request to /api/overview on a RabbitMQ server to collect its performance information. RabbitMQ returns a JSON literal string similar to the following: `{ "management_version":"2.5.1", "statistics_level":"fine", "message_stats":[], "queue_totals":{ "messages":0, "messages_ready":0, "messages_unacknowledged":0 }, "node":"rabbit@labpnginx01", "statistics_db_node":"rabbit@labpnginx01", "listeners":[ { "node":"rabbit@labpnginx01", "protocol":"amqp", "host":"labpnginx01", "ip_address":"::", "port":5672 } ] }` You can create a datapoint that extracts the number of messages in the queue by specifying "queue_totals messages" as the JSON path. |
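The interpretation methods in the table above can be sketched against the documented sample outputs. This is not LogicMonitor's implementation; it only mirrors the behavior the table describes, using the same sample data.

```python
# Illustrative sketch of the output interpretation methods, applied to
# the sample outputs from the table above.
import json
import re
import struct

# Multi-Line Key-Value Pairs: split each line on '=' or ':', look up a key.
raw = "Buffers=11023\nBuffersAvailable=333\nheapSize=245MB"
pairs = dict(re.split(r"[=:]", line, maxsplit=1) for line in raw.splitlines())
print(pairs["Buffers"])  # 11023

# Regular Expressions: the first capture group becomes the value.
status_page = "Total Accesses: 8798099"
print(re.search(r"Total Accesses: (\d+)", status_page).group(1))  # 8798099

# TextMatch: 1 if the pattern is present in the raw output, else 0.
ps_output = "... org.apache.catalina.startup.Bootstrap start"
print(1 if "org.apache.catalina.startup.Bootstrap" in ps_output else 0)  # 1

# TSV: extract a column by zero-based index (index 3 = fourth column).
iostat_line = "sda\t33.75\t3.92\t719.33\t9145719\t1679687042"
print(iostat_line.split("\t")[3])  # 719.33

# HexExpr "4:2": a 2-byte big-endian field starting at byte offset 4.
payload = bytes([0x00, 0x01, 0x02, 0x03, 0x00, 0x64, 0xFF, 0xAA])
print(struct.unpack_from(">H", payload, 4)[0])  # 100

# JSON path "queue_totals messages": walk the nested keys.
doc = json.loads('{"queue_totals": {"messages": 0, "messages_ready": 0}}')
print(doc["queue_totals"]["messages"])  # 0
```

Each snippet reduces raw collector output to a single numeric value, which is exactly the role an interpretation method plays for a normal datapoint.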
Configuring a Normal Datapoint
Note: The following steps outline the common procedure for configuring a datapoint. For specific steps related to each collection method, see the applicable documentation for that collection method.
- In LogicMonitor, navigate to Modules.
- Select My Module Toolbox, and then either create a new DataSource or navigate to the DataSource you want to create a normal datapoint for.
For more information, see DataSources Configuration or Modules Management.
- Configure or modify the settings as needed for the module.
For more information, see DataSources Configuration.
- In the Datapoints settings, select Add a Normal Datapoint, and do the following:
- In the Name field, enter a name for the datapoint.
- In the Description field, enter a description as needed.
- If the datapoint you are configuring contains an output interpretation method, select the method from the Interpret output with setting.
For more information, see Output Interpretation Methods.
Note: Not all collection methods provide an output interpretation method for the datapoint.
- Configure any additional, applicable settings for the datapoint based on the collection method you are using for the DataSource.
- If the datapoint you are configuring contains a metric type, select the type from the Metric Type setting.
Note: Not all collection methods provide a metric type for the datapoint.
- (Optional) To display readable status values for a datapoint, select Add Status Display Name, and then do the following:
- In the Status Value field, enter the number value returned by the datapoint.
- From the Operator dropdown menu, select how you want the value to apply:
- When the value is higher than the specified value, select “(>) Greater than.”
- When the value reaches or exceeds the specified value, select "(>=) Greater than or equal to."
- When the value is lower than the specified value, select “(<) Less than.”
- When the value is at or below the specified value, select “(<=) Less than or equal to.”
- When the value exactly matches the specified value, select “(=) Equal to.”
- When the value is anything other than the specified value, select "(!=) Not equal to."
- In the Display Name field, enter the corresponding status text (for example, 3 = Non-operational).
- Select Apply to save.
- Configure the Alert Thresholds settings as necessary.
For more information, see Alert Threshold Overview.
- Select the save icon to save the datapoint.
- Configure any additional settings for the DataSource, and then select Save.
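The Status Display Name mapping described in the steps above can be sketched as a small rule table. The rule structure below is hypothetical, purely to illustrate how an operator, a status value, and a display name combine; LogicMonitor manages these rules in the UI.

```python
# Illustrative sketch of Status Display Name rules: each rule pairs an
# operator and a status value with the display text shown for matches.
import operator

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "=": operator.eq, "!=": operator.ne}


def display_name(value, rules):
    """Return the display name of the first rule the value satisfies,
    falling back to the raw number when no rule matches."""
    for op, status_value, name in rules:
        if OPS[op](value, status_value):
            return name
    return str(value)


# Hypothetical rules, using the documented example "3 = Non-operational".
rules = [("=", 3, "Non-operational"), ("<", 3, "Operational")]
print(display_name(3, rules))  # Non-operational
print(display_name(1, rules))  # Operational
print(display_name(5, rules))  # 5 (no rule matched)
```

Ordering matters in this sketch: the first matching rule wins, so broader operators such as `!=` belong after exact matches.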