Creating a DataSource
DataSources are templates that define what numerical data should be collected, how it should be collected, and what values should be graphed and trigger alerts.
There are four sections included in every DataSource definition:
The General Information section includes identifying information about the DataSource. This includes the collection method that will be used to collect data, how often data will be collected, and from which devices data will be collected.
The unique name of the DataSource. As a best practice, this name should be descriptive: specify the platform or application name first, followed by the specific component being monitored.
For instance: "Tomcat Threads-", or "Aruba Controller Power Supply-".
The name the DataSource will be displayed with on the Devices page. This name does not need to be unique (e.g. there can be several DataSources that display as "CPU", as they apply to different kinds of devices). As a best practice, the display name should be shorter than the full DataSource name. For example, "Services" instead of "Windows Services", or "VIPs" instead of "Citrix Netscaler VIPs".
Note that the DataSource name and display name cannot include the operators and comparison functions listed on this page.
The description that will be associated with this DataSource. As a best practice, if the DataSource name does not make clear what the DataSource collects, the description field should provide enough information that someone reading the name and description together will understand what the DataSource does.
Tag the DataSource with keywords (related to its name, purpose, scope, etc.) that will facilitate searching for it.
This field can contain any technical notes associated with the DataSource.
The DataSource Group to which the DataSource will be added. If this field is left empty, the DataSource will not be added to a DataSource group. If you enter text that does not match an existing DataSource Group, one will be created.
For example, all the Dell hardware monitoring DataSources are assigned the group Dell. This collapses those DataSources to a single DataSource group entry that can be expanded.
The Applies To field defines which devices will be associated with this DataSource. The text entered here must be formatted in AppliesTo Scripting syntax.
Select the "Test" option to display a list of all devices that will be added or removed when Applies To is changed.
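For illustration, AppliesTo expressions combine built-in functions, device properties, and boolean operators. A few common patterns (the category and hostname values here are examples; adapt them to your environment):

```
isLinux()
hasCategory("MySQL")
isWindows() && system.hostname =~ "web"
```

The first expression applies the DataSource to all Linux devices, the second to any device with the MySQL category set, and the third to Windows devices whose hostname contains "web".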
The Collect every setting defines how frequently data will be collected.
This field should be set to an interval appropriate for the data being checked. For example, items that change frequently (e.g. CPU usage) or require immediate attention in the event of an error (e.g. ping loss) should have a short poll cycle, such as 60 seconds.
Note that longer poll cycles impose less load on both the server being monitored and the Collector.
The Collector field defines the mechanism that will be used to collect data.
If this option is checked, the DataSource will be multi-instance. You should make your DataSource multi-instance if you know that there are multiple occurrences of the object you would like to monitor (e.g. multiple disks or volumes on a server).
If the DataSource is multi-instance (i.e. the Multi-instance option is selected in the General Information section), then you can use Active Discovery to manage the monitored instances of the DataSource.
Enable Active Discovery
A DataSource must be multi-instance to enable Active Discovery.
If enabled, LogicMonitor will automatically find a DataSource's instances on your device, determine their display names or aliases, and keep them up to date.
Disable discovered instances
If this option is checked, instances are placed in a disabled state when they are discovered. Monitoring will have to be manually enabled on specific instances.
Checking this option is helpful if you would like to fine-tune your instances' alert thresholds prior to enabling monitoring. This ensures you do not receive a flood of alerts as soon as new instances are discovered.
Automatically delete instance
If this option is checked, LogicMonitor will automatically remove an instance from monitoring if a future Active Discovery iteration reports that it is no longer present on a device. As a best practice, you should uncheck this option if you would like to receive alerts for the instances of this DataSource even if they are not present. For example, you would likely not want this option checked if Active Discovery detects that a service should be monitored by finding a listening TCP port. This is because, if the service were to fail, the port would stop responding, Active Discovery would report the instance as no longer present, it would be removed from monitoring, and you would no longer receive alerts for it—at precisely the time you would want to receive alerts for it.
When automatically removing an instance from monitoring, you can choose how long the instance's monitoring history will remain in LogicMonitor:
- Delete Immediately. An instance's monitoring history will be removed from LogicMonitor as soon as the instance is determined to be no longer present.
- Delete After 30 Days. An instance's monitoring history will remain in LogicMonitor for 30 days before being removed. If an instance is rediscovered within this 30-day window, the prior monitoring history will be associated with the new instance. This option is useful in cases such as replacing a failed network card, where you would want the old history visible once the new card is active. (You would not want the history kept if the new instance is not related to the old.)
This option controls how frequently Active Discovery will run to discover instances for this DataSource.
Defines the protocol (SNMP, WMI, NetApp API, Script, etc.) employed by Active Discovery to find instances.
NOTE: If you choose to embed or upload your own Active Discovery script, you will be able to use our "Test Script" functionality. Selecting this button will return a table of all instances that would be discovered using the script. This is a good way to ensure that all instances you expect to capture are, in fact, being discovered.
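As a sketch of what a script-based Active Discovery script does: it prints one line per instance, where each line takes the form wildvalue##display name, optionally followed by ##description. The example below is illustrative only; the volume names are hypothetical, and a real script would query the target device rather than use a hard-coded list.

```python
# Hypothetical script-based Active Discovery sketch.
# Each printed line becomes one monitored instance:
#   wildvalue##display name[##description]
# A real script would query the device; this hard-coded list is
# for illustration only.

volumes = [
    ("/dev/sda1", "root volume"),
    ("/dev/sdb1", "data volume"),
]

for wildvalue, description in volumes:
    # The wildvalue uniquely identifies the instance; the display
    # name (here the same as the wildvalue) is what appears on the
    # Devices page.
    print(f"{wildvalue}##{wildvalue}##{description}")
```

Running this against the "Test Script" functionality would show each printed line as a row in the returned instance table.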
For more detail, see our Data Collection Methods topic, which features specific instructions for each of the data collection protocols supported.
Enables you to refine which instances are added into monitoring via Active Discovery. If filters are added, instances must match the filter criteria in order to be discovered.
As a best practice, you should ensure that any Active Discovery filters are described in the "comment" field, if not self-explanatory.
Select how you'd like the instances of this DataSource to be grouped. Choose manual if you'd like to manually group monitored instances. Note that if you select an automatic grouping method (e.g. grouping based on regular expression), you will not be able to manually change instance group membership.
Certain data collection methods require you to configure specific attributes in this section. As a best practice, the Name field in the SNMP, JMX, etc. Collector Attributes section should reflect the name used by the entity that defined the object to be collected (i.e. use the oid object name for SNMP data, the WMI property name for WMI data, etc.). For more information, see the Data Collection Methods topic, which features specific instructions for each of the data collection protocols supported.
Each DataSource must have one or more datapoints that define what information to collect and store. Once datapoints are identified, they can be used to trigger alerts when data from a device exceeds the threshold(s) you've specified, or when there is an absence of expected data.
While some configuration options vary depending on the collection method of the DataSource, many configuration settings are common across many types of DataSources. The following sections provide descriptions of common datapoint configuration fields. (For more detail on defining datapoints for specific collection methods, see our Data Collection Methods topic.)
Note: Datapoints can be sorted alphabetically in the DataSource definition. This functionality is not applicable to the Raw Data tab found in the Device instance view.
Normal vs. Complex Datapoints
Normal datapoints: The data stored is simply the raw value of the data collected.
Complex datapoints: The data stored is computed by applying script and/or arithmetic methods to the collected data.
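As an illustration, a complex datapoint's expression can reference the DataSource's normal datapoints by name. Assuming hypothetical normal datapoints named UsedSpace and TotalSpace, a percentage-used complex datapoint might use an arithmetic expression such as:

```
(UsedSpace / TotalSpace) * 100
```

The resulting computed value is what gets stored, graphed, and alerted on, just like a normal datapoint's raw value.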
The following datapoint configuration fields exist for both normal and complex datapoints:
The name of the datapoint as displayed in alert notifications. As a best practice, you should make this name meaningful: Aborted_clients is a meaningful name; datapoint1 is not, as it would not be helpful to receive an alert stating "datapoint1 is over 10 per second". Note that datapoint names cannot include the operators and comparison functions listed on this page.
The description of the datapoint. As a best practice, every datapoint should have a description.
Valid value range
Any data reported for this datapoint must fall within the Min and Max values, if defined. If data does not fall within the valid value range, it will be rejected and "NaN" will be stored in place of the data.
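The effect of the valid value range can be sketched as follows. This is an illustrative model of the described behavior, not LogicMonitor's internal implementation:

```python
import math

def apply_valid_value_range(value, min_value=None, max_value=None):
    """Return the value if it falls within the configured Min/Max
    range; otherwise reject it and store NaN in its place."""
    if min_value is not None and value < min_value:
        return math.nan
    if max_value is not None and value > max_value:
        return math.nan
    return value

# A CPU-percentage datapoint with a 0-100 valid value range:
print(apply_valid_value_range(42, 0, 100))   # within range, stored as-is
print(apply_valid_value_range(250, 0, 100))  # out of range, stored as NaN
```

If no Min or Max is defined, all reported values are accepted.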
If you'd like to receive alerts for this datapoint, click the Wizard button to define the datapoint values that should trigger warning, error, and critical alerts (or to adjust the pre-configured thresholds that LogicMonitor automatically applied). For detailed information on this wizard and the configurations it supports, see Using the Alert Threshold Wizard.
Note: If you skip the wizard and directly configure a threshold here, resulting alerts will default to a severity of "warning."
If there is no data for this datapoint:
By default, alerts will not be triggered if no data can be collected for a datapoint. If you would like to receive alerts when no data is collected, called No Data alerts, select the severity of alert that should be triggered from this field's drop-down menu.
While it's possible to configure No Data alerts for datapoints that also have value-based thresholds, it is not necessarily best practice. For example, you will likely want different alert messages for these scenarios, as well as different trigger and clear intervals. For these reasons, consider setting the No Data alert on a datapoint that has no value-based alert threshold, so that you can customize the alert's message, trigger interval, and clear interval as appropriate for a no-data condition. In most cases, any datapoint on the DataSource can carry the No Data alert: a no-data condition is typically the result of the entire protocol (e.g. WMI, SNMP, JDBC) not responding, so all datapoints reflect it, not just one.
Alert Trigger Interval
Defines the number of collection intervals for which an alert condition must exist before an alert is triggered. This interval applies to the value-based alert and/or No Data alert that is established for the datapoint.
The length of one collection interval is determined by the "Collect every" DataSource setting found in the General Information section.
"Trigger alert immediately" will send an alert as soon as the datapoint value satisfies an alert condition criteria.
Setting the alert trigger interval to a higher value typically ensures that a datapoint's value is persistent before an alert is triggered. As a best practice, set the trigger interval for every datapoint with alerts to a value that balances immediate notification of an alert state (a value of 0 or 1) against suppressing alerts for transitory conditions (a value of 5 or more).
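The trigger interval's behavior can be modeled as counting consecutive collection intervals in breach. This is an illustrative sketch of the described behavior, not LogicMonitor's internal logic:

```python
def first_alert_index(values, threshold, trigger_interval):
    """Return the index of the poll at which an alert would trigger:
    the value must exceed the threshold for `trigger_interval`
    consecutive collection intervals. A trigger_interval of 0 or 1
    alerts on the first breach. Returns None if never triggered."""
    consecutive = 0
    for i, value in enumerate(values):
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= max(trigger_interval, 1):
            return i
    return None

# Hypothetical CPU samples polled every 60 seconds, threshold of 90:
samples = [50, 95, 96, 40, 92, 93, 94]
print(first_alert_index(samples, 90, 1))  # index 1: first breach
print(first_alert_index(samples, 90, 3))  # index 6: third consecutive breach
```

Note how the breach at indices 1-2 never triggers with a trigger interval of 3, because the condition clears at index 3 before three consecutive breaches accumulate.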
Alert Clear Interval
If a triggered alert stops satisfying the alert criteria, the Alert Clear Interval defines how many polling cycles must occur before the alert is automatically cleared. This interval applies to the value-based alert and/or No Data alert that is established for the datapoint.
If "Clear alert immediately" is selected, the alert will be cleared as soon as the datapoint value no longer satisfies the alert criteria.
As with the alert trigger interval, setting a higher value for the alert clear interval helps ensure that a datapoint's value has stabilized before the alert clears. This can prevent a flapping condition from repeatedly triggering new alerts.
Filling out this field will override the default datapoint alert message that will be displayed in alert notifications for this DataSource.
As a best practice, any datapoint with an alert threshold defined should have an alert message defined that formats the relevant information in the alert, and also provides context and recommended actions, where possible.
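For example, a custom alert message can include tokens that LogicMonitor substitutes at alert time, such as ##HOST##, ##VALUE##, and ##THRESHOLD## (check the list of tokens available in your portal). A message for the Aborted_clients datapoint discussed above might read:

```
High connection aborts on ##HOST##: Aborted_clients is ##VALUE## per
second (threshold: ##THRESHOLD##). Check for clients disconnecting
without closing their connections, and review network stability
between clients and the server.
```

This names the affected device, states the measured value against its threshold, and suggests a first troubleshooting step, which is generally more actionable than the default message.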