LM Logs Overview

Last updated on 05 December, 2022

LogicMonitor Logs offers a unique and unified approach to log analysis centered on algorithmic root-cause analysis. LM Logs analyzes log events from IT environments to identify normal patterns and deviations from them, referred to as anomalies. Anomaly detection lets teams act on issues early, before they become more complex and expensive to resolve.

When setting up LM Logs, you configure source devices and services to forward log data to LogicMonitor through one of several ingestion methods. Most often this is a LogicMonitor Collector, but you can also use the Logs REST API to send log events directly. The following is an example architecture with log data collected through different methods from multiple resources.

Log anomaly events are displayed on the Logs page, as well as contextually together with metric Alerts and Topology information, to help speed up investigation.

Getting Started

You can send log data directly to the LogicMonitor ingestion endpoint using the LM Logs Ingestion API, or use one of the provided integrations to forward your application or IT infrastructure logs.
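As a sketch of what an API-based ingestion call might look like, the snippet below builds a log event in the documented LM Logs format. The endpoint path, authentication scheme, and exact field names ("message", "_lm.resourceId", "timestamp") should be verified against the Logs REST API documentation for your account; the hostname and message here are illustrative only.

```python
import json

def build_log_event(message, hostname, timestamp=None):
    """Build one log event for the LM Logs ingestion endpoint
    (assumed: POST https://<account>.logicmonitor.com/rest/log/ingest)."""
    event = {
        "message": message,
        # Resource mapping: LogicMonitor matches this against a monitored
        # resource's system.hostname property so the log event is attached
        # to the right device.
        "_lm.resourceId": {"system.hostname": hostname},
    }
    if timestamp is not None:
        event["timestamp"] = timestamp  # epoch milliseconds
    return event

# The ingestion API accepts a JSON array of events.
payload = json.dumps([build_log_event("disk usage at 92%", "web-01")])
```

An actual request would POST `payload` with the appropriate authentication headers (for example LMv1 or Bearer token); confirm the supported scheme in the Logs REST API documentation.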

Recommendation: When setting up logs ingestion, ensure you use the available filtering options to remove logs that contain sensitive information so that they are not sent to LogicMonitor.
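One way to apply this recommendation when you control the forwarding code is to scrub log lines before they leave your environment. The sketch below is illustrative, not a LogicMonitor feature: the pattern list and placeholder strings are example choices you would adapt to your own data.

```python
import re

# Example redaction rules: mask email addresses and key=value secrets
# before a log line is forwarded. Extend this list for your environment.
SENSITIVE_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<redacted-email>"),
    (re.compile(r"(password|token|apikey)=\S+", re.IGNORECASE), r"\1=<redacted>"),
]

def scrub(line):
    """Apply every redaction rule to a single log line."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        line = pattern.sub(replacement, line)
    return line

scrub("login failed for alice@example.com password=hunter2")
# -> "login failed for <redacted-email> password=<redacted>"
```

If you use a Collector or a provided integration instead of your own forwarder, apply the equivalent built-in filtering options so the sensitive values never reach LogicMonitor.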

Note: Logs can be viewed in LM Logs even if the log is not associated with an LM-monitored resource. Even without resource mapping, or when there are resource mapping issues, logs are still available for viewing and searching.

Reviewing Logs

Once logs are sent to LogicMonitor, they are mapped to existing monitored resources based on metadata included in the API request. Anomalies are detected automatically and displayed on the Logs page and in the context of metric alerts for these resources. Use keyword search and filtering for fast investigation and analysis. See Reviewing Logs and Log Anomalies.

Note: There are two offerings for LM Logs, Pro and Enterprise, with different log retention periods. Retention affects the time range of logs you can search and review on the Logs page.

You can also configure log pipelines to define filters and other processing steps, such as alert conditions, on specific sets of logs to get notifications when issues occur. See Log Processing Pipelines and Log Alert Conditions.


If you are having issues setting up log ingestion or reviewing logs, see the Troubleshooting guide for common issues. Troubleshooting help for a specific integration is available in its setup article, listed under Getting Started.
