LM Logs offers a unified approach to log analysis centered on algorithmic root-cause analysis. LM Logs analyzes log events to identify normal patterns and deviations from those patterns: anomalies. These log anomaly events are displayed on the Logs page and in context with metric alerts and topology information to help speed investigation.
We currently support sending log data to LogicMonitor via the LM Logs Ingestion API. You can send custom logs directly to the ingestion endpoint or use one of our provided integrations to send your application or IT infrastructure logs:
- Enable a LogicMonitor Collector to collect and forward Syslog messages, Windows Event Logs, and Kubernetes Cluster Events.
- Use the Fluentd output plugin to send Fluentd records. See Setting up Fluentd Logs Ingestion.
- Use the Amazon Web Services (AWS) integration for logs stored in Amazon CloudWatch. See Setting up CloudWatch Logs Ingestion.
- Use the Microsoft Azure integration for Azure device logs. See Setting up Azure Logs Ingestion.
- Use the Google Cloud Platform (GCP) integration for application logs. See Setting up GCP Logs Ingestion.
- Use the Helm chart configuration for LM Logs in LogicMonitor's Kubernetes integration. See Sending Kubernetes Logs and Events.
- Send custom logs directly to the log ingestion endpoint. See Sending Logs to the LM Logs Ingestion API.
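For the last option, sending a batch of custom log events amounts to an authenticated POST of a JSON array to the ingestion endpoint. The sketch below builds such a request in Python; the account name, credentials, and hostname are placeholders, and the exact resource path, event field names, and LMv1 signing details should be confirmed against the API documentation referenced above.

```python
import base64
import hashlib
import hmac
import json
import time

# Placeholder account and API-token values -- replace with your own.
ACCOUNT = "yourcompany"
ACCESS_ID = "your-access-id"
ACCESS_KEY = "your-access-key"
RESOURCE_PATH = "/log/ingest"  # assumed ingestion resource path

def build_request(events):
    """Build the URL, headers, and body for a log-ingestion POST."""
    body = json.dumps(events)
    epoch = str(int(time.time() * 1000))
    # LMv1 signature: HMAC-SHA256 over verb + epoch + body + resource path,
    # hex-encoded, then base64-encoded (hedged -- verify against the API docs).
    message = "POST" + epoch + body + RESOURCE_PATH
    digest = hmac.new(ACCESS_KEY.encode(), message.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return {
        "url": f"https://{ACCOUNT}.logicmonitor.com/rest{RESOURCE_PATH}",
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"LMv1 {ACCESS_ID}:{signature}:{epoch}",
        },
        "body": body,
    }

# Each event carries the log message plus a resource-mapping property so
# LogicMonitor can match it to an existing monitored resource.
events = [{
    "message": "CPU quota exceeded",
    "_lm.resourceId": {"system.hostname": "prod-web-01"},
}]
request = build_request(events)
print(request["url"])
```

The resulting dictionary can then be sent with any HTTP client, for example `requests.post(request["url"], data=request["body"], headers=request["headers"])`.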
Note: When setting up logs ingestion, we recommend that you use the available filtering options to remove logs that contain sensitive information so that they are not sent to LogicMonitor.
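Where an integration's built-in filters are not available, sensitive values can also be scrubbed before log lines leave your network. A minimal sketch, with illustrative regex patterns you would extend for your own data:

```python
import re

# Illustrative patterns for common sensitive values -- extend for your data.
SENSITIVE_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),        # email addresses
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "<card-number>"),  # card-like digit runs
]

def scrub(message: str) -> str:
    """Mask sensitive substrings in a log line before it is forwarded."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        message = pattern.sub(replacement, message)
    return message

print(scrub("login failed for alice@example.com"))
# -> login failed for <email>
```

Applying this to each event's message field before the events are posted keeps the masked values out of LogicMonitor entirely.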
Once logs are sent to LogicMonitor, they are mapped to existing monitored resources based on the information sent to the API. Anomalies are detected automatically and displayed on the Logs page and in the context of metric alerts for those resources. Use keyword search and filtering for fast investigation and analysis. See Reviewing Logs and Log Anomalies.
Note: There are two offerings for LM Logs, Pro and Enterprise, each with different log retention limits. Retention affects the time range of logs you can search and review on the Logs page.
You can also configure log pipelines to define filters and other processing steps, such as alert conditions, on specific sets of logs to get notifications when issues occur. See Log Processing Pipelines and Log Alert Conditions.
If you have issues setting up log ingestion or reviewing logs, see the Troubleshooting guide for common issues. You can also find troubleshooting help for a specific integration in its setup article listed under Getting Started.