About Log Ingestion
Last updated on 11 May, 2023

Almost every piece of an IT infrastructure generates logs in some form. Log events from various resources arrive at the LM Logs ingestion endpoint for processing. During ingestion, information in the log lines is parsed to make it available for searching and data analysis.
When setting up LM Logs, you can configure resources and services to collect and send log data to LogicMonitor in several ways. Often an LM Collector is used, but you can also send log events through the Logs REST API. The following shows examples of different log sources and methods for collecting and sending log data to LogicMonitor.

Sending Data to LM Logs
Resources must be configured to forward log data to LM Logs using one of several log data input methods. Because some methods are better suited for certain types of log data, choosing the right integration to send your data can improve how LogicMonitor processes the data and detects anomalies.
The various plugins and integrations offered by LogicMonitor for external systems merely help forward the log data. The integrations provide prewritten code that you can use in your environment to get your logs from wherever they are to the LogicMonitor API.
Log Input Options
Options for sending data to LogicMonitor depend on where the data is coming from:
- A resource, for example, a host machine that generates log data.
- Log collectors or log servers.
- Cloud services.
- Other applications and technologies, including custom logs.
The available log data input options are described in the following table.
| Data Sources | Data Input Options |
| --- | --- |
| Network devices, firewalls, routers, and switches | LM Collector for Syslog. Forward log data to LogicMonitor using standard TCP/UDP protocols. Install the LM Collector and configure it to forward syslog messages. See Installing Collectors and Sending Syslog Logs. |
| Linux servers | LM Collector for Syslog. Forward local logs from Unix-based systems with built-in syslog support. Install the LM Collector and configure it to forward syslog messages. See Installing Collectors and Sending Syslog Logs. |
| VMware (vCenter, ESX) | LM Collector for Syslog. Forward logs from ESX hosts using the built-in vmsyslogd service. Install the LM Collector and configure it to forward log messages for VMware. See Installing Collectors and Configuring LogSources for Syslog. |
| Windows servers and event logs | Windows Events LM Logs DataSource. Retrieves logs using Windows Management Instrumentation (WMI). Recommended method, available in LM Exchange. For configuration, see Sending Windows Event Logs. |
| Cloud services – Amazon Web Services (AWS) | LM AWS Integration. Send Amazon CloudWatch logs to LogicMonitor using a Lambda function configured to forward the log events. Data is collected through the API or a Collector. See LM Cloud Monitoring Overview and Sending AWS Logs. |
| Cloud services – Microsoft Azure | LM Microsoft Azure Integration. Forward logs to LogicMonitor using an Azure function that consumes logs from an Event Hub. Data is collected through the API or a Collector. See LM Cloud Monitoring Overview and Sending Azure Logs. |
| Cloud services – Google Cloud Platform (GCP) | LM GCP Integration. Forward different types of application logs from various GCP resources. Data is collected through the API or a Collector. See LM Cloud Monitoring Overview and Sending GCP Logs. |
| Containers – Kubernetes | LM Kubernetes Integration. Install the LM Kubernetes Monitoring Integration (see Installing Collectors and About LM's Kubernetes Monitoring). The integration includes the following methods, depending on log type. Kubernetes logs: send logs using the Helm chart configuration provided with the integration. Kubernetes cluster events and pod logs: use the LM Collector to collect and forward logs from a monitored cluster or cluster group. See Sending Kubernetes Logs and Events. |
| Log files on disk | LM OTel Collector. Use the LM OpenTelemetry Collector to forward log data from applications to the LogicMonitor platform. See OpenTelemetry Collector Installation and Configuring LogSources for Log Files. Fluentd. Use the LM Logs Fluentd plugin to collect logs from multiple sources, structure the data in JSON format, and forward it to the LM ingestion API. Can be used for most data sources. For installation and configuration, see Sending Fluentd Logs. Logstash. Use the LM Logs Logstash plugin to collect and send Logstash events to the LM ingestion API. Can be used for most data sources. For installation and configuration, see Sending Logstash Logs. |
| Custom logs | LM Logs Ingestion API. Send logs directly to your LM account via the LogIngest Public API. Use this option if a log integration isn't available, or you have custom logs you want to analyze. See Sending Logs Ingestion API. |
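As a rough illustration of the custom-logs option, the sketch below builds a request for the LM Logs ingestion endpoint using LMv1 API token authentication. The account name, access ID/key, and resource-mapping property are placeholders, and the exact resource path and payload shape are assumptions based on LogicMonitor's public REST API conventions; check the Sending Logs Ingestion API documentation for the authoritative details.

```python
import base64
import hashlib
import hmac
import json
import time

# Placeholder credentials -- replace with your own (these values are hypothetical).
ACCOUNT = "yourcompany"           # yields https://yourcompany.logicmonitor.com
ACCESS_ID = "example-access-id"
ACCESS_KEY = "example-access-key"
RESOURCE_PATH = "/log/ingest"     # assumed LM Logs ingestion resource path

def build_log_request(messages, resource_props):
    """Build the URL, JSON payload, and LMv1 auth header for a log ingestion call."""
    # Each event carries a message plus properties that map it to an LM resource.
    payload = json.dumps(
        [{"message": msg, "_lm.resourceId": resource_props} for msg in messages]
    )
    epoch = str(int(time.time() * 1000))
    # LMv1 signature: base64(hex(HMAC-SHA256(key, verb + epoch + body + path)))
    to_sign = "POST" + epoch + payload + RESOURCE_PATH
    digest = hmac.new(ACCESS_KEY.encode(), to_sign.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    headers = {
        "Authorization": f"LMv1 {ACCESS_ID}:{signature}:{epoch}",
        "Content-Type": "application/json",
    }
    url = f"https://{ACCOUNT}.logicmonitor.com/rest{RESOURCE_PATH}"
    return url, payload, headers

url, payload, headers = build_log_request(
    ["disk usage above 90%"], {"system.hostname": "app-server-01"}
)
# POST `url` with `payload` and `headers` using your HTTP client of choice.
```

This keeps the signing logic in one place so you can reuse it for batched sends; the actual HTTP call is left to whichever client your environment already uses.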
Recommendation: Ensure you use the available filtering options to remove logs that contain sensitive information so that they are not sent to LogicMonitor.
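For example, on an rsyslog-based source you can drop sensitive lines before the forwarding rule so they never leave the host. The hostname, port, and match pattern below are placeholders; adapt them to your environment and your collector's listening configuration.

```
# /etc/rsyslog.d/22-lm-logs.conf (illustrative; adjust to your setup)
# Drop lines that look like they carry credentials before forwarding.
:msg, contains, "password" stop

# Forward everything else to the LM Collector (@@ = TCP, @ = UDP).
*.* @@lm-collector.example.com:514
```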
LogSources
LogSources (Beta) is a LogicModule that provides configuration templates for data collection from monitored logs. LogSources provides out-of-the-box setup and configuration for popular log sources such as Windows Events and Kubernetes Events. For more information, see About LogSources or contact your Customer Success Manager.
Deviceless Logs
Logs can be viewed in LM Logs even if they come from an unmapped resource that is not monitored by LogicMonitor. Even when resource mapping is missing or has issues, logs are still available for anomaly detection, viewing, and searching. For these logs, the Resource and Resource Group fields are empty in the Logs page listing.
You can also create log processing pipelines for unmapped resources. Because there is no LM-monitored resource or resource group for these, LogicMonitor automatically associates the pipeline with a special resource and resource group. The resource name is the same as the pipeline name, and the resource group for unmapped resources is called LogPipelineResources.
Log alerts are created based on the alert conditions configured for the pipeline. You can see alerts for unmapped resources on the Alerts page. By default, log alerts for unmapped resources can only be seen by users with administrator access. You can change this by navigating to Settings > Users & Roles and assigning the desired permission to the LogPipelineResources group.
Note: Anomaly detection for logs is based on the service (resource.service.name) and namespace (resource.service.namespace) keys. If these keys are not present in the ingested log event, anomaly detection is not performed.
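To make the note concrete, a minimal event carrying both keys might look like the following. The service and namespace values are made up, and the exact placement of these properties in the payload is an assumption, so verify it against the documentation for your ingestion method.

```python
import json

# Illustrative log event for anomaly detection; key placement is assumed.
event = {
    "message": "payment request timed out after 30s",
    "resource.service.name": "checkout-api",      # service key
    "resource.service.namespace": "production",   # namespace key
}
print(json.dumps([event]))  # ingestion endpoints typically expect an array of events
```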