About Log Ingestion

Last updated on 25 January, 2023

Ingestion refers to the process of formatting and uploading log data from external sources like hosts, applications, and cloud-based logging services. When ingesting log data, information in the log lines is parsed to make it available for searching and data analysis. See Log Processing.

When setting up LM Logs, you configure resources and services to forward data using one of several log ingestion methods. Often this is an LM Collector, but you can also use the Logs REST API to forward log events to LogicMonitor. The following sections describe the different ingestion methods, when to use them, and how log ingestion works.

Sending Data to LM Logs

Resources must be configured to forward log data to LM Logs using one of several ingestion methods. Because some methods are better suited to certain types of log data, choosing the right integration for your data can improve how LogicMonitor learns and detects log anomalies. See Log Anomaly Detection.

Log Input Options

Options for sending data to LogicMonitor depend on where the data is coming from:

  • The resource that generates the data, for example a host machine. 
  • Log collectors or log servers. 
  • Cloud services. 
  • Other applications and technologies, including custom logs. 

The available log data input options are described below.

Data Sources and Input Options

  • Network devices, firewalls, routers, and switches: LM Collector for Syslog. Forward log data to LogicMonitor using standard TCP/UDP protocols. Install the LM Collector and configure it to forward Syslog messages. See Installing Collectors and Collecting and Forwarding Syslog Logs.
  • Linux servers: LM Collector for Syslog. Forward local logs from Unix-based systems with built-in support for syslog. Install the LM Collector and configure it to forward Syslog messages. See Installing Collectors and Collecting and Forwarding Syslog Logs.
  • VMware (vCenter, ESX): LM Collector for Syslog. Forward logs from ESX hosts using the built-in vmsyslogd service. Install the LM Collector and configure it to forward log messages for VMware. See Installing Collectors and Configuring LogSources for Syslog.
  • Windows servers and event logs: Windows Events LM Logs DataSource. Retrieves logs using Windows Management Instrumentation (WMI). This is the recommended method and is available in LM Exchange. For configuration, see Ingesting Windows Event Logs.
  • Cloud services, Amazon Web Services (AWS): LM AWS Integration. Send Amazon CloudWatch logs to LogicMonitor using a Lambda function configured to forward the log events. Data is collected through the API or a Collector. See LM Cloud Monitoring Overview and Setting up AWS Logs Ingestion.
  • Cloud services, Microsoft Azure: LM Microsoft Azure Integration. Forward logs to LogicMonitor using an Azure function that consumes logs from an Event Hub. Data is collected through the API or a Collector. See LM Cloud Monitoring Overview and Setting up Azure Logs Ingestion.
  • Cloud services, Google Cloud Platform (GCP): LM GCP Integration. Forward different types of application logs from various GCP resources. Data is collected through the API or a Collector. See LM Cloud Monitoring Overview and Setting up GCP Logs Ingestion.
  • Containers, Kubernetes: LM Kubernetes Integration. Install the LM Kubernetes Monitoring Integration (see Installing Collectors and About LM's Kubernetes Monitoring). The integration includes the following methods to send logs, depending on type:
      ◦ Kubernetes logs. Send logs using the Helm chart configuration provided with the integration. See Sending Kubernetes Logs and Events.
      ◦ Kubernetes cluster events and pod logs. Use the LM Collector to collect and forward logs from a monitored cluster or cluster group. See Sending Kubernetes Logs and Events.
  • Log files on disk:
      ◦ LM OTel Collector. Use the LM OpenTelemetry Collector to forward log data from applications to the LogicMonitor platform. See OpenTelemetry Collector Installation and Configuring LogSources for Log Files.
      ◦ Fluentd. Use the LM Logs Fluentd plugin to collect logs from multiple sources, structure the data in JSON format, and forward it to the LM ingestion API. Can be used for most data sources. For installation and configuration, see Setting up Fluentd Logs Ingestion.
      ◦ Logstash. Use the LM Logs Logstash plugin to collect and send Logstash events to the LM ingestion API. Can be used for most data sources. For installation and configuration, see Setting up Logstash Ingestion.
  • Custom logs: LM Logs Ingestion API. Send logs directly to your LM account via the LogIngest Public API. Use this option if a log integration isn't available, or if you have custom logs you want to analyze. See Sending Logs to the LM Logs Ingestion API.
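As a minimal sketch of the ingestion API option, the Python example below builds the JSON body for a log-ingest request. The account name, hostname property value, and messages are placeholder assumptions, and authentication (an LMv1 or bearer token Authorization header) is omitted; see Sending Logs to the LM Logs Ingestion API for the exact requirements.

```python
import json

# Sketch only: ACCOUNT is a placeholder for your LogicMonitor account name.
ACCOUNT = "yourcompany"
INGEST_URL = f"https://{ACCOUNT}.logicmonitor.com/rest/log/ingest"

def build_log_events(messages, resource_props):
    """Build the request body: a list of log events, each mapped to a
    monitored resource through the _lm.resourceId property set."""
    return [{"message": msg, "_lm.resourceId": resource_props} for msg in messages]

events = build_log_events(
    ["disk usage at 91%", "disk usage at 95%"],
    {"system.hostname": "web-01"},  # must match a property of a monitored resource
)
body = json.dumps(events)
# POST `body` to INGEST_URL with Content-Type application/json and an
# LMv1 or bearer Authorization header (authentication omitted in this sketch).
print(body)
```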

Recommendation: Ensure you use the available filtering options to remove logs that contain sensitive information so that they are not sent to LogicMonitor.
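Filtering is normally configured in the integration itself (for example, Fluentd or Logstash filters, or LogSource filtering), but as an illustration of the idea, the Python sketch below scrubs two common patterns from a log message before it would be forwarded. The patterns are illustrative assumptions, not an exhaustive redaction scheme.

```python
import re

# Sketch only: illustrative redaction patterns, not a complete scheme.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{13,16}\b"), "[REDACTED-CARD]"),         # long digit runs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
]

def scrub(message: str) -> str:
    """Replace sensitive substrings before the message leaves the host."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        message = pattern.sub(replacement, message)
    return message

print(scrub("payment from jane@example.com card 4111111111111111"))
```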

LogSources

LogSources is a LogicModule that provides configuration templates for data collection from monitored logs, including out-of-the-box setup and configuration for popular log sources such as Windows Events and Kubernetes Events. See About LogSources.

Deviceless Logs

Logs can be viewed in LM Logs even if they come from an unmapped resource that is not monitored by LogicMonitor. Even without resource mapping, or when there are resource mapping issues, logs are still available for anomaly detection and for viewing and searching. For these logs, the Resource and Resource Group fields are empty in the Logs page listing.

You can also create log processing pipelines for unmapped resources. Because there is no LM-monitored resource or resource group for these logs, LogicMonitor automatically associates the pipeline with a special resource and resource group: the resource name is the same as the pipeline name, and the resource group for unmapped resources is called LogPipelineResources.

Log alerts are created based on the alert conditions configured for the pipeline. You can see alerts for unmapped resources on the Alerts page. By default, log alerts for unmapped resources can only be seen by users with administrator access. You can change this by navigating to Settings -> Users & Roles and assigning the desired permission to the LogPipelineResources group.

Note: Anomaly detection for logs is based on the service (resource.service.name) and namespace (resource.service.namespace) keys. If these keys are not present in the ingested log event, anomaly detection is not performed.
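As a sketch, an ingested log event carrying the keys the note describes might look like the following; the field values (hostname, service, namespace, message) are illustrative placeholders.

```python
import json

# Illustrative event: key names follow the note above; values are placeholders.
event = {
    "message": "connection pool exhausted",
    "_lm.resourceId": {"system.hostname": "web-01"},  # resource mapping
    "resource.service.name": "checkout",              # service key
    "resource.service.namespace": "production",       # namespace key
}
print(json.dumps(event, indent=2))
```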
