Almost every piece of an IT infrastructure generates logs in some form. Log events from these resources arrive at the LM Logs ingestion endpoint for processing. During ingestion, information in the log lines is parsed to make it available for searching and data analysis.
When setting up LM Logs, there are different ways of configuring resources and services to collect and send log data to LogicMonitor. Often an LM Collector is used, but you can also use the Logs REST API to send log events. The following shows examples of different log sources and methods for collecting and sending log data to LogicMonitor.
Sending Data to LM Logs
Resources must be configured to forward log data to LM Logs using one of the available log data input methods. Because some methods are better suited to certain types of log data, choosing the right integration to send your data can improve how LogicMonitor processes the data and detects anomalies. For more information, see Log Anomaly Detection.
The various plugins and integrations offered by LogicMonitor for external systems merely help forward the log data. The integrations provide prewritten code that moves your logs from wherever they are generated to the LogicMonitor API.
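As a concrete illustration of sending log events to the API directly, the following is a minimal sketch of posting one event to the ingestion endpoint. It assumes LMv1 token authentication and the /log/ingest resource path described in Sending Logs to Ingestion API; the account name, credentials, and hostname mapping are placeholders, and the third-party requests library stands in for any HTTP client.

```python
# Minimal sketch: send one log event to the LM Logs ingestion API.
# Assumptions (verify against "Sending Logs to Ingestion API"): LMv1
# token auth, the /log/ingest resource path, and resource mapping on
# system.hostname. Account name, IDs, and keys are placeholders.
import base64
import hashlib
import hmac
import json
import time

import requests  # third-party: pip install requests

ACCOUNT = "yourcompany"          # placeholder LM account name
ACCESS_ID = "YOUR_ACCESS_ID"     # placeholder API token access ID
ACCESS_KEY = "YOUR_ACCESS_KEY"   # placeholder API token access key

RESOURCE_PATH = "/log/ingest"
URL = f"https://{ACCOUNT}.logicmonitor.com/rest{RESOURCE_PATH}"

# The ingestion API accepts a JSON array of log events. The
# _lm.resourceId mapping ties each event to a monitored resource.
events = [{
    "message": "Application started successfully",
    "_lm.resourceId": {"system.hostname": "prod-web-01"},
}]
body = json.dumps(events)

# LMv1 signature: base64(hex(HMAC-SHA256(key, verb + epoch + body + path)))
epoch = str(int(time.time() * 1000))
request_vars = "POST" + epoch + body + RESOURCE_PATH
digest = hmac.new(ACCESS_KEY.encode(), request_vars.encode(), hashlib.sha256).hexdigest()
signature = base64.b64encode(digest.encode()).decode()

response = requests.post(
    URL,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"LMv1 {ACCESS_ID}:{signature}:{epoch}",
    },
)
response.raise_for_status()  # a 2xx status indicates the events were accepted
```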
Log Input Options
Options for sending data to LogicMonitor depend on where the data is coming from:
- A resource, for example a host machine that generates log data.
- Log collectors or log servers.
- Cloud services.
- Other applications and technologies, including custom logs.
The available log data input options are described in the following sections.
A LogSource provides templates that simplify the configuration of log data collection and forwarding. LogSource is available for common log data sources like Syslog, Windows Events, and Kubernetes Events. A LogSource contains details about which logs to get, where to get them, and which fields should be parsed before the data is sent to LM Logs for ingestion. For more information, see LogSource Overview.
Recommendation: LogSource is the recommended method to enable LM Logs. However, to use LogSource, the LM Collector must be version EA 31.200 or later.
The following provides an overview of options for collecting and sending log data from different datasources to LM Logs.
|Datasource|Description|Using LogSource|Other configuration options|
|---|---|---|---|
|Network devices, firewalls, routers, and switches|Forward Syslog logs using standard UDP/TCP protocols.|LogSource for Syslog. See Syslog LogSource Configuration.|Configure the log collection. See Sending Syslog Logs.|
|Linux servers|Forward Syslog logs from Unix-based systems (a forwarding sketch follows this table).|LogSource for Syslog. See Syslog LogSource Configuration.|Configure the log collection. See Sending Syslog Logs.|
|VMware (vCenter, ESX)|Forward logs from VMware hosts using the built-in vmsyslogd service.|LogSource for Syslog. See Syslog LogSource Configuration.|N/A|
|Windows Servers and Event Logs|Forward logs from Windows-based systems using WMI.|LogSource for Windows Event Logging. See Windows Event Logging LogSource Configuration.|Install and configure the Windows Events LM Logs DataSource. See Sending Windows Event Logs.|
|Cloud Services – Amazon Web Services (AWS)|Forward Amazon CloudWatch logs using a Lambda function configured to send log events.|N/A|Configure the collection and forwarding of AWS logs. See Sending AWS Logs.|
|Cloud Services – Microsoft Azure|Forward logs using an Azure function that consumes logs from an Event Hub.|N/A|Configure the collection and forwarding of Azure logs. See Sending Azure Logs.|
|Cloud Services – Google Cloud Platform (GCP)|Forward different types of application logs from various GCP resources.|N/A|Configure the collection and forwarding of GCP logs. See Sending GCP Logs.|
|Containers – Kubernetes|Forward logs from Kubernetes clusters, cluster groups, containerized applications, and pods.|LogSource for Kubernetes Event Logging (see Kubernetes Event Logging LogSource Configuration) and LogSource for Kubernetes Pods (see Kubernetes Pods LogSource Configuration).|Install and configure the LM Kubernetes Monitoring Integration, which includes methods for collecting logs for events, clusters, and pods. See Sending Kubernetes Logs and Events.|
|Log files on disk – application traces|Forward traces from instrumented applications.|LogSource for Log Files. See Log Files LogSource Configuration.|N/A|
|Log files on disk – Logstash events|Forward Logstash events to the LM Logs ingestion API. Can be used for most datasources.|LogSource for Log Files. See Log Files LogSource Configuration.|Install and configure the LM Logs Logstash plugin. See Sending Logstash Logs.|
|Log files on disk – any files|Forward logs from multiple sources, structure the data in JSON format, and forward it to the LM Logs ingestion API. Can be used for most datasources.|LogSource for Log Files. See Log Files LogSource Configuration.|Install and configure the LM Logs Fluentd plugin. See Sending Fluentd Logs.|
|Custom logs|Forward custom logs directly to your LM account through the public API. Use this option if a log integration isn’t available, or you have custom logs you want to analyze.|LogSource for API Script (supports API filtering). See Script Logs LogSource Configuration.|Configure the log collection. See Sending Logs to Ingestion API.|
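To illustrate the Syslog rows above, the sketch below forwards application log records to a collector over UDP using Python's standard SysLogHandler. The collector address is a placeholder, and port 514/UDP is only a common default; syslog log collection must be enabled on the collector, for example through the Syslog LogSource.

```python
# Minimal sketch: forward application logs to an LM Collector as
# Syslog over UDP, using only the Python standard library. The
# collector address is a placeholder, and 514/UDP is a common
# default; confirm how your collector is configured to listen.
import logging
import logging.handlers

COLLECTOR_HOST = "collector.example.com"  # placeholder collector address
COLLECTOR_PORT = 514                      # common syslog UDP port

# SysLogHandler sends RFC 3164-style messages over UDP by default.
handler = logging.handlers.SysLogHandler(
    address=(COLLECTOR_HOST, COLLECTOR_PORT),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
)
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("User login succeeded")     # forwarded to the collector
logger.warning("Disk usage above 90%")  # severity travels in the syslog priority
```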
Recommendation: Ensure you use the available filtering options to remove logs that contain sensitive information so that they are not sent to LogicMonitor.
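Filtering is normally configured in the LogSource or processing pipeline itself, but if you run your own forwarder against the ingestion API, you can also scrub sensitive values before log lines leave your environment. A hypothetical sketch; the redaction patterns are illustrative only:

```python
# Hypothetical client-side scrubbing applied to log lines before they
# are sent to LogicMonitor. The regex patterns are examples only;
# tailor them to the sensitive data in your own logs.
import re

# Each pattern maps to a replacement token.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<REDACTED-SSN>"),           # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<REDACTED-EMAIL>"),   # email address
    (re.compile(r"(?i)(password|api[_-]?key)=\S+"), r"\1=<REDACTED>"),  # key=value credentials
]

def scrub(message: str) -> str:
    """Apply every redaction pattern to a log line before sending it."""
    for pattern, replacement in REDACTIONS:
        message = pattern.sub(replacement, message)
    return message

print(scrub("login failed for bob@example.com password=hunter2"))
# -> login failed for <REDACTED-EMAIL> password=<REDACTED>
```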
Logs can be viewed in LM Logs even if they come from an unmapped resource that is not monitored by LogicMonitor. Even without resource mapping, or when there are resource mapping issues, logs are still available for anomaly detection, viewing, and searching. For these logs, the Resource and Resource Group fields are empty in the Logs page listing.
You can also create log processing pipelines for unmapped resources. Since there is no LM-monitored resource or resource group for these, LogicMonitor automatically associates the pipeline with a special resource and resource group. The resource name is the same as the pipeline name, and the resource group for unmapped resources is called “LogPipelineResources”.
Log alerts are created based on the alert conditions configured for the pipeline. You can see alerts for unmapped resources on the Alerts page. By default, log alerts for unmapped resources can only be seen by users with administrator access. You can change this by navigating to Settings -> Users & Roles and assigning the desired permission to the LogPipelineResources group.
Note: Anomaly detection for logs is based on the service (resource.service.name) and namespace (resource.service.namespace) keys. If these keys are not present in the ingested log event, anomaly detection is not performed.
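For example, a hypothetical log event that carries both keys (the values and the hostname mapping are placeholders; where exactly these keys are set depends on your integration):

```python
# Hypothetical log event including the keys anomaly detection needs.
# Values are placeholders; the payload shape otherwise follows the
# ingestion API sketch earlier in this article.
event = {
    "message": "Checkout request failed with status 502",
    "_lm.resourceId": {"system.hostname": "prod-web-01"},
    "resource.service.name": "checkout-api",         # service key
    "resource.service.namespace": "ecommerce-prod",  # namespace key
}
```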