Sending Logstash Logs

Last updated on 13 June, 2023

Logstash is a popular open-source data collector that provides a unifying layer between different types of log inputs and outputs.

Recommendation: LogSource is the recommended method to enable LM Logs. To use LogSource, the LM Collector must be version EA 31.200 or later. For more information, see LogSource Overview, or contact your Customer Success Manager. The following procedure describes how to enable LM Logs if you are not using LogSource.

If you are already using Logstash to collect application and system logs, you can forward the log data to LogicMonitor using the LM Logs Logstash plugin.

The output plugin sends Logstash events to the LogicMonitor log ingestion API. You can also install the Logstash Monitoring LogicModules for added visibility into your Logstash metrics alongside the logs.

Requirements

  • A LogicMonitor account name.
  • A LogicMonitor API token to authenticate all requests to the log ingestion API.

Installing the Plugin

Install the LM Logs Logstash plugin using RubyGems. Run the following command on your Logstash instance:

logstash-plugin install logstash-output-lmlogs
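
To verify the installation, you can list the installed plugins. This sketch assumes a Unix-like shell with logstash-plugin on the PATH:

logstash-plugin list | grep lmlogs

The command should print logstash-output-lmlogs if the plugin is installed.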

Configuring the Plugin

The following is an example of the minimum configuration needed for the Logstash plugin. You can add more settings to the configuration file. See the parameter tables later in this article.

output {
  lmlogs {
    access_id => "access_id"
    access_key => "access_key"
    portal_name => "account-name"
    property_key => "hostname"
    lm_property => "system.sysname"
  }
}
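
For context, the lmlogs output is placed in the pipeline configuration alongside your existing inputs and filters. The following is a minimal sketch that assumes a file input reading /var/log/syslog and placeholder credentials; replace both with the values for your environment.

input {
  file {
    path => "/var/log/syslog"         # assumed example input; keep your existing inputs
  }
}

output {
  lmlogs {
    access_id => "access_id"          # API token access ID
    access_key => "access_key"        # API token access key
    portal_name => "account-name"     # LogicMonitor account name
    property_key => "hostname"        # event field whose value identifies the resource
    lm_property => "system.sysname"   # LogicMonitor property matched against that value
  }
}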

Including and Excluding Metadata

By default, all metadata is included in the logs sent to LM Logs. Use the include_metadata parameter to control whether metadata fields are included. Add this parameter to the logstash.config file. If set to false, metadata fields are not sent to LM Logs ingestion. The default value is true.
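
For example, to stop sending metadata fields, set include_metadata to false in the output block (a sketch with placeholder credentials):

output {
  lmlogs {
    access_id => "access_id"
    access_key => "access_key"
    portal_name => "account-name"
    include_metadata => false   # metadata fields are not sent to LM Logs
  }
}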

You can also exclude specific metadata by adding the following to the logstash.config file:

filter {
  mutate {
    remove_field => [ "[event][sequence]" ]
  }
}
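
The remove_field setting accepts a list, so several fields can be dropped in one mutate filter; the second field name below is only an illustrative example:

filter {
  mutate {
    remove_field => [ "[event][sequence]", "[event][original]" ]
  }
}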

For more information, see the Logstash documentation.

Note: From version 1.1.0 of the Logstash plugin, all metadata is included by default. The include_metadata parameter was introduced in version 1.2.0.

Required Parameters

Name | Description | Default
access_id | Username to use for HTTP authentication. | N/A
access_key | Password to use for HTTP authentication. | N/A
portal_name | The LogicMonitor portal account name. | N/A

Optional Parameters

Name | Description | Default
batch_size | The number of events to send to LM Logs at one time. Increasing the batch size can increase throughput by reducing HTTP overhead. | 100
keep_timestamp | If false, LM Logs uses the ingestion timestamp as the event timestamp. | true
lm_property | The key used by LogicMonitor to match a resource based on property. | system.hostname
message_key | The key in the Logstash event that is used as the log message. | message
property_key | The key in the Logstash event that contains the hostname value, which is mapped to lm_property. | hostname
timestamp_is_key | If true, LM Logs uses a specified key as the event timestamp value. | false
timestamp_key | If timestamp_is_key=true, LM Logs uses this key in the event as the timestamp. Valid timestamp formats are ISO8601 strings or epoch in seconds, milliseconds, or nanoseconds. | logtimestamp
include_metadata | If false, metadata is not included in the logs sent to LM Logs. | true
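
The optional parameters are added to the same lmlogs block as the required ones. The following sketch combines a few of them; the batch size and timestamp field shown are illustrative values, not recommendations:

output {
  lmlogs {
    access_id => "access_id"
    access_key => "access_key"
    portal_name => "account-name"
    batch_size => 500                 # send larger batches to reduce HTTP overhead
    timestamp_is_key => true          # read the event timestamp from a field
    timestamp_key => "logtimestamp"   # ISO8601 string or epoch seconds/milliseconds/nanoseconds
  }
}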

Note: For more information about the syntax of the message_key and property_key values, see the Logstash documentation.

Building

Run the following command with Docker Compose to build the Logstash plugin:

docker-compose run jruby gem build logstash-output-lmlogs.gemspec
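
The build produces a .gem file in the working directory, which you can then install into Logstash. In the sketch below, the version in the file name is a placeholder for whatever the build produced:

logstash-plugin install ./logstash-output-lmlogs-<version>.gem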

Troubleshooting

If you are not seeing logs in LM Logs:

  1. Ensure that the resource from which the logs are expected is being monitored.
  2. If the resource exists, check that the property value used for the lm_property mapping is unique. Log ingestion does not work if the lm_property value matches more than one resource.
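
If the mapping looks correct but logs are still missing, a standard Logstash debugging step (not specific to the plugin) is to temporarily add a stdout output with the rubydebug codec next to the lmlogs output, so you can confirm that events actually contain the field named by property_key:

output {
  stdout {
    codec => rubydebug   # prints complete events to the Logstash log for inspection
  }
}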