No query, no problem: How LM Logs is built for everyone

So your team has access to a logging tool? Great! What’s the first thing you want to find? The latest config change gone wrong? Data from 30 days ago when a specific server was at high capacity? Or maybe you’d like to access logs for a certain IP on a certain day for specific HTTP methods and servers, with counts and averages.

Hopefully there was training to teach you the specific query languages and expert skills required to answer these questions. After all, you wouldn’t want to blindly dig through 180 GB of logs with 2.5 million entries per day for your 66 Windows servers collecting Application, System, Security, and Audit logs. Are you filtering to the right error and warning levels? Probably not.

The truth is, having a logging tool isn’t the same as knowing how to use it to analyze log data. How can your team effectively manage your logging tool if you don’t know how to search for and find the right data to analyze?

Where do you start?

If you can’t query your log data, then you won’t be able to drill into your data, analyze it, or pinpoint what’s wrong. It doesn’t stop there. Without taking control of your log data, you won’t be able to create a baseline for log analysis, and you won’t be able to configure your alerting rules or investigate anomalies in your infrastructure.

If you can’t query, you can’t do any troubleshooting, optimization, or performance monitoring. Unfortunately, traditional logging tools require proprietary query knowledge to search and filter log data for effective troubleshooting, delaying identification and resolution of business-critical issues. The art of querying often involves very specific institutional knowledge that’s not easy to share across teams that need to effectively manage the tool.

LM Logs is for everyone 


LM Logs includes querying capabilities for ALL experience levels – from simple to advanced and everything in between. For users looking to do simple log analysis, no querying knowledge is required. For users who want to show off their querying skills, you can dive right in.

Unified logs and metrics within LM Envision connect Ops teams to context-rich data to remove the guesswork from troubleshooting. LM Logs empowers Ops teams to start analysis directly from metric performance dashboards. Identify the resource in question, hop over to Logs, and start searching and filtering your data.

From here, options include:

  • Filtering logs by infrastructure resource, time range, or metadata to help identify anomalies
  • Searching on keywords (via autocomplete) or using advanced logical search operators (like regex statements) to find relevant data
  • Using advanced search functionality to control what log data is returned and to modify the results format
Logs dashboard in LogicMonitor

Don’t know where to start?

Need help finding specific information or seeing patterns in your logs? A logs solution should help you get valuable information out of your logs, not add more work for your ITOps team. That’s why LogicMonitor offers autocomplete capabilities based on your log data. You can quickly identify problems and outliers in your environment while spending more time focusing on the task at hand: fixing them.

Autocomplete offers up suggestions based on your IT environment. Just begin typing into the query bar, and the autocomplete menu will open and provide a list of options based on your log data and the information that you’ve entered. And whether you select a field from the list or directly type in the name, autocomplete will suggest possible values for that same field.

Start by typing an underscore into the query bar. The autocomplete function will display a list of reserved fields, like resource names, resource groups, anomaly type, alert severity, and more. The filtering query can even include values and fields combined with logical operators. 

As you enter in more keywords, autocomplete will continue making suggestions to help you drill down into your log data for further analysis. In the example below, LM Logs offered up different types of alert severities, resource groups, and resource names to analyze. 

Anomaly type log input in LogicMonitor
Log anomaly operations in LogicMonitor.

Log-based anomaly detection cuts through the noise

LM Logs anomaly detection is an entirely different approach to sifting through your massive amount of log data. Out-of-the-box, log-based anomaly detection surfaces issues requiring attention and eliminates the need for excessive searching and parsing to get the information you need.

Simply click the ‘view anomalies’ button, and discover unseen behavior within log data right before your eyes – with relevant IT context – so you can find the root cause faster. LM Logs uses proprietary algorithms to analyze log and event data automatically at ingestion. Log events are deemed anomalous if they have not previously shown up on a particular device, system, Kubernetes pod, or cloud instance. This ensures total log visibility and enables teams to proactively manage system health. Alerts can also be configured based on certain event patterns to show how many events happened, when they happened, and why. The log event details are clearly displayed for your analysis. 

LM Logs correlates key log events alongside the metric-based alerts and corresponding infrastructure resources for immediate context for active troubleshooting. Follow the log anomalies to investigate what requires more attention, all without a single query. 

Alerts shown in LogicMonitor.

You can also search by using keywords. A simple free-text search matches against raw log messages: the keyword search only returns and displays logs in which the keyword is found in the message field. This is helpful when you want to search all log messages for a particular string or value.

A great way to start is to search for the keyword “Error.” This search would only display log data where the keyword “Error” exists in the message.

Easy to get started, easier to continue

All of your hard work doesn’t go to waste when you identify interesting and relevant log results. At any time, you can view recent searches and even save searches within LM Logs. View and manage recent searches by clicking the clock icon to the left of the query bar. This menu displays the last 10 searches in your history. You can clear the whole list or remove individual searches from it.

Another great feature is the ability to save a log search for future analysis. Save searches by clicking the star icon to the right of the query bar. 

Picking up where you left off can help you take control of your log data, and you can build this data into larger LM Envision workflows like creating new log alerting. 

Recent searches dropdown in LogicMonitor.

Take control of your log data with advanced searches

Logs for everyone also includes more precise ways of narrowing down your log volume for searching. Autocomplete, anomaly detection, and keyword searching are the building blocks to continue your search. Simple “and/or/not” operators form the beginning of queries; pattern matching helps you search for specific field values; and advanced operators provide full control over your searches.

There are two types of patterns you can search: exact matches or fuzzy matches.

Exact matches only return events when the pattern exactly matches the field value. For example, searching “field=exact” would only display logs where the field has the value “exact.”

This would be a great search pattern to use when you know the exact value you want to match on, like _resource.name=winserver01. This search would only return logs associated with resource “winserver01” in the logs table.

Perhaps you know what you want to search for but need help filtering. The fuzzy matching feature returns events using glob expressions that match similar field values. This type of matching is not case sensitive. Use this type of search when you don’t want an exact match; that’s why it’s called a fuzzy match. For example, can you spot the difference? Searching for “_resource.name~winserver” with fuzzy matching would return logs associated with resources whose names contain “winserver,” which includes “winserver01,” “winserver02,” etc. Hint: the difference is ~winserver instead of =winserver.
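
Side by side, the two pattern types differ by a single character (the server names here are just illustrations):

_resource.name=winserver01
_resource.name~winserver

The first query returns only logs for the resource named exactly “winserver01”; the second returns logs for every resource whose name contains “winserver.”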

Kubernetes search line in LogicMonitor

Logical operators for intuitive filters

Another way to search is to add basic logical operations like “and,” “or,” or “not” to filter down your log data. Here are three common examples you can try out within your own environment:

  • AND: Search for logs containing all of the fields and values specified. _resource.name=winserver01 AND type=winevents displays all log data associated with winserver01 that also contains winevents in the type field.
  • OR: Search for logs containing one or more of the fields and values specified. _resource.group.name=”Linux Servers” OR _resource.name~linux displays all logs for resources in the “Linux Servers” resource group or any resource that contains the phrase “linux” in the resource name.
  • NOT: Search for logs excluding any of the fields or values specified. NOT _resource.name=winserver01 displays all log data from any resource except winserver01.
AND NOT operator logic in LogicMonitor
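
These operators can also be combined in a single query. For example, this query (reusing field values from the examples above) returns all Windows event logs except those from winserver01:

type=winevents AND NOT _resource.name=winserver01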

Need more control over your queries? Try advanced searches 

LM Logs advanced searches include aggregation, processing, and formatting operators that refine how you search your log data and modify the results.

After filtering, add advanced search operations to your query. These operations run sequentially over a set of events, with the result from one operation fed into the next. Some operations work on a single event at search time (such as parsing fields), while others need a partial set of events (like limit) or the full set (like sort) to yield a result. You can view advanced search results in the Aggregate tab.

Examples of aggregate operators include avg, count, max, min, and sum.

There are endless possibilities for the type of queries you can create with these advanced operators.  

Example 1: You can calculate the sum of the size field for each unique resource name and sort the results with the following query: 

* | sum(_size) as log_volume by _resource.name | sort by log_volume desc

Example 2: Or write a search that counts the number of events for each unique resource but only limit to 15 results:  

* | count by _resource.name | sort by _count desc | limit 15

Example 3: Here is a search that displays resource group names with the count of logs and the sum of log size, sorted by log count and limited to 25 results:

* | count(_size), sum(_size) by _resource.group.name | sort by _count desc | limit 25

Example 4: Display the minimum, maximum, and average response times for HTTP requests, broken down by HTTP response code.
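
A query for this might look like the following sketch, using the min, max, and avg aggregates shown above. Note that the response_time and response_code field names here are hypothetical; the actual names will depend on how your log data is parsed:

* | min(response_time), max(response_time), avg(response_time) by response_code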

Example 5: Parse out values from the log message using regex and aggregate the results.
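
As a sketch only: assuming your logs contain request lines like “GET /index.html 200,” and assuming a regex-based parse operator is available (check the LM Logs documentation for the exact parse syntax in your version), such a query could extract the HTTP method from each message and count events per method:

* | parse /(GET|POST) \S+/ as method | count by method | sort by _count desc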

Logs showing GET method

You can’t reduce MTTR if you can’t get started analyzing your log data! No matter what your experience level is with searching, writing queries, or parsing log data, it’s time to ditch the frustration that traditional log analysis tools have brought and get on with taking control of your log data. 

If your organization is considering or has just started to use a logging tool, you could save hours of time and frustration and reduce MTTR by using LM Logs to surface issues that require attention.

Don’t worry about not knowing what to do next. LM Envision’s unified logs and metrics ensure you always have the means to continue your path to troubleshooting and analyzing. If your Ops team is considering or just getting started with log data, there’s no better time than now.