No query, no problem: How LM Logs is built for everyone

When you’re staring down 180GB of logs a day across dozens of servers, trying to figure out what went wrong and where, the last thing you want to do is spend hours decoding a query language you barely know. Getting answers, like when a config changed or why a server hit capacity last month, shouldn’t feel like a guessing game.
A logging tool isn’t helpful if it assumes you’re a query expert. Too many tools don’t just collect logs; they bury the insights in syntax. LM Logs, by contrast, gives you the answers without queries or delays.
If you can’t query your log data, you can’t drill into it, analyze it, or pinpoint what’s wrong. It doesn’t stop there: without control of your log data, you can’t establish a baseline for log analysis, configure alerting rules, or investigate anomalies in your infrastructure.
In short, if you can’t query, you can’t troubleshoot, optimize, or monitor performance. Unfortunately, traditional logging tools require proprietary query knowledge to search and filter log data, delaying the identification and resolution of business-critical issues. The art of querying often depends on very specific institutional knowledge that isn’t easy to share across the teams that need to manage the tool effectively.
LM Logs includes querying capabilities for ALL experience levels – from simple to advanced and everything in between. For users looking to do simple log analysis, querying knowledge isn’t required. For users who want to show off their querying skills, you can dive right in.
Unified logs and metrics within LM Envision connect Ops teams to context-rich data and remove the guesswork from troubleshooting. LM Logs empowers Ops teams to start analysis directly from metric performance dashboards: identify the resource in question, hop over to Logs, and search and filter your data to begin the analysis.
Here’s how teams of all skill levels can use LM Logs:
Instead of digging through data or translating business problems into query code, LM Logs gives Ops teams direct, contextual access to the insights they need, when they need them.
LM Logs helps anyone troubleshoot like a pro, on day one.
Need help finding specific information or spotting patterns in your logs? A logging solution should help you get valuable information out of your logs, not add more work for your ITOps team. That’s why LogicMonitor offers autocomplete capabilities based on your log data. You can quickly identify problems and outliers in your environment and spend more time on the task at hand: fixing them.
Autocomplete offers up suggestions based on your IT environment. Just begin typing into the query bar, and the autocomplete menu will open and provide a list of options based on your log data and the information you’ve entered. Whether you select a field from the list or type its name directly, autocomplete will then suggest possible values for that field.
Start by typing an underscore into the query bar. The autocomplete function will display a list of reserved fields, like resource names, resource groups, anomaly type, alert severity, and more. The filtering query can even include values and fields combined with logical operators.
As you enter more keywords, autocomplete will continue making suggestions to help you drill down into your log data for further analysis. For example, LM Logs might offer up different alert severities, resource groups, and resource names to analyze.
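A filter assembled entirely from autocomplete suggestions might look something like this (the resource name and keyword are hypothetical; the _resource.name field and the “and” operator follow the patterns covered in this post):
_resource.name=winserver01 and Error
Every piece of that filter can be picked from the autocomplete menu rather than typed from memory.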
LM Logs anomaly detection is an entirely different approach to digging through massive amounts of log data. Out-of-the-box log-based anomaly detection surfaces issues requiring attention and eliminates the need for excessive searches and parsing to get the information you need.
Simply click the ‘view anomalies’ button, and discover unseen behavior within log data right before your eyes – with relevant IT context – so you can find the root cause faster. LM Logs uses proprietary algorithms to analyze log and event data automatically at ingestion. Log events are deemed anomalous if they have not previously shown up on a particular device, system, Kubernetes pod, or cloud instance. This ensures total log visibility and enables teams to proactively manage system health. Alerts can also be configured based on certain event patterns to show how many events happened, when they happened, and why. The log event details are clearly displayed for your analysis.
LM Logs correlates key log events alongside the metric-based alerts and corresponding infrastructure resources for immediate context for active troubleshooting. Follow the log anomalies to investigate what requires more attention, all without a single query.
You can also search using keywords. A simple free-text search scans only the raw log messages for matches: the Keywords function searches and displays only logs in which the keyword is found in the message field. This is helpful when you want to search all log messages for a particular string or value.
Can’t write a single query? No worries. You can still find the root cause in under five minutes with LM Logs.
A great way to start is to search for the keyword “Error.” This search would only display log data where the keyword “Error” exists in the message.
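In the query bar, that search is simply the keyword itself:
Error
From there you can narrow the results further, for example by combining the keyword with an exact-match resource filter (the resource name here is illustrative):
Error and _resource.name=winserver01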
All of your hard work doesn’t go to waste when you identify interesting and relevant log results. At any time, you can view recent searches and even save searches within LM Logs. View and manage recent searches by clicking the clock icon to the left of the query bar; this menu displays the last 10 searches in your history, and you can clear the whole list or remove individual searches from it.
Another great feature is the ability to save a log search for future analysis. Save searches by clicking the star icon to the right of the query bar.
Picking up where you left off can help you take control of your log data, and you can build this data into larger LM Envision workflows like creating new log alerting.
Logs for everyone also includes more precise ways of narrowing down your log volume for searching. Autocomplete, anomaly detection, and keyword searching are the building blocks to continue your search. Simple “and/or/not” operators create the beginning of queries; pattern matching helps you search for specific field values; and advanced operators give you full control over your searches.
There are two types of patterns you can search: exact matches and fuzzy matches.
Exact matches only return events if the pattern exactly matches the field value. For example, searching “field=exact” would display only logs where the field has the value “exact.”
This would be a great search pattern to use when you know the exact value you want to match on, like _resource.name=winserver01. This search would only return logs associated with resource “winserver01” in the logs table.
Perhaps you know what you want to search for but need help filtering. The fuzzy matching feature returns events using glob expressions that match similar field values, and this type of matching is not case sensitive. Use this type of search when you don’t want an exact match; that’s why it’s called a fuzzy match. For example, can you spot the difference? Searching “_resource.name~winserver” with fuzzy matching would return logs for any resource whose name contains “winserver,” including “winserver01,” “winserver02,” and so on. Hint: the difference is ~winserver instead of =winserver.
Another way to search is to add basic logical operators like “and,” “or,” and “not” to filter down your log data.
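As a sketch, with illustrative resource names and keywords (the fields and operators follow the patterns covered above):
Error and _resource.name=winserver01 returns only logs containing “Error” from that one resource.
Error or Warning returns logs containing either keyword.
Error and not _resource.name~test returns error logs while excluding any resource whose name contains “test.”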
LM Logs advanced searches include aggregation, processing, and formatting operators that refine how you search your log data and modify the results.
After filtering, add advanced search operations to your query. These operations run sequentially on a set of events, with the result of one operation fed into the next. Some operations work on each event at search time (such as parsing fields), while others require a partial set of events (such as limit) or the full set (such as sort) to yield a result. You can view advanced search results in the Aggregate tab.
Some examples of Aggregates include avg, count, max, min, and sum.
There are endless possibilities for the type of queries you can create with these advanced operators.
Example 1: You can calculate the sum of the size field for each unique resource name and sort the results with the following query:
* | sum(_size) as log_volume by _resource.name | sort by log_volume desc
Example 2: Or write a search that counts the number of events for each unique resource, limited to 15 results:
* | count by _resource.name | sort by _count desc | limit 15
Example 3: Here is a search that displays resource group names with the count of logs and the sum of log size, sorted by log count and limited to 25 results:
* | count(_size), sum(_size) by _resource.group.name | sort by _count desc | limit 25
Example 4: Display the minimum, maximum, and average response times for HTTP requests, broken down by HTTP response code.
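Using the min, max, and avg aggregates listed above, one way to express this is shown below; the response_time and http_code field names are hypothetical and would come from fields parsed out of your own logs:
* | min(response_time), max(response_time), avg(response_time) by http_code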
Example 5: Parse out values from the log message using regex and aggregate the results.
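Treat the following as a sketch of the idea rather than exact syntax: it assumes a parse operator with a regex capture group that extracts a hypothetical response_time value from the raw message, which is then averaged per resource. Check the LM Logs documentation for the precise parse syntax.
* | parse /time=(\d+)/ as response_time | avg(response_time) by _resource.name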
You can’t reduce MTTR if you can’t get started analyzing your log data! No matter what your experience level is with searching, writing queries, or parsing log data, it’s time to ditch the frustration that traditional log analysis tools have brought and get on with taking control of your log data.
If your organization is considering or has just started to use a logging tool, you could save hours of time and frustration and reduce MTTR by using LM Logs to surface issues that require attention.
Don’t worry about not knowing what to do next. LM Envision’s unified logs and metrics ensure you always have the means to keep troubleshooting and analyzing. If your Ops team is considering or just getting started with log data, there’s no better time than now.
© LogicMonitor 2025 | All rights reserved. | All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.