How LM Logs Makes Data Meaningful

Before I get into how excited I am to see LogicMonitor launching a logging product, here’s a little background. This blog is probably a blast from the past for many longtime LM employees and customers. I served at the company for over seven years, starting back when it was just a few of us trying to see whether a SaaS monitoring product would be accepted in the marketplace (while it seems crazy to say now, SaaS was a tough sell back in 2011). I wore many hats as an engineer and then in various leadership roles over the years, culminating in my tenure as Chief Product Officer. A couple of years back I ceded the reins of Product to the right folks to drive the product through its next stage of evolution, complexity, and growth (I’m looking at you, Tej) and left the company as a full-time employee. Since then I’ve acted as a technical advisor to LogicMonitor while also returning to my engineering roots and helping companies out in various technical capacities. Unsurprisingly, a number of them are now users of LogicMonitor.

While at LogicMonitor, my first few years were dominated by leading the Technical Operations team, where we (unsurprisingly) were the heaviest internal users of the product. There is no substitute for understanding a product’s strengths and weaknesses like being a daily heavy user of it. Since leaving, I am once again a direct user of the product and get to benefit from the continued evolution of the platform, from major pushes into Kubernetes monitoring to the continued growth of the LM Exchange, Service Insights, and Dynamic Thresholds, to name a few. The platform keeps expanding to meet the evolving needs of IT and, more importantly, keeps getting more intelligent. I can attest to just how much hard and relentless work the team has put into making this all happen. The LogicMonitor Product and Engineering teams are ripping.

But going back to my earlier focus on Operations while at LogicMonitor, there was always one thing lacking, one irritation in daily use: LogicMonitor could be counted on to detect a problem, notify, and point a user to where it was, but quite often, to completely understand why it was happening, one had to dive into the wonderful world of unstructured data, aka logs. This meant leaving the cozy confines of the LogicMonitor product and context-switching to a log-specific product, then spending time working through its interface and query syntax just to get back to the same application or service, in the same timeframe, before even beginning to search for meaningful log information. The context, time, service, alert, and so on were all information LogicMonitor had already provided; why did an engineer have to jump somewhere else, leave all this behind, and start anew in order to find relevant data in the logs? Why could one not see the logs, with all the relevant context and alerts already provided, inside of LogicMonitor? Sure, it was unstructured data, but it was data, and one of LogicMonitor’s strengths has always been striving to be all-encompassing in taking in data from whatever application, cloud provider, database, network equipment, etc., is in one’s infrastructure stack.

I’m delighted to see that the beginning of the end of that irritation is at hand with the introduction of LM Logs to the LogicMonitor platform! True to what LogicMonitor has always strived for, it will be flexible and agnostic about where the data (logs, in this case) comes from, whether that be your favorite cloud provider, your Kubernetes clusters, an existing log-forwarding infrastructure, or even that tried-and-true old friend, Syslog. Even better, the initial focus is not simply dumping log data on the end user in the relevant context (which would have been fantastic in and of itself), but going further and applying intelligence to bring only pertinent or anomalous messages directly to your attention, in the right context. This goes back to the emphasis on “meaningful log data” above. I love that the team went straight to applying real intelligence to the incoming log data and making sense of it, rather than simply dumping huge amounts of text. This is akin in some ways, though in a dynamic fashion, to the intelligence built into our LogicModules, and it is why many of you reading this are already customers. LogicMonitor is now doing the same for logs, making the data meaningful right off the bat.
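To make the Syslog path concrete, here is a minimal sketch of an application forwarding its logs over UDP syslog using only the Python standard library. The collector host and port are hypothetical placeholders (the post does not specify LM Logs endpoints); in practice you would point this at whatever syslog receiver your log-forwarding setup uses.

```python
import logging
import logging.handlers

# Hypothetical receiver address -- substitute the syslog endpoint of your
# own log-forwarding infrastructure (assumption, not from the post).
COLLECTOR_HOST = "127.0.0.1"
COLLECTOR_PORT = 514  # standard syslog UDP port

def make_syslog_logger(host: str, port: int) -> logging.Logger:
    """Return a logger whose records are forwarded as UDP syslog datagrams."""
    handler = logging.handlers.SysLogHandler(address=(host, port))
    # Tag messages with an app name and level so the receiving side
    # has some structure to key on.
    handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger

logger = make_syslog_logger(COLLECTOR_HOST, COLLECTOR_PORT)
# UDP syslog is fire-and-forget: this sends a datagram whether or not
# a receiver is listening.
logger.warning("disk usage above 90 percent on /var")
```

Because UDP is connectionless, the application keeps running even if the receiver is briefly unavailable, which is one reason syslog remains such a durable lowest-common-denominator transport for log shipping.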

Today’s complexities mean that providing (or rather, overloading) a user with data is not enough. That is why there has been such a concerted focus within the product on applying intelligence to data, and LM Logs is starting with this in mind from day one. I have already had the pleasure of using it and very much look forward to its continued evolution.

Jeff Behl

Jeff Behl is a Technical Advisor at LogicMonitor.
