So we sometimes get asked about our tagline, Monitoring that Matters.
Doesn’t all monitoring matter? Well, yes it does. But to badly paraphrase George Orwell, all monitoring matters, but some monitorings are more mattering than others.
What makes monitoring matter? Monitoring matters when it reduces your outages, reduces incidents of unacceptable performance, or shortens the time to resolution of those issues and outages.
We like to think of monitoring as similar to Maslow’s hierarchy – there are base levels that must be met first, but the higher up you go, the better.
- The base level of monitoring is “is my host/site alive?” Everyone needs this (but not everyone has it). That gives you reactive monitoring.
- “Is my host going to keep working in the near term?” This means alerts about disks filling up, swap space being consumed, or memory running low. This helps prevent some outages.
- “How is my host performing?” Things like CPU load and rate of swapping. Alerts on these metrics warn of impending performance issues so they can be addressed.
- “How is my application performing?” A measurement of representative application performance: database transaction time, the time for a web site to process a request, or, for a storage array, the latency of write requests. This is really a more fundamental level of monitoring than level 3. An alert about CPU load, in level 3, may not indicate anything amiss; in the case of NetApp monitoring, high CPU could simply be a weekly RAID scrub, while request latency, which is what really matters, is not affected at all. We rank it above level 3 only because it is easier (and more common) to monitor general-purpose metrics (such as CPU) than application-specific performance metrics (such as database transaction times).
- “Why is my application performing as it is?” This is where monitoring really starts to matter. The more data you collect and trend, the better your monitoring system will be at helping you quickly identify and resolve issues. (Of course, the more data you graph, the better your monitoring system needs to be at presenting and organizing that data in a meaningful way. It’s easy to show 4 graphs for a system, but when you have 120 different graphs for one system and need to quickly scan them for correlations, the UI challenges get more interesting.) In a database, an alert about response time is clearly significant, but it doesn’t tell you the cause. If the monitoring system can quickly show you that your database’s sequential table scans jumped after the last software release, or that latency on a storage array volume is high because another volume sharing the same physical disks is experiencing an abnormal rate of IO operations, you will be able to resolve your issues much more quickly.
- “How do I fix my issue?” This is the peak level, where the monitoring system not only shows all the data, but also presents directions on how to resolve any issues. LogicMonitor can do this in some cases (for example, using data from the number of select operations, query cache hits, and cache prunes due to low memory to recommend enlarging, reducing, or disabling the MySQL query cache). This is a much harder problem to generalize, especially across systems that interact, but we’re always improving.
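To make the base level concrete, here is a minimal sketch of a level-1 “is it alive?” check in Python. It just tests whether a TCP connection can be opened; a real monitoring system would schedule this, retry, and alert on failure, and the host and port are illustrative:

```python
import socket

def host_alive(host, port=80, timeout=3.0):
    """Level-1 reactive check: can we open a TCP connection to the host at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures alike.
        return False
```

Even something this simple is better than finding out from your customers that the site is down.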
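Level 2 is mostly about trending toward a limit. As a hedged sketch (assuming timestamped samples of bytes used and a simple least-squares fit, which is one of several reasonable approaches, not necessarily how any particular product does it), predicting when a disk will fill might look like:

```python
def hours_until_full(samples, capacity):
    """Estimate hours until a disk fills, given (hour, bytes_used) samples.

    Fits a least-squares line to the samples and extrapolates from the
    latest sample. Returns None if usage is flat or shrinking.
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    num = sum((t - mean_t) * (u - mean_u) for t, u in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den  # growth rate in bytes per hour
    if slope <= 0:
        return None  # not growing; no fill time to predict
    latest_t, latest_u = samples[-1]
    return (capacity - latest_u) / slope
```

Alerting on “disk will be full in under 48 hours” rather than “disk is 90% full” is what turns this level from reactive into predictive.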
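The MySQL query cache example above can be sketched as a simple heuristic over the same three inputs the post mentions. The thresholds below are illustrative assumptions, not the actual rules LogicMonitor uses:

```python
def query_cache_advice(com_select, qcache_hits, lowmem_prunes):
    """Suggest an action for the MySQL query cache from three counters.

    Inputs correspond to the Com_select, Qcache_hits, and
    Qcache_lowmem_prunes status variables. The 20% hit-rate threshold
    is an illustrative assumption, not a vendor-recommended value.
    """
    total = com_select + qcache_hits
    if total == 0:
        return "no data"
    hit_rate = qcache_hits / total
    if hit_rate < 0.2:
        return "disable: cache is rarely hit, so its overhead outweighs its benefit"
    if lowmem_prunes > 0:
        return "enlarge: cache is useful, but entries are being pruned for lack of memory"
    return "leave as is"
```

The hard part, as the post notes, is not this per-subsystem logic but generalizing such recommendations across interacting systems.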
How far up the hierarchy is your monitoring? If you don’t have a wealth of data about all aspects of your system that you can trend in real time and examine historically, it’s almost certain that your outages and performance issues are both too frequent and too long.