IT people by nature are expected to be gurus, able to build things from scratch. This expectation certainly applies to data center monitoring, where a common practice is to rely on open source monitoring tools such as Nagios. But when you consider the value of your time, these free tools can quickly wind up costing far more than commercial tools. For instance, in a survey we ran, some system admins reported spending over 100 hours getting their open source monitoring solution to do what they wanted. Keeping the system current with frequent changes in their data center required ongoing work, and even then they had, for the most part, only coarse-level monitoring (for example, monitoring just the CPU load of a load balancer, instead of the state of all the hundreds of VIPs it hosts).
When the only alternatives were costly enterprise-class monitoring solutions, sweating it out with open source was understandable. But now that affordable tools exist that automate configuration and give you everything you need in 30 minutes, insisting on building your own doesn't seem wise (especially in this era of understaffed data centers). At the root of this DIY mentality is pride. With so many open source options available, techies may feel some sense of shame or embarrassment going to an IT director and asking for tools that cost money.
I'd suggest a better source of pride is spending time on tasks that add value to the enterprise: writing Puppet scripts that automate machine and software deployments and greatly reduce the time to spin up machines; investigating cloud usage options; correlating resource expenses with revenue per business unit. There are many things that should be done in any enterprise but aren't, for lack of time. A good systems administrator's time is too valuable to spend combing through a MIB to figure out which items are worth monitoring.
And no matter how good a systems administrator you are, monitoring is not going to be your top priority (nor should it be). You'll get monitoring working "good enough," but there will be plenty of cases where it fails to alert when a comprehensive monitoring system would have. Then, after every outage, you'll have to go back and extend the monitoring, adding the metrics that could have predicted that specific failure.
So consider the cost of your time; the more in-depth monitoring you get immediately with LogicMonitor (a typical Nagios implementation may monitor 10 metrics on a Linux host, while a typical LogicMonitor deployment monitors over 100); and the opportunity cost of the value-adding work you could be doing if you weren't configuring monitoring. Why not use an automated monitoring tool such as LogicMonitor that makes you a better system administrator and doesn't require a Fortune 500 budget to implement?
If you'd rather skip the tedious work but want the peace of mind of knowing that your infrastructure is properly monitored and that you will be alerted to any issues early, it's perfectly okay to go the automation route. You'll feel the same satisfaction in preventing an outage whether you wrote the code or not. And your CFO may even thank you for spending the money.
Steve Francis is an employee at LogicMonitor.