Observability is a buzzword right now, and rightly so: many companies are deeply concerned about what’s happening inside their systems. Every company has become a software company, and those that haven’t are being disrupted by one. IT leaders carry more weight on their shoulders than ever before, because digitization is rapidly changing the way people consume nearly everything.
Getting My Start in IT
I can remember when I started my career at a major auto manufacturer’s IT organization back in 2004. In those days, we certainly didn’t talk about observability in the way that tech organizations do now. We wanted to see the health of devices if possible, but I don’t recall that being an expectation. There I was, fresh out of college with a world full of options available to me. I had two offers in hand with the opportunity to make more money than my parents ever made in a year, so I decided to leave Atlanta and head to Detroit to join the company’s college graduate program. It was a rotational program that afforded me the opportunity to see four different parts of the auto business before deciding on where to settle my career.
Tech in Marketing, Manufacturing, and Product Development at One of the Big Three
In my first rotation, I had a chance to work in marketing systems, specifically dealer systems. Here I worked on a vehicle location web app, which enabled dealers to do a number of things with their inventory, including transferring vehicles among their many locations as well as to other dealers. In my second rotation, I stepped away from IT into marketing research before taking on my third, and favorite, rotation at one of the company’s transmission plants. It was a very uncommon thing to do in this program, but I wanted to be where the action was. At a vehicle manufacturer, the proverbial rubber meets the road at these plants. I helped test industrial mobile devices, manage a tier II data center, bring in a multi-million-dollar transmission manufacturing line, and analyze and consult on Siemens Ladder Logic implementations for robots, CNC machines, transfer lines, and more with various suppliers across the Midwest. My fourth and final rotation was where I eventually spent the rest of my time at the company. It was, incidentally, the last time I worked in IT proper. It was with a group called the PTG, or Process and Technology Group. Frankly, I’m not sure it still exists, but it sat between a business unit and IT, helping guide mid-to-large technology decisions for the company. I managed multi-million-dollar implementations, applications, and systems for the vehicle testing business unit.
Billions of Dollars in Technology Assets
Within a span of five years, I touched hundreds of applications that were critical to the auto manufacturer; if you include devices (industrial and traditional), it was more like thousands of assets. I was essentially responsible for billions of dollars in assets, and not once was I able to step back and observe a group of these technology assets together, within a single view. Thinking about it now is astounding. I was touching consumer and internal customer-facing applications, industrial systems, datacenter equipment, and vehicle testing devices — but I didn’t have visibility into how each system affected the others.
I once worked with a vendor to update our Wherenet system, which effectively shut down our transmission plant for a brief period of time. If the transmission plant had remained offline for too long, it would have had a ripple effect and ultimately shut down production at another plant in Atlanta that assembled cars. Did I have a way of being proactively alerted about potential downtime for this system, to avoid a large and very negative business outcome? No. Granted, some capabilities existed to monitor groups of devices associated with a manufacturing line, but these were tightly guarded and only locally available at the line. Our Wherenet supplier could provide minimal health metrics with their internal tools, but beyond that I had to physically inspect devices and work from the command line to check my network. Basic monitoring technology was still relatively new and hadn’t yet made its way to our plant.
That was 2004 — nearly twenty years ago! What’s maddening to some executives, or at least should be, is that many companies still fly virtually blind today. Some have gained limited visibility into pieces of their infrastructure and applications through point solutions, but those tools see only the products their vendors make, and they tend to be poorly supported and deeply disjointed. Or they are expensive and good at only one area of the tech stack.
The Quest for Visibility That Led Me to LogicMonitor
In 2019, I joined LogicMonitor because the product finally provided true visibility into the hardest part of the tech stack: the infrastructure. No other cloud-based platform on the market came close or was as capable of monitoring infrastructure in such an agentless and extensible fashion. LogicMonitor democratized data, making it more widely and securely available to more of the people in an organization who need it to do their jobs. Learning about LogicMonitor’s platform was like a time machine that instantly brought me back to my time in the auto manufacturing world. Imagine if, from my transmission plant cubicle back in 2004, I could have observed my datacenter devices, HR payroll systems, network, storage, and the up/down status of the outdated server that managed our multi-ton press, Wherenet, the PLCs, and the sensors on our manufacturing line, all in a single pane of glass. I could have stopped angry shift managers from storming into the IT office and yelling at my manager. I could have resolved problems before others within the company even realized there had been an issue. Back then, it would have been like magic. Today, it’s table stakes for high-growth, world-class businesses.
My story is specific to my own “small” world, on those particular rotations across this manufacturer. Yet it is not unique. At every rotation, I rolled up to a Manager or Director, who worked under other Directors or VPs, and who ultimately reported up to a C-level executive. With each additional level came an expansion of responsibility and therefore the need for wider observability. Back in 2004, I distinctly recall walking to a building for a one-on-one with a senior IT executive after a major power outage. He confided in me that his way of monitoring the power supply at the largest datacenter on the campus, at that time, was by looking at the steam coming off the building.
IT Leaders Have Big Blind Spots
You may say that was the early 2000s and no company is that blind today. I’d say think again. Think about the blind spots within your own tech ecosystem. I’ve worked with prospective customers of LogicMonitor whose employee productivity craters when Citrix goes down, because it takes them hours, sometimes a full day, to understand why. They eventually discover that their storage was at capacity and that they weren’t observing that part of their tech stack. By then, the time has been lost and the damage is done.
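To make the blind spot concrete: the outage above comes down to a capacity threshold nobody was watching. As a toy illustration only (not LogicMonitor’s implementation, and with a hypothetical `check_capacity` helper and alert threshold), a proactive storage check can be as simple as:

```python
import shutil

def check_capacity(path: str, threshold: float = 0.90) -> bool:
    """Return True if the filesystem holding `path` is above `threshold` full."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    return used_fraction >= threshold

# In a real deployment this would run on a schedule and page an on-call
# engineer long before capacity is reached, rather than just printing.
if check_capacity("/"):
    print("ALERT: storage nearly at capacity")
```

The point isn’t the script; it’s that the signal was always there to collect. An observability platform’s job is to gather checks like this across every tier, storage included, so the answer to “why is Citrix down?” takes minutes instead of a day.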
Providing enterprises with unified observability isn’t just another buzzword for us at LogicMonitor. It’s our calling. Our company literally exists to give IT leaders full visibility into business-critical data from across the organization. Full access to this data allows smarter decisions, and with that comes the ability to sleep better at night. LogicMonitor helps our customers deeply, broadly, and accurately observe everything from their applications to the edge of their networks. I couldn’t help my company achieve unified observability back in the 2000s. But I sure as heck can help yours achieve it today.