How LogicMonitor Works: SaaS Platform Overview

LogicMonitor is a Software-as-a-Service (SaaS) platform for monitoring the performance of technology infrastructure. This blog breaks down how LogicMonitor's SaaS platform works from two perspectives: its architectural building blocks, and the data lifecycle within the platform.

Architectural Building Blocks

The key elements of the LogicMonitor platform architecture include:

Your Datacenter

Whether you have technology deployed on-site, in co-lo facilities, or in the public cloud, you’ll be able to remotely monitor it all using LogicMonitor. Depending on the level of monitoring, credentials may be required to ensure secure collection.

The Collector: How LogicMonitor Gathers Information

The LogicMonitor Collector is a lightweight (100 MB) Java program that performs remote and local device polling within your infrastructure. (Note: LogicMonitor does not require agents to be deployed on each monitored device.) The Collector can be installed on any Linux or Windows server. Once it is installed, simply point the LogicMonitor application at a device you wish to monitor, via either its DNS name or its IP address. LogicMonitor identifies the device and immediately begins monitoring it using a standard monitoring protocol such as SNMP, WMI, or the vendor API (in the case of VMware and NetApp); we scan with over 30 distinct protocols in a matter of milliseconds. The Collector collects, encrypts, and sends your data over a secure outbound connection to our datacenters, aggregating the performance of your entire infrastructure in a single application.
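Conceptually, a Collector's job can be sketched as poll, batch, and prepare for outbound transfer. The sketch below is a simplified illustration in Python: the `poll_snmp` function is a mock stand-in for a real protocol query, and the payload format is hypothetical, not LogicMonitor's actual implementation (encryption in the real Collector happens on the TLS connection, not in application code):

```python
import gzip
import json
import time

def poll_snmp(host: str, oid: str) -> float:
    """Stand-in for a real SNMP GET; an actual Collector would query the
    device with a protocol library. Here we return a mock value."""
    return 42.0  # e.g. interface octet counter delta

def build_payload(host: str, metrics: dict) -> bytes:
    """Batch the polled values, tag them with a timestamp, and compress
    them for the secure outbound connection to the platform."""
    record = {"host": host, "ts": int(time.time()), "metrics": metrics}
    return gzip.compress(json.dumps(record).encode("utf-8"))

# Poll one device and prepare the outbound payload.
metrics = {"ifInOctets": poll_snmp("switch01.example.com", "1.3.6.1.2.1.2.2.1.10.1")}
payload = build_payload("switch01.example.com", metrics)
```

Because the connection is outbound-only, the Collector can sit behind your firewall without any inbound ports being opened.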

Logging Into the LogicMonitor Application

During your trial or proof of concept, you'll be given credentials to your own LogicMonitor application. You will have administrator rights and can use our role-based access control to provide logins to your team(s), wherever they may be. Your LogicMonitor portal can be accessed from any web browser.

Data Lifecycle in LogicMonitor Platform

The data lifecycle starts when data enters the LogicMonitor platform: the Collector gathers the values of specific datapoints of an instance. The volume of data varies with the collection interval you configure.
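To make the interval-volume relationship concrete, here is a quick back-of-the-envelope calculation (the device and datapoint counts are made-up illustrative numbers, not LogicMonitor figures):

```python
def samples_per_day(interval_seconds: int, devices: int, datapoints_per_device: int) -> int:
    """Number of metric samples generated per day for a given collection interval."""
    polls_per_day = 86_400 // interval_seconds  # seconds in a day / interval
    return polls_per_day * devices * datapoints_per_device

# 1,000 devices x 200 datapoints each, polled every 60 seconds:
per_minute = samples_per_day(60, 1000, 200)   # 288,000,000 samples/day
# Halving the interval to 30 seconds doubles the volume:
per_30s = samples_per_day(30, 1000, 200)      # 576,000,000 samples/day
```

Shorter intervals give finer-grained graphs and faster alerting at the cost of proportionally more data.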

The LogicMonitor platform handles six types of data.

  1. Metrics – Numeric time-series data; the Collector gathers these values at the configured collection interval and sends them to the LogicMonitor platform. 
  2. Alerts – If a datapoint value exceeds a configured threshold, an alert is raised. Alert data is non-numeric time-series data. 
  3. Configuration – Configurations stored for the LogicMonitor engine; most are read during bootstrapping, and some are read on the fly. 
  4. Logs – Logs generated by devices and applications. 
  5. Traces/Spans – Traces created by user transactions as they pass through various applications and components. 
  6. Topology – Relationships across various devices and applications. 
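The relationship between the first two data types can be illustrated with a small sketch: a numeric metric sample crosses a configured threshold and produces a non-numeric alert record. The class shapes and field names here are hypothetical, not LogicMonitor's actual schema:

```python
from dataclasses import dataclass
from typing import Optional
import time

@dataclass
class Metric:
    """A numeric time-series sample (illustrative shape only)."""
    instance: str
    datapoint: str
    timestamp: int
    value: float

@dataclass
class Alert:
    """A non-numeric time-series record raised on a threshold breach."""
    instance: str
    datapoint: str
    timestamp: int
    message: str

def evaluate(metric: Metric, threshold: float) -> Optional[Alert]:
    """Raise an alert if the datapoint value exceeds its configured threshold."""
    if metric.value > threshold:
        return Alert(metric.instance, metric.datapoint, metric.timestamp,
                     f"{metric.datapoint} = {metric.value} exceeds threshold {threshold}")
    return None

m = Metric("switch01/eth0", "utilizationPercent", int(time.time()), 97.5)
alert = evaluate(m, threshold=90.0)
```

In the real platform, thresholds support multiple severity levels rather than the single cutoff shown here.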

Today the LogicMonitor platform handles more than 600 billion metrics per day and 720,000 HTTP requests per second through more than 40,000 Collectors deployed worldwide. 

Even during the APM beta program, the LogicMonitor platform received 700 spans per second. The APM service has been generally available since March and continues to grow at a rapid pace. 

The picture below depicts the data flow in the LogicMonitor platform.

Once data is inside the platform, it is processed, transferred, and stored in one of LogicMonitor's data stores. When a user requests the data through the LogicMonitor Portal, API, or SDK, it is transferred to the processing engine, processed according to the user's needs, and presented to the user.
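The flow above can be sketched as a tiny pipeline. Each stage function below is a purely illustrative stand-in for the real platform services, not LogicMonitor code:

```python
def ingest(raw: list) -> list:
    """Data arrives from Collectors over the secure outbound connection."""
    return raw

def process(samples: list) -> dict:
    """The processing engine derives what the user asked for (here, a summary)."""
    return {"min": min(samples), "max": max(samples), "avg": sum(samples) / len(samples)}

def store(summary: dict, datastore: dict) -> None:
    """Persist the result in one of the data stores."""
    datastore["cpuUtilization"] = summary

def present(datastore: dict, key: str) -> dict:
    """Serve the processed data back through the Portal, API, or SDK."""
    return datastore[key]

datastore: dict = {}
store(process(ingest([10.0, 50.0, 30.0])), datastore)
view = present(datastore, "cpuUtilization")  # {'min': 10.0, 'max': 50.0, 'avg': 30.0}
```

In production each of these stages is a distributed system in its own right, connected by the transfer layer described below.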

All this data flows through multiple layers within LogicMonitor. 

  • Data Stores – The data stores layer stores data of various types. A standard RDBMS is commonly used to store metric data (mostly numeric time-series data) and historic/latest non-numeric configuration data. Elastic storage is used for search-optimized data for network and log monitoring. AWS S3 is used for backups and longer-term data. 
  • Data Transfer – The data transfer layer moves data from one point to another. It includes ZooKeeper and Consul for distributed coordination, Kafka queues, Kubernetes clusters, SQS queues, and a Redis cache for faster retrieval. 
  • Data Processing – The data processing layer works as the brain of LogicMonitor, where most of the processing happens. This layer is built from in-house microservices.
  • Data Collection – The data collection layer includes Collectors and related components that collect data from devices in the cloud and on-premises. It also collects data from websites for performance monitoring. 
  • Data Presentation – The data presentation layer lets users view the processed data through LogicMonitor's Portal, API, or SDK. LogicMonitor provides a rich set of APIs to integrate its features with your in-house workflows. 
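As an example of the presentation layer's API surface, the sketch below builds a request signature in the style of LogicMonitor's documented LMv1 token authentication. The credential values and resource path are placeholders, and the exact signing format should be confirmed against the current REST API documentation before use:

```python
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(access_id: str, access_key: str,
                     http_verb: str, resource_path: str, data: str = "") -> str:
    """Build an LMv1-style Authorization header: an HMAC-SHA256 signature
    over verb + timestamp + body + path, hex-encoded then base64-encoded."""
    epoch_ms = str(int(time.time() * 1000))
    message = http_verb + epoch_ms + data + resource_path
    digest = hmac.new(access_key.encode(), message.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"

# This header would accompany a request such as
# GET https://<account>.logicmonitor.com/santaba/rest/device/devices
header = lmv1_auth_header("myAccessId", "myAccessKey", "GET", "/device/devices")
```

Signing requests per call (rather than passing a long-lived session token) keeps API credentials off the wire.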

LogicMonitor’s SaaS-based observability and IT operations data collaboration platform is built to remotely monitor a wide range of technologies from networks to applications to the cloud. All data collected is stored, processed, and presented in a single pane of glass, giving enterprises visibility and predictability across the technologies they depend on. 

Akshay Sangaonkar

Director, Product Management

Akshay Sangaonkar is an employee at LogicMonitor.

