If you missed our recent advanced web seminar on Best Practices for Monitoring an AWS Hosted Website, this post is for you. In the web seminar, we covered how to use LogicMonitor to monitor multiple layers of the infrastructure responsible for keeping a website up and running, and specifically how to set up this monitoring. Since this was an advanced topic, we included a fair amount of demonstration.
Specifically, we created an example website and a fairly simple AWS setup so that we could walk through everything we talked about in LogicMonitor. Keep in mind that almost everything we covered applies to monitoring websites in general and was not specific to AWS.
To sum up the key points, we recommend that you always monitor your website:
1. From the inside:
Monitoring your website from inside your network will enable you to pinpoint specific components that are causing problems when issues arise. Monitoring your website from the inside includes:
- Monitoring the infrastructure components that your website is running on
Because the LogicMonitor collector is within your network, it has access to the “behind the scenes” infrastructure components that your website runs on. Infrastructure is a general term, but here we mean things like the servers and load balancers you are using to host your website. In our demonstration, we set up monitoring for our Amazon cloud infrastructure, which included two Elastic Compute Cloud (EC2) servers and one Elastic Load Balancer (ELB). This also allowed us to show off LogicMonitor’s new AWS monitoring functionality (currently in beta release). LogicMonitor’s AWS monitoring uses the AWS software development kit (SDK) to retrieve CloudWatch metrics and also computes metrics that CloudWatch doesn’t provide, so you can monitor your AWS cloud resources alongside your existing monitored architecture in LogicMonitor.
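To illustrate the idea of computing metrics that CloudWatch doesn’t provide directly, here is a minimal sketch in Python. It derives an ELB 5XX error rate (as a percentage) from two raw CloudWatch-style datapoint lists; the datapoint shape loosely mimics what the AWS SDK returns, but the timestamps and values are made up for illustration.

```python
# Illustrative sketch: deriving a metric CloudWatch does not expose directly
# (a 5XX error *rate*) from raw datapoints. The datapoint dicts below are
# hypothetical samples shaped like AWS SDK CloudWatch statistics output.

def error_rate_percent(errors_5xx, requests):
    """Compute a per-period 5XX error rate (%) by matching error and
    request datapoints on their timestamps."""
    req_by_ts = {p["Timestamp"]: p["Sum"] for p in requests}
    rates = {}
    for p in errors_5xx:
        total = req_by_ts.get(p["Timestamp"])
        if total:
            rates[p["Timestamp"]] = 100.0 * p["Sum"] / total
    return rates

# Hypothetical sample datapoints, as a collector might retrieve via the SDK
requests = [
    {"Timestamp": "2013-11-20T10:00Z", "Sum": 200.0},
    {"Timestamp": "2013-11-20T10:01Z", "Sum": 400.0},
]
errors = [
    {"Timestamp": "2013-11-20T10:00Z", "Sum": 10.0},
    {"Timestamp": "2013-11-20T10:01Z", "Sum": 4.0},
]

print(error_rate_percent(errors, requests))
# {'2013-11-20T10:00Z': 5.0, '2013-11-20T10:01Z': 1.0}
```

A derived rate like this is often more useful to alert on than the raw error count, since it stays meaningful as traffic volume changes.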
- Using an HTTP datasource to monitor your website for specific content
LogicMonitor has a webpage collection method that allows the collector to query data from any system using HTTP or HTTPS. The default HTTP page datasource uses this collection method to check the load time and availability of a web page. But checking availability alone is not sufficient, because you also want to make sure that the correct content is loading. You can customize the HTTP page datasource to look for specific content on a webpage; in our demonstration, we added datapoints that checked for the presence of a specific string in our web pages.
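The content-check datapoint described above can be sketched as a simple function: given the body of a fetched page, it returns 1 when the expected string is present and 0 otherwise, so the result can be alerted on like any other datapoint. The page text and expected string here are made up; in a real datasource the body would come from the collector’s HTTP request.

```python
# Minimal sketch of a content-check datapoint. The HTML and the expected
# string are hypothetical examples.

def content_check(body: str, expected: str) -> int:
    """Return 1 when the expected content is present in the page body,
    0 when it is not."""
    return 1 if expected in body else 0

page = "<html><body><h1>Welcome to our demo site</h1></body></html>"
print(content_check(page, "Welcome to our demo site"))  # 1: content loaded
print(content_check(page, "checkout complete"))         # 0: string missing
```

Alerting when this value drops to 0 catches cases where the server answers with a 200 status but serves the wrong page.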
- Monitoring the application serving your web pages
It’s a good idea to monitor the application serving your site’s web pages, because if the pages do stop working, the monitored data will tell you whether the problem lies with the application itself or with something else, like the website configuration file. In the webinar demonstration we set up Apache monitoring. The LogicMonitor Apache datasource uses the webpage collection method to access the server status page over HTTP, so you need to make sure this page is accessible.
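As a rough sketch of what collecting from that status page involves: Apache’s mod_status can serve a machine-readable version of the page (typically at /server-status?auto) as simple "Key: value" lines. The parser below turns such text into metrics; the sample status text is made up, and a real collector would fetch it over HTTP rather than use a string literal.

```python
# Sketch of parsing Apache mod_status machine-readable output
# (/server-status?auto). The sample text below is a hypothetical example.

def parse_apache_status(text: str) -> dict:
    """Turn 'Key: value' lines into a dict, converting numeric values
    to floats where possible."""
    metrics = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        try:
            metrics[key.strip()] = float(value)
        except ValueError:
            metrics[key.strip()] = value.strip()
    return metrics

sample = """Total Accesses: 5230
Total kBytes: 17421
BusyWorkers: 4
IdleWorkers: 12
"""
status = parse_apache_status(sample)
print(status["BusyWorkers"], status["IdleWorkers"])  # 4.0 12.0
```

Worker counts like these are exactly the kind of data that tells you whether Apache itself is healthy when pages stop loading.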
2. From the outside:
You should monitor your website from outside your network to ensure that your website is accessible. Even if all of the equipment running your website is functioning perfectly, there are still multiple external factors that could potentially prevent outside users from being able to access it. A website that isn’t accessible to end users is not a very useful website. So to monitor the full picture, you need to have checks that come from outside your network. You can use LogicMonitor Services (aka SiteMonitor) to set up periodic checks from external locations to mimic the experience of a user accessing your site.
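One common pattern for external checks like these is to run them from several locations and treat the site as down only when a majority of locations fail, so a problem at a single checkpoint doesn’t page you. A minimal sketch, with hypothetical location names and results:

```python
# Illustrative sketch: combine external check results from multiple
# locations, alerting only when a majority fail. Locations and results
# are hypothetical.

def site_is_down(results: dict) -> bool:
    """results maps location name -> True if that location's check passed.
    Returns True only when more than half of the locations failed."""
    failures = sum(1 for ok in results.values() if not ok)
    return failures > len(results) / 2

checks = {"us-east": True, "eu-west": False, "ap-southeast": True}
print(site_is_down(checks))  # False: one failing location is not an outage
```

Seen the other way around, when most locations fail but your internal checks are green, the inconsistency itself points you at an external cause such as DNS or network routing.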
Keep in mind that you may have some overlap in the data you are monitoring for your website, but this redundancy can give you insight into what components aren’t working properly when something goes wrong, especially when you see inconsistencies between data for the same component monitored from different perspectives.
Below is a dashboard we put together to show data for the monitoring we set up in our demonstration.
Want to learn more? View the on demand Web Seminar: Best Practices for Monitoring an AWS Hosted Website.
At the recent Amazon re:Invent show, LogicMonitor demonstrated its new AWS integration and monitoring. (We also announced another set of free tools – JMX Command Line Tools – but more on that later.)
“Why”, you may be asking, “is this interesting? Doesn’t Amazon provide monitoring itself via CloudWatch? And in any case, aren’t there many ‘cloud centric’ companies that do this?”
Good questions.
At this point in time, most companies have a substantial investment in their own hardware, on which they run the bulk of their applications. This is not going away anytime soon – but companies are, for reasons of agility, embarking on exploratory forays into the public cloud, building some applications, or parts of an application, on IaaS or PaaS services.
There are several things unique about the LogicMonitor offering for such enterprises:
- Pulling AWS CloudWatch data into the same monitoring used for all the other components of the application (MongoDB, Tomcat, ESX servers, etc.) running in on-premises datacenters makes viewing the application as a whole, and investigating performance issues to resolution, much easier. Not to mention that LogicMonitor will trend a year’s worth of CloudWatch data (instead of the AWS default of two weeks).
- Viewing CloudWatch data on the same screen as native (Linux or Windows) performance data gives a much more cohesive view of an entity’s infrastructure. CloudWatch by itself does not provide comprehensive metrics about the applications (MySQL, Nginx, IIS, .NET) on a server – but seeing the metrics it does provide, including cost, in the context of the OS- and application-level work being done by the entity can provide new insight into the utility of those systems.
- LogicMonitor’s integrated use of the Amazon SDK means that it measures performance not just from Amazon’s point of view, but from the perspective of the servers accessing the Amazon services. For example, the enqueue time for an SQS queue when accessed by a server in your Minneapolis data center may be very different from the time when accessed from a server in the same AWS region as the queue. Measuring performance from where the service is actually being accessed is far more meaningful, and lets you isolate network issues from AWS issues.
- Use of the SDK also allows LogicMonitor to pull metrics out of AWS Services – query DynamoDB for business level information, such as the number of customer sign-ups per minute, for example – and trend these metrics over time or alert on deviations.
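To make the last point concrete, here is a small sketch of turning raw records, such as those a query against DynamoDB via the AWS SDK might return, into a business-level metric: sign-ups per minute. The records, field names, and timestamps are hypothetical.

```python
# Sketch of deriving a business-level metric (sign-ups per minute) from
# raw records, as might be queried from a table via the AWS SDK. All
# record contents here are hypothetical.

from collections import Counter

def signups_per_minute(records):
    """Bucket ISO-8601 sign-up timestamps by minute and count them.
    Slicing to 16 characters keeps 'YYYY-MM-DDTHH:MM'."""
    return Counter(r["signup_time"][:16] for r in records)

records = [
    {"user": "a", "signup_time": "2013-11-20T10:00:05Z"},
    {"user": "b", "signup_time": "2013-11-20T10:00:43Z"},
    {"user": "c", "signup_time": "2013-11-20T10:01:12Z"},
]

print(dict(signups_per_minute(records)))
# {'2013-11-20T10:00': 2, '2013-11-20T10:01': 1}
```

Once a metric like this is trended alongside infrastructure data, a sudden deviation can be alerted on just like CPU or latency.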
LogicMonitor’s mission is to make life easier for IT professionals. Having a tool that will automatically discover all the AWS services you are using, provide visibility into their performance alongside all the other systems and applications you use in one place, measure performance from where it matters to you, and allow easy extraction of business-level metrics from these services goes a long way toward easing the path to the cloud.
Want to see more? LogicMonitor will be allowing customers and prospects to sign up for the beta of the AWS integration starting in mid-December, and we will post a link here when it is available.

Whether you are moving to the cloud for infrastructure agility or to free up valuable space in your own datacenters for strategic initiatives, Amazon’s announcements of further security certifications, better developer tools, and more available services make now a good time to test out the cloud. And now you don’t have to sacrifice application visibility to do so.
We’ll be opening up the beta of the AWS monitoring in December. If you want to see it in action before then, just ask and we’ll be happy to give you a demo.