F5 Big IP Clusters

By default, Virtual Servers on a Big IP load balancer are not discovered or monitored; all other health characteristics, such as CPU, memory, network interface statistics, and synchronization status, are. The reason is that in an active/backup cluster the backup node does not know the health of the Virtual Servers.

The recommended way to monitor Big IP clusters, with monitoring and alerting on all Virtual Servers and Virtual Server Pools, is to use two separate groups. One group contains all nodes in the cluster; the other contains only the active node, identified by its "Floating IP" address (a type of "Self IP" in F5 nomenclature). A Floating IP always remains with the active node in the cluster. A customer might have DNS names similar to the following in a cluster setup:


In this example, the Floating IP should be added to LogicMonitor as if it were another Big IP device.

We will use two groups in this example:

- "BigIPs": all nodes in the cluster
- "BigIPs Active": only the Floating IP of the active node

After creating and populating the two groups, tag the "BigIPs Active" group with the special system category "F5Active". To do so:

- Right-click the "BigIPs Active" group and select "Edit".
- If the property system.categories exists, append ",F5Active" to its value.
- If it does not exist, click Add under Properties, enter system.categories in the name field and F5Active in the value field.
- Submit the form.
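The append-or-create logic in the steps above can be sketched as a small helper; this is a hypothetical illustration of the merge rule, not a LogicMonitor API call:

```python
def merge_f5active(properties):
    """Return a copy of a group's properties with "F5Active" appended to
    system.categories, creating the property if it does not exist."""
    categories = properties.get("system.categories", "")
    if "F5Active" in categories.split(","):
        return properties  # already tagged; nothing to do
    updated = dict(properties)
    updated["system.categories"] = (
        categories + ",F5Active" if categories else "F5Active"
    )
    return updated
```

For example, a group with `system.categories` set to `snmp` ends up with `snmp,F5Active`, while a group without the property gets it created with the value `F5Active`.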

This "F5Active" tag will cause LogicMonitor to actively discover and start reporting on Virtual Servers and Virtual Server Pools.


Loss of VIP monitoring

Customers have reported that when running Big-IP code 2.4.21-, the SNMP collector would stop responding to enterprise-specific MIBs after a few hours; it appears the subcollector crashed. Restarting the SNMP collector on the Big-IP fixed the issue only for a short time. Running code 2.4.21- or later resolved the issue.
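To check whether a Big-IP is still answering enterprise-specific OIDs, you can probe the F5 enterprise subtree (.1.3.6.1.4.1.3375). This sketch assumes the net-snmp `snmpwalk` tool is installed and that SNMP v2c access is configured; the host and community values are placeholders:

```python
import subprocess

F5_ENTERPRISE_OID = ".1.3.6.1.4.1.3375"  # F5 Networks enterprise subtree

def build_probe_command(host, community="public"):
    """Build an snmpwalk command against the F5 enterprise subtree with a
    short timeout (-t 2 seconds) and a single retry (-r 1)."""
    return ["snmpwalk", "-v2c", "-c", community, "-t", "2", "-r", "1",
            host, F5_ENTERPRISE_OID]

def enterprise_oids_responding(host, community="public"):
    """Return True if the device returns any data from the F5 subtree.
    An empty or timed-out response suggests the subcollector has stopped
    answering enterprise-specific OIDs."""
    result = subprocess.run(build_probe_command(host, community),
                            capture_output=True, text=True)
    return result.returncode == 0 and bool(result.stdout.strip())
```

Running such a probe periodically from a cron job can catch the failure earlier than waiting for VIP data gaps to appear in LogicMonitor.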