Monitoring Remote Linux Files

Last updated on 17 March, 2023

Due to our collector architecture, our script collection method can only launch scripts from the local collector machine. The collector is not natively able to directly launch files or execute scripts that are present on a remote host.

Generally, it is best practice to avoid remote execution for monitoring purposes whenever possible. In most cases, the information you would like to monitor can be served via a webpage, API, SNMP, or a database connection so that it is accessible remotely from the collector machine with very little overhead.

However, if you need to monitor files that are in a remote location, a few extra steps are necessary. Here is our recommendation:

Preparing a File Monitoring Script on the Host

On the remote host machine, you will need to prepare a basic script that prints the values you want to monitor to standard output, and then extend this command into SNMP so that the collector can remotely query your host for this information. The example below uses the Linux “date” utility to output the age of a remote file:

echo "$(( $(date +%s) - $(date +%s -r dump.sql) ))"

(This would return the age of a file named “dump.sql”, in seconds.)
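If you plan to call this command from snmpd, it is usually easiest to save it as a small standalone script on the remote host. A minimal sketch follows; the script path and the location of dump.sql are placeholders to adjust for your environment:

#!/bin/sh
# Saved on the remote host, for example as /usr/local/bin/file_age.sh
# Prints the age of the monitored file, in seconds, to standard output.
FILE="/var/backups/dump.sql"
echo "$(( $(date +%s) - $(date +%s -r "$FILE") ))"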

Extending your File Monitoring Script into SNMP

Once you have a script that captures the value that you want, you will need to extend this script into SNMP. The process has been expertly detailed in our blog post How to Teach an Old SNMPD New Tricks.
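As a rough sketch of what the extension looks like (assuming the Net-SNMP snmpd daemon and the example script path /usr/local/bin/file_age.sh from above), a single extend directive in /etc/snmp/snmpd.conf is typically enough:

extend file-age /bin/sh /usr/local/bin/file_age.sh

After restarting snmpd, the script’s output is exposed under the NET-SNMP-EXTEND-MIB subtree (for example, as nsExtendOutput1Line."file-age").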

After this has been done, you will need to create a simple SNMP-collecting datasource in LogicMonitor, which allows your collector to query your newly extended OID.
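Before creating the datasource, it can help to confirm from the collector machine that the extended OID answers. A minimal check, assuming SNMP v2c, the community string "public", and the "file-age" extension name used above:

snmpget -v2c -c public remote-host 'NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."file-age"'

If this returns the file age in seconds, the same OID can be used as the datapoint in your SNMP datasource.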

Alternatives

Depending on what exactly you are hoping to monitor on your remote machine, there may be alternatives to remotely access this information:

  • Via a different protocol (JMX for Java applications, JDBC for databases, etc.)
  • By installing a collector locally on each remote host and monitoring the host with itself as the collector.
  • By creating a wrapper script on the collector machine that connects to the remote host and launches a script present on the remote host’s filesystem via a script DataSource (see the sketch after this list).
    • We offer several options to directly embed a Groovy script to execute local or remote scripts.
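As a sketch of the wrapper-script approach (the user, key path, host name, and remote script path are all placeholders), a collector-side script could simply relay the remote script’s output:

#!/bin/sh
# Wrapper executed by a script DataSource on the collector machine.
# Connects to the remote host over SSH, runs the script stored there,
# and prints the remote output so the collector can parse it.
ssh -i /home/logicmonitor/.ssh/id_rsa monitor@remote-host /usr/local/bin/file_age.sh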

Final Considerations

Remote file monitoring is certainly achievable in LogicMonitor, but keep in mind that this type of data collection is generally the most intensive to implement, both in the strain it places on your network and on the collector itself, and in the complexity of design that this type of solution demands.

We advise that you reserve script-based data collection as a last-resort method to obtain the metric(s) that you are after. Other considerations:

  • Ideally, design your script data collection so that it works under a single-instance datasource. If it is necessary to have multiple instances enabled, try to use a non-script ActiveDiscovery method whenever possible.
  • For multi-instance datasources, design your script so that it generates as few instances as possible. If large numbers of instances are generated across multiple hosts, the collector may become unstable (gaps in graphs, collector restarts, etc.). This may be corrected by allotting more threads to the “collector.script” module in the agent.conf configuration file.
    • See our help page on Script ActiveDiscovery for more information on how to configure instance discovery for script-based datasources.