The Usual Tech Ramblings

Nagios, web scraping, and PHP as an agent

Earlier today, I caught a message from @ninjasys on Twitter asking for ways to catch PHP errors on a website.

Has anyone scraped a webpage for PHP errors using Nagios? PHP errors are displayed before HTML content :( #nagios #sysadmin — Ninjasys

In the past, I’ve used WebInject to do page validation, but after I made a couple of suggestions, @ninjasys came back with a more detailed explanation of what they were really after. They’re limited in what they can install, and were having disk space problems, so they couldn’t install SNMP, nor could they use NRPE to do agent lookups. Follow the jump for a few of the ideas I’ve come up with.

Be warned: one of my ideas abuses the HTTP protocol and bends it to my will. It doesn’t break the rules, but it’s certainly a creative use of them.


WebInject was my first idea, as its whole purpose is web site validation. Even better, it produces Nagios-compatible output when used with specific configuration settings. I considered the verifynegative option, which generates an alert if a string matches. That’s exactly what we want: to make sure the PHP error string is not in the output. In this example, we’re matching on a PHP error, which usually looks like this:

PHP Parse error:  syntax error, unexpected T_STRING in /test.php on line 2

The error usually includes the name of the file and a line number, so we’ll look for a match on that.

<testcases repeat="1">
  <case id="1"
    description="verify no errors"
    verifynegative="\.php on line \d+" />
</testcases>

verifynegative uses regular expressions for matching, so we don’t need to know the line number or the page name. Now we’ll combine it with a WebInject configuration that outputs Nagios-compatible data…
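As a sketch, the configuration side might look like this in WebInject’s config.xml (the reporttype setting is WebInject’s documented switch for Nagios plugin output; the base URL here is a placeholder):

```xml
<!-- config.xml: switch WebInject's output to Nagios plugin format -->
<baseurl>http://mysite</baseurl>
<reporttype>nagios</reporttype>
```

With that in place, running webinject.pl against the test case file above returns Nagios-style status text and exit codes instead of the normal report.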


Create this as a Nagios command and service, and it’ll detect PHP failures.
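The Nagios side could be wired up something like this (a sketch; the command name, file paths, and host name are all assumptions for illustration):

```
# commands.cfg -- path to webinject.pl is an assumption
define command {
    command_name    check_webinject_php
    command_line    /usr/local/bin/webinject.pl -c /etc/webinject/config.xml /etc/webinject/testcases.xml
}

# services.cfg
define service {
    use                     generic-service
    host_name               mysite
    service_description     PHP Errors
    check_command           check_webinject_php
}
```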

curl and grep

As Bob Plankers1 suggested, curl and grep will also do the job.

/usr/bin/curl --silent http://mysite | /bin/grep ".php on line"

The one setback is that grep returns a non-zero exit code when the value is not matched, so the result needs to be flipped. That isn’t much work, as the standard Nagios plugin bundle includes a plugin called negate, which inverts exit codes to make Nagios happy. The problem I’ve found with negate is that it only takes a single command, so the pipe in the command above won’t work; we’d have to wrap it in another script anyway. And if we’re wrapping it in another script, we might as well handle the exit codes there.


#!/bin/sh

CURL=/usr/bin/curl
GREP=/bin/grep
URL=http://mysite

CMDOUT=`${CURL} --silent ${URL} | ${GREP} -i '\.php on line'`

if [ "$?" -eq "0" ]; then
  echo "WARNING: PHP Script error detected"
  exit 1
fi

echo "OK: No PHP error detected"
exit 0

This script can then be called directly as a Nagios command.
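As a sketch, the command definition might look like this (the script path and command name are assumptions):

```
# commands.cfg
define command {
    command_name    check_php_scrape
    command_line    /usr/local/nagios/libexec/check_php_errors.sh
}
```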

PHP as an Agent

My final idea was to use PHP to do the heavy lifting: calculate the free disk space and return a status. There is a caveat: the functions I’m using are subject to open_basedir restriction checks, so this might cause an issue. For this to work, I’m using two functions, disk_free_space() and disk_total_space(). These, as you’ve probably guessed, return the number of free bytes and the total bytes. A bit of simple math gives us the percentage free; wrap it up in a quick script, and a simple check on the Nagios side can tell us if we have problems.

<?php
$disk_free  = disk_free_space('/');
$disk_total = disk_total_space('/');
$pct_free   = round( ($disk_free / $disk_total) * 100 );

if ($pct_free < 10) {
    header('HTTP/1.0 500 Disk space critical ' . $pct_free);
} elseif ($pct_free < 20) {
    header('HTTP/1.0 405 Disk space warning ' . $pct_free);
}
?>
<title>Space Usage Report</title>
Disk Size: <?php print $disk_total; ?> bytes <br />
Disk Free: <?php print $disk_free; ?> bytes <br />
Disk Percentage Free: <?php print $pct_free; ?>% <br />

Now upload that to your web host and give it a random name so people cannot guess it2. Then all you need is check_http to do the rest of the work, because check_http treats 4xx status codes as warnings and 5xx status codes as errors. So your Nagios check would look like this:

./check_http -H mysite -u /somerandomname.php

Because the header() call in PHP returns a 5xx when free disk space drops below 10% and a 4xx when it drops below 20%, the check_http plugin will return a critical or warning state. Even better, because we put the percentage free in the header output, the alerts Nagios sends should include the value, so you can quickly see what level it’s at.

So there are three relatively simple ways to do the checks. The first two answer @ninjasys’ original question, while the third goes a little deeper. If you wanted to get fancier, you could reformat the HTML output into a specific string, parse it with a wrapper script using curl and cut, and have Nagios see the actual disk figures as performance data so you can graph them with utilities such as PNP4Nagios.
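That last idea can be sketched as a small wrapper script. This is a sketch under assumptions: the page output line and URL are hypothetical, and the field positions passed to cut depend entirely on the exact string your PHP page prints.

```shell
#!/bin/sh
# Hypothetical page output; in production this would be fetched with:
#   PAGE=`/usr/bin/curl --silent http://mysite/somerandomname.php`
PAGE='Disk Percentage Free: 42% <br />'

# Split on spaces: the percentage is field 4; strip the trailing %.
PCT=`echo "$PAGE" | cut -d' ' -f4 | cut -d'%' -f1`

# Nagios reads anything after the | as performance data.
echo "OK: ${PCT}% free | disk_free_pct=${PCT}%"
# A real plugin would now exit 0 for OK (or 1/2 based on thresholds).
```

Point a tool like PNP4Nagios at the service and the disk_free_pct value gets graphed over time.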

I hope this gives you some ideas about thinking outside the box. How would you have solved the problem?

  1. By the way, Bob is the author of The Lone Sysadmin, another site you should have in your favorites/RSS feeds. 

  2. Also consider the use of .htaccess rules to restrict access.