Glen Pitt-Pladdy :: Blog

Universal Log Analyser and snmpd extension scripts

With my Cacti templates for Postfix, I produced a basic log analyser script which counted up instances of events in mail logs which could then be passed (via SNMP) to Cacti for graphing.

Over time the script has been embellished and extended and it's ended up doing rather more than just Postfix logs. It includes dkim-filter (milter), ClamAV milter, and more recently I've started adding analysis of Dovecot log entries. It really isn't "postfix-loganalyser" any more.

This is the latest variant of the script, which now turns it into a universal log analyser with plugins (Perl modules) to handle whatever logs you choose. This means it can now handle far more than just mail logs - eg. you could produce stats on how regularly npviewer.bin SEGVs, or anything else that leaves a mark in a log file, which is an awful lot of stuff.

So, on with the script...

How it works

This uses a quick and dirty Perl plugin setup where the plugins you wish to use are simply tacked onto the end of the command line. This allows multiple instances of the script to be used, looking at different logs, with different plugins, and storing the results in a different place, or in the same place if you prefer (but ensure the script is not run concurrently against the same results file - it will get messy!)

Plugins are simply Perl modules which return a reference to a function to register the plugin. The register functions are run to collect up the stuff that actually does the work inside the plugin, but we will go into plugin internals later.

The script will handle loading and saving of results as well as tracking where we were in the log file(s) and continuing on from the same place, which makes the plugins rather simple - they just need to recognise events in the line they are given and count them.

On the snmpd side, basic shell scripts can be used in the snmpd config to find the relevant data and return it for transmission via SNMP. These can either be universal, with the data fields specified on the command line (great for just a few), or treated more as config files with the data fields hard-coded into the script, which is probably more manageable when many fields are returned in one query.

Master (this) Script

I currently have this in /etc/snmp with the cron jobs and other stuff that collects data for snmpd, though now that it has outgrown its original design this location is more for historic reasons than actually being the best place to put it.

Plugins currently go in /etc/snmp/uloganalyser-plugin, again for historic reasons more than anything.

The arguments for running the script are:

uloganalyser <old logfile> <current logfile> <statsfile> <plugin> [plugin] ...

Basically, beyond the arguments that the original postfix-loganalyser script had, this just adds a list of the plugins you want to use. For maximum efficiency you want to order the plugins most frequently matched first, as this avoids passing lines to plugins which are least likely to handle them.

Download: master universal log analyser Perl script and plugins are all on GitHub

The usual stuff applies: make it executable, put it in an appropriate place, put it in a cron job with appropriate privilege to read log files and write the stats file.

Typically you would have this run from a cron job to keep the stats file up to date. If you are running the config described in my SNMP basics article then this script would go in /etc/snmp/local-snmp-cronjob, something like:

/etc/snmp/uloganalyser \
    /var/log/mail.log.0 \
    /var/log/mail.log \
    /var/local/snmp/mail \
    plugin1 plugin2 plugin3 plugin4
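
For reference, the stats file ends up as plain key=value lines, one counter per line, which is what the snmpd extension scripts further down grep out. The counter names here are only examples - the real names depend on which plugins you run:

postfix:local:sent=17
demo:stuffhappened=42
demo:other=3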

Meet the Plugins

I will be publishing the plugins for the mail stuff I have done when they are ready, but for now download a demo plugin which you can use as a basis for your own.

Going through this part by part:

Register

This is passed a set of references to arrays from the master script which hold references to the functions that do the actual work. Currently there are only two of these arrays, plus the uloganalyser version:

  • Line processors array ref - these functions are passed log lines to parse and count up events in
  • End processors array ref - these run after the logs have been processed and may collect other information that is not in the logs or report about unhandled content in the logs
  • uloganalyser version number - this allows you to check you have a sufficiently recent uloganalyser version

Really all the register function does is add references to any functions in the module that handle either of these to the appropriate array. There is absolutely no reason why you can't have multiple line processors or end processors in one module so long as the register function adds them in.

The register function runs before log processing so you could also use it to run any other stuff needed before log processing starts (eg. you may need to retrieve remote logs, flush log buffers or dump a log from a database).

After running the register function, the master script knows what needs to be executed in the module and that's about it.
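
As a rough sketch (the function names here are illustrative - see the demo plugin for the real thing), a register function need be no more than:

sub register {
        my ( $lineprocessors, $endprocessors, $version ) = @_;
        # add our handlers to the master script's arrays
        push @$lineprocessors, \&processline;
        push @$endprocessors, \&processend;
}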

Line processors

These take 4 arguments so will start off like this:

my ( $line, $number, $log, $stats ) = @_;

They are the (chomped) log line, the line number in the log, the log file that is being processed, and a reference to the %stats hash where the statistics are collected.

The first thing the processor needs to do is figure out if the line is one it is responsible for, and if not return nothing, so something like this is a good idea:

if ( $line !~ s/^.+ demo\[\d+\]:\s*// ) { return; }

After that it's just matching events and counting them:

if ( $line =~ s/^stuff happened:\s*// ) {
        ++$$stats{'demo:stuffhappened'};
} elsif ...... and so on

It's also worth having the script get noisy about stuff it doesn't understand so that you can go and update/fix it:

} else {
        ++$$stats{'demo:other'};
        print STDERR __FILE__." $VERSION:".__LINE__." $log:$number unknown: $origline\n";
}

And finally, we need to return 1 so that the master knows we have picked up this line and there is no need to process the same line further:

return 1;

That's all there needs to be to a line processor.
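
Putting those fragments together, a complete (if trivial) line processor for a hypothetical "demo" daemon might look something like this sketch:

sub processline {
        my ( $line, $number, $log, $stats ) = @_;
        my $origline = $line;   # keep a copy for reporting unknown lines
        # not our line - leave it for other plugins
        if ( $line !~ s/^.+ demo\[\d+\]:\s*// ) { return; }
        if ( $line =~ s/^stuff happened:\s*// ) {
                ++$$stats{'demo:stuffhappened'};
        } else {
                ++$$stats{'demo:other'};
                print STDERR __FILE__.":".__LINE__." $log:$number unknown: $origline\n";
        }
        return 1;       # we handled this line
}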

End processor

This is just a function which is called with a reference to the %stats hash to collect up statistics. There is nothing more to it than that. It will probably kick off along the lines of:

my $stats = shift;

After which it may set values based on data it has collected from around the system:

$$stats{"demo:some:stuff:collected"} = $datacollected;

It's essentially just there to mop up and as such you can use it for whatever you need - eg. cleaning up temporary files etc.
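
As an illustrative sketch (the path and counter name here are entirely hypothetical), an end processor could record a gauge-type value alongside the counters from the logs:

sub processend {
        my $stats = shift;
        # hypothetical gauge: files waiting in a spool directory
        my @waiting = glob '/var/spool/demo/*';
        $$stats{'demo:spool:waiting'} = scalar @waiting;
}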

Wrapping up

Perl modules have to return a true value else Perl gets annoyed, but this value is also passed back by require in the master script, so it's a really easy way of handing over a reference to our register function. Thus we wrap up the plugin with:

\&register;

And that's all there is to it.

When it starts, the master script runs the register function, then calls whatever the register function added to the arrays at the appropriate points. Simple as that!

Connecting the stats to snmpd

For this I am using the extend option in snmpd.conf which allows addition of an external program/script whose output is passed on by snmpd when it gets queried:

extend  demo /path/to/script

First up we can look at a really simple universal script - just pass the fields you want as parameters on the command line and we're done:

#!/bin/sh
PATH=/bin:/usr/bin
STATSFILE=/var/local/snmp/datafile
# pick up the requested fields
for param in "$@"; do
        printf "%d\n" `grep ^$param= $STATSFILE | cut -d '=' -f 2`
done

Or where many different fields are used in one query we may want to treat this more as a config file - we can specify the parameters within the script:

#!/bin/sh
PATH=/bin:/usr/bin
STATSFILE=/var/local/snmp/datafile
# fields we are looking for
parameters="demo:parameter1 demo:parameter2"
# pick up the requested fields
for param in $parameters; do
        printf "%dn" `grep ^$param= $STATSFILE | cut -d '=' -f 2`
done

Or we can do the same thing one-by-one:

#!/bin/sh
PATH=/bin:/usr/bin
STATSFILE=/var/local/snmp/datafile
# pick up fields
printf "%dn" `grep ^demo:parameter1= $STATSFILE | cut -d '=' -f 2`
printf "%dn" `grep ^demo:parameter2= $STATSFILE | cut -d '=' -f 2`
printf "%dn" `grep ^demo:parameter3= $STATSFILE | cut -d '=' -f 2`

Coming soon...

That's all for now. Once I've got the plugins and the latest Cacti templates to a point where I am happy with them, I'll release them - keep an eye on my Cacti templates for Postfix for the update.

Comments:

15/11/2011 22:51 :: Niccolo

Hi, I find this tool a really useful one!

Searching in Debian for Postfix stats, I found only pflogsumm, which seems not suitable for SNMP and Cacti because it is intended to be run once a day. uloganalyser seems the right answer.

I just have a question: is each stat value counted by uloganalyser an ever-growing counter? Does that counter overflow and reset at some point?

Thanks.

16/11/2011 08:26 :: Glen Pitt-Pladdy

Yes, you are absolutely right - each stat will be counted forever. I doubt that overflow would be a problem for the life of a server unless you are doing huge volumes of mail, and then you probably need a more complex way of monitoring anyway.

There is a discussion of the subject of integer overflow behaviour in Perl at http://www.perlmonks.org/?node_id=418795

31/07/2012 10:22 :: voytek eymont

thanks for the uloganalyser and this setup guide
how can I suppress the cron email messages every 5 minutes?

31/07/2012 10:55 :: Glen Pitt-Pladdy

Messages emailed via Cron are typically going to be lines that the plugins don't understand. If you email them to me then I will check them and update the plugins.

If you want to ignore these completely (errors may go undetected) then you can send them to /dev/null like this in the cronjob:

/etc/snmp/uloganalyser \
    /var/log/mail.log.0 \
    /var/log/mail.log \
    /var/local/snmp/mail \
    plugin1 plugin2 plugin3 plugin4 >/dev/null 2>&1

20/03/2013 21:53 :: Oleksiy

If the old postfix log is compressed, the following changes to uloganalyser will handle it:

sub readlogfile {
        my $log = shift;
        my $uncompress;
        if ($log =~ /\.bz2$/i) {
          $uncompress = '/usr/bin/bzip2 -cd';
        }
        if ($uncompress) {
          open (LOG, "$uncompress $log|")
            || die("ERROR: $uncompress can not read file ($log): $!\n");
        }else{
          open LOG, $log or die "FATAL - can't read \"$log\": $!\n";
        }

21/03/2013 23:05 :: Glen Pitt-Pladdy

Hi Oleksiy,

This is not going to work for the simple reason that uloganalyser uses inodes to verify that it is reading the correct file. Once a file gets compressed it will be on a different inode and hence will not fulfil the criteria.

I don't think including compression support will be worthwhile when you can just configure the last log to be kept uncompressed. If large logs are the problem, then requiring a full decompression to access the last bit of the file is a huge overhead, and more frequent log rotation would be a better approach.

25/03/2013 11:18 :: Jernej

We are using the "dateext" feature of logrotate, which makes even bigger problems with the previous file as input for uloganalyser. That is why we disabled the previous-file check and fixed the values to zero if the file inode changed. Yes, we lose a minute's worth of data, but that is the expense of it...

17/10/2013 22:30 :: Andreas

Hello!

My statsfile is STATSFILE=/var/local/snmp/mail

#!/bin/sh
PATH=/bin:/usr/bin
STATSFILE=/var/local/snmp/mail
# pick up the requested fields
for param in "$@"; do
        printf "%d\n" `grep ^$param= $STATSFILE | cut -d '=' -f 2`
done

When I execute that script, the output is empty.
What am I doing wrong?

Andreas

17/10/2013 22:48 :: Glen Pitt-Pladdy

Hi

The two questions: What arguments are you giving the script? Does the statsfile contain lines matching those arguments?

18/10/2013 00:45 :: Andreas

Hello!

Ah, it works like this :-) I say /test.sh postfix:local:sent
and get 17n

I guess it works?

Andreas

18/10/2013 07:14 :: Glen Pitt-Pladdy

That's it! This is the generic/universal script so also check out the custom scripts that go along with each plugin which have the parameters hard-coded into them for snmpd and Cacti.

26/11/2013 04:02 :: voytek

Glen,
thanks for these great tools

just setting a new machine, dovecot now logs independently into dovecot.log
so, do I just run uloganalyzer twice, yes?

26/11/2013 07:18 :: Glen Pitt-Pladdy

The bit to remember is that the location it last read to in the logs is kept in the stats file so if you want to run twice to process two sets of logs, you also need to keep the stats files independent. That also means you will need to tweak the snmpd extension scripts to pick up from the stats file appropriate to their metrics.
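
Eg. something along these lines (the plugin names here are placeholders for whichever plugins you actually use):

/etc/snmp/uloganalyser /var/log/mail.log.0 /var/log/mail.log /var/local/snmp/mail postfixplugin
/etc/snmp/uloganalyser /var/log/dovecot.log.0 /var/log/dovecot.log /var/local/snmp/dovecot dovecotplugin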

26/11/2013 10:26 :: voytek

Glen,

thanks, I've changed to individual stats, that seems OK

BUT, I'm collecting/outputting a lot less info, I think I need to... crank up the log level?

[old server]# wc mail
135  142 4828 mail
[new server]# wc mail dove
43   50 1314 mail
16   27  473 dove
59   77 1787 total

08/12/2013 22:31 :: Glen Pitt-Pladdy

My stats are gathered with the default loglevels in Debian. With Postfix nothing is running with -v or -D in master.cf, and with Dovecot no _debug options are set. If you are using a different distro/Unix OS then the compiled-in defaults may affect what stats are available and you will have to investigate on the basis of the build you are using.



