Glen Pitt-Pladdy :: Blog

Background files integrity and log auditing (fcheck/logcheck)

Update: A new version of the File Integrity and Log Anomaly reporting tools has been released; this article remains only as historical background.

Monitoring servers for security and integrity is a key aspect of running a good shop. Among the tools available are filesystem audit tools such as fcheck, Tripwire and AIDE, which are useful for detecting unauthorised changes as well as corruption and accidental changes. Likewise, tools like logcheck watch log files and report on events worth knowing about, including errors and attacks.

The problem all these share is that they are typically run periodically (e.g. from a cron job), which thrashes the machine, causing other disk-intensive processes (e.g. databases) to slow down dramatically; then hours go by before the next check. If you are monitoring for intrusion detection, that predictability gives an intruder the opportunity to circumvent the system.

With log checking this also means there can be a considerable delay between an important event being logged and it being reported. During that time matters could escalate and the machine fall over, so the error is never reported and the opportunity for pre-emptive maintenance to avoid downtime is lost.

A new approach

I realised that the existing approaches share a whole lot of problems. What is actually needed is continuous checking, spreading the work thinly over a long period rather than producing a huge disk thrash followed by hours of waiting. This "trickling" approach means we need a daemon that can run the checks continuously.

With log checking/reporting we can check for new log entries every few seconds if needed so that if anything important is logged the time to report it is minimal.

Also, if intrusion monitoring is important then predictability is to be avoided. A normal filesystem traversal follows a fixed, predictable route, so to avoid this we need to mix things up a bit.

The approach I have taken is rather like a continuous "fcheck" using an SQLite database for random-access storage. The rate at which files are checked is limited to stop disk thrashing, and when files are read for checksumming the read speed is also limited to keep impact low.

Periodically (at random intervals, and whenever a new directory is added) the daemon shuffles its search stack to reduce predictability. After checking a file in the current filesystem traversal it also checks the file in its database that has gone longest without a check. That means it is both discovering new files and re-verifying files already in its database.
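As a sketch of that idea, the two file picks per step might look like the following. The `files` table with `path` and `last_checked` columns is an assumption for illustration, not the daemon's actual schema:

```python
import random
import sqlite3


def next_paths(conn, stack):
    """Pick the next two paths to check: one popped from a shuffled
    traversal stack (new discoveries), and the database entry that has
    gone longest without a check (re-verification).

    Sketch only: the table/column names 'files', 'path' and
    'last_checked' are assumed, not taken from integrityd itself.
    """
    random.shuffle(stack)              # reduce predictability of traversal order
    current = stack.pop() if stack else None
    row = conn.execute(
        "SELECT path FROM files ORDER BY last_checked ASC LIMIT 1"
    ).fetchone()
    stale = row[0] if row else None
    return current, stale
```

Shuffling the stack on each step is more aggressive than the daemon's occasional reshuffle, but it shows the principle: the traversal order is no longer something an intruder can predict.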

Configuration

The default place for the config file is /etc/integrityd.conf and this file should ideally be only readable by root as it may give useful information to anyone with malicious intent.

There is a simple configuration file format:

item = value

It supports basic # comments and ignores blank lines. Most parameters have sane defaults so the main things you probably want to set are:

email = someone@somewhere.tld
database = /path/to/the/database

Those default to root@localhost and /var/lib/integrityd/integrityd.sqlite.
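A parser for this format can be very small. Here is a minimal sketch (not the daemon's actual parser) that accumulates repeated keys such as area and exclude into lists:

```python
def parse_config(text):
    """Parse the simple 'item = value' config format: '#' comments and
    blank lines are ignored, and repeated keys accumulate into lists.

    A sketch of the format described in the article, not integrityd's
    own parsing code.
    """
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments and whitespace
        if not line:
            continue                            # skip blank lines
        item, _, value = line.partition("=")
        config.setdefault(item.strip(), []).append(value.strip())
    return config
```

Keeping every value in a list means single-valued items like email and multi-valued items like exclude are handled by the same code path.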

If you are doing filesystem checks then you can add areas and excludes with:

area = /etc
exclude = /etc/mtab
exclude = /etc/lvm/archive
exclude = /etc/lvm/backup
exclude = /etc/lvm/cache

That will check all files in /etc/ while excluding LVM areas which may be affected by filesystem snapshots for backups. Multiple area and exclude statements may be used to check key areas.
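Assuming simple path-prefix semantics (the daemon's exact matching rules may differ), deciding whether a given file falls under the checked areas could look like:

```python
def is_checked(path, areas, excludes):
    """True if path falls under a configured area and not under any
    exclude. Assumes plain path-prefix matching, which is an assumption
    for illustration rather than integrityd's documented behaviour.
    """
    def under(p, prefix):
        # Match the prefix itself or anything below it.
        return p == prefix or p.startswith(prefix.rstrip("/") + "/")

    return (any(under(path, a) for a in areas)
            and not any(under(path, e) for e in excludes))
```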

Checking logs is similarly simple. First we define the rule directory to use, then the logs that need to be checked. If the rule directory is omitted, /etc/logcheck is assumed. This uses the same rule format as logcheck, so under Debian/Ubuntu the logcheck-database package may be installed to get a set of base rules. A basic config might look like this:

logrules = paranoid:/etc/logcheck
log = /var/log/syslog
log = /var/log/auth.log

That will use the paranoid ruleset in /etc/logcheck and monitor syslog and auth.log. Further logrules and log entries may be defined (eg. for OpenVz containers on the same host).
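The core of logcheck-style filtering is just regex suppression: any new log line matching an "ignore" rule is dropped, and everything else is reported. A minimal sketch of that idea (real logcheck rules are split across ignore.d.* directories by level, which this ignores):

```python
import re


def filter_new_lines(lines, ignore_patterns):
    """Return only the log lines that match none of the logcheck-style
    ignore regexes. A sketch of the matching idea, not integrityd's or
    logcheck's actual rule loading.
    """
    compiled = [re.compile(p) for p in ignore_patterns]
    return [line for line in lines
            if not any(r.search(line) for r in compiled)]
```

With a typical rule suppressing routine cron activity, an SSH failure still gets through:

```python
ignores = [r"^\w{3} [ :0-9]{11} \S+ CRON\[\d+\]:"]
lines = [
    "Apr 23 14:21:01 host CRON[1234]: (root) CMD (run-parts /etc/cron.hourly)",
    "Apr 23 14:21:05 host sshd[999]: Failed password for root from 10.0.0.1",
]
reported = filter_new_lines(lines, ignores)   # only the sshd line survives
```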

Another feature is that this will watch PID files and do some basic process monitoring to flag up processes which have died (including becoming zombified) or restarted.

pidfile = /var/run/
pidfile = /var/spool/postfix/pid/

This also supports monitoring of PIDs in OpenVz containers which can be specified like this:

pidct = 102
pidfile = /var/lib/vz/private/102/var/run/
pidfile = /var/lib/vz/private/102/var/spool/postfix/pid/
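The basic liveness test behind PID-file monitoring is sending signal 0, which checks process existence without affecting it. A minimal sketch for a single PID file (the daemon scans the configured directories for such files, and additionally tracks restarts and zombies, which this does not):

```python
import os


def pid_alive(pidfile):
    """Check whether the process named in a PID file is still running,
    using os.kill(pid, 0) as an existence test. Sketch only: integrityd's
    checks also detect zombies and restarts, which signal 0 cannot.
    """
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)        # signal 0: existence/permission check only
        return True
    except (OSError, ValueError):
        # Missing file, unparsable contents, or no such process.
        return False
```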

Reporting occurs on demand and is controlled by two simple parameters. File checking reports are controlled with:

reporttime = 30
reportholdoff = 900

This will wait and pool up reports for 30 seconds before sending them, and then after sending the report hold off reporting again for 900 seconds (15 minutes) to avoid deluges of report mail.
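The pool-then-holdoff timing can be sketched as a small state machine; this is an illustration of the behaviour described above, not the daemon's implementation (the injectable `now` clock is purely for testability):

```python
import time


class Reporter:
    """Pool events for `reporttime` seconds, send one combined report,
    then refuse to send again for `reportholdoff` seconds.

    A minimal sketch of the timing behaviour described in the article.
    """

    def __init__(self, reporttime=30, reportholdoff=900, now=time.time):
        self.reporttime = reporttime
        self.reportholdoff = reportholdoff
        self.now = now
        self.pending = []
        self.first_event = None
        self.holdoff_until = 0

    def add(self, event):
        if self.first_event is None:
            self.first_event = self.now()   # start the pooling window
        self.pending.append(event)

    def maybe_send(self):
        """Return the pooled events if a report is due, else None."""
        t = self.now()
        if not self.pending or t < self.holdoff_until:
            return None                     # nothing pooled, or in holdoff
        if t - self.first_event < self.reporttime:
            return None                     # still pooling
        report, self.pending = self.pending, []
        self.first_event = None
        self.holdoff_until = t + self.reportholdoff
        return report
```

The holdoff only delays the next mail; events arriving during it are still pooled, so nothing is lost, just batched.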

Log and PID reporting can be controlled independently if required; otherwise they will use the same timings as file checking.

logreporttime = 30
logreportholdoff = 900

The only other thing is to control the rates at which we check things. File checking is controlled with:

byterate = 262144
smallfiles = 131072
filerate = 1
speedup = 20

This limits the rate at which files are read to 256KiB/s; however, files smaller than 128KiB are read in one shot. We check one file a second, and speed up by a factor of 20 when changes are found (i.e. we will then be checking 20 files a second).
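The read-throttling idea can be sketched like this, with the default values above. The chunking strategy, sleep granularity and choice of SHA-1 are assumptions for illustration; the daemon's actual chunk size and hash may differ:

```python
import hashlib
import os
import time


def checksum_throttled(path, byterate=262144, smallfiles=131072):
    """Hash a file while keeping the average read rate near `byterate`
    bytes/s; files under `smallfiles` bytes are read in one shot.

    Sketch only: reads one `byterate`-sized chunk per second, which is a
    crude way to approximate the limit; SHA-1 is an arbitrary choice here.
    """
    h = hashlib.sha1()
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        if size < smallfiles:
            h.update(f.read())            # small file: one shot, no throttle
        else:
            while True:
                chunk = f.read(byterate)
                if not chunk:
                    break
                h.update(chunk)
                time.sleep(1)             # one chunk/second ~= byterate B/s
    return h.hexdigest()
```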

Log and PID checking times are controlled with:

logchecktime = 30
logruleupdate = 240
pidchecktime = 30

That will check logs for new events every 30 seconds, and check for updates to log rules every 240 seconds. PID files will also be checked every 30 seconds.

An example config file with comments is provided in the tarball (below).

Installation

As there are really only a couple of scripts and an example config file, a good start is to drop integrityd into your local sbin (typically /usr/local/sbin on Debian/Ubuntu), put integrityd.conf in /etc (editing as necessary for your system), rename the init script and put it in /etc/init.d, then run update-rc.d (or the equivalent for your system) to start it at boot.

Initialising the database can be done with:

# sqlite3 /var/local/integrityd.sqlite <integrityd_sqlite3.sql

... or appropriate for your system.

Download: integrityd tarball 20120417

It is also advisable to install logcheck-database on Debian/Ubuntu which will give you a base set of logcheck rules to work from.

Once installed, start integrityd and wait for mail to start arriving. Initially there are likely to be a considerable number of reports as files are added for the first time. Log checking will also typically generate a lot of noise, and new rule files will need to be added under /etc/logcheck to quiet down unnecessary messages.

