Glen Pitt-Pladdy :: Blog

Filesystems & Fragmentation

Everyone knows how rapidly Windows filesystems fragment and the consequent impact on performance, but it's something that is seldom talked about with Unix. Apple goes as far as to say that it isn't necessary to defrag its filesystems, though people have found significant performance improvements by defragging anyway.

While it may be true that Unix filesystems are more resistant to fragmentation, that doesn't mean files never get fragged up to the point where it becomes a performance problem, and many Linux filesystems have no defragmentation tools. ext4 is a bit of an exception in that it goes all the way to having online defragmentation, but at this stage it's not considered release-ready, so it isn't available in Debian, Ubuntu or probably many other distros.

How bad is it?

There is a Perl script in the Gentoo forums for finding fragmentation. Inspired by this, I have written my own script along the same lines with increased safety measures and further information: the most fragmented files, the number of bytes per fragment, plus a histogram of fragmentation in .csv format that can be loaded into a spreadsheet for further analysis or graphing.

Download: fragmentation analysis Perl script

To find all the fragmentation on your mounted filesystems try something like this:

# for d in $(mount | grep '^/' | awk '{print $3}'); do n=$(echo "$d" | sed 's/\//_/g'); echo "$n"; /path/to/findfrag "$d" > /somewhere/safe/"$n".csv; done
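For spot-checking a single file, filefrag (from e2fsprogs) reports the extent count directly, which is the same underlying information the script works from. A minimal sketch, using a freshly written temporary file as a stand-in for whatever you actually want to inspect:

```shell
# Spot-check a single file's fragmentation with filefrag (from e2fsprogs).
# A freshly written temporary file stands in for the file you care about.
# Note: some filesystems (e.g. tmpfs) don't support the FIEMAP ioctl, in
# which case filefrag reports an error rather than an extent count.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=64k count=16 2>/dev/null
filefrag "$tmpfile" || echo "filefrag not usable on this filesystem"
rm -f "$tmpfile"
```

The summary line looks like `somefile: 13042 extents found`; `filefrag -v` additionally lists each extent with its on-disk offsets if you want to see exactly where a file is scattered.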

I ran this on three machines: my workstation, laptop and server.

The results were surprising. Considering that no defragmentation had ever been done on any of these filesystems in several years, only a very small proportion of files were fragmented, though there were some massively fragmented files in specific areas - in some cases files with >10k extents. Typically only a few % of files are fragmented on each filesystem, which shows how resilient these filesystems are to fragmentation.

While this is far more resilient than I would have expected coming from Windows, badly fragmented files that are accessed frequently still take a major performance hit.

As an example, take a CD image that is badly fragmented with ~13k extents. That's a seek every 55kB of data read. I copied it, and the copy had 208 extents, or a seek every 3.5MB, which is far more sensible. The question is how much that affects performance. After a reboot (to ensure the cache is clear), copying the files to /dev/null with dd using 8k blocks yields:

The ~13k extents (highly fragmented) file:

731453440 bytes (731 MB) copied, 78.8668 s, 9.3 MB/s

The 208 extents file:

731453440 bytes (731 MB) copied, 36.3631 s, 20.1 MB/s
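For reference, each timing came from a plain sequential read with dd - something along these lines, with a small scratch file standing in here for the real CD image:

```shell
# Read a file sequentially to /dev/null in 8k blocks; dd prints the byte
# count, elapsed time and throughput on stderr when it completes. A small
# scratch file stands in for the CD image - substitute your real file.
image=$(mktemp)
dd if=/dev/zero of="$image" bs=1M count=8 2>/dev/null
dd if="$image" of=/dev/null bs=8k
rm -f "$image"
```

Between runs the page cache needs clearing, either with a reboot as above or (as root) `echo 3 > /proc/sys/vm/drop_caches`, otherwise the second read comes straight from RAM and tells you nothing about the disk.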

The figures speak for themselves on the effects of fragmentation. It may not be a huge problem in many cases, but on badly fragged files it has an effect nonetheless. It is also important to consider that even where the performance degradation doesn't matter for the fragmented file itself, the extra disk seeks still affect access to other (possibly non-fragmented) files.

The range of fragmentation on my systems is huge - one filesystem is at 7GB/fragment (negligible), another 17kB/fragment (severe!). On my home server /var and the squid cache are particularly bad.
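These per-fragment figures are just the file size divided by the extent count - for the CD image example, the shell can do the division:

```shell
# Average bytes read between seeks = file size / number of extents.
# The ~731MB CD image at ~13k extents, vs its 208-extent copy:
echo $(( 731453440 / 13000 ))   # 56265 - roughly a seek every 55kB
echo $(( 731453440 / 208 ))     # 3516603 - roughly a seek every 3.5MB
```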

One place where this was having a big impact was Cacti .rrd files, which were badly fragmented; around 1000 updates are performed on them at every 5-minute poll. The resulting disk thrash was causing MySQL to log slow queries during polling, as well as many other noticeable slow-downs coinciding with Cacti polling.

Poor man's online-defragger

Given the absence of practical defraggers, an alternative approach is needed: if I make an archive copy (cp -a) of a file, everything stays the same (assuming no hard-links) but there is a reasonable chance that the copy will have fewer fragments. I wrote a script to do just that - it looks at the input files, and if they are fragmented it makes an archive copy; if that is an improvement, it replaces the original file.

It's not efficient, but it does improve fragmentation dramatically, and performance along with it.
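The core of the approach can be sketched in a few lines of shell. This is an illustration only - `defrag_one` and `extent_count` are names I've made up here, and the downloadable script adds privilege dropping, age checks and other safeguards this sketch lacks:

```shell
#!/bin/sh
# Sketch of the copy-and-swap idea: count extents with filefrag, make an
# archive copy, and keep whichever version has fewer extents.
extent_count() {
    # filefrag's summary line looks like "file: 12 extents found";
    # the count is always the third field from the end.
    filefrag "$1" 2>/dev/null | awk '{print $(NF-2)}'
}

defrag_one() {
    f=$1
    before=$(extent_count "$f")
    [ -n "$before" ] || return 0      # filefrag unusable: leave file alone
    cp -a "$f" "$f.defrag.$$"         # archive copy preserves owner/perms/times
    after=$(extent_count "$f.defrag.$$")
    if [ -n "$after" ] && [ "$after" -lt "$before" ]; then
        mv "$f.defrag.$$" "$f"        # the copy is less fragmented: swap it in
    else
        rm -f "$f.defrag.$$"          # no improvement: discard the copy
    fi
}
```

All of the open-file and race dangers apply just the same here, so treat it as a demonstration of the idea rather than something to point at live data.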

Danger!

Before we go any further, make sure you have backups and can restore completely from them. It seems obvious advice, but I am always surprised how many people don't have backups or never test restoring from them.

Although there are precautions in the script, there are a few serious problems here which we can't fully protect against:

  • If the file is open we risk doing something horrible to whatever was using the file, potentially causing data loss.
  • If a race occurs between our defragger and other process opening the file then again nasty stuff could result.
  • If something relies on specific attributes of the file (e.g. its inode) then it will be messed up.
  • There is always the potential of someone deliberately engineering circumstances where the script may fail in some horrible way.
  • Probably a load more things not listed here.

To discourage use of the defragger script without sufficient knowledge it ships in a state where it will not run - you have to make some minor modifications to the script in order to be able to run it.

In use

In order to get the fragmentation info for the file (by running filefrag), the script has to be run as root. As a safety precaution, the script will drop down to the user and group privileges you specify on the command line when it can. This is useful for reducing risk when defragging areas where users may create files and there could be something malicious, but remember this is a precaution, not a guaranteed solution.

The remaining arguments are the files to defrag.

Additionally there are some settings which go first on the command-line:

  • --verbose - displays some extra information about each file it (tries to) process
  • --readstdin - reads the files to process (one per line) from STDIN instead of taking them on the command line. This is useful if you want to run "find" or some other program to identify the files you want to defrag
  • --dorecent - by default only files older than 1 hour will be processed to help avoid races with regularly accessed files, but you may want to override that with this option if you want to process files that do get used regularly (eg. .rrd files)
  • --skiplowfrag=#KB/frag - this tells it to skip files which have a higher KB-per-fragment than you specify. It's common for large files to have a few fragments, and the seeks required may be insignificant compared with the time spent reading the file anyway
  • --skipbig=#KB - this tells it to skip files larger than this many KB to avoid slow processing with big files that are unlikely to ever be fragmentation free
  • --usleep=NNNN - where the process is to run with low impact, this specifies a delay before processing each file to reduce disk thrashing. The time is in microseconds, so for 1 second specify 1000000. Often quite a short delay is all that's needed to take the sting out of it.

Download: defragger Perl script

Examples

# defragfiles --dorecent www-data www-data /var/lib/cacti/rra/*.rrd

Will defrag the .rrd files used by Cacti, which are highly prone to serious fragmentation. I suggest you watch the logs and only run this after Cacti finishes polling and before the next poll starts, to avoid the risk of corruption - or better yet, disable Cacti while doing this.

# find ~joebloggs/.thunderbird/ -xdev -type f | defragfiles --verbose --readstdin joebloggs joebloggs

Thunderbird can get things like mail files really fragged up, so you may want to do this from time to time. Obviously make sure Thunderbird is not running, else bad stuff could happen to your mail.

# defragfiles --verbose --skiplowfrag=512 joebloggs joebloggs ~joebloggs/VMImages/*

Defragging VM images - but we really don't want to defrag these large files if we don't have to, so we use --skiplowfrag=512 to skip files with fewer than one fragment per 512kB.
