Benchmarking Disk performance with RAID and iostat
Benchmarking drive performance is always a tricky business, and emulating real-world usage patterns accurately is near impossible. There is however an alternative approach I discovered on my workstation, which over time has evolved into an mdadm RAID5 array with 3 different brands of drive. Any RAID level that distributes IO requests evenly will work for this (perhaps RAID1 / RAID10 are not good choices due to the seek optimisation that may occur), as it provides the same real-world IO to all 3 drives, and the comparative performance can be seen with iostat:
$ iostat -x

What this tells us of significance is that sdb is much faster than sdc or sda. Note the long average wait time for sda. Interestingly, this brand/model is often regarded as a good performer, and there is much debate in forums over the relative performance merits of the brands/models of sda and sdb, yet here one of them is clearly the significantly better performer. The drive/model of sdc is actually a very old drive from an outsider brand, but it seems to give a convincing performance.

One thing to note with all this is that immediately after boot, sda would appear to be by far the strongest performer, but the await figure (average wait since boot) drifts to this ordering with time. In many ways that's just typical of benchmarking - unless you are able to very accurately replicate usage patterns, it's easy to get benchmarks that are not relevant to your situation. In this case, from other tests it appears that sda is slightly quicker on seek but has a lower linear read speed. The result is that during the massive thrash at boot, when many files are being accessed (lots of seeks), it comes out in front, but in normal running without heavy seeking the higher linear read speed rules.

This is an important effect as it means that if I was to change usage habits, run a different OS, or perhaps just use a different IO scheduler, the results may be completely different. I am purposely avoiding saying what makes/models these drives are, because the aim of this posting is to highlight the importance of benchmarking with real access patterns and to take benchmarks with different access patterns (such as this posting) in context: only if your access patterns have been accurately replicated will a benchmark be valid.
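For anyone wanting to reproduce this kind of comparison, a minimal sketch of the commands involved (assuming an mdadm array built from sda, sdb and sdc, as in my case) might look like this. Note that the first iostat report shows averages since boot, while subsequent reports cover only the preceding interval, which sidesteps the "drift since boot" effect described above:

# Confirm which drives are members of the array
$ cat /proc/mdstat

# Sample extended per-device stats every 10 seconds, 6 reports
$ iostat -x sda sdb sdc 10 6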
Disclaimer: This is a load of random thoughts, ideas and other nonsense and is not intended to be taken seriously. I have no idea what I am doing with most of this so if you are stupid and naive enough to believe any of it, it is your own fault and you can live with the consequences. More importantly this blog may contain substances such as humor which have not yet been approved for human (or machine) consumption and could seriously damage your health if taken seriously. If you still feel the need to litigate (or whatever other legal nonsense people have dreamed up now), then please address all complaints and other stupidity to yourself as you clearly "don't get it".
Copyright Glen Pitt-Pladdy 2008-2023
Comments:
Glen,
I'm trying to monitor "storage performance" on my VPS - do any of your cacti toolkits chart sar output or iowait?
Or do you have any ideas how to accomplish this?
I have a VPS that over the last week or so has developed significant performance issues; 'sar' shows iowait up to 98 or so at those times, so I thought perhaps charting that might be of future benefit.
The vmstat templates include a CPU graph with "wa" (iowait equivalent from vmstat). Also see my iostat templates - these graph the full set of extended IO metrics.
One thing to note with iowait is that it can be a very misleading metric depending on the type of workload. As with all monitoring, it's well worth fully understanding how metrics are measured by the OS / tools, as well as what circumstances can lead to misleading data.
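As a rough illustration of the kind of figures those templates graph (not the template code itself, just a quick manual check, assuming sysstat is installed on the VPS):

# CPU breakdown including %iowait, averaged over a 10 second sample
$ sar -u 10 1

# Extended per-device IO stats over the same kind of interval
$ iostat -x 10 1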