Glen Pitt-Pladdy :: Blog

Raspberry Pi SD Card Test

I've had a few projects on the go that use Raspberry Pis of various flavours, and storage (SD) performance has been one of the clear bottlenecks.

SD cards (in fact all types of flash storage) are designed and optimised for different IO patterns. If you're reading or writing large video files (eg. camera applications) then the requirements are very different from general OS usage, or from database IO patterns (when tuned right, these are typically around 80% random writes, with hot areas cached in memory to avoid reads).

I had a bunch of SD cards kicking about, so I compiled iozone for the RPi to test them.

SD Cards for Raspberry Pi benchmarking

Benchmarking considerations

Benchmarking is a complex subject and in many cases dramatically different results can be achieved with subtly different circumstances. A different IO scheduler, hardware talking to the storage, kernel version, or IO pattern (application) can give completely different results.

Because of this, what I'm putting here is not an endorsement of any particular make or model of card. They're all optimised for different usage and platforms. One that performs poorly in a RPi may be blistering in a phone or camera, and with a different OS you may again see different results.

The results here also bypass OS caching/buffering, which can make a huge difference to throughput, particularly given we are using multiple threads in all cases.

Flash memory is split up into blocks. Each block is first erased, which sets it to 0xff (this is why the erased state for SD is 0xff, why 0xff marks blocks as available for wear levelling, and why filling your SD with 0x00 is a bad idea), and then programmed (setting 0 on the bits that need it). The firmware does all sorts of tricks like mapping your data to different blocks, or parts of different blocks. Blocks can be quite large, so a small write is likely to fill only a small part of one. This is why large amounts of small random writes are often much slower than other operations: changing a small amount of data in a block requires erasing the whole block and re-writing its data. This also results in much higher wear.
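As a rough illustration of why small random writes hurt, assuming a hypothetical 4MB erase block (real erase block sizes vary by card and are rarely documented):

```shell
# Hypothetical figures - erase blocks on SD cards are often in the MB range
erase_block=$((4 * 1024 * 1024))   # 4 MB erase block (assumed)
write_size=$((4 * 1024))           # a single 4 kB random write

# Worst case, the controller must erase and rewrite the whole block
# just to change 4 kB of it - a write amplification of:
echo "amplification: $((erase_block / write_size))x"   # prints "amplification: 1024x"
```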

Different models, and even different sizes within a range use different block sizes, so can perform completely differently depending on the IO pattern. This is particularly important to consider here since I've got a bunch of different size cards. Another size in the same series may behave completely differently.

Another factor is the amount of wear. As flash cells wear, operations on them may take longer (especially if retries are needed). All these cards have varying amounts of usage.


I've got both a B+ and a Zero W to test with. The bandwidth to the SD card is not very high on the RPi, and I've heard that you shouldn't expect more than about 20MB/s. Many higher-performance cards quote speeds many times this, so don't expect to see figures like that from a RPi.

In all cases the cards start off by being fully written with 0xff using dc3dd. This marks the entire card as unused and usable for wear levelling. Then 2017-03-02-raspbian-jessie-lite.img is written to the card with dd. This is 1.3GB in size, and on first boot the OS resizes the root partition to maximise use of the card. This means that most of the space in all cases will be marked with 0xff, ie. free to the SD firmware. I also fully update the OS and add a bunch of useful packages.
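A sketch of that preparation as a script; the device name and the exact dc3dd options (wipe= with a pat= hex pattern) are assumptions you should verify for your setup, and the commands are destructive, so double-check the device before running anything like this:

```shell
# Wipe an SD card to 0xff and write a fresh Raspbian image onto it.
# Destructive - only run against the correct block device!
prepare_card() {
    dev="$1"; img="$2"
    if [ ! -b "$dev" ]; then
        echo "not a block device: $dev" >&2
        return 1
    fi
    # Fill the whole card with 0xff so the firmware sees every block as erased/free
    dc3dd wipe="$dev" pat=ff
    # Write the image; on first boot the OS grows the root partition
    dd if="$img" of="$dev" bs=4M conv=fsync
    sync
}

# Usage (destructive!):
# prepare_card /dev/mmcblk0 2017-03-02-raspbian-jessie-lite.img
```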

Tests are first performed on the B+ then on the Zero W. I'm using the iozone command:

# iozone -T -t 4 -s 256m -r 4k -I -i0 -i1 -i2

This uses Direct IO to avoid measuring the performance of the OS and caching/buffering. These can make a dramatic difference to the figures.

This does 4 threads each operating on 256MB (1GB total) of data with a 4k block size. The aim is to provide some idea of IO with multiple processes running, while not exceeding the type of small workloads that a RPi would typically run.
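For clarity, the footprint of that run works out as follows (simple arithmetic, using the shell as a calculator):

```shell
threads=4       # from iozone -t 4
file_mb=256     # per-thread file size, from -s 256m
record_kb=4     # record size, from -r 4k

echo "total data: $((threads * file_mb)) MB"               # 1024 MB = 1 GB
echo "records per thread: $((file_mb * 1024 / record_kb))" # 65536 x 4 kB IOs
```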

No tuning is done - this is vanilla Raspbian.


These are a bit of a surprise in some respects. The arguably higher-spec EVO+ doesn't do nearly as well as the vanilla EVO. The SanDisk Ultra, which seems popular for use with the RPi, is only slightly better than the EVO+ and a long way short of the vanilla EVO.

Raspberry Pi B+ SD Benchmarks

The RPi Zero achieves slightly faster throughput on reads, but is very similar overall.

Raspberry Pi Zero W SD Benchmarks

There you have it! The biggest gap is with random writes, which cause the most erase cycles and wear, and here there's a more than 10x difference in performance.

The Random IO performance of the EVO is actually quite impressive considering it's just a cheap SD card: that works out to about 250 write IOPS, and about 1800 read IOPS!
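Working back from throughput to IOPS with the 4kB record size from the iozone run; the throughput figures below are illustrative, chosen to match the IOPS quoted above rather than taken from the result tables:

```shell
record_kb=4       # record size from iozone -r 4k
write_kbps=1000   # ~1 MB/s random write (illustrative)
read_kbps=7200    # ~7.2 MB/s random read (illustrative)

# IOPS = throughput / record size
echo "write IOPS: $((write_kbps / record_kb))"   # 250
echo "read IOPS:  $((read_kbps / record_kb))"    # 1800
```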

As above, this is by no means representative of different scenarios, and all sorts of different use cases may provide completely different results.