Glen Pitt-Pladdy :: Blog

Page allocation failures and other weirdness

My home server works hard: CCTV with Zoneminder, PVR with MythTV, serving Squeezecentre with IMMS, and lots of automated processing and monitoring. It doesn't always like it....

Grumpy memory allocation

Periodically my server monitoring flags up log entries like:

kernel: swapper: page allocation failure. order:0, mode:0x20
kernel: Pid: 0, comm: swapper Tainted: G W 2.6.27-14-server #1
kernel:
kernel: Call Trace:
.
.
.

These were rare and didn't seem to do any significant harm, but were just annoying and often seemed to relate to Zoneminder or sox (used by the analyzer for IMMS).
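A quick way to check whether a machine has been hitting these is to search the kernel logs. This assumes syslog-style logging under /var/log - the exact paths vary by distribution:

```shell
# Show the most recent page allocation failures from the kernel logs
# (log paths are distribution-dependent - adjust as needed)
grep -h 'page allocation failure' /var/log/syslog /var/log/messages 2>/dev/null | tail -n 5
```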

Episode 2

I started seeing the same sort of thing on an Ubuntu server used for CCTV and for storing backups of virtual machine disk images, again with Zoneminder. More annoyance! When I got some time I did some research to find out what was going on.

My findings (and solution) suggest that the problem is some kind of free memory starvation, but I'm only digging deep enough into Linux kernel memory management to find a solution rather than to gain an in-depth understanding.

My understanding is this: Linux takes the approach that if there is memory unused by processes, then the smart thing to do is to use that free memory to speed the system up, e.g. for caching. Makes sense to me - why waste it?

To ensure that there is free memory on tap for processes that request it, the kernel keeps a certain amount free, and this is where the problems seem to be happening (perhaps someone with deeper knowledge of the code can explain this better). When there isn't enough free memory on tap, requests for memory (malloc) may fail.
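You can get an idea of how much headroom the kernel currently has from /proc/meminfo; MemFree is the pool these allocations draw on:

```shell
# Show total and currently free memory (in kB)
grep -E '^(MemTotal|MemFree)' /proc/meminfo
```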

The kernel is well written: it realises an allocation has failed and logs it. Many apps aren't so good and don't bother to check the pointer returned from malloc/calloc (that was something I had drummed into me by lecturers and tutors at university - ALWAYS check the pointer for a failed allocation) and then run off and try to use a NULL pointer. This results in the kernel clobbering the process for bad behaviour (SEGV).

It seems that Linux almost always does memory allocation sensibly, but along come people like me with configurations that suddenly need large chunks of memory (eg. large buffers in Zoneminder and probably other apps too), and memory allocation requests sometimes get denied.

Fix

I found lots of people in forums looking for a way to reduce the heavy caching in Linux, and being told it wasn't possible (not easily, I assume). Admittedly, this was where I started, and in retrospect it was completely the wrong approach. Where we all seemed to be running into problems is that we were trying to get the kernel to keep more free memory in reserve, thinking that by reducing the caching (the main memory hog) this would happen.

Too much thinking!

Linux has a mechanism for reserving more free memory (which also means it will cache less). First get an idea how much it is reserving:

root@servername:~# cat /proc/sys/vm/min_free_kbytes
3816

That means that it is currently keeping just under 4MB of memory free. Not much when some memory-hungry process decides it wants a large hunk of memory.

The next step is to figure out how much we actually need. You could try the "trial and terror" method (just pick a bigger number and hope for the best), but I prefer to try to be a bit more intelligent about it. In the case of both these servers I expect the main problem to be Zoneminder's buffers. On my home server these are 40 frames at 768x576, and on the Ubuntu server the biggest one is 40 frames at 1280x960 (AXIS IP cam). Doing some maths gives about 52MB and 144MB respectively. To allow a bit of wiggle room, I put 65536 (64MB) and 262144 (256MB) respectively.
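The maths can be reproduced in the shell, assuming 24-bit colour (3 bytes per pixel) - the exact figure will depend on the capture palette your cameras use:

```shell
# Zoneminder ring buffer size: frames * width * height * bytes-per-pixel
# Assuming 24-bit colour (3 bytes/pixel) - adjust for your capture palette
frames=40
echo "768x576:  $(( frames * 768 * 576 * 3 / 1024 )) kB"
echo "1280x960: $(( frames * 1280 * 960 * 3 / 1024 )) kB"
```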

root@servername:~# echo 65536 >/proc/sys/vm/min_free_kbytes

And from that moment on everything seemed to be happy. This change will only last until reboot, so if you want it to be permanent then it's best to add it to sysctl. In my case I had already created a file in /etc/sysctl.d for setting up shmall and shmmax, so I simply added an extra line:

vm.min_free_kbytes = 65536

Otherwise create a file in /etc/sysctl.d with a descriptive name (avoids unnecessary head scratching later) like 10-local-zoneminder with that line in it, and to be really smart, add a comment saying what it is about (and maybe even this URL for an explanation), who made the change, and the date.
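Putting that together, creating the file and applying it without a reboot might look like this (run as root; the 10-local-zoneminder name follows the example above, and note that newer distributions expect a .conf suffix on files in /etc/sysctl.d):

```shell
# Create a clearly-named sysctl fragment, with a comment recording
# what it's for, who added it and when (placeholders - fill in your own)
cat >/etc/sysctl.d/10-local-zoneminder.conf <<'EOF'
# Keep more memory free so Zoneminder's large buffers can always be allocated
# Added by: <your name>, <date>
vm.min_free_kbytes = 65536
EOF

# Apply it immediately without rebooting
sysctl -p /etc/sysctl.d/10-local-zoneminder.conf
```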

That's all there is to it. Enjoy the extra free memory!
