Home Servers
My existing home server has started getting flaky. It has been running a long time (well past the design life of the components) and has given little trouble until now, but monitoring systems were starting to report early signs of problems, so I decided it was time to act.

Basic requirements (functional spec)

Home servers are something that I expect to become increasingly common (maybe even integrated with broadband modems, WiFi etc.), but for now they remain strictly something for the tech savvy. The basic functionality that I use mine for includes DVB recording, serving music to a Squeezebox, and hosting web sites and databases.
Going Green, but not *that* Green

My experience is that mass apathy prevents most people from giving more than lip service to being "Green", as in saving the planet. The type of "Green" that does matter to people is the foldy paper type they have in their pocket. If the idea of being environmentally friendly is to be sold successfully, I think it will be sold on financial benefits rather than by appealing to people's conscience.

In this case my existing server idles at about 80W power consumption with all the power saving mechanisms that I can practically use. My target for this server was to halve the power consumption. This is significant as the box is on all the time, so the running cost adds up rapidly. In data centres this becomes even more of a problem, as power distribution and cooling combined with keeping systems redundant become a massive energy overhead.

For this server I was looking at platforms like Via processors and Intel Atom, but ran into a problem with simply not having enough PCI slots for all my DVB and network cards with any of the boards I could find for them. All these platforms have 2 PCI slots maximum, which just doesn't come close to my needs. I also considered Intel mobile processors, but drew a blank on suitable motherboards. There are a few server platforms around based on these (as well as the Atom mentioned above), but not mainstream enough to be practical for a home server.

Eventually I decided on the AMD Energy Efficient processors, which on the AM2 socket boast a maximum thermal power of 45W (considerably lower than Intel's offerings, although those do offer more computing power in return). What really matters though is the idle power consumption (the CPU spends most of its life idle), but I can find next to nothing in the way of specs or representative data for that. My final choice was an AMD Athlon AM2 2.6GHz Energy Efficient processor. Not the most powerful, most energy efficient, or cheapest, but for this application I think it is the best overall balance for my needs, and only £29.57 for the retail boxed version (includes heatsink). For more grunt there is a dual core version for not a lot more money, which claims the same power consumption.

Another key item is the graphics card. Built-in graphics can save around 10W or more of power, and as this is just a (headless) server, minimal graphics capability and keeping slots free for cards is what matters. This meant built-in graphics.

Finally, probably the most important thing for low power consumption is an efficient power supply. Many power supplies waste huge amounts of power, and to combat this the "80+" rating was introduced. This simply means that the supply will be better than 80% efficient under the test conditions. The test is not exhaustive, but power supplies rated "80+" are likely to be more efficient than the rest. In my case I went for a HEC Compucase Green Earth 350W supply. This should be plenty of power for this box, and at £26.89 is excellent value. Interestingly, I measure 5W when off with this supply. The manufacturer's spec says less than 1W, but it may be legit since the motherboard may account for the other 4W when off. Since it is a server and will be on permanently, this is not a bother for me.
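To put the running cost in perspective, a rough worked example (the electricity price here is my own assumption for illustration, not a figure from the build): 80W running continuously is 0.08kW x 24h x 365 days, or about 700kWh a year, which at an assumed 10p/kWh comes to roughly £70 a year. Hitting the target of 40W would therefore save in the region of £35 a year, every year.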
Reliability

This is a vital area for any server that isn't depending on massive clustering for redundancy (e.g. the Google approach). When I am doing corporate work there is no compromise on buying quality hardware, but with that comes cost, and hardware that will not be easy to domesticate (often very noisy).

My choice is to use Asus motherboards. Motherboards are often the source of reliability problems in cheap PCs, and Asus has a very good track record of producing reliable boards. Their desktop motherboards often turn up in budget servers, so in the interests of keeping costs low that is also going to be my approach. I would have liked an M2N-LR (a low end server board from Asus with dual Broadcom Gigabit ethernet and lots of expansion slots), but at around £200 it's a bit steep. After lots of hunting around I settled on the M3N-H/HDMI, which has 3 PCI slots, 2 PCI-E slots and a PCI-E x16, built-in graphics (HDMI with a DVI adaptor, and a VGA connector that can be fitted), and all the usual features including variable speed fan controllers to keep noise down. The only annoyance is that it has just a single nForce ethernet port, which I have not had good experiences with (problems when using Jumbo Frames, and it generally seems to be a bit flaky, probably due to the Linux driver being based on reverse engineering). This means that some of the slots will need to be used for network cards. The board cost me £51.29 - there are cheaper boards, but for the reliability, the number of PCI and PCI-E slots, and the features, I don't think it is worth going any lower.

Memory is the other area which is often a source of problems when corners are cut. Generally choosing a good brand is all that is needed (I have often seen problems with unbranded or "house" brand memory). In my case I picked up 2 strips of 512MB Kingston memory for £6.41 each. That's loads of memory when running Linux, even with all the things that this box does, and allows for another 2 strips later. It really isn't worth taking chances on memory when it will only save a Pound or two and may waste tens or hundreds of Pounds of my time if it is flaky.

Another thing that is often underappreciated for reliability is the case. Cooling is a vital part of keeping a computer running reliably, and the design of the case is probably the single most important factor. A case that is easy to cool also means that fans can run slower, and that makes it quieter. For this box I found a mesh-fronted case with a 120mm fan in the front (blowing over the hard drives) and another in the side to push air over the CPU, motherboard and cards. The mesh front means that far less effort is required to move air through the case, and it would probably even be OK without any case fans with the low power design I am using (also good if a fan fails, as they often do). Since the case is so good on cooling, I have rigged a cable to provide 5V instead of 12V to the case fans, which runs them slower and much quieter. Under normal running there is so much cooling that the CPU fan stays on minimum speed and the hard drives are barely even warm.

The final bit on reliability: assembly. It is vital that components are assembled carefully, securely and, most importantly, using static (ESD) handling precautions. At this point someone will chirp up "But I have been building PCs without static handling precautions for years and never killed one". True - static will rarely be enough to kill a component outright (they do have some static protection built in), but it will often be enough that reliability deteriorates with time and the component eventually dies prematurely (or just becomes so troublesome that it gets replaced).
At one point I was running the static testing for a chip design house and training staff, as well as being involved in the investigation and diagnosis of failed chips. Apart from very fragile designs (so much so that they were not able to survive machine assembly in a static controlled environment and had to have layout changes to be practical), many devices did survive testing up to fairly high levels of static discharge. They did however suffer degraded performance which could lead to long term reliability problems. As these sorts of problems will not show up without specialist equipment to test for them, most people will be blissfully unaware of any immediate damage. Ignore static precautions at your peril.

Tweaks

Largely, there were no complications to speak of - everything just worked first go. The most annoying thing was the VGA connector (as the box is headless and only has a monitor connected when needed, this is the most practical way of connecting one). This is on a ribbon cable to a plate that fits into a standard slot, but the ribbon cable is so short that it will only reach the slots closest to where it plugs into the board. Since I need use of all the PCI slots this needed fixing. My solution was simply to cut the ribbon cable at an angle, which allows each of the wires to be stripped without touching the adjacent ones, and solder in about 4-5" more cable. I then filled the gaps with hot-melt glue to prevent them moving and touching, and covered the lot with tape to ensure the joints are supported and don't touch anything else. That allowed me to fit the connector to a slot which is free.

Other stuff was simple configuration. The biggest job was lm-sensors and sensord (I like servers to be closely monitored for early warning of any problems). As manufacturers never seem to provide information on the sensor configuration, this is largely guesswork. The M3N-H/HDMI uses an it8716 chip, and that needed the relevant parts (labels for the sensor inputs) altering in /etc/sensors3.conf, along the lines of the sketch at the end of this section. And that's about all there was to it.

About the only other change I made was to install cpufrequtils, which provides the tools and init scripts to make running the kernel CPU frequency governors easy - in fact, I didn't need to configure anything beyond that as the defaults were good. The old server was running powernowd, but letting the kernel control this is much cleaner... and I like clean: it results in far fewer problems in the long term. The one thing that I couldn't do is get any tools for power management of the chipset - all the tools available seem to be for older chipsets. I am having to depend on the BIOS to set any power saving at boot, and hope that it is doing a good job of it.
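As a rough guide, this is a minimal sketch of the sort of entries involved in /etc/sensors3.conf, assuming the it8716 is handled by the it87 driver; the label assignments are illustrative guesses rather than the actual mapping for this board, which has to be worked out by comparing readings against the BIOS hardware monitor page:

    chip "it8716-*"
        # give the raw inputs meaningful names (guesses - match them up against the BIOS readings)
        label temp1 "CPU Temp"
        label temp2 "M/B Temp"
        label fan1 "CPU Fan"
        label fan2 "Case Fan"
        label in0 "VCore"
        # hide any inputs that aren't wired up on this board
        ignore fan3

After editing, running sensors should show the new labels and sensord will report using them. Similarly, cpufreq-info (from cpufrequtils) is a quick way to confirm which governor the kernel ends up using.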
Results

The final cost, including shipping but excluding the existing hard drives, DVB cards and additional NICs which came from the outgoing server, was £142.56 delivered. Most of the cards were originally bought on eBay for about £6 each, so not a lot of extra money. Power consumption (playing music on a Squeezebox) comes to a mere 60W, and even when working hard I can only get it up to 84W. Idle (like it is most of the time) results in power consumption of 54W (higher than my target, but a good result none the less). Before fitting all the DVB cards and the second hard drive this was running at about 41W (which would be a representative figure for a typical desktop built with these parts).

I think better would be achievable by disabling more unnecessary hardware (even USB!), and using modern "Green" hard drives which use about half the power of the ones I am using. Another option that I may investigate later is using CompactFlash for the parts of the system that need to be kept immediately accessible (system, web sites, databases and swap). Flash memory is getting bigger, faster and cheaper all the time, and the better CF cards have near instant access, often making them faster than hard drives in practice despite their limited data transfer rate (beware: many cheap CF cards have access times slower than a hard drive). Linux can be made to run happily on very little disk space, so using CF as the primary drive is practical, especially when desktop apps aren't needed.

In terms of noise it's excellent: I measured 34.0dB ('A' weighted at 1m). Interestingly, with the server off, the meter dropped to 33.4dB, which is the background noise in the room. That suggests that without the background noise we are looking at the 25-30dB region for the actual noise produced by the server (a rough calculation is sketched below), but that is not practical for me to measure directly.

There is ample scope for expansion later: 3 spare slots, 4 free SATA connectors, 1 free IDE connector, 2 free memory slots, and even with the fans running slow there is plenty of excess cooling capacity (the hard drives are hardly even warm at 31 degrees C for one and 25 degrees C for the other when active).
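For the noise estimate, the background has to be subtracted on a power basis rather than in dB; the only measured inputs are the 34.0dB and 33.4dB readings above, the arithmetic is mine:

    server noise ≈ 10 x log10( 10^(34.0/10) - 10^(33.4/10) ) ≈ 10 x log10(2512 - 2188) ≈ 25dB(A)

That is consistent with the lower end of the 25-30dB estimate, although with only 0.6dB between the two readings the result is very sensitive to measurement error.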