
New Server

This week I have been building a new home server, based around an Intel Core i5-3450S low(er) power chip, an excellent Fractal Design Define Mini case, and various other components including low voltage Crucial memory. It's been more difficult than expected!

The first problem I encountered was a complete boot failure if I tried to run the memory at anything other than the "standard" 1.5V. Much beeping and rebooting later, I discovered that the problem was in fact caused by a BIOS bug, and a BIOS update fixed it. However, I also discovered that it is possible to get the machine to reboot every 2 seconds with a certain combination of BIOS settings! Needless to say, I haven't attempted to narrow down what they are.

In addition, the motherboard fan controller refuses to allow me to change the duty cycle for the front fan, to slightly reduce the RPM and noise. Again, after much faffing about, I solved that problem by disconnecting the fan altogether. The system generates so little heat that it's not needed.

For this server, I decided to use a RAID 5 disk setup. I had naïvely assumed I could use the Intel Matrix RAID built into the motherboard. This is disparagingly referred to as "fake raid" by the Linux community, because the firmware does so little that it requires an operating-system driver to do the real work (or, as it turned out, to not quite do it).

The only way to install Proxmox (my server environment, see here for details) on a non-standard RAID setup is to first install a Debian Squeeze base system, and then install Proxmox on top of that. So, first I attempted to install Debian using the minimal network install, adding the magic flags to enable dmraid (the driver that provides fake raid support). That was very helpful... except that the driver presented me with a read-only RAID 5 array. Not much use for a fileserver.
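
For reference, enabling fake raid support in the installer is just a boot parameter; this is from memory of the Squeeze-era installer, so treat it as a sketch:

    # At the Debian installer boot prompt
    install dmraid=true

    # After installation the array appears under /dev/mapper,
    # with an isw_ prefix for Intel Matrix RAID volumes
    ls /dev/mapper/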

The next step was to give up, and instead create a software RAID. With such a powerful processor, software RAID is probably as fast as, if not faster than, fake raid, with the advantage that the RAID setup is motherboard agnostic. Luckily, I understand the concepts behind the logical volume manager, LVM, so I was able to set up a software RAID array suitable for Proxmox quite easily.
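
For anyone trying the same thing, the rough shape of the setup was as follows; the device names are illustrative, and the 'pve' volume group name simply mirrors what the stock Proxmox installer creates:

    # Build a RAID 5 array from three partitions (device names illustrative)
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
          /dev/sda2 /dev/sdb2 /dev/sdc2

    # Put LVM on top of the array
    pvcreate /dev/md0
    vgcreate pve /dev/md0

    # Record the array so it is assembled at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf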

The next thing was to realise that I had failed to leave any spare space for snapshots in the LVM setup (so no backups of running VMs). Also, I had formatted the partitions using ext4 rather than the preferred (for Proxmox) ext3, so that required some more adjustments.
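
Second time around, the layout looked roughly like this; the sizes are illustrative rather than my exact numbers, the point being to leave unallocated space in the volume group for snapshot volumes:

    # Logical volumes in the 'pve' volume group, deliberately not using 100% of it
    lvcreate -L 20G  -n root pve
    lvcreate -L 4G   -n swap pve
    lvcreate -L 500G -n data pve

    # Proxmox prefers ext3 for its storage volumes
    mkfs.ext3 /dev/pve/root
    mkfs.ext3 /dev/pve/data
    mkswap    /dev/pve/swap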

Finally, I ended up with a working base system correctly set up. From there, it was a simple matter of adding the Proxmox repositories and installing the required software (and adjusting the network configuration). Easy, and well documented.
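
The steps are essentially those on the Proxmox wiki for installing on Debian Squeeze; from memory (so double-check against the wiki) they boil down to:

    # Add the Proxmox repository and its signing key
    echo "deb http://download.proxmox.com/debian squeeze pve" >> /etc/apt/sources.list
    wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

    # Install the Proxmox kernel and management packages
    apt-get update && apt-get dist-upgrade
    apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps

The network adjustment is mostly a matter of adding a vmbr0 bridge stanza to /etc/network/interfaces so that guests have something to attach to.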

Hooray, a working system with the Proxmox web interface just waiting for my virtual machines to be migrated across. At this point, I decided to use the Proxmox clustering facility to move VMs between the old host and the new server. To cut a long story short, I was unable to join the new server to the cluster... and once it's broken, the recommended "fix" seems to be to completely reinstall the system! I can now install a Debian base system with Proxmox on top with my eyes closed.
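
For the record, joining a cluster is only supposed to take a couple of commands; pvecm is the Proxmox VE 2.x cluster tool, and the cluster name and address below are made up for illustration:

    # On the existing host: create the cluster
    pvecm create homecluster

    # On the new server: join it, pointing at the existing host's IP
    pvecm add 192.168.1.10

    # Check membership and quorum
    pvecm status

In my case it was the join step that failed.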

Anyway, it's now working, and I'm migrating the last of my VMs across (using backup/restore, rather than clustering). This blog post was written on the new server on my "web site" VM. The new server is quiet and much faster than the old one, and importantly doesn't suffer so badly from inadequate IO (well, nothing that a hardware RAID setup wouldn't cure, with some 10K SAS drives).
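
The backup/restore route is pleasantly low-tech; the VM ID and paths below are illustrative, and the archive name and extension depend on the compression options:

    # On the old host: dump the VM to an archive
    vzdump 101 --dumpdir /mnt/backup

    # Copy the archive across, then on the new server:
    qmrestore /mnt/backup/vzdump-qemu-101-*.tar 101

    # (OpenVZ containers are restored with vzrestore rather than qmrestore)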

Addendum

Perhaps my enthusiasm was slightly premature. Testing using bonnie++ indicated that my RAID array was delivering approximately 30MB/s write and 170MB/s read. Before installing Proxmox (and its modified kernel) it was reporting over 70MB/s write. Something wasn't right.
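
For reference, the figures came from straightforward bonnie++ runs along these lines; the target directory is just Proxmox's default VM storage path, and the -u flag is needed because it was run as root:

    # Benchmark the VM storage directory; bonnie++ defaults to a test file of
    # twice the machine's RAM, which stops the page cache flattering the results
    bonnie++ -d /var/lib/vz -u root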

Proxmox steadfastly refuse to support software RAID. Their official position seems to be "Proxmox doesn't work with software RAID". Dig a bit deeper, and this seems to be more related to support difficulties and a lack of testing, rather than anything technical. The Proxmox forums have various posts from people who've successfully used software RAID, and some from people like me with performance issues.

The solution (which is applicable to all md-RAID installations) seems to be to increase the stripe cache size from its tiny default (256 entries) to the equivalent of several megabytes. This increased the write performance to over 80MB/s, and read to well over 200MB/s. The individual disks are capable of over 100MB/s each, so this is an acceptable result. Why performance should drop so dramatically with Proxmox installed, I don't know.
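
Concretely, the tweak is a single sysfs write; md0 is an assumption (substitute your own array), and note that the value is counted in pages per device rather than bytes:

    # Raise the RAID 5 stripe cache from the default of 256 entries to 8192
    # (costs 8192 x 4KB x number-of-disks of RAM, so roughly 96MB for 3 disks)
    echo 8192 > /sys/block/md0/md/stripe_cache_size

    # Confirm the new value
    cat /sys/block/md0/md/stripe_cache_size

The setting doesn't survive a reboot, so it needs repeating from a boot script (before the exit 0 in /etc/rc.local, for example).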
