Low power file server platform?

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
At the moment, my primary home storage consists of 48 1TB drives and 24 1.5TB drives, plus some odds and ends here and there. The drives are mostly configured as RAID6 arrays (plus hot spares) that are rsync'd between two independent systems, each with about 32TB of available storage.
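The sync between the two boxes is nothing fancy; roughly something like this out of cron, with made-up hostnames and mount points:

    # nightly push of one array to its twin on the other box (paths and host are placeholders)
    rsync -aHAX --delete --numeric-ids /srv/array0/ otherbox:/srv/array0/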

I made a significant investment in a tape library to deal with proper backups. In theory that means I don't absolutely need the mirrors, but the chance of hitting an unrecoverable read error while rebuilding a degraded array is still uncomfortably high. I'd like to move to a different method for distributing my data across disks, and that's something RAID-Z provides: I can mirror files across sets of disks or tell the file system to maintain multiple copies of data, expand the arrays easily, and there are some built-in reliability features as well.
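Something along these lines is what I have in mind (pool and device names below are just placeholders, and note that a raidz pool grows by adding whole vdevs rather than single disks):

    # double-parity pool, a dataset that keeps two copies of every block,
    # and later expansion by tacking on another raidz2 vdev (placeholder disk names)
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
    zfs create -o copies=2 tank/media
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11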
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
Relatively speaking, it's not that much porn. And I put it all together because I wanted to figure out the best way to keep such a huge volume of data.

When it's all working, it's really not that complex. It's just difficult to describe.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
It's a fun project and I appreciate the love of working with hardware, but let's face it: you've probably spent well over $10K to provide storage and backup for images/video/media of naked ladies. You could have gone to Nevada and bought the real thing many times over. ;-)
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
Porn is probably the easiest possible media to collect. Until very recently there were no practical ramifications for sharing it online.

Now, if you want to talk about the other thing, I think there are ethical issues to making a trip to Nevada and that's not something I would choose to do. But I'm fine with spending several thousand dollars to have and fill an awesome media library.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
Just a tidbit. I'm looking at some of these heatsinks to sandwich some arrays of 9x 3TB drives, with the whole shebang acoustically decoupled from a custom wooden cabinet. They were cheaper than I was expecting.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
I'm about to start the process of updating my file server systems. My current rigs are older Core 2 Duo machines of one sort or another, each on a full ATX motherboard. Most of them use a combination of onboard SATA and IDE ports and 8- or 16-port SATA/SAS controllers. They mostly have power supplies in the 500-700W range.

Do you have any stats (UPS load estimate, watt meter, etc) on how much power the components draw now?

My file servers are running RHEL or CentOS right now, though looking forward I suspect I'd be better off with a more up-to-date platform. I could look at another Linux distribution, at BSD or Solaris for ZFS support, or at Windows Server, since that's what I spend most of my time using these days.
CentOS 5 will still be maintained for several years to come, and CentOS 6 is right around the corner (or use Scientific Linux if you want it now). I don't see any requirement to move away from this platform for a file server.

However, ZFS is tempting if it can meet your RAID and filesystem needs, versus layering another filesystem on top of MD raid - I would stay away from LVM altogether. That said, I'd probably go with a BSD or Solaris platform if I required ZFS, since ZFS on Linux probably isn't as widely used and vetted as it is on Solaris/BSD.

If sticking with Linux, I'd use ext4 - many distributions are moving to it as the default (Google uses it exclusively on their servers as well as on all Android phones), and it promises faster fsck times and other optimizations for large file systems.
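The MD raid + ext4 route is only about three commands (device names are placeholders):

    # 8-disk RAID6 array with ext4 on top; -m 0 drops the default 5% root reservation
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
    mkfs.ext4 -m 0 -L media /dev/md0
    mount /dev/md0 /srv/media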

Have you thought about using FreeNAS or another NAS-oriented distribution?

I'm contemplating a move to Atom-based machines to get power consumption down while at the same time migrating my disk arrays to what will probably wind up being arrays of 3TB drives, but while I'm mulling it over, it might be interesting to talk about it here.

Any modern Intel CPU is going to be pretty efficient at idle - anything in the same family (LGA1155, LGA1156) should have roughly the same idle draw. My guess is that the disks are going to be using a lot more power than your (mostly idle) CPU, so I would focus on that aspect and choose fewer, larger drives and the most efficient models. After that, the next biggest efficiency gains would come from the PSU and add-in cards; look at the CPU and motherboard last.


Personally, I have been recommending 3.5" Seagate ES and WD RE drives for those who want SATA-based file servers. These drives are limited to 2TB, which probably says something about the reliability of 3TB drives. 2.5" drives offer a small power savings over 3.5", but their (much) smaller capacity doesn't begin to make up for it.

Looking at WD's specs, the RE4-GP lists a maximum draw of just under 7 watts and an idle draw of just under 4 watts, compared to the standard RE4's 11W/8W. WD also has an AV-GP model which claims 4.5W/4W. All of these models are rated for 24/7 duty, and the RE series has a 5-year warranty. Seagate doesn't have a low-power ES, but their standard ES does draw slightly less than WD's RE4 series.
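As a very rough back-of-the-envelope using those idle figures: swapping 24 standard RE4s (~8W idle each) for RE4-GPs (~4W idle) saves on the order of 24 x 4W = ~96W at idle, or a bit over 2 kWh per day, before you touch the PSU or CPU at all.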
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
I'm worried that CentOS isn't going to be around forever. Red Hat is becoming less friendly to it. I could move to SuSE or Debian pretty easily.

Red Hat is being less friendly to commercial products like Oracle's Unbreakable Linux; it hasn't done anything unfriendly to CentOS. There was an article about the way Red Hat has been repackaging its source RPMs in RHEL 6 which claimed that this would hurt all RHEL derivatives. Unfortunately, the author got a few things wrong; I corrected him, and he subsequently posted a correction.

BSD and Solaris are interesting because of ZFS. Solaris has a much more mature implementation, but of course it's also on the operating system endangered species list. BSD's version is said to be way behind, but it's also an OS with a stable future.

In response to the stable-future point and the earlier CentOS worries, I do think CentOS will be around. The project has had some big problems recently with openness and transparency. However, there have been some visible efforts to improve this in the last couple weeks.

If something does happen to CentOS, Scientific Linux should be able to meet your needs just as well. It only takes updating one file (the yum repository config) and running one command (yum reinstall) to switch a system between SL, CentOS, and RHEL.
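Roughly speaking, it comes down to something like this (exact file and package names below are from memory, so treat them as placeholders):

    # point yum at the SL mirrors instead of the CentOS ones, then
    # re-pull the installed packages from the new repository
    vi /etc/yum.repos.d/CentOS-Base.repo
    yum clean all && yum reinstall \*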
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
The project has had some big problems recently with openness and transparency. However, there have been some visible efforts to improve this in the last couple weeks.

I've noticed that CentOS seems to be taking longer and longer to catch up with Red Hat's releases, which is a cause for some concern. I'm worried that CentOS as a project isn't all that healthy.

I did finally slap an i3-2100 together and load FreeBSD on it. I don't really have much to report as yet; I'm still re-familiarizing myself with doing things the BSD way.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
The point in any case is to replicate data across more than one set of drives on more than one physical set of hardware. I'm not sure I care how I do it as long as it gets done.
 