Low-cost, high-capacity storage solution with solid performance

magepug · What is this storage? · Joined: Jul 22, 2009 · Messages: 4
I currently have a 4U server with two 6-disk RAID 5 arrays in it. They are a few years old and I need to expand the storage. I would like to pull one of the arrays out of the 4U server and put it in another box capable of holding 15 SATA disks. Ideally that box would have an embedded RAID controller so I would not have to add more controllers to the server. The expanded storage should be available via iSCSI or multilane eSATA/SAS.

The two options I have found so far are a solution from Addonics and a low-end DAS from Dell.

The Addonics solution is their Storage Rack product:
http://www.addonics.com/products/raid_system/
That would be fitted with their iSCSI module:
http://www.addonics.com/products/san/isc8p2g_overview.asp

This can seat up to 15 disks (3x 5-bay hot-swap) plus the iSCSI module; however, the iSCSI module can only take 8 disks, and they have no intention of making a 16-port model. So what I could do is put 8 disks on the iSCSI module and feed the other 7 back into the server via multilane or eSATA.

The other option is Dell's MD1000. It is fairly affordable until you get to the disks. From what I can tell, you can put commodity (workstation) SATA disks into it, provided you buy the add-on board that lets each disk fit the enclosure; those boards run about $50 each. I realize this puts me at greater risk of drive failure, but I have no problem building a 13-disk RAID 5 array and keeping two hot spares going... I am confident I could replace failed disks and rebuild the array before running out of spares.

Does anyone know of any other solution I have not considered? I want to spend no more than $4k USD, including the price of disks. The Addonics solution would be ideal if only they had a 16-port iSCSI module.
 

Mercutio · Fatwah on Western Digital · Joined: Jan 17, 2002 · Messages: 22,026 · Location: I am omnipresent
Without giving any specific guidance: you can add iSCSI target support to a Linux kernel and then export your disks or arrays any way you feel like configuring them. Or you can just share softRAID arrays over SMB like a normal NAS.
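To give you an idea, here's a minimal sketch using the iSCSI Enterprise Target (ietd) userland that most distros package; the target name (IQN) and device path are placeholders, not anything specific to your setup:

Code:
# /etc/ietd.conf -- minimal iSCSI Enterprise Target sketch.
# The IQN and the device path below are placeholders.
Target iqn.2009-07.local.mybox:array0
    # Export the whole md array as LUN 0 using block I/O
    Lun 0 Path=/dev/md0,Type=blockio

# Then restart the target daemon (init script name varies by distro):
/etc/init.d/iscsitarget restart

Any initiator that logs in then sees the array as one big raw disk.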

Moving to 8+ disk configurations is just never going to be cheap, but if you're starting from standard desktop parts, you could look at Dell Perc5s, which can be teamed to support a total of 16 SATA/SAS drives, at about $125 per controller on eBay.

There are all kinds of 4U enclosures that support large numbers of disks, though they might have less-than-ideal backplanes or other cabling issues. One way to go might be to add SuperMicro 5-in-3 (3.5" in 5.25") drive adapters in your chassis' 5.25" bays, or to buy a real SuperMicro chassis that covers its whole front with 3.5" hot-swap bays.

I suspect that anything involving 15-16 3.5" drives plus enough ports and chassis to run them is going to cost at least $2500. I'm not sure that counts as inexpensive, but it can be done at relatively low cost if you're willing to consider Linux software-based solutions rather than hardware iSCSI targets and the like.
 

magepug · What is this storage? · Joined: Jul 22, 2009 · Messages: 4
I have no problem with a software solution, at least for managing the LUNs and the iSCSI support (I do want HW RAID)... that is what I have now. Ideally I would switch to Openfiler from my current Linux install, but that is more out of curiosity than anything else.

I did not know about those SuperMicro cases... this one is pretty hot:
http://www.supermicro.com/products/chassis/4U/846/SC846E1-R710.cfm

That 2U SuperMicro looks pretty hot as well, but I have plenty of rack space available, so I would rather allow more breathing room :)

Man, you aren't kidding about the card; that's a pretty sexy RAID card:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151037&Tpk=areca

So what I would do is get that 4U SuperMicro, put some port multipliers on the back (5 SATA drives into one miniSAS/eSATA cable), and feed those into the Areca RAID card in another server. That would allow 20 disks to be connected (there are 4 eSATA/miniSAS ports on that card, right?).

Now here is where I have some questions...
1. Does the Areca card see 20 disks, or 4?
2. Say I have a drive failure; how in the hell do I know which drive failed if 5 drives share a single cable?
 

Mercutio · Fatwah on Western Digital · Joined: Jan 17, 2002 · Messages: 22,026 · Location: I am omnipresent
OK.

First thing: if you're serious about hardware RAID and this isn't just a personal project, then whatever controller you buy, buy TWO, in case you ever need to recover your data. If you have 20-something TB of disks hanging off one controller and that Areca card dies in two years with no replacement to be found, you have 20-something TB of inaccessible data. And buying another card later might not fix the problem; something as minor as a firmware revision can change how data was written to the array.

Mostly I stick with softRAID even when I have the option of hardware RAID, because I'm terrified the controller will die and the array won't be portable to a new one. I might feel differently if I regularly had a budget that allowed for multiples of expensive controllers, but that has thus far eluded me.
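That's the thing mdadm gets right: the array metadata lives in superblocks on the disks themselves, so recovery is just a reassemble on whatever dumb controller can see the drives. A minimal sketch, assuming mdadm is installed and all the member disks made the move intact:

Code:
# Reassemble a software array after moving the disks to new hardware.
# mdadm reads the superblock on each member; nothing depends on the old controller.
mdadm --assemble --scan    # find and start any arrays it recognizes
cat /proc/mdstat           # confirm the array came up and isn't degraded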

The only SATA controllers I know of that support port multipliers are Silicon Image 31xx-series parts, which typically turn up on what I'd call lower-end hardware. There are certainly high-density eSATA/SAS connectors that let multiple drives share a single cable, but as far as I know there's no way to get more drives than the controller has ports, other than that SiI-based port-multiplication scheme.
 

magepug · What is this storage? · Joined: Jul 22, 2009 · Messages: 4
I am the opposite... I am more afraid that the software RAID will stop working and leave me with a bunch of bricks. I am much less concerned that the RAID card will fail than that the SW RAID will get screwed up.

Even with software RAID, we still have the same question: there are 6 miniSAS ports on that card and it supports 24 disks. How do I hook up 24 disks, and how do I identify a disk when one fails?
 

udaman · Wannabe Storage Freak · Joined: Sep 20, 2006 · Messages: 1,209
This sexy thing filled with SSDs and one of these is what I'm drooling over at the moment (6TB of high-speed goodness).

Merc is right, software is the way to go if money is an issue. It also gives you more flexibility going forward.

The Intel IOP341 in the Areca card is only 800 MHz; for that kind of $$$$ I'd go for the higher quality and support of Adaptec and the increased bandwidth of the IOP348 :p

http://www.ocforums.com/showthread.php?t=604097

(Mistype: I think the poster in that thread meant 1 GB/s, not 1000 Mb/s.)

Quote:
Flynn was formerly the chief architect at Linux Networx, and says that his experience in HPC has led him to conclude that "balanced systems lead to cost effective throughput." Fusion-io's device connects to the PCI Express bus, and Flynn conceptualizes the flash memory as sitting between memory and disk, relieving the performance pressure on both, and creating a new first-class participant in the data flow hierarchy.


"You can put 15 Fusion-io cards in a commodity server and get 10 GB/s of throughput from a 10 TB pool with over one million IOPS of performance," says Flynn. How does this matter? He gave NASTRAN as a customer example, in which jobs that took three days to run would complete in six hours on the same system and with no change in the application after the installation of the flash device.
 

Howell · Storage? I am Storage! · Joined: Feb 24, 2003 · Messages: 4,740 · Location: Chattanooga, TN
magepug said:
Even with software RAID, we still have the same question: there are 6 miniSAS ports on that card and it supports 24 disks. How do I hook up 24 disks, and how do I identify a disk when one fails?

newegg review said:
Other Thoughts: The pictures on Newegg do not show the correct cables that come with the card. There are six 1x SFF-8087 (Mini-SAS 4i) to 4x SATA cables included. They are ~30" long. Two have 90-degree SFF-8087 connectors; the other four are straight.

What does the card's manual say about identifying a failed drive?
 

ddrueding · Fixture · Joined: Feb 4, 2002 · Messages: 19,670 · Location: Horsens, Denmark
Each breakout cable's four SATA legs are numbered 0-3, and the connectors on the controller are numbered 1-6. You have to do the simple math yourself to work out which drive is which, but it isn't a big deal.
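Spelled out, it's one multiply and an add. A sketch, assuming the ports are labeled 1-6, the breakout legs 0-3, and you count bays from 1:

Code:
# Map controller port (1-6) and breakout leg (0-3) to a drive bay (1-24).
port=3; leg=2
echo $(( (port - 1) * 4 + leg + 1 ))    # port 3, leg 2 -> bay 11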
 

Fushigi · Storage Is My Life · Joined: Jan 23, 2002 · Messages: 2,890 · Location: Illinois, USA
So you could label the cables for drive positions 1-24 as you install them. That'd make service later much simpler. You could also map each cable position to a specific drive slot/bay and label those as well.
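While you're at it, jot down each drive's serial number next to its bay label as you seat it; then any failure report can be matched to a physical slot without pulling drives to check. A sketch, assuming smartmontools is installed and the drive shows up as a plain /dev/sdX (i.e., it isn't hidden behind a hardware RAID volume):

Code:
# Read the serial of the drive you just installed and note it on the bay label.
smartctl -i /dev/sdb | grep -i serial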
 

ddrueding · Fixture · Joined: Feb 4, 2002 · Messages: 19,670 · Location: Horsens, Denmark
Fushigi said:
So you could label the cables for drive positions 1-24 as you install them. That'd make service later much simpler. You could also map each cable position to a specific drive slot/bay and label those as well.

I have a chassis similar to the one I linked to above, and simply ordered them left to right. Easy enough.
 

magepug · What is this storage? · Joined: Jul 22, 2009 · Messages: 4
Thanks for all the responses... this looks like the option I want.
 