Shingled drives and backup hardware

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
So backing up via a file consolidation tool (Winzip or Windows Backup) might offer a significant performance improvement? Interesting.

Windows Backup would be an exceedingly poor choice for such a thing, since there's a storied history of incompatibilities between versions. Likewise, I don't know of a tool to make a subset of directories into a VHD file, something that does seem to be a stable standard in Microsoft's world.

But if what you want is simple file consolidation without compression, that tool has existed for about 40 years. It's called tar.
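If you'd rather script the consolidation than drive tar by hand, a few lines of Python's standard tarfile module do the same job. This is only a sketch; the paths are placeholders, not anything from this thread:

  # Consolidate a directory tree into one uncompressed archive so the
  # backup drive sees one big sequential write instead of a million small files.
  import tarfile

  def consolidate(source_dir: str, archive_path: str) -> None:
      with tarfile.open(archive_path, mode="w") as tar:  # mode "w" = no compression
          tar.add(source_dir, arcname=".")

  consolidate("D:/photos", "E:/backups/photos.tar")  # example paths only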
 

Tea

Storage? I am Storage!
Joined
Jan 15, 2002
Messages
3,749
Location
27a No Fixed Address, Oz.
Website
www.redhill.net.au
Good question Buck. A little hard to find here, but $453 seems representative for the 6TB WD at retail on-line.

Compared with others:

$453 for the WD
$424 for a 6TB PMR Seagate (a surveillance drive, but that shouldn't matter)
$348 for a 6TB Seagate SMR
$382 for an 8TB Seagate SMR
$425 for a pair of standard 4TB Seagate desktop drives (WD would be about the same)

The 8TB SMR drive is easily the cheapest per GB.

But there is more! A little searching also gives me:

$334 for a 6TB WD WD60EZRX Green - that's pretty good value
$268 for a 5TB WD
$394 for the same damn pair of Seagate 4TB drives my useless bloody wholesaler charges me $425 for! So I could very nearly manage to get cheaper storage from a pair of mass-market 4TB drives than I'm getting from the 8TB SMR unit.

Just the same, it's miles easier having a good big drive in a single unit than it is mucking about with splitting folders and figuring out what goes where.
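To put a number on "cheapest per GB", here's the same list run through a quick dollars-per-TB sort (prices exactly as quoted above, nothing more authoritative than that):

  # Rough $/TB comparison of the prices quoted in this post.
  drives = {
      "WD 6TB":               (453, 6),
      "Seagate 6TB PMR":      (424, 6),
      "Seagate 6TB SMR":      (348, 6),
      "Seagate 8TB SMR":      (382, 8),
      "2x Seagate 4TB":       (425, 8),
      "WD60EZRX 6TB Green":   (334, 6),
      "WD 5TB":               (268, 5),
      "2x Seagate 4TB (web)": (394, 8),
  }
  for name, (price, tb) in sorted(drives.items(), key=lambda kv: kv[1][0] / kv[1][1]):
      print(f"{name:22} ${price}  ~${price / tb:.0f}/TB")

The 8TB SMR unit comes out around $48/TB, with everything else in the $49-$76/TB range.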
 

Tea

Storage? I am Storage!
Joined
Jan 15, 2002
Messages
3,749
Location
27a No Fixed Address, Oz.
Website
www.redhill.net.au
Another interesting thing. I let that mega-slow backup job run overnight, and it was still going at the same glacial pace as it (presumably) did the read, write, write thing. I'd tried stopping it and idling the drive for a few hours, then restarting: no difference.

Then, on impulse, I rebooted the system - and Hey Presto! - write speed went straight up to better than 10x the speed before the reboot. Still slow, but vastly more acceptable. Why would a power cycle matter? Clearly, there are things to work out about these drives if they are going to be usable and practical.

In the old days, when areal density was doubling every couple of years, you'd just wait a while for bigger standard drives to come along. That doesn't happen any more, of course. Presumably, HAMR will be the longer-term answer, but that seems to be years away from the mainstream as yet.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,925
Location
USA
Another interesting thing. I let that mega-slow backup job run overnight, and it was still going at the same glacial pace as it (presumably) did the read, write, write thing. I'd tried stopping it and idling the drive for a few hours, then restarting: no difference.

Then, on impulse, I rebooted the system - and Hey Presto! - write speed went straight up to better than 10x the speed before the reboot. Still slow, but vastly more acceptable. Why would a power cycle matter? Clearly, there are things to work out about these drives if they are going to be usable and practical.

In the old days, when areal density was doubling every couple of years, you'd just wait a while for bigger standard drives to come along. That doesn't happen any more, of course. Presumably, HAMR will be the longer-term answer, but that seems to be years away from the mainstream as yet.

Maybe after the reboot you freed up more memory to allow your OS to have more available RAM for write caching? What OS are you using during these tests?
 

Tea

Storage? I am Storage!
Joined
Jan 15, 2002
Messages
3,749
Location
27a No Fixed Address, Oz.
Website
www.redhill.net.au
Maybe after the reboot you freed up more memory to allow your OS to have more available RAM for write caching? What OS are you using during these tests?

Windows 8.1 Pro, Handy. Nope, not that. The amount of data involved is way too large for that to be a factor. Neat idea though.

I'm virtually certain now that the performance difference had nothing to do with the reboot, it was simply coincidental. What happened was that it got to the end of the overwrites (writing over that test folder I sent and then deleted) and started writing on a blank part of the drive again.

I should do some more tinkering now. But hey, the backup set is complete, the 8TB drive is full to within 50GB or so, and there is not much point in an off-line, off-site backup if it's still on-line and on-site. I still need to know - given the terrible re-write speed - how the drive deals with writing after the first fill-blank-drive phase. Once I have my little mind around that, I can decide whether to:

(a) Use SMR drives for all my backups (probably yes)
(b) Use SMR drives for on-line archival data. (Still 50/50 on this. The re-write worries me)

The read speed of this drive, by the way, is very impressive. Things like reading huge folders full of photographs are really snappy compared to my 4TB PMR drives. Is this a consequence of the 20GB flash cache? Is the drive smart enough to keep the MFT in RAM?
 

Buck

Storage? I am Storage!
Joined
Feb 22, 2002
Messages
4,514
Location
Blurry.
Website
www.hlmcompany.com
So we are seeing a difference according to file size, but it's small on PMR. There is a huge difference with the SMR drive, however. (It did occur to me that it might have to do with cluster sizes, but no, all drives are using the same NTFS default 4k clusters.) Earlier, I stopped the third backup at the 56GB mark. Now, on restarting it, with the SMR drive having been idle for a half hour or so, I immediately get the same low transfer rate. It's bouncing around a bit but in the 30-35MB/sec range.

Tea, SMR drives do not read and write in 4K sectors like PMR. They cluster a group of shingled 4K sectors into a band. In order to write to one 4K sector in a band, the entire band is cached, updated, then written back to the drive. The size of the band will affect performance and areal density. A larger band results in increased areal density, but slower write performance. A typical SMR band may be 64MB.
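A toy model makes the penalty obvious. The 64MB band size and 4K sectors here are assumptions for illustration, not Seagate's actual geometry:

  # Read-modify-write cost of scattered small updates on an SMR drive:
  # touching one 4 KiB sector means reading and rewriting the whole band.
  SECTOR = 4 * 1024          # 4 KiB logical sector
  BAND   = 64 * 1024 * 1024  # assumed 64 MiB band

  def internal_traffic(dirty_sectors: int) -> int:
      # worst case: every dirty sector lands in a different band, so each
      # one costs a full band read plus a full band write
      return dirty_sectors * 2 * BAND

  changed = 4096  # 16 MiB of random 4 KiB updates
  print(changed * SECTOR // 2**20, "MiB of user writes")
  print(internal_traffic(changed) // 2**30, "GiB moved internally")

Sixteen megabytes of scattered updates can turn into half a terabyte of internal traffic in that worst case, which is why the drive leans so hard on its cache.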
 

Buck

Storage? I am Storage!
Joined
Feb 22, 2002
Messages
4,514
Location
Blurry.
Website
www.hlmcompany.com
Now the drive has 128MB of DRAM cache and, according to rumour, a 20GB magnetic buffer. Seagate don't disclose the size or the type, but it's either a dedicated PMR section for temporary storage or a dedicated SMR section where, through some magic, the drive can just write rather than do the complicated SMR read, write, write dance.

For the SMR method that Seagate has implemented, it is possible that they're using approximately 1GiB of DRAM per 1TiB of SMR storage, and they're not disclosing these details.
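The 1GiB-per-1TiB figure falls straight out of the bookkeeping: a fully flexible 1:1 map needs one entry per 4K block, and at an assumed 4 bytes per entry the table alone eats a gigabyte per terabyte:

  # Back-of-envelope size of a 1:1 LBA-to-physical map.
  # 4-byte entries and 4 KiB blocks are assumptions for illustration.
  TIB, KIB = 2**40, 2**10

  def map_size_gib(capacity_tib: int, block=4 * KIB, entry=4) -> float:
      blocks = capacity_tib * TIB // block
      return blocks * entry / 2**30

  for tb in (1, 5, 8):
      print(f"{tb} TiB drive -> {map_size_gib(tb):.1f} GiB of mapping table")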
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I pulled my SMR drive array (7x5TB RAID6 + 500GB SSD) out to copy it to a set of off-site drives and reads seem HIGHLY inconsistent now where before I felt they were really fine. Short writes, up to maybe 20GB, are fast to incredible, able to sustain 250MB/sec (probably the max write speed of the destination array) easily. If I jump up to something more like 200GB, performance goes to dogshit. Now, this is reading and not writing, but I'm talking about file copies that are slow enough that Windows actually throws I/O timeout errors off internal drive file copies and AVERAGE speeds dip below 5MB/sec.
Is that all the result of the cache drive being emptied? If I pull it out of the equation, I get a similar result, just faster.
It's actually a lot faster and more practical for me to transfer 50GB at a time than it is to try to set up a 5TB overnight file copy like I might want to. Rebooting does seem to help, in that it stays faster a little longer, but this is something I've been messing with for a few days now and I don't feel like I can just "set it and forget it" at this point. I actually have to shepherd my file copies. That is broken and bogus.
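For what it's worth, the shepherding can at least be automated. A rough sketch of a paced copy, assuming that batching writes and idling between batches gives a drive-managed SMR target time to drain its cache (the paths and the pause length are guesses, not measurements):

  import os, shutil, time

  BATCH = 50 * 10**9   # ~50 GB per batch
  PAUSE = 15 * 60      # idle 15 minutes between batches (a guess)

  def paced_copy(src_root: str, dst_root: str) -> None:
      copied = 0
      for dirpath, _dirs, files in os.walk(src_root):
          dst_dir = os.path.join(dst_root, os.path.relpath(dirpath, src_root))
          os.makedirs(dst_dir, exist_ok=True)
          for name in files:
              src = os.path.join(dirpath, name)
              shutil.copy2(src, os.path.join(dst_dir, name))
              copied += os.path.getsize(src)
              if copied >= BATCH:
                  time.sleep(PAUSE)  # let the drive destage its cache
                  copied = 0

  paced_copy(r"D:\media", r"E:\media")  # placeholder paths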
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Just to support Tea's observations, it's about 12x faster for me to copy 20 10GB files than it is for me to copy 100 200MB files. I'm not sure what the drive firmware could be responding to though, since no one drive should have more than 1/5 of the data for any one file in the array, regardless of the size. Does the firmware somehow "know" that a new file copy operation is starting? Is that where the penalty is?

I wouldn't even want to TRY copying .jpegs to one of these.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Just to support Tea's observations, it's about 12x faster for me to copy 20 10GB files than it is for me to copy 100 200MB files. I'm not sure what the drive firmware could be responding to though, since no one drive should have more than 1/5 of the data for any one file in the array, regardless of the size. Does the firmware somehow "know" that a new file copy operation is starting? Is that where the penalty is?

I wouldn't even want to TRY copying .jpegs to one of these.

That's awful. I wonder if in five years the shingles will be remembered as an ugly pothole on the road of storage progress.
 

Buck

Storage? I am Storage!
Joined
Feb 22, 2002
Messages
4,514
Location
Blurry.
Website
www.hlmcompany.com
I pulled my SMR drive array (7x5TB RAID6 + 500GB SSD) out to copy it to a set of off-site drives and reads seem HIGHLY inconsistent now where before I felt they were really fine. Short writes, up to maybe 20GB, are fast to incredible, able to sustain 250MB/sec (probably the max write speed of the destination array) easily. If I jump up to something more like 200GB, performance goes to dogshit. Now, this is reading and not writing, but I'm talking about file copies that are slow enough that Windows actually throws I/O timeout errors off internal drive file copies and AVERAGE speeds dip below 5MB/sec.
Is that all the result of the cache drive being emptied? If I pull it out of the equation, I get a similar result, just faster.
It's actually a lot faster and more practical for me to transfer 50GB at a time than it is to try to set up a 5TB overnight file copy like I might want to. Rebooting does seem to help, in that it stays faster a little longer, but this is something I've been messing with for a few days now and I don't feel like I can just "set it and forget it" at this point. I actually have to shepherd my file copies. That is broken and bogus.

Merc, how is this copy being performed? Is this a network copy? What type of storage are the off-site drives? SMR reads and writes are very different from each other.
 

Buck

Storage? I am Storage!
Joined
Feb 22, 2002
Messages
4,514
Location
Blurry.
Website
www.hlmcompany.com
That's awful. I wonder if in five years the shingles will be remembered as an ugly pothole on the road of storage progress.

I already think of SMR as an ugly pothole. It is a lame push of technology in order to claim increased areal density to help fan away the flames of SSD.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Merc, how is this copy being performed? Is this a network copy? What type of storage are the off-site drives? SMR reads and writes are very different from each other.

This is a straight copy between two Windows Server Storage Spaces. One consists of 7x5TB SMR drives (20TB array) + a 500GB cache drive. The other is a pair of 4x4TB Ultrastar 7k4000s. The drives are connected to a pair of LSI SAS controllers (the two external drives are in an external SAS chassis, so I'm going from one controller to another across PCIe). I was copying from the 20TB array to the 12TB ones. The goal is to mirror local content for off-site Plex servers.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
It's actually OK for my needs. Things get added. Things never get removed. It doesn't matter if it's slow unless I'm doing a bulk copy.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,925
Location
USA
Well, you've convinced me not to buy SMR drives for a RAID array.

It's actually OK for my needs. Things get added. Things never get removed. It doesn't matter if it's slow unless I'm doing a bulk copy.

I feel the same way. The pains you've described don't seem to be worth whatever $/GB cost savings there might be with SMR drives. Couple that with something like ZFS with CoW and write amplification and it sounds like a recipe for disaster.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I feel the same way. The pains you've described don't seem to be worth whatever $/GB cost savings there might be with SMR drives. Couple that with something like ZFS with CoW and write amplification and it sounds like a recipe for disaster.
You can get PMR drives up to 8TB from HGST. WD has them up to 6TB in their Red Pro line (probably in other lines too). Not as cheap per GB as the Archive line from Seagate, but they're also enterprise grade drives with a better UBE spec too.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
The drives are connected to a pair of LSI SAS controllers...
Which controllers are you using? I noticed the Dell PERC H700 card has effectively no enthusiast usage whereas the 5i and 6i had a huge number of people using them in consumer grade motherboards with standard consumer SATA drives. Of course the 5i and 6i max out at 2TB per drive so they're basically useless now. However, I'm not sure I want to be a guinea pig to potentially save a few hundred dollars on the RAID card by playing around with the H700 or H710. Though it would be nice to get a controller and a spare for less than the price of a comparable new retail RAID card from Areca, LSI, or the like.
 

Buck

Storage? I am Storage!
Joined
Feb 22, 2002
Messages
4,514
Location
Blurry.
Website
www.hlmcompany.com
Those SMR drives should be Device Managed SMR, but they use a static LBA table and Media Based Cache (MBC or DC). With MBC or DC, on a track of data, after a specified set of data sectors there is space used as cache. MBC is being used instead of a huge DRAM/NAND buffer. For proper SMR performance when using a 1-1 mapping that maps a single LBA to a physical location, you need 1GiB of DRAM per 1TiB of HDD user capacity. For SMR, static mapped LBA with MBC results in increased latency. The Windows errors occur because the drive is device managed, not host managed. Early HGST SMR drives were host managed, which requires proper file system support. But giving the OS more control actually reduced latency in general.
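As an aside, if anyone wants to check what a given drive actually advertises, a reasonably recent Linux kernel exposes the zone model in sysfs; a device-managed drive hides its zones and reports "none", while host-aware and host-managed drives say so. A quick sketch (the device name is a placeholder):

  from pathlib import Path

  def zone_model(dev: str) -> str:
      # /sys/block/<dev>/queue/zoned reads "none", "host-aware" or
      # "host-managed"; older kernels don't have the attribute at all
      p = Path(f"/sys/block/{dev}/queue/zoned")
      return p.read_text().strip() if p.exists() else "unknown (kernel too old)"

  print(zone_model("sdb"))  # "sdb" is just an example device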
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,925
Location
USA
You can get PMR drives up to 8TB from HGST. WD has them up to 6TB in their Red Pro line (probably in other lines too). Not as cheap per GB as the Archive line from Seagate, but they're also enterprise grade drives with a better UBE spec too.

I'm aware of them. I was just stating that the cost savings for SMR don't seem worth the hassle I'm reading about.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Which controllers are you using?

They're LSI 9211s, either Dell or IBM branded but flashed to standardized firmware. They're well behaved for SATA or SAS and work with hot-swap bays, external enclosures and SAS expanders. There are faster controllers in the world but nothing is really better in terms of compatibility or price. I've been using them for years.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
You can get PMR drives up to 8TB from HGST. WD has them up to 6TB in their Red Pro line (probably in other lines too). Not as cheap per GB as the Archive line from Seagate, but they're also enterprise grade drives with a better UBE spec too.

If I were only buying a small number of drives, I'd buy the big HGSTs. As it stands, I'm ultimately going to replace 24 3TB drives. I do have another ~30 4TB drives, but at the moment, I'm most interested in getting a meaningful capacity upgrade for the three and four year old Hitachi and Seagate drives I still have in service. The 6TB HGSTs cost literally double what I'm paying for 5TB Seagates and I just can't justify that in the current drive generation. 4TB models aren't enough of an improvement, so the 5TB drives are the best of a bad set of options.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,925
Location
USA
Which controllers are you using? I noticed the Dell PERC H700 card has effectively no enthusiast usage whereas the 5i and 6i had a huge number of people using them in consumer grade motherboards with standard consumer SATA drives. Of course the 5i and 6i max out at 2TB per drive so they're basically useless now. However, I'm not sure I want to be a guinea pig to potentially save a few hundred dollars on the RAID card by playing around with the H700 or H710. Though it would be nice to get a controller and a spare for less than the price of a comparable new retail RAID card from Areca, LSI, or the like.

So they're not HW RAID cards. You're really just using them as pass through drive controllers?

Correct. That's how I've always done it.

I'm also no longer interested in controller cards for their HW RAID. I only want pass-through at this point for my storage servers. I'm long-since done with my Dell Perc 6 cards even though I still have the same 1.5TB drives they formerly managed.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I'm also no longer interested in controller cards for their HW RAID. I only want pass-through at this point for my storage servers. I'm long-since done with my Dell Perc 6 cards even though I still have the same 1.5TB drives they formerly managed.
I must be the odd one. I still want HW RAID. Probably because the "server" would still run Windows.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,925
Location
USA
I must be the odd one. I still want HW RAID. Probably because the "server" would still run Windows.

I could see that as being beneficial if you want to use windows for your file server. I've moved on over to Linux for that task now and haven't really looked back.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So how are you guys running gobs of drives? An 8 drive SAS/SATA controller in pass through mode connected to a 24 drive SAS expander with each SAS port from the expander connected to something like the 4 drive backplanes in a Norco RPC-4224?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
I've been running NAS appliances for a long time now. Thinking of switching back for the next generation. It would be nice to have automated system-level imaging of the workstations (like Home Server used to do). I've outgrown the 8x6TB and 5x8TB Synology units at home, and really want 10GbE to the storage anyway.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,925
Location
USA
So how are you guys running gobs of drives? An 8 drive SAS/SATA controller in pass through mode connected to a 24 drive SAS expander with each SAS port from the expander connected to something like the 4 drive backplanes in a Norco RPC-4224?

At the moment I'm running 8 drives direct-connected to the ports on the LSI 2308 controller in IT mode and the remaining 4 drives on the on-board SATA controller, which together make up my storage pool of 12 x 4TB in RAID-Z2 (RAID 6 equivalent). I'm using the remaining two SATA ports for SSDs used for the OS and ZFS L2ARC.
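Quick sanity check on what that pool nets out to (ignoring ZFS metadata and ashift overhead):

  def raidz2_usable_tb(drives: int, size_tb: float) -> float:
      # RAID-Z2 spends two drives' worth of capacity on parity
      return (drives - 2) * size_tb

  print(raidz2_usable_tb(12, 4))  # -> 40.0 TB raw, before ZFS overhead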

You can run an expander and connect to a 4U Norco case, or you can look into a SuperMicro SuperChassis 846E26-R1200B with a built-in SAS expander, which is a higher-quality case at a higher cost. You can pair either case with one or more LSI 9211 adapters that have been flashed into IT mode, or use a single LSI 9211 and connect it to an Intel or HP SAS expander card. It might just be worth running two or three LSI 9211s depending on the number of drives you want to manage and the PCIe slots available. When I expand and build my next NAS it'll likely be some combination of what I just described. I'm still researching the parts, but it'll end up being another ZFS-based NAS with Samba under Xubuntu.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
You can run an expander and connect to a 4U Norco case, or you can look into a SuperMicro SuperChassis 846E26-R1200B with a built-in SAS expander, which is a higher-quality case at a higher cost. You can pair either case with one or more LSI 9211 adapters that have been flashed into IT mode, or use a single LSI 9211 and connect it to an Intel or HP SAS expander card. It might just be worth running two or three LSI 9211s depending on the number of drives you want to manage and the PCIe slots available. When I expand and build my next NAS it'll likely be some combination of what I just described. I'm still researching the parts, but it'll end up being another ZFS-based NAS with Samba under Xubuntu.
Yikes, that Supermicro case isn't cheap. The 4U Norco + SAS expander + power supply (not dual redundant mind you) is still far short of $1400. Exactly how much better is that case?

Unless you're really pushing the IOs, what would be the advantage of using multiple cards instead of an expander? 8 6Gbps ports on a single controller offer a lot of bandwidth. Or, are those 8 port LSI 9211 based cards fairly cheap?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,925
Location
USA
Yikes, that Supermicro case isn't cheap. The 4U Norco + SAS expander + power supply (not dual redundant mind you) is still far short of $1400. Exactly how much better is that case?

Unless you're really pushing the IOs, what would be the advantage of using multiple cards instead of an expander? 8 6Gbps ports on a single controller offer a lot of bandwidth. Or, are those 8 port LSI 9211 based cards fairly cheap?

It is expensive new; I'd probably look for a used one on eBay or forums if I went that route. My suggestion of multiple controllers vs a SAS expander was due to cost and configuration, depending on what you'd want to do. If you can get one of the HP SAS expander cards cheap it should support up to 36 drives and work properly with the 9211-8i. This card should also support link aggregation to help improve performance when you connect two SFF-8087 cables to the controller. The downside that I've read with the HP SAS expander is that with SATA drives you'll be limited to 3Gb per port, which means if you plan to connect any SSDs they'll be severely limited. If you connect SAS drives you'll get the 6Gb connection speed, but that would be hella-costly and defeat the purpose of going this route.

Other options include using the Intel RAID Expander Card (RES2SV240), but you'd be limited to 20 drive connections once you use up ports to connect the controller card. You can up the game by going with the Intel RES3FV288 36-port 12Gb/s SAS to link two ports to the controller, but the price of the expander is getting pretty expensive unless you can find a deal somewhere, which is what I'm searching around for to see if it's feasible. That Intel card would also allow you to eventually grow into another external JBOD if you ever needed to connect to another system externally using the 8 external ports. Going back to the HP SAS expander would likely be the more economical way of doing this to get higher drive port counts in a Norco 24 bay. There are trade-offs for each config and many options for controller cards if you didn't want to go with a 9211. My plan is to start with a 24 bay case and grow into it over time.
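To put rough numbers on the uplink: each SFF-8087 carries four lanes, so dual-linking an expander back to a 9211 gives eight lanes, and with 8b/10b encoding each lane moves roughly 100 MB/s of payload per Gb/s of line rate. A scratch calculation, not a benchmark:

  def uplink_mb_s(lanes: int, gb_s: float) -> float:
      # ~100 MB/s of payload per Gb/s of line rate after 8b/10b encoding
      return lanes * gb_s * 100

  print(uplink_mb_s(8, 6.0))  # ~4800 MB/s aggregate over a dual-linked 6Gb uplink
  print(uplink_mb_s(1, 3.0))  # ~300 MB/s for a single device stuck at 3Gb

That ~300MB/s ceiling is why a spinning disk behind the HP expander doesn't care about the 3Gb limit but an SSD does.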
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Don't opt for half-measures; go for the real thing: SuperMicro SC847BE2C-R1K28LPB. Yes, it is ~2,000 US$, but it comes with two LSI expanders, it has the widest compatibility list with modern motherboards, and it supports SAS 12G. The cost of the chassis won't be a big factor on the overall price. 700$ (over that one, which only supports SAS 6G and 24 drives) is marginal when you consider the entire system cost. Plus, opting for a 36-drive chassis over a 24-drive one will make the "NAS" last longer before it's filled up, delaying an upgrade and saving you money in the long run.

You probably don't want SATA drives anyway, because the IOps suck in large arrays. SAS 12G drives aren't that much more expensive, unless you compare them with cheapo drives with lower reliability figures, which you wouldn't want in such a large array anyway (increased risk of failure and array-corrupting errors).

6TB drives are the best capacity for the buck at the moment, but the newer 8TB Enterprise Capacity from Seagate offers the most throughput. HGST Helium drives are simply too expensive.

So:
  • SuperMicro SC847BE2C-R1K28LPB (2030$)
  • SuperMicro MCP-220-82609-0N (44$) - to put your SSD boot drive RAID 1 array or your tier 1 storage
  • SuperMicro MBD-X10DRH-CT (665$) - has two 10GbE ports and integrated LSI 3108 SAS 12G controller
  • 2x SuperMicro SNK-P0048AP4 (30$) - 2U active heatsink
  • 2x Intel E5-2600 v4 CPU of your choice - they will be out in less than 3 months, so no point buying a v3 now.
  • 8x or 16x DDR4 2400MHz ECC Reg RAM stick of your choice (E5-2600 v4 can use up to 2400MHz frequency memory and they shouldn't be much more expensive than 2133MHz sticks)
  • 2x 2.5" SSD of your choice - to go in the 2-drive cage above
  • 36x 6TB or 36x 8TB SAS 12G LFF of your liking (will cost between 10,542$ and 18,185$)

For 36x 6TB drives, I'm at 13,341$ without the SSDs, CPUs and RAM sticks. For the 36x 8TB drives, without the same parts, I'm at 20,984$. Even with a single low-end CPU and 4 sticks of RAM, you'll add close to 1000$. The 2 SSDs should cost you at least 300$, so the lowest you can build this for is almost 15K$. You can easily hit 30K$ if you crank the CPU and RAM. But in both cases, you won't need an upgrade for a LONG time.
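Summing the fixed parts against the drive figures quoted above (CPUs, RAM and SSDs left out, same as in my totals):

  parts = {
      "SC847BE2C-R1K28LPB chassis":   2030,
      "MCP-220-82609-0N drive cage":    44,
      "MBD-X10DRH-CT motherboard":     665,
      "2x SNK-P0048AP4 heatsinks":  2 * 30,
  }
  base = sum(parts.values())                   # 2,799$
  print(f"36x 6TB build: {base + 10_542:,}$")  # 13,341$
  print(f"36x 8TB build: {base + 18_185:,}$")  # 20,984$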
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
I don't need hot-swap, and was looking at the enclosures at 45drives.com. Their base model (30 drives) with backplanes and a custom wiring harness for a Corsair HX power supply comes in at just under $1k. Of course, they also offer a maxed out turnkey 60x 6TB solution for $35k.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
It is expensive new; I'd probably look for a used one on eBay or forums if I went that route. My suggestion of multiple controllers vs a SAS expander was due to cost and configuration, depending on what you'd want to do. If you can get one of the HP SAS expander cards cheap it should support up to 36 drives and work properly with the 9211-8i. This card should also support link aggregation to help improve performance when you connect two SFF-8087 cables to the controller. The downside that I've read with the HP SAS expander is that with SATA drives you'll be limited to 3Gb per port, which means if you plan to connect any SSDs they'll be severely limited. If you connect SAS drives you'll get the 6Gb connection speed, but that would be hella-costly and defeat the purpose of going this route.
Interesting... I wasn't aware of this ~$65 HP SAS expander option.

Other options include using the Intel RAID Expander Card (RES2SV240), but you'd be limited to 20 drive connections once you use up ports to connect the controller card.
Why would you be limited to 20 drives? It has 8 ports per the Intel literature: 2 for connecting to the SAS controller and 6 for connecting up to 24 drives. However, from the pictures I see online it only has 6. :confused:

You can up the game by going with the Intel RES3FV288 36-port 12Gb/s SAS to link two ports to the controller, but the price of the expander is getting pretty expensive unless you can find a deal somewhere, which is what I'm searching around for to see if it's feasible. That Intel card would also allow you to eventually grow into another external JBOD if you ever needed to connect to another system externally using the 8 external ports.
Yes, that does begin to push the price.

Going back to the HP SAS expander would likely be the more economical way of doing this to get higher drive port counts in a Norco 24 bay. There are trade-offs for each config and many options for controller cards if you didn't want to go with a 9211.
I still am leaning toward HW RAID and sticking with Windows. :silent:

My plan is to start with a 24 bay case and grow into it over time.
That seems like a reasonable strategy to me. I only have an 8 drive array now. Starting with 6 or 8 6TB drives seems like a good upgrade point. Growing the RAID-6 array through online capacity expansion in a 24 bay enclosure would allow for a lot of upward movement.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Don't opt for half-measures; go for the real thing: SuperMicro SC847BE2C-R1K28LPB. Yes, it is ~2,000 US$, but it comes with two LSI expanders, it has the widest compatibility list with modern motherboards, and it supports SAS 12G. The cost of the chassis won't be a big factor on the overall price. 700$ (over that one, which only supports SAS 6G and 24 drives) is marginal when you consider the entire system cost. Plus, opting for a 36-drive chassis over a 24-drive one will make the "NAS" last longer before it's filled up, delaying an upgrade and saving you money in the long run.

You probably don't want SATA drives anyway, because the IOps suck in large arrays. SAS 12G drives aren't that much more expensive, unless you compare them with cheapo drives with lower reliability figures, which you wouldn't want in such a large array anyway (increased risk of failure and array-corrupting errors).

6TB drives are the best capacity for the buck at the moment, but the newer 8TB Enterprise Capacity from Seagate offers the most throughput. HGST Helium drives are simply too expensive.

So:
  • SuperMicro SC847BE2C-R1K28LPB (2030$)
  • SuperMicro MCP-220-82609-0N (44$) - to put your SSD boot drive RAID 1 array or your tier 1 storage
  • SuperMicro MBD-X10DRH-CT (665$) - has two 10GbE ports and integrated LSI 3108 SAS 12G controller
  • 2x SuperMicro SNK-P0048AP4 (30$) - 2U active heatsink
  • 2x Intel E5-2600 v4 CPU of your choice - they will be out in less than 3 months, so no point buying a v3 now.
  • 8x or 16x DDR4 2400MHz ECC Reg RAM stick of your choice (E5-2600 v4 can use up to 2400MHz frequency memory and they shouldn't be much more expensive than 2133MHz sticks)
  • 2x 2.5" SSD of your choice - to go in the 2-drive cage above
  • 36x 6TB or 36x 8TB SAS 12G LFF of your liking (will cost between 10,542$ and 18,185$)

For 36x 6TB drives, I'm at 13,341$ without the SSDs, CPUs and RAM sticks. For the 36x 8TB drives, without the same parts, I'm at 20,984$. Even with a single low-end CPU and 4 sticks of RAM, you'll add close to 1000$. The 2 SSDs should cost you at least 300$, so the lowest you can build this for is almost 15K$. You can easily hit 30K$ if you crank the CPU and RAM. But in both cases, you won't need an upgrade for a LONG time.
Are you buying or making a substantial donation?
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
I wouldn't cheap out on the enclosure. The drives are by far the bulk of the cost. Even with 4TB drives, like the Seagate Enterprise NAS (215$) or the Western Digital Red Pro (211$), putting in 30 of those will still cost more than 6300$.

The 45Drives enclosure with a single power supply only makes sense in the context of a large cloud provider, like BackBlaze, for which those cases were first designed. They have so many pods that if one fails, the data is replicated elsewhere and is still accessible. That's not your case.

BTW, it is written that the 735$ Rocket 750 bus adapter card, 40$ power switch and 193$ 6x silent fans are required, so there's really no savings in that chassis versus a SuperMicro one. One or the other will cost you around 2000$. I prefer the one with the hot-swap bays and redundant PSU.
 