Shingled drives and backup hardware

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
BTW, it is written that the 735$ Rocket 750 bus adapter card, 40$ power switch and 193$ 6x silent fans are required, so there's really no savings in that chassis versus a SuperMicro one. One or the other will cost you around 2000$. I prefer the one with the hot-swap bays and redundant PSU.
They're required to make it functional, but you don't have to buy them from that vendor. You can buy 6 fans yourself for half the price from another retailer, or use a different drive controller.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
I wouldn't trust any single enclosure with my data. Even a massive NAS would be backed up daily to another unit at another site. Therefore all you risk with less redundant systems is some downtime, not data. For home use 48 hours of downtime is acceptable.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Interesting... I wasn't aware of this ~$65 HP SAS expander option.


Why would you be limited to 20 drives? It had 8 ports per the Intel literature. 2 for connecting to the SAS controller and 6 for connecting to up to 24 drives. However, from the pictures I see online it only has 6. :confused:

The Intel card has 6 ports from what I can see and find online for the RES2SV240. Where did you find that it has 8 ports? I did overlook the fact that you could use the remaining ports on the LSI 9211 for additional drive connectivity, so it should still work out OK unless that's not supported for some reason. You'd end up with 5 expander ports x 4 lanes for 20 drive connections, and then 1 additional port from the LSI gives you the remaining 4 drives to get to 24.

Yes, that does begin to push the price.


I still am leaning toward HW RAID and sticking with Windows. :silent:


That seems like a reasonable strategy to me. I only have an 8 drive array now. Starting with 6 or 8 6TB drives seems like a good upgrade point. Growing the RAID-6 array through online capacity expansion in a 24 bay enclosure would allow for a lot of upward movement.

If that works better for you or is more comfortable territory, I get it. There are a few advantages to skipping the hardware RAID when using ZFS that you can't get easily otherwise. One is the end-to-end checksumming, which greatly reduces the chances of bit rot over such large amounts of stored data and basically gives you great data integrity. You can get expandable pools to grow into over time if you want, or just add additional pools with new drives. You're no longer locked to a single hardware RAID card if one dies: if the card dies, install a new one and re-import your pool. You no longer have to worry about or manage cache batteries for the HW RAID controller; ZFS takes care of this with the ARC and an in-memory ZIL, with the option to add a dedicated SLOG device if you have lots of synchronous writes, and it's designed to work with plain JBODs. With 8 spindles you won't have any performance issues, especially if you don't mind adding an inexpensive SSD for L2ARC to help with reads and metadata lookups. ZFS is good at coalescing small random writes into larger sequential writes, and it uses copy-on-write so it doesn't overwrite live data in place, which also helps performance. You would have the ability to create nearly instant snapshots if needed and can run scrubs to check for any potential issues.
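
To give a rough idea of what the day-to-day ZFS commands look like, here's a minimal sketch (the pool name, disk names, and the cache SSD are made up for illustration):

Code:
# Create a double-parity (RAID-Z2) pool from 8 drives
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# Add an inexpensive SSD as an L2ARC read cache
zpool add tank cache sdi

# Take a near-instant snapshot of a dataset
zfs snapshot tank/media@snap1

# Scrub the pool to verify every block against its checksum
zpool scrub tank
zpool status tank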
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Don't opt for half-measures; go for the real thing: SuperMicro SC847BE2C-R1K28LPB. Yes, it is ~2000 US$, but it comes with two LSI expanders, it has the widest modern motherboard compatibility list, and it supports SAS 12G. The cost of the chassis won't be a big factor in the overall price. The 700$ premium over that one, which only supports SAS 6G and 24 drives, is marginal when you consider the entire system cost. Plus, opting for a 36-drive chassis over a 24-drive one will make the "NAS" last longer before it fills up, delaying an upgrade and saving you money in the long run.

You probably don't want SATA drives anyway, because the IOPS suck in large arrays. SAS 12G drives aren't that much more expensive, unless you compare them with cheapo drives with lower reliability figures, which you wouldn't want in such a large array anyway (increased risk of failure and array-corrupting errors).

6TB drives are the best capacity for the buck at the moment, but the newer 8TB Enterprise Capacity drives from Seagate offer the most throughput. HGST Helium drives are simply too expensive.

So:
  • SuperMicro SC847BE2C-R1K28LPB (2030$)
  • SuperMicro MCP-220-82609-0N (44$) - to hold your SSD boot-drive RAID 1 array or your tier 1 storage
  • SuperMicro MBD-X10DRH-CT (665$) - has two 10GbE ports and an integrated LSI 3108 SAS 12G controller
  • 2x SuperMicro SNK-P0048AP4 (30$) - 2U active heatsink
  • 2x Intel E5-2600 v4 CPU of your choice - they will be out in less than 3 months, so no point buying a v3 now.
  • 8x or 16x DDR4 2400MHz ECC Reg RAM sticks of your choice (E5-2600 v4 can use up to 2400MHz memory and they shouldn't be much more expensive than 2133MHz sticks)
  • 2x 2.5" SSD of your choice - to go in the 2-drive cage above
  • 36x 6TB or 36x 8TB SAS 12G LFF drives of your liking (will cost between 10,542$ and 18,185$)

For 36x 6TB drives, I'm at 13,341$ without the SSDs, CPUs and RAM sticks. For the 36x 8TB drives, without the same parts, I'm at 20,984$. Even with a single low-end CPU and 4 sticks of RAM, you'll add close to 1000$. The 2 SSDs should cost you at least 300$, so the lowest you can build this for is almost 15K$. You can easily hit 30K$ if you crank the CPU and RAM. But in both cases, you won't need an upgrade for a LONG time.

This is a very admirable setup but I don't quite see this level being needed for personal use. Why would one even need a dual socket E5-2600 for storage management? You'd have to do a lot more on this system to justify that kind of expense. There are compromises that can be made without drastically affecting the quality or availability of the system. I can't imagine that using 12Gb SAS HDDs vs 6Gb SATA HDDs would even be useful?? No HDD spindle can get near 1500MB/s transfer. Typical off the shelf SATA NAS drives from Seagate/HGST/WD would more than suffice with proper parity or mirroring. You may lose one or two over the years but big deal...it happens. They're still built for 24x7 operation.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I use two Norco 24-bay chassis for my big file server. One of them got modded into an SAS enclosure. I have a pair of LSI 9211s (in IT mode) and a pair of HP SAS Expanders installed. The Norco chassis are huge and take standard desktop parts rather than noisy rackmount gear, so in addition to all of that I've stuck a bunch of SSDs in the unused internal space. I use Windows Server Storage Spaces (ZFS for the array sizes I'm dealing with would create additional issues in terms of system RAM capacity) in tiered RAID6, plus local mirroring to secondary arrays, plus manual distribution of tapes and additional drives containing content to off-site facilities.

Ignoring the cost of the drives, the server chassis cost me about $400 each. The LSI cards were $75 or $100. The expander cards were $125 each when I got them. There's probably a few hundred bucks in random SAS cables and two fairly serious power supplies involved as well. The motherboard, CPUs and RAM were free but I could be doing what I do for storage (that machine is a Hyper-V host) with a spare i3 and probably be fine.

Given the budget I'm living with, it's a much more sane configuration than what Coug is proposing. ;)
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
This is a very admirable setup but I don't quite see this level being needed for personal use. Why would one even need a dual socket E5-2600 for storage management?
You don't have to fill both sockets. Several NAS-oriented operating systems, like Openfiler, would perform a lot better with a lot of RAM, like at least 64GB, rather than the typical 8GB found in low-end systems.

I can't imagine that using 12Gb SAS HDDs vs 6Gb SATA HDDs would even be useful?? No HDD spindle can get near 1500MB/s transfer. Typical off the shelf SATA NAS drives from Seagate/HGST/WD would more than suffice with proper parity or mirroring. You may lose one or two over the years but big deal...it happens. They're still built for 24x7 operation.
While I haven't made the comparison myself, according to an HP heavyweight working on product development whom I spoke with last year, SATA drive arrays don't perform well in IO-intensive scenarios. Granted, that probably won't come up often with a home setup.

Besides, the lowest price I can find for an HGST 6TB SATA NAS drive (P/N 0S03839) is 270$ (at NewEgg), while the lowest price I can find for a Seagate 6TB Enterprise-class SAS 12G drive is 285$ (at Wiredzone). So the price difference is irrelevant IMO, even applied to a 24- or 36-drive system. Whether you go for a 24- or a 36-drive system, the bulk of the cost will always be the drives, as I wrote earlier. A 1K$ or 2K$ cost cut on the chassis and other components won't make a significant difference. What will make a difference is that the cheaper version will be harder to maintain and manage, while it won't perform as well.

Of course, if all you need is a spot to store your movies and TV series downloaded from the Internet at a fairly slow pace, then a setup like the one Mercutio proposes is a lot more economical, if less reliable. But then, if that's the objective, I question DDrueding's need for a 10G Ethernet connection.

One thing the rackmount chassis won't be is quiet. As Merc wrote, a DIY box, with all its hindrances, will be less noisy than a typical rackmount system straight from the OEM.

That's just my view.
 
Last edited:

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
I use Windows Server Storage Spaces (ZFS for the array sizes I'm dealing with would create additional issues in terms of system RAM capacity) in tiered RAID6, plus local mirroring to secondary arrays, plus manual distribution of tapes and additional drives containing content to off-site facilities.

This has me interested, but I haven't used Windows for storage in so long that I'm completely unfamiliar with the Storage Spaces concept. A quick look seems to indicate that it manages pools (like ZFS) with variable amounts of redundancy. I really like this. Does it support cache drives (SSDs)? I'd be tempted to move my bulk storage back to my local machine (nothing faster than local) if Windows 10 could do all this stuff.*


*This is probably the third time I've rotated through the three modes in the last 15 years:

1. Local storage
2. Windows server
3. NAS (Linux server or appliance)
 

DrunkenBastard

Storage is cool
Joined
Jan 21, 2002
Messages
775
Location
on the floor
Server 2012 R2 supports write-back cache if you have an SSD in the storage pool; however, it defaults to a 1GB size if you use the GUI via Server Manager. You need to create the storage space via PowerShell to specify something larger:

http://m.windowsitpro.com/windows-s...server-2012-r2-storage-space-write-back-cache

It also supports SSD tiering; you just need to have a sufficient quantity of SSDs, i.e. a 2-way mirror needs 2 drives, a 3-way mirror needs 3 drives, etc.

Same SSDs can be used for both functions.
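
Roughly, the PowerShell looks something like this (the pool, tier, and space names plus the sizes are just placeholders to show where -WriteCacheSize fits):

Code:
# Pool every disk that's eligible (HDDs + SSDs)
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Define the SSD and HDD tiers
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Tiered, mirrored space with a write-back cache bigger than the 1GB GUI default
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Space1" `
    -ResiliencySettingName Mirror `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,4TB `
    -WriteCacheSize 8GB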

My understanding is that once these pools/spaces are created they can then be imported into Win 10 Pro.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
While I haven't made the comparison myself, according to an HP heavyweight working on product development whom I spoke with last year, SATA drive arrays don't perform well in IO-intensive scenarios. Granted, that probably won't come up often with a home setup.

Besides, the lowest price I can find for an HGST 6TB SATA NAS drive (P/N 0S03839) is 270$ (at NewEgg), while the lowest price I can find for a Seagate 6TB Enterprise-class SAS 12G drive is 285$ (at Wiredzone). So the price difference is irrelevant IMO, even applied to a 24- or 36-drive system.
Sorry, but I'm going to discount what your "HP heavyweight" said, especially in the case of the same enterprise drive being sold with multiple interface choices.

First of all, let's get out of the way that there's nothing inherent about the SATA interface that prevents good IO performance in intensive scenarios. Look at how well some SATA SSDs perform. Clearly, the SATA command set isn't a bottleneck in the context of the MB/sec you can push through a 6Gbps SATA pipe from a spinning 7200RPM HD.

Second, these are Enterprise-class drives, not consumer drives. You expect me to believe that an ST6000NM0024, the 6Gbps SATA equivalent of the Seagate drive you suggested, is going to have noticeably inferior performance? If that's the case, it would have to be due to intentional firmware crippling, not something inherent to the drive's interface. A spinning disk can't generate anywhere near the IO a good SSD can.

Lastly, even if the price hit is nearly negligible, you're not getting any benefit for it. If anything, the SAS interface reduces the potential for re-using the drives down the road and makes troubleshooting more difficult, because you can't plug them into "normal" hardware for reuse or testing.

However, I wasn't aware of Wiredzone.com, so thanks for providing an interesting potential drive vendor. These caught my interest. I can keep my love of Toshiba's drives alive :eek: and their "Toshiba Persistent Write Cache Technology for Data-Loss Protection in Sudden Power-Loss Events" seems interesting. Drive Spec Sheet link (PDF)
 
Last edited:

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
The motherboard, CPUs and RAM were free but I could be doing what I do for storage (that machine is a Hyper-V host) with a spare i3 and probably be fine.
I'm considering a setup that uses ECC memory. Using a motherboard like this: http://www.asrockrack.com/general/productdetail.asp?Model=E3C224 AFAIK, I can use a consumer CPU and get ECC functionality as long as the particular CPU supports ECC. Even the lowly $43 Celeron G1820 supports ECC. Now granted I'd use something a bit higher end, but still...
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
You don't have to fill both sockets. Several NAS-oriented operating systems, like Openfiler, would perform a lot better with a lot of RAM, like at least 64GB, rather than the typical 8GB found in low-end systems.


While I haven't made the comparison myself, according to an HP heavyweight working on product development whom I spoke with last year, SATA drive arrays don't perform well in IO-intensive scenarios. Granted, that probably won't come up often with a home setup.

Besides, the lowest price I can find for an HGST 6TB SATA NAS drive (P/N 0S03839) is 270$ (at NewEgg), while the lowest price I can find for a Seagate 6TB Enterprise-class SAS 12G drive is 285$ (at Wiredzone). So the price difference is irrelevant IMO, even applied to a 24- or 36-drive system. Whether you go for a 24- or a 36-drive system, the bulk of the cost will always be the drives, as I wrote earlier. A 1K$ or 2K$ cost cut on the chassis and other components won't make a significant difference. What will make a difference is that the cheaper version will be harder to maintain and manage, while it won't perform as well.

Of course, if all you need is a spot to store your movies and TV series downloaded from the Internet at a fairly slow pace, then a setup like the one Mercutio proposes is a lot more economical, if less reliable. But then, if that's the objective, I question DDrueding's need for a 10G Ethernet connection.

One thing the rackmount chassis won't be is quiet. As Merc wrote, a DIY box, with all its hindrances, will be less noisy than a typical rackmount system straight from the OEM.

That's just my view.

I've been buying those 6TB HGST NAS drives for $229 each, not $270. They go on sale frequently at Newegg and I just installed one Sunday. It was much the same when I was buying the 4TB version. I'd buy them in lots of 5, which was their order limit, and just grew my count slowly.

I do use my NAS for backups, movies, TV series, and original ISO rips, which are fairly big (30-50GB), and I oftentimes need/want a 10Gb connection. My 12-drive setup is by no means slow and I can get write speeds in excess of 600MB/sec when copying data locally. All this with economical parts and what you suggest are lesser-grade SATA drives. I don't see how it's harder to maintain. It's been one of the more reliable and maintenance-free systems I've built.
 
Last edited:

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
I'm considering a setup that uses ECC memory. Using a motherboard like this: http://www.asrockrack.com/general/productdetail.asp?Model=E3C224 AFAIK, I can use a consumer CPU and get ECC functionality as long as the particular CPU supports ECC. Even the lowly $43 Celeron G1820 supports ECC. Now granted I'd use something a bit higher end, but still...

What about this SUPERMICRO MBD-X10SL7-F-O board? It costs a bit more, but it comes with an 8-port LSI 2308 controller built in. You basically get 14 total SATA ports and a MegaRAC BMC that works very well for remotely managing all aspects of the box (BIOS, OS install, etc.). Something worth considering if you plan to put your NAS out of sight.
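
For anyone who hasn't used a BMC: it's a little always-on management controller with its own network port, so you can reach the box even when the OS is down or the machine is off. As a rough illustration using the standard ipmitool utility (the IP address and credentials are placeholders; the web iKVM interface does the same and more):

Code:
# Check and control power state over the BMC's LAN interface
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret chassis status
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret chassis power cycle

# Read fan/temperature/voltage sensors
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P secret sdr list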
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
What about this SUPERMICRO MBD-X10SL7-F-O board? It costs a bit more, but it comes with an 8-port LSI 2308 controller built in. You basically get 14 total SATA ports and a MegaRAC BMC that works very well for remotely managing all aspects of the box (BIOS, OS install, etc.). Something worth considering if you plan to put your NAS out of sight.
Hmm... I'm not sure that would help me much if I go with a HW RAID card. Am I overlooking some scenario where I'd want so many SATA drives bypassing the HW RAID card? The limited PCIe slots are likely to be less convenient. If I'm using an HP SAS expander + HW RAID card, the HP needs an x4 slot (for power) and the HW RAID card wants an x8 slot. That uses both of the slots on that Supermicro board right off the bat. The ASRock has a few slots left for something else in that scenario. Is there a similar Supermicro with more PCIe slots in an ATX form factor? It looks like you give up the built-in SAS controller to get a 3rd PCIe slot. Something like this: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182820

BTW, doesn't the ASRock have the same BMC capabilities? "BMC Controller - ASPEED AST2300 : IPMI (Intelligent Platform Management Interface) 2.0 with iKVM support"? I admit to knowing virtually nothing about BMCs, so apologies if this is a stupid question.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Hmm... I'm not sure that would help me much if I go with a HW RAID card. Am I overlooking some scenario where I'd want so many SATA drives bypassing the HW RAID card? The limited PCIe slots are likely to be less convenient. If I'm using an HP SAS expander + HW RAID card, the HP needs an x4 slot (for power) and the HW RAID card wants an x8 slot. That uses both of the slots on that Supermicro board right off the bat. The ASRock has a few slots left for something else in that scenario. Is there a similar Supermicro with more PCIe slots in an ATX form factor? It looks like you give up the built-in SAS controller to get a 3rd PCIe slot. Something like this: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182820

BTW, doesn't the ASRock have the same BMC capabilities? "BMC Controller - ASPEED AST2300 : IPMI (Intelligent Platform Management Interface) 2.0 with iKVM support"? I admit to knowing virtually nothing about BMCs, so apologies if this is a stupid question.

No, you're right. If you're still going with a dedicated HW RAID card, I complicated things by suggesting that board. I missed that the ASRock has a BMC, which is great; it makes things much easier to manage remotely. That looks like a decent Supermicro board, but I can't say if it's worth the small price increase over the ASRock you originally listed. Supermicro seems to have so many boards that it's hard to get a clear picture of which board has the appropriate features, and their website is still stuck in the 1990s.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I've been fairly impressed with Asrock's workstation boards. They're a lot less picky about RAM than SuperMicro.
 

Tea

Storage? I am Storage!
Joined
Jan 15, 2002
Messages
3,749
Location
27a No Fixed Address, Oz.
Website
www.redhill.net.au
A while back, I bought a Seagate Archive drive, 8TB I think it was. It was slow to fill up but not unbearably so. My impression (untested!) was that it would be somewhere between unpleasant and extremely unpleasant to use in the regular way. I did a single big backup onto it, then stuck it in a bottom drawer and kept it untouched for backup. I have not had any call to wipe it clean and do a new backup onto it, but I rather fear that using it a second time will be horribly slow. (Not that that would really matter if it took a week to fill: we are only looking at once-a-year backups here.)

Anyway, I'm a bit light on for storage and I see that Seagate have an 8TB ST8000AS0002 which, from the model number, I assume is a second-generation shingled drive. Price is about the same as a pair of ordinary 4TB drives, but I'd rather have a single unit if I can. Does anyone know:

(a) Is this in fact a new model?
(b) Is it substantially faster on re-write than the original?

What do you guys recommend for large-scale storage these days? Within broad limits, performance is a non-issue. (The original SMR Seagate Archive Drive probably steps a bit past even those broad limits. Not sure I'd want to put up with that ultra-low performance on a daily driver. Well, more a once-a-week driver, but you know what I mean. I use SSDs for daily driving, of course.) Within those limits, cost per TB and reliability are the only factors to consider.

(Just to clarify, my present requirement is for semi-archival on-line storage: a large, seldom accessed drive which gets used maybe once a week on average. Load is mostly read, but a fair bit of write as well. Essentially, it functions as overflow from my primary drives. Compare with the backup drives which get written to once, then stored off-line for years and (with luck) never get read at all. This one will be used from time to time.)

(Tannin posted this in a new thread, but it makes more sense to append it here. So I moved it.)
 