No need. He's made of money. He's proven it many, many times.
Are you buying or making a substantial donation?
Required to make it functional. You don't have to buy it from them. You can buy 6 fans yourself for half the price from another retailer, or use a different drive controller.
BTW, it is written that the 735$ Rocket 750 bus adapter card, 40$ power switch and 193$ 6x silent fans are required, so there's really no savings in that chassis versus a SuperMicro one. One or the other will cost you around 2000$. I prefer the one with the hot-swap bays and redundant PSU.
Interesting... I wasn't aware of this ~$65 HP SAS expander option.
Why would you be limited to 20 drives? It has 8 ports per the Intel literature: 2 for connecting to the SAS controller and 6 for connecting up to 24 drives (each port is a 4-lane connector, so 6 ports x 4 lanes = 24 drives). However, from the pictures I see online it only has 6 ports.
Yes, that does begin to push the price.
I still am leaning toward HW RAID and sticking with Windows. :silent:
That seems like a reasonable strategy to me. I only have an 8-drive array now. Starting with 6 or 8 6TB drives seems like a good upgrade point. Growing the RAID-6 array through online capacity expansion in a 24-bay enclosure would allow for a lot of upward movement.
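As a rough way to see how much headroom that leaves: usable space in RAID-6 is (N - 2) x drive size, since two drives' worth of space goes to parity. A quick back-of-the-envelope sketch (plain Python; the drive counts and 6TB size are just the examples from above):

```
# Back-of-the-envelope RAID-6 capacity planner (example figures only).
# Usable capacity of RAID-6 is (N - 2) * drive_size because two drives'
# worth of space is consumed by the dual parity.

def raid6_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    return (drive_count - 2) * drive_size_tb

drive_size_tb = 6  # 6TB drives, as discussed above
for drives in (6, 8, 16, 24):  # start small, grow via online capacity expansion
    print(f"{drives:2d} x {drive_size_tb}TB -> {raid6_usable_tb(drives, drive_size_tb):5.0f} TB usable")
```

So a 24-bay chassis full of 6TB drives tops out around 132TB usable before you even consider bigger drives.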
Don't opt for half-measures; go for the real thing: the SuperMicro SC847BE2C-R1K28LPB. Yes, it is ~2000$ US, but it comes with two LSI expanders, it has the widest compatibility list with modern motherboards, and it supports SAS 12G. The cost of the chassis won't be a big factor in the overall price: 700$ more (over that one, which only supports SAS 6G and 24 drives) is marginal when you consider the entire system cost. Plus, opting for a 36-drive chassis over a 24-drive one will make the "NAS" last longer before it fills up, delaying an upgrade and saving you money in the long run.
You probably don't want SATA drives anyway, because the IOps suck in large arrays. SAS 12G drives aren't that much more expensive, unless you compare them with cheapo drives with lower reliability figures, which you wouldn't want in such a large array anyway (increased risk of failure and array-corrupting errors).
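To put the IOps point in rough numbers, here's a quick sketch of how RAID-6's write penalty eats into a big array's random I/O. The per-spindle figures and the 70/30 read/write mix are assumptions for illustration, not benchmarks of any particular drive:

```
# Rough random-I/O estimate for a large RAID-6 array.
# Per-drive IOps figures below are ballpark assumptions for 7200rpm spindles,
# not measurements.

RAID6_WRITE_PENALTY = 6  # each logical write costs ~6 physical I/Os in RAID-6

def array_iops(drives: int, per_drive_iops: int, read_fraction: float) -> float:
    raw = drives * per_drive_iops
    write_fraction = 1.0 - read_fraction
    # Effective IOps once the parity write penalty is applied to the write share.
    return raw / (read_fraction + write_fraction * RAID6_WRITE_PENALTY)

for per_drive in (75, 150):  # pessimistic vs optimistic spindle
    print(f"{per_drive} IOps/drive -> ~{array_iops(36, per_drive, 0.7):.0f} IOps for a 36-drive array")
```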
6TB drives are the best capacity for the buck at the moment, but the newer 8TB Enterprise Capacity drives from Seagate offer the most throughput. HGST Helium drives are simply too expensive.
So:
- SuperMicro SC847BE2C-R1K28LPB (2030$)
- SuperMicro MCP-220-82609-0N (44$) - to hold your SSD boot drive RAID 1 array or your tier 1 storage
- SuperMicro MBD-X10DRH-CT (665$) - has two 10GbE ports and integrated LSI 3108 SAS 12G controller
- 2x SuperMicro SNK-P0048AP4 (30$ each) - 2U active heatsink
- 2x Intel E5-2600 v4 CPU of your choice - they will be out in less than 3 months, so no point buying a v3 now.
- 8x or 16x DDR4 2400MHz ECC Reg RAM stick of your choice (E5-2600 v4 can use up to 2400MHz frequency memory and they shouldn't be much more expensive than 2133MHz sticks)
- 2x 2.5" SSD of your choice - to go in the 2-drive cage above
- 36x 6TB or 36x 8TB SAS 12G LFF of your liking (will cost between 10,542$ and 18,185$)
For 36x 6TB drives, I'm at 13,341$ without the SSDs, CPUs and RAM sticks. For the 36x 8TB drives, without the same parts, I'm at 20,984$. Even with a single low-end CPU and 4 sticks of RAM, you'll add close to 1000$. The 2 SSDs should cost you at least 300$, so the lowest you can build this for is almost 15K$. You can easily hit 30K$ if you crank up the CPU and RAM. But in either case, you won't need an upgrade for a LONG time.
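For anyone who wants to check the math, the two totals above break down like this (prices as listed, CPUs, RAM and SSDs excluded):

```
# Reproducing the parts math above (prices in $ as quoted; no CPUs/RAM/SSDs).
common = {
    "SC847BE2C-R1K28LPB chassis": 2030,
    "MCP-220-82609-0N 2.5in cage": 44,
    "MBD-X10DRH-CT motherboard": 665,
    "2x SNK-P0048AP4 heatsinks": 2 * 30,
}
drive_options = {"36x 6TB SAS": 10_542, "36x 8TB SAS": 18_185}

base = sum(common.values())               # 2,799$
for name, drives in drive_options.items():
    print(f"{name}: {base + drives:,}$")  # 13,341$ and 20,984$
```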
You don't have to fill both sockets. Several NAS-oriented operating systems, like Openfiler, would perform a lot better with a lot of RAM, like at least 64GB, rather than the typical 8GB found in low-end systems.
This is a very admirable setup but I don't quite see this level being needed for personal use. Why would one even need a dual socket E5-2600 for storage management?
While I haven't made the comparison myself, according to an HP heavy-weight working on product development whom I spoke with last year, SATA drive arrays don't perform well in IO-intensive scenarios. Granted, it probably won't happen often with a home setup.
I can't imagine that using 12Gb SAS HDDs vs 6Gb SATA HDDs would even be useful?? No HDD spindle can get near 1500MB/s transfer. Typical off-the-shelf SATA NAS drives from Seagate/HGST/WD would more than suffice with proper parity or mirroring. You may lose one or two over the years but big deal...it happens. They're still built for 24x7 operation.
I use Windows Server Storage Spaces (ZFS, for the array sizes I'm dealing with, would create additional issues in terms of system RAM capacity) in tiered RAID6, plus local mirroring to secondary arrays, plus manual distribution of tapes and additional drives containing content to off-site facilities.
Sorry, but I'm going to discount what your "HP heavy-weight" said. Especially in the case of the same enterprise drive being sold with multiple interface choices.
While I haven't made the comparison myself, according to an HP heavy-weight working on product development whom I spoke with last year, SATA drive arrays don't perform well in IO-intensive scenarios. Granted, it probably won't happen often with a home setup.
Besides, the lowest price I can find for an HGST 6TB SATA NAS drive (P/N 0S03839) is 270$ (at NewEgg), while the lowest price I can find for a Seagate 6TB Enterprise-class SAS 12G drive is 285$ (at Wiredzone). So the price difference is irrelevant IMO, even applied to a 24- or 36-drive system.
I'm considering a setup that uses ECC memory. Using a motherboard like this: http://www.asrockrack.com/general/productdetail.asp?Model=E3C224 AFAIK, I can use a consumer CPU and get ECC functionality as long as the particular CPU supports ECC. Even the lowly $43 Celeron G1820 supports ECC. Now granted I'd use something a bit higher end, but still...
The motherboard, CPUs and RAM were free but I could be doing what I do for storage (that machine is a Hyper-V host) with a spare i3 and probably be fine.
You don't have to fill both sockets. Several NAS-oriented operating systems, like Openfiler, would perform a lot better with a lot of RAM, like at least 64GB, rather than the typical 8GB found in low-end systems.
While I haven't made the comparison myself, according to an HP heavy-weight working on product development whom I spoke with last year, SATA drive arrays don't perform well in IO-intensive scenarios. Granted, it probably won't happen often with a home setup.
Besides, the lowest price I can find for an HGST 6TB SATA NAS drive (P/N 0S03839) is 270$ (at NewEgg), while the lowest price I can find for a Seagate 6TB Enterprise-class SAS 12G drive is 285$ (at Wiredzone). So the price difference is irrelevant IMO, even applied to a 24- or 36-drive system. Whether you go for a 24- or a 36-drive system, the bulk of the cost will always be the drives, as I wrote earlier. A 1K$ or 2K$ cost cut on the chassis and other components won't make a significant difference. What will make a difference is that the cheaper version will be harder to maintain and manage, while it won't perform as well.
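To put a number on that, here's the per-drive premium spread across a full chassis, using the two street prices quoted above:

```
# Price delta of SAS 12G vs SATA NAS drives across a full chassis,
# using the two street prices quoted above.
sata, sas = 270, 285
for bays in (24, 36):
    print(f"{bays} bays: {(sas - sata) * bays}$ extra for SAS")  # 360$ / 540$
```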
Of course, if all you need is a spot to store your movies and TV series downloaded from the Internet at a fairly slow pace, then a setup like the one Mercutio proposes is a lot more economical, if less reliable. But then, if that's the objective, I question DDrueding's need for a 10G Ethernet connection.
One thing the rackmount chassis won't be is quiet. As Merc wrote, a DIY box, with all its hindrances, will be less noisy than a typical rackmount system straight from the OEM.
That's just my view.
I'm considering a setup that uses ECC memory. Using a motherboard like this: http://www.asrockrack.com/general/productdetail.asp?Model=E3C224 AFAIK, I can use a consumer CPU and get ECC functionality as long as the particular CPU supports ECC. Even the lowly $43 Celeron G1820 supports ECC. Now granted I'd use something a bit higher end, but still...
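If you go that route, one way to sanity-check that ECC is actually active under Windows is to ask WMI. A minimal sketch, assuming the third-party wmi package is installed (ECC DIMMs report a 72-bit total width against a 64-bit data width):

```
# Quick ECC sanity check on Windows via WMI (assumes `pip install wmi`).
# ECC DIMMs report TotalWidth 72 vs DataWidth 64; the memory array object
# also exposes which error-correction type is in use.
import wmi

c = wmi.WMI()
for dimm in c.Win32_PhysicalMemory():
    ecc = dimm.TotalWidth and dimm.DataWidth and dimm.TotalWidth > dimm.DataWidth
    print(dimm.DeviceLocator, f"TotalWidth={dimm.TotalWidth}", f"DataWidth={dimm.DataWidth}",
          "ECC" if ecc else "non-ECC")

for array in c.Win32_PhysicalMemoryArray():
    # 6 = Multi-bit ECC per the Win32_PhysicalMemoryArray documentation
    print("MemoryErrorCorrection code:", array.MemoryErrorCorrection)
```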
What about this SUPERMICRO MBD-X10SL7-F-O board? It costs a bit more, but it comes with an 8-port LSI 2308 controller built in. You basically get 14 total SATA ports and a MegaRAC BMC for remote managing that works very well for managing all aspects of the box (BIOS, OS install, etc). Something worth considering if you plan to put your NAS out of sight.
Hmm... I'm not sure that would help me much if I go with a HW RAID card. Am I overlooking some scenario where I'd want so many SATA drives bypassing the HW RAID card? The limited PCIe slots are likely to be less convenient. If I'm using an HP SAS expander + HW RAID card, the HP needs an x4 slot (for power) and the HW RAID card wants an x8 slot. That uses both of the slots on that Supermicro board right off the bat. The ASRock has a few slots left for something else in that scenario. Is there a similar Supermicro with more PCIe slots in an ATX form factor? It looks like you give up the built-in SAS controller to get a 3rd PCIe slot. Something like this: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182820
BTW, doesn't the ASRock have the same BMC capabilities? "BMC Controller - ASPEED AST2300 : IPMI (Intelligent Platform Management Interface) 2.0 with iKVM support"? I admit to knowing virtually nothing about BMC, so apologies if this is a stupid question.
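For what it's worth, IPMI just means the BMC answers standard management commands over the network, so you can script health checks against it. A minimal sketch of poking one with ipmitool from Python; the host, user and password below are placeholders, and it assumes the ipmitool binary is installed:

```
# Minimal example of querying a BMC over IPMI with ipmitool (placeholder
# credentials; assumes ipmitool is on PATH and the BMC is reachable).
import subprocess

BMC = ["-I", "lanplus", "-H", "192.0.2.10", "-U", "admin", "-P", "password"]

def ipmi(*args: str) -> str:
    return subprocess.run(["ipmitool", *BMC, *args],
                          capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))    # e.g. "Chassis Power is on"
print(ipmi("sensor", "list")[:500])          # temperatures, fan speeds, voltages
print(ipmi("sdr", "type", "Temperature"))    # just the temperature sensors
```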