The numbers I've seen show plenty of STR, just with 5400 RPM-class seek times. The newer NAS units ship with either gigabytes of RAM for caching or room for an SSD cache. Unless there turns out to be some catastrophic issue with these (super limited write cycles?), I would totally run them.
Seagate drives are reliable?
I've had good luck with Samsung and Hitachi. Seagate hasn't fared quite as well. I haven't had a WD since the IDE days (mostly because I'm using drives in RAID and the WDs didn't play nice).

Are any drives truly reliable?
What I don't like with this drive is the unrecoverable read error rate: 1 per 10^14 bits read. That works out to roughly one expected error per 12.5 TB read, which is less than two full passes of an 8TB drive. That's a so-so backup solution.
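To put that spec in perspective, here's a quick back-of-the-envelope sketch (assuming the datasheet means one unrecoverable error per 10^14 bits read, the usual convention):

```python
URE_RATE = 1e-14          # unrecoverable read errors per bit read (datasheet spec)
DRIVE_BYTES = 8e12        # an 8 TB drive
bits = DRIVE_BYTES * 8    # 6.4e13 bits in one full pass

# Probability of at least one unrecoverable error when reading the whole drive once
p = 1 - (1 - URE_RATE) ** bits
print(f"P(>=1 URE per full 8 TB read) ~ {p:.0%}")  # roughly 47%
```

So at the rated error rate, close to a coin flip per full-drive read, which is exactly why it's a so-so backup target on its own.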
RAID-10 seems like it would be as bad if not worse than RAID-5. In RAID-10 (or RAID-1) the best the controller can determine is that the two drives (or arrays) don't agree. It can't tell which is correct. In a non-failed state RAID-5 can detect and correct a drive error due to the parity information. RAID-6 can still correct a read error after a single drive failure due to 2x the parity information.

I thought everyone had agreed that RAID-5 was basically useless for modern drives? RAID-6 was marginal, with at least -10 recommended?
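The detect-versus-correct distinction is easy to see with a toy sketch: two RAID-1 copies that disagree give the controller no way to break the tie, while RAID-5 parity lets it rebuild any one known-bad block from the survivors. A minimal illustration (not real controller code):

```python
from functools import reduce

def parity(blocks):
    # RAID-5 style parity: byte-wise XOR across the data blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
p = parity(stripe)                      # parity block on a fourth drive

# Drive 1 reports a read error; rebuild its block from the survivors + parity
rebuilt = parity([stripe[0], stripe[2], p])
assert rebuilt == b"BBBB"

# RAID-1 analogue: two mismatched copies and nothing to break the tie
copy_a, copy_b = b"AAAA", b"AXAA"
assert copy_a != copy_b  # mismatch detected, but which copy is right?
```

The catch is that this only works when the drive itself flags the bad sector; if a drive silently returns wrong data, plain RAID-5 can't tell which member lied either, which is part of the argument for checksumming filesystems.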
I'm surprised there doesn't seem to be much interest in the later SAS Dell PERC H700 cards that support RAID-6 and SATA drives larger than 2TB. There was a ton of interest in the PERC 5i and 6i. The PERC firmware locks to Dell drives are long gone. I haven't quite gotten up the courage to try to build a new "server" around an H700 myself and be a pioneer, though.
I didn't find the PERC 6i to be problematic. The right bracket was already installed on mine. For cooling, all I did was point an 80mm fan at the card. I'm not sure I want to move to Linux as I don't have a lot of experience with it, and my "server" box does a lot of other things that I'm not sure I can easily get working in Linux. That basically leaves me looking for an 8 (or more) drive RAID-6 capable card that supports drives >2.2TB. An H700 pull is right around $100. The equivalent LSI 9260 card is $500+. However, motherboard compatibility could be questionable, as could drive compatibility. Of course the LSI 9260 could be just as bad even though it's a retail product, not a re-purposed card intended for specific Dell servers.

When I was deciding on my next NAS I thought about my experiences with buying the PERC 6i, having to manually mount cooling on it, and buying 3rd-party back plates, and realized it wasn't the greatest experience. On top of that, once I was committed to using the PERC there was the controller dependency, which makes failure events a potential pain to deal with. There were also the drive-size limitations, and I didn't want to deal with those any more. Once I found a motherboard from Supermicro with an on-board 8-port LSI 2308 adapter that could be used in IT mode, I went the route of 8 x 4TB under Linux ZFS with RAID-Z2 and a separate L2ARC SSD. This gives me a bunch of flexibility in pool/resource management, snapshot management, and protection from bit rot. If I need to move my pool, I should be able to export it and import it on a different system, assuming I get all the drives moved properly.
I'm much happier so far with this NAS compared to my last, which used the PERC 6i, and I feel it has potential to expand if/when needed. I don't (yet) see the need for a dedicated RAID controller like the H700 any more. If I need expansion in the future I would likely consider an LSI 9211-8i and run it in IT mode.
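For what it's worth, the pool move described above really is just a couple of commands (a sketch assuming a pool named `tank`; `zpool import` scans the attached disks, so it doesn't care if device names changed between systems):

```shell
# On the old box: flush everything and cleanly detach the pool
zpool export tank

# Move the drives, then on the new box: list importable pools found on disk
zpool import

# Bring the pool up by name
zpool import tank
```

This is the controller-independence argument in a nutshell: the array metadata lives on the disks themselves, not in a RAID card's NVRAM.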
In case of a RAID controller failure, isn't it much easier (and faster) to replace an add-in RAID card than an entire motherboard?
I've been trying to buy four of the 8TB Seagate drives for about a week now. Every time I look, Amazon says it has at most three of them in stock, usually from a seller called Oceanside that I know to be the bar-none worst drive shipper in the history of time. The drives are also being marked up like crazy, often selling for $320. Clearly, there's a market for big, slow, potentially unreliable single drives.
Are you planning to use them in RAID?
Yet you could do that with 6TB drives at almost the same cost. I'm also waiting for the drive (from BH), but now think that it may not be needed if I go for the NAS.
ddrueding and Santilli are the only two people I can think of off the top of my head who detail system specs in their signatures other than myself.
I am disappoint.
Almost the same cost, but holding 2TB less data. 15TB is a magic number for me. Getting there with two drives has a certain value all its own.
Sure, though how can anyone know that 15TB is a magic number? :scratch: