8TB and no helium

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,671
Location
USA
Hmm. I recently got a couple of additional 6TB drives of different models. 8TB drives might be good for backups, but how slow are they really?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,525
Location
Horsens, Denmark
The numbers I've seen show plenty of STR, just with 5400RPM-class seek times. The newer NAS units ship with either GBs of RAM or room for an SSD to use as cache. Unless there turns out to be some catastrophic issue with these (super-limited write cycles?), I would totally run them.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,671
Location
USA
The numbers I've seen show plenty of STR, just with 5400RPM-class seek times. The newer NAS units ship with either GBs of RAM or room for an SSD to use as cache. Unless there turns out to be some catastrophic issue with these (super-limited write cycles?), I would totally run them.

I don't think the write cycles are any different, just slow. I don't know how to do the NAS side of things, so they would be individual drives.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,598
Location
I am omnipresent
Dammit. I bought eight $180 4TB Hitachi drives to use for backups just four weeks ago. I would've jumped on these in a heartbeat.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,671
Location
USA
Seagate drives are reliable?

Are any drives truly reliable? To me, a drive is a consumable. Seagate drives have been better than the crappy Samsungs for me, and about the same as the Hibachis.
Not that I always agree with Merc, but my older WD Green drives have not been doing well, so I'm off WD except for the Blacks.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,726
Location
Québec, Québec
What I don't like with this drive is the unrecoverable error rate: 1 per 10^14. So every 12 drive fills or so, there will be an unrecoverable error. That's a so-so backup solution.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
What I don't like with this drive is the unrecoverable error rate: 1 per 10^14. So every 12 drive fills or so, there will be an unrecoverable error. That's a so-so backup solution.

Isn't that bits rather than bytes? So statistically you can expect 2 unrecoverable errors every 3 full drive reads? Certainly, these need to be in RAID of some sort and preferably RAID 1 or 10 to avoid the highly likely rebuild failures under RAID 5.
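
Quick sanity check on that arithmetic (a back-of-the-envelope sketch in Python, assuming the spec-sheet rate of 1 URE per 10^14 bits and the marketed decimal 8TB capacity):

Code:
import math

URE_RATE = 1e-14       # assumed spec: 1 unrecoverable error per 1e14 bits read
DRIVE_BITS = 8e12 * 8  # 8 TB (decimal, as marketed) expressed in bits

expected = DRIVE_BITS * URE_RATE  # expected UREs per full drive read
p_clean = math.exp(-expected)     # Poisson approximation: P(zero UREs)
print(f"expected UREs per full read: {expected:.2f}")  # 0.64, i.e. ~2 per 3 reads
print(f"P(at least one URE): {1 - p_clean:.0%}")       # ~47%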
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,598
Location
I am omnipresent
Perhaps they're meant to be paired with something like SnapRAID (parity snapshots) for additional error correction of presumably stale data? The rated error rate does seem unacceptably high for stand-alone data storage.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,525
Location
Horsens, Denmark
I thought everyone had agreed that RAID-5 was basically useless for modern drives? RAID-6 was marginal with at least -10 recommended?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,598
Location
I am omnipresent
It's useless once you exceed 12-15TB on a volume because of the statistical near-certainty of encountering a URE. I usually target storage volumes of ~15TB with parity snapshots to minimize the likelihood of data loss, and my important stuff (i.e. not porn) has an extra parity drive and an available hot spare in place.
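
To put rough numbers on that (a sketch under the same assumed 1-per-10^14-bits consumer spec; real-world rates vary):

Code:
import math

URE_RATE = 1e-14  # assumed consumer-class spec, per bit read

def p_ure(read_tb):
    """P(at least one URE while reading read_tb decimal terabytes),
    via a Poisson approximation over the bits read."""
    return 1 - math.exp(-read_tb * 1e12 * 8 * URE_RATE)

# A RAID-5 rebuild has to read every surviving drive end to end,
# so the read size is roughly the whole volume:
for tb in (4, 8, 12, 15, 24):
    print(f"{tb:>2} TB rebuild read -> P(URE) ~ {p_ure(tb):.0%}")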
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I thought everyone had agreed that RAID-5 was basically useless for modern drives? RAID-6 was marginal with at least -10 recommended?
RAID-10 seems like it would be as bad as, if not worse than, RAID-5. In RAID-10 (or RAID-1), the best the controller can determine is that the two drives (or arrays) disagree; it can't tell which copy is correct. In a non-failed state, RAID-5 can detect and correct a drive read error thanks to the parity information, and RAID-6 can still correct a read error even after a single drive failure thanks to the 2x parity information.
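
A toy illustration of that difference (hypothetical two-byte blocks, nothing controller-specific):

Code:
from functools import reduce

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]  # three data "drives" in a stripe
parity = reduce(xor_blocks, data)               # RAID-5 style XOR parity "drive"

# Drive 1 reports a read error, so the bad block's location is known;
# XORing all the surviving blocks reconstructs it exactly:
rebuilt = reduce(xor_blocks, [data[0], data[2], parity])
assert rebuilt == data[1]

# A mirror with two silently differing copies has no such arbiter:
copy_a, copy_b = b"\x33\x44", b"\x33\x45"
print("copies disagree:", copy_a != copy_b)     # True, but which one is right?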

I'm surprised there doesn't seem to be much interest in the later SAS Dell PERC H700 cards that support RAID-6 and SATA drives larger than 2TB. There was a ton of interest in the PERC 5i and 6i, and the PERC firmware locks to Dell drives are long gone. I haven't quite gotten up the courage to build a new "server" around an H700 myself and be a pioneer, though.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,598
Location
I am omnipresent
The overhead for getting large RAID-6 arrays up and running is definitely an inconvenient starting point. Perhaps 3-4TB single drives are large enough chunks that few people see a reason to extend to multiples of that value?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
RAID-10 seems like it would be as bad as, if not worse than, RAID-5. In RAID-10 (or RAID-1), the best the controller can determine is that the two drives (or arrays) disagree; it can't tell which copy is correct. In a non-failed state, RAID-5 can detect and correct a drive read error thanks to the parity information, and RAID-6 can still correct a read error even after a single drive failure thanks to the 2x parity information.

I'm surprised there doesn't seem to be much interest in the later SAS Dell PERC H700 cards that support RAID-6 and SATA drives larger than 2TB. There was a ton of interest in the PERC 5i and 6i, and the PERC firmware locks to Dell drives are long gone. I haven't quite gotten up the courage to build a new "server" around an H700 myself and be a pioneer, though.

When I was deciding on my next NAS, I thought about my experience with the Perc 6i (having to manually mount cooling on it and buy 3rd-party back plates) and realized it wasn't the greatest. On top of that, once I was committed to using the Perc, I had a controller dependency that makes failure events a potential pain to deal with. There were also the drive-size limitations, which I didn't want to deal with any more. Once I found a Supermicro MB with an on-board 8-port LSI 2308 adapter that could be used in IT mode, I went the route of 8 x 4TB under Linux ZFS with RAID-Z2 and a separate L2ARC SSD. This gives me a bunch of flexibility in pool/resource management, snapshot management, and protection from bit rot. If I need to move my pool, I should be able to export it and import it on a different system, assuming I get all the drives moved properly.
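
For what it's worth, the move itself should just be a couple of commands (a sketch wrapping the standard zpool export/import commands; "tank" is a hypothetical pool name):

Code:
import subprocess

# On the old box: cleanly detach the pool from the running system.
subprocess.run(["zpool", "export", "tank"], check=True)

# Physically move the drives, then on the new box: list importable
# pools, then import the named pool.
subprocess.run(["zpool", "import"])                      # shows what's visible
subprocess.run(["zpool", "import", "tank"], check=True)  # bring it back online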

I'm much happier so far with this NAS compared to my last one, which used the Perc 6i, and I feel it has potential to expand if/when needed. I don't (yet) see the need for a dedicated RAID controller like the H700 any more. If I need expansion in the future, I would likely consider an LSI 9211-8i and run it in IT mode.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
When I was deciding on my next NAS, I thought about my experience with the Perc 6i (having to manually mount cooling on it and buy 3rd-party back plates) and realized it wasn't the greatest. On top of that, once I was committed to using the Perc, I had a controller dependency that makes failure events a potential pain to deal with. There were also the drive-size limitations, which I didn't want to deal with any more. Once I found a Supermicro MB with an on-board 8-port LSI 2308 adapter that could be used in IT mode, I went the route of 8 x 4TB under Linux ZFS with RAID-Z2 and a separate L2ARC SSD. This gives me a bunch of flexibility in pool/resource management, snapshot management, and protection from bit rot. If I need to move my pool, I should be able to export it and import it on a different system, assuming I get all the drives moved properly.

I'm much happier so far with this NAS compared to my last one, which used the Perc 6i, and I feel it has potential to expand if/when needed. I don't (yet) see the need for a dedicated RAID controller like the H700 any more. If I need expansion in the future, I would likely consider an LSI 9211-8i and run it in IT mode.
I didn't find the PERC 6i to be problematic. The right bracket was already installed on mine, and for cooling all I did was point an 80mm fan at the card. I'm not sure I want to move to Linux, as I don't have a lot of experience with it and my "server" box does a lot of other things that I'm not sure I can easily get working in Linux. That basically leaves me looking for an 8 (or more) drive RAID-6 capable card that supports drives >2.2TB. An H700 pull is right around $100; the equivalent LSI 9260 card is $500+. However, motherboard compatibility could be questionable, as could drive compatibility. Of course, the LSI 9260 could be just as bad, even though it's a retail product and not a re-purposed card intended for specific Dell servers.

FWIW, I did get sick of the PERC 5i cards in my desktop PCs used for RAID-1, mostly because of the long delay they add to boot time. I took one out of my Sandy Bridge box when I rebuilt it and intentionally did not use one in my Haswell build. I still have one in my Q6600.

Ultimately, my 8x2TB RAID-6 setup isn't full, and I'm not really in need of more space or close to running out, so I can continue to sit on the sidelines.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,598
Location
I am omnipresent
The IBM M1015 cards I use have LSI 9220 chips. I did have to flash them to initiator-target (IT) mode, but that takes all of about 90 seconds. I've seen them as cheap as $85. They work with 6TB drives, SATA drives, and SAS expanders. They seem to be a very good choice for at least medium-density storage.

In my application, I run nine drives off the motherboard SAS/SATA ports, eight more off one M1015 and another 16 off another thanks to an expander.

I just badmouthed Storage Spaces in another thread, but if you have Windows Server 2012, you can do double parity (basically RAID-6), designate hot spares, and set up tiered storage with SSD cache drives. That's a pretty solid feature set, especially for home or small business use where some of those options aren't otherwise available to Windows users.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
When I was deciding my next NAS I thought about my experiences with buying the Perc 6i and having to manually mount cooling to it and buying 3rd party back plates and realized it wasn't the greatest experience. On top of that once I was committed to using the Perc there became the controller dependency which makes failure events a potential pain to deal with. There was also the drive-size limitations and didn't want to deal with that any more. Once I found a MB from Supermicro with an on-board 8-port LSI 2308 adapter that could be used in IT mode, I went the route of 8 x 4TB under Linux ZFS with RAID-Z2 with a separate L2ARC SSD. This gives me a bunch of flexibility in pool/resource management, snapshot management, and protection from bit rot. If I need to move my pool, I should be able to export it and import it on a different system assuming I get all the drives moved properly.

I'm much happier so far with this NAS compared to my last which used the Perc 6i and I feel it has potential to expand if/when needed. I don't (yet) see the need for a dedicated RAID controller management like the H700 any more. If I need expansion in the future I would likely consider an LSI 9211-8i and run it in IT mode.

In case of a RAID controller failure, isn't it much easier to replace an add-in RAID card than an entire motherboard? And faster too.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
In case of a RAID controller failure, isn't it much easier to replace an add-in RAID card than an entire motherboard? And faster too.

Yes, probably. That assumes you can still find the exact make/model of RAID controller with the same firmware. If the RAID controller on the board fails, sure, it's a bit more of a pain to replace, but I'm not tied to that specific controller. I could even just add another PCIe RAID controller and move on from there if I don't want to replace the MB.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,598
Location
I am omnipresent
I've been trying to buy four of the 8TB Seagate drives for about a week now. Every time I look, Amazon says it has at most three of them in stock, usually from a seller called Oceanside that I know to be, bar none, the worst drive shipper in the history of time. The drives are also being marked up like crazy, often selling for $320. Clearly, there's a market for big, slow, potentially unreliable single drives.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,671
Location
USA
I've been trying to buy four of the 8TB Seagate drives for about a week now. Every time I look, Amazon says it has at most three of them in stock, usually from a seller called Oceanside that I know to be, bar none, the worst drive shipper in the history of time. The drives are also being marked up like crazy, often selling for $320. Clearly, there's a market for big, slow, potentially unreliable single drives.

Are you planning to use them in RAID?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,598
Location
I am omnipresent
Are you planning to use them in RAID?

I think that would be sub-optimal given the hard error rate and stated performance characteristics.

I have an occasional need to move around a tremendous amount of static content, and unfortunately BTsync doesn't handle multi-terabyte loads very well. That is not a euphemism for pr0n.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,671
Location
USA
Yet you could do that with 6TB drives at almost the same cost. I'm also waiting for the drive (from B&H), but now think that it may not be needed if I go for the NAS.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,598
Location
I am omnipresent
Yet you could do that with 6TB drives at almost the same cost. I'm also waiting for the drive (from B&H), but now think that it may not be needed if I go for the NAS.

Almost the same cost and holding 2TB less data. 15TB is a magic number for me. Getting there with two drives has a certain value all its own.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,598
Location
Eglin AFB Area
Website
sedrosken.xyz
All this talk of multi-terabyte systems and I have trouble filling 500GB. What do you guys run in your personal systems? Daily drivers, that is, not home servers or whatever. ddrueding and Santilli are the only two people I can think of off the top of my head who detail system specs in their signatures other than myself.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
I have 10TB in my desktop setup (daily driver), 32TB raw in my new NAS, and 12TB raw in my old NAS.

These are my systems:
Workstation 1: Intel i7 4790K | Thermalright MUX-120 | Asus Maximus VII Hero | 32GB RAM Crucial Ballistix Elite 1866 9-9-9-27 (4 x 8GB) | 2 x EVGA GTX 980 SC | Samsung 850 Pro 512GB | Samsung 840 EVO 500GB | HGST 4TB NAS 7.2K RPM | Seagate 3TB 7.2K RPM | 2 x Samsung 1TB 7.2K RPM | Seasonic 1050W 80+ Gold | Fractal Design Define R4 | Win 8.1 64-bit
NAS 1: Intel Xeon E3-1270V3 | Supermicro MBD-X10SL7-F-O | 32GB RAM DDR3L ECC (4 x 8GB) | 8 x HGST 4TB Deskstar NAS | Samsung 850 Pro 256GB (ZIL + L2ARC) | Samsung 850 Pro 128GB (boot/OS) | Seasonic 650W 80+ Gold | Rosewill RSV-L4411 | Xubuntu 14
Notebook: Lenovo T500 | Intel T9600 | 8GB RAM | Crucial M4 256GB
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,671
Location
USA
Almost the same cost and holding 2TB less data. 15TB is a magic number for me. Getting there with two drives has a certain value all its own.

Sure, though how can anyone know that 15TB is a magical number? :scratch:
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,598
Location
I am omnipresent
Sure, though how can anyone know that 15TB is a magical number? :scratch:

I've discussed my storage setup before. I'm sure you've read those posts.

For sed's benefit: My main desktop is a 32GB Haswell-E @ 4GHz (12 threads) with a GTX 970, a 500GB Plextor M.2 SSD, and a 1TB Corsair SSD, plus a few 3 or 4TB drives. My main file server has 24 4TB and 18 3TB drives at the moment (one array is offline), plus four 240GB SSDs for storage tiering. It's a 24-thread 3GHz Westmere Xeon machine. I also have a 48GB i7 980X hosting warm copies of important systems I administer, a 16GB i3 NUC in my living room, a Surface Pro 2, a 17" 2011 MBP, a Thinkpad T420, a Dell Venue 8, and some other random systems (e.g. the Pentium 4 I built to play games in Windows 98, my Thinkpad collection) and tablets.

Some of my old desktops (like the 3770K I was using until November) have been relocated outside my home and are now being used for other personal purposes.

Most of what I'm doing is a combination of storing staggering amounts of media and running virtual machines for various purposes.
 