SSDs - State of the Product?

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
I'm not sure what you meant by "when not using it." The drive must stay in the case/enclosure to connect to the wires and USB port.
For example, I'm using a 2.5" USB-C to SATA enclosure for the 4TB drives. If I get the 7.68TB U.3 or SAS 2.5" drive, is there an enclosure for that? I'm not seeing any TLC SATA. Or maybe I should go with NVMe again and hope to find a better enclosure that doesn't degrade performance too much. https://www.amazon.com/SABRENT-Internal-Extreme-Performance-SB-RKT4P-8TB/dp/B09WZK8YMY
 
Last edited:

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
All the U.2 bridges I see are for connecting a bare U.2 drive to USB; none seem to offer an enclosure, although a 2.5" form factor is somewhat better protected than an M.2 drive anyway. MLC drives of substantial capacity are available, and all sorts of options open up once enterprise drives that can handle multiple full drive writes per day are on the table.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
Got it. That idea is nixed. So the options are the M.2 8TB or SATA 7.68TB, neither of which are great.
I have a Sabrent NVMe enclosure for my M.2 drives, but it is slow and inconsistent despite not overheating. I've read many negative reviews about USB-C enclosures, and some are reportedly burning up the SSDs. :(
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
I'm seeing 64°C on the external 4TB SanDisk Pro AF55 during sustained writes. The drive appears as a WD_BLACK SN850XE 4000GB in the disk info. That's strange, since the SN850XE doesn't exist as a model. I'm not sure how reliable that info is, because the S/N doesn't match the product label.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
Meanwhile, in my main computer the boot SSD is misrepresented by Windows. What can possibly cause that? Is Device Manager not reading the model info from the drive, but taking it from historical data?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
You might want to check WD's diagnostic software to see what's up with it. It's not impossible that Device Manager could see it as a generic drive or drive name, if the driver or firmware ID matches something else it knows about.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
CrystalDiskInfo correctly indicates that the SSD is a 970 Pro, but Windows sees it as a 970 EVO Plus. Very strange. WD's software does not see the drive at all.
I know it is the Pro because the capacity is 512GB, not 500GB. But why is it wrong in Windows and in other software that just reads from Windows? I've also noticed that generic software doesn't read the NVMe SMART data correctly for either WD or Samsung. I assume their goal is to push their own spyware-like utilities.
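If you want to see what the drive itself reports rather than whatever name Windows has cached, one option is to ask it directly with smartctl (smartmontools). A minimal sketch, assuming smartctl is installed and the device path is adjusted for your system:

import subprocess

# Read the identity data straight from the drive with smartctl, bypassing
# whatever model name the OS has cached. The device path is just an example.
def drive_identity(device="/dev/nvme0"):
    out = subprocess.run(["smartctl", "-i", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # NVMe drives report "Model Number:", SATA drives "Device Model:"
        if line.startswith(("Model Number:", "Device Model:", "Serial Number:")):
            print(line.strip())

drive_identity()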
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,590
Location
Eglin AFB Area
I personally can't think of a single use case (relevant to me or our clients) where something that big (and expensive!) would be useful. None of our clients have storage needs that intensive, we don't, and I personally certainly don't. I'm agonizing over finding a group of 3 WD Red Pros/IronWolves with CMR and ERC at 8TB without spending 600 dollars for the privilege.
 
Last edited:

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
I'm agonizing over finding a group of 3 WD Red Pros/IronWolves with CMR and ERC at 8TB without spending 600 dollars for the privilege.

I mean, don't go hastily buying WD drives. But you should be able to find HGST He8s with a 5-year warranty for under $100. Used datacenter drives can be an absolute bargain, though.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,590
Location
Eglin AFB Area
Don't those used enterprise drives have huge amounts of power-on hours? Granted, I usually look at the power-on count rather than the hours as a sign of wear, but I thought these came with so many hours that it was concerning. I'd really rather not spend the money on the drives only to have them turn up dead a month or so later, but then again, if I can get a matched trio for a RAID5, that might not matter as much. Where are you seeing them with a 5-year warranty? The max I'm seeing is 1 or 2. I also don't have a SAS controller, just plain SATA, since I'm repurposing desktop hardware for the task instead of using a proper server. Are helium drives even CMR? The shingling would make the software RAID kick them out, I would think, since the latency would be too high.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
Don't those used enterprise drives have huge amounts of power-on hours?

Not always. There are vendors willing to warranty them for five years from date of purchase, if that's a concern. I've noticed that a couple of resellers are offering variable-length warranties for the same model of drive, which suggests that they're aware of that issue as well.

I get datacenter drives from the ops at my colo. Sometimes I get drives that have less than 200 power on hours.

I've decided that 8TB drives are the largest I'm willing to use in arrays though. Even RAID6 tends to break down at around 36TB, so there's not much point to doing anything but mirroring or copying data elsewhere.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,737
Location
USA
Why only 8TB in RAID6? What part breaks down? Maybe it's slightly different but I'm running 6x20TB in raidz2 which is similar to RAID 6 and it works great.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,511
Location
Horsens, Denmark
Merc probably knows things I don't, but the concern as I understand it is about the array's ability to rebuild in a reasonable amount of time after a failure, and the likelihood of another failure before that rebuild has completed.

This is why I've been using fast drives if I need a larger array, but all the important arrays I run are now at least RAID10 if not RAID15.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,590
Location
Eglin AFB Area
I don't have anything that's so monumentally important here at home that anything more than a RAID5 is really needed -- I make regular backups to cold storage and more beyond that for the stuff that's actually important.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,737
Location
USA
I definitely get the concern with rebuild times. For a rough estimate, I'd expect a drive rebuild to take:

Drive: Ultrastar DC HC650 (20TB)
Data to rebuild: 50% capacity (~10TB per drive)
HDD speed: ~280MB/sec
Rebuild time: ~595 minutes (~10 hours)

I'd need to lose 2 more drives within around 10 hours (or even 20 hours to be conservative) before actual data loss. I find that acceptable for my use case given I'll have backups if needed.
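If anyone wants to redo that estimate with their own numbers, here's a rough back-of-the-envelope sketch; the fill level and sustained speed are just the assumptions from above:

# Rough rebuild-time estimate: data to re-read per drive divided by sustained speed.
capacity_tb = 20        # Ultrastar DC HC650-class drive
fill_fraction = 0.5     # pool ~50% full, so ~10TB per drive to rebuild
speed_mb_s = 280        # assumed sustained sequential speed

data_mb = capacity_tb * fill_fraction * 1_000_000   # TB -> MB (decimal)
minutes = data_mb / speed_mb_s / 60
print(f"~{minutes:.0f} minutes (~{minutes / 60:.0f} hours)")   # ~595 minutes, ~10 hours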
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
SSDs will rebuild so much faster. The reasons for choosing HDDs are mostly cost and TBW; UBER for SSDs is 2-3 orders of magnitude better. It's really too bad that the higher-capacity SSDs are in the enterprise sector and not so amenable to the desktop NAS or computer.

I have no issues with my 8x18TB hard drives in the NAS with Z2. The HDD noise is obnoxious compared to how quiet other components are.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
Why only 8TB in RAID6? What part breaks down? Maybe it's slightly different but I'm running 6x20TB in raidz2 which is similar to RAID 6 and it works great.

Are you snapshotting in addition to running dual parity or just relying on dual parity? I'm assuming you are, but that eats into the capacity of a pool depending on how much your data changes.

Plain old RAID6 or RAID6-like arrays run into the same statistical certainty of a read error at roughly 12TB per parity drive that RAID5 does. It gets uglier as the arrays get bigger. You can add snapshotting with RAIDZ2 if you have capacity for it, but there's a sanity check in terms of snapshot storage and having spares on hand. I think things are better on the SSD side, but it's not like I have dozens of SSDs.
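To put a rough number on it, here's a simplified sketch that treats bit errors as independent (they aren't in practice, so treat it as illustration only):

# Probability of hitting at least one unrecoverable read error (URE) while
# reading N terabytes, given a rated per-bit error rate. Independence of
# errors is an assumption; real failure modes are messier.
def p_at_least_one_ure(tb_read, errors_per_bit=1e-14):
    bits = tb_read * 1e12 * 8
    return 1 - (1 - errors_per_bit) ** bits

for tb in (12, 36, 100):
    print(tb, "TB ->", round(p_at_least_one_ure(tb), 3))
# At a 1e-14 rating: 12TB -> ~0.62, 36TB -> ~0.94, 100TB -> ~1.0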

My solution for the time being is "small" ~22ish TB RAID6 volumes of either 6 or 8TB drives, mixed between Windows (Storage Spaces) and Linux (ZFS) hosts, with SnapRAID on the Windows side to handle snapshotting. I have a total of 7 16TB drives and 5 18TB drives that I'm just using as 2-disk mirror sets with a snapshot drive each. SnapRAID is a bit more fiddly and needs some scripting to do what I want from it, and I'm not in love with it, but updating the snapshots once a day is fine for me right now. It just adds something handy that Windows didn't have before, and it is well behaved IMO.

Don't get me wrong: If I need a giant array for some stupid reason, I'm willing to make one temporarily. I just don't want data to live there long term.

Right now I have about 170TB of data I care about. A lot of it has been migrated to the mirrored drives and I've been able to pull almost all my shitty SMR drives, which is good news. I haven't lost a substantial amount of data in a couple of decades, but at the same time I don't have warm fuzzies about where we are right now with high-capacity mechanical drives.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,737
Location
USA
I only keep a single snapshot at a time on my main NAS and then zfs send each snap to my other NAS as a form of incremental-forever backup. Both run dual parity in their vdevs. The snapshots are more a function of the filesystem than anything specific to limiting drive size for UREs. I don't mind that the convenience of snapshots comes at the expense of space with COW filesystems.

I see your point that consumer drives rated at 1 error per 1.0e+14 bits get you to the ~12TB per parity drive, but these 20TB drives are rated at 1 per 1.0e+15, which brings it to around 114TB. Like anything, it's about having good backups, which is how I diversify my data anyway. Running a single pool with fewer, larger drives means less noise, heat, and parts/complexity, so I prefer that over lots of pools or vdevs/volumes with smaller drives.
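The conversion is easy to check if anyone is curious; the decimal-vs-binary terabyte difference is where 125 vs ~114 comes from:

# TB you can read before the rated URE spec predicts one error on average.
def tb_per_expected_error(errors_per_bit):
    bytes_per_error = 1 / errors_per_bit / 8
    return bytes_per_error / 1e12, bytes_per_error / 2**40   # decimal TB, binary TiB

print(tb_per_expected_error(1e-14))   # ~12.5 TB / ~11.4 TiB (1 error per 1e14 bits)
print(tb_per_expected_error(1e-15))   # ~125 TB / ~113.7 TiB (1 error per 1e15 bits)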

Going with 6x20 allows me to expand nicely in the future if I need to add more space with another 6x20 vdev to the pool.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,511
Location
Horsens, Denmark

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
I updated the firmware of one 980 Pro so far, and there is also one offsite. They are smaller capacities. I did not see any point in upgrading my two original (v1) 2TB 970 EVO Plus SSDs, which perform similarly and don't have any issues. The later (v2) 970 EVO Plus are not so great at unbuffered writes, so I avoided them. The last ~30TB or so of NVMe drives I ordered are all WD. Given the 990 Pro issues as well, Samsung has lost the plot. Hynix has the best SSDs per the benchmarks, but like Samsung, none are 4TB in the 2280 M.2 form factor.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
What is up with the 4TB SATA SSDs becoming hard to find? I don't know why they never made any newer generations of the 8TB after the early QLC Samsung. If you open up the 4TB TLC drives there is plenty of room for 8TB of flash chips, so the low capacity is just bogus. Is it a scheme by Amazon and the Googles?
 

jtr1962

Storage? I am Storage!
Joined
Jan 25, 2002
Messages
4,168
Location
Flushing, New York
What is up with the 4TB SATA SSDs becoming hard to find? I don't know why they never made any newer generations of the 8TB after the early QLC Samsung. If you open up the 4TB TLC drives there is plenty of room for 8TB of flash chips, so the low capacity is just bogus. Is it a scheme by Amazon and the Googles?
I get a bunch of them popping up on Newegg when I do a search:


The big story I'm seeing lately with SSDs is that they're around $50/TB on the budget end of the market. I've seen as low as $40/TB with some specials. Meanwhile, HDDs are bottoming out at perhaps $15/TB. That's only a ~3 to 1 price difference at this point; it was around 8 to 1 about two years ago. Unless you need a massive amount of storage, I'm not seeing much reason to buy HDDs these days when you can get a 1TB SSD (adequate for most people) for under $50.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
The WD and SanDisk 4TB models are backordered. Some people say they are the same drive and both are out of production. :(
You might say that I have massive needs. The problem is that there are few NAS units with over 12 bays, or they are expensive enterprise stuff. One can assemble a mid-range SMB NAS with 8x20TB (120TB in RAID 6/Z2) for less than $5K. (My latest from 2022 was only 8x18TB.) A 16-bay NAS with 4TB SSDs only yields 48TB in RAID 6/Z2 for about $10K, using the lower-grade estimates. There are a few 7.68TB or similar enterprise OEM SSDs around $1000 or so, but mostly from shady suppliers and not sold to the public. Synology has an all-flash NAS; the mid-grade 3600 would cost $50K with 24x7TB. Maybe if I played the Lotto. LOL
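For what it's worth, the dual-parity arithmetic behind those figures looks like this; the prices are the ballpark numbers above, and the SSD build comes out a bit above 48TB before filesystem overhead and TiB conversion eat into it:

# Usable capacity and rough $/usable-TB for a dual-parity (RAID6/Z2) array.
# Prices are ballpark assumptions, not quotes, and no filesystem overhead
# or TB->TiB conversion is applied here.
def raid6_usable_tb(drives, tb_per_drive):
    return (drives - 2) * tb_per_drive   # two drives' worth of parity

for label, drives, tb, cost in [("8x20TB HDD NAS", 8, 20, 5000),
                                ("16x4TB SSD NAS", 16, 4, 10000)]:
    usable = raid6_usable_tb(drives, tb)
    print(f"{label}: {usable}TB usable, ~${cost / usable:.0f} per usable TB")
# 8x20TB HDD: 120TB usable (~$42/TB); 16x4TB SSD: 56TB usable (~$179/TB)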
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
The EVO 4TB is on sale at $300, which I don't recall from last weekend. Maybe I'll get a couple for now.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
I've seen the Intel P4500 4TB as cheap as $800 on Amazon, but never when I was looking to buy a 2.5" drive of that capacity. There's some difficulty in knowing whether a particular drive is new or a pull, but most of the time pulled drives are around half of MSRP rather than 80%. The P4500s are MLC NAND and are rated for ~0.9 DWPD over a five-year period, so they're more or less my go-to for 2.5" SSDs anyway.
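For a sense of what that rating means over the warranty period, using the ~0.9 DWPD and 4TB figures above (an illustration, not the datasheet number):

# Translate a DWPD rating into total terabytes written over the warranty period.
def rated_tbw(capacity_tb, dwpd, warranty_years=5):
    return capacity_tb * dwpd * 365 * warranty_years

print(rated_tbw(4, 0.9))   # ~6570 TB written over 5 years, i.e. ~3.6 TB/day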
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
What do you think of the Micron 6500 ION SSD? It looks like the 30TB U.3 is about $2800. A couple in RAID 0 would hold most of my active dataset. The price seems pretty low, so I'm suspicious.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
They're supposed to be high capacity budget drives. TLC NAND makes me a little nervous, but with 30TB of flash to work with, it's not like most applications are going to get anywhere near writing so much to have to worry about wear within the warranty period.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
My first total SSD failure in years...
WD SA510, such a POS. One day it's just MIA, and it was hardly used.
How do I destroy the data on it now?
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
Now Merc will explain how WD sucks... I had to use an old Seagate drive from 2014 with the LAMD controller. It had 19,000 hours on it before being taken out of a 5th-gen Intel Lenovo.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
I do not need to explain how WD sucks. You all already know.
Every time I buy a SanDisk SD card, I die a little in my soul, although ironically the SD cards are the one thing with which I haven't had a bad time.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
I have 3 or 4? of their Extreme Pro 128GB UHS-II V90 cards and of course an untold number of the Extreme Pro UHS-I cards. The R7 has a small buffer and SDXC UHS-II is an old, not particularly fast technology, so you can't shoot very much anyway and 128GB is enough. I could also use them as second cards in the R5, a7R V and other bodies.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
I'm sure I have dozens of Sandisk cards that are between 64 and 256GB. They're more reliable than Samsung or Lexar and a lot cheaper than Angelbird.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,624
Location
USA
Extreme Pro 128GB are down to $130 now, but have been about $180-200 for the past 5 years. The 256GB and 512GB are relatively new. I don't think that 512GB SD is worth $600 though. You can get 5x faster CFexpress cards for similar prices.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,327
Location
Gold Coast Hinterland, Australia
Out of interest, as we're now starting to see some of the new PCIe Gen5 NVMe drives coming with large heat sinks and reporting operating temperatures in the 70-80°C range, does anyone have links to papers on life expectancy vs. average operating temperature for the NAND commonly used on these devices? Surely operating QLC NAND at high frequency and at 70-80°C can't be good for the life of the NAND? Or is this a case of "who cares, as long as it lasts the 12-month warranty period"?

One example:
And yes, those are heat pipes...
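Most of the retention/endurance papers I've seen start from an Arrhenius acceleration model, so here's a rough sketch of the idea; the ~1.1 eV activation energy is only a commonly cited ballpark assumption and varies by NAND type and failure mechanism:

import math

# Arrhenius acceleration factor: how much faster a thermally activated
# wear/retention mechanism proceeds at T_hot vs T_ref. Illustrative only;
# the activation energy is an assumed ballpark, not a datasheet value.
K_BOLTZMANN_EV = 8.617e-5   # eV/K

def acceleration_factor(t_ref_c, t_hot_c, ea_ev=1.1):
    t_ref, t_hot = t_ref_c + 273.15, t_hot_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_ref - 1 / t_hot))

print(round(acceleration_factor(40, 70)))   # roughly 35x faster at 70C than at 40C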
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,564
Location
I am omnipresent
That particular drive seems to have an operating temperature range of up to 70C. It's just another part to keep cool under load. I'm sure a nearly two inch tall heat sink helps a lot with that. Hope no one is sticking that thing in a laptop, though.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,737
Location
USA
The Crucial T700 is one I've been casually watching and am interested in, but it's too pricey at the moment. In terms of its running temps and their association with longevity, I don't know. I would assume there is a relationship, because they designed it to thermal throttle, like other SSDs, at around 81°C and shut down at 90°C. Having a heatsink seems mandatory for it, or at least a reasonable one built into the motherboard.


 