Why only 8TB in RAID6? What part breaks down? Maybe it's slightly different, but I'm running 6x20TB in raidz2, which is similar to RAID6, and it works great.
Are you snapshotting in addition to running dual parity, or relying on the parity alone? I'm assuming you are, but snapshots eat into a pool's capacity depending on how much your data changes.
Plain old RAID6 or RAID6-like arrays run into the same statistical near-certainty of a read error that RAID5 does, at roughly 12TB read per parity drive, and it only gets uglier as the arrays get bigger. You can add snapshotting with RAIDZ2 if you have the capacity for it, but there's a sanity check in terms of snapshot storage and having spares on hand. I think things are better on the SSD side, but it's not like I have dozens of SSDs to go on.
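For the curious, the ~12TB rule of thumb falls out of the unrecoverable read error (URE) rate on consumer drive spec sheets, typically quoted as 1 error per 1e14 bits read (about one error per 12.5TB). A quick back-of-the-envelope sketch, assuming that spec-sheet rate and independent errors:

```python
import math

# Spec-sheet URE rate for typical consumer drives: 1 error per 1e14 bits.
URE_RATE = 1e-14  # errors per bit read

def p_read_error(bytes_read: float) -> float:
    """Probability of at least one URE while reading `bytes_read` bytes,
    treating each bit as an independent Bernoulli trial."""
    bits = bytes_read * 8
    # P(error) = 1 - (1 - rate)^bits; expm1/log1p keep this numerically stable
    return -math.expm1(bits * math.log1p(-URE_RATE))

# Rebuilding a degraded array means reading every surviving drive in full.
for tb in (4, 12, 50):
    print(f"{tb:>3} TB read -> {p_read_error(tb * 1e12):.0%} chance of a URE")
# at 12 TB the odds are already ~62%, and ~98% by 50 TB
```

Real drives cluster errors rather than failing independently, so this overstates things somewhat, but it shows why big single-parity rebuilds are scary and why dual parity only buys so much headroom.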
My solution for the time being is "small" ~22TB RAID6 volumes built from 6TB and 8TB drives, mixed between Windows (Storage Spaces) and Linux (ZFS) hosts, with SnapRAID on the Windows side to handle snapshotting. I also have 7 16TB drives and 5 18TB drives that I'm just using as 2-disk mirror sets with a snapshot drive each. SnapRAID is a bit more fiddly and needs some scripting to do what I want from it, and I'm not in love with it, but updating the snapshots once a day is fine for me right now. It adds something handy that Windows didn't have before, and it's well behaved IMO.
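The scripting in question is mostly glue around `snapraid diff`/`sync`/`scrub`. A minimal sketch of that kind of daily wrapper (my hypothetical version, not necessarily what anyone else runs; assumes the `snapraid` binary is on PATH with its config in the default location, and `DIFF_LIMIT` is an arbitrary threshold):

```python
import subprocess
import sys

# Refuse to sync if too many files changed -- a crude guard against
# accidentally baking a mass deletion or ransomware into the parity.
DIFF_LIMIT = 500

def count_changes(diff_output: str) -> int:
    """Count changed files in `snapraid diff` output (lines start with a verb)."""
    verbs = {"add", "remove", "update", "move", "copy"}
    return sum(1 for line in diff_output.splitlines()
               if line.split(" ", 1)[0] in verbs)

def daily_update() -> None:
    diff = subprocess.run(["snapraid", "diff"], capture_output=True, text=True)
    changed = count_changes(diff.stdout)
    if changed > DIFF_LIMIT:
        sys.exit(f"refusing to sync: {changed} files changed (> {DIFF_LIMIT})")
    if diff.returncode == 2:  # snapraid diff exits 2 when a sync is needed
        subprocess.run(["snapraid", "sync"], check=True)
    # Verify a slice of existing parity each day as well
    subprocess.run(["snapraid", "scrub", "-p", "5"], check=True)
```

Wire `daily_update()` to Task Scheduler or cron and you get the once-a-day update cadence described above.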
Don't get me wrong: If I need a giant array for some stupid reason, I'm willing to make one temporarily. I just don't want data to live there long term.
Right now I have ~170TB of data I care about. A lot of it has been migrated to the mirrored drives, and I've been able to pull almost all my shitty SMR drives, which is good news. I haven't lost a substantial amount of data in a couple decades, but at the same time I don't have warm fuzzies about where we are right now with high-capacity mechanical drives.