Home NAS

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,357
Location
Gold Coast Hinterland, Australia
We've had a few of these in for testing, and they seem to work well in the storage role:


And using PCIe-to-M.2 adapters you can get to a total of 5x NVMe devices (1x onboard + 2x on each M.2 adapter in the two PCIe x8 slots).

Supermicro does make a whole range of variants of the motherboard with different I/O options, etc., to match your price point or required functionality.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
That's an interesting project. Would part of your plan include replicating your data offsite? Or are you just looking at it for bitrot / data loss prevention?

That would be a secondary goal. Most of my critical stuff is already offsite in a basic versioned backup system via Backblaze. My bulk media rips would be too costly to store offsite, so my aim is to at least create versioned backups onsite. Using minio might make it easier to grow storage over time than it is with zfs. Both offer bitrot protection and data availability with parity. From a keep-it-simple standpoint zfs would win here, but I also like learning new things, so it might be neat to try minio.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
Saw this yesterday on LTT:

Interesting to hear that LumaForge is doing something similar to me with two groups of 10-disk raidz2 vdevs. That's an optimal config for spreading writes across the drives, so it's good to hear they're doing that.

I think Linus is wrong in declaring that his 2TB SLOG NVMe drive will matter. It will really depend on the application and whether it requests sync writes, which isn't very common. That's the only time a SLOG comes into the picture for writes.
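If you want to see whether a SLOG would even be exercised, the sync property tells you what a dataset is doing. A minimal sketch; the pool name and device here are examples, not anyone's actual setup:

  # sync=standard means only application-requested sync writes hit the ZIL
  zfs get sync tank
  # a log vdev can be added (and removed) later without rebuilding the pool
  zpool add tank log /dev/nvme0n1
  zpool remove tank nvme0n1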
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
zfs isn't that bad for drive upgrades. It's slow, sure, but it works well as long as your hardware is in good working order.

I should probably put together a modest proof of concept system before I get carried away and do anything else.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
I was close to buying some of the 16TB Exos drives from Amazon. It looks like the drive's 5-year warranty may only apply via the seller (Serverpartdeals) since they're OEM and the seller isn't authorized. @Mercutio have you purchased any of these from Amazon, and was there any issue with the 5-year warranty from Seagate?

Newegg also has them for roughly the same price ($329), but it's not 100% clear who honors the 5-year warranty if it's sold as an OEM drive. Newegg seems to say it's Seagate, but really you'd have to buy the drive and then check each serial number on Seagate's website to confirm.

For a few dollars more, B&H Photo sells them in what appears to be a brown box with a warranty, at $340. Spending the extra ~$10 might be worth avoiding the hassle of Newegg returns on the chance the warranty isn't valid through Seagate. I've read that some people had to pay restocking fees and return shipping.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I bought three of them. They do indeed come up as whitebox drives on Seagate's warranty portal. Which in my case means that I'll wind up using all three of them in the storage system I need at work and I'll buy a different one for myself.

I was surprised how fast these drives are. My expectations for larger drives were tempered by having lots of SMR, 5400rpm units in my bulk storage setup. These things really did sustain 180MB/sec+ transfer rates until they were more or less full.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
That's a bummer on the whitebox status, thanks for checking.

I haven't had the pleasure of dealing with an SMR HDD, but it sounds like a nice bump in speed with this 16TB drive. My 6TB 7200RPM drives can get into the 180-200MB/s range depending on where on the drive the data is located, assuming large-block non-random I/O. My 20-drive array easily hits the upper limits of my multiport LSI SAS controller's bandwidth at times when doing zfs scrubs.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
SMR drives are perfectly fine for ungodly amounts of data that doesn't change very much. They take forever to read and write to, but that can be ameliorated in most respects with an SSD cache drive. The Seagate SMR drives do behave well in disk arrays, so I have dozens of them. Once they hit their cache threshold they do slow to a crawl, so it's not a good idea to let that happen, but even then we're talking about 25 - 50MB/s, and usually the high side of that. Yes, that means they compare poorly to tape for transfer rates under some circumstances.

They also behave very poorly in a "lots of little files" workload, but if you're dumping backups of VMs or video project data on them or your personal Netflix pod, who cares?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
I've been playing with the minio project the past couple of nights to see what it offers. I repurposed my older NAS, which is 12 x 4TB JBOD, Xeon E3-1270 v3, 32GB RAM. After a few hours of updating firmware and BIOS, along with a fresh install of Ubuntu 20.04, the system was working very well. I have 10Gb NICs in both this NAS and my other zfs NAS via direct connect.

The setup and configuration is mostly time-consuming due to the basic sysadmin tasks of partitioning and formatting each of the drives with xfs and setting up fstab mounts. You can use any Linux filesystem; xfs was the one recommended for minio deployments, so I tried it. Most of this can be scripted and it's all common Linux work, so nothing specifically noteworthy. I created a common minio directory with a specific datadrive directory for each HDD and changed the ownership to my user account so minio can access them.
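For reference, the per-drive prep was roughly the following; the device name, label, mount point, and user below are illustrative placeholders rather than my exact setup:

  # partition, format, and mount one data drive (repeat/script for all 12)
  parted -s /dev/sdb mklabel gpt mkpart data xfs 0% 100%
  mkfs.xfs -L datadrive1 /dev/sdb1
  mkdir -p /mnt/minio/datadrive1
  echo 'LABEL=datadrive1 /mnt/minio/datadrive1 xfs defaults,noatime 0 0' >> /etc/fstab
  mount /mnt/minio/datadrive1
  chown -R myuser:myuser /mnt/minio/datadrive1   # so the minio process can write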

The minio install was a breeze and was done via curl. The only complicated piece was setting up my own systemd service to make it start on reboot, but even that was easy enough. When starting the minio process, you pass in the mounted paths of all the drives you want it to use. Since the data drives are common filesystems, you can see how minio writes to each data drive with its own directories and files. Hypothetically you could store other local files in there if you wanted.
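Roughly what my unit file looks like, with the user, paths, and credentials swapped for placeholders; note the {1...12} notation is minio's own drive-expansion syntax, not shell globbing, and the exact credential variable names may differ by minio version:

  # /etc/systemd/system/minio.service
  [Unit]
  Description=MinIO object storage
  After=network-online.target

  [Service]
  User=myuser
  Environment=MINIO_ACCESS_KEY=minioadmin
  Environment=MINIO_SECRET_KEY=change-me
  ExecStart=/usr/local/bin/minio server /mnt/minio/datadrive{1...12}
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

Then a systemctl daemon-reload and systemctl enable --now minio and it survives reboots.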

The minio web interface is pretty basic and straightforward. There aren't many admin operations you can perform from it, as those are focused on the mc command-line util. You define and browse buckets of object storage through basic GUI buttons; the real details are in the mc command-line utility. Setting up mc is also very easy. You would install it on any remote system you wish to manage files to/from the minio object storage, and the command-line usage for copying files is straightforward.
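A quick sketch of the mc workflow; the alias name, address, and credentials are placeholders:

  mc alias set mynas http://192.168.1.50:9000 minioadmin change-me
  mc mb mynas/media                 # create a bucket
  mc cp bigfile.mkv mynas/media/    # copy a file into it
  mc ls mynas/media                 # browse it like a filesystem
  mc admin info mynas               # server and drive status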

As a further test, I configured rclone on my zfs NAS in order to use its sync features. Since minio is an S3-compatible object store, configuring rclone was pretty basic. A few things I changed when syncing files were to use --s3-chunk-size 250M --s3-upload-concurrency 4 --s3-disable-checksum. This increased the throughput quite a bit; most backups copy at 330-370MB/sec in total bandwidth over my 10Gb link.
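For anyone replicating this, the remote definition plus the sync command look roughly like this; the remote name, endpoint, credentials, and paths are placeholders:

  # ~/.config/rclone/rclone.conf
  [mynas]
  type = s3
  provider = Minio
  access_key_id = minioadmin
  secret_access_key = change-me
  endpoint = http://192.168.1.50:9000

  # the sync itself, with the flags mentioned above
  rclone sync /tank/media mynas:media-backup \
    --s3-chunk-size 250M --s3-upload-concurrency 4 --s3-disable-checksum --progress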

Some pros: it's really easy to browse my collections and create different buckets to play around with, and it all looks like local files when using the mc command-line utility or even when browsing via the GUI. Once I dig into the nuances of the features, there are options for bucket versioning and for changing parity levels to reduce the storage overhead of protecting my data.

Another fun benefit is browsing the buckets from my mobile device's web browser. I can very easily upload video/pics to a bucket or even create a new bucket. It's also very easy to create a shareable URL, with an expiration, to send to someone.

Some cons: it isn't entirely clear what the remaining capacity of a given system is. Understandably, minio is designed to scale by deploying new instances on the same or other physical hardware, and files can even have their parity distributed among multiple physical systems if desired. Any HDD failures need to be monitored with Linux utils like smartmontools so you know when to repair or replace something.

I haven't yet tried popping out an HDD to see how it reacts, or what happens if/when I need to replace a drive: how it rebuilds parity, etc.
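From the docs it looks like repairs are driven through mc admin heal, so I expect the exercise to be something like this (untested by me so far; the alias name is a placeholder):

  mc admin heal --dry-run mynas   # report what needs healing
  mc admin heal -r mynas          # walk the buckets and rebuild missing shards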

One other thing that would make this easier is if I could set up proper DNS with an actual domain name in my home lab. Then I could add TLS via Let's Encrypt and start addressing the buckets by name instead of by IP address.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Why can't you create a proper internal domain?

I set up what I'm calling my backup box over the weekend, but I didn't have another InfiniBand HBA, so I'm filling the storage over GbE and it's going to take forever. I also don't have a chassis for it yet, so there are just stacks of drives and a bare motherboard sitting inside my rack. I've made a 56TB zpool with a couple of hot-spare drives for now. I'm just using retired 4TB drives, although these are Ultrastar or Constellation drives rather than the de-shelled SMR drives I alluded to before.

Hopefully I get another HBA on Wednesday.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
Just lack of experience in setting it up. I've never configured a domain name that I've registered to also resolve inside my home network. I found some basic options inside my Pi-hole config for registering system names as part of my real domain, and it mostly works, so that might be my answer for now. Internal systems that use TLS freak out, though, because (I think) I haven't configured wildcard certs in Let's Encrypt for my main domain.
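For what it's worth, the Pi-hole piece really is just a line per host in its local DNS records file, something like this (the IPs and hostnames are made up):

  # /etc/pihole/custom.list
  192.168.1.50 minio.handruin.org
  192.168.1.60 nas.handruin.org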

I ordered four of the 16TB Exos drives from B&H to test out. I ended up filling my entire minio NAS in the past two days and decided to move forward with trying a few of these new drives. I'm going to test a mixture of 4TB and 16TB drives to see how that works.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
The easiest way to handle the internal-external name for a single host is to use a CNAME: keep your normal A record for handruin.org and add an alias for minio.handruin.org that points to minio.handruin.noip.com or whatever. There are some DNS providers with scripting APIs for handling it yourself, but I'm not a customer of any of them.
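In zone-file terms it's just something like this (the IP and names are illustrative):

  ; handruin.org zone
  @      IN A      203.0.113.10                ; normal A record
  minio  IN CNAME  minio.handruin.noip.com.    ; alias pointing at the dynamic name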
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
Thanks, I'll check out managing CNAME configs with my domain registrar and see how that works for internal systems.

I ended up ordering four of the Exos X16 drives from B&H Photo, and they all have valid warranties from March 3, 2021 to April 12, 2026. They also did a decent job packaging: each drive comes in an individual brown box with the plastic holders that suspend the drive for adequate shock/damage protection. They cost a little more, but they shipped well, and valid warranties mean less hassle with returns. I haven't verified the drives' health yet, but assuming they're good I'll order more next month.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Apparently a feature has been added to OpenZFS called dRAID, which distributes hot-spare capacity across the whole array as a target for rebuilt data in order to greatly reduce rebuild times. It looks to be much less read-intensive when repopulating a spare while maintaining normal raidz levels of performance overall.
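The vdev syntax encodes the geometry directly. A hypothetical 13-drive sketch (disk names and numbers are made up, not a recommendation):

  # draid2 = double parity, 4d = 4 data disks per stripe,
  # 13c = 13 children in the vdev, 1s = 1 distributed spare
  zpool create tank draid2:4d:13c:1s /dev/sd{a..m}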
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
I'm glad dRAID has finally made it into a release so it can get more usage and testing. It's still a bit too new for anything I'd trust long term.

It's useful if you have a large quantity of drives, maybe 50+. For me, a rebuild doesn't take long enough to warrant switching to this layout.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I have a system that has over 50 attached drives, but they're in smaller individual arrays precisely because I don't want to deal with the nightmare of doing some kind of crazy huge rebuild.

I have a Mint system up and running with ZFS, but I haven't gotten around to buying enough RAM to see how viable it truly is. I need another couple of 16GB sticks and I just haven't felt like spending $250 on another pair of qualified DIMMs for it. The ASRock B450 board I'm using is very picky about 16GB DIMMs.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
Probably makes sense for you to use dRAID then. How much RAM are you looking to use? What's your raw capacity for this setup? I find the RAM recommendations for zfs can be a bit blown out of proportion unless you're in a very high-performance use case. If you give zfs less, it'll just have less for ARC. Dedup is never a good idea, so I wouldn't even consider it. You can supplement with a spare SSD for L2ARC if you really have read-performance concerns.
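If RAM stays tight, the ARC ceiling is also tunable, and L2ARC can be bolted on later. A sketch; the size, pool name, and device are examples:

  # cap ARC at 24GiB for the current boot
  echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max
  # or persistently in /etc/modprobe.d/zfs.conf:
  #   options zfs zfs_arc_max=25769803776
  # add a spare SSD as L2ARC if reads need help
  zpool add tank cache /dev/nvme0n1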
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
My ZFS setup currently has an 80TB raidz2 with a couple of 500GB L2ARC drives but only 32GB RAM. I was thinking I'd be able to use 64GB (it's a Ryzen 3600 on an ASRock B450 motherboard), or at least 48GB, but seemingly once I have my two 16GB DIMMs it wants a matched set to go higher.

The array is sluggish even just browsing folders over SMB, like 5+ second pauses just navigating. My workload, mostly backing up my photos, probably isn't doing it any favors.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
It's surprising that you're seeing such sluggish performance even when browsing SMB shares. Is your system under constant IO load when you notice the sluggish behavior? A pair of 500GB L2ARC drives should be decent for 80TB.
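Something like this running while you browse would show whether the pool is actually busy (the pool name is a placeholder):

  zpool iostat -v tank 5   # per-vdev load every 5 seconds
  arcstat 5                # ARC hit rates (arcstat ships with recent OpenZFS)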
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I thought I'd be OK as well. I'm using SMR drives, but they behave well under Windows, and the cache drives generally ameliorate the slow writes in Storage Spaces. The slow browsing was something I wasn't prepared to see.
I'll revisit this once I get some more RAM, though.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
Hmm, it may just be a write-amplification issue: zfs doing copy-on-write on top of SMR's read-modify-write strategy. Compared to how NTFS writes data, it could be far worse. I can see why your rebuilds would be terrible; SMR doesn't do well with how zfs rebuilds/resilvers a device, assuming you're using raidz(n).

When you built out your array, did you look into any of the planning guides for optimally aligning the zfs recordsize (default 128KiB) across your drive count? Did you look into specifying ashift=12 for 4K-sector drives?

Another thing to look into is whether you're doing a lot of synchronous writes. If you are, it's probably worth adding an SSD SLOG of around 16GB to take the ZIL write burden off the SMR devices.
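For reference, the recordsize/ashift checks from above look like this (pool and dataset names are examples):

  zpool get ashift tank               # 12 means 4K-aligned writes
  zfs get recordsize tank/media
  zfs set recordsize=1M tank/media    # large records suit big media files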
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I definitely played with recordsize and different configurations for my cache drives, but I haven't hit on a perfect configuration. As things stand, it's functional but clearly sub-optimal. I was under the impression that nothing about my workload would benefit from a SLOG; I'm mostly moving around media files, not doing database writes.

It may be that I'm better off without the SSD cache in terms of responsiveness, but then I take a hit on write speeds because of SMR. And that leads me back to wanting to see what happens when I put in more RAM.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Why are you using SMR drives when the write performance and longevity are poor? I don't think any of the various software workarounds will make the array fast once the CMR buffers are full. I'm also not so fond of the SSD buffer for large files and single users. Get as much RAM as you can; that and fast drives are the key. I have two 8x10TB arrays and four other arrays of 40, 56, 64, and 80TB. Some of the Seagate Hegas Enterprise/Exos drives are just about out of warranty, but still going strong.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Because cheap Seagate SMR drives go on sale at Costco approximately every other week. 8TB drives for $130? Don't mind if I do. The SMR drives do behave properly in Storage Spaces arrays, and there the SSD buffers work wonders. I haven't had any issues with reliability. It might be a problem for the WD models, but I don't have any evidence that the Seagate ones are any less reliable for being SMR. De-shelled external drives are fantastic for bulk storage, especially for media that is seldom accessed.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Well, you seemed to be having some issues with it. ;)
My view is that life is too short for crappy hard drives, and I've had enough of them, though not as many as you have.
The cost differential is not that much over five years.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I'm buying a lot of drives, although my need for capacity growth has finally stopped being a concern, at least for the next couple of years.
I also have an in for buying retired drives from my local data center now, which might help me out in that area.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
What are your thoughts on Toshiba's large enterprise drives?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I've had a bad time getting meaningful service from Toshiba, but, kind of like Fuji back in the day, it's also pretty easy to ignore their offerings.

HGST is the standard to which all others are held, but probably not worth the 50% price premium. I'm perfectly happy with Seagate Enterprise drives, and I'll take de-shelled SMR drives for bulk storage.

I try to stick with Samsung, Micron, Crucial, Verbatim and SanDisk for SSDs. I haven't had to come to any decision about WD's SSDs yet, but since they're easy to find in big-box stores, I'm sure I'll wind up with one sooner or later. Supposedly both WD and Seagate offer enterprise SSDs as well, but I haven't touched those either.

I just looked at Backblaze's current report (Q1 2021) and apparently the Toshiba 14TB drives they have are doing very well: almost 30,000 of them with a 0.5% failure rate. There don't appear to be any HGST drives over 12TB, so maybe I'll have to re-think things if that brand is being phased out.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,811
Location
Eglin AFB Area
Website
sedrosken.xyz
My M.2 SATA games SSD is a WD Blue and I've got no complaints. I chose it for price (caught a sale) and decent benchmark numbers for a SATA drive at that price. It does what I need it to, but I'll be honest: I have a tendency to install stuff and just leave it, so I don't have a ton of write cycles on it, especially since my swap file is on my NVMe boot SSD.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
The later WD Blue SATA III SSDs are fine, though not particularly fast.
My M.2 SATA games SSD is a WD Blue and I've got no complaints. I chose it for price (caught a sale) and decent benchmark numbers for a SATA drive at that price. It does what I need it to, but I'll be honest: I have a tendency to install stuff and just leave it, so I don't have a ton of write cycles on it, especially since my swap file is on my NVMe boot SSD.

You are using the M.2 SATA for the NAS I/O buffer or what?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
My go-to basic SATA SSD has been the Crucial MX500 2TB for things where I want a bit more performance than an HDD with ample space.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,811
Location
Eglin AFB Area
Website
sedrosken.xyz
You are using the M.2 SATA for the NAS I/O buffer or what?

Just to hold installs of games and such that tangibly benefit from being on an SSD instead of my hard drive. I'm not using it in a NAS context; I should have clarified, I was just throwing my two cents in on whether WD SSDs are any good. And yes, I'd agree with your summary: decent enough, though not particularly fast. It blows my spinning rust out of the water, though.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
I should have bought more of those Seagate 16TB Exos X16 drives when the price was still $349 at B&H. They haven't come back down in a while, and I'd like to finish out my build after experimenting with the original four I purchased.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,924
Location
USA
I remember that making some headlines, but I thought it died out and mostly affected SSDs; I don't know for sure. I'll definitely get a few more if the price comes back down.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
That was a bit of a joke since I'm pretty sure it's mostly SSDs too, but it does look like 10TB+ drives are just perpetually stuck over $300. Maybe there just aren't that many consumers who want giant drives?
 