Home NAS

LunarMist

I can't believe I'm a
Joined
Feb 1, 2003
Messages
15,268
Location
USA
Just like my other NAS units, the write speed is always faster than the read speed. I've never understood the rationale for that. Regular drives always have faster reads than writes.
 

sdbardwick

Storage is cool
Joined
Mar 12, 2004
Messages
566
Location
North San Diego County
Just like my other NAS units, the write speed is always faster than the read speed. I've never understood the rationale for that. Regular drives always have faster reads than writes.
Because on writes, in some situations, you are limited by the interface-to-cache transfer rate, whereas on reads you are limited by the physical media transfer rate. Depending on the number of drives, the RAID implementation, and the size of the on-disk cache, you can maintain a higher write speed for an extended period of time.
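The effect described above can be sketched numerically. In this toy Python model (all figures are assumed for illustration, not measurements from any real NAS), writes land in cache at the interface rate until the cache fills, then drain at the media rate:

```python
# Toy model of cache-absorbed writes: the first cache_gb of a transfer go
# in at the interface speed; everything beyond that drains at media speed.
# All numbers are illustrative assumptions, not measurements.

def avg_write_speed(total_gb, cache_gb, iface_mbs, media_mbs):
    """Average MB/s over a transfer of total_gb."""
    total_mb = total_gb * 1000
    cached_mb = min(cache_gb, total_gb) * 1000
    seconds = cached_mb / iface_mbs + (total_mb - cached_mb) / media_mbs
    return total_mb / seconds

# A transfer that fits in cache runs at interface speed; a huge one
# converges toward the raw media speed.
print(round(avg_write_speed(4, 8, 1100, 450)))     # 1100
print(round(avg_write_speed(1000, 8, 1100, 450)))  # 452
```

The point of the model is that the write advantage from caching shrinks as the transfer grows, so a cache alone can't explain writes staying faster over many terabytes.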
 

LunarMist

I can't believe I'm a
Joined
Feb 1, 2003
Messages
15,268
Location
USA
But why is the interface (or whatever the bottleneck is) asymmetrical in the opposite direction from the conventional wisdom that reads are faster than writes?

The QNAP 10GbE card in the NAS is wired directly to the 10GbE Intel adapter in the computer, but I also tried an Aquantia 10GbE adapter in the PC and there was no difference. I even tried the Aquantia in the QNAP, but it is not recognized and I'm not about to figure out how to add the drivers to QTS. I also put the Intel 10GbE in the QNAP and that connects fine with the Aquantia in the PC. The bottom line is that the performance is all the same.

I am using a simplified setup with RAID 6 and ext4, no storage pools or snapshots, and no M.2 cache. The write speeds are sustained at 500-600 MB/s, depending on the source. The OS cannot be caching anything, because the speed stayed about the same for over 14TB, at which point I terminated the transfer manually. Read speeds are 400 MB/s or so.

The NAS and drives are quite cheap and literally twice as fast as the backup unit it will replace, so I'm not complaining.
I just want to understand the reason as it has been bugging me for several years. I don't think it is a coincidence since the behavior is the same with two QNAP and one Synology NAS, all being different models with different kinds of CPUs/chipsets.
 

LunarMist

I can't believe I'm a
Joined
Feb 1, 2003
Messages
15,268
Location
USA
For some reason the QNAP units take forever to boot. This one requires five minutes and announces its status in a synthetic female voice.
Synology units take about 1.5-2 minutes to boot. It's not a matter of CPU, just QNAP mentality. I'll probably leave the NAS on and live with the ~40W power draw.
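That ~40W figure is easy to put in yearly terms. A quick back-of-envelope in Python (the $0.15/kWh rate is an assumed example; substitute the local tariff):

```python
# Yearly cost of leaving a ~40W NAS running 24/7.
# The electricity rate is an assumed example, not a quoted tariff.
watts = 40
kwh_per_year = watts * 24 * 365 / 1000   # 350.4 kWh
rate_per_kwh = 0.15                      # assumed $/kWh
cost_per_year = kwh_per_year * rate_per_kwh

print(f"{kwh_per_year:.1f} kWh/yr, ~${cost_per_year:.2f}/yr")  # 350.4 kWh/yr, ~$52.56/yr
```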
 

DrunkenBastard

Storage is cool
Joined
Jan 21, 2002
Messages
770
Location
on the floor
Going to try out the Synology DS920+, on special on Prime Day for $439. Will use 500GB 970 EVO NVMe drives for cache, and bump the RAM to 8GB.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
That sounds like a decent setup for the money, and it's nice that it has a cache option via NVMe. What size HDDs will you be adding to it?
 

DrunkenBastard

Storage is cool
Joined
Jan 21, 2002
Messages
770
Location
on the floor
The Seagate Exos drives seem quite affordable (14TB for $285) but apparently they are third party via Amazon and a lot of people are getting serial numbers that come back as OEM.

The WD Gold 12 TB is $384.

I have three external 8TB WD enclosures I got a few years ago when they were on special for $129 or $179; I might just shuck them and go with three 8s to start. It also looks like the RAM can be expanded to 20GB total with a 16GB SO-DIMM.
 

DrunkenBastard

Storage is cool
Joined
Jan 21, 2002
Messages
770
Location
on the floor
Ended up going with two WD 12TB Easystore drives from Best Buy with store pickup. I don't like to expose them to UPS/FedEx handling.
 

DrunkenBastard

Storage is cool
Joined
Jan 21, 2002
Messages
770
Location
on the floor
So the way the cache works on the 920+: with one NVMe drive it is restricted to read caching; with two NVMe drives it mirrors them and enables read and write caching. However, you can't then mount them as a separate drive; they can only be assigned as cache to a single specific volume. Hopefully DSM 7 will allow that in the future.

Basically silent running in terms of fan noise, drive seeks are the only sound it makes.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
Sounds like they're playing it safe to reduce support calls with single-drive read caching. Bummer that it can only be used as a cache drive for a single volume.

I ended up picking up a pair of the WD Easystore 14TB drives that went on sale at Best Buy a couple days ago for $190. I'm debating shucking them or just leaving them as additional USB backups. Do you use any specific utility to verify new drives? In the past I used HD Tune to run a sector test.
 

DrunkenBastard

Storage is cool
Joined
Jan 21, 2002
Messages
770
Location
on the floor
I just fill them with data and then run a chkdsk. One of the three 12TB Easystores ended up giving me bad sector errors when initializing the SHR5, so now I test them for a couple of days before shucking.
 

DrunkenBastard

Storage is cool
Joined
Jan 21, 2002
Messages
770
Location
on the floor
So I'm considering returning the 920+ and getting something that supports 10Gbit Ethernet. I don't like the wait at 1Gbit for large transfers. However, I can't tolerate any significant noise from the tiny fans in the associated switches.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
What format do their 10Gb models use? Is it SFP+ or RJ45? Could you just run a dedicated cable from your PC to the NAS?

I'm still trying to figure out an adequate 10Gb config. There are some switches I've seen over at servethehome.com but to your point, they'll likely have some noise. It'll be in my basement so the noise is less of a concern. It's more the wattage usage.
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,852
Location
44 degrees, 43 minutes latitude; 91 degrees, 28 mi
Long time since I've been around! Great to see conversations still going!

I too am moving to 10GbE due to large file transfers. I have a detached garage with a single run of Cat6 where I have a WiFi access point, some PoE cameras, and a TrueNAS Core box. I'm considering whether I should replace the single run of Cat6 with fiber optic (multimode OC3?), both so I can send larger backups quicker and to help minimize the risk of a lightning strike taking out more equipment than it must. Anybody know about fiber optic and lightning?
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,852
Location
44 degrees, 43 minutes latitude; 91 degrees, 28 mi
What do you guys use for your OS on FreeNAS/TrueNAS? A USB flash drive? A conventional 2.5" SSD? A SATADOM? The FreeNAS material used to strongly suggest a USB key (to avoid taking up an unnecessary SATA port), but ServeTheHome argues that USB flash drives may not be particularly reliable. Do you believe that the performance of the disk holding the OS is largely irrelevant, since most of the OS is just kept in RAM?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
Long time since I've been around! Great to see conversations still going!

I too am moving to 10GbE due to large file transfers. I have a detached garage with a single run of Cat6 where I have a WiFi access point, some PoE cameras, and a TrueNAS Core box. I'm considering whether I should replace the single run of Cat6 with fiber optic (multimode OC3?), both so I can send larger backups quicker and to help minimize the risk of a lightning strike taking out more equipment than it must. Anybody know about fiber optic and lightning?
Hey there

Running a fiber OC3 cable makes sense. I don't know for sure if they're all non-conductive but I suspect some might be. They're usually plastic, nylon, and glass for the most part. If you're going to pull a new cable, at least pull two. Do you know what you plan to use as a switch on both ends?

What do you guys use for your OS on FreeNAS/TrueNAS? A USB flash drive? A conventional 2.5" SSD? A SATADOM? The FreeNAS material used to strongly suggest a USB key (to avoid taking up an unnecessary SATA port), but ServeTheHome argues that USB flash drives may not be particularly reliable. Do you believe that the performance of the disk holding the OS is largely irrelevant, since most of the OS is just kept in RAM?
I just run Ubuntu Server 18.04 LTS on my NAS, with the boot/OS drive being a 1TB Samsung SATA SSD. I would agree that the performance of the OS drive is not a significant factor for what I do with my NAS. It has 192GB RAM, so lack of memory has never been an issue. Sometimes it's nicer to have a faster OS drive when performing OS maintenance and upgrades; they take less time.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
I just fill them with data and then run a chkdsk. One of the three 12TB Easystores ended up giving me bad sector errors when initializing the SHR5, so now I test them for a couple of days before shucking.
I don't know if this will help; I was looking for a utility to write data to fill the drive like you were doing. I ended up finding a tip that recommended the included Windows command-line utility called cipher.

e.g.
cipher /w:J:\test
  • First it fills all free space with zeros (0x00)
  • Second, with all 255s (0xFF)
  • Finally, with random numbers

/W Removes data from available unused disk space on the entire
volume. If this option is chosen, all other options are ignored.
The directory specified can be anywhere in a local volume. If it
is a mount point or points to a directory in another volume, the
data on that volume will be removed.
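One caveat with cipher /w: is that it overwrites free space but never reads it back, so it exercises writes more than reads. A minimal fill-then-verify sketch in Python (file name, chunk count, and path are placeholder choices; in practice you'd point it at the drive under test and size it near capacity; random.Random.randbytes needs Python 3.9+):

```python
# Minimal fill-and-verify sketch: write seeded pseudorandom chunks, then
# re-read and compare them. Target path and size here are placeholders;
# in practice you'd size the file to fill the drive under test.
import os
import random

CHUNK = 1 << 20  # 1 MiB per chunk

def fill(path, chunks, seed=42):
    rng = random.Random(seed)
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(rng.randbytes(CHUNK))

def verify(path, chunks, seed=42):
    """Return None if every chunk reads back intact, else the index of
    the first mismatching chunk."""
    rng = random.Random(seed)
    with open(path, "rb") as f:
        for i in range(chunks):
            if f.read(CHUNK) != rng.randbytes(CHUNK):
                return i
    return None

fill("testfill.bin", 8)
print(verify("testfill.bin", 8))  # None means all data read back intact
os.remove("testfill.bin")
```

Because the pattern is regenerated from the seed, nothing needs to be kept in memory between the write and the read pass.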
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
It's been 5 years since you were here. :eek: Good to see you visit, Adcadet. ;)

WRT lightning, don't forget the buildings are still sharing power, and therefore copper cable. If you are in a lightning-prone area, nothing short of a full lightning protection system upstream of your power box is going to help with a direct strike.

Having said that, the fiber solution is going to be way more reliable over time.

Don't know why Handy thinks you should pull two, they're not high frequency copper cables. If a rogue backhoe operator attacks, both will be toast anyway. Cable trauma seems unlikely in your situation with the PVC pipe, but maybe he's worried about the pre-terminations failing down the track?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
It's less about backhoe cable trauma and more that if one is going through the hassle to pull a single cable, a second one is hardly much more effort. You never know if you'll need it for something and if by chance something is wrong with the cable you have the spare anyway.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Yes, but does that conventional wisdom still make sense when applied to a fiber cable in a benign environment? I suppose it could be damaged during installation if someone went nuts with the bend radius.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
I guess it doesn't. I have pulled tons of fiber cables in lab environments over the years. There have been a noteworthy number of failures, either due to trauma from pulling or defects in the terminated ends.

I've also seen plenty of cases where the fiber cable was bent too far at the switch, or kinked in the cabinet door, but those happen later, over time. Probably not likely in a home environment.
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,852
Location
44 degrees, 43 minutes latitude; 91 degrees, 28 mi
Hmmm...fiber may not offer much increased lightning protection, may be more susceptible to physical damage, might be a pain in the butt to install (I had some insulation copiously sprayed in the basement, including over the bit of conduit that it uses to leave the house), and the speed advantage may not matter much (since I'm using this trunk for cameras and backups, both of which do fine with 1GbE). Maybe I leave well enough alone.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,364
Location
I am omnipresent
Website
s-laker.org
I bought two of them for work and one for home today. I have a couple 12TB Ironwolf drives but this is the first time I've seen the enterprise drives so cheap.

Ironically, I usually buy and de-shell external drives to stick in arrays, and this time I'm buying the huge single drive to stick in an external enclosure, because it cost as much as a pair of 8TB external SMR drives.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
I'm debating buying a few and trying them out. I checked out their spec sheet and they look good overall. I haven't been following Seagate in a long while, since I lost all four of my 3TB drives to failures. I don't know if my NAS would have any issues reading these given their size. I have four bays open, and using these would give a nice little play area in a basic 4-drive parity config.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,364
Location
I am omnipresent
Website
s-laker.org
I'm debating buying a few and trying them out. I checked out their spec sheet and they look good overall. I haven't been following Seagate in a long while, since I lost all four of my 3TB drives to failures.
If I can get them at a competitive price, I still want HGST drives. One of the reasons I was willing to buy Ironwolf drives is that Seagate includes data recovery as part of the warranty service. The Exos drives don't have that, but on the other hand they're the drives with double the rated MTBF. And the price is certainly right.

My de-shelled Seagate SMR drives have been just fine. I've lost one drive a year, within their operating life, over the last five years.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
I agree, I've loosely followed the Ultrastar HGST drives and they carry a premium. My 6TB HGST drives are over 5 years old now with 24x7 runtime and I'm looking for an update. I also want to cut back on power draw which these Seagates would help with. I could go from 20 drives down to 10 and even find a more power efficient NAS to replace my supermicro box.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,364
Location
I am omnipresent
Website
s-laker.org
I've actually been somewhat annoyed that the trend in desktop boards has led to fewer SATA ports. We can always spec out something with an LSI controller, but we used to be able to repurpose just about anything with a mid-range desktop board as a credible storage server. A lot of new mid-tower systems only have 4 SATA ports. Those machines seem crippled as storage boxes, especially if you have to choose between an HBA and a video card, and/or the NVMe port disables one or two of the SATA ports.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
I've looked to Supermicro in those situations for a basic board that comes with 6 SATA ports plus a built-in 8-port SAS Broadcom (LSI3008) adapter like the X11SSL-CF for example. You still get three PCIe slots, a built-in BMC for remote management, and a basic VGA video that works well enough for headless setups. Add a basic E3 Xeon and 64GB ECC and use the PCIe slots for either more storage and/or a 10Gb HBA.

A step up, for more of an all-in-one, would be the X11SSH-CTF, which gets you all of the above plus an NVMe slot, 8 SATA ports (vs. 6), and dual Intel X550 10Gb NICs; around $400 is pretty decent for a board with this much stuff.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
Parts to start a 16-drive NAS. I'm sure I can find better options with a bit more time but this would work well.

 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,364
Location
I am omnipresent
Website
s-laker.org
I think I might be tempted to use a 35W i5 like the 8600T, along with a Gigabyte C246-WU4. That gives me 10 SATA ports and a pair of NVMe slots if I want them. Throw in a couple of 32GB DDR4 modules, an InfiniBand or 10GbE HBA, and a SAS controller. I think we're around the same price. Yours has the dual 10GbE and built-in SAS, but mine would be easier to cool and supports more overall RAM.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
That works too. I do value the BMC feature and also ECC RAM, which yours wouldn't support. You'd need to add some kind of video to yours, which the Supermicro wouldn't. It's difficult to say whether that offsets the CPU TDP differences, but either way you can't get ECC with an i5, and I prefer ECC in a NAS or any storage system for improved data integrity. The reduced price of the X11SSL-CF would leave 14 SATA/SAS ports, and it would really depend on your workload or use case whether a consumer NVMe or an enterprise SAS drive is justified. You could also just plug a 2.5" NVMe into a riser in a 4x PCIe slot if you really need that performance. I've done that in workstations and it works fine.

There might be an Intel Atom based solution that could offer both ECC and reduced TDP; I'll have to search more later to see what's available. Again, really depends on the workload. Most systems that warrant more than 64GB of RAM IMHO usually also warrant a better CPU. All the storage servers I've built so far seldom use anywhere near 32GB and page cache really isn't a factor for my builds. It's usually the virtualization that demands the extra RAM but I no longer combine workloads like that. I like the separation of responsibility/task.

What kind of use-case are you deploying with your NAS?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,364
Location
I am omnipresent
Website
s-laker.org
That i5 does have crappy Intel graphics; I have a little 10" HDMI screen I normally use for RPi that would take care of my needs in that department. I tend to look at ECC RAM as more of a headache for a home system. RAM compatibility can be a huge hassle until the RAM is so obsolete that no one wants it anyway, but commodity RAM is probably good enough for a home system. Plus I already have that motherboard sitting around.

The system would be 6C/6T, so nothing more serious than running TrueNAS (the thing that replaced FreeNAS) or Windows Server, but from past experience, FreeNAS used about 1GB RAM for every 1TB of addressable storage. More RAM is probably better.

I haven't compared platforms in years. Windows Storage Spaces turned out to be the best option for me last time I made a big change. As I'm reading about what has changed in TrueNAS, it's entirely possible that I'd be better off back in zfs-land for failover and data integrity, but then Windows has RDMA and multi-tiered caching via Storage Spaces Direct.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,364
Location
USA
Oh right, I forgot about the onboard graphics, fair point.

Your point about ECC is valid, and it's the reason why I lean towards motherboards from someone like Supermicro. Companies like Crucial tend to validate/certify their ECC RAM on SM boards, so the RAM I listed above may not be the least expensive, but it was certified to run on that SM board. In the realm of zfs, or in your example of TrueNAS/FreeNAS, ECC has certainly been a long-running topic of discussion, and I lean on the side of just using it, given zfs's focus on data integrity and its use of checksums to protect against bitrot.

The age-old 1GB RAM per 1TB rule isn't really as applicable anymore, and it's certainly not warranted in home use where performance isn't the utmost requirement. No one sane really uses the zfs dedup feature, so don't do that if you're planning on it; that is the true RAM hog in zfs. I've been managing two home NASes built on Linux + ZoL, and RAM size has never been a factor, even on my 120TB zfs NAS. Unless you need all the bells and whistles of TrueNAS with its GUI and plugin system, you can get by very easily with Ubuntu Server (or CentOS Stream) + zfs + Samba/CIFS for 99% of home use cases, especially since you're very technical. If you need the rarer iSCSI target feature, then maybe stick with TrueNAS, as customizing your own target via something like SCST/LIO/TGT is a pain in the ass.

You can just make a single pool from all your drives with multiple parity and keep it simple. If you know you'll be doing a bunch of synchronous writes to your pool or zvol, add a zfs SLOG device on a decent SSD (or even mirrored). Most people don't need that though. If you have a lot of frequent reads, add a single SSD as an L2ARC to your pool. These are all very simple to do via the zfs command line and don't warrant the overhead and complexity of using TrueNAS on FreeBSD which limits your supported hardware even further. You also won't be able to discern a performance benefit with FreeBSD versus ZoL since it's a kernel mod, not FUSE.

If you need RDMA performance with something like a RoCE card or Infiniband, you might want to reconsider your architecture for performance versus a conventional 10Gb/25Gb ethernet adapter anyway. I went down that road with RDMA and it was a huge pain to get it working right.

All that said, after having a conversation with a friend, I'm considering building my next NAS on MinIO, due to a bunch of interesting technical advantages with erasure coding and far more granular control of replication/parity on individual objects, versus using the same amount of parity protection over the entire array. I'll likely build out a PoC in my home lab to go through how it works and see if it's a good fit. Might be worth looking into.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,364
Location
I am omnipresent
Website
s-laker.org
That's an interesting project. Would part of your plan include replicating your data offsite? Or are you just looking at it for bitrot / data loss prevention?
 