Home NAS

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,898
Location
USA
I was assuming RAID-Z2. I have an older NAS with similar CPUs and there can easily be bottlenecks with 10Gb. If you encrypt it will be even worse.
I'll try it out and see what happens. My current RAID-Z2 setup barely touches the CPU and my storage needs aren't as demanding as yours. I'd prefer the reduced power usage, noise, and heat.
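Before the full build-out, my plan is a quick CPU crypto sanity check; a minimal sketch, assuming OpenSSL is installed on the box and that the last stdout line of `openssl speed` is the throughput row (a rough heuristic, not a real benchmark):

```python
import subprocess

# 10GbE line rate is 10 Gb/s, i.e. roughly 1250 MB/s of payload to encrypt.
LINE_RATE_MB_S = 10_000 / 8

# 'openssl speed -evp aes-256-gcm' benchmarks the cipher that ZFS native
# encryption uses by default; the final line is the throughput row.
out = subprocess.run(
    ["openssl", "speed", "-evp", "aes-256-gcm"],
    capture_output=True, text=True, check=True,
).stdout

print(out.strip().splitlines()[-1])  # per-block-size throughput figures
print(f"10GbE needs ~{LINE_RATE_MB_S:.0f} MB/s sustained to saturate")
```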
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
I just meant that what is effectively an 8th-gen part (like Coffee Lake) at 25W must be a lot weaker than a modern 25W CPU, or even a 15W CPU. I have no idea what is actually available, though. I was comparing the V1500B with the C3758 in PassMark and, although n=6 or 7, the results are similar in multicore, with the V1500B better per core. I regret that the V1500B is too wimpy for RAID-Z2 and encryption. Even without encryption it's not as fast as my older Synology with a Xeon CPU and it doesn't saturate 10GbE. I read somewhere that SMB does not scale across cores very well and that faster cores are more important. However, it may very well be that the commercial NAS is not as efficient as the home-built variety.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,898
Location
USA
Which modern 25W CPUs would you consider? It's also that I wanted a package that had 8-12 SATA ports, ECC memory support, a BMC, and the ability to add 10Gb.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,898
Location
USA
I'm curious if you can give more details on your NAS testing strategies @LunarMist. I'd like to see how the homebuilt NAS I'm piecing together stacks up against the concerns you've raised.

Another curiosity I had was regarding your comment about ZFS with encryption. Do you happen to know if your NAS used a newer ZFS with native encryption, or did they implement it with something like LUKS at the disk level?
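(If you ever get shell access, this is easy to check; a minimal sketch, assuming a `zfs` binary on the PATH and a hypothetical `tank/share` dataset name:)

```python
import subprocess

def zfs_encryption(dataset: str) -> str:
    """Return the ZFS 'encryption' property; 'off' means no native crypto."""
    return subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "encryption", dataset],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

# 'off' here, with encrypted data on disk, would suggest LUKS (or similar)
# sitting underneath the pool rather than ZFS native encryption.
print(zfs_encryption("tank/share"))  # hypothetical dataset name
```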

Also, how many HDDs and/or SSDs were you using in your pool configuration? How was the data being shared (SMB, NFS, iSCSI, etc.)?
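On my end, the first pass will just be a dumb timed sequential write over the share before reaching for fio or iperf3; a rough sketch with a hypothetical mount point:

```python
import os, time

PATH = "/mnt/nas/testfile"    # hypothetical SMB/NFS mount point
SIZE_GB = 8                   # ideally bigger than RAM on either end
CHUNK = 4 * 1024 * 1024       # 4 MiB per write call

buf = os.urandom(CHUNK)       # incompressible, so compression can't cheat
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_GB * 1024 // 4):  # number of 4 MiB chunks
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())      # make sure it actually hit the server
elapsed = time.time() - start
print(f"sequential write: {SIZE_GB * 1024 / elapsed:.0f} MB/s")
```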
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
It is a QNAP, but I have no access right now and the power is off; it's air-gapped. Trying to rebook; travel is screwy due to tornadoes in FL. It has 8 noisy 18TB Exos. SMB over QNAP's dual-port 10Gb SFP+. The OS is QuTS hero and the drives are in RAID-Z2. I will power it up locally and look into the configuration, maybe on the weekend.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
I have one large Z2 storage pool with two folders. The encryption is done per folder and can be turned on or off; that takes some time but does not affect the data. When encryption is on, the folders are locked at NAS boot and it takes a few minutes to fully unlock with a password.
It is possible to set each folder as thin- or thick-provisioned, deduplication on/off, compression on/off, a bunch of snapshot options (mine are off), some setting for the ARC RAM usage, etc. HTH, but I don't really understand what it all means.

I should also mention that it is over 97% full, so it is meaningless to test performance now.
 
Last edited:

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,716
Location
Horsens, Denmark
Got a request from the mother-in-law that led to me powering up the old Synology box to retrieve some home movies. It is a DS1812+ with 8x 4TB drives in RAID-5. The fans in the unit are sounding rough and it took two tries and an OS reinstall to get it to boot. It is very old (10+ years?) and I'm thinking about replacing it. Fully solid state would be awesome for power draw and noise, with something like the ASUSTOR Flashstor, but I don't think the requirements justify the spend. Performance can really be anything; as long as it can handle a single 4K stream we're good.

24TB after redundancy is enough; 32TB would be plenty for the foreseeable future. Synology now requires their own drives? I don't like that. Something that can run FreeNAS/OpenNAS/whatever would make me feel better than a closed ecosystem. I have no need for it to have an app store or anything else.

My workstation motherboard has 4 M.2 slots and 4 SATA ports; should I just fill it with 8TB drives and make the storage local? Local is fine; my workstation is always on and I am the only one who needs access to the files anyway. The drawback to that path is that I can't stand the noise of spinning disks at my desk, so it could get pricey.
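(Back-of-the-envelope on the options, a sketch with nominal capacities and no filesystem overhead:)

```python
def usable_tb(drives: int, size_tb: float, parity: int) -> float:
    """Nominal usable capacity, ignoring FS overhead and TB-vs-TiB slop."""
    return (drives - parity) * size_tb

print(usable_tb(8, 4, 1))  # old DS1812+, 8x 4TB RAID-5      -> 28 TB
print(usable_tb(4, 8, 0))  # 4x 8TB M.2 local, no redundancy -> 32 TB
print(usable_tb(4, 8, 1))  # same four drives, one of parity -> 24 TB
```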
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
Both Synology and QNAP have gone downhill a bit, but Synology is ridiculous due to the specifically-firmwared, overpriced Toshiba hard drives it needs. (They will work with regular drives, but there is no useful drive health data because the drives are always shown in a bad state.)

12TB or larger capacity is needed to get helium in current drives. The air-filled drives (up to 10TB) run hotter/noisier in general, though I would not have any Seagate Exos or IronWolf Pro running constantly within 2m of my desk since they are so noisy on seeks (I have over a dozen 18/20TB). WD HDDs are obviously quieter, but not silent up close. The Gold or DC datacenter series are the best WD drives to get. If you are not building something, there are a variety of QNAPs with QuTS hero, which runs ZFS, supposedly the best FS. https://www.qnap.com/en-us/product/?conditions=form_factor-tower

There is also TrueNAS, the FreeNAS offshoot from iXsystems. The hardware like the Mini is increasingly archaic, though I believe you can just get the software and build something. I looked into them several times and never found the value economical or desirable.
 
Last edited:

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,155
Location
I am omnipresent
I have 28TB of U.2 drives on an x16 card in my new workstation. The HBA cost a whole $50 and I think I spent $1300 total on those drives. The 8TB drives were new, not pulls, although nobody has them that cheap any longer.

Micron, Samsung, Intel/Solidigm, and WD have affordable high-capacity PCIe 3 U.2 drive options around, albeit mostly as enterprise pulls or spares. I've never seen Kioxia drives at that magical $50/TB target, but if you look, you can do really well on 4 and 8TB models.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
Broadcom makes some MegaRAID tri-mode adapters (e.g., the 9500 series) that run at 8 PCIe lanes regardless of the number of attached devices. x8 PCIe 4.0 has an aggregate bandwidth of about 15GB/sec., so more than enough for practical use.
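(The ~15GB/sec. figure checks out from the per-lane rates; a quick sketch:)

```python
# PCIe 4.0 signals at 16 GT/s per lane with 128b/130b line encoding.
GT_PER_S = 16
GB_PER_LANE = GT_PER_S * (128 / 130) / 8   # ~1.97 GB/s usable per lane
print(f"x8 PCIe 4.0: {8 * GB_PER_LANE:.1f} GB/s aggregate")  # ~15.8 GB/s
```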
I had some ideas of using two U.3 drives and 8 SATA drives at one point before other priorities intervened.
 
Last edited:

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
I have two 8TB drives in RAID 0 for video work. They could just as easily be in a JBOD but why not get throughput gains?
Sure, JBODs are not good when a drive fills.
However, it is the opposite of a NAS. I guess you back up to them.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,155
Location
I am omnipresent
LM, I have a full-fat file server and a tape changer on hand as well as Crashplan online backups. I think I'm OK with my backup situation. The two drives I have in RAID are where I stick work in progress and intermediate data. That's all stuff I can re-render if I really need to since I have project files and source data.

re: Broadcom tri-mode adapters, I bought a 9600-24i off eBay for ~$330. Every connected NVMe drive takes a full high-density connector, so each connector gives you either 4 SAS drives or 1 NVMe drive. That should not be surprising. The other thing I will say is that the controller gets HOT, even if nothing is plugged into it. It should probably have a GPU-like cooling solution to go in a chassis that doesn't have rackmount-style airflow, so it's getting put in one of my datacenter machines for extra SSDs instead of getting used at home, albeit possibly in an external enclosure with some unorthodox cabling.

I thought about getting a tri-mode card for use at home, but an extra four drives gets me the density of SSDs I need and, when using PCIe 3 drives, isn't crazy to keep cool. They're fast enough that I'm not going to complain about "only" 3000-whatever MB/sec speeds. I do have a couple 2TB gen 4 M.2 drives (I'm using Crucial P5 Plus drives) on my workstation as well, but the green bars move fast when I'm copying over exported video files, and that's what I was looking for.

dd, I thought you were looking at the possibility of moving to a less-stupid platform than 14th-gen Intel already.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
Don't those four SSDs take 16 lanes in the dumb adapter? How are the other lanes occupied, 8 to the GPU and 4 to the NIC, assuming you can only use 3 slots?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,155
Location
I am omnipresent
My top PCIe slot is set up as 4x4x4x4 for the dumb U.2 adapter. My GPU is in the middle on x16 and my IB NIC is using four lanes of an x16 slot and rather stupidly has to be on an extension because my GPU fans stick out a little too much. The bottom slot is running off the chipset lanes rather than the CPU lanes. I just left one of the other slot covers off the back of my case to route the SFP+ cable inside.
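(To tally it up, a sketch of just the CPU-fed lanes from the layout above:)

```python
# CPU-fed lanes only; the bottom slot hangs off the chipset instead.
slots = {
    "top slot, bifurcated 4x4x4x4 (dumb U.2 adapter)": 16,
    "middle slot (GPU)": 16,
    "x16 slot wired x4 for the IB NIC": 4,
}
print(sum(slots.values()), "CPU lanes in use")  # 36
```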

I am not using the 9600 at home. I definitely thought about it, but my case only has places for 8 3.5" drives and 3x 2.5" anyway.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,716
Location
Horsens, Denmark
...

dd, I thought you were looking at the possibility of moving to a less-stupid platform than 14th-gen Intel already.

I was, but:

1. It doesn't look like Intel's or AMD's next gen are going to offer me any performance gains at all. I'd be happy with 20%, maybe even with 10%, but it looks like my overclocked 2-year-old machine is within single digits of the new parts for what I do.

2. If I am dropping $5k on building a NAS or home server or whatever, that should probably be the limit of my absurd PC spending for a while.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
Terramaster is usually considered a 2nd-tier Chinese NAS company compared to the Taiwanese QNAP and Synology. There are of course others like Netgear, Buffalo, Asustor, etc. Some of the companies are less stable and/or don't have the best long-term OS support. I can still update my >5 YO Synology, for example.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,155
Location
I am omnipresent
Every time I look at commercial NASes, I wind up running down what's available with goofy AliExpress hardware. I'd love to see something with one 10GbE port, at least an x8 PCIe slot, and a mix of SATA or MiniSAS plus at least one M.2 in an ITX package. It's not hard to find most of those but never all of the above. I'm still holding out hope though.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,155
Location
I am omnipresent
I'm sure you would want flash memory nowadays.


Asus has that one all-M.2 NAS but it only has 4GB RAM and not much in the way of flexibility.

I'd look at a $1500 ASRock Rack ALTRAD8UD-1L2T paired with a 40W Q32-17 (literally, 32 1.7GHz Ampere cores). I'd set it in a 2U rack enclosure, or perhaps an older version of the SilverStone LaScala that can take a pair of 4-in-1 5.25" U.2 backplanes ($400 each), probably with a high-efficiency 600W-ish PSU. Not sure how much RAM I'd put in it, but 8x 32GB RDIMMs ($300ish) seems reasonable for a TrueNAS system. There IS a passive heatsink for at least some versions of Ampere, although in that case I'm sure I'd want to pay some attention to fans in the rest of the chassis.

Used ALTRAD8UDs sell for about $900 including a CPU, albeit not usually the low-power one.

That motherboard has native support for 4x SlimSAS, which is good enough for eight NVMe drives, and it has a pair of OCuLink ports (potentially two more). It has dual native 10GbE, four physical x16 PCIe slots, two M.2 slots, and an ASPEED controller for BMC goodness.
So there are potentially 10 ways to add NVMe to that thing without even involving any other HBAs, but if you wanted to add a faster NIC and an HBA for your spinning drives, go for it.

I don't see any 8TB U.2 drives as cheap as $400 like I did at the beginning of the year, but I do see some for around $650. It's entirely possible that someone really trying could put together a 64TB all-NVMe setup, or perhaps 4x 15TB ($1200 each), with a crap-ton of room for expansion.
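(Rough tally of that parts list, a sketch using the prices above and assuming the $1500 covers board plus CPU:)

```python
build = {
    "ALTRAD8UD-1L2T + Q32-17": 1500,    # assuming CPU is in the $1500
    "2x 4-in-1 5.25\" U.2 backplanes": 2 * 400,
    "8x 32GB RDIMMs": 300,
    "4x 15TB U.2 drives": 4 * 1200,
}
print(f"${sum(build.values()):,} before chassis and PSU")  # $7,400
```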

That setup could be very future-proof, with overall power utilization probably lower than the average gaming PC's.

A more realistic version would just be to get a Ryzen "GE" CPU. Ryzen is bad at dropping to low power states the way the N100s do, but a Ryzen 5700GE is a 35W part with 8c/16t that could live on an X470 or X570 board with 128GB RAM and enough I/O to take a fast NIC, an HBA, and one of my nifty little 4-drive U.2 adapter boards. Is 35W low enough? Probably, since it's unlikely that you'd hit any of the CPU cores hard enough to make it boost with a NAS OS. There aren't really any weird Ryzen desktop boards with a mobile CPU on them the way there are for Intel. I did check.

Intel has ULV CPUs, but those on desktop boards generally max out at 32GB RAM and usually only have one x16 or x1 slot, and they don't ever seem to get paired with larger-than-ITX boards. Some of them do use MiniSAS instead of SATA, but you'll never see one with 10GbE + x16 PCIe + a large enough number of supported drives to make an appealing NAS.

The Minisforum MS-01 can run off an i5-12600H with roughly the same power envelope as the 5700GE. It has 2x 10GbE, an x16 PCIe 4 slot, and three M.2 slots, with the option to install an extra back plate for four more M.2s. This is a little more interesting for a $600 PC, but it's still pretty light on drives IMO. I do suspect somebody has probably come up with a 3D-printed bottom for that thing that can turn one of those M.2 slots into 6 SATA bays, and that's the point where I'd take the time to investigate one.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
You are the Mad Scientist of NASes. :geek:
I think the raw transfer rates (like SFP28) are more important than RAM or power, but I've avoided anything rackmount due to the howling wind noise. Every time I do the math it's like $25K for what I would want. Maybe we should play the lottery in AZ (not allowed in NV). :LOL:
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,155
Location
I am omnipresent
TrueNAS loves having lots of RAM and will cache a lot more with more RAM available, hence the insistence on getting pretty large amounts of RAM in the first place.
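You can actually watch it do that on any Linux-based ZFS box; a minimal sketch reading the kernel module's ARC counters (arc_summary/arcstat are the nicer tools):

```python
# The ZFS kernel module exposes ARC counters here on Linux.
stats = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in f.readlines()[2:]:       # first two lines are kstat headers
        name, _kind, value = line.split()
        stats[name] = int(value)

hits, misses = stats["hits"], stats["misses"]
print(f"ARC: {stats['size'] / 2**30:.1f} GiB used, "
      f"{stats['c_max'] / 2**30:.1f} GiB max")
print(f"hit rate: {hits / (hits + misses):.1%}")
```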

Rack equipment does not have to be loud. Lenovo RD550s are extremely tolerable, like just above silent, even with the traditional row of 10K RPM fans across the front. I've also had good luck using generic rack cases (e.g., my Norco 24-bay chassis) with high-quality desktop parts.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,716
Location
Horsens, Denmark
Terramaster is usually considered a 2nd-tier Chinese NAS company compared to the Taiwanese QNAP and Synology. There are of course others like Netgear, Buffalo, Asustor, etc. Some of the companies are less stable and/or don't have the best long-term OS support. I can still update my >5 YO Synology, for example.
I honestly consider any appliance to be 2nd tier already. I'm overly sensitive to being trapped in an EoL product line or not being able to scale the solution as I like. The ideal for me is always industry-standard hardware running an actively supported open-source OS (Windows is also tolerated). This Terramaster beats Synology and QNAP for me because I don't plan to run their OS; TrueNAS would probably be my choice. That means I'm not actually stuck in their ecosystem; it is just a hardware package that would meet my needs.

But putting a whackton of fast SSDs in my workstation is sexier.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,371
Location
USA
The problem is that the consumer chipsets have limited PCIe lanes, so there is always a compromise between internal storage, external connectivity, and video cards. I may not live long enough to see all my data stored internally at a price a human can afford. Externally there are practically no limits, just slower speeds. I don't even want to work directly on an HDD-based NAS anymore. I regret not buying those TLC 30.72TB drives last year when they were much less expensive. :(
 