Home NAS

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,402
Location
USA
It's more likely supply chain issues due to pandemic-related stuff. We're seeing it at work when buying hardware; lead times are currently quoted at 10-12 weeks, mostly for SSDs but also for RAID controller chips and some network controller chips. :-/
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,402
Location
USA
I finally ordered a 10GbE switch to add to my home lab. I decided to check out a used Brocade ICX7250, which gets me 8 x 10GbE ports. I'll end up getting a couple of SFP+ to RJ45 adapters to wire in a couple of workstations.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,507
Location
I am omnipresent
Website
s-laker.org
'Tis the season to buy stupid crap.

I just ran across this NVMe to 5-port SATA adapter on Amazon. It features everyone's favorite JMicron bridge controller, but it's a potential solution for people trying to figure out how to cram some extra SATA ports onto a contemporary SFF build. Amazon reviews suggest the bridge IC needs active cooling. I don't need such a thing, but I'm also glad they're out there to be had.

This little $100 card supports 4x U.2 on an 8-lane PCIe card without needing bifurcation, which basically no Ryzen motherboard supports. If you're running that on desktop Ryzen, I guess you might be bottlenecking your RTX 3080 by using one of those, but I doubt we're going to see U.2 drive connectors on desktop boards anyway. There's also a slightly more expensive version for M.2. They both use the same PCIe bridge chip, and they're supposed to be driver- and configuration-free.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,507
Location
I am omnipresent
Website
s-laker.org
Seagate SMR drives seem to behave well in RAIDz1. I use a 4MB recordsize since all the data I'm copying is video, and relatively small 4-drive arrays (two 15TB and two 18TB). Everything is fine, if not particularly fast: writes are averaging about 75MB/sec per array. I've been feeding data into this system for about three weeks. I frankly expected it to fail by now and it hasn't, which makes me need to rethink how I want to allocate the disks I have available.

Everything is running on a basic Ryzen 2400G with 64GB RAM and an LSI SAS controller. It's stuck on 2.5GbE, but the network interface doesn't seem to bottleneck disk transfers as much as the SMR-ness of the drives does. I'm just using TrueNAS at the moment.
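For anyone wanting to replicate this, the setup above boils down to a couple of ZFS commands. This is just a sketch: the pool name, dataset name, and device paths are made up, and you'd substitute your own (check `lsblk` or your controller's device listing first). Note that a recordsize above 1M depends on the `large_blocks` pool feature, and on some platforms also on the `zfs_max_recordsize` module parameter being raised.

```shell
# Hypothetical device names -- replace with your actual SMR drives.
# Create a 4-drive RAIDz1 pool (one drive of parity):
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Create a dataset with a 4M recordsize for large sequential video files.
# Large records cut metadata overhead and suit SMR write patterns.
zfs create -o recordsize=4M tank/video

# Verify the setting took effect:
zfs get recordsize tank/video
```

On TrueNAS you'd normally do the equivalent through the web UI (pool creation wizard, then the dataset's "Record Size" option), but the underlying commands are the same.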

There *is* a working ZFS port for Windows. Performance numbers, while not as good, are in the right ballpark.

 