Oh right, I forgot about the onboard graphics, fair point.
Your point about ECC is valid, and it's why I lean towards motherboards from someone like Supermicro. Companies like Crucial tend to validate/certify their ECC RAM on SM boards, so the RAM I listed above may not be the cheapest, but it is certified to run on that SM board. In the realm of ZFS, or in your example of TrueNAS/FreeNAS, ECC has long been a topic of debate; I come down on the side of just using it, given ZFS's focus on data integrity and its use of checksums to protect against bitrot.
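As a concrete example of that integrity story: a scrub reads every block in the pool back and verifies it against its stored checksum, repairing from parity/mirror copies where it can ("tank" below is just a placeholder pool name):

```
# Walk the whole pool and verify every block's checksum
sudo zpool scrub tank

# Watch progress and see any checksum errors found/repaired
sudo zpool status tank
```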
The age-old rule of 1GB of RAM per 1TB of storage isn't really applicable anymore, and it's certainly not warranted in home use where performance isn't the top priority. No one sane actually uses ZFS's dedup feature, so don't do that if you're planning on it; that's the true RAM hog in ZFS. I've been managing two home NASes built on Linux + ZoL, and RAM size has never been a factor, even on my 120TB ZFS NAS. Unless you need all the bells and whistles of TrueNAS with its GUI and plugin system, you can get by very easily with an Ubuntu server (or CentOS Stream) + ZFS + Samba/CIFS for 99% of home use cases, especially since you're very technical. If you need the rarer iSCSI target feature, then maybe stick with TrueNAS, as rolling your own target via something like SCST/LIO/TGT is a pain in the ass.
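For reference, that stack really is just a couple of packages and a config stanza. A minimal sketch, assuming a stock Ubuntu server; the pool/dataset/share names are placeholders, and pool creation itself is sketched in the next example:

```
# The whole "NAS" software stack on Ubuntu
sudo apt install zfsutils-linux samba

# Dedup is off by default; confirm and leave it that way
sudo zfs get dedup tank

# Create a dataset on an existing pool ("tank" is a placeholder)
sudo zfs create tank/share

# Export it over SMB: add a stanza to /etc/samba/smb.conf...
#   [share]
#     path = /tank/share
#     read only = no
# ...then reload Samba
sudo systemctl restart smbd
```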
You can just make a single pool from all your drives with double or triple parity (raidz2/raidz3) and keep it simple. If you know you'll be doing a lot of synchronous writes to your pool or zvol, add a SLOG device on a decent SSD (or even a mirrored pair); most people don't need that, though. If you have a lot of frequent reads, add a single SSD as an L2ARC to your pool. These are all very simple to do via the ZFS command line and don't warrant the overhead and complexity of TrueNAS on FreeBSD, which limits your supported hardware even further. You also won't see a discernible performance benefit from FreeBSD versus ZoL, since ZoL is a kernel module, not FUSE.
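To make that concrete, here's roughly what it looks like from the command line. A sketch under assumed names: the pool name and disk paths are placeholders, and the SLOG/L2ARC steps are optional:

```
# Single double-parity (raidz2) pool across six drives; use
# /dev/disk/by-id paths so the pool survives device renumbering
sudo zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Optional: mirrored SLOG to absorb synchronous writes
sudo zpool add tank log mirror \
  /dev/disk/by-id/nvme-SSD1 /dev/disk/by-id/nvme-SSD2

# Optional: single SSD as L2ARC for read-heavy workloads
sudo zpool add tank cache /dev/disk/by-id/nvme-SSD3
```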
If you need RDMA performance with something like a RoCE card or InfiniBand, you might want to reconsider whether your architecture actually needs it versus a conventional 10Gb/25Gb Ethernet adapter anyway. I went down the RDMA road and it was a huge pain to get working right.
All that said, after a conversation with a friend, I'm considering building my next NAS around MinIO due to a bunch of interesting technical advantages: erasure coding and far more granular control of replication/parity at the individual-object level, versus applying the same amount of parity protection across the entire array. I'll likely build out a PoC in my home lab to work through how it behaves and see if it's a good fit. Might be worth looking into.
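The PoC is easy to stand up if you want to poke at it too. A hedged sketch of a single-node, multi-drive deployment; the drive paths and the EC:4 parity level are just example values:

```
# Parity is tunable per storage class; 4 parity shards for the
# STANDARD class here, and clients can pick a class per object
# via the x-amz-storage-class header
export MINIO_STORAGE_CLASS_STANDARD=EC:4

# Single node, 8 drives; MinIO's {1...8} ellipsis syntax expands
# the paths, and erasure coding kicks in automatically across them
minio server /mnt/disk{1...8}
```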