Home NAS

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Do you really need a 28-core Skylake for 12 magnetic drives? I suppose this will be in a datacenter, not in your home.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Not at all. But the platform supports astonishing amounts of RAM and I/O. I could even turn off hyperthreading entirely and still have an extremely nice VM host. That system might be a nice overall upgrade for me.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Did you find the performance of the system to be substantially better than a NAS?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
One idea is to use a basic external disk cabinet and connect it via the SFF-8087 port to that system.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,814
Location
Eglin AFB Area
Website
sedrosken.xyz
The SATA controller on the X470 board I had in my server has decided to bite it. I thought one of my drives was failing, but no -- the entire controller just disappears to the host, silently, after about 20 minutes. I have a four-port SATA card on the way so I can limp along until I can afford an entire new platform for my home server. In a way, this is kind of a good thing: moving to an external card will let me just pass the whole controller through to the NAS VM instead of passing the drives through by disk/volume ID. I have no idea what version of the PCI-E spec this card conforms to, however -- it's only x1. If it's 3.0, I shouldn't lose any throughput to my drives. If it's 2.0, I lose a little bit, but not enough to quibble over. If it's 1.0 I will simply have to cry about it and use the tears as fuel to justify making the bigger purchase earlier.
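For the curious, here's the napkin math behind that (just a sketch -- the per-drive figure is an assumption for typical 7200 RPM disks, and these are theoretical lane speeds):

[CODE]
# Approximate per-direction usable bandwidth of a single PCIe lane, in MB/s.
# Gen 1/2 use 8b/10b encoding; gen 3 uses 128b/130b, hence ~985 rather than 1000.
PCIE_X1_MBPS = {"1.0": 250, "2.0": 500, "3.0": 985}

DRIVES = 4
SEQ_MBPS_PER_DRIVE = 200  # assumption: typical 7200 RPM HDD sequential throughput

demand = DRIVES * SEQ_MBPS_PER_DRIVE  # worst case: all four drives streaming at once
for gen, bw in PCIE_X1_MBPS.items():
    note = "no bottleneck" if bw >= demand else f"caps you at ~{bw} of {demand} MB/s"
    print(f"PCIe {gen} x1: {bw} MB/s -> {note}")
[/CODE]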

As for replacing the platform, I was thinking something along the lines of bundling a 12700K and a B660 DDR4 board on Newegg for ~$350 so I could reuse the sizable investment I've already put into RAM, but I'd really rather have the budget to change over to a proper server platform. Moving to something with a modern Intel iGP would let me ditch the 1070 and really cut down on power consumption by using QSV for media transcoding in Jellyfin rather than relying on NVENC. It would also open up a slot for a proper HBA, like a retired Dell PERC controller or some such.
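Jellyfin just drives ffmpeg under the hood, so the switch basically amounts to trading the NVENC encoder for the QSV one. Roughly this is what I'd expect it to run (a sketch only -- filenames and bitrate are made up, and it assumes an ffmpeg build with QSV support and a render node at /dev/dri/renderD128):

[CODE]
import subprocess

# Hypothetical QSV transcode: decode and encode on the Intel iGPU instead of
# the 1070's NVENC. Input/output names and bitrate are placeholders.
cmd = [
    "ffmpeg",
    "-hwaccel", "qsv",                      # decode on the iGPU
    "-qsv_device", "/dev/dri/renderD128",   # assumed render node
    "-i", "input.mkv",
    "-c:v", "h264_qsv",                     # was h264_nvenc on the 1070
    "-b:v", "8M",
    "-c:a", "copy",
    "output.mp4",
]
subprocess.run(cmd, check=True)
[/CODE]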

What this whole boondoggle has taught me is that I am now at the point where I can scarcely function without my home server anymore.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,814
Location
Eglin AFB Area
Website
sedrosken.xyz
Another potential problem waiting for me if I go to the new platform -- E-cores. Apparently they kind of just screw everything up, scheduling-wise. I'm hoping there's a way I can tune the scheduler so the hypervisor uses the E-cores to stay responsive while the P-cores are getting hammered, and keep the P-cores open for the VMs I plan to run. I wish there were an 8- or 10-core Alder Lake SKU that didn't use E-cores at all, but alas. Typically, it's better to have them than not, in the end. What'd be really cool is an option to specify which cores a VM could use. I think as big.LITTLE gains popularity, we'll see more of that. I hope so, anyway. It'd be awesome if I could throw a couple E-cores at my network services VM that doesn't need a ton of CPU grunt, for example.
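The host-side half of that is really just CPU affinity, so something like this ought to keep a host service off the P-cores (a rough sketch; the CPU numbering is an assumption -- on a 12700K with HT the P-cores usually enumerate as logical CPUs 0-15 and the E-cores as 16-19, but check /sys/devices/cpu_atom/cpus on the actual box):

[CODE]
import os

# Pin the current (host-side) process to the E-cores so the P-cores stay
# free for VM vCPUs. Linux-only. The 16-19 range is an assumption for a
# 12700K with hyperthreading enabled -- verify against /sys/devices/cpu_atom/cpus.
E_CORES = set(range(16, 20))

os.sched_setaffinity(0, E_CORES)   # pid 0 == this process
print("restricted to CPUs:", sorted(os.sched_getaffinity(0)))
[/CODE]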

Thanks for the input, Lunar, and if it is 2.0, I'll take a hit to throughput for sure but not by a whole lot. Like I said, it won't be enough to quibble about.
 
Last edited:

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,814
Location
Eglin AFB Area
Website
sedrosken.xyz
This is the situation that simplifying has wrought -- believe it or not, with how many computers I use regularly, it absolutely is simpler for me to just store everything in a central location and fetch it as needed. No sneakernetting stuff on flash drives and external hard drives like I used to. My backups are good, and I have everything back up now anyway, but it's certainly annoying. I just don't have the budget I'd like for a proper solution -- I'd like a proper server platform, something with robust and ideally redundant disk controllers, ECC RAM, so on and so forth, but that costs, to borrow a phrase from a friend of mine, hella bread, and I have so many better things to be putting said bread toward instead.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
An RS550 might be just the thing for you, sed. They're old, but they're cheap. Everything that goes in them is cheap, and a lot of the ones being sold now come in fully loaded configurations with a couple of CPUs, redundant PSUs and a full set of filled DIMM slots. They're also very quiet, other than a few seconds at startup when the fans run at full speed.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
$400 got me an RS630v1. 2x Xeon 6136 for 24c/48t @ 3.5GHz, both 1100W PSUs, 4x 10GbE, 10 front 2.5" bays and three internal U.2 ports. I have to put RAM in it, but I'm just fine with that; $800 will get me 512GB (the system will take 1.5TB), which should cover my needs for years to come. There are also a pair of open x16 PCIe slots if I want to do something crazy with it. I can swap up to the Lenovo 940-16i and get my tri-mode disk controller. The backplane I have supposedly already supports EDSFF.

I'll just have to suffer along with measly little PCIe 3.0 U.2 and $400 8TB 3GB/sec U.2 drives. Shucks.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
That's a lot of compute and RAM for a NAS; is it doing other work besides storage? Otherwise it seems decent for the money, but a lot of power draw.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
He has 20A circuits all over the place. :LOL:
It's great in the wintertime.
Hopefully those nasty used SSDs can be sterilized.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
In this case I was specifically making the point that the option is available and affordable. I'm getting one set up so I can beat on it before I put it in my datacenter. If I can get more of those at that price, they'll be great replacements for the RS550s I have out there now.

This is the cheapest direct-attach U.2 cable I could find. I don't really want a 1m cable, but I think I can get away with putting a few drives in the back of the system if I want to stick with 2.5" drives. I can't think of anything I'd want in the PCIe slots since they already have 10GbE, and that position puts the drives between the physical CPUs, in the fan exhaust path.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Wow, SSDs are available cheaply now! Do you run RAID-Z2 or -Z1 with them in the NAS?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I'm really going to have to think about how I want to set up this system. I could just make it another VM host with a lot more capacity than the other ones I have, but what I need for my systems is a place to put things.
We're supposed to get a foot of snow this weekend, so I have some time to test. Unfortunately, I don't have any U.2s or enough large SATA/SAS SSDs to fully populate it yet. It'll be a while before I can figure it out.

I cannot believe how much hardware I got for that cheap. I know it's somebody else's off-lease hardware but four of these could cut my rack space in half and then I really would have room for whatever giant storage appliance I care to put in.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
The big problem I am seeing with the RS630 is what I'm going to do with these internal U.2 ports. The internal HBA that's connected to the backplane isn't the newer one that can run these, so these drives have to be direct-attached to their respective ports. U.2 drives get warm even though they're basically made of heat sink. I think my best option is to set them on top of the heat sinks at the back of the systems, which are cooling the 10GbE NIC and what I think is the motherboard chipset. If I do that, I won't have a place to stick a QDR Mellanox HBA for inter-server traffic.

If I set a couple U.2 drives in the back of the case, I'll probably 3D print some kind of retention bracket just to keep them from sliding around and to give some space for airflow around the heat sinks they'd otherwise be sitting on.

Lenovo offers an inexpensive accessory HBA, the 1610-4P (there are also 8-port versions), which allows mini-SAS connections to the backplane I *do* have, but according to Lenovo documentation, it's only for single-CPU servers and only for the four backplane ports on the right-hand side of the system.

Nonetheless, this thing has an astonishing amount of I/O and it's not even a particularly new system.

(Yes, I know this isn't exactly "home NAS" but for as little as I paid for this thing, it could be).

Edit: Apparently I *do* have the right backplane. I just don't have all the cabling I should have for it. The joys of secondhand servers.
 
Last edited:

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
From everything you've described, it does sound like a fantastic home server. I'd likely not use it as a NAS because it has too much compute for what I'd plan out, and I like keeping my storage/NAS role separate from compute.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
The most important thing for me right now is getting high-density U.2 support. As a someday feature, there's also a 10x EDSFF backplane that allows full NVMe on all the bays, and THAT is what I want to see in support of operations. 10x 8TB high-endurance drives make a perfect TrueNAS setup to direct backups to.
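If those went into RAID-Z2 like Lunar asked about, the usable space works out roughly like this (napkin math only -- it ignores ZFS metadata, padding and compression):

[CODE]
# Rough usable capacity of a 10-wide RAID-Z2 vdev of 8 TB drives.
DRIVES, PARITY, TB_PER_DRIVE = 10, 2, 8

raw_tb = DRIVES * TB_PER_DRIVE                 # 80 TB of raw disk
usable_tb = (DRIVES - PARITY) * TB_PER_DRIVE   # 64 TB before ZFS overhead
usable_tib = usable_tb * 1e12 / 2**40          # ~58 TiB as the OS reports it
print(f"{raw_tb} TB raw, ~{usable_tb} TB / {usable_tib:.0f} TiB usable")
[/CODE]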

It's definitely not ideal as a pure NAS; at my power rates, it costs about $29/month just in power, but it DOES have three x16 PCIe 4.0 slots. Plenty of room for an additional SAS HBA and a 40Gbps NIC (in addition to the 4x 10GBase-T ports) that can turn it into an I/O powerhouse. All in 1U.
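For anyone wondering how that $29 pencils out: call it roughly 200 W average at about $0.20/kWh (both are assumed round numbers, not measurements, but the math is the same either way):

[CODE]
# Back-of-the-envelope monthly power cost. 200 W and $0.20/kWh are assumed
# round numbers; plug in your own draw and rate.
WATTS = 200
RATE_PER_KWH = 0.20
HOURS = 24 * 30

kwh_per_month = WATTS * HOURS / 1000                  # ~144 kWh
print(f"~${kwh_per_month * RATE_PER_KWH:.0f}/month")  # ~$29
[/CODE]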
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
Excessive power usage has been the driving factor for my long-overdue homelab rebuild. Here the average rate is near $0.28/kWh, and my entire rack costs me near $1800/year in power. I do have solar panels on my house that cover about 75% of my annual usage, so I'd like to see if I can cut back my rack usage by 50% with newer and more efficient systems.
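Backing the numbers out, that bill implies the rack is averaging somewhere around 730 W around the clock, which is why a 50% cut is worth real money:

[CODE]
# Work backwards from the annual bill to an implied average draw.
ANNUAL_COST, RATE_PER_KWH = 1800, 0.28

kwh_per_year = ANNUAL_COST / RATE_PER_KWH      # ~6,430 kWh
avg_watts = kwh_per_year / (24 * 365) * 1000   # ~730 W continuous
print(f"~{avg_watts:.0f} W average; a 50% cut saves ~${ANNUAL_COST / 2:.0f}/yr")
[/CODE]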

That said, the server you're configuring is still very appealing even if impractical for my needs.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
I do leave a few systems on 24x7, but not everything. My backup NAS is only powered on every other week or so to receive a ZFS snapshot of my main NAS, and then it's powered off.
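The every-other-week job is nothing fancy; it boils down to an incremental zfs send piped into a receive on the backup box, roughly like this (a sketch with placeholder pool, snapshot and host names, not my actual script):

[CODE]
import subprocess

# Incremental ZFS replication: snapshot the main dataset, then send only the
# delta since the previous snapshot to the backup NAS over ssh.
# "tank/data", the snapshot dates and "backup-nas" are all placeholders.
DATASET = "tank/data"
PREV = f"{DATASET}@2024-01-01"
CURR = f"{DATASET}@2024-01-15"

subprocess.run(["zfs", "snapshot", CURR], check=True)

send = subprocess.Popen(["zfs", "send", "-i", PREV, CURR], stdout=subprocess.PIPE)
subprocess.run(["ssh", "backup-nas", "zfs", "receive", "-F", DATASET],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
[/CODE]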

Otherwise I'm running my large NAS, one ESXi server for my various compute services, a Brocade ICX 7250 10Gb switch, a Ubiquiti Enterprise 8 PoE switch (10Gb and 2.5Gb), and a small Beelink Mini S12 Pro running Proxmox for my Home Assistant VM for home automations. There are a few other small gateway devices for my Lutron, Philips, and SolarEdge communication, but those are low wattage.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I don't know how much power all my systems use. Is there a good continuous measuring product that connects to Windows machines (not on the internet)?
I tried that Kill A Watt thing some years ago, but it crapped out and was providing impossible data.
Merc probably gets cheap electricity somehow.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I suppose it makes sense if you WFH. My electricity is fairly expensive and I use it all the time, so I try to have the computer stuff off more often than not when it's not in active use. I set each NAS to spin down after one hour.
Whatever power is used needs to be removed by the AC, so energy/heat is more of a problem in the summertime.
How much power does that giant server use?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
It has two 1100W PSUs for a 1U PC. I only have one of them plugged in. My datacenter does not charge for power either. At least, they don't charge me. The RS630 is subjectively quite a bit louder than my RS550. It is NOT intolerably loud, but it is louder than, for example, a Supermicro rack system with SQ fans.

I'm not the only person who uses my PCs; my partner switches between her personal desktop, her Mac and the desktop her job bought throughout the day, albeit mainly by RDP, and I run a private storage cloud for a few dozen people as well.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Ampere (ARM64) isn't exactly cheap right now, but it will be in a couple of years. The motherboard feature set is both serious and pretty dreamy: 4x x16 PCIe 4.0 slots, 8 single-channel DDR4 DIMM slots, 4x SlimSAS, 2x OCuLink, 2x NVMe, 2x 10Gb. The whole thing can be passively cooled, too. I think this is something to keep an eye on for big-boy I/O on relatively modest power draw.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Is that more powerful than the Mac M-series silicon? I thought in IT contexts Ampere was a video card?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Apple Silicon is its own thing. Most of its architectural benefit comes from everything being on the SoC, with no expansion at all aside from Thunderbolt and USB. Ampere is an nVidia graphics architecture, yes, but there's also a company called Ampere Computing, which makes ARM products for the datacenter. Even more confusingly, Ampere's product line is called Altra, which sounds an awful lot like Alpha, the DEC CPU used for VMS and UNIX systems.
IIRC Ampere's philosophy is to have lots of modest cores for things like running web servers that don't need to be insanely powerful. File services probably also qualify for that.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,927
Location
USA
That is a hell of a capable platform for this use case. No clue if they'll ever be reasonably priced on the eBay market some day.

I also like the idea of these types of boards for NAS or other small compute purposes using the lower-TDP AMD mobile CPUs. If they worked with ECC, they'd be almost perfect for my needs.

 