Minisforum BD790i SE home server build

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
A few weeks ago I ordered the MINISFORUM BD790i SE with an AMD Ryzen 9 7940HX CPU when it went on sale for $329 + tax, and added a few components to round out the setup.
I haven't yet decided on a permanent case for it, which also means I haven't decided on a PSU. However, I have an older Fractal Design Define R6 case with a 750W PSU in it that I'm using to test this out.

The MB arrived today and I set it up as a Proxmox server to use for a variety of services. Mainly I was interested in seeing what this thing could do for hosting a few VMs and containers for various things I want to play around with. So far I've been very happy with just how easy it was to get everything set up and installed with almost no issues.

I did some load testing on it throughout the day and found that at idle, with Proxmox loaded and no VMs running, it pulls around 22W at the outlet. CPU temps hover around 34C during this time. When I loaded up a VM with 32 cores and 95GB of memory and ran Prime95 stress testing for about 90 minutes, power use peaked at around 150W and stayed there, and thermals topped out at 74C for the rest of the run.

Later I'll be running some tests on the 2.5Gb Ethernet to verify it can run at full speed, and then I'll start playing around with other performance testing and general VM management.
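Something along these lines would do for scripting that check (a rough sketch: it assumes iperf3 is installed on both machines, that the far end is already running `iperf3 -s`, and that the server address below is just a placeholder):

```python
import json
import subprocess

# Placeholder address of the machine running "iperf3 -s" on the other end.
SERVER = "192.168.1.50"

def measure_throughput(server: str, seconds: int = 10) -> float:
    """Run an iperf3 client test and return received throughput in Gbit/s."""
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],  # -J = JSON output
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    gbps = measure_throughput(SERVER)
    # A healthy 2.5GbE link should land somewhere around 2.3-2.4 Gbit/s.
    print(f"Measured {gbps:.2f} Gbit/s")
```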

Funny how tiny this little motherboard looks inside my Fractal Design case.

20241017_132344.jpg
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
This is the replacement for your Xeon system, right? Is this a PC you'll ever sit at?

My first thought would be something like a Silverstone SST-CS351. You could run one of those 1-to-6 m.2 to SATA adapters and give your VMs their own drives plus a couple spinny boys for bulk storage needs. That could be a very versatile machine that would look just fine sitting out on a desk.

My personal favorite ITX chassis is probably the Cooler Master NR200, but I think it's made for a large GPU build rather than an I/O heavy system.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I thought he was getting some low powered Celeron thing. 🤷‍♂️
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I think we brought that up as a low power NAS option but low-power Celerons don't do VM hosting so well.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
This is the replacement for your Xeon system, right? Is this a PC you'll ever sit at?

My first thought would be something like a Silverstone SST-CS351. You could run one of those 1-to-6 m.2 to SATA adapters and give your VMs their own drives plus a couple spinny boys for bulk storage needs. That could be a very versatile machine that would look just fine sitting out on a desk.

My personal favorite ITX chassis is probably the Cooler Master NR200, but I think it's made for a large GPU build rather than an I/O heavy system.

This is one of two replacements. This will be replacing my old VMware ESXi server which is a dual socket E5-2650 v2. That Silverstone is a neat case but I won't be sitting in front of this system. My plan is to put it in my rack with the other systems. That said, I wouldn't be opposed to finding a use for a neat case like that. I have one other tentative "compute" system planned to go with my upgrades but I don't know if I can swing it right now given the other upgrades.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
I thought he was getting some low powered Celeron thing. 🤷‍♂️
Nope, no Celerons in my new systems, unless you're calling my new NAS motherboard a Celeron.

I'm upgrading my home lab with:
  • New compute system using this Minisforum BD790i SE
  • New NAS using the Supermicro A2SDi-8C+-HLN4F (Intel Atom C3758), which is almost complete
    • This will only be used for data; no VMs, etc. will run on it
  • Possibly another new compute system based on the Supermicro H13SAE-MF with an AMD EPYC 4464P (not 100% decided yet)
My goal is to reduce overall energy consumption while also upgrading the performance of all the components. This Minisforum BD790i SE is a bit of an experiment to try as a Proxmox server. I wasn't originally planning on it, but when the deal popped up I decided to give it a try.

Each of the compute nodes will have a minimal amount of storage, just enough to host the VMs and/or containers. The bulk of the storage will come from the NAS over 10Gb using SMB/NFS/etc. The VMs will get backed up to the NAS through Proxmox, and any critical container data will live on a mount from the NAS.

My new NAS will back up to my other backup NAS periodically using incremental ZFS sends.
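For anyone curious what that looks like when scripted, here's a minimal sketch of an incremental send piped over SSH; the pool, dataset, snapshot, and host names are all placeholders, and it assumes key-based SSH to the backup NAS and that the older snapshot already exists on both sides.

```python
import subprocess

# All names below are placeholders for illustration.
DATASET = "tank/vmstore"
PREV_SNAP = f"{DATASET}@backup-2024-11-01"   # already present on both sides
NEW_SNAP = f"{DATASET}@backup-2024-11-08"    # newly created snapshot
BACKUP_HOST = "backup-nas"
DEST_DATASET = "backuppool/vmstore"

def incremental_send() -> None:
    """Pipe 'zfs send -i PREV NEW' into 'zfs receive' on the backup host."""
    send = subprocess.Popen(
        ["zfs", "send", "-i", PREV_SNAP, NEW_SNAP],
        stdout=subprocess.PIPE,
    )
    recv = subprocess.run(
        ["ssh", BACKUP_HOST, "zfs", "receive", "-F", DEST_DATASET],
        stdin=send.stdout,
    )
    send.stdout.close()
    if send.wait() != 0 or recv.returncode != 0:
        raise RuntimeError("incremental ZFS send/receive failed")

if __name__ == "__main__":
    incremental_send()
```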
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
This chassis is a pretty interesting option; it allows for two ITX systems in 2U of space. It looks like your chosen HSF needs the extra clearance, although it'd probably be fine in a rack since there's no other major source of heat in that thing. What you have now is pretty interesting in a relatively low-power config and I'm guessing a second one could also do good work, probably at a lower price than Epyc Jr.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
That's an interesting case and I was actually looking at something similar from a different manufacturer named PlinkUSA. These cases may all be made in the same Chinese factory anyway but the idea of this hybrid blade-like config with two in the same case is intriguing.

To your point, now that I see this small MB is reasonably capable for what I need, I probably don't need to spring for the Supermicro. I could likely buy two more complete setups of these for less money. I get fixated on wanting ECC capabilities for everything, but in reality I likely don't need it for these small nodes. Having a BMC has also been nice for making it easier to work on things, but it's hard to say those two things are worth the extra money.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I am clueless for most of what you guys are talking about, so feel free to ignore my comments. :)
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Building is not the issue, but I have no clue about the software and the how and why you do it.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Building is not the issue, but I have no clue about the software and the how and why you do it.

For the most part, "Server" means running either Windows Server or some Linux, *BSD or Solaris-derived operating system. I guess Hackintoshes as well, but most people don't think of MacOS that way.

Windows Server is technically expensive, but not really, because key resellers are a thing. The biggest differences between it and desktop Windows are that you lose access to the Microsoft Store and some Store-derived applications (Your Phone is the one I always mention because IMO it's the most useful) and desktop widgets, drivers sometimes aren't labeled for use with Server SKUs (e.g. Intel iGPUs), and things like sound are treated as optional. But the interface rarely changes and it IS Windows, if that's what you want to use.

Linux/BSD/Solaris are all Unix-derivatives. There isn't MUCH reason to talk about OpenIndiana except that it may be important if you use it or used it in your day job, but it's more or less the last vestige of name-brand Unix (AIX still technically exists, if you have some giant rack of PowerPC gear and you're too good for Linux). Anyway, Solaris/OpenIndiana was the birthplace of ZFS, and like the Greeks inventing Democracy, nothing important has happened there ever since.

BSD is the original free UNIX, and in a way it's more rigidly defined and traditional than Linux, but it also misses some crucial functionality and flexibility that Linux has. Open, Free and NetBSD all have their own focus, but these days the only one of those that gets much real attention is OpenBSD, the PITA security-focused version, although the FreeBSD derived TrueNAS is probably the real star of late. The biggest problem in BSD space is that some things just never get driver support, which really hampers the choices available to users on that platform. One of the biggest differences between BSD-style Unix and Linux is the software license. BSD allows for modified, proprietary commercial derivatives, while the GNU license structure present for Linux demands that works derived from open code also remain open. A lot of entities REALLY dislike this, and by a lot I mean Apple.

Linux is a place where the rules are made up and the points don't matter. If you don't like something, you can fork it and change it, and people have. This has led to things like massive controversies over how the OS starts and holy wars over the graphical subsystems. In a lot of cases, we just have to pick something and live with it, but we can't deny that every single thing can be made to work on Linux. The biggest players in Linux are RedHat (part of IBM now) and its current mainstream alternative Rocky, Debian and its derivatives, which include the incredibly popular Ubuntu and Mint, and some random weirdos like the very traditional Slackware; Arch (favorite of trans furry web developers everywhere and also somehow what SteamOS is based on. Somebody might want to check how many Valve employees are on estradiol) and Gentoo, the Linux for people who definitely get boners when a Cenobite shows up at the end of the first act in every Hellraiser movie.

All of these OSes are just a means to an end. Running a server is generally about having a highly available and reliable system where services are run. What service? Why? Depends. A pretty straightforward example is having remote file services. Some people use a NAS for this, but there can be a case for full-fat file servers as well, especially if your needs encompass more than just SMB and NFS off a 4-bay QNAP and you want to run file deduplication or netboot clients. Windows doesn't offer native drive availability software on its desktop SKUs, so your RAIDn and cacheX storage solutions have to run on Windows Server, for example, whereas there are Linux releases that'll run on single-board computers that can be made to speak SAS to a tape library if you really need that to happen for some misbegotten reason.

Most servers of any sort are really just a combination of a database and a particular sort of network traffic, but configuration is a lot easier when there's segregation between one service and another on an otherwise generic host system, so this is what we do most of the time. This can be handled through Guest instances that live in virtual machines or containerized (shared resource) systems in Docker or whatever. In any case, a lot of servers wind up hosting VMs or VM-like systems, so acting as a host for those smaller systems winds up being a big part of what physical server systems do. This is why we often talk about Hypervisor systems as a major part of our home environments.
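To make the container side of that concrete, here's a tiny sketch using the Docker SDK for Python; the image, name, and port mapping are just example choices, and it assumes a Docker daemon is already running on the host.

```python
import docker  # Docker SDK for Python ("pip install docker")

client = docker.from_env()

# Example only: run nginx as one isolated service on an otherwise generic host.
# It gets its own filesystem, process tree, and port mapping, but shares the kernel.
web = client.containers.run(
    "nginx:latest",
    name="example-web",       # arbitrary example name
    ports={"80/tcp": 8080},   # host port 8080 -> container port 80
    detach=True,
)

print(web.status)                 # container state as Docker sees it
print(client.containers.list())   # just another workload from the host's perspective

web.stop()
web.remove()
```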

Handruin seems to have his system set up so that he has one system for file services and another one or two for VMs or containers. My stuff more or less lives on giant do-everything systems instead, although to some degree that's because I already have that hardware on hand and don't want to buy something new to change how I handle things. I have a mix of Windows and Linux servers, so I have both Windows and Linux VM hosts as well.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
To add to all that great detail, my preference is to isolate systems by their larger responsibilities, a habit that comes from years of working on large enterprise systems and running into the challenges that come with them. That's not to say there is anything wrong with one large monolithic system that does everything; both approaches have pros and cons.

In my case, I find that when I define a dedicated storage system, I can cleanly and clearly draw a boundary that isolates it from the other systems, which I think of as compute. Those are the systems running the hypervisors, and they map the storage over the network. I've found through the years that the hypervisor OS (be it Proxmox, VMware, etc.) often needs upgrades which require reboots. Because I have more than one compute/hypervisor system sharing the same storage, it's relatively easy to migrate virtual environments over to the other node so that updates can be applied with minimal outages. If my storage were managed by the same system, those reboots could cause outages for the other servers.
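As a sketch of what that migration looks like when scripted (the same thing is available in the Proxmox web UI or with `qm migrate`), here's roughly how it goes through the API via the proxmoxer library; the node names, VM ID, and credentials below are placeholders.

```python
from proxmoxer import ProxmoxAPI  # third-party "proxmoxer" package

# Placeholder connection details for illustration only.
proxmox = ProxmoxAPI(
    "pve1.example.lan", user="root@pam", password="secret", verify_ssl=False
)

VMID = 101            # hypothetical VM
SOURCE_NODE = "pve1"
TARGET_NODE = "pve2"

# Ask the cluster to live-migrate the VM to the other node. With shared,
# NAS-backed storage, only the RAM and device state have to move.
task = proxmox.nodes(SOURCE_NODE).qemu(VMID).migrate.post(
    target=TARGET_NODE,
    online=1,  # keep the VM running during the move
)
print("migration task started:", task)
```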

Another benefit is being able to protect the data side of things a little differently. If one or more of my VMs has an issue or consumes too many resources, my storage won't be impacted because its CPU/RAM isn't shared. I can also present certain shares with media as read-only so that a VM is restricted from making any changes.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I'd definitely isolate storage from services, but my home systems tend to be the leftovers from past workstations like my Threadripper, and in that case I most often have computers that have both a lot of CPU cores AND tons of available IO, because that's what I prioritize in my workstation systems.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I'm not seeing how that VM stuff will help me, but I might look into it later.

I do keep storage separate. Each unit should be less than 30 lbs. for rapid movement if necessary. What do you do if relocation is requested quickly, like <48 hours? Whether it's in the back of a rental Jeep GC or shipped DHL Express, it has to go somewhere.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Immich is a self-hosted media catalog that includes optional components for recognizing faces and locations. It can run on a Synology NAS. Sounds right up your alley.

As far as removing equipment, almost all my data is on tapes that are kept both locally and at my parents' home. If I had to mass move my datacenter hardware, I'd get a U-haul.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Synology is just terrible now due to the costs of the special hard drives. :(
A U-Haul is expensive to move to most countries. Ask the DD.
Tape might be good if you had the LTO-8 or LTO-9.
I'm hoping to get 24+TB HDDs soon. Eight of those in a NAS would be good.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Funny thing about LTO is that older tapes are WAY cheaper. I'd love to have those 20+TB tapes, but I'm pretty happy paying $10/tape and just using more of them.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I suppose it depends on how often your data is changing. If you are archiving completed works incrementally and using a ground shipping method, it might work out. If you have many TB per week of new or changed data, then I'd want larger tapes. Do you travel with tapes by air, and if so, do you carry them on or check them in your luggage?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I make a new tape set approximately quarterly. Raw camera data stays on its card of origin, goes to Crashplan, and also lives on old SMR drives, since I have approximately infinity of them. Finished video projects go "where they need to", which usually means NextCloud or Google Photos and possibly Dropbox/iCloud as well (no one uses OneDrive. NO ONE), plus any commercial destination, in addition to living on the fast, permanently installed big-boy drives of my file server.

The other challenging thing to keep up to date is copies of working customer systems, but I have the live systems in production, backups of the live systems in my colo, and copies of the last two full backups plus whatever incrementals exist for those stored at home.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Took a little while to get my new case, but I finally got this new Minisforum board installed and I'm enjoying how quiet and small this Sliger CX3152i rack-mount case is. So far this little server is working great as a Proxmox setup with a handful of VMs.

20241118_174537.jpg 20241118_174545.jpg
20241118_174540.jpg 20241118_174554.jpg 20241118_174558.jpg
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
For now it is. It mainly hosts just enough storage to run the OS and boot the VMs.

Each VM is backed up nightly outside this system in case the M.2 fails. Each VM maps NAS storage over the network to serve as the volumes for its Docker containers, so I don't need a huge amount of storage inside this system for what I do.

It has a 1TB NVMe drive for Proxmox, ISOs, images, and tools, and a 4TB NVMe drive for the VMs.
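As an example of how a container ends up with its data on the NAS, here's a small sketch with the Docker SDK for Python; the NFS mount point, image, and credentials are placeholders, and it assumes the share is already mounted inside the VM.

```python
import docker  # Docker SDK for Python; assumes a Docker daemon inside the VM

client = docker.from_env()

# Placeholder path where the VM has the NAS share mounted (e.g. via NFS).
NAS_DATA = "/mnt/nas/appdata/postgres"

# The container's persistent data lives on the NAS mount, so losing the local
# NVMe only costs the VM itself, which can be restored from its nightly backup.
client.containers.run(
    "postgres:16",
    name="example-db",
    environment={"POSTGRES_PASSWORD": "changeme"},  # placeholder credential
    volumes={NAS_DATA: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    detach=True,
)
```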
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I like the chassis layout but I don't think I'd be able to resist sticking a 3D printed mount for at least a couple SATA drives in there.

I just made a little 1U board for my rack at home. It's literally just a piece of thick cardboard and a 3D-printed shell with a couple of screw holes that hold USB hubs to the front, giving me someplace less janky to attach them so each of my rack systems has an easy connection point. Two of the three rack PC chassis I have don't have any front USB ports at all, and the one that does doesn't have any SuperSpeed ports. Basically the dumbest thing in the world, except that it beats the hell out of having the USB hubs cable-tied to the front of my rack.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
It does have rack mount ears and options for rails if desired. I hadn't installed them yet in those pics.

Also, I believe there is a version of this case that comes with drive mounts. It also has a 5.25" bay at the front for other options.
 