Cheapo Server For Piss-Poor Company

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Well, hopefully, only a temporarily piss-poor company.

A person close to me asked me to walk on water and help his company build a decent virtualization server for under 5 grand. Even worse, I'm talking about 5000$CDN, not US$ (so it's more like ~4400US$). The server also has to be a tower, not a rackmount, because, as a young company, they unsurprisingly don't have a rack. It's going to be their main server for the next few years, or until they manage to generate some real cash, which probably won't happen for another two years.

It will be a development server on which they'll run their virtual machines. It will probably be a Hyper-V Server 2012 R2 host. Since the server will be used for several years and they'll be able to upgrade it later when they have additional income, I've tried to put most of the money into the components that would be the most expensive to replace or upgrade later on: the CPU and the server model itself. Things like drives and RAM are easy to add later, and since they shouldn't run that many virtual machines in the beginning, skimping on RAM and storage space/speed shouldn't limit them too much early on.

When you need enterprise-grade components but you don't have an enterprise-like budget, you can forget big brand names like HP, Cisco, Fujitsu and Dell. As I'll show later on, even something like a (still unavailable) Lenovo ThinkServer TD350 would bust the targeted budget unless you severely cripple the server. So that leaves Supermicro and the Taiwanese manufacturers like Asus and Gigabyte. There might also be something interesting from Tyan, but their stuff is hard to come by around here. Same goes for Gigabyte's and Asus' servers. Supermicro is therefore pretty much the only available option.

Here's what I can get for under 5K$, combined with making a sales manager bleed and beg for mercy:

And that's it. There's not enough RAM, the boot drive isn't in RAID 1 and the storage pool just plain sucks, but it's all I can fit inside 5 grand. At least they have an 8-bay SFF drive cage linked to a very potent RAID adapter, so they'll be able to add faster storage later on, should they need it. There's still space for 6 more nearline storage drives (albeit SATA ones) if they need additional low-I/O storage. BTW, I know the v4 of the Seagate Constellation is out, but the 2TB model is ~50$ more per drive and that would have blown the budget.

The CPU is too much for the beginning, but you have to go for at least the E5-2650 v3 if you want the RAM to operate at 2133MHz; otherwise you're stuck at 1866MHz. That's why I didn't go for 2x E5-2630 v3 instead.
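To put a rough number on the difference (assuming all four memory channels per socket are populated, at the usual 8 bytes per transfer):

2133 MT/s × 8 B × 4 channels ≈ 68.3 GB/s per socket
1866 MT/s × 8 B × 4 channels ≈ 59.7 GB/s per socket

That's roughly 14% more peak memory bandwidth for going with the faster speed grade.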

I could also have saved ~350$ by opting for the Supermicro 7048R-TR, but then they would have been stuck with just 8x LFF SATA drives for their storage, so the server would have permanently sucked regarding storage IOPS. By investing in the 7048R-C1R4+, the storage sluggishness is at least curable later on. They get 4 gigabit Ethernet ports as a bonus too, which might be useful eventually.

The closest big-brand model in terms of price and features is the Lenovo TD350. Similarly configured, it inflates to ~5700$CDN, mainly because the RAM and the storage are slightly more expensive. It looks like a very nice server, but it simply doesn't fit the budget.

Do you agree with my choices?
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,811
Location
Eglin AFB Area
Website
sedrosken.xyz
All things considered, I think you did as well as one could with the restrictions. Budgets just suck all around, but restrictive ones suck even more.

I can't say I'd do any better. In fact, I'd likely do a bit worse.
 

Chewy509

Wotty wot wot.
Joined
Nov 8, 2006
Messages
3,357
Location
Gold Coast Hinterland, Australia
Specs look good for the money, but I can't offer any reasonable suggestions without knowing the number of VMs, what the VMs will be running and how hard the server will be hit... Any possibility of outsourcing (aka using cloud-based services) via Google Apps/Office 365, Gmail, etc. to lessen the upfront costs? That way they'd only need a basic domain controller and a file server for stuff that can't be hosted offsite.

Re: Tyan, it's getting very hard to get in Oz as well... I wonder how much of that is due to Tyan being bought out a few years ago, and distributors/resellers/OEM builders having problems with shipping and quality?
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Specs look good for the money, but I can't offer any reasonable suggestions without knowing the number of VMs, what the VMs will be running and how hard the server will be hit... Any possibility of outsourcing (aka using cloud-based services) via Google Apps/Office 365, Gmail, etc. to lessen the upfront costs? That way they'd only need a basic domain controller and a file server for stuff that can't be hosted offsite.
I suggested the temporary cloud solution a few months back, but they still need the server, and since they will eventually host all their development internally, I'm planning for a system that can be made great, not the best system that can be assembled for 5000$ with no money put into it afterward. Since the thing will primarily be used for development to start with, I don't expect it to have to answer too many simultaneous requests from its storage pool (which the VMs will run from), or at least I hope it won't.

I know they mainly do SharePoint stuff, so that means quite a lot of RAM and some disk access from the SQL database. The first thing they'll probably need to do once they start to truly stress the system will be to bump up the RAM and buy a pair of 10K drives to put into the 2.5" drive cage. Or a few SSDs, but those cost quite a lot of money. I doubt they'll be CPU-limited for a while, but they'll probably have to get the second CPU eventually in order to use the additional memory controller.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
In that scenario, I'd be tempted to look at some kind of workstation board and use off-the-shelf parts rather than rolling with the SuperMicro. If you set your sights on an older chipset with DDR3, you could effectively buy twice as much RAM or make some needed changes to the storage configuration.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Can you find an off-the-shelf tower enclosure with a redundant power supply and hot-swap hard drive bays? I thought about it, but I've only found Supermicro chassis starting at 700$CDN with the needed features.

I originally suggested a system with the Supermicro 7047R-3RF4+ back in August. The server itself is much cheaper (~1400$CDN vs ~1775$CDN for the 7048R-C1R4+). If I lower the CPU to an E5-2650 v2, I can double the RAM to 128GB and put four SAS drives in RAID 10. It's a better overall server for 5000$, but it cannot be upgraded nearly as much as the 7048R-C1R4+. The storage is limited to slow LFF drives, for instance.
 

Howell

Storage? I am Storage!
Joined
Feb 24, 2003
Messages
4,740
Location
Chattanooga, TN
IMO, the upgradability of the server is less important than functionality and safety today for a company that may not last 2 years. Most small businesses fail in the first 3 years. It's not pessimistic, it's giving them the best chance at surviving.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Can you find an off-the-shelf tower enclosure with a redundant power supply and hot-swap hard drive bays? I thought about it, but I've only found Supermicro chassis starting at 700$CDN with the needed features.

I originally suggested a system with the Supermicro 7047R-3RF4+ back in August. The server itself is much cheaper (~1400$CDN vs ~1775$CDN for the 7048R-C1R4+). If I lower the CPU to an E5-2650 v2, I can double the RAM to 128GB and put four SAS drives in RAID 10. It's a better overall server for 5000$, but it cannot be upgraded nearly as much as the 7048R-C1R4+. The storage is limited to slow LFF drives, for instance.

For as little work as is involved in swapping a standard PSU, especially in an environment like a server room where clean power and a good UPS will be available, it's a risk I'd be willing to tolerate. Last time I had half a redundant PSU blow, it took three weeks to get a warranty replacement and the system went offline when the PSU died anyway (which, yes, was not supposed to happen. But it did.)
And yeah, you'll get that drool-worthy memory bandwidth out of DDR4, but if you have enough RAM that it's not oversubscribed for the necessary guest OSes, maybe that's not so critical. This machine isn't going to have a bunch of SSDs that could better feed that bandwidth anyway.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
For as little work as is involved in swapping a standard PSU, especially in an environment like a server room where clean power and a good UPS will be available, it's a risk I'd be willing to tolerate. Last time I had half a redundant PSU blow, it took three weeks to get a warranty replacement and the system went offline when the PSU died anyway (which, yes, was not supposed to happen. But it did.)
What manufacturer took three weeks to replace a hot-swappable power supply? I also want to know because of the system shutdown issue.

I met him this afternoon and he prefers future upgradability potential over the best value right now. Apparently, they might upgrade it as soon as January, but they need the server now to start developing stuff.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
What manufacturer took three weeks to replace a hot-swappable power supply? I also want to know because of the system shutdown issue.

It's ancient history now, but it was a ProLiant DL350, only about 10 months old at the time. HP kept sending me FlexATX power supplies instead of a rackmount redundant one.
I chalked it up to HP's normal level of competence and resolved to just stick with Intel.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
I cannot begin to imagine why a startup would need 12 cores in a development server. You need a server for version control (SharePoint in this case), a couple of database servers, some middleware servers and maybe some web servers.

The thing is, one SharePoint server is enough for the entire organization and one database server can typically support many databases. And frankly, these don't virtualize well.

Where is the standby server that will keep them working when something happens to the primary server?
 

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL
Why not use EC2 or Azure? You'll only pay for what you use, and it's essentially endlessly scalable.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Where is the standby server that will keep them working when something happens to the primary server?

I think I'd argue for making the more modest DDR3, slightly-fewer-cores dev machine the eventual standby server, so that some future machine could be the expandable production box. It's OK to have one big server if that's all the resources you have, as long as contingencies have been made and you're willing to accept some downtime. If you can't accept downtime, then it's not the right way to allocate resources.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
I cannot begin to imagine why a startup would need 12 cores in a development server. You need a server for version control (SharePoint in this case), a couple of database servers, some middleware servers and maybe some web servers.
The SharePoint setups are not for themselves but for several customers. Several customers = several different setups, so several different VMs. As it is development and not production, downtime has to be limited, but it's not the end of the world if the server goes down. They'll have a backup of their VMs on a NAS in case everything goes south. They plan on buying a second server and forming a failover cluster if/when they move stuff into production for customers.

BTW, I've worked for a company that used Amazon EC2 instances a lot for their development and the cost was very high, mainly because several people were just too lazy to shut down their instances when they no longer needed them. I ended up playing instance cop, monitoring the dormant instances and reminding those responsible to close them when finished.
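That kind of instance-cop routine is easy enough to script, by the way. Here's a rough Python/boto3 sketch (it assumes AWS credentials and a region are already configured, and the "Owner" tag is just a hypothetical convention) that lists running instances older than a cutoff so you can chase the people who forgot about them:

Code:
from datetime import datetime, timedelta, timezone

import boto3


def forgotten_instances(max_hours=12):
    """Return running EC2 instances that were launched more than max_hours ago."""
    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_hours)
    stale = []
    # describe_instances is paginated; walk every page of running instances
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                if inst["LaunchTime"] < cutoff:
                    # "Owner" is a hypothetical tagging convention; adjust to taste
                    tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                    stale.append((inst["InstanceId"],
                                  tags.get("Owner", "unknown"),
                                  inst["LaunchTime"]))
    return stale


if __name__ == "__main__":
    for instance_id, owner, launched in forgotten_instances():
        print("{0} (owner: {1}) running since {2:%Y-%m-%d %H:%M} UTC".format(
            instance_id, owner, launched))

From there it's trivial to mail the list to the owners instead of nagging them by hand.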
 

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL
There is a cost involved in maintaining the infrastructure. I'd argue that if they can't be profitable using EC2 or another "cloud" provider, they have bigger problems to worry about.
 

fb

Storage is cool
Joined
Jan 31, 2003
Messages
726
Location
Östersund, Sweden
Can you find an off-the-shelf tower enclosure with a redundant power supply and hot-swap hard drive bays? I thought about it, but I've only found Supermicro chassis starting at 700$CDN with the needed features.

I originally suggested a system with the Supermicro 7047R-3RF4+ back in August. The server itself is much cheaper (~1400$CDN vs ~1775$CDN for the 7048R-C1R4+). If I lower the CPU to an E5-2650 v2, I can double the RAM to 128GB and put four SAS drives in RAID 10. It's a better overall server for 5000$, but it cannot be upgraded nearly as much as the 7048R-C1R4+. The storage is limited to slow LFF drives, for instance.
Could this work http://www.amazon.com/Intel-P4208XXMHGC-Server-Case/dp/B007RKML0S ?
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
If you have SATA3 and SAS built into the motherboard, what's the purpose of the SuperDOM?
 