24GB+ of RAM?

Fushigi

Storage Is My Life
Joined
Jan 23, 2002
Messages
2,890
Location
Illinois, USA
What is the cheapest way to get a machine with 24GB+ of RAM?

Should I be looking for motherboards with more than 6 slots?
The answer to your second question is Yes.

As to your first question, IMO it's the wrong question. You're going to be running multiple server images; hardware reliability matters more than it does for a single server, since there's more impact to the business if the platform goes down. You're already saving a ton of capital by not buying multiple physical servers, and a lot of operating expense by not paying for electricity & cooling for those extra machines. Use some of that savings to invest in high-quality, robust components. Quality over cost. You probably saved $15K or more on the reduced box count; spend a few hundred of that on the best components you can get. The better question would be: what's the best memory to pair with the system board & CPU?

I know you're rolling your own for the build, but as a sanity check you might look at comparable servers from the big guys like IBM & Dell to validate that you can do better for less. FWIW, the IBM System x3650 M2 has 16 memory slots & 12 2.5" disk bays in a 2U chassis.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Don't forget that you can also buy a BrandX server with a boatload of DIMM slots and load it up with third-party RAM from someone like Crucial, where the memory is validated for use with the server.

I know many big-box manufacturers price-gouge on memory upgrades. I used to find it was cheaper, and I could get better RAM, from someone like Crucial.
 

Fushigi

Storage Is My Life
Joined
Jan 23, 2002
Messages
2,890
Location
Illinois, USA
Good point. My employer has had good luck buying Kingston RAM for Dell servers in the past. I'm not sure what we do nowadays, but we are buying servers with 16-32GB standard since they are almost all serving as VMware hosts.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
OK. Say it with me:
C-L-U-S-T-E-R

Dammit, I had a great response that got eaten, so you'll get the short version.

1. If you don't have the expensive licenses (vMotion), having one big pool is the most efficient use of your resources and makes everything much easier to manage.

2. If you do have the expensive licenses, you are paying per CPU socket, and should be spreading that cost across as many VMs as possible.
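Rough illustration of point 2, with made-up numbers (the license price, socket count, and VM counts below are all assumptions, not quotes):

```python
# Hypothetical numbers, just to show why per-socket licensing favors consolidation.
license_cost_per_socket = 3000.0   # assumed price of a vMotion-class license per CPU socket
sockets_per_host = 2

for vms_per_host in (5, 10, 20):
    per_vm = license_cost_per_socket * sockets_per_host / vms_per_host
    print(f"{vms_per_host:2d} VMs on the host -> ${per_vm:,.0f} of license cost per VM")
```

Same license bill either way; the more guests you stack on the licensed sockets, the less each VM effectively costs.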
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
I know you're rolling your own for the build, but as a sanity check you might look at comparable servers from the big guys like IBM & Dell to validate that you can do better for less. FWIW, the IBM System x3650 M2 has 16 memory slots & 12 2.5" disk bays in a 2U chassis.

That isn't bad, actually. Just spec up the CPUs a bit, add the second PSU and support for all the drive bays that are already there (can't believe this isn't standard), and it's still only about $3k. Not bad at all.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
Don't forget that you can also buy a BrandX server with a boatload of DIMM slots and load it up with third-party RAM from someone like Crucial, where the memory is validated for use with the server.

I know many big-box manufacturers price-gouge on memory upgrades. I used to find it was cheaper, and I could get better RAM, from someone like Crucial.

BTW, I would stick with Kingston or Crucial. I've had trouble with other brands (Corsair, Buffalo, etc.).

Good point. My employer has had good luck buying Kingston RAM for Dell servers in the past. I'm not sure what we do nowadays, but we are buying servers with 16-32GB standard since they are almost all serving as VMware hosts.

Kingston is the plan; they're cheap and seem to always work. Crucial is good as well, but tends to be more expensive. That premium made sense when they were clearly the best, but they share that reputation now.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
I think I'd rather split the workload onto a second system for redundancy's sake than build some obscenely beefy (and overpriced) single system, but this Tyan board looks like a pretty good deal.

I don't like having all my eggs in one basket, though.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
Make sure that RAM is on the board's HCL.

Anyway, I'm still thinking vMotion is a worthwhile investment for you. It adds a great deal to your ability to manage your guest OSes, and since you're not planning to buy any more Windows servers, you can put the money you're saving on licenses and CALs toward it.
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
If you go with the 3650, add a dual-port Intel gigabit controller. The onboard NICs are insufficient, IMO.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
If you have dual PSUs, then dual PDUs feeding from dual gens is the next step :)

Already have the dual PDUs, each drawing from its own online UPS, which in turn draws from the single beast UPS and the single massive generator. Not flawless, but pretty damn good. ;)
 

MaxBurn

Storage Is My Life
Joined
Jan 20, 2004
Messages
3,245
Location
SC
I just don't like seeing setups like that, because there is always something really important somewhere that someone forgot about: it's either single-corded, or the way it's loaded it isn't actually redundant across its two power supplies. Any time we do maintenance or a shutdown on a piece of one of those systems, someone ends up scrambling because something went down.

Those are the good scenarios. The bad ones are when everything switches to the "B" side, overloading that breaker and tripping it, because someone trusted an amp clamp to do their load calculation for them, not realizing that on some equipment there is no load on the B side at all until A goes away.

It takes an extraordinary amount of planning and cooperation to pull this off successfully, and that just doesn't hold up over the years, especially with personnel turnover. I know several places doing this that had to resort to lock and key to prevent unauthorized plug-ins; they just keep getting burnt. I feel much better with a multi-module UPS system with redundancy, or a true N+1 system with static switches.
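A quick sketch of that breaker-trip scenario with made-up numbers (server count, per-server draw, and breaker rating below are all assumptions):

```python
# Illustrative only: why an amp clamp on the "B" feed can lie to you.
# Assume these servers draw everything from the A cord while A is healthy,
# and only fail over to B when A is lost (no load sharing).
servers = 12
amps_per_server = 4.0        # assumed total draw per server
b_breaker_rating = 30.0      # assumed B-side branch breaker rating (amps)

b_reading_with_a_healthy = 0.0
b_load_after_a_fails = servers * amps_per_server

print(f"Amp clamp on B today:      {b_reading_with_a_healthy:.0f} A")
print(f"B load the moment A fails: {b_load_after_a_fails:.0f} A "
      f"(breaker is {b_breaker_rating:.0f} A -> trips)")
```

The clamp reading looks fine right up until the A feed drops and the whole load lands on B at once.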
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
Speaking of power failures... we're just having our first major rain of the year. The rain started at 4:15 AM, and the first gensets started coming on at 4:30 AM (after 5 minutes of being on battery). PG&E (our power company) is crap.

Now I get to drive to some of our more remote locations in the rain to find out what went wrong with their battery systems.
 

MaxBurn

Storage Is My Life
Joined
Jan 20, 2004
Messages
3,245
Location
SC
From my point of view PG&E is awesome; they really drove business for us around the 2001 time frame with those rolling blackouts. We could watch the news for the outage times and plan when we needed more techs on call. It's sort of odd when you can plan for failures like that; most of the time we're emergency-response driven when working on power equipment.
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Yeah, with redundant power you have to overprovision by 2x. Costly, but what is uptime worth to a critical process? Dual Cat 1.5 MW units were the biggest I got to play with. Keep a good eye on the batts, Dave.
 

MaxBurn

Storage Is My Life
Joined
Jan 20, 2004
Messages
3,245
Location
SC
Having a standby unit is horribly inefficient too; a unit sitting at 0% load and still pulling 30 amps per phase at 480 volts can't be cheap to run. It's a super-expensive battery charger until it's needed.
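Back-of-the-envelope math on that idle draw (the power factor and electricity rate below are assumptions, not measurements):

```python
import math

# Rough cost of a standby unit idling at 30 A per phase on a 480 V three-phase feed.
volts = 480.0
amps_per_phase = 30.0
power_factor = 0.9       # assumed
usd_per_kwh = 0.10       # assumed utility rate

kva = math.sqrt(3) * volts * amps_per_phase / 1000.0   # ~24.9 kVA
kw = kva * power_factor
annual_usd = kw * 24 * 365 * usd_per_kwh

print(f"Apparent power: {kva:.1f} kVA")
print(f"Real power (PF {power_factor}): {kw:.1f} kW")
print(f"Roughly ${annual_usd:,.0f} per year at ${usd_per_kwh}/kWh")
```

Call it somewhere north of 20 kW burned around the clock just to sit there.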

I've also seen it the complete other way around: a data center with no UPS or generator at all. They just had conditioned power via some datawave units (a special transformer/capacitive filter). It was a render farm, and they told me that worst case they lose whatever frame was being worked on and maybe 10 minutes of setup time. They wouldn't tell me what they were working on, but there were probably 40 standard 19" racks, all filled with 1U machines. Mind-boggling power in there.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
Can you really notice the difference using the SSDs with ESXi? It just seems like the money would be better spent on more spindles dedicated per VM rather than SSDs...
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
It just seems like the money would be better spent on more spindles dedicated per VM rather than SSDs...

That doesn't make sense to me. The way I see it, either disk access matters or it doesn't. I'm not that sure it does (other than for backup, which should rock). But if disk access matters, surely faster disks are better? It isn't a capacity thing, as most of my VMs are under 6GB.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
If you have the money to spare, sure, they're going to be faster. If disk access doesn't matter, why not save $1950 and get one 2TB drive for all your VMs, or put the money into more RAM?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
If you have the money to spare, sure, they're going to be faster. If disk access doesn't matter, why not save $1950 and get one 2TB drive for all your VMs, or put the money into more RAM?

We'll find out. ;) I'll leave one of the X25-Es out and replace it with an HTS541040G9SA00 (Hitachi 5400 RPM 2.5" 40GB). If I can't tell the difference, the SSDs go elsewhere. But I suspect it will make a difference.
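For what it's worth, a crude way to put a number on it would be a 4K random-read check run inside a guest on each datastore. The test file name below is hypothetical, and guest/hypervisor caching will skew the result, so treat it as indicative only:

```python
import os, random, time

# Crude 4K random-read test against a pre-created file on the datastore under test,
# e.g. created beforehand with: dd if=/dev/zero of=testfile bs=1M count=2048
path = "testfile"     # hypothetical test file on the disk being compared
reads = 2000
block = 4096

size = os.path.getsize(path)
fd = os.open(path, os.O_RDONLY)
start = time.time()
for _ in range(reads):
    os.lseek(fd, random.randrange(0, size - block), os.SEEK_SET)
    os.read(fd, block)
elapsed = time.time() - start
os.close(fd)

print(f"{reads} random 4K reads in {elapsed:.2f} s "
      f"({reads / elapsed:.0f} IOPS, {elapsed / reads * 1000:.2f} ms avg)")
```

Run it once with the VM on the X25-E datastore and once on the Hitachi and compare the averages; if they look the same, the SSDs really can go elsewhere.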
 