Ivy Bridge-EP (Xeon E5) specifications leaked

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
The E5-24xx v2 should be out any day now, since HP already advertises server models with v2 processors on its website, complete with part numbers and specifications.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Looks like our new UCS order is being quoted with Cisco UCS B200 M3 - 2x Ten-Core Ivy-Bridge E5-2690 v2 Processors. These appear to be the new ones listed on the sheet above.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
We'll soon be working with similarly performing systems, since our ProLiant DL380p servers also use the E5-2690 v2 Xeon. I hope you get better support from whatever SAN you use than we get from HP for our 3PAR, though. We've spent the entire day trying to configure the reporting system on a Linux VM. HP's documentation was lacking, and so was the preparedness of the technician they sent us. I ended up doing most of the work since I was (by far) the most knowledgeable in Linux (and I'm really not that much). Surprisingly, it didn't work. At least we don't need the reporting system to start installing production VMs on the virtual volumes.

The E5-2690 v2 is probably the best bang for the buck in the Xeon E5 lineup. Will you put 256GB of 1866MHz RAM into the blades, or will you opt for Load-Reduced DIMMs (they also consume twice as much current)?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
We typically use our own SAN arrays since that's fundamentally what built the company. I've grown partial to the VNX arrays, as I find them easy to manage and they perform well. We have an internal (non-customer-facing) global labs team that helps with anything outside of the typical norm that I can't deal with, so our support is pretty good, thankfully. Usually they rack and power the array, license the software, and hand it over for us to zone, mask, and provision as we see fit. I'll be planning and placing an internal order for a VNX 8000 SAN that will provide all the storage for this test environment. The blades will boot from SAN to let us make good use of the UCS profiling system. I don't know much about the HP 3PAR storage array, otherwise I'd offer to help if I could. I don't understand why they need a reporting server when creating LUNs for production work. Is that server used for provisioning? What are you using to manage your SAN/fabric switches? What SAN switches did you go with for your setup?

Our new UCS will have 16 blades configured with B200 M3s with 196GB of RAM and 2 x E5-2690v2 in each. I don't recall which speed or voltage RAM was spec'ed for our setup but I'll check in the morning.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
I don't understand why they need a reporting server when creating LUNs for production work. Is that server used for provisioning? What are you using to manage your SAN/fabric switches? What SAN switches did you go with for your setup?
We don't use SAN switches. We are directly attached to the SAN. Because of a misunderstanding, there won't be hardware replication between our main SAN (3PAR) and our SAN at our remote (backup) location. We'll use Veeam to do it instead.

While the reporting manager isn't required before we enter production, it would be quite handy to check what's going on with the SAN. There's a management interface, but it doesn't store historical data, just what's going on currently.

I thought the Cisco blade chassis (the 5108) only had space for 8 blades, not 16.
 
Last edited:

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
We don't use SAN switches. We are directly attached to the SAN. Because of a misunderstanding, there won't be hardware replication between our main SAN (3PAR) and our SAN at our remote (backup) location. We'll use Veeam to do it instead.

While the reporting manager isn't required before we enter production, it would be quite handy to check what's going on with the SAN. There's a management interface, but it doesn't store historical data, just what's going on currently.

I thought the Cisco blade chassis (the 5108) only had space for 8 blades, not 16.

You are correct that each chassis fits 8 blades. We are planning to buy two UCS 5108 chassis.

The RAM spec'ed for each blade is 12 x 16GB DDR3-1866MHz RDIMM/PC3-14900/dual-rank/x4/1.5V (192GB total, not 196GB like I incorrectly wrote earlier).

We are currently on borrowed hardware, so we will be giving all of that back, and this will be our new setup for the next several years:

16 x blades, each with:
2 x 10-core 3.00 GHz E5-2690 v2
192 GB RAM

Totals:
320 physical cores
3072 GB RAM
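
For anyone following along, here's a quick scratch calculation of those totals. Just a throwaway Python snippet, nothing official; the blade, socket, core, and RAM counts are the ones listed above:

# Scratch math for the new UCS environment described above.
blades = 16
sockets_per_blade = 2
cores_per_socket = 10          # E5-2690 v2
ram_per_blade_gb = 192         # 12 x 16GB RDIMMs

total_cores = blades * sockets_per_blade * cores_per_socket
total_ram_gb = blades * ram_per_blade_gb

print(f"{total_cores} physical cores")   # 320 physical cores
print(f"{total_ram_gb} GB RAM")          # 3072 GB RAM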
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
If you get permission to fold at the workplace, then you can often take that to whoever in IT controls the firewall and have ports 80/8080 (the only ones that really matter) unblocked for that specific application.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
To be fair, most of the tests I did were during the two-week holiday shutdown. I also ran the client during off hours in early December, but not for long.

BTW, are you aware that the E5-26xx v2 has quad-channel memory? To use your memory sticks most efficiently, you should use either 4 or 8 sticks per CPU. In the configuration you described above, you only use 6. That was optimal for the older triple-channel Xeon X56xx and X55xx, but not for the Xeon E5. The impact won't be huge, but it's a less elegant configuration.
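
If it helps to see the arithmetic, here's a quick illustrative Python check of how different DIMM counts land across the four memory channels. The channel count is the only hard number here; the rest is just bookkeeping:

# Illustrative only: balanced vs. unbalanced DIMM population per CPU.
MEMORY_CHANNELS = 4  # quad-channel on the Xeon E5-26xx v2

def channel_balance(dimms_per_cpu, channels=MEMORY_CHANNELS):
    # A population is balanced when every channel holds the same number of DIMMs.
    if dimms_per_cpu % channels == 0:
        return f"{dimms_per_cpu} DIMMs/CPU -> {dimms_per_cpu // channels} per channel (balanced)"
    return f"{dimms_per_cpu} DIMMs/CPU -> uneven across {channels} channels (unbalanced)"

for dimms in (4, 6, 8):
    print(channel_balance(dimms))
# 4 DIMMs/CPU -> 1 per channel (balanced)
# 6 DIMMs/CPU -> uneven across 4 channels (unbalanced)
# 8 DIMMs/CPU -> 2 per channel (balanced)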
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
BTW, are you aware that the E5-26xx v2 has quad-channel memory? To use your memory sticks most efficiently, you should use either 4 or 8 sticks per CPU. In the configuration you described above, you only use 6. That was optimal for the older triple-channel Xeon X56xx and X55xx, but not for the Xeon E5. The impact won't be huge, but it's a less elegant configuration.

Thanks, you're right that the memory config isn't optimal for quad-channel. That was my oversight, thanks again. It didn't affect our previous blades because they are E5-2690s (also quad-channel) with 128GB (16 x 8GB).
 
Last edited:

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
If you get permission to fold at the workplace, then you can often take that to whoever in IT controls the firewall and have ports 80/8080 (the only ones that really matter) unblocked for that specific application.

...or just set up a proxy...

Either of these is unlikely to happen. Our internal EMC IT is like a company in and of itself; most of the time it feels like we work for them rather than them working for us. Getting something like that changed for such a small request would take months and probably dozens of approvals. Setting up a proxy would be a quick ticket out of here, so I'm not even going to consider circumventing their setup. Whatever they use isn't as simple as blocking a port; they statefully inspect packets with some kind of Cisco IronPort device(s).
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
To be fair, most of the tests I did were during the two-week holiday shutdown. I also ran the client during off hours in early December, but not for long.

BTW, are you aware that the E5-26xx v2 has quad-channel memory? To use your memory sticks most efficiently, you should use either 4 or 8 sticks per CPU. In the configuration you described above, you only use 6. That was optimal for the older triple-channel Xeon X56xx and X55xx, but not for the Xeon E5. The impact won't be huge, but it's a less elegant configuration.

Thanks again for pointing this out to me. After numerous emails back and forth with various people, we decided to spring for 256GB in each blade (16 x 16GB DIMMs), making for a proper memory configuration. My fall-back plan was to pull RAM from some of the blades and move it into the others, re-balancing them into a 128GB/256GB split.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
It's my pleasure. I can seldom help you on high-level IT since you've been doing this much longer than I have, so I'm happy when I get the chance.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
It's my pleasure. I can seldom help you on high-level IT since you've been doing this much longer than I have, so I'm happy when I get the chance.

The end result was that the company we use to build the configs did relay to our lab team that this memory config was not optimized, but suggested the performance hit was negligible for what we're doing. I honestly don't know the size of the hit, so I can't argue for or against their claim. We have a really good relationship with them and they're typically very sharp with this stuff, but it slipped through my own cracks and caused a small stir when I began asking about it. We'll likely get the RAM later and install it after the initial deployment.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
You're just not very committed to the cause. Use a USB WiFi adapter and a wireless hotspot. :idea:

j/k don't lose your job over it.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
You're just not very committed to the cause. Use a USB WiFi adapter and a wireless hotspot. :idea:

j/k don't lose your job over it.

I know you're joking, but it had been discussed at one point with another coworker. He has a hotspot WiFi adapter that he used for his home internet, so we joked about how that could be connected to an ESX server.
 

snowhiker

Storage Freak Apprentice
Joined
Jul 5, 2007
Messages
1,668
It's my pleasure. I can seldom help you on high-level IT since you've been doing this much longer than I have, so I'm happy when I get the chance.

You should receive some type of kickback from Handruin's memory/hardware vendor for getting him to bump the RAM from 192 to 256 GB.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
I'm posting this here because it's most relevant to the Cisco and Intel Xeon chips we just ordered for our new environment.

Today should be an interesting day. I got in a little later than usual this morning, around 8:30AM, only to be greeted by an oddly familiar stench of plastic-like electrical burning. As I approached my cubicle, a couple of coworkers were oddly smiling and asking me if I knew what was going on; they were less than forthcoming in conveying the situation in our lab and building. The short of it is that the chiller unit feeding all the A/C units failed for the entire building. The lab temperature climbed to just about 150F (65.5C) as the lab teams worked to abruptly kill power to everything in the lab. They were interrupted when the fire alarms went off, because the heat sensors (or something) triggered the fire department to come out to the building. When I went in to assess what was going on, the side paneling on the array was so hot that it hurt to hold a hand on it for more than a few seconds. It was really that hot in there. All the lab doors were open with huge box fans blowing the hot air out of the lab. The entire building felt rather warm, but the lab was like a sauna.

I can't wait to see what happens when we attempt to turn everything back on for our UCS and array. We're still waiting for the chiller unit to be brought back online, and then we have to wait for the room to reach 80F or below before we can power equipment back on. I'll be amazed if nothing was damaged. At the very least there will probably be HDD failures, and some VMs are likely corrupted since the power was brought down hard for both compute and the disk array. This has certainly been a less productive day without any equipment to use.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
Sorry to hear about your day.

I've had the AC units fail in server rooms a couple times. This is why I spec them to be significantly larger than strictly necessary to hold the server racks, and I try to store additional equipment (workstations, servers, phones, whatever) in the same space as well. Additional thermal mass and a larger space to heat are helpful. One of my sites even has a "plan B" built in; filtered air ducts straight to the roof (supply and exhaust) with a blower on the exhaust.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Sorry to hear about your day.

I've had the AC units fail in server rooms a couple times. This is why I spec them to be significantly larger than strictly necessary to hold the server racks, and I try to store additional equipment (workstations, servers, phones, whatever) in the same space as well. Additional thermal mass and a larger space to heat are helpful. One of my sites even has a "plan B" built in; filtered air ducts straight to the roof (supply and exhaust) with a blower on the exhaust.

Meh, shit happens. I feel bad for our dedicated lab team that's been running around like crazy solving issues since very early this morning. Where I'll be able to help them is once they turn the power back on to our equipment; I can then assess and begin repairing the damage. Our lab equipment isn't customer-facing, so having downtime doesn't interrupt the immediate business. The server room where all these servers, arrays, switches, etc. are located is huge. It's also near capacity because we've been densifying by relocating equipment from other parts of the building, which makes the heat problem worse in situations like this. When cooling is functioning we are well within specification for temperature, but the cooling facility which houses the chiller had a big issue this morning.

We have no redundancy for cooling because this equipment isn't hosting or running customer environments. No customer data will be down or lost because of this event, which is good. Our lab is, to a certain degree, expendable, which is why there are likely no redundant chiller units for the AC. What you've done in your environment makes complete sense: you've mitigated your risk as much as you're able to given the resources they offer, and we try to do the same within the confines of our lab. Events like this are rare; this is the first of its kind here. The last one was when a major transformer/circuit breaker went down for the remote building where we have a lab. That brought the entire building off the grid for most of the day until a specialized circuit breaker could be found. That was less severe than this one because we were able to deal with the power outage. High levels of heat for extended periods cause so many other problems.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Since it is all technically expendable, fun times can happen during events like this. You get to learn what worked and what didn't.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Since it is all technically expendable, fun times can happen during events like this. You get to learn what worked and what didn't.

We still have goals and deliverables to work on. It's a rare chance to see what happens in an event like this, but none of it is in the realm of what we would test on our team.

So far it looks like the array survived without any disks failing. I'm thoroughly impressed; I was expecting a handful of failed spindles after the amount of heat they were subjected to. I'm booting up our blades now to see if they have any issues.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
Don't the motherboards that the CPUs are mounted on have the ability to shut down the system if the CPU overheats?
 

mubs

Storage? I am Storage!
Joined
Nov 22, 2002
Messages
4,908
Location
Somewhere in time.
Even though this equipment is not customer-facing and doesn't justify redundant cooling, I'm stupefied that there is no overheat alarm of some kind (pager, SMS, etc.) to lab management personnel so they can determine whether an orderly shutdown is warranted. To just let the equipment cook like this is astounding.
 