When would you use a blade configuration?

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
No really, what are the circumstances that would make you (or your organization) seriously consider using blade servers?

I have my own prejudices; I'm interested in what other people have to say.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
Our team was urged to go forward with a blade configuration because it was pushed hard that this solution would be "greener". I certainly pushed back against spending the company's money on the expensive setup because I couldn't see why it would be greener or easier to use, but there were other merits to the setup. My own prejudice was that individual rack-mount systems would cost less and be more powerful for the same money. We also found that with the HP c7000 chassis and blades, it didn't make sense to buy into this setup unless we bought at least 5 blades along with the expensive chassis and planned to populate the entire chassis over time. The expensive components were the Cisco network switches and Brocade SAN switches. My concerns were having all my eggs in one basket, the very expensive chassis, the longevity of the platform for future upgrades, and the management of the infrastructure.

Fast forward a year and a half and I'm sold on them. I really am. I'm sold because they are much easier to manage once you get past the learning curve of their environment. HP has built everything to be managed via the web, so I can connect to the KVM of all 15 of our blades without ever having to step into the lab. I know this concept isn't new, but HP has integrated the environments so seamlessly that they are very easy to manage. I've been able to install/configure them as ESX servers, Red Hat 5.x, and Windows 2003/2008 over the time we've owned them. The c7000 chassis has been around for a long time and continues to accept the newer-generation blades as HP releases them with newer chipsets, CPUs, NICs, HBAs, etc. This was one of my concerns when adopting the platform, because I didn't want to get stuck with a $50K chassis and be unable to get later CPUs/blades. We currently have two different generations of CPU and blade in one chassis: the HP BL460c and the HP BL460c G6 together. We boot ESX from the internal storage and then do our work via a SAN-connected EMC Clariion.
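For anyone curious, the management goes beyond the web GUI: the Onboard Administrator also has an SSH CLI, so routine checks can be scripted instead of clicked through. Here's a rough Python sketch of what I mean; the hostname is made up and the "SHOW SERVER STATUS ALL" command is from memory, so treat both as assumptions and check the CLI reference for your OA firmware version.

import subprocess

OA_HOST = "oa.example.lab"   # hypothetical Onboard Administrator hostname
OA_USER = "Administrator"    # assumes key-based SSH auth is already set up

def oa_command(cmd: str) -> str:
    """Run a single CLI command on the Onboard Administrator over SSH."""
    result = subprocess.run(
        ["ssh", f"{OA_USER}@{OA_HOST}", cmd],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # "SHOW SERVER STATUS ALL" is the bay-health command as I remember it;
    # verify it against your firmware's CLI guide before relying on it.
    print(oa_command("SHOW SERVER STATUS ALL"))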

The up-time and reliability have so far been superb. I've only had one blade complain about a faulty stick of RAM, which cost me a scheduled downtime. That can happen in any server, so the fault is not specific or unique to a blade infrastructure. The one major issue would be if the blade backplane were ever to fail; replacing it would take down 100% of our running blades.

Once we finish populating this chassis (which is 10U of space), we can buy into a second one and have both managed by the same Onboard Administrator, which makes management easier. It's actually nice to have 16 powerful computers living in 10U of rack space, and it will allow us to populate our lab more densely. Had we gone with a traditional rack-mount setup like the Dell R710, we would have consumed 16U of space, plus an additional 8-10U for the SAN and networking.
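To put rough numbers on the density argument (back-of-the-envelope only; it assumes 1U rack servers to match the 16U figure above, and an R710 is actually 2U, which would only widen the gap):

# Back-of-the-envelope rack-space comparison using the numbers above.
SERVERS = 16

blade_total_u = 10                    # c7000 chassis, switches included
rack_servers_u = SERVERS * 1          # assumes 1U per rack-mount server
rack_network_san_u = 9                # midpoint of the 8-10U estimate
rack_total_u = rack_servers_u + rack_network_san_u

print(f"Blade setup:      {blade_total_u}U")
print(f"Rack-mount setup: {rack_total_u}U")
print(f"Space saved:      {rack_total_u - blade_total_u}U")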

Now moving on to another product, the Cisco Unified Computing System (UCS): I've had the luxury of working with it a bit within our organization because we are working on software that is part of the VCE initiative (VMware, Cisco, EMC). Any comments going forward of course come with the bias of me working with the company.

The UCS is the next evolution of the blade architecture as a compute resource. Cisco has figured out that blades can be decoupled from a one-to-one configuration and used with profiles. They've virtualized the HBA and NIC identities (WWNs and MAC addresses) so that you can apply your network and SAN configuration to any blade by applying the profile. So in a case where a blade fails, you can easily apply the profile to another blade and be up and running in minutes. The next step that I am a part of is the Vblock configuration. We're selling a pre-defined UCS, MDS, and Clariion compute package that allows customers to easily roll out a fully working compute infrastructure. The software I've worked on a bit is called Unified Infrastructure Manager (UIM), which automates a lot of the work for the end user, including automatically installing the OS.
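If the profile idea sounds abstract, here's a toy Python sketch of the concept. It has nothing to do with the actual UCS Manager API; it just shows why decoupling the identity (MAC/WWN) from the physical blade makes recovery so quick. All the names and addresses are made up.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceProfile:
    """The identity that would normally be tied to a blade's hardware."""
    name: str
    mac: str          # virtualized NIC address
    wwn: str          # virtualized HBA address
    vlan: int
    boot_target: str  # SAN LUN the profile boots from

@dataclass
class Blade:
    slot: int
    profile: Optional[ServiceProfile] = None
    healthy: bool = True

def apply_profile(profile: ServiceProfile, blade: Blade) -> None:
    blade.profile = profile
    print(f"Slot {blade.slot} now presents MAC {profile.mac} / WWN {profile.wwn}")

def fail_over(profile: ServiceProfile, blades: List[Blade]) -> Blade:
    """Move the profile (and with it the LAN/SAN identity) to a healthy spare."""
    spare = next(b for b in blades if b.healthy and b.profile is None)
    apply_profile(profile, spare)
    return spare

# The identity follows the profile, not the sheet metal.
web01 = ServiceProfile("web01", "00:25:b5:00:00:01",
                       "20:00:00:25:b5:00:00:01", vlan=100, boot_target="lun-7")
blades = [Blade(slot=i) for i in range(1, 4)]
apply_profile(web01, blades[0])
blades[0].healthy = False   # blade in slot 1 fails
blades[0].profile = None
fail_over(web01, blades)    # same identity comes up on the spare in slot 2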

Summary: so when would I use a blade configuration?
Whenever you want to condense your compute infrastructure into limited space and you need easier management of numerous servers in a unified setup. You also need to be able to justify purchasing enough servers to make the investment in the chassis worthwhile. If you can forecast buying more blades over the next couple of years to populate the setup, the value only increases as you fill it more densely. Also, if you or your users need to upgrade often, blades let you plug in newer models very easily as they become available. With the chassis already wired into your lab, pulling an old blade and plugging in a new one doesn't require you to cable up all the usual NIC, HBA, and KVM connections, which saves you time.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
From what I've read, they're for high-CPU-density configurations where physical space is at a premium, so they make sense for easily parallelized applications like VMs and web servers in small data centers or at colocation facilities.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
Handruin: do you have any idea what the startup cost was at your location for the blades? (Hardware only)
 

Howell

Storage? I am Storage!
Joined
Feb 24, 2003
Messages
4,740
Location
Chattanooga, TN
The power profile is similar to or lower than that of non-blade units, and the same goes for the cooling profile. So what you really gain is compute density per square foot. However, particular implementations may also buy you management and integration, like Handy was talking about.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
Handruin: do you have any idea what the startup cost was at your location for the blades? (Hardware only)

I had to look up the emails and found the following:

Blade cost (HP BL460c G1): ~$5,500 each (x5)
2x quad-core Intel Xeon E5450 3.0 GHz
32 GB RAM
2x Emulex LPe1105 HBAs
4x GigE NICs
2x 146GB 15K SAS 2.5"

Chassis cost: ~$26,000
This included all six power supplies, four Cisco 3020 10/100/1000 24-port switches, and two Brocade 4/24 24-port 4Gb SAN switches.

Initial investment: ~$54,000 USD
Keep in mind that the switches and SAN components can be really expensive, which adds to the cost of the chassis. We've also grown from 5 blades up to 15 at the moment.
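To show why I said earlier that it doesn't make sense below a certain blade count, here is the chassis overhead spread across the blades at the counts we've been at (rough numbers from above; the later G6 blades were priced differently):

# Amortizing the ~$26K chassis (PSUs plus Cisco/Brocade switches) across blades.
CHASSIS_COST = 26_000
BLADE_COST = 5_500        # BL460c G1 figure from above

for blade_count in (5, 10, 15):
    effective = BLADE_COST + CHASSIS_COST / blade_count
    print(f"{blade_count:2d} blades: ~${effective:,.0f} effective cost per blade")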

I do part-time lab work (voluntary); I was able to physically install it, configure it, and then set up/install the OSes, etc. As an all-in-one, it continues to be easy to manage.
 