Our team was urged to go forward with a blade configuration because it was argued strongly that this solution would be "greener". While I certainly pushed back against spending the company's money on the expensive setup, since I couldn't see why it would be greener or easier to use, there were other merits to having it. My prejudice was that individual rack-mount systems would cost less and be more powerful for the same money. We also found that with the HP c7000 chassis and blades, it didn't make sense to buy into the platform unless we bought at least 5 blades along with the expensive chassis and planned to populate the entire chassis over time; the expensive components being the Cisco network switches and Brocade SAN switches. My concerns were having all my eggs in one basket, having a very expensive chassis, the longevity of the platform for future upgrades, and also management of the infrastructure.
Fast forward a year and a half and I'm sold on them. I really am. I'm sold because they are much easier to manage once you get past the learning curve of the environment. HP has built everything to be managed via the web, so I can connect to the KVM of all 15 of our blades without ever having to step into the lab. I know this concept isn't new, but they've been able to seamlessly integrate the environments, making them very easy to manage. I've installed and configured them as ESX servers, Red Hat 5.x, and Windows 2003/2008 over the time we've owned them. The c7000 chassis has been around for a long time and continues to accept the newer generation blades as HP releases them with newer chipsets, CPUs, NICs, HBAs, etc. This was one of my concerns when adopting the platform, because I didn't want to get stuck with a $50K chassis and be unable to use later CPUs/blades. We currently have two different generations of CPU and blade in one chassis: the HP BL460c and the HP BL460c G6 together. We boot ESX from the internal storage and then do our work via a SAN-connected EMC Clariion.
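To give a flavor of what "managed via the web" turns into day to day, here is a minimal sketch of scripting a status check against the chassis's Onboard Administrator over SSH. The hostname and credentials are hypothetical, and the exact OA CLI command syntax can vary between firmware revisions, so treat "SHOW SERVER LIST" as an assumption rather than gospel.

```python
# Minimal sketch: pulling blade status from an HP Onboard Administrator over SSH.
# Hypothetical hostname/credentials; the OA CLI command used here is an assumption
# and may differ across firmware revisions.
import paramiko

OA_HOST = "oa.lab.example.com"   # hypothetical Onboard Administrator address
OA_USER = "Administrator"
OA_PASS = "changeme"             # in practice, pull this from a vault/keyring

def show_server_list() -> str:
    """Run a status query against the OA CLI and return its raw output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(OA_HOST, username=OA_USER, password=OA_PASS)
    try:
        _stdin, stdout, _stderr = client.exec_command("SHOW SERVER LIST")
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    print(show_server_list())
```

The point isn't the specific command; it's that every bay, iLO, and interconnect is reachable from my desk instead of requiring a walk to the lab with a crash cart.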
The uptime and reliability have so far been superb. I've only had one blade complain about a faulty stick of RAM, which cost me a scheduled downtime. That can happen in any server, so the fault is not specific or unique to a blade infrastructure. The one major issue would be if the blade backplane were ever to fail; replacing it would take down 100% of our running blades.
Once we finish populating this chassis (which is 10U of space), we can buy into a second one and have them both managed by the same Onboard Administrator, which makes management easier. It's actually nice to have 16 powerful computers living in 10U of our rack, and it lets us populate our lab much more densely. Had we gone with a traditional rack-mount setup like the Dell R710, we would have consumed 16U of space, plus an additional 8-10U for the SAN and networking.
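For the space argument in concrete terms, here's a back-of-the-envelope comparison using the numbers above; the rack-mount figures (16U for the servers, roughly 9U for SAN and networking) are the post's own estimates, not measured values.

```python
# Rough rack-space comparison using the estimates from this post.
SERVERS = 16

blade_chassis_u = 10            # one c7000 enclosure, interconnects included
rack_mount_u = 16 + 9           # ~16U of rack servers + ~9U of SAN/network gear (estimate)

print(f"Blade chassis: {blade_chassis_u}U for {SERVERS} servers")
print(f"Rack-mount:    {rack_mount_u}U for {SERVERS} servers")
print(f"Space saved:   {rack_mount_u - blade_chassis_u}U")
```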
Now moving on to another product, the Cisco Unified Computing System (UCS). I've had the luxury of working with it a bit within our organization because we are working on software that is part of the VCE initiative (VMware, Cisco, EMC). Any comments going forward of course come with the bias of my working with the company.
The UCS is the next evolution of the blade architecture as a compute resource. Cisco has figured out that blades can be decoupled from a one-to-one configuration and used with profiles. They've gone forward with virtualizing the HBA and NIC (WWN and MAC addresses) so that you can apply your network and SAN configurations to any blade by applying the profile. So in a case where a blade fails, you can easily apply the profile to another blade and be up and running in minutes. The next step that I am a part of is the VBlock configuration: we're selling a pre-defined UCS, MDS, and Clariion package that allows customers to easily roll out a fully working compute infrastructure. The software I've worked on a bit is called Unified Infrastructure Manager (UIM), which automates a lot of the work for the end user, including automatically installing the OS.
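To illustrate the profile idea, here is a small conceptual sketch; this is not the UCS Manager API, just a toy model of why keeping the MAC/WWN identities in a profile instead of in the hardware makes blade replacement so quick. All names and addresses are made up.

```python
# Conceptual sketch (not the UCS Manager API): a service profile carries the
# blade's identity, so a failed blade is replaced by re-associating the profile.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceProfile:
    name: str
    macs: List[str]           # virtual NIC identities
    wwpns: List[str]          # virtual HBA identities
    boot_policy: str = "boot-from-SAN"

@dataclass
class Blade:
    chassis_slot: str
    profile: Optional[ServiceProfile] = None
    healthy: bool = True

def associate(profile: ServiceProfile, blade: Blade) -> None:
    """Bind the identities held by the profile to a physical blade."""
    blade.profile = profile
    print(f"{profile.name} now runs on blade {blade.chassis_slot} "
          f"with MACs {profile.macs} and WWPNs {profile.wwpns}")

# Normal operation: the profile lives on blade 1/1 (made-up identifiers).
esx01 = ServiceProfile("esx01",
                       macs=["00:25:b5:00:00:01"],
                       wwpns=["20:00:00:25:b5:00:00:01"])
blade_a, blade_b = Blade("1/1"), Blade("1/2")
associate(esx01, blade_a)

# Blade failure: move the same identity to a spare blade. Because the SAN zoning
# and network config key on those addresses, nothing upstream has to change.
blade_a.healthy = False
associate(esx01, blade_b)
```

The design point is that the network and storage fabrics only ever see the profile's addresses, so "which physical blade is running this workload" stops mattering.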
Summary: so when would I use a blade configuration?
Whenever you want to condense your compute infrastructure into limited space and you need easier management of numerous servers in a unified setup. You also need to be able to justify purchasing enough servers to make investing in the chassis worthwhile. If you can forecast buying more blades over the next couple of years to populate the setup, the value only increases as you populate it more densely. Also, if you or your users need to upgrade often, the chassis lets you plug in newer blades very easily as they become available. Because the chassis is already wired into your lab, pulling an old blade and plugging in a new one doesn't require you to wire up all the typical NIC, HBA, and KVM connections, which in turn saves you time.
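As a rough way to sanity-check that "right number of servers" judgment, here is a break-even sketch. The $50K chassis figure is the one mentioned earlier in the post; the per-blade and per-rack-server prices are pure assumptions you would replace with your own quotes.

```python
# Rough break-even sketch for "is the chassis worth it?". Only the $50K chassis
# figure comes from the post; the other prices are placeholder assumptions.
CHASSIS_COST = 50_000          # enclosure + Cisco/Brocade interconnects (per the post)
BLADE_COST = 6_000             # assumed price per blade
RACK_SERVER_COST = 10_000      # assumed price per equivalent rack-mount server,
                               # including its own NICs/HBAs and switch ports

def cost_per_server(n_blades: int) -> float:
    """Amortize the chassis over the blades actually purchased."""
    return (CHASSIS_COST + n_blades * BLADE_COST) / n_blades

for n in (4, 8, 12, 16):
    blade = cost_per_server(n)
    verdict = "blades cheaper" if blade < RACK_SERVER_COST else "rack-mount cheaper"
    print(f"{n:>2} servers: blade ${blade:,.0f}/server vs rack ${RACK_SERVER_COST:,.0f}/server -> {verdict}")
```

With these made-up numbers the crossover lands somewhere past a dozen blades, which matches the experience above: the platform only pays off if you genuinely intend to fill the chassis.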