SuperMicro Blade Servers

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
I doubt anyone here has any kind of experience with their blade servers, but I'm drooling over the things I could do with even a single SBE-720E 7U blade enclosure filled with their SBI-7227R-T2 blade servers. I'd need to add a 500$ SBM-CMM-003 module (for KVM access) and a 550$ SBM-GEP-T20 module to give the blades network access, and I think that's it. I don't know if a power supply comes with their 4500$ SBE-720E enclosure, but their 2500W PSUs are ~450-500$ each. Even with four of those, the cost is still very low compared to what major OEMs charge for similar products.

A 6-blade setup with their SBI-7227R-T2 (4 Xeon E5-2670 each) would have enough processing power to replace the entire 5x42U racks we currently have (with several servers being 3-5 years old). It would only cost ~60K$ and consume at most a quarter of the power of our current server farm. Even with spare parts, the overall cost would be very reasonable, and we could sell many of our current servers to offset the upgrade cost.

For each blade server:
  • 1x SuperMicro SBI-7227R-T2 blade server (~1100$)
  • 4x Intel Xeon E5-2670 2.6GHz 8c/16t LGA2011 (~1500$ each - 6000$ total)
  • 8x Kingston KVR1333D3LD4R9SL/8G (~175$ each - 1400$ total)
  • 4x Intel 180GB 520 Series SSD (~200$ each - 800$ total)

So that's ~9300$ for each blade server. For a 4-socket E5-2670 setup, that's ridiculously low.
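Quick back-of-the-envelope in Python to make sure the numbers add up (a rough sketch using the ballpark prices above; the four-PSU count and ~475$ per PSU are my guesses, not quoted prices):

# Ballpark cost sketch for the SuperMicro blade build
# (prices are the rough figures quoted above, not vendor quotes).

blade_parts = {
    "SBI-7227R-T2 blade":            (1, 1100),
    "Xeon E5-2670 2.6GHz 8c/16t":    (4, 1500),
    "Kingston 8GB DDR3L RDIMM":      (8, 175),
    "Intel 520 Series 180GB SSD":    (4, 200),
}

enclosure_parts = {
    "SBE-720E 7U blade enclosure":   (1, 4500),
    "SBM-CMM-003 management module": (1, 500),
    "SBM-GEP-T20 networking module": (1, 550),
    "2500W power supply":            (4, 475),   # assumed: 4 PSUs at ~475$ each
}

def total(parts):
    return sum(qty * unit for qty, unit in parts.values())

blade_cost = total(blade_parts)          # ~9,300$ per blade
enclosure_cost = total(enclosure_parts)  # ~7,450$ for the enclosure + modules + PSUs

for n_blades in (6, 10):
    print(f"{n_blades} blades: {n_blades * blade_cost + enclosure_cost:,}$")
# 6 blades:  63,250$  -> roughly the ~60K$ figure above
# 10 blades: 100,450$ -> a fully populated 7U enclosure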

But the main question is: is this stuff reliable? Because our current servers are. We only have a hiccup once in a blue moon and the owners like it that way. Our stuff is aging though, and I'm kind of in charge of looking for a replacement/transition/upgrade solution. Bang for the buck, I haven't found anything better...provided it's something we can rely on for long-term stability.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
I would be thinking exactly as you are. It's a great deal, so long as it's reliable enough. I would probably put at least one spare of everything in the budget, possibly an entire spare blade instead of just the parts. I would also make a couple of test calls to technical support; don't lie, just call the number and see who answers. Mentioning that you're considering their blade hardware and want to make sure their tech support is up to snuff will tell you quite a bit. Are they reading from a script? Are they comfortable talking about their experience with the hardware? Intelligent? What is the wait time like?

I would also make sure the boss makes the final call. Spec out an equivalent Dell PowerEdge blade system with a service plan sufficient to cover your needs, explain what a failure would look like in either case, then voice your recommendation (if you feel strongly either way at that point).
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
AFAIK, neither Dell nor HP offers anything similar. IBM might, but they sell it for at least three times as much. With Dell or HP, the same density isn't possible, even with the Xeon E5-46xx series, so I'd have to buy at least two blade enclosures. Their enclosures are also two to three times the price of the SuperMicro one. It becomes a case of a) we might be able to afford the SuperMicro solution, or b) we have to look at something other than blade servers because we simply don't have the money to buy a blade setup from Dell/HP/IBM.

I see that Cisco has started to offer servers. I don't know if they've sold servers for a while or if it's something new, but it's certainly news to me. I haven't checked whether they sell blade servers, but I doubt they're much more affordable than those from Dell or IBM.

Having spares of everything would add another 15K$ to the bill, and I was already considering that mandatory.
 

MaxBurn

Storage Is My Life
Joined
Jan 20, 2004
Messages
3,245
Location
SC
Walking around some tremendous datacenters, I rarely see the SuperMicro brand. Zero personal experience, but I'm thinking there are reasons why those shops go with what they do: they liked the management interface on this one, they have a standing supplier contract with that one, etc. The support angle is huge; while you're checking them out, I'd also check into lead times for components.

Does the size matter for some reason? Are you sticking these in a colo or something? What about outsourcing the hardware to something like Rackspace managed servers and letting them deal with the sourcing and support?
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
For now, we host our own equipment, but depending on the cost, we would consider sending the servers elsewhere. Of course, the less space we rent, the lower the cost. I'm also looking into lowering the electrical consumption and increasing the power efficiency. Why? Because if the ventilation fails, servers that dissipate less heat will take longer to overheat and fail, so we'll have more time to react.

With the SuperMicro blade solution, the cost is simply lower for similar performance, even compared to a bunch of 1U or 2U servers. It's easier to add servers, easier to maintain, easier to cool, it takes less space, and it's cheaper to buy and to ship to a remote hosting location. The only two grey areas I see are support and reliability.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
Why not an Intel-branded system? Bozo has certainly expressed reservations about SuperMicro parts recently, and with Intel OEM gear you know there's a serious support organization behind it. If you're going to roll your own systems, they're probably at least worth a look.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
True, I have not had the best of luck with SuperMicro motherboards. Maybe their high-end stuff is better.
But I agree with Merc: you can't go wrong with Intel.
What are you running now?
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
What are you running now?
We have 73 rackmount servers; where do you want to start?

I've looked into Intel's lineup and they have something similar to SuperMicro's SYS-2027TR-HTRF+ server: the H2216JFJR. The SuperMicro 2027TR-HTRF+ was the system I was eyeing before stumbling on SuperMicro's 7U 10-bay blade setup, and I haven't found anything like that blade setup from Intel. Still, Intel's H2216JFJR quad-node server isn't half bad. Thanks for bringing it up.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
AFAIK, neither Dell nor HP offers anything similar. IBM might, but they sell it for at least three times as much. With Dell or HP, the same density isn't possible, even with the Xeon E5-46xx series, so I'd have to buy at least two blade enclosures. Their enclosures are also two to three times the price of the SuperMicro one. It becomes a case of a) we might be able to afford the SuperMicro solution, or b) we have to look at something other than blade servers because we simply don't have the money to buy a blade setup from Dell/HP/IBM.

I see that Cisco has started to offer servers. I don't know if they've sold servers for a while or if it's something new, but it's certainly news to me. I haven't checked whether they sell blade servers, but I doubt they're much more affordable than those from Dell or IBM.

Having spares of everything would add another 15K$ to the bill, and I was already considering that mandatory.

For whatever it is worth, Cisco has not "started" selling blades; they've been doing this for some time now. We have moved to Cisco blade and rack servers almost exclusively. Their UCS is a step beyond some of the other blade setups I've dealt with in terms of management and expandability. Having virtual NICs & HBAs along with service profiles is amazingly wonderful for managing the environment, and since the NIC and HBA traffic runs over the same 10Gb Ethernet, cabling and configuration on the back side are also greatly reduced. I would guess that Cisco would cost a lot more than the SuperMicro solution you spec'ed out, so it may be cost-prohibitive for you to go Cisco, but in terms of ease of management and expandability, it's fantastic. We have had good results with the blade setups over the past several years.

I've also had good reliability from the single 10U HP c7000 blade setup that I manage. I understand why it wouldn't work for you because the density doesn't match, but it's been a good, solid setup over 3+ years of continuous use. Only a couple of months ago did one of the blades suffer a motherboard failure. I also had a single 2.5" drive failure a week ago, and a couple of years back a single DIMM was throwing ECC parity errors. Other than that, no major problems.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
I've looked into HP's blade solution. A 6U c3000 blade enclosure can house 8 half-height blade servers like the BL460c G8. The BL460c G8 takes up to two Xeon E5-2670 CPUs per server and costs ~8000$. The c3000 enclosure costs between 6000$ and 10K$ depending on the options. However, I only see 10Gb NICs on the BL460c G8 servers, and we are all wired with 1Gb Ethernet; we have nothing to connect 10Gb Ethernet to. Filling a c3000 with BL460c G8 blades would cost ~70000$, about 10000$ more than the SuperMicro blade solution. I'll grant you that the HP setup has almost certainly been more thoroughly tested and has better odds of proving reliable over the long term.

Regarding density, the HP setup fits up to 16 Xeon E5-2670s into 6U of rack space. The Intel 2U quad-node server achieves up to 24 in the same space, while the SuperMicro blade setup fits up to 40 into 7U. Frankly, I don't "need" that much density, but I would really like to fit our entire 5x42U server farm into a 14U mobile rack, or at least a 22U half-rack. Also, we don't have much empty space in our current 5 racks, so we need to fit the new platform somewhere while we do the transition. So space shouldn't be that high a priority, but in fact it is.
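Putting that density comparison in numbers (same kind of Python scratchpad as before, using the socket counts above):

# Socket density of the three candidates (figures from the posts above).
options = {
    # name: (E5-26xx sockets, rack units)
    "HP c3000 + 8x BL460c G8":           (16, 6),
    "Intel H2216JFJR (3x 2U quad-node)": (24, 6),
    "SuperMicro SBE-720E + 10 blades":   (40, 7),
}

for name, (sockets, ru) in options.items():
    print(f"{name}: {sockets} sockets in {ru}U -> {sockets / ru:.1f} sockets/U")
# HP blades:       ~2.7 sockets/U
# Intel quad-node:  4.0 sockets/U
# SuperMicro:      ~5.7 sockets/U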

Right now, the Intel quad-node seems to be the better fit for our needs, but I'm still undecided. I'll probably present all three solutions (HP blades, SuperMicro blades and Intel quad-node) to the administration and tell them what I think of each.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Except for the number of QPI links, what's the performance difference between the E5-24xx Xeons and the E5-26xx Xeons? I know the latter are available in higher-frequency models, but otherwise, I don't see the difference. The socket is different, but purely from a performance perspective, they seem comparable.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
For whatever it is worth, Cisco has not "started" selling blades; they've been doing this for some time now. We have moved to Cisco blade and rack servers almost exclusively. Their UCS is a step beyond some of the other blade setups I've dealt with in terms of management and expandability. Having virtual NICs & HBAs along with service profiles is amazingly wonderful for managing the environment, and since the NIC and HBA traffic runs over the same 10Gb Ethernet, cabling and configuration on the back side are also greatly reduced. I would guess that Cisco would cost a lot more than the SuperMicro solution you spec'ed out, so it may be cost-prohibitive for you to go Cisco, but in terms of ease of management and expandability, it's fantastic. We have had good results with the blade setups over the past several years.
Since you probably know the answer to this: I read a bit about the UCS 5108 chassis, and am I wrong in thinking that I can't simply buy the 5108 chassis and expect it to work on its own, without the UCS 2104XP fabric extenders? I mean, I'd need to add the price of the fabric extenders (2 modules if more than 4 blade servers are used) to the price of the chassis to get something usable; otherwise, I won't be able to connect the blade servers to the rest of the network.

Is that correct?
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
It's just that I've stumbled on this and it got me thinking. A fully filled 5108 chassis with two fabric extenders and eight of the servers linked above would cost ~60500$ from comsource.com. That's not bad. I can find cheaper elsewhere, but not by that much.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
Since you probably know the answer to this: I read a bit about the UCS 5108 chassis, and am I wrong in thinking that I can't simply buy the 5108 chassis and expect it to work on its own, without the UCS 2104XP fabric extenders? I mean, I'd need to add the price of the fabric extenders (2 modules if more than 4 blade servers are used) to the price of the chassis to get something usable; otherwise, I won't be able to connect the blade servers to the rest of the network.

Is that correct?

You are correct: you cannot use the 5108 without the fabric extenders; there would be no way for the blades to communicate. On top of that, you'll also need at least one fabric interconnect switch (preferably two, for redundancy and performance) to handle the traffic. I spoke to our lab guy, and also to one of the suppliers who were in the lab setting up the new UCS 5108 setup that just came in, and they confirmed the above.

This is basically what the connectivity will look like:
[attached diagram: ucs-qos-01.gif]
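If it helps to see that dependency spelled out, here's a rough sketch of the minimum shopping-list logic as described above (component counts only; this isn't a validated Cisco configuration, and your redundancy needs may differ):

# Minimal UCS connectivity dependency, per the explanation above:
# blades sit in a 5108 chassis, the chassis needs IOMs/fabric extenders,
# and the fabric extenders must uplink to fabric interconnect switches.

def ucs_min_bom(n_chassis: int, redundant: bool = True):
    """Return minimum component counts for a small UCS setup.

    Assumes one fabric extender per chassis per fabric and one fabric
    interconnect per fabric; 'redundant' adds the second (B-side) fabric.
    """
    fabrics = 2 if redundant else 1
    return {
        "UCS 5108 chassis": n_chassis,
        "Fabric extenders (IOMs)": n_chassis * fabrics,
        "Fabric interconnects": fabrics,
    }

print(ucs_min_bom(1))                   # single chassis with redundant A/B fabrics
print(ucs_min_bom(1, redundant=False))  # bare minimum: 1 FEX + 1 FI, no redundancy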
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
It's just that I've stumbled on this and it got me thinking. A fully filled 5108 chassis with two fabric extenders and eight of the servers linked above would cost ~60500$ from comsource.com. That's not bad. I can find cheaper elsewhere, but not by that much.

The Cisco B200 M3 is what we just got in for our team. The project we are working on requires a lot of compute and bandwidth for stress testing, so we went with the following (we didn't pay list price). We went with the larger fabric extenders to get 8x 10Gb per blade. Yes, it seems ridiculous, but we plan to make use of it.

We work with a great supplier. I don't know if they operate in Canada, but I can ask and pass along the info if that's something you're interested in.

Also, what is the primary OS you will be running on these blades? Is it OpenIndiana? If so, it wasn't listed in their support matrix when I checked the other day.

This is part of our ordered quote. You will need to make sure you get the proper power cabling if you decide to go this route. I would recommend having a vendor come in and discuss the solution with you; even if you don't order through them, you'll get to understand all the pieces and the basic pricing you'll need to consider for an environment like this.

2x Cisco UCS 5108 Chassis - Fully Loaded with Power Supplies and Fans - 2 x 2208XP Fabric Extenders | $29,743.00
Consists of the following:
  • 1x N20-C6508 | UCS 5108 Blade Svr AC Chassis/0 PSU/8 fans/0 fabric extender | $5,999.00
  • 4x UCSB-PSU-2500ACPL | 2500W Platinum AC Hot Plug Power Supply for UCS 5108 Chassis | $936.00
  • 4x CAB-C19-CBN | Cabinet Jumper Power Cord, 250 VAC 16A, C20-C19 Connectors | $0.00
  • 2x UCS-IOM-2208XP | UCS 2208XP I/O Module (8 External, 32 Internal 10Gb Ports) | $10,000.00
  • 1x N01-UAC1 | Single phase AC power module for UCS 5108 | $0.00
  • 1x N20-CAK | Accessory kit for 5108 Blade Chassis incl. rail kit, KVM dongle | $0.00
  • 8x N20-FAN5 | Fan module for UCS 5108 | $0.00
  • 1x N20-FW010 | UCS 5108 Blade Server Chassis FW package | $0.00

16x Cisco UCS B200 M3 - 2 x Eight-Core Sandy Bridge E5-2690 Processors - 128GB RAM - 0 x HDD Blade Server with VIC 1240 MLOM and Expander | $23,291.25
Consists of the following:
  • 1x UCSB-B200-M3 | UCS B200 M3 Blade Server w/o CPU, memory, HDD, mLOM/mezz | $3,154.00
  • 2x UCS-CPU-E5-2690 | 2.90 GHz E5-2690/135W 8C/20MB Cache/DDR3 1600MHz | $6,103.12
  • 8x UCS-MR-1X162RY-A | 16GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v | $729.00
  • 1x UCSB-MLOM-40G-01 | Cisco UCS VIC 1240 modular LOM for M3 blade servers | $1,499.00
  • 1x UCSB-MLOM-PT-01 | Cisco UCS Port Expander Card (mezz) for VIC 1240 modular LOM | $600.00
  • 2x N20-BBLKD | UCS 2.5 inch HDD blanking panel | $0.00
  • 2x UCSB-HS-01-EP | CPU Heat Sink for UCS B200 M3 and B420 M3 | $0.00

2x Cisco UCS 6248UP Fabric Interconnect - 48 x Available 10GbE Unified Ports - 34 Ports Licensed - 24 x 3M Twinax - 6 x 10G SR SFPs - 4 x 8GB FC SFPs | $104,686.00
Consists of the following:
  • 1x UCS-FI-6248UP | UCS 6248UP 1RU Fabric Int/No PSU/32 UP/12p LIC | $32,000.00
  • 4x DS-SFP-FC8G-SW | 8 Gbps Fibre Channel SW SFP+, LC | $260.00
  • 6x SFP-10G-SR | 10GBASE-SR SFP Module | $1,495.00
  • 24x SFP-H10GB-CU3M | 10GBASE-CU SFP+ Cable 3 Meter | $210.00
  • 1x UCS-ACC-6248UP | UCS 6248UP Chassis Accessory Kit | $0.00
  • 2x UCS-PSU-6248UP-AC | UCS 6248UP Power Supply/100-240VAC | $1,400.00
  • 14x UCS-LIC-10GE | UCS 6200 Series ONLY Fabric Int 1PORT 1/10GE/FC-port license | $2,774.00
  • 1x N10-MGT010 | UCS Manager v2.0 | $0.00
  • 2x CAB-C13-C14-2M | Power Cord Jumper, C13-C14 Connectors, 2 Meter Length | $0.00
  • 2x UCS-FAN-6248UP | UCS 6248UP Fan Module | $0.00
  • 1x UCS-FI-DL2 | UCS 6248 Layer 2 Daughter Card | $0.00
  • 1x UCS-FI-E16UP | UCS 6200 16-port Expansion module/16 UP/8p LIC | $16,000.00
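To help read the quote: the dollar figure on each bundle header appears to be the configured per-unit price, i.e. the sum of quantity x unit price of its sub-items. A quick check in Python (my reading of the quote format, assuming the sub-item prices are unit prices):

# Recompute the configured per-unit bundle prices from the quote's priced sub-items
# (assumes each sub-item price is a unit price and the sub-quantity is per bundle).

bundles = {
    "UCS 5108 chassis bundle": [
        (1, 5999.00),    # N20-C6508 chassis
        (4, 936.00),     # UCSB-PSU-2500ACPL power supplies
        (2, 10000.00),   # UCS-IOM-2208XP fabric extenders
    ],
    "B200 M3 blade": [
        (1, 3154.00),    # UCSB-B200-M3 base blade
        (2, 6103.12),    # E5-2690 CPUs
        (8, 729.00),     # 16GB RDIMMs
        (1, 1499.00),    # VIC 1240 mLOM
        (1, 600.00),     # port expander
    ],
    "6248UP fabric interconnect bundle": [
        (1, 32000.00),   # UCS-FI-6248UP base unit
        (4, 260.00),     # 8Gb FC SFPs
        (6, 1495.00),    # 10G SR SFPs
        (24, 210.00),    # 3m twinax cables
        (2, 1400.00),    # power supplies
        (14, 2774.00),   # extra port licenses
        (1, 16000.00),   # 16-port expansion module
    ],
}

for name, items in bundles.items():
    print(f"{name}: ${sum(q * p for q, p in items):,.2f}")
# UCS 5108 chassis bundle:           $29,743.00
# B200 M3 blade:                     $23,291.24  (quote rounds to $23,291.25)
# 6248UP fabric interconnect bundle: $104,686.00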
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Ok...the need for the 100K$ UCS 6248UP fabric interconnect kind of breaks the deal for us. Too rich for our blood. It seems to be great technology, but we don't need that much and our wallet won't allow it. Nice, but we can do without. In fact, since we can't afford it, we have to do without. Too bad.

Thanks a lot for the info.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
Ok...the need for the 100K$ UCS 6248UP fabric interconnect kind of breaks the deal for us. Too rich for our blood. It seems to be great technology, but we don't need that much and our wallet won't allow it. Nice, but we can do without. In fact, since we can't afford it, we have to do without. Too bad.

Thanks a lot for the info.

Keep in mind that the list cost for us is what it is because we have three 5108 chassis and 22 blades to connect to two fabric interconnects totaling 48 ports (36 licensed). Depending on what you need for redundancy, you may not need the 6248UP. I'm not saying it'll be cheap, but it may come down a fair bit from $100K. These fabric interconnects are a pain point for us as well.
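As a rough illustration of how the ports get eaten up (assuming every 2208XP IOM uses all 8 of its external uplinks; you can cable fewer and accept more oversubscription):

# Rough port math (illustrative only; assumes all 8 uplinks per 2208XP are cabled).
chassis = 3
uplinks_per_iom = 8                       # the 2208XP has 8 external 10Gb ports
ports_per_fi = chassis * uplinks_per_iom  # one IOM per chassis lands on each fabric interconnect
print(ports_per_fi)                       # 24 -> lines up with the 24x 3m twinax per FI in the quote
# Remaining licensed ports go to upstream Ethernet and FC uplinks,
# which is why the ~2,774$ per-port licenses pile up quickly.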
 