Infiniband and 10Gb networking

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
eBay seems to have a lot of inexpensive Infiniband controllers. Looks like there are quite a few PCIe 10GbE adapters for under $40, but I can't tell what transceiver, if any, is being sold with them. We'd probably be talking about $30 per meter of cable, but that's still substantially cheaper than 10Gb over twisted pair.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Would this be for a point-to-point config, or would you invest in some kind of Infiniband network?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
In my case I only need point-to-point. If I were willing to cheat, I think I could get away with a single 10m cable.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Instead of messing with Infiniband, what about going with a pair of these Brocade 1020 Dual-Port 10Gbps PCIe CNA cards if you're only doing point-to-point? Reportedly people have been getting them for $33 via best offer. Add in a twinax cable for about the same $30/meter and you might be able to avoid Infiniband altogether. If you need more bandwidth, team the two ports. :)

Supported cables:
1M: 58-1000026-01, $23 apiece on eBay from Chinese seller cjsmd; full search results: 58-1000026-01 | eBay
3M: 58-1000027-01, $27 apiece on eBay from Chinese seller cjsmd; full search results: 58-1000027-01 | eBay
5M: 58-1000023-01, $33 apiece on eBay from Chinese seller cjsmd; full search results: 58-1000023-01 | eBay

Transceiver support matrix.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
I like the idea of this Brocade MP8000B 24 port 10Gb switch (with 8 additional ports for FC/FCoE) for around $500, but I suspect the SFP+ adapters would be hella-expensive to populate it. I looked briefly but couldn't find where to buy them or a price.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Instead of messing with Infiniband, what about going with a pair of these Brocade 1020 Dual-Port 10Gbps PCIe CNA cards if you're only doing point-to-point? Reportedly people have been getting them for $33 via best offer. Add in a twinax cable for about the same $30/meter and you might be able to avoid Infiniband altogether. If you need more bandwidth, team the two ports. :)

Supported cables:
1M: 58-1000026-01, $23 apiece on eBay from Chinese seller cjsmd; full search results: 58-1000026-01 | eBay
3M: 58-1000027-01, $27 apiece on eBay from Chinese seller cjsmd; full search results: 58-1000027-01 | eBay
5M: 58-1000023-01, $33 apiece on eBay from Chinese seller cjsmd; full search results: 58-1000023-01 | eBay

Transceiver support matrix.

My research shows that people have had success with these Fiberstore Brocade 10G-SFPP-SR Compatible 10GBASE-SR SFP+ 850nm 300m Transceiver Modules in those Brocade 1020 adapters, and they're $18 each!

I wonder if these $18 transceivers would work in that Brocade 10Gb switch I linked to. That would be a crazy setup for relatively cheap given the bandwidth potential.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Instead of messing with Infiniband, what about going with a pair of these Brocade 1020 Dual-Port 10Gbps PCIe CNA cards if you're only doing point-to-point?
Infiniband has a lower latency. I have no idea if it's more complicated to configure though, as I've never had to play with Infiniband devices.

Both 10Gb networking gear and Infiniband are a significant upgrade over common gigabit network connections.

Since I've already configured quite a few 10Gb Ethernet devices, I would be more tempted to try Infiniband, just to increase my knowledge.

The Brocade switch and adapter deals are great though.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
Just bought my first 10Gb NIC (for a high-end Synology unit). Of course, it won't do much until I get at least one more. The plan is to put the other in an ESXi machine and use it to push backups.
 

Howell

Storage? I am Storage!
Joined
Feb 24, 2003
Messages
4,740
Location
Chattanooga, TN
We use 4-port cards in our hosts, with one 10G fiber pair going to storage and another 10G pair going to the core routers. All fiber runs to the edge are also 10G. I'll have to think about whether we could benefit from Infiniband. Thanks for the info on the latency, Coug.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
I like the idea of this Brocade MP8000B 24 port 10Gb switch (with 8 additional ports for FC/FCoE) for around $500, but I suspect the SFP+ adapters would be hella-expensive to populate it. I looked briefly but couldn't find where to buy them or a price.
If you want a cheap but tremendously fast switch, look at this one: 36x 40Gb ports, manageable, and dual PSUs...for $325 with free shipping.

At this point, we should either stop or move the last part of the thread to the For Sale & Deal section of the site, IMO.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Infiniband has a lower latency. I have no idea if it's more complicated to configure though, as I've never had to play with Infiniband devices. Both 10Gb networking gear and Infiniband are a significant upgrade over common gigabit network connections.

It's just layer 2. Whatever it is, you put SMB, NFS, or iSCSI on top of IP, and I don't think it matters beyond the data transfer rates. I had a tab open about 2000s-era supercomputing that mentioned Infiniband repeatedly, which is what inspired me to go look at parts pricing (14-year-old NICs can't be that expensive, etc.). Handruin's Brocade stuff sounds like it would win for my simple point-to-point application, though. Coug, that IB switch probably sounds like a jet engine, but at least the 3m twinax cables with SFP connectors to go with it are under $30 if you look around. It's definitely a laugh to think that less than $500 could get someone pretty badass datacenter-type local connectivity.
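Just to illustrate the "it's just layer 2" point, here's a toy sketch (not anything official, and the address and file name are made up): the application code only ever sees an IP address and a port, so the same script runs unchanged whether the route underneath is GbE, 10Gb twinax, or IP over Infiniband.

# Toy file transfer over plain TCP/IP. Run "python xfer.py serve" on one box
# and "python xfer.py send <ip>" on the other; the link type underneath is
# invisible to this code. File name below is a hypothetical test file.
import socket
import sys

PORT = 5001
CHUNK = 1 << 20  # 1 MiB per read/recv

if sys.argv[1] == "serve":
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    total = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    print(f"received {total / 1e6:.1f} MB from {addr[0]}")
else:  # send <ip>
    cli = socket.create_connection((sys.argv[2], PORT))
    with open("bigfile.bin", "rb") as f:  # hypothetical test file
        while chunk := f.read(CHUNK):
            cli.sendall(chunk)
    cli.close()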
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
I had a tab open about 2000s-era supercomputing that mentioned Infiniband repeatedly, which is what inspired me to go look at parts pricing (14-year-old NICs can't be that expensive, etc.).
That switch can't be more than 6 years old. The ports are not SDR or DDR Infiniband, they are QDR, so it's a lot more recent, and significantly faster and more modern.

Coug, that IB switch probably sounds like a jet engine, but at least the 3m twinax cables with SFP connectors to go with it are under $30 if you look around.
Of course it sounds like a jet engine. The rating I've seen is close to 60dB and the power consumption is around 300W. But it's insanely fast and affordable. You can't have everything. The Brocade, while perhaps not as noisy, is surely not anywhere near quiet either.

I've seen 2-port 40Gb QSFP controllers for ~$100. Not as cheap as those with single 10Gb SFP+ ports, but still very affordable. The cheapest cables I've seen were less than $20 for 3m QSFP copper cables. Again, not bad at all.

It might not be the best solution for you inside your apartment, but for anyone with dedicated racking space (like Howell seems to have), it's quite a deal.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Infiniband has a lower latency. I have no idea if it's more complicated to configure though, as I've never had to play with Infiniband devices.

Both 10Gb networking gear and Infiniband are a significant upgrade over common gigabit network connections.

Since I've already configured quite a few 10Gb Ethernet devices, I would be more tempted to try Infiniband, just to increase my knowledge.

The Brocade switch and adapter deals are great though.

I wasn't up to date on Infiniband's low latency benefit. I've only done a small amount of searching on the matter to see what kinds of latency differences there are and it appears to depend on the manufacturer of the components.

For home use I can't imagine the reduced latency would make such a huge difference. My NAS and desktop systems would soon become the bottleneck in almost all situations even if I jumped to 10Gb Ethernet. I wasn't looking to implement iSCSI or SAN-based connectivity, which likely benefits more from the lower latency.

I agree with you that it would be a nice technology to add to the resume. So would increasing my understanding of 10Gb hardware and implementation.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Just bought my first 10Gb NIC (for a high-end Synology unit). Of course, it won't do much until I get at least one more. The plan is to put the other in an ESXi machine and use it to push backups.

Which NIC did you go with? Are you planning to run a point-to-point connection and are you using twinax or fiber?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
Am I the only one here who sees value in keeping RJ-45 and GbE backwards-compatibility? Or are you guys anticipating a one-time massive expenditure/migration?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
No, I would be interested in a 10Gb twisted-pair switch, but very few exist, and the ones that do are usually very expensive per port and/or offer few ports. I don't have high hopes the industry will grow the 10Gb TP implementation in the foreseeable future. I would certainly like to integrate a switch with TP into my home environment rather than run multimode fiber to each room. 10Gb offers so much more bandwidth that anyone short of serious enthusiasts like us would rarely consider this kind of infrastructure upgrade. That puts us in a non-existent market, or a market of used enterprise equipment from previous generations. I think the best shot we have is getting MikroTik/Ubiquiti or an equivalent company to make something for this market.

I see it as only partially backwards compatible because the end user will need CAT 6 at a minimum, and even then they're limited in distance unless they upgrade their wiring to CAT 6A. If I had to overhaul my wiring for a 10Gb upgrade, I'd seriously consider OM2/OM3 fiber for its superior noise immunity compared to CAT 6A.
 

Howell

Storage? I am Storage!
Joined
Feb 24, 2003
Messages
4,740
Location
Chattanooga, TN
Fiber is also a little more future-proof, being limited more by the optics than by the fiber itself. It's much more difficult to retrofit a house with, though.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Actually thinking this switch might be perfect for getting the ESXi and NAS on at 10Gb while feeding all the other switches.


I might have missed something, but this switch you linked to only has two 10Gb SFP+ uplinks, not RJ-45. Is that what you intended? I think this might be why CougTek recommended the Cisco SG500X-24MPP over this one: it gives you 4 x 10Gb SFP+.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
Unless I'm missing something, the switch I linked to supports both SFP+ and RJ-45 for each of the 10Gb links.

A decade ago a wealthy friend of mine took a general conversation we'd had and ran with it. In their new $10M+ home they'd put in 3 runs of fiber to each room. No Coax, no TP, just fiber. Sure, they could have baluns that converted to whatever they want, but at thousands of dollars each.

I suspect that regular CAT6 will support 10GbE in most installations, even at larger cable lengths. Several of my friends have followed my advice in the last couple years and installed 6/6a during remodels. Once I have a switch and a machine with a card I hope to take them out and see whether a 10GbE connection can be negotiated.
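Something like this quick check ought to tell me whether it negotiated (just a sketch, assuming the machine runs Linux; the interface name below is a placeholder):

# Quick negotiated-link-speed check on Linux. /sys/class/net/<iface>/speed
# reports the current rate in Mb/s once the link is up; the interface name
# here is just a placeholder for whatever the 10GbE card shows up as.
IFACE = "eth0"

with open(f"/sys/class/net/{IFACE}/speed") as f:
    mbps = int(f.read().strip())

print(f"{IFACE}: {mbps} Mb/s ({'10GbE' if mbps >= 10000 else 'not 10GbE'})")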
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
The Cisco Catalyst 3560-CX only shows me 12 x 1Gb RJ-45, 2 x 10GE SFP+, and 2 x 1Gb RJ-45. I don't see any 10Gb RJ-45, only SFP+. Am I looking at the right switch?

I feel pretty confident I could get a single LC/LC OM2 fiber run from each of my rooms (5) to a home-run location in the basement for less than a couple hundred dollars if I had to. I would rather just use the existing CAT6 at 10Gb if the switch prices were viable/reasonable. When I look at something like a Netgear XS708E going for $830 for 8 RJ-45 ports, why wouldn't I consider a used 24-port Brocade SFP+ switch for $500 that I could populate with 8 SFP+ transceivers ($144) and cables ($100) and end up at roughly the same price point, with far more functionality and more room to grow down the road? Then I can buy the less expensive Mellanox dual-port SFP+ adapters rather than the more expensive Intel RJ-45 10Gb adapters.

Yes, CAT6 will support 10Gb up to 55 meters (180 ft), or 33 meters (108 ft) in noisy environments. I've done iperf benchmark tests on systems at work using a short CAT6 cable (2 meters) and was able to achieve 9.4Gb/s through it consistently in a point-to-point configuration with no switch. I haven't tried longer cables, but in this case the throughput was good over a short run. I would expect you to get similar results unless you have crazy amounts of electrical noise to deal with.
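If you want to repeat the test once your runs are in, a small wrapper like this is all it takes (a sketch; it assumes iperf3 is installed on both machines, the other end is already running iperf3 -s, and the server address below is made up):

# Sketch of an iperf3 client run. Assumes iperf3 is installed and the far end
# is already running "iperf3 -s". The server address below is hypothetical.
import json
import subprocess

SERVER = "192.168.10.2"  # hypothetical far-end address

# -c = client mode, -t 10 = 10-second test, -J = JSON report.
out = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
).stdout
report = json.loads(out)

# Receiver-side summary; the exact JSON layout can vary between iperf3 versions.
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"throughput: {bps / 1e9:.2f} Gbit/s")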
 

Howell

Storage? I am Storage!
Joined
Feb 24, 2003
Messages
4,740
Location
Chattanooga, TN
With the square-footage premium typical in residences, it would be nice to find an in-wall converter from fiber to multiple copper ports.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Am I the only one here who sees value in keeping RJ-45 and GbE backwards-compatibility? Or are you guys anticipating a one-time massive expenditure/migration?

Having some fiber in place means being able to switch your layer 2 technology as soon as it's economically feasible instead of waiting for the day that someone solves the issue of implementing that speed over copper wire. Your backbone stuff can be different from your client hardware. That's normal.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
And in a moment of weakness, I am now the proud owner of a pair of MHQH29B-XTR 40Gbps (yes, 40) Infiniband adapters and a pair of 10m QSFP cables. All for under $200.
A little more reading regarding Windows and IB says that Server 2012+ fixes the performance issues that were present on Server 2008 machines, and a firmware flash should give me all of the ridiculous bandwidth I could ever want between my two big-boy computers.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
And in a moment of weakness, I am now the proud owner of a pair of MHQH29B-XTR 40Gbps (yes, 40) Infiniband adapters and a pair of 10m QSFP cables. All for under $200.
A little more reading regarding Windows and IB says that Server 2012+ fixes the performance issues that were present on Server 2008 machines, and a firmware flash should give me all of the ridiculous bandwidth I could ever want between my two big-boy computers.

So you are going to run 2x40Gbps between the two? That'll be epic. I was planning 2x10Gbps and thinking that would be cool.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
The second cable was purchased as insurance in case the first one is bad or gets damaged or something. I don't think channel bonding will help enough to bother setting it up. Also, I'm pretty much positive that nothing I have will move 5GB/s (which is roughly what 40Gbps works out to) unless I start building disk arrays out of M.2 drives.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,926
Location
USA
Good luck with the setup; it sounds like it'll offer some fantastic bandwidth and low latency.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Here are the NICs and cables.

And just for laughs, best-case sequential reads off the fastest disk array I have top out at 3100MB/s. Writes are a different and much more depressing story, but that's RAID6 for you.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Storage Spaces can't implement RAID10 but I don't think I'd want it. Slow writes are really only a problem once in a very great while.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
I moved the 10GbE links into production a few weeks ago. They're likely storage-subsystem limited to 300-400MB/s in the real world. I'll take it.
 