Mellanox Adapters

Mercutio

I suspect that many people do not know the power of Mellanox ConnectX adapters. Here's the deal:
Mellanox adapters are readily available on eBay and other marketplaces. They come in lots of different models, but in most cases, third-generation (ConnectX-3) cards can be bought in pairs for around $60. These cards typically ship with one or two SFP+ or QSFP+ ports on the back and use a PCIe 3.0 x8 interface. There are software tools for Windows and Linux that can switch each port between Ethernet and InfiniBand operation and set the link speed anywhere from 10Gbps all the way up to 56Gbps in some cases. More expensive cards use PCIe 4.0 and operate at 100Gbps.
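
If you're curious what the port-mode switch actually looks like, here's a minimal sketch for the Linux side using Mellanox's mlxconfig tool from the MFT package, driven from Python. The device path is a made-up example; check `mst status` for whatever your card actually shows up as.

[code]
#!/usr/bin/env python3
"""Minimal sketch: flip both ports on a ConnectX card to Ethernet via mlxconfig.

Assumes the Mellanox Firmware Tools (MFT) package is installed. The device
path below is a placeholder; `mst status` lists the real one on your box.
LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet.
"""
import subprocess

DEVICE = "/dev/mst/mt4099_pciconf0"  # placeholder; yours will differ

# Show the current settings, including LINK_TYPE_P1 / LINK_TYPE_P2
subprocess.run(["mlxconfig", "-d", DEVICE, "query"], check=True)

# Set both ports to Ethernet; the change takes effect after a reboot
# (or a driver reload). "-y" skips the interactive confirmation.
subprocess.run(
    ["mlxconfig", "-y", "-d", DEVICE, "set", "LINK_TYPE_P1=2", "LINK_TYPE_P2=2"],
    check=True,
)
[/code]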

Are there downsides to this? Not many. One of the biggest is that InfiniBand isn't especially well supported on some common *nix OSes (e.g. most things based on BSD rather than Linux, like TrueNAS CORE). Another is that SMB Direct file sharing needs RDMA, which Windows only supports on Windows Server and Windows Pro for Workstations, which are different SKUs from plain Windows Professional. Getting everything running isn't as simple as running a cable and binding an IP to a port, and there can be troubleshooting and performance testing (frame sizes, etc.) to do once you're ready to go.
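
On the performance-testing front, iperf3 is the usual tool, but if you just want a quick smoke test over the new link, a rough Python sketch like this will do. The port number is arbitrary and Python itself will bottleneck well before 40Gbps, so treat the result as a floor rather than a ceiling.

[code]
#!/usr/bin/env python3
"""Rough throughput smoke test for a point-to-point link.

Run "python3 linktest.py server" on one host and
"python3 linktest.py client <server_ip>" on the other, using the address
you bound to the ConnectX port. iperf3 gives far better numbers.
"""
import socket
import sys
import time

PORT = 5201            # arbitrary test port
CHUNK = 4 * 1024**2    # 4 MiB per send
TOTAL = 4 * 1024**3    # push 4 GiB total

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.time() - start
    print(f"{received / 1e9:.2f} GB in {elapsed:.1f}s = "
          f"{received * 8 / elapsed / 1e9:.2f} Gbps from {addr[0]}")

def client(host):
    buf = b"\x00" * CHUNK
    sent = 0
    start = time.time()
    with socket.create_connection((host, PORT)) as conn:
        while sent < TOTAL:
            conn.sendall(buf)
            sent += len(buf)
    elapsed = time.time() - start
    print(f"{sent / 1e9:.2f} GB in {elapsed:.1f}s = "
          f"{sent * 8 / elapsed / 1e9:.2f} Gbps")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
[/code]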

If you need a point-to-point high speed connection or two, this is a great and cheap way to get started. Basically, you just give each port on the ConnectX its own IP on a different segment from whatever you'd normally use. Whether you set up routing between those segments is your call, but as long as each of the high speed links is configured, you can convince your backbone clients to talk to each other through the addresses or names you define for them.
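
To make that concrete, on Linux it's just a couple of `ip` commands per port; here's what I mean, wrapped in Python (interface names, addresses, and the 9000-byte MTU are made-up examples):

[code]
#!/usr/bin/env python3
"""Sketch: give each ConnectX port its own address on its own segment.

Interface names and the 10.10.x.0/24 segments are invented examples;
substitute whatever your system reports and whatever addressing you like.
This is just the equivalent of running the "ip" commands by hand.
Needs root. On Windows the same thing is a New-NetIPAddress call per port.
"""
import subprocess

# (interface, address/prefix, MTU) - one private segment per point-to-point link
LINKS = [
    ("enp65s0f0", "10.10.10.1/24", 9000),
    ("enp65s0f1", "10.10.20.1/24", 9000),
]

for ifname, cidr, mtu in LINKS:
    subprocess.run(["ip", "addr", "add", cidr, "dev", ifname], check=True)
    subprocess.run(["ip", "link", "set", ifname, "mtu", str(mtu), "up"], check=True)
[/code]

The machine on the far end of each cable gets .2 on the matching segment, and then your clients just talk to those addresses (or hosts-file names) directly.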

Cabling can be a PITA. You need to make sure that you're using the proper type of SFP+ or QSFP+ termination on both sides and that the cable (or transceiver) is coded to work with Mellanox hardware. Twinax DAC is pretty common and easy to find for short runs, but you'll probably need multimode fiber with separate transceivers once you get much past 5m. To keep confusion to a minimum, it's handy to know that Mellanox sells its own branded and tested cables that aren't THAT hard to dig up.

Most people aren't going to notice the difference between 10Gb, 25Gb and 40Gb service, but given how capable hardware is nowadays, having your VM/Docker/Plex/file host on some giant headless system that you can access at essentially local SSD speeds, for less than the cost of a 2.5GbE switch, is not remotely a bad thing.
 

Mercutio

I'm not aware of any 4x QSFP+ Mellanox cards. Many of the ones you can find will be dual port. With three dual-port cards, everybody can have a dedicated path to the other high speed hosts, if that's something you need. 40Gb switches can ALSO be found for reasonable amounts of money, but they're the sorts of things that sit in a rack and make crap tons of noise. There are threads all over the place where guys have modded the original fans with vastly quieter Noctua models, usually shooting for the 5krpm 40mm fans, since the low-RPM models can't push enough air to actually keep the switches cool.
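
To picture the dedicated-path setup: three hosts with a dual-port card each gives you three point-to-point links, each on its own little segment. Something like this, with hosts and subnets invented, obviously:

[code]
# Hypothetical addressing plan for a three-host full mesh,
# one /24 per point-to-point link, no switch involved.
MESH_LINKS = {
    ("hostA", "hostB"): ("10.10.1.1/24", "10.10.1.2/24"),
    ("hostA", "hostC"): ("10.10.2.1/24", "10.10.2.2/24"),
    ("hostB", "hostC"): ("10.10.3.1/24", "10.10.3.2/24"),
}
# Each host burns one ConnectX port per link: port 1 for its first
# peer, port 2 for its second, so every pair talks directly.
[/code]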
 

LunarMist

I was thinking more like 4x SFP28 (25G), or is that not a good idea?
It may be better to wait for a totally SSD internal setup, but that's too expensive now.
 

Mercutio

I know 4x SFP28 cards exist, but I sort of stopped looking at anything else once I saw how dead cheap the 40Gb cards are. I take it that your goal is to sort of make your own switch? I'm also aware that QSFP+ to 4x 10Gb SFP+ breakout cables exist, although I don't immediately know what equipment supports that.

Going all SSD depends on needs and capacities. You can build tiered storage systems on Linux (kind of - in OSS space, proper storage tiers are something left for paid solutions, but "give ZFS gobs of RAM and dedicated SLOG drives" comes pretty close) or on Windows Server, so you're writing to a cache drive first. This kind of stuff is why your QNAP/Synology setups are better for people who don't want to mess with it.
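
For the ZFS version of that, the usual move is a mirrored SLOG for sync writes plus an L2ARC device for reads that spill out of RAM. Roughly like this, with the pool name and device paths made up, and with the caveat that this approximates tiering rather than truly implementing it:

[code]
#!/usr/bin/env python3
"""Sketch: bolt fast NVMe onto an existing ZFS pool.

Pool name ("tank") and device paths are placeholders. The SLOG only
accelerates synchronous writes and the L2ARC only caches reads that
don't fit in RAM (the ARC), so this is cache-ish, not true tiering.
"""
import subprocess

POOL = "tank"

# Mirrored SLOG for sync writes (NFS/iSCSI/VM workloads benefit most)
subprocess.run(["zpool", "add", POOL, "log", "mirror",
                "/dev/nvme0n1", "/dev/nvme1n1"], check=True)

# Single L2ARC device for read caching beyond what RAM holds
subprocess.run(["zpool", "add", POOL, "cache", "/dev/nvme2n1"], check=True)
[/code]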
 

LunarMist

I was just thinking about adding a higher speed link to the current setups. I'm not living in a server room with all that noise and I don't have the power circuits to build a storage server. It seems I should stick with the 10G for NAS or maybe look at the Thunderbowls.

I'm not ready to spend $8K on a 61TB SSD, but maybe I'll revisit when the 9000 series AMD is out. That's for another thread.
 