Mercutio
Fatwah on Western Digital
I suspect that many people do not know the power of Mellanox ConnectX adapters. Here's the deal:
Mellanox adapters are readily available on eBay and other marketplaces. They come in lots of different models, but in most cases third-generation (ConnectX-3) cards can be bought in pairs for around $60. These cards typically ship with one or two SFP+ or QSFP+ ports on the back and use a PCIe 3.0 x8 interface. There are software tools for Windows and Linux that can switch each port between Ethernet and InfiniBand operation and set its speed anywhere from 10Gbps all the way up to 56Gbps in some cases. More expensive cards can use PCIe 4.0 and operate at 100Gbps.
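If you want to see what that mode switch looks like in practice, here's a minimal sketch for Linux, assuming you've installed the Mellanox firmware tools (the mlxconfig utility from the MFT package). The device path below is a placeholder, so check mst status or lspci on your own box first, and the change only takes effect after a reboot:

#!/usr/bin/env python3
"""Minimal sketch: flip both ports of a ConnectX card to Ethernet mode by
calling mlxconfig (from the Mellanox/NVIDIA MFT package). Assumes mlxconfig
is installed; the device path is a placeholder, so find yours with
`mst status` or `lspci` first. Changes apply on the next reboot."""
import subprocess

DEVICE = "/dev/mst/mt4099_pciconf0"  # placeholder; yours will differ

# LINK_TYPE values understood by mlxconfig: 1 = InfiniBand, 2 = Ethernet, 3 = VPI/auto
ETHERNET = "2"

def set_ports_to_ethernet(device: str) -> None:
    # Show the current settings first so you can see what you're about to change.
    subprocess.run(["mlxconfig", "-d", device, "query"], check=True)
    # Set both ports to Ethernet; mlxconfig asks for confirmation on the terminal.
    subprocess.run(
        ["mlxconfig", "-d", device, "set",
         f"LINK_TYPE_P1={ETHERNET}", f"LINK_TYPE_P2={ETHERNET}"],
        check=True,
    )

if __name__ == "__main__":
    set_ports_to_ethernet(DEVICE)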
Are there downsides to this? Not many. One of the biggest is that InfiniBand isn't well supported on some common *nix OSes (e.g. most things based on BSD rather than Linux, like TrueNAS CORE). Another is that SMB Direct file sharing on Windows needs RDMA, which is only supported on Windows Server and Windows Pro for Workstations, not on the regular Professional SKU. Getting everything running isn't as simple as running a cable and binding an IP to a port, and there can be troubleshooting and performance testing (frame sizes, etc.) to do once you're ready to go.
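On the performance-testing side, a rough sanity check is easy to script yourself. The sketch below is a single Python file (a quick hack, not anything official) that pushes a TCP stream between two hosts and reports the rate; a single Python stream will understate what the card can really do, so treat it as an "is this actually faster than 1GbE/2.5GbE" check and reach for iperf3 for real numbers. Run it with "server" on one box and "client <server-ip>" on the other:

#!/usr/bin/env python3
"""Quick-and-dirty TCP throughput check over the new link. This is just a
sanity test, not a replacement for iperf3."""
import socket
import sys
import time

PORT = 5201        # arbitrary; pick any free port on both hosts
CHUNK = 1 << 20    # 1 MiB per send/recv
DURATION = 10      # seconds the client keeps sending

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total = 0
        start = time.monotonic()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.monotonic() - start
        print(f"received {total / 1e9:.2f} GB from {addr[0]} in {elapsed:.1f}s "
              f"= {total * 8 / elapsed / 1e9:.2f} Gbps")

def client(host: str) -> None:
    buf = b"\x00" * CHUNK
    sent = 0
    with socket.create_connection((host, PORT)) as conn:
        start = time.monotonic()
        while time.monotonic() - start < DURATION:
            conn.sendall(buf)
            sent += len(buf)
        elapsed = time.monotonic() - start
    print(f"sent {sent / 1e9:.2f} GB in {elapsed:.1f}s "
          f"= {sent * 8 / elapsed / 1e9:.2f} Gbps")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])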
If you need a point-to-point high-speed connection or two, this is a great, cheap way to get started. Basically, you give each port on the ConnectX its own IP on a different segment from whatever you'd normally use. Whether you set up routing between those segments is your call, but as long as each high-speed link is configured, you can convince your backbone clients to talk to each other through the addresses or names you define for them.
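To make the addressing idea concrete, here's a tiny sketch using Python's ipaddress module that carves out one /30 per direct link (two usable addresses each). The 10.10.x.x ranges and link names are made up, so substitute whatever doesn't collide with your existing LAN:

#!/usr/bin/env python3
"""Sketch of the addressing plan: one tiny subnet per point-to-point link,
kept well away from the normal LAN. The ranges below are placeholders."""
import ipaddress

# One /30 per direct link between two ConnectX ports.
links = {
    "desktop <-> nas":     ipaddress.ip_network("10.10.0.0/30"),
    "desktop <-> vm-host": ipaddress.ip_network("10.10.0.4/30"),
}

for name, net in links.items():
    a, b = net.hosts()  # the two endpoint addresses on this link
    print(f"{name}: {a}/{net.prefixlen} on one end, {b}/{net.prefixlen} on the other")

Each endpoint then gets its address bound to the Mellanox port (ip addr add on Linux, or the adapter's IPv4 settings on Windows), and an /etc/hosts or DNS entry pointing a "fast" name at that address helps make sure clients actually use the fast path instead of your regular LAN.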
Cabling can be a PITA. You need to make sure you're using the proper type of SFP+ or QSFP+ termination on both sides and that the cable is coded to work with Mellanox hardware. Twinax DAC is pretty common and easy to find for short runs, but multimode fiber might be needed if you get much past 5m. To keep confusion to a minimum, it's handy to know that Mellanox sells its own branded and tested cables, and they aren't THAT hard to dig up.
Most people aren't going to notice the difference between 10Gb, 25Gb, and 40Gb service, but given how capable hardware is nowadays, having your VM/Docker/Plex/file host on some giant headless system that you can reach at essentially local-SSD speeds, for less than the cost of a 2.5GbE switch, is not remotely a bad thing.