120mm Case fans with adjustable speed - recommendations?

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
My server was my workstation as well. I was tired of the network being the bottleneck between my data and my machine. Having 10TB+ of data locally at 600MB/s+ is something you get used to.
Why didn't you use something like 4Gb Fibre Channel instead? :-?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,669
Location
Horsens, Denmark
OK, one 16-port 3ware card. What are the other drives connected to? Onboard SATA? Or did you get a second 3ware card and make one big array?

The 3Ware is not in service at the moment. Not sure what I'm going to do with it.

I'm pretty sure he said he used a 24 port Areca.

Indeed. 1280ML with 2GB memory upgrade and battery backup unit.

Why didn't you use something like 4Gb Fibre Channel instead? :-?

Because that would be even more expensive.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,009
Location
I am omnipresent
Why didn't you use something like 4Gb Fibre Channel instead? :-?

FC requires a whole support infrastructure to make it work. You need FC switches and cabinets with proper backplanes, and even the cabling is expensive. Plus, the controllers are mostly going to use some sort of funky interface like PCI-X or 64-bit PCI, and that gets away from the consumer features ddrueding probably likes, like Crossfire/SLI and overclocking support.

Plus, with 24 drives, 4Gbps might be a bottleneck, depending on how the storage is distributed.

On the plus side, moving to FC does eliminate the need for a specific number of ports on a controller, as long as you can trunk one FC Switch off another.
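
To put rough numbers on it: 24 drives doing even ~75MB/s apiece on sequential transfers is around 1.8GB/s in aggregate, while a single 4Gbps FC link is good for roughly 400MB/s of payload after encoding overhead, so a handful of drives can saturate one link.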
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,862
Location
USA
Plus, the controllers are mostly going to use some sort of funky interface like PCI-X or 64-bit PCI, and that gets away from the consumer features ddrueding probably likes, like Crossfire/SLI and overclocking support.

We use several different Emulex 4Gb FC cards in-house (for example, the LPe11000) that are PCIe... they do make them with non-funky interfaces. Many of our new Dell servers come with PCIe slots these days, so that's how we connect.

Our current ESX cluster (10 HP blades) is connected (masked) to some 40+ LUNs on a CLARiiON over 4Gb FC, so you're absolutely right about it being a nice way to remove port restrictions.

The way you can get around saturating a 4Gb FC HBA is to run software such as PowerPath and have the I/O distributed among multiple HBA ports. This also gives you some redundancy in case one of the ports dies (assuming everything is cabled correctly).
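
If you're not buying PowerPath, Linux's dm-multipath does the same basic trick. A minimal /etc/multipath.conf sketch (defaults section only; the policy names are the stock dm-multipath ones, nothing EMC-specific):

defaults {
        user_friendly_names yes
        path_grouping_policy multibus   # one path group, so I/O is spread over every HBA port
        failback immediate              # return to a recovered path as soon as it comes back
}

Same idea: balance the load across ports and survive a dead link.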
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
FC requires a whole support infrastructure to make it work. You need FC switches and cabinets with proper backplanes, and even the cabling is expensive. Plus, the controllers are mostly going to use some sort of funky interface like PCI-X or 64-bit PCI, and that gets away from the consumer features ddrueding probably likes, like Crossfire/SLI and overclocking support.

Plus, with 24 drives, 4Gbps might be a bottleneck, depending on how the storage is distributed.

On the plus side, moving to FC does eliminate the need for a specific number of ports on a controller, as long as you can trunk one FC Switch off another.
I only suggested it. ;)

FWIW, my buddy has a full FC setup in his house. He's running a SAN with 48 15K RPM SAS drives on a Dell PERC 6/i under Server 2003. Apparently he bought all the FC gear used on eBay pretty cheap. He gets insane performance and has no disks in any of his workstations.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,862
Location
USA
I took a look on eBay and I'm surprised at how affordable 16-port FC switches are. What wasn't clear to me is whether they come with all 16 GBICs.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,669
Location
Horsens, Denmark
I only suggested it. ;)

FWIW, my buddy has a full FC setup in his house. He's running a SAN with 48 15K RPM SAS drives on a Dell PERC 6/i under Server 2003. Apparently he bought all the FC gear used on eBay pretty cheap. He gets insane performance and has no disks in any of his workstations.

Back when I was an SPCR fanatic, I looked into it and many other solutions for diskless workstations. If we end up building a house, I'll consider wiring the house for it.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,862
Location
USA
My hunch is that 10Gb Ethernet is going to win out over Fibre Channel in the future, so you may want to think about wiring for that. At work I think we even have switches in-house that run Fibre Channel over Ethernet, because it costs an IT department less to manage just Ethernet than to manage both. :)
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Maybe, but I wouldn't bet on it. Windows file sharing is a huge bottleneck on GigE; I shudder to think what it would do on 10GigE. FC has no such bottleneck. Intel doesn't support XP with their 10GigE cards (not sure about others). Ethernet and FC can do similar things, but FC has some key advantages over Ethernet, and those are probably enough to keep it around.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,862
Location
USA
What I'm saying is that a company likely has Ethernet already rolled out across its infrastructure, so adding FC over Ethernet on 10Gb might be easier to manage than having two separate sets of switches, etc. Your storage will still talk FC, but it'll be delivered over 10Gb Ethernet cabling (copper or fiber).
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,728
Location
Québec, Québec
What I'm saying is that a company likely has Ethernet already rolled out across its infrastructure, so adding FC over Ethernet on 10Gb might be easier to manage than having two separate sets of switches, etc.
But most companies have CAT5E cables all over the place and 10GbE requires something higher grade. They'll have to re-wire anyway, although I agree that CAT6 cable is still quite a bit cheaper than fiber.

(Not having googled it)...CAT6 is enough for 10GbE, right?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,009
Location
I am omnipresent
Can you boot from FC over IP?

PXE relies on a small TCP/IP stack in the NIC's boot ROM to send out DHCP/BOOTP requests and start the netboot process. In theory, any interface that supports IP and has a DHCP/BOOTP server configured with the correct set of options would work, regardless of what's going on at layer 2.
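
As a rough sketch of the server side, it's just two DHCP options pointing the PXE ROM at a TFTP server (the addresses and filename here are made up):

subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.150;
        next-server 192.168.1.10;     # TFTP server holding the boot files
        filename "pxelinux.0";        # bootstrap program the PXE ROM downloads and runs
}

That's ISC dhcpd syntax; other DHCP servers have equivalent options.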

A quick google says some FC HBAs support boot-from-LUN, but that's not PXE. I'm not exactly sure how that would work; every FC-equipped system I've ever dealt with has had some kind of internal SCSI drive to hold its OS.

Anyway, I'm in the process of making my Linux servers into iSCSI targets right now. I've got a 1000baseSX switch and some cables up to 25m, and I even managed to snag a transceiver off eBay, so in theory I can set everything up on my existing infrastructure. I haven't bitten the bullet on NICs yet, though, just because of the possibility of getting cards that don't perform any better than 1000baseT speeds. I'm hoping the combination of iSCSI and 1000Mbit transmission speeds takes care of my own storage needs.
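
The target side isn't much work either. With something like iSCSI Enterprise Target the whole export is a few lines in /etc/ietd.conf (the IQN and block device below are made up for the example):

Target iqn.2009-01.lan.example:storage.array0
        Lun 0 Path=/dev/md0,Type=blockio   # export the RAID volume as LUN 0
        MaxConnections 1                   # one initiator at a time

The initiator just logs into that IQN and sees it as a local disk.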
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,009
Location
I am omnipresent
But most companies have CAT5E cables all over the place and 10GbE requires something higher grade. They'll have to re-wire anyway, although I agree that CAT6 cable is still quite a bit cheaper than fiber.

(Not having googled it)...CAT6 is enough for 10GbE, right?

I was just looking at this with my Network+ students, oddly enough.

The least expensive, most practical 10GbE I'm aware of is 10GbaseSR, which is still fiber based.

There *is* a 10GbaseT, which uses Cat6 but introduces enormous latency into your connections, and a 10GbaseCX4, whose cabling range is really, really short (15m).
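
Rough cheat sheet, from memory, so double-check the distances before buying anything:

10GbaseSR    multimode fiber       ~300m on OM3 glass, much less on older fiber
10GbaseT     Cat6 / Cat6a copper   ~55m on Cat6, 100m on Cat6a
10GbaseCX4   twinax copper         15m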
 