Speeding up Windows File Sharing

Stereodude

Does anyone know if these TCP tweaks for Windows XP and Server 2003 (aka XP x64) actually speed up file transfers using Windows file sharing?

I can't find any benchmarks showing what improvement occurs by increasing the window size or any suggested values for gigabit ethernet.

Am I wasting my time even looking at this stuff, or will I be able to speed up file transfers?
 

Stereodude

I see up to ~45MB/sec file transfers. My buddy who's really networking savvy tells me that's about the limit for Windows file sharing, and that a more efficient protocol is needed to get better speed. He tells me FTP is good for ~90MB/sec.
 

Stereodude

Without any tweaks, here are the numbers back from iperf run this way:

iperf.exe -c 192.168.1.151 -r -t 30 -w xxxxx

4k window:
c: 352 Mbit/sec
s: 493 Mbit/sec

8k window:
c: 352 Mbit/sec
s: 493 Mbit/sec

16k window:
c: 401 Mbit/sec
s: 670 Mbit/sec

32k window:
c: 884 Mbit/sec
s: 926 Mbit/sec

64k window:
c: 939 Mbit/sec
s: 930 Mbit/sec

128k window:
c: 922 Mbit/sec
s: 934 Mbit/sec

I assume this means I should set my default TCP window to something larger than the default value of 8k.
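
That lines up with the usual bandwidth-delay product rule of thumb: the window has to cover link speed times round-trip time. A rough sketch of the math (the 0.5 ms RTT is an assumed LAN figure, not something I measured):

# Rough bandwidth-delay product estimate for gigabit ethernet.
# The 0.5 ms round-trip time is an assumption for a typical LAN, not a measured value.
link_bps = 1_000_000_000   # 1 Gbit/sec
rtt_sec = 0.0005           # ~0.5 ms assumed round trip

bdp_bytes = (link_bps / 8) * rtt_sec
print(f"Window needed to fill the pipe: ~{bdp_bytes / 1024:.0f} KB")
# ~61 KB, which matches throughput leveling off around the 64k window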
 

Stereodude

Well, the results so far say no: you can't really speed up Windows file sharing with TCP tuning. I benchmarked a file copy between my two servers with and without the following registry tweaks.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; 0x80000 = 512KB receive window
"TcpWindowSize"=dword:00080000
; 3 = enable TCP window scaling and timestamps (RFC 1323)
"Tcp1323Opts"=dword:00000003

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
; 0x10000 = 64KB send window
"DefaultSendWindow"=dword:00010000

Both systems had the tweaks applied. I pushed to and pulled from both systems, using a 5.5GB file, and timed how long it took to copy it from one system to the other. I did 3 successive runs and averaged the times (though they were quite consistent).
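
For reference, the MB/sec figures are just file size divided by wall-clock copy time. A rough sketch of how that could be scripted (the paths are hypothetical placeholders, not my actual shares):

import os
import shutil
import time

# Hypothetical paths: a large local test file pushed to a share on the other box.
SRC = r"D:\test\bigfile.bin"
DST = r"\\server1\share\bigfile.bin"

start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start

size_mb = os.path.getsize(SRC) / (1024 * 1024)
print(f"{size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.2f} MB/sec")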

Baseline:
Push to Server - 59.14MB/sec
Pull from Client - 52.85MB/sec

Pull from Server - 47.51MB/sec
Push to Client - 14.87MB/sec

Tweaked:
Push to Server - 59.34MB/sec
Pull from Client - 52.04MB/sec

Pull from Server - 35.91MB/sec
Push to Client - 17.56MB/sec


I plan to benchmark my desktop to the server and see what happens, but I'm not expecting any miracles.
 

Stereodude

I tested with my desktop and my main server and got the following results:

Baseline:
Push to Server - 66.07MB/sec
Pull from Desktop - 51.72MB/sec

Pull from Server - 59.34MB/sec
Push to Desktop - 22.73MB/sec

Tweaked:
Push to Server - 65.30MB/sec
Pull from Desktop - 51.72MB/sec

Pull from Server - 49.02MB/sec
Push to Desktop - 22.79MB/sec
 

ddrueding

I've decided to try a different route. I ordered these for the two main fileservers and my workstation. They (and my switch) are all supposed to support teaming.
 

Stereodude

Why didn't you go quad port?

That solution doesn't really work for me. I don't have any free PCIe x4 or wider slots that I want to devote to a NIC in my server, though it has dual Realtek 8111C NICs on-board. My backup server doesn't have any PCIe slots, and I'm not sure what's out there in PCI-X land, though anything on that bus would sap bandwidth from the RAID card. My desktop likewise doesn't have any free PCIe x4 slots in it.

I could probably put together some sort of dual-port teaming solution for my systems, but I'm skeptical of how much things would improve. I'd probably get more bang from going to an MS OS that supports SMB 2.0, or another OS that doesn't use SMB.
 

ddrueding

The quad port cards are amazingly expensive ($400+). But if this shows a significant improvement, the main fileserver and its backup might get one.
 

Stereodude

The posters on Hardforum all pushed Teracopy. I tried it and was underwhelmed.

Windows Explorer Copy: (from before)
Server2 push to Server1 - 59.14MB/sec
Server2 pull from Server1 - 47.51MB/sec

Teracopy 2.06beta:
Server2 push to Server1 - 42.93MB/sec
Server2 pull from Server1 - 32.78MB/sec
 

P5-133XL

The numbers you are quoting of 32MB/sec to 59MB/sec may actually be a limitation of the transfer rate to and from your hard drives, especially if the drives are fragmented. This obviously depends on the age of the HDs (older drives have slower transfer rates) and how full or fragmented the drives are (the more seeks, the slower the transfer). If you are testing the network infrastructure, you may want to eliminate the possibility that it is the drives limiting the speed by using quality SSDs at both ends.
 

Stereodude

P5-133XL said:
The numbers you are quoting of 32MB/sec to 59MB/sec may actually be a limitation of the transfer rate to and from your hard drives, especially if the drives are fragmented. This obviously depends on the age of the HDs (older drives have slower transfer rates) and how full or fragmented the drives are (the more seeks, the slower the transfer). If you are testing the network infrastructure, you may want to eliminate the possibility that it is the drives limiting the speed by using quality SSDs at both ends.

The main server has a RAID-5 array capable of reading and writing 300+ MB/sec. The other server had a RAID-5 array capable of 150+ MB/sec reads and slower writes (but still faster than what was measured). My desktop PC used a mostly empty 1.5TB Seagate 7200.11 that can push >110MB/sec at the front of the drive. All drives were defragged, and none of them had the OS installed on 'em, so they weren't multitasking.

I'm very confident that HD speed did not affect the benchmarks.
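
If anyone wants to sanity-check their own drives the same way, here's a quick-and-dirty sequential read timer (the path is a placeholder; a real tool like HD Tune will give better numbers, and the file shouldn't already be sitting in the Windows cache):

import os
import time

PATH = r"D:\test\bigfile.bin"   # placeholder: a large file that isn't already cached in RAM

start = time.time()
read_bytes = 0
with open(PATH, "rb") as f:
    while True:
        chunk = f.read(64 * 1024 * 1024)   # 64MB sequential reads
        if not chunk:
            break
        read_bytes += len(chunk)
elapsed = time.time() - start

print(f"{read_bytes / (1024 * 1024) / elapsed:.1f} MB/sec sequential read")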
 

Stereodude

ddrueding said:
I've decided to try a different route. I ordered these for the two main fileservers and my workstation. They (and my switch) are all supposed to support teaming.

FWIW, the guys on the Hardforum tell me this won't work for host to host and that teaming / link aggregation / trunking is only useful for increasing BW in a one to many or a many to one situation.
 

ddrueding

Stereodude said:
FWIW, the guys on the Hardforum tell me this won't work for host to host and that teaming / link aggregation / trunking is only useful for increasing BW in a one to many or a many to one situation.

Based on my reading of the advanced Teaming options in the device driver, there seem to be many settings for teaming. Many of them are just as you say, but the mode called "Static Link Aggregation" looks promising. It only works with Intel multi-port NICs, and still requires the switches to support 802.3ad, but we shall see. I won't have time tonight, but maybe tomorrow.
 

Pradeep

Stereodude said:
FWIW, the guys on the Hardforum tell me this won't work for host to host and that teaming / link aggregation / trunking is only useful for increasing BW in a one to many or a many to one situation.

Teaming can provide benefits in a one to one situation. Cheaper performance whilst we wait for 10 gbit ethernet to trickle down.
 

Stereodude

ddrueding said:
Based on my reading of the advanced Teaming options in the device driver, there seem to be many settings for teaming. Many of them are just as you say, but the mode called "Static Link Aggregation" looks promising. It only works with Intel multi-port NICs, and still requires the switches to support 802.3ad, but we shall see. I won't have time tonight, but maybe tomorrow.

I'm still not so sure about that. Per Intel:

Link Aggregation
The combining of multiple adapters into a single channel to provide greater bandwidth. Bandwidth increase is only available when connecting to multiple destination addresses. ALB mode provides aggregation for transmission only while RLB, SLA, and IEEE 802.3ad dynamic link aggregation modes provide aggregation in both directions. Link aggregation modes requires switch support, while ALB and RLB modes can be used with any switch.
 

Handruin

I've seen other reviews online that tested network performance by using a RAM drive to copy files between systems (or use various testing tools) to ensure that the hard drive wasn't limiting them. Just throwing that out there to see if it'll improve your numbers. I'm going to try some of this on my own systems. I can try to do the same tests as you to see how close the results are if you're interested?
 

Handruin

Another quick thought...have you tried your tests with a direct Ethernet cable connection between your two machines to eliminate the possibility of the switch causing any issues?
 

Stereodude

Handruin said:
Another quick thought...have you tried your tests with a direct Ethernet cable connection between your two machines to eliminate the possibility of the switch causing any issues?

I didn't try that. It'll have to wait until my new RAID array is up and running before I can resume benchmarking though.
 

Stereodude

Handruin said:
I've seen other reviews online that tested network performance by using a RAM drive to copy files between systems (or use various testing tools) to ensure that the hard drive wasn't limiting them. Just throwing that out there to see if it'll improve your numbers. I'm going to try some of this on my own systems. I can try to do the same tests as you to see how close the results are if you're interested?

That's possibly an option... I tried to use larger files to avoid any sort of caching taking place. Two of the systems have 4GB of RAM, and the other has 2GB. If you've got files that fit in a RAM drive, they might also be small enough to be mostly cached. I'm not really sure if Windows caches files from network shares, though.
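
For the disk-based runs, the safest bet is a test file bigger than the RAM in either box, so the cache can't hide the real transfer rate. A rough sketch for generating a large, incompressible test file (the path and size are placeholders):

import os

PATH = r"D:\test\bigfile.bin"   # placeholder destination
SIZE_GB = 6                     # bigger than the 4GB of RAM in these machines

CHUNK = 64 * 1024 * 1024        # write in 64MB chunks
with open(PATH, "wb") as f:
    written = 0
    while written < SIZE_GB * 1024 ** 3:
        f.write(os.urandom(CHUNK))   # random data, so nothing compresses or caches its way to a fake result
        written += CHUNK

print(f"Wrote {written / 1024 ** 3:.1f} GB to {PATH}")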
 

Stereodude

So the fastest NIC in one scenario is also the slowest in the other three scenarios. Classic...



The results are the average of 3 copies of a 5.5GB file using Windows Explorer. Clearly the NIC has a far greater impact on file transfer speeds than any TCP "optimizations" (which made things worse in my testing).
 

ddrueding

TeraCopy was significantly slower than the default Windows copy in my scenario.

6x 3TB Hitachi "R"AID-0 on both ends, connected via Intel PCIe server NICs and a crossover cable. The arrays are capable of 630MB/s reads and 450MB/s writes; Windows copy "pull" was able to do 98MB/s while TeraCopy was limited to 60MB/s.
 