How would one best go about tuning storage for maximized read/write rates?

JBHAIRE2004

What is this storage?
Joined
Feb 7, 2011
Messages
2
I have a prior-generation SAN unit and I am trying to tune my subsystem for maximized, sustainable read/write rates. I have been doing lots of reading and trial-and-error testing. There is so much conflicting information that I have managed to confuse myself. I'm not accomplishing anything at this point, so I think it's time to step back and ask for help. How would one best go about tuning storage for maximized read/write rates?

My intent is to maximize read/write rates on the SAN for large-file storage and virtualization. The majority of the static files on the SAN will be 700MB ISO images, 1-2GB video files, 4GB DVD ISO images, and 4GB video files. I am also going to be putting Hyper-V virtual machines on the SAN. The VM VHDs will be a combination of dynamically expanding and fixed-size VHDs.

Details and specs about the hardware in play:
- (16)x 1TB WD RE3 SATA drives
- Infortrend A16F-R2221 FC-to-SATA SAN with redundant controllers
- Controllers are only SATA-I for all intents and purposes
- For max compatibility with the controller, I jumper the WD RE3 drives to hard-set them at 1.5Gb/s
- 2GB stick of RAM for cache in each controller
(controller only uses 31-33% of the cache, or ~630MB, for reads/writes)
- Redundant 2Gb/s Fibre Channel ports to each controller on the SAN and to the server HBA
- Brocade SilkWorm 3902 32-port 2Gb/s FC switch
- I am focused on maximized read/write rates for sequential reads and writes
(tell me if that is wrong for my intended purposes)
- Separating the (16) drives into (2)x 8-drive RAID-5 LD arrays
o Array A assigned to controller A
o Array B assigned to controller B
- SAN RAID-level stripe sizes available (see the stripe-math sketch after this list):
o 16K (16,384 bytes)
o 32K (32,768 bytes)
o 64K (65,536 bytes)
o 128K (131,072 bytes)
o 256K (262,144 bytes)
o 512K (524,288 bytes)
o 1024K (1,048,576 bytes)
- Windows NTFS allocation unit (cluster) sizes available:
o 512 - 512 bytes (0.5K)
o 1024 - 1,024 bytes (1K)
o 2048 - 2,048 bytes (2K)
o 4096 - 4,096 bytes (4K)
o 8192 - 8,192 bytes (8K)
o 16K - 16,384 bytes (16K)
o 32K - 32,768 bytes (32K)
o 64K - 65,536 bytes (64K)
- Currently only one partition per LD on the SAN and in Windows
- Server's OS is Windows Server 2008 R2 x64 with SP1 (RC)
- (If it matters, the current server is a Dell PE860: Socket 775, 2.4GHz quad-core, 8GB RAM.
Once ready, I will connect the SAN to my Dell PE1950 GIII: Socket 771, (2)x 2.33GHz quad-cores, 48GB RAM.)
- (The Multipath I/O feature / MPIO driver is installed on the PE860, in case it could be affecting performance.)
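For reference, the stripe math for one of these 8-drive RAID-5 LDs works out as in the sketch below (the drive count and stripe sizes come from the specs above; the script itself is purely illustrative). A write only dodges the RAID-5 read-modify-write parity penalty when it is aligned to, and a multiple of, the full data stripe:

```python
# A minimal sketch of the stripe math for one 8-drive RAID-5 LD
# (parity rotates across the set, but it still costs one drive's capacity,
# so seven drives' worth of data make up a full stripe).
DATA_DRIVES = 8 - 1

for stripe_kib in (16, 32, 64, 128, 256, 512, 1024):
    full_stripe_kib = stripe_kib * DATA_DRIVES
    print(f"stripe {stripe_kib:>4} KiB -> full-stripe write unit "
          f"= {full_stripe_kib:>4} KiB")
```

Anything smaller than, or misaligned to, that full-stripe unit forces the controller to read old data and parity back in before it can write, which is typically what caps sustained RAID-5 write rates.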

I have been using bst514 (aka Bart's Stuff Test v5.1.4) and ATTO Disk Benchmark to conduct read and write tests (see attached screenshots for reference). I always copy the application executable local to the drive that I am testing, to keep the results as apples-to-apples as possible.

What I have been attempting to determine with those apps is the combination of SAN RAID stripe size and Windows NTFS cluster size that yields the highest possible read/write rates.
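Laid out explicitly, that sweep is just the cross product of the two lists above. A minimal sketch (the alignment check at the end is my own quick sanity filter, not something the benchmark tools report):

```python
# Enumerate every (stripe size, NTFS cluster size) combination to test.
# NTFS allocation units max out at 64 KiB, so that list stops there.
from itertools import product

stripe_kib = (16, 32, 64, 128, 256, 512, 1024)
cluster_kib = (0.5, 1, 2, 4, 8, 16, 32, 64)

for s, c in product(stripe_kib, cluster_kib):
    divides = (s % c == 0)  # a cluster should pack evenly into a stripe
    print(f"stripe {s:>4} KiB, cluster {c:>4} KiB, "
          f"cluster divides stripe: {divides}")
```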

Where things got fuzzy for me was in trying to understand the benchmark results. Each application (bst514 and ATTO Disk Benchmark) had a setting that threw me when I examined things more closely.

In bst514, under options, there is a block-size setting. Changing the block size affects the read/write rates. While I run the test I monitor the writes from the SAN's GUI console and see supporting read/write rates reflected there, as well as the cache utilization.
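That behavior makes sense: the block size sets how much data each write request hands the storage stack, and bigger requests mean fewer round trips per megabyte. For anyone who wants to see the effect outside the benchmark GUIs, a trivial unbuffered write loop shows the same thing (the path and sizes here are arbitrary):

```python
# Rough sequential-write timing at several block sizes.
import os
import time

TEST_FILE = "E:/bench.tmp"  # hypothetical path on the SAN volume
TOTAL_MB = 512

for block_kib in (64, 256, 1024, 4096):
    block = b"\0" * (block_kib * 1024)
    writes = TOTAL_MB * 1024 // block_kib
    start = time.perf_counter()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(writes):
            f.write(block)
        os.fsync(f.fileno())  # push everything past the OS cache
    elapsed = time.perf_counter() - start
    print(f"{block_kib:>5} KiB blocks: {TOTAL_MB / elapsed:6.1f} MB/s")

os.remove(TEST_FILE)
```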

In ATTO Disk Benchmark it's the "Transfer Size" and "Total Length" settings, although ATTO also brings I/O into the picture, and I am not so familiar with the nuances of I/O. Again, while I test I watch the SAN GUI, and the read/write rates reflected in the app appear fairly well supported.

By adjusting the block size and transfer size upward within the applications' test parameters, I was able to make what appeared to be a positive impact on the read/write rates. However, in an actual file-copy test from a Windows drive to the drive under test, I didn't see those highest speeds. How do real-world Windows writes equate to these tests?
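For what it's worth, here is a minimal sketch of how a timed copy of one of the real files could be measured (the paths are made up). A plain buffered copy also has to read the source at the same time it writes the destination, which is one reason it lands below a pure sequential-write benchmark:

```python
# Time an actual large-file copy, the way the files will really be used.
import os
import shutil
import time

SRC = "C:/iso/disc1.iso"  # hypothetical source on local storage
DST = "E:/iso/disc1.iso"  # hypothetical destination on the SAN volume

size_mb = os.path.getsize(SRC) / (1024 * 1024)
start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start
print(f"copied {size_mb:.0f} MB in {elapsed:.1f} s "
      f"-> {size_mb / elapsed:.1f} MB/s")
```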

Any help or guidance to help put me on the right track would be greatly appreciated.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
What kinds of data are being stored on the arrays? Video? .jpgs? Database stuff that's mostly held in RAM? The answer to your question is probably a solid "it depends" because your SAN performance in the end isn't going to be about bullshit synthetic benchmarks but live performance with real data. You'll probably have to test with some subset of real world data before you can accurately determine those things.
 

JBHAIRE2004

What is this storage?
Joined
Feb 7, 2011
Messages
2
Video Files and Hyper-V Virtual Machines.

I am the only one hitting the gear, so it's not like the disks are ever going to be thrashed by an enterprise full of users.

Since I started testing parameters I have increased my sustainable writes (in a reproducible manner) from 60-70MB/sec to 80-90MB/sec. Once, as a fluke, I got the sustainable writes up to 100-110MB/sec for one array and 120-130MB/sec for the other array. However, after a reboot of both the host and the SAN, I was not able to reproduce those highest write rates. (I think that was related to a controller failure that recovered in a weird way.) I did, however, take away from that confirmation that these drives can perform at 120-130MB/sec (if not higher) given the correct parameters.

Reads have always been at more than sufficient rates, so my focus has been on writes.
My minimum goal is to get sustainable write rates over 100MB/sec.
I'd be very pleased if I could push the writes over 150MB/sec.
I'd be ecstatic to get them near 200MB/sec.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
I don't think you're going to get 200MB/sec over 2Gb/s FC even under ideal circumstances.
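Back-of-the-envelope, assuming the links negotiate at the specced 2Gb/s:

```python
# Payload ceiling for one 2Gb Fibre Channel link.
line_rate_baud = 2.125e9                # 2G FC signals at 2.125 Gbaud
payload_bits = line_rate_baud * 8 / 10  # 8b/10b encoding overhead
payload_mb_s = payload_bits / 8 / 1e6
print(f"theoretical payload ceiling: {payload_mb_s:.0f} MB/s")  # ~212 MB/s
# Frame and protocol overhead usually pull that down to roughly 180-200 MB/s,
# so 200 MB/sec sustained through a single 2Gb port is right at the edge.
```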

In general, since the files you're dealing with are going to be large, you're probably best off using larger block and cluster sizes. I suspect that going bigger is going to cut down on the number of parity calculations your controller will have to make and thereby improve your write speed, but I'm not sure how that impacts what's going on inside your VMs. Usually the trade-off for huge clusters is good I/O performance in exchange for wasted space on small files, and possibly some loss of functionality in some disk tools. It could be that matching cluster sizes for the guest OSes with the host might also have some positive impact.
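To put a rough number on that trade-off (the file sizes here are just examples):

```python
# Slack space lost in the last, partially filled cluster of each file.
def slack_bytes(file_size: int, cluster: int) -> int:
    remainder = file_size % cluster
    return 0 if remainder == 0 else cluster - remainder

for cluster_kib in (4, 16, 64):
    cluster = cluster_kib * 1024
    small = slack_bytes(3_500, cluster)        # a 3.5 KB config file
    big = slack_bytes(4_700_000_000, cluster)  # a 4.7 GB DVD ISO
    print(f"{cluster_kib:>2} KiB clusters: {small:>6} B wasted on the "
          f"small file, {big:>6} B on the ISO")
```

With almost everything on these arrays in the hundreds of MB or larger, that waste is noise, which makes big clusters an easy call for this workload.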

Stereodude had some interesting performance comparisons for RAID-5 on Dell PERC controllers that might be a decent starting point for your testing.
 