From RAM disk testing, my system maxes out at 1200 MB/sec.
David's does 3500 MB/sec, IIRC.
The 3ware RAID card has done 800 MB/sec in the 133 MHz PCI-X slot, with eight 7200 RPM SATA drives, in one of the tests.
I think at these speeds and access times, to really notice a difference you need to at least double the throughput. Most of the tests they are using are not random-access heavy, but sequential-read stuff.
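For what it's worth, here's a rough way to sanity-check sequential-read throughput with dd; the file path and size are arbitrary, and dropping the page cache (so you measure the disk, not RAM) needs root on Linux:

```shell
# Rough sequential-read check.  /tmp/seqread.bin and 256 MiB are
# arbitrary; bigger files give steadier numbers on fast arrays.
TESTFILE=/tmp/seqread.bin

# Write a test file and flush it to disk.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>/dev/null
sync

# Optionally drop the page cache first (Linux, needs root), otherwise
# you may be timing RAM, not the drives:
#   echo 3 > /proc/sys/vm/drop_caches

# Read it back sequentially; dd reports the MB/s figure on stderr.
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

Not a real benchmark suite, but it's enough to compare slots and arrays on the same box.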
So I'm now in a situation where, if the price is right, I'm going to pick up three or more smaller SSDs, put them in RAID 0, and see what that setup will do.
I tested the card last night and it works great, and so does my SCSI setup in the 100 MHz slot.
David is getting 600 MB/sec with 3 drives. THAT would make a huge, noticeable difference.
I'd like to see David's system with 6-10 drives in RAID 0...
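Back-of-envelope, RAID 0 sequential reads scale roughly linearly with drive count until the bus or controller tops out. A sketch of that arithmetic, assuming ~200 MB/sec per SSD (David's 600/3) and the 1200 MB/sec ceiling my machine showed on the RAM disk test:

```shell
# RAID 0 sequential-read estimate: linear scaling, capped by the bus.
# The 200 MB/s per-drive and 1200 MB/s cap are assumptions, not specs.
per_drive=200    # MB/s per SSD (assumed from 600 MB/s over 3 drives)
bus_cap=1200     # MB/s bus/controller ceiling (assumed)

for n in 3 6 10; do
    raw=$((per_drive * n))
    est=$(( raw < bus_cap ? raw : bus_cap ))
    echo "$n drives: ~${est} MB/s"
done
```

By that math, 6 drives would already be bumping the bus limit on my box; past that you'd need a faster slot to see any gain.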
LM:
As Sechs has pointed out, the difference is not nearly as huge going from SCSI to SSD as it is going from 7200 RPM drives to SSD.
I would not mind having a boot drive that does 1200 MB/sec and maxes out the bus on this machine...
Also, when you put in SSDs, as has been pointed out, the disadvantage of using non-enterprise boards comes into play: the board makers use chips that can't approach the interface's maximum speed.
Dell and Apple come to mind...