SSDs - State of the Product?

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
I'm not installing an OS and buying apps for an old computer. That would only be if I eventually build a new system. I was hoping for 6 Gbps SSDs to use on the controller, and maybe that will still be possible.
Maybe the Revo would be a good choice. Basically two SSDs and a RAID controller slapped together on a single PCI-E card.

Not as slick as the Z-Drive... but cheaper.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
Maybe that would be nice, but I need the other 6 ports of the controller for HDs. :pirate:
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
I'm hoping that this is a server... and then wondering why you'd then need SSDs in RAID 0.
 

BingBangBop

Storage is cool
Joined
Nov 15, 2009
Messages
667
I'm hoping that this is a server... and then wondering why you'd then need SSDs in RAID 0.

With an SSD, data transfer makes up a much larger share of the total time than with an HD, so it is much easier to get a speed increase from RAID.

Let's look at 64 KB blocks transferring at 100 MB/s, so each transfer takes 0.64 ms.

With an SSD the seek time is about 0.1 ms, so a single-drive file copy is a repeated seek + read + seek + write.

SSD: 0.1 ms + 0.64 ms + 0.1 ms + 0.64 ms = 1.48 ms to transfer 64 KB

For an average HD with a 100 MB/s transfer rate and a 15 ms seek time, the same operation takes:

HD: 15 ms + 0.64 ms + 15 ms + 0.64 ms = 31.28 ms to transfer 64 KB.

Now let's do a RAID 0 scenario for both, doubling the transfer speed to 200 MB/s.

RAID 0 SSD: 0.1 ms + 0.32 ms + 0.1 ms + 0.32 ms = 0.84 ms, a 76% speed improvement.
RAID 0 HD: 15 ms + 0.32 ms + 15 ms + 0.32 ms = 30.64 ms, a 2.1% speed improvement.

That's why people want to RAID SSDs!
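
For anyone who wants to play with the numbers, here is a minimal Python sketch of the same back-of-the-envelope model (it assumes the figures above: 64 KB blocks, 0.1 ms SSD seeks, 15 ms HD seeks, 100 MB/s per drive):

# Simple copy model from the figures above: each 64 KB block costs
# seek + read + seek + write; RAID 0 only shortens the transfer part.
BLOCK_KB = 64

def copy_time_ms(seek_ms, transfer_mb_s):
    transfer_ms = BLOCK_KB / transfer_mb_s   # e.g. 64 KB at 100 MB/s = 0.64 ms
    return 2 * (seek_ms + transfer_ms)       # seek + read + seek + write

ssd, ssd_r0 = copy_time_ms(0.1, 100), copy_time_ms(0.1, 200)
hd,  hd_r0  = copy_time_ms(15, 100),  copy_time_ms(15, 200)

print(f"SSD: {ssd:.2f} ms -> {ssd_r0:.2f} ms ({ssd / ssd_r0 - 1:.0%} faster)")
print(f"HD:  {hd:.2f} ms -> {hd_r0:.2f} ms ({hd / hd_r0 - 1:.0%} faster)")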
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,254
With an SSD, data transfer makes up a much larger share of the total time than with an HD, so it is much easier to get a speed increase from RAID.

Let's look at 64 KB blocks transferring at 100 MB/s, so each transfer takes 0.64 ms.

With an SSD the seek time is about 0.1 ms, so a single-drive file copy is a repeated seek + read + seek + write.

SSD: 0.1 ms + 0.64 ms + 0.1 ms + 0.64 ms = 1.48 ms to transfer 64 KB

For an average HD with a 100 MB/s transfer rate and a 15 ms seek time, the same operation takes:

HD: 15 ms + 0.64 ms + 15 ms + 0.64 ms = 31.28 ms to transfer 64 KB.

Now let's do a RAID 0 scenario for both, doubling the transfer speed to 200 MB/s.

RAID 0 SSD: 0.1 ms + 0.32 ms + 0.1 ms + 0.32 ms = 0.84 ms, a 76% speed improvement.
RAID 0 HD: 15 ms + 0.32 ms + 15 ms + 0.32 ms = 30.64 ms, a 2.1% speed improvement.

That's why people want to RAID SSDs!

It's not quite that linear, but it's pretty close. 0.2 ms is what SSD seeks usually test at.
RAID 0 usually doesn't scale to exactly double, but it's close enough for government work.

Where this really shows is with the KingSpec SSD that transfers data at about half the speed of the 7200 RPM drive it replaced (roughly 30 MB/s vs. 55 MB/s), but the seeks are 0.2 ms vs. 17 ms, and that makes the perceived increase in speed HUGE.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
I'm hoping that this is a server... and then wondering why you'd then need SSDs in RAID 0.

No, it is the same old system. I'm not sure how well the OS would work on the RAID 0. I was not planning on doing that, except for testing.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
With an SSD, data transfer makes up a much larger share of the total time than with an HD, so it is much easier to get a speed increase from RAID.

Let's look at 64 KB blocks transferring at 100 MB/s, so each transfer takes 0.64 ms.

With an SSD the seek time is about 0.1 ms, so a single-drive file copy is a repeated seek + read + seek + write.

SSD: 0.1 ms + 0.64 ms + 0.1 ms + 0.64 ms = 1.48 ms to transfer 64 KB

For an average HD with a 100 MB/s transfer rate and a 15 ms seek time, the same operation takes:

HD: 15 ms + 0.64 ms + 15 ms + 0.64 ms = 31.28 ms to transfer 64 KB.

Now let's do a RAID 0 scenario for both, doubling the transfer speed to 200 MB/s.

RAID 0 SSD: 0.1 ms + 0.32 ms + 0.1 ms + 0.32 ms = 0.84 ms, a 76% speed improvement.
RAID 0 HD: 15 ms + 0.32 ms + 15 ms + 0.32 ms = 30.64 ms, a 2.1% speed improvement.

That's why people want to RAID SSDs!

The average file size is about 500 MB, with some a few GB. I expect the access times not to be affected much by RAID 0, or is that wrong?
 

BingBangBop

Storage is cool
Joined
Nov 15, 2009
Messages
667
The average file size is about 500 MB, with some a few GB. I expect the access times not to be affected much by RAID 0, or is that wrong?

RAID 0 does not affect access time at all, only data transfer rates. Transfer rates increase because the data gets transferred in parallel across the drives.
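
For a rough sense of what that means for the ~500 MB files mentioned above, here is a small back-of-the-envelope sketch (Python; the 0.1 ms access and 200 MB/s per-drive figures are just illustrative assumptions):

# Big sequential files: total time ~= one access + size / (per-drive rate * drives).
# Striping shrinks the transfer term; the access term stays the same.
def file_time_s(size_mb, access_ms, drive_mb_s, n_drives=1):
    return access_ms / 1000 + size_mb / (drive_mb_s * n_drives)

single = file_time_s(500, 0.1, 200, 1)   # ~2.50 s for a 500 MB read
raid0  = file_time_s(500, 0.1, 200, 2)   # ~1.25 s across two drives
print(f"single SSD: {single:.2f} s, two-drive RAID 0: {raid0:.2f} s")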
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
I have no easy place left for the SSDs. The power supply cage is grounded, so I would not expect any interference, but are there any problems with sticking them on top of the power supply? :spiderman:
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
Performance is awful. :tdown: Some users have complained about the need to wipe it first. :cursin:
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
What do you mean which one? They are both the same model: Corsair Force 60. I have not opened the second package, but the product is not returnable according to the Egg. :crucified:
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
All things considered, the SandForce drive is quite impressive and nearly as fast as the X25-E. Unsurprisingly, RAID 0 does not help that much. I'm not sure where the bottleneck is. :porc:
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
Ugh, I screwed up that quote. :(

I never attempted the ICH RAID, having assumed that it was none too good. Does it need AHCI or other special settings?

I suspect that most benchmarks are too slow and not optimized for SSDs and RAID. Perhaps the sample sizes are too small. Good old WinBench gives the highest transfer results when set to 64 MB. ATTO varies from 360 MB/s write / 395 MB/s read to 300 MB/s write / over 500 MB/s read. HD Tach is awful, and HD Tune values increase with sample size; the max is ~355 MB/s at 8 MB.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,675
Location
Horsens, Denmark
Yeah, I have a strong suspicion that none of the benchmarking tools are actually accurate for SSDs, particularly SSDs in RAID. Seat-of-the-pants is fine for me.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
I always use a timer to measure the actual differences. The RAID 0 is up to 25% faster, but the single drive was already 10% slower than the X25-E. It is not a worthwhile gain in speed, but I can use the capacity. I only ran a couple of quick stitching tests. The times were hardly any different between the SandForce RAID 0 and the X25-E. The CPU spends too much time doing nothing for some reason.

It's high time for Intel to leapfrog the others with a new 6Gbps design. :reindeer:
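
For anyone who wants to do the same kind of stopwatch test, a minimal sketch (Python; the file paths are made-up placeholders, and the OS file cache can skew results unless the test file is larger than RAM):

# Time a real file copy instead of trusting a synthetic benchmark.
# SRC and DST are hypothetical paths; point them at the drives under test.
import os
import shutil
import time

SRC = r"D:\test\stitch_input.tif"   # placeholder source file
DST = r"E:\test\copy_out.tif"       # placeholder destination on the other drive

start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

size_mb = os.path.getsize(SRC) / 1e6
print(f"{size_mb:.0f} MB in {elapsed:.2f} s = {size_mb / elapsed:.0f} MB/s")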
 

Gilbo

Storage is cool
Joined
Aug 19, 2004
Messages
742
Location
Ottawa, ON
The bottleneck is generally in the software.

Desktop software generally still does all IO single-threaded, at a queue depth of 1. It requests data, waits to receive it, requests more. This provides better performance with rotating magnetic storage, because it avoids thrashing the disk, but it means that the potential performance advantage of SSDs for desktop tasks remains largely unrealized.


The most important metric ends up being access time, since turning around those queue-depth-1 requests as fast as possible is the only way to get the application to request more data so it can do more work. This is why a single SSD is generally just as fast as a RAID 0 of several.
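
As a rough illustration of the queue depth point, here is a small Python sketch (the file names are hypothetical): the serial loop is what most desktop software effectively does, while the threaded version keeps several requests outstanding so the SSD or array actually has something to work on.

# Queue depth 1 (request, wait, request...) versus several outstanding requests.
# FILES are placeholder test files that must already exist on the drive.
import time
from concurrent.futures import ThreadPoolExecutor

FILES = [f"scratch_{i}.bin" for i in range(8)]   # hypothetical test files

def read_file(path):
    with open(path, "rb") as f:
        return len(f.read())

t0 = time.perf_counter()
serial_total = sum(read_file(p) for p in FILES)          # one request at a time
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_total = sum(pool.map(read_file, FILES))     # several in flight at once
t_parallel = time.perf_counter() - t0

print(f"serial: {t_serial:.2f} s, threaded: {t_parallel:.2f} s")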

A lot of RAID controllers will actually increase the access time of SSDs. The 9211 doesn't, but its big brother the 9260 does, as do all the Adaptec models (horrendously so). The 9211 is up there with the Areca 1231 at the top of the heap for that particular metric, and is as good as you're going to get for desktop use.

You should use it only as a pass-through though. RAID the disks in Windows software RAID and you'll see better performance.


Photoshop's scratch disk implementation and Lightroom's cache are both particularly unoptimized for SSDs at the moment. What used to help with mechanical disks now hurts performance.

All told, it's a disappointing situation for those of us looking to improve IO performance.


Incidentally, the SandForce drives are definitely the SSDs to get for RAID. Their reserved space and excellent internal garbage collection allow them to keep working at 100% performance even without TRIM (see this for a comparison to the Crucial C300 and this for the X25-M 160GB).
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
I thought Native Command Queuing was supposed to allow the OS to issue more than one IO request at a time. NCQ has been around for 5+ years. I would have thought newer OSs and service packs would have caught up by now.
 

Gilbo

Storage is cool
Joined
Aug 19, 2004
Messages
742
Location
Ottawa, ON
I thought Native Command Queuing was supposed to allow the OS to issue more than one IO request at a time. NCQ has been around for 5+ years. I would have thought newer OSs and service packs would have caught up by now.

The application software can't tell the difference. The OS will take advantage of NCQ, but you'll usually need multiple applications doing simultaneous IO to raise the queue depth.


Databases are still, after all this time, about the only pieces of software written to do multithreaded IO.

A big annoyance for me is Adobe Lightroom. Even though its CPU processing is multithreaded, it clearly pulls data from the disk in a single-threaded, linear manner like I described above. My four CPU cores end up sitting around at about 30% per core (an old-school Q6600), while my SSD is only pushing ~25 MB/s according to Windows' Resource Monitor. Even with worst-case random 4K IO, my SandForce drive will happily push twice that if asked, but Lightroom clearly isn't asking.

And this is with a largely sequential workload (generating 1:1 thumbnails from 21MB RAW files). My SandForce could likely push over 200MB/s into the Q6600 if Lightroom didn't bottleneck it. :(


EDIT: It's worth mentioning that the main reason things still happen this way is that software writers are wary of thrashing mechanical disks and causing their application to perform poorly on normal HDDs. They are thinking of performance. Until SSDs have been ubiquitous for a few years, I don't think the software will get rewritten.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
The bottleneck is generally in the software.
That is how it has been forever and I was hoping it would be better by now. Apparently not. :(

You should use it only as a pass-through though. RAID the disks in Windows software RAID and you'll see better performance.

I tried Windows RAID 0 earlier, but it is a bit slower. It is also impractical since two OSes need to access the array.

Eventually I'll repurpose the drives for other usage. One 60GB is very good and sufficient for most OS/apps. It's a lot better than the 80GB X25-M G2 that I have XP64 on right now.
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
Yeah, I have a strong suspicion that none of the benchmarking tools are actually accurate for SSDs, particularly SSDs in RAID.
I was reading the other day that there are problems using benchmarking software on SandForce-based SSDs because: (1) they have no cache and therefore react differently than cached controllers; and (2) they compress data before writing, so compressibility affects performance.

From that, one may guess that if you are working with a lot of already-compressed data, a SandForce-based drive won't perform particularly better (and perhaps worse) than an otherwise similar Indilinx-based drive with a cache.
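
One simple way to see the compression effect is to time equal-sized writes of zero-filled (highly compressible) and random (incompressible) data; a rough sketch, assuming a hypothetical test path on the drive in question:

# Compare write speed for compressible vs. incompressible data on a
# compressing controller. TEST_PATH is a placeholder on the drive under test.
import os
import time

TEST_PATH = r"E:\ssd_compress_test.bin"   # hypothetical test file
SIZE = 256 * 1024 * 1024                  # 256 MB per run

def write_speed_mb_s(data):
    start = time.perf_counter()
    with open(TEST_PATH, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())              # push it past the OS write cache
    return SIZE / 1e6 / (time.perf_counter() - start)

zeros = b"\x00" * SIZE                    # compresses to almost nothing
random_data = os.urandom(SIZE)            # essentially incompressible

print(f"zeros  (compressible):   {write_speed_mb_s(zeros):.0f} MB/s")
print(f"random (incompressible): {write_speed_mb_s(random_data):.0f} MB/s")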
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,267
Location
USA
My impression is that 240 MB/s reads and 200 MB/s writes are typical for a single drive. The 285/275 figures are based on a best-case scenario with compressible data. In any event, the "Force" is far faster than the OCZ Vertex with the Indilinx controller that I bought about 6 months ago.
 