SSD in RAID

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
I'm planning a small business server(s). It will host databases, so it will have SSDs. Because many of us are a trifle edgy about SSD reliability, there will need to be some practical way to automatically or remotely reconstruct a working machine after an SSD failure.

Unfortunately, the server will be afflicted with Windows, so Linux RAID is not an option (otherwise, this discussion could end right here, because it is self-evidently the only sensible way to construct an SSD RAID).

I really can't see the point in a hardware RAID controller for this application. Paying list price (as I would have to do) would increase the cost of the server by up to 50%. It would also introduce an exotic (not quick and easy to replace) component that may actually fail itself.

So I'm actually considering fake RAID. I suppose I should also be considering Windows RAID, but the admin tools from Intel and AMD (which I've been reading up on) look pretty good to me. Feel free to chime in anytime.

Originally, I wanted to keep it simple and have monolithic SSDs, either one with frequent snapshots or two in a RAID 1 mirror. Two problems:
1. Large SSDs are not widely stocked and are priced at a premium
2. If/when one fails, that's a sh*tload of money to fork out if you need a quick replacement.

Although the simplicity of a RAID-on-a-card is attractive, it fails both the above tests and also puts all your eggs in one basket unless you buy two. It's also not possible to expand the capacity without starting from scratch.

To satisfy the criteria, I believe I need a minimum of 4 drives:

1. Boot drive, >=40GB. I have another means of recovering from a failure here, so this one doesn't need to be mirrored. Phew. I'd prefer an SSD, but after my recent experience, I may just settle for an ordinary HDD; can anyone see how that would affect server performance, as opposed to desktop performance?

2-4. Either RAID 0 of two drives + a hot spare, or a 3-drive RAID 5. I realize that the RAID 5 will hurt performance, but if I assume at least one drive will fail, it might be the only safe option. With RAID 5, we can remotely drop a drive from the array to erase it (in lieu of TRIM), then rebuild the array, all without missing a beat.

We can still do this with RAID 0 outside business hours because there will be regular snapshots to recover from. A bit more hassle making certain there's more than one valid backup before starting.

Single drive failure means seamless continuation for RAID 5, rebuild from backup required for RAID 0. Still remote, though, and I have alternative procedures to tide users over until the rebuild is finished.

You can also add one or two drives to this last configuration to increase capacity at a later date. I'm focusing on 120GB drives, yielding a capacity of 240-480GB. Hopefully, much larger SSDs will eventually become commonplace.
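For what it's worth, the capacity/redundancy arithmetic for those layouts is easy to sanity-check (a toy calculation, assuming the 120GB drives mentioned; `raid_capacity` is just a hypothetical helper, not any vendor's tool):

```python
# Toy calculation of usable capacity and fault tolerance for the
# candidate layouts, assuming 120 GB data drives.

def raid_capacity(level: str, drives: int, size_gb: int = 120) -> tuple:
    """Return (usable_gb, drive_failures_survivable) for simple layouts."""
    if level == "raid0":
        return drives * size_gb, 0          # stripe: no redundancy
    if level == "raid1":
        return size_gb, drives - 1          # mirror: n-1 failures survivable
    if level == "raid5":
        return (drives - 1) * size_gb, 1    # one drive's worth of parity
    if level == "raid10":
        return drives // 2 * size_gb, 1     # at least one (one per mirror pair)
    raise ValueError(level)

# RAID 0 of two drives (a hot spare speeds recovery but adds no capacity):
print(raid_capacity("raid0", 2))   # (240, 0)
# 3-drive RAID 5, expandable later by adding drives:
print(raid_capacity("raid5", 3))   # (240, 1)
print(raid_capacity("raid5", 5))   # (480, 1)
```

RAID 5's usable capacity grows linearly as drives are added, which is what makes the later expansion to 480GB possible without starting over.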
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
There are four realistic controller options for SSDs: Intel, Sandforce, Marvell and Samsung (I don't know which controller Samsung is using). AFAIK, performance bites the dust once you have written the drive's full capacity. According to Anand, Crucial's Marvell-based SSDs are particularly crippled in this respect. Otherwise, they have the best read performance, particularly the 256GB model under SATA 3.0.

Apparently, the story's pretty similar for Intel. The new Samsung is supposed to be better, but I can't source it.

That leaves Sandforce, which is supposed to be able to deal with a 'full' drive without catastrophic breakdown in performance. It's also claimed to deliver maximum life for the MLC NAND chips, although this is usually academic.

An obvious winner, particularly for RAID, except for the fact that every brand of Sandforce drive has a legion of disgruntled customers complaining that their drives intermittently disconnect and/or fail completely. Only 5-10% based on Newegg feedback, but definitely there and disturbingly consistent. There is a widely held suspicion that the problems are attributable to weaknesses in the firmware, every variant of which is apparently created by Sandforce.

Other controller types also get their share of grief. Crucial gets a caning, and this is on products worth $500-600 a pop. Even Intel does not smell of roses.

So what the hell are you supposed to do? From where I sit, reliability is probably not as good as Maxtor, and maybe as bad as IBM in their darkest days. I need something that will work for 5 years, not 5 weeks.
 

Will Rickards

Storage Is My Life
Joined
Jan 23, 2002
Messages
2,012
Location
Here
Website
willrickards.net
Perhaps you should look at pricier non-consumer SSDs, from Fusion-io?
They probably have reliability down a bit better, as their customers aren't consumers.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Ummm, Fusion-io charges somewhere between $24 and ~$50 per gigabyte. Dell sells a 640GB version for $15,431.

I'm proposing to spend about $2 per gigabyte. :eek:
 

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL
What is wrong with the Intel SSDs? I thought they had a good reputation? Maybe I'm out of the loop on this.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,297
Location
I am omnipresent
I've contemplated building with 40GB Intel SSDs to get affordable high performance I/O. I'd definitely not want to do that for a live business app unless I had a very good backup and recovery plan in place.

Windows SoftRAID can be a PITA to resync in RAID5, but a degraded RAID5 on SSDs should still be pretty fast. The other problem is that it needs to be watched fairly closely, since Windows doesn't do a very good job of indicating that a drive has dropped or there are sync errors. ICH10 at least has some better monitoring tools on server systems.

I don't think I'd be all that troubled if a mid-sized file server had to limp through a RAID resync, but without knowing more about your workload I'm not sure putting a database on a Windows SoftRAID5 is a good idea.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,742
Location
Horsens, Denmark
Intel had some firmware issues on some drives a while ago. That was a while back, and they recognized and resolved the issue fairly quickly.

Not long ago I built a DB server with a pair of RevoDrive X2 boards in Windows RAID-1 for data and an Intel 80GB SSD for the OS. Works fine, no issues whatsoever, super quick. What I like about this configuration is that the controller is also redundant.

I wouldn't have bothered with putting the redundancy on SSD at all (that is what backups are for), but this client insisted that the array would always be under massive load 24x7, so the snapshots would always be significantly behind. That part isn't my problem, thank Buddha.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,285
DD:
How many OCZ Vertex Turbos have you used, with what failure rate?

I really see only one drive for this sort of thing. That would be the Intel X-25M.

Not even SP1 messed them up ;-). Two 160GB X-25Ms still going strong in RAID 0, using the onboard RAID on a Gigabyte X58 board.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
What is wrong with the Intel SSDs?

Consumer-level SSDs have a general problem: once they have written to all their blocks (wear leveling means they avoid re-using blocks for as long as practicable), blocks have to be erased before writing, which is many times slower. If some data has been deleted from a block (blocks are quite large), the remaining data may also have to be moved in order to reclaim the space. You'd have to read up on that aspect.

TRIM enables the OS (Windows 7 or Windows Server 2008) to tell the drive to scavenge no-longer-used blocks by erasing them. This is why all the benchmarks you see are performed immediately after a TRIM (or a full drive erasure). The more the benchmarks are run, the slower the drive gets, although Windows 7 should stop it getting too bad by eventually issuing TRIM commands. For the light use you typically see on a desktop, that's a perfectly valid assumption as long as the drive has plenty of free space.

But what if you're not running Windows 7, or the drives are in a RAID? Once the drive has used up all its fresh blocks, subsequent writes have a lot of work to do. Anand likes to call this "backed into a corner". There's supposed to be background garbage collection, but it's very mild. If you do enough writes, most consumer-grade SSDs will hit the wall and stay there.

Intel's X25E ($$$) minimizes this problem, as does anything with a Sandforce 1200 or 1500 controller - probably because their original target market was enterprise computing. You can write to these day in, day out and performance should remain satisfactory for up to 5 years.

Intel's X25M does not have this capability, so given enough load over a long enough time, it will slow down significantly - unless you drop it out of RAID so you can send a TRIM command to it.
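If it helps, the "backed into a corner" effect can be illustrated with a toy model (invented numbers, nobody's actual firmware; real controllers are far more sophisticated):

```python
# Toy model of why a "full" SSD slows down: pages can be written once,
# but space is reclaimed only by erasing a whole block (numbers invented).

PAGES_PER_BLOCK = 64
WRITE_COST, ERASE_COST = 1, 20   # arbitrary relative time units

class ToySSD:
    def __init__(self, blocks: int):
        self.free_pages = blocks * PAGES_PER_BLOCK
        self.stale_blocks = 0     # blocks full of deleted data, not yet erased
        self.cost = 0

    def write_page(self):
        if self.free_pages == 0:
            if self.stale_blocks == 0:
                raise RuntimeError("drive genuinely full")
            # No fresh pages left: an erase lands in the write path.
            self.stale_blocks -= 1
            self.free_pages += PAGES_PER_BLOCK
            self.cost += ERASE_COST
        self.free_pages -= 1
        self.cost += WRITE_COST

    def trim_block(self):
        # TRIM tells the controller this block's data is dead, so it can
        # be erased in the background instead of during a later write.
        self.stale_blocks -= 1
        self.free_pages += PAGES_PER_BLOCK

fresh = ToySSD(blocks=10)
for _ in range(640):              # fill every page once
    fresh.write_page()
print(fresh.cost)                 # 640: fast, one unit per write

full = ToySSD(blocks=10)
full.free_pages, full.stale_blocks = 0, 10   # simulate a fully-written drive
for _ in range(640):
    full.write_page()
print(full.cost)                  # 840: erases now stall the writes
```

The same number of writes costs about 30% more once every block has been touched, and on real hardware the penalty is far worse because erases are orders of magnitude slower than page writes.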
 

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL

I read the AnandTech article about this a while back, as well as some others. As I remember it, the Intel drives (X80) performed the best when full, or when any write operation would need to do an erase first. I believe that article was written before the Sandforce 1200 or 1500 controllers were released.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,285
My observations:
First CrystalDiskMark benchmark, on the newly installed array last year:
cRYSTALDISKMARK302Xx-25m160GBX2.jpg


Here is a test run today:
2160gigX-25Mraid0test32311.jpg


Comments?
Conclusions?
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Random write performance (4kB) has halved, from 29.08 to 15.73 MB/s.

I take it that's the two X-25Ms in RAID 0?
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,285
Yes. I'm kind of curious. For a server, isn't the load usually very heavily in favor of Reads?

Also, keep in mind that Crystal Disk Mark has a bit of variation from version to version.

The last test was done with the drive 29% full, vs. 49% for the first test.
So perhaps it does read faster the fuller the drive? That would be consistent with timwhit's comments.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,931
Location
USA
Yes. I'm kind of curious. For a server, isn't the load usually very heavily in favor of Reads?

For multiple DB servers, this may not be true. Depending on the DB, 4K performance may be of greater importance depending on the data usage.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
I now have 2 Crucial C300 64GB SSD drives. I also have a 3Ware 9650se RAID controller.
I have not found a way to get the 9650se to work with the SSDs. Sometimes it sees both SSDs; other times it sees both but says one is degraded. I have the latest firmware and drivers loaded.
One of the weirdest things: after I created a RAID 0 array and confirmed that I wanted to do this (as it would remove everything from the SSDs), I could still boot from the SSD that had an OS on it. This was after trying to install Server 2008 on the RAID 0 array.

Any suggestions?
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,285
HMMM.
I'm using two Vertex Turbos on the 9550 with no problems, in RAID 0, and prior to that was running 3 in RAID 0.

I'd test each drive. Sounds like one of them is not working. The Vertex Turbo that went bad was seen by Windows 7 Ultimate, but would not format.

Try different cables, and try them on different channels?

Then RMA the degraded drive.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
:thumbleft: It is a much better idea than the 3 vomit SSDs in RAID 0. :)

Well, yes, but:

1. I have to pay a hefty premium for Intel SSDs, while in the U.S. they are the same price or cheaper than Sandforce drives.
2. Intel X25-Ms are frequently in short supply here, particularly in useful sizes like 80GB.
3. I assume the G2 is now obsolete, so availability after March is unknown. Based on the debacle with s1155/s1156, that's a realistic worry.
4. Mirroring is logistically difficult because of the very small drive capacity.

Having said all that, four X25-M in RAID 10 is what I'm leaning towards. But I wish I knew why Intel decided to throw the design in the bin ...
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,285
Might be something like: come out with a quality product at a low profit margin to get customer acceptance and a following, then follow up with a cheaper, higher-profit item at near the same price? Just guessing...
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
I attached one SSD to the 9650se, then ran the AS SSD benchmark. The speeds were one third of what they were before I attached the SSD to the 9650se.
I need a new controller :(
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
I guess it's possible that you don't have enough motherboard SATA ports, but I'd very strongly encourage you to rethink your need for an add-in controller. I've been doing a lot of research on this lately, and for RAID 0 or RAID 1 at least, there really doesn't seem to be any point supplanting current Intel or AMD embedded controllers.

There's even a school of thought that says that Windows soft RAID is just as good, or even better (due to the massive surplus power of modern multi-core CPUs, I suppose).
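That school of thought is easier to believe once you see how simple RAID 5 parity actually is: it's a byte-wise XOR, which a modern CPU core can churn through far faster than any SATA drive can accept writes. A minimal sketch (pure Python, so much slower than a real driver's vectorized XOR, but the principle is the same):

```python
# RAID 5 parity is a byte-wise XOR across the data stripes; if any one
# stripe is lost, XOR-ing the survivors reconstructs the missing data.

def xor_parity(stripes):
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            parity[i] ^= b
    return bytes(parity)

d0, d1 = b"DATA", b"MORE"        # two data stripes of one stripe-set
p = xor_parity([d0, d1])         # the parity stripe written to the third drive

# Simulate losing the drive holding d1: XOR the remaining members.
recovered = xor_parity([d0, p])
print(recovered)                 # b'MORE'
```

The same property works in any direction: losing d0 instead, `xor_parity([d1, p])` gives back d0, which is all a degraded RAID 5 read is doing.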

I'd be really, really interested if you are able to find time to RAID your C300 pair on your native ports. It's a configuration that I'm considering.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
I have put the SSDs on the ICH9R controller on my motherboard. I have done a clean install of Server 2008. Still setting things up.
So far it is working fine.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
Performance on the ICH9R is great. Still need to run ATTO, but the AS SSD overall score has almost doubled from a single drive, which is what you would expect.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
I removed the SSDs from the RAID array. Not having the TRIM function was not worth the extra performance. In real world use, it's hard to tell the difference anyway.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Thanks for your efforts, Bozo. It looks like you proved that there's nothing wrong with your SSDs, as well as getting good results from even a previous generation onboard controller.

I'm not sure that I still believe in the need for TRIM, because I suspect idle garbage collection is sufficient in most real world situations (not sure about Sandforce though). Anand hammered a C300 with writes, waited just 3 hours, then retested. Unsurprisingly, it had only recovered slightly. :roll:

I've been researching SSD performance tests, including Tom's efforts. I think a more realistic load would be measuring sequential reads while the drive has to perform small random writes; IMO this is what brings a server to its knees. Given that seeks are now irrelevant, I figure you can probably get a good indication just by focusing on 4kB random write results.

By themselves, I don't think sequential read tests are terribly useful. All second generation SSDs can manage extremely high results when not asked to do anything else at the same time. And you can always stripe them to boost sequential throughput if your application really needs it. Heck, you can get the same result with enough spinning disks.
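A crude version of that mixed-load test is easy to improvise (a sketch only: the file name and sizes are invented, OS caching will flatter the numbers, and you'd want to point it at the device under test, not your system drive):

```python
# Time a sequential read while a background thread issues 4 kB random
# writes to the same file - a rough stand-in for a loaded server.

import os
import random
import threading
import time

PATH, FILE_SIZE, BLOCK = "scratch.bin", 64 * 1024 * 1024, 4096

with open(PATH, "wb") as f:          # build a scratch file to hammer
    f.write(os.urandom(FILE_SIZE))

stop = threading.Event()

def random_writer():
    with open(PATH, "r+b") as f:
        while not stop.is_set():
            f.seek(random.randrange(FILE_SIZE // BLOCK) * BLOCK)
            f.write(os.urandom(BLOCK))
            f.flush()
            os.fsync(f.fileno())     # force each write out to the device

writer = threading.Thread(target=random_writer, daemon=True)
writer.start()

start = time.perf_counter()
with open(PATH, "rb") as f:          # sequential read under write pressure
    while f.read(1024 * 1024):
        pass
elapsed = time.perf_counter() - start
stop.set()
writer.join()
print(f"sequential read of {FILE_SIZE >> 20} MiB took {elapsed:.2f}s")
os.remove(PATH)
```

Run it once with the writer thread disabled and once with it enabled; the gap between the two read times is the figure the glossy sequential benchmarks never show you.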

On this basis, I don't think the Intel X25-M G2 is a great choice for a normal file or database server, and the new 510 doesn't change that. The upcoming X25-M G3, on the other hand, will probably be at the top of everyone's list.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
I would think TRIM would be more important on the smaller SSDs with less free space to start with. Mine are 64GB.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
After reading yet more benchmarks, I'm pretty much sold on the Crucial C300 - preferably the 256GB model because of its much higher write throughput. It has by far the best 4kB/8kB performance of any current SSD.

There's also the distinct advantage that I can buy them directly from Crucial U.S. Not only does this remove the perpetual annoyance of trying to find them in stock somewhere, but I can return them direct to Crucial when there's a problem. Local forums suggest a warranty turnaround of 3 weeks, which is about the same as trying to return something through a reseller here. :(
 

Will Rickards

Storage Is My Life
Joined
Jan 23, 2002
Messages
2,012
Location
Here
Website
willrickards.net
I like the C300 too, but I'm waiting till they resolve the firmware issue apparent in version 006. Just review their online forums. If you get a drive with firmware version 002 you should be fine. You can't roll back the firmware version.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Thanks, that's the sort of feedback I was worried about. Does this mean that Bozo is still running version 002?
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
No, I'm running version 006. I have been reading the Crucial forums also. I didn't see anything bad about V006 of the firmware though. But I didn't read every thread either.

The AS SSD test software gives me a total score of 496. When I had the two drives in RAID 0, the score was 819.
 