Major Screwup

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
I just clicked "Convert to Basic Drive" on one of the drives in a Dynamic RAID-0 array. When it said "Are you Sure? This will destroy all data. Don't be Stupid." I clicked yes.

I would really like to get that data back. Is Get Data Back the tool for this job? I'm starting to Google now, but am in a bit of a panic about it at the moment.

Yes, there is a backup. The backup is part of a 13-drive RAID-5 that is 2% into a rebuild. I'd really like to get everything up and running before that rebuild finishes.

Thanks for any advice!
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,879
Location
USA
I think so (I've not tried it)...Get Data Back offers a RAID version with a trial, so you can see whether it will work before buying. I hope it works for you.

Trial Limitation: You will be able to determine if RAID Reconstructor can reconstruct your array before you need to purchase a license. The demo version does not allow for any type of output. If RAID Reconstructor is unable to determine the parameters due to file system damage, a proprietary order, or any number of other reasons, you should use our RaidProbe service.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
I have RAID Reconstructor running now, but it can't find the correct settings. I thought MS Dynamic SoftRAID started at sector 63, but it isn't having it.
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Didn't Disk Management require you to delete all existing volumes before offering the option of converting back to basic? You would have to delete the RAID 0 volume, then convert?

I would suggest some sleep before any further operation.

I haven't heard of recovery from such an event. At least you have the RAID 5 (though with 13 drives there's a good chance of a further failure during the long rebuild; fingers crossed, knock on wood).
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
The initial problem to fix is getting the dynamic partition data back for the drive that was converted to basic. My impression is that this portion (1MB stored at the end of the disk) is deleted upon conversion back to basic.

Only after the disk was somehow successfully reconverted back to dynamic could the soft RAID 0 then be reconstructed.

http://articles.techrepublic.com.com/5100-10878_11-5034875.html
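
If the conversion only rewrote the partition table, a quick scan might tell you whether that LDM database is still sitting in the tail of the disk. A minimal Python sketch, assuming you first take a raw dd image of the converted drive ("disk.img" is a placeholder name); the check relies on the "PRIVHEAD" ASCII signature that the LDM private header carries:

# Scan the last 1MB of a raw disk image for leftover LDM metadata.
# Finding the "PRIVHEAD" magic suggests the dynamic-disk database
# survived the conversion; its absence suggests it was wiped.
MAGIC = b"PRIVHEAD"
TAIL = 1024 * 1024                  # LDM reserves roughly the last 1MB

with open("disk.img", "rb") as f:   # hypothetical dd image of the drive
    f.seek(0, 2)                    # jump to end of file to learn its size
    size = f.tell()
    f.seek(max(0, size - TAIL))
    tail = f.read()

pos = tail.find(MAGIC)
if pos >= 0:
    print("PRIVHEAD found %d bytes from the end; LDM data may survive" % (len(tail) - pos))
else:
    print("no LDM signature in the last 1MB; metadata likely wiped")

Work on an image, never the raw device, while you're experimenting.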
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
Didn't Disk Management require you to delete all existing volumes before offering the option of converting back to basic? You would have to delete the RAID 0 volume, then convert?

It notified me that all data would be destroyed, and I clicked yes. However, the operation completed far too quickly to have actually rewritten all the sectors, so I suspect the data is still there. The other drive is still dynamic and "broken".

I would suggest some sleep before any further operation.

Lovely idea, but I am a one-man show here, and work starts in about three and a half hours. Restoring from backup now.

I haven't heard of recovery from such an event. At least you have the RAID 5 (though with 13 drives there's a good chance of a further failure during the long rebuild; fingers crossed, knock on wood).

I'm actually copying data off the array during the rebuild. Slow as hell but I need it soonest. No problem, it's only a couple TB :(
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,091
Location
I am omnipresent
In RAID Reconstructor there are some settings you can play with to determine the interleave settings on the drives. If there were only two drives in the array then there are only two possible drive orders, but each additional member drive multiplies the possibilities.

Basically, you have to do a full recovery to see if the interleave settings you picked were actually correct.
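
To make that concrete, here's a rough Python sketch of the kind of brute-force search such a tool performs; the in-memory images, the sector-63 start, and the NTFS check are my assumptions for illustration, not how RAID Reconstructor actually works internally:

from itertools import permutations

STRIPE_SIZES = [16 * 1024, 32 * 1024, 64 * 1024, 128 * 1024]
START_SECTOR = 63                    # where this array supposedly starts
SECTOR = 512

def reassemble(members, stripe, length):
    """Interleave stripe-sized chunks from member images, round-robin."""
    out = bytearray()
    i = 0
    while len(out) < length:
        member = members[i % len(members)]
        pos = (i // len(members)) * stripe
        chunk = member[pos:pos + stripe]
        if not chunk:                # ran off the end of a member image
            break
        out += chunk
        i += 1
    return bytes(out)

def looks_like_ntfs(volume):
    return volume[3:11] == b"NTFS    "   # OEM ID in an NTFS boot sector

def search(images):
    """Yield (drive order, stripe size) combos whose reassembly looks sane."""
    for order in permutations(images):
        trimmed = [img[START_SECTOR * SECTOR:] for img in order]
        for stripe in STRIPE_SIZES:
            if looks_like_ntfs(reassemble(trimmed, stripe, 64 * 1024)):
                yield order, stripe

With two members and four candidate stripe sizes that's only eight combinations to test; with n members it's n! orderings times the stripe candidates, which is why the search blows up on bigger arrays.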
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
I was able to pick the correct drive order, confirmed that the array is supposed to start at sector 63, and confirmed the stripe size, but it still wouldn't see anything.

I gave up, started the restore from backup, sent an e-mail to 100+ people saying the primary shared drive would be back up around noon, and went to bed.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
This doesn't help your current situation, but maybe it's time to ask about hiring an assistant?

It has been recognized all around that I need a staff, but the money simply isn't there at the moment. The best I can do is keep my head down and dodge the pay cuts.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
I didn't want to kick you while you were down, but don't you think 13 drives is way too many for safety with RAID-5?

I was going to suggest RAID-6 at least, but then I read this:

Why RAID 6 stops working in 2019

So, it's possible to show mathematically that past a certain storage size, the chances of data loss approach 100%. Ouch. Sounds like RAID-10 or forget it for everything, not just databases.

What's particularly interesting is that there is an order of magnitude difference in error rates between 5400 and 7200 rpm drives. I checked Samsung specs, and it's true for them as well as Seagate and WD. So there is a reason (other than being a cheapskate) to buy "Eco" drives.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
Thirteen drives in RAID-5 is reliable enough for what's asked of a 2nd-tier backup solution, provided you have a hot spare in the system. From the Slashdot comments on the article:

The article assumes that when a drive within a RAID5 array encounters a single-sector failure (the most common failure scenario), the entire disk has to go offline, be replaced, and be rebuilt.
That is utter nonsense, of course. All that's needed is to rebuild the single affected stripe of the array to a spare disk. (You do have spares in your RAID setups, right?)
As soon as that single stripe is rebuilt, the whole array is in a fully redundant state again - although the redundancy is spread across the drive with a bad sector and the spare.
Even better, modern drives have internal sector remapping tables, and when a bad sector occurs, all the array has to do is read the other disks, calculate the sector, and WRITE it back to the FAILED drive.
The drive will remap the sector, replace it with a good one, and tada, we have a working array again. In fact, this is exactly what Linux's MD RAID5 driver does, so it's not just a theory.
Catastrophic whole-drive failures (head crash, etc.) do happen too, and there the article would have a point - you need to rebuild the whole array. But these are a couple of orders of magnitude less frequent than simple data errors. So no reason to worry again.
*sigh*
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
That Slashdot poster has no idea what they're talking about. The problem occurs when a UBE is encountered during a rebuild: with one drive already gone there's no extra parity data left to calculate what the bit from the drive with the UBE should be. A good controller will continue rebuilding and log the error, but you've lost data, and it may not be trivial to determine which file was affected.
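
A toy illustration of that, with single-byte "drives":

# RAID-5 parity is the XOR of the data blocks, so ONE missing block per
# stripe is recoverable. A dead drive plus a UBE on a survivor leaves
# two unknowns and only one parity equation.
d1, d2, d3 = 0b1010, 0b0110, 0b1100
parity = d1 ^ d2 ^ d3              # computed when the stripe is written

# Drive 1 dies: its data comes back from the survivors plus parity.
assert d1 == parity ^ d2 ^ d3

# Drive 2 throws a UBE mid-rebuild: parity ^ d3 only yields d1 ^ d2,
# and nothing on the array can separate the two values.
print(format(parity ^ d3, "04b"), "= d1 ^ d2, irrecoverable")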
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
Thanks Stereodude, you saved me the hassle. The math looks good and provides particularly valuable info for the members of this forum.

You can find guidelines on the web that recommend limiting RAID-5 to some arbitrary number of drives, e.g. 10. Even that would make me edgy; I lack Davin's chutzpah. :p
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
So the odds of losing one stripe of data (corrupting one file) are greater than 50% if the volume size is greater than 12TB? And if the volume is less than half full, that makes it 25%? I'm cool with that. ;)

I'm presently downsizing all my RAID-5 arrays to 5 drives; they should rebuild quickly enough.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,305
Location
USA
So the odds of losing one stripe of data (corrupting one file) are greater than 50% if the volume size is greater than 12TB? And if the volume is less than half full, that makes it 25%? I'm cool with that. ;)

I'm presently downsizing all my RAID-5 arrays to 5 drives; they should rebuild quickly enough.

The odds of losing data are inversely proportional to the number and proximity of backups. ;)
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
The odds of losing data are inversely proportional to the number and proximity of backups. ;)

Nearly. You need to factor in the odds of the backup systems failing as well. I ended up with a setup where the farther a tier is from production, the less reliable it is:

Production: RAID-10
Backup: low disk count RAID-5
Secondary backup: high disk count RAID-5
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
So the odds of losing one stripe of data (corrupting one file) are greater than 50% if the volume size is greater than 12TB? And if the volume is less than half full, that makes it 25%? I'm cool with that. ;)

I'm presently downsizing all my RAID-5 arrays to 5 drives; they should rebuild quickly enough.
That depends on the UBE rate for the drive. Some drives are rated at 1 error for every 10^15 bits read, others at 1 for every 10^14 bits read. With the 10^14 drives, once you have an 11.37TB RAID-5 array you're at 100% odds of encountering a UBE, based on an oversimplification of the specs. In reality you might not see it, because you're not reading 10^14 bits from any single drive; you're reading a total of 10^14 bits across all the drives. Regardless, the odds are not in your favor.
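
A back-of-envelope Python version of that, using the same oversimplification (every bit read is an independent trial at the spec'd rate):

def p_ube(array_tb, bits_per_error=1e14):
    """Chance of at least one UBE while reading an array end to end."""
    bits_read = array_tb * 1e12 * 8        # decimal TB to bits
    return 1 - (1 - 1 / bits_per_error) ** bits_read

# 1e14 bits is 12.5 decimal TB, i.e. about 11.37 TiB, hence that number.
for tb in (6, 12, 24):
    print("%2d TB read: %3.0f%% (10^14 drives), %3.0f%% (10^15 drives)"
          % (tb, 100 * p_ube(tb), 100 * p_ube(tb, 1e15)))

Which prints roughly 38%/5%, 62%/9%, and 85%/17%; the "100%" you get from naively dividing the spec is really an expected value of one error, not a certainty.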
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
At the moment all my RAID-5 arrays are 1.5TB Samsungs, five drives each, so 6TB. I was always under the impression that outright losing a second drive before the array recovers was the biggest risk with RAID-5.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,305
Location
USA
How much do you trust the specs and how homogeneous are the unrecoverable errors in a small population? I would not trust them farther than I could throw the drive.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
How much do you trust the specs and how homogeneous are the unrecoverable errors in a small population? I would not trust them farther than I could throw the drive.

The thing you need to keep in mind is that nothing is absolute. It is the layering of different kinds of solutions that provides true redundancy. If I had two identical arrays of identical drives, a miscalculation of reliability would have far worse consequences.
 

udaman

Wannabe Storage Freak
Joined
Sep 20, 2006
Messages
1,209
The thing you need to keep in mind is that nothing is absolute. It is the layering of different kinds of solutions that provides true redundancy. If I had two identical arrays of identical drives, a miscalculation of reliability would have far worse consequences.


I would imagine that by 2019 most storage will be SSD? How reliable will multi-TB RAID SSDs be?

How reliable are tape backups? Is there still a good use for them?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
I would imagine that by 2019 most storage will be SSD? How reliable will multi-TB RAID SSDs be?

How reliable are tape backups? Is there still a good use for them?

By 2015, all storage in this office will be SSD. Possibly by 2013. I don't believe SSDs have had enough time and volume in the market to know actual figures, but an order of magnitude above HDDs would not surprise me. Further, the much higher IOPS will allow array rebuilds at much larger capacities.

I consider tape dead for standard office backups. If you need to archive large amounts of data it may make sense, but the added PITA of dealing with media/changers/etc isn't worth the money saved over HDDs for most applications.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,305
Location
USA
By 2015, all storage in this office will be SSD. Possibly by 2013. I don't believe SSDs have had enough time and volume in the market to know actual figures, but an order of magnitude above HDDs would not surprise me. Further, the much higher IOPS will allow array rebuilds at much larger capacities.

I consider tape dead for standard office backups. If you need to archive large amounts of data it may make sense, but the added PITA of dealing with media/changers/etc isn't worth the money saved over HDDs for most applications.

How long does the company you work for have to store electronic data?
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,305
Location
USA
The thing you need to keep in mind is that nothing is absolute. It is the layering of different kinds of solutions that provides true redundancy. If I had two identical arrays of identical drives, a miscalculation of reliability would have far worse consequences.

Well of course you know what you are doing and how to mitigate the risk of data loss.

And you know why I have two drives in my mini notebook. ;)
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,879
Location
USA
There are actually alternatives to tape. We sell a device called a CLARiiON Disk Library, which presents itself to a host like a tape drive, but the data is actually stored on disk for much faster performance in both read and write. We also sell a product called content addressed storage (Centera), which handles the legal aspects of data retention for documents/files/email/etc. and helps a business stay compliant with its retention policies. They're obviously aimed at larger businesses and budgets, but they're an alternative to tape backups, which can be a large hassle.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
How long does the company you work for have to store electronic data?

So far I haven't been faced with the issue. Storage technology is growing faster than our data needs. In other words, we keep everything. All-in, a single full backup of everything is about 2TB. The offline/offsite backup is still a single 3.5" HDD.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,091
Location
I am omnipresent
I'm saving for a tape library system, myself. If I skip a generation of drive improvements, that shouldn't be the end of the world, and a backup set of 24 or 48TB should certainly be plenty.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,694
Location
Horsens, Denmark
Merc, any speculation on where the break-even point is for storage? Of course, the loss of on-line capabilities is tough to quantify, but at what point does buying tapes instead of drives pay for the library system?
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Merc, any speculation on where the break-even point is for storage? Of course, the loss of on-line capabilities is tough to quantify, but at what point does buying tapes instead of drives pay for the library system?

The thing with tape is that it will cost X dollars for the hardware: tape drive(s) plus a library enclosure that may fit in the rack or stand alone (I'm putting aside backup licensing costs, etc.).

Tape will also cost Y in media. Say Y buys 30TB of tapes.

In comparison we have a rack of spinning disk. It will cost A for 30TB.

The point of crossover will be when your data storage needs exceed the limits of Y or A.

In the tape scenario, hardware cost X remains the same; you simply add to Y by buying more tapes.

In the disk scenario, hardware cost A must either be duplicated or upgraded. In most cases the tape expansion will be cheaper, especially as the TB/PB quantity increases. We aren't talking about drives from Newegg: RAID drives are premium priced by storage vendors and basically a choke point. Tapes, on the other hand, can be multisourced and are interchangeable (assuming similar types, and given that an LTO drive can read/write the previous generation and read the generation prior to that).

So the comparison is X + Y x (amount of expansion) versus A x (amount of expansion).
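
To put toy numbers on that (every price below is a made-up placeholder, not a quote):

# Hypothetical break-even between a tape library and premium RAID disk.
TAPE_LIBRARY = 3000.0        # one-off hardware cost, X (assumed)
TAPE_PER_TB = 60 / 1.5       # incremental media cost, Y (assumed $/TB, LTO-4)
DISK_PER_TB = 250 / 1.5      # vendor RAID disk cost, A (assumed $/TB)

def tape_cost(tb):
    return TAPE_LIBRARY + TAPE_PER_TB * tb

def disk_cost(tb):
    return DISK_PER_TB * tb

# Break-even where X + Y*t == A*t, i.e. t = X / (A - Y).
t = TAPE_LIBRARY / (DISK_PER_TB - TAPE_PER_TB)
print("tape pays for itself past ~%.0f TB of backup data" % t)

With those placeholder prices the library pays for itself somewhere around 24TB; plug in real quotes and the same arithmetic answers Davin's break-even question.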

Like you say, though, it's not one or the other; it's a combination of both. Multiple redundant arrays can provide more potential uptime in case of primary array failure, but when the virus (or, more likely, a disgruntled employee) comes to deliver the plague, I can only sleep at night knowing there's at least a reasonably recent copy sitting on tape in an Iron Mountain vault.

BTW Ultrium 5 aka LTO 5 will be with us shortly: 1.5TB and 140MB/sec native, 3TB and 280MB/sec assuming max 2:1 compression.

http://www.webnewswire.com/node/517466

This will drop the pricing on LTO 4, which will remain the volume seller whilst LTO 5 commands premium pricing in the early adopter phase.
 