SAN alternative

jman22

What is this storage?
Joined
Aug 17, 2008
Messages
9
Hello all!

I am looking for an alternative to a SAN storage solution.

I am looking for a RAID card that can stripe an array of 15 hard disks with RAID 5 and then "share" the array with a RAID card in another machine.

Let me explain: I have two application servers that will fail over in case of hardware failure and use load balancing. I would like the RAID card in each machine to be able to share a single array (I don't want to buy an additional 15 hard disks for the other app server). The problem I see with this is that when an array is built by one card with RAID 5, parity conflicts will occur at some point and cause issues when the storage array is accessed/built by the other card.

So, I am looking for something like an expander backplane (like in a DAS) that could hold 15 hard disks but has two RAID slots instead of one for sharing, and is intelligent enough to prevent the parity conflicts. Anyone know of something like this?

Any help would be greatly appreciated!
Thanks!
 

jman22

What is this storage?
Joined
Aug 17, 2008
Messages
9
Bozo, thanks for that. And doing that would be the optimal solution. In fact, I am using DFS for file replication between two of my file servers, and with that I am getting fault tolerance. Thing is, I am looking to share 15 hard disks' worth of data (for a total of 15 terabytes) between two application servers. If I were to get fault tolerance with that much data, I would need an additional 15 hard disks, which would exceed my power threshold (and be expensive as hell).

So, I would rather just settle for RAID 5 redundancy on these 15 hard disks and try to share the data across the two servers... any other ideas?

Thanks
J
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
You can share HDs between servers over SCSI, using controllers that have both internal and external interfaces. But whatever you do, turn off all forms of caching; otherwise you will get scrambled data, because each application server has no knowledge of what the other server is doing with its cache.
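
If you want to see what "no host-side caching" looks like in practice, here is a rough Python sketch for Linux: it opens the shared disk with O_DIRECT and O_SYNC so nothing gets cached on the server itself. The device name is hypothetical, and it writes raw blocks, so don't point it at a disk that holds data.

Code:
import mmap
import os

DEVICE = "/dev/sdb"   # hypothetical device node for the shared disk
BLOCK = 4096          # O_DIRECT wants block-aligned sizes and offsets

# O_DIRECT bypasses the server's page cache; O_SYNC forces the write down to
# the drive before the call returns, so neither server holds cached copies.
fd = os.open(DEVICE, os.O_RDWR | os.O_DIRECT | os.O_SYNC)

buf = mmap.mmap(-1, BLOCK)            # anonymous mmap gives an aligned buffer
buf[:16] = b"shared-disk test"

os.pwrite(fd, buf, 0)                 # write block 0 straight to the disk
os.preadv(fd, [buf], 0)               # read it back, again without caching
os.close(fd)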
 

jman22

What is this storage?
Joined
Aug 17, 2008
Messages
9
P5, can you name a vendor that produces a SCSI controller card that allows for sharing HDs between servers? I'm looking but have not found anything yet.

Thanks so much!
Jared
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
All of them will. Just connect the two servers via each controller's external interface, make sure that nothing sharing the SCSI bus has the same SCSI ID (including the controllers), and of course terminate the SCSI bus properly, and the two machines will see each other's drives.

The SCSI bus will look like the following:

Terminator <> Comp1 HD1 <> Comp1 Controller1 <-> Comp2 Controller <> Comp2 HD4 <> Terminator


That is basically how SCSI works. It has no problem with multiple controllers, and it doesn't care where the controllers are located. When a machine boots, its controllers simply query the SCSI bus as to what's on it, and the SCSI devices report back without regard to which machine they are inside. Once all the devices are known, any device on the SCSI bus has access to any other device on the bus. So as long as the IDs are unique and the bus is properly terminated, there should be no problem.
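
As a sanity check, you can list what each machine's controllers actually found on the bus. A minimal Python sketch, assuming Linux and its sysfs layout (/sys/class/scsi_device); run it on both servers and they should report the same drives:

Code:
import os

SYSFS = "/sys/class/scsi_device"      # Linux sysfs view of discovered SCSI devices

# Entries are named host:channel:id:lun; each target on the shared bus must
# have a unique ID, and both servers should see the same set of targets.
for addr in sorted(os.listdir(SYSFS)):
    with open(os.path.join(SYSFS, addr, "device", "model")) as f:
        model = f.read().strip()
    print(addr, model)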

I would recommend that you keep all the hardware to the same SCSI standard and stay away from SCA converters. However, that is just so you keep your sanity when setting it up. The SCSI bus will test the hardware and firmware and settle on the lowest common denominator. If it runs into any compatibility issues, you will end up pulling your hair out trying to get it working properly, because it won't tell you why the speed has dropped or why a drive keeps dropping out...

I've done it before and it works fine. That's also how I found that all forms of caching, even read caching, will destroy the data.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
Mark,
Due to the cost of SCSI parts, can this be done with some of the new SATA controllers?

Bozo :joker:
 

jman22

What is this storage?
Joined
Aug 17, 2008
Messages
9
The SCSI bus will look like the following:

Terminator <> Comp1 HD1 <> Comp1 Controller1 <-> Comp2 Controller <> Comp2 HD4 <> Terminator

Good stuff. But I have a question:

Let's suppose Comp1 writes to a disk block on HD1, and Comp2 writes to that same block on HD1 at the same time. What happens? What mechanisms are in place to prevent such a collision?

J
 

jman22

What is this storage?
Joined
Aug 17, 2008
Messages
9
Just thought of something else: what if the motherboard dies on Comp1? Would Comp2 still be able to read and write to HD1?
 

jman22

What is this storage?
Joined
Aug 17, 2008
Messages
9
So long as the power supply keeps the hard drive spinning, it shouldn't be a problem.

Quite true, but I would like to avoid power issues. In fact, both of my application servers reside on separate power strips, so if I daisy-chain SCSI drives and Comp1 dies, I will be SOL, unless I duplicate my 15 terabytes, which I don't want to do.

I am looking into how much building my own SAN with redundancy would run me. I'll let y'all know how it goes.

Thanks everyone so far, you guys have been awesome!
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Good stuff. But I have a question:

Let's suppose Comp1 writes to a disk block on HD1, and Comp2 writes to that same block on HD1 at the same time. What happens? What mechanisms are in place to prevent such a collision?

J

It shouldn't be a problem. Concurrent reads and writes to the same device are not a problem with SCSI, because the devices typically queue data requests so that they become sequential. The controllers themselves won't write data to the bus unless it is free (unlike Ethernet), so a collision really isn't possible. The worst that would happen is that the drive reports back to the second computer that it is busy.

Again, the real problem is computer caching. There is no cache coherency between the two computers. Even read caching is going to be a problem, because one computer may use cached data after the other computer has changed it on disk. The only cache that is totally OK is the cache built into the drive itself.
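
To make the coherency problem concrete, here is a toy model in Python: two "hosts" share one "disk", each keeps its own naive read cache, and the stale read falls out immediately. This is only an illustration, not anything from a real driver:

Code:
# Toy model: two hosts, one shared disk, each host with its own read cache
# and no cache invalidation between them.
disk = {0: b"version-1"}              # stands in for a block on the shared drive

class Host:
    def __init__(self, name):
        self.name = name
        self.cache = {}               # local read cache -- the dangerous part

    def read(self, block):
        if block not in self.cache:
            self.cache[block] = disk[block]
        return self.cache[block]

    def write(self, block, data):
        disk[block] = data            # reaches the drive...
        self.cache[block] = data      # ...but only updates *this* host's cache

comp1, comp2 = Host("comp1"), Host("comp2")
print(comp1.read(0))                  # b'version-1'  (comp1 now caches block 0)
comp2.write(0, b"version-2")          # comp2 changes the block on the drive
print(comp1.read(0))                  # still b'version-1' -- stale data from cache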
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
One thing that may matter to you, since you plan on using 15 drives: the SCSI bus only allows for 16 unique SCSI addresses, and two of them will have to be controllers, which leaves only 14 addresses for the 15 drives. You can, however, do this with two separate buses.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Quite true, but I would like to avoid power issues. In fact, both of my application servers reside on separate power strips, so if I daisy-chain SCSI drives and Comp1 dies, I will be SOL, unless I duplicate my 15 terabytes, which I don't want to do.

I am looking into how much building my own SAN with redundancy would run me. I'll let y'all know how it goes.

Thanks everyone so far, you guys have been awesome!


Perhaps you need to be dealing with UPSes and redundant power supplies. If you are that scared, perhaps you should just pony up the money to buy truly redundant drives...

Also, you are going to be very hard-pressed to find terabyte SCSI drives.
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
One thing that may matter to you, since you plan on using 15 drives: the SCSI bus only allows for 16 unique SCSI addresses, and two of them will have to be controllers, which leaves only 14 addresses for the 15 drives. You can, however, do this with two separate buses.
One might think that this would be easier to implement with SAS drives, since they're dual-ported.

I'd be more worried about coherency, since it sounds to me like each server would mount the file system independently. Recipe for data disaster.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
One might think that this would be easier to implement with SAS drives, since they're dual-ported.

I'd be more worried about coherency, since it sounds to me like each server would mount the file system independently. Recipe for data disaster.

I don't know that SAS will operate the same way as SCSI. I don't know that SAS allows for multiple controllers, or even device-to-device operation. I'd have to do a bunch of research and testing. I have similar doubts about SAS as I did about the SATA option mentioned previously.

I have mentioned cache coherency as the big worry multiple times in previous posts. I don't think I can over-emphasize it.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
The main value I see in having multiple machines with access to the same physical drives is as a fail-over: the second machine only accesses the drives when a heartbeat from the primary machine fails, and then it simply takes over. A cache coherency issue is far less likely if only one of the two machines is doing any work at a time. It may be worth taking that risk if 100% uptime is necessary and one really can't afford the data loss from the update delay of DFS.
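
Something along these lines is what I mean by heartbeat fail-over. A rough Python sketch with hypothetical host/device names: the standby only mounts the shared array once the primary stops answering, so only one machine ever touches the disks at a time:

Code:
import subprocess
import time

# Hypothetical names -- adjust to your environment.
PRIMARY_HOST = "appserver1"
SHARED_DEVICE = "/dev/sdb1"
MOUNT_POINT = "/srv/shared"

def primary_alive() -> bool:
    # One ping as a crude heartbeat; a real setup would use a dedicated link.
    return subprocess.call(
        ["ping", "-c", "1", "-W", "2", PRIMARY_HOST],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ) == 0

missed = 0
while True:
    missed = 0 if primary_alive() else missed + 1
    if missed >= 3:
        # Primary looks dead: take over the shared array. Only now does the
        # standby touch the disks, so the two machines never cache at once.
        subprocess.check_call(["mount", SHARED_DEVICE, MOUNT_POINT])
        break
    time.sleep(5)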
 

jman22

What is this storage?
Joined
Aug 17, 2008
Messages
9
Thanks again everyone! This information has helped.

Yes, I am in fact going to be using SATA drives, not SCSI. After looking into the DIY approach, there is just one thing I am still hung up on: I am looking for fault tolerance on the RAID controller (using RAID 5), so that if my RAID controller craps out, it will fail over to a backup RAID controller. Apparently Dell's MD1000 does this, but I can't figure out how. Any ideas on how to do this?

Thanks
J
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
When I've done fail-over, it has been for the entire storage stack: two file servers, each with a bunch of SATA ports connected to a bunch of 1 TB SATA disks. Then get redundancy further down the chain by using DFS or something else. Hardware redundancy (fail-over RAID cards or power supplies) gets really expensive really fast.
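
For the "DFS or something else" part, the same idea can be sketched on non-Windows boxes with rsync standing in for DFS replication; the paths and hostnames here are hypothetical:

Code:
import subprocess
import time

# Hypothetical paths/hosts; rsync over SSH stands in for DFS-style replication.
SOURCE = "/srv/data/"                 # local copy on fileserver1
REPLICA = "fileserver2:/srv/data/"    # second complete copy on fileserver2

while True:
    # -a preserves ownership/permissions/times, --delete mirrors removals too
    subprocess.call(["rsync", "-a", "--delete", SOURCE, REPLICA])
    time.sleep(300)  # replication interval = window of un-replicated changes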
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
In my limited experience with RAID cards, when they fail they usually take the data with them, or they make such a mess of things that it is almost impossible to recover.
Best bet: two servers with complete storage arrays using DFS.

Bozo :joker:
 

jman22

What is this storage?
Joined
Aug 17, 2008
Messages
9
Understood. It seems we are back where we started. Still, I would like to know how expensive such RAID controllers (including the technology to manage the failover event) are. I think I will contact a number of storage companies and see what's out there.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
What we do at work is use Fibre Channel over fiber optics with LUN masking. In each host we use multiple HBAs that are connected to different disk controllers on the storage array. The same LUN/device is presented to both controllers so that both HBAs have access to it. Then software (PowerPath) is installed on each host to manage the connections.

The software also makes use of performance algorithms (like round-robin, least recently used, etc.) to improve performance by spreading out the I/O. It's smart enough to detect if either path is lost, and I/O continues uninterrupted. I just recently set up this environment to build a high-availability MSCS cluster dedicated to SQL Server 2005, and it works very well.
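
The path-selection part is conceptually simple. A toy Python sketch of round-robin across two paths to the same LUN (the device names are made up; PowerPath/dm-multipath do this in the kernel, this is only to show the idea):

Code:
import itertools

# Two HBA paths to the same LUN (hypothetical device names).
paths = ["/dev/sdb", "/dev/sdc"]
healthy = {p: True for p in paths}
rotation = itertools.cycle(paths)

def pick_path() -> str:
    """Return the next healthy path, skipping any marked as failed."""
    for _ in range(len(paths)):
        p = next(rotation)
        if healthy[p]:
            return p
    raise RuntimeError("all paths to the LUN are down")

print(pick_path(), pick_path())   # I/O alternates across both HBAs
healthy["/dev/sdb"] = False       # lose one path...
print(pick_path(), pick_path())   # ...and I/O keeps flowing on the other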
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,920
Location
USA
Silly me didn't read the title of the thread which states SAN alternative...sorry. :)
 