Small Version of Enterprise Storage?

samm

What is this storage?
Joined
Apr 7, 2010
Messages
5
OK, so I came across an opportunity to buy a bunch of fibre channel HDDs (Hitachi), and I'd like to implement them in my home for now. If I get them to work, I may put my HP DL server to use and start web hosting or something.

I need to know what y'all suggest as far as how to implement these hard drives.

This is what I think I know:
I need 3 fibre channel enclosures (14-15 bay)
I need a RAID controller
I need a way to access the storage

What I Have:
32 4Gbps dual port FC HDDs
1 Brocade 825 PCIe dual port 8Gbps HBA
HP DL360 G5 server machine

What I'm thinking I need:
HP EVA fibre channel enclosures?
HP EVA fibre channel controller(s)? I see most racks have two of them; do I NEED two, or is that just for high-speed access?
(Is EVA (Enterprise Virtual Array) even the way to go? The whole "virtual" business kinda worries me, like that's maybe not what I'm looking for...)

Is this the best route to go? Can I plug controller(s) directly into HBA w/out the need for a FC hub/switch?
Can I just use a JBOD enclosure and let the HBA/OS do the RAID business?

Lastly, how would you recommend implementing these? At first, I want to try storing my media on them, playing it back with Windows 7 Media Center on client PCs, so Windows CIFS (map network drive) is easiest?
What about iSCSI? Is there a reliable/simple iSCSI setup for Windows that will allow multiple PCs to read/write to the target array at the same time? I know of iSCSI Initiator, but I need to use the target across multiple clients (that's the whole idea here).

As you can see, I'm looking for help with getting the drives spinning, AND which solution to use to access the drives over a network.

Thanks for reading and any input you may have.
SAM
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Generally one would have multiple controllers for a failover situation, i.e. redundancy. It's unlikely that any one host would overwhelm a single port.

Are the Hitachi drives pulled out of an existing system? IME many of these backplanes require specific drives and/or firmware to try and lock you in to their overpriced yet "approved" drives.

In terms of "mini enterprise" in the home, it's good for learning. However, the internet connection is almost always the weakest link in such a "home hosting" environment, so I would recommend having web sites hosted in data centers with adequate infrastructure (multiple generators, multiple Internet providers, etc.) to avoid most downtime scenarios.

The fibre HBA isn't going to do any RAID for you. You'd be left with either software RAID via Windows (assuming the backplanes can operate as JBOD without a controller, i.e. direct-attached to the FC HBA), or hardware RAID via the HP controller.

That's all I have for you, good luck.
 

samm

What is this storage?
Joined
Apr 7, 2010
Messages
5
Thank you so much for the input.

I'm not too worried about web hosting just yet, I wanna make sure these things work.

Eventually, yes, I would lease a small building for any entrepreneurship I would partake in.

So you're saying that Windows, via the Brocade HBA, will be able to address the JBOD HDDs w/out the need for a controller?
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
No. You need to verify that the backplane presents the drives in a manner that would allow JBOD. It could well be that the backplanes must be connected to an EVA controller. I have no experience with HP storage, so please don't just take my word for it; double- and triple-check everything. If this were IBM gear I could be more helpful.

When you say you want multiple PCs to read/write to the array at the same time, you mean multiple files, right? Not multiple PCs reading/writing the same file? The former is easy; the latter, not so much.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
Ok, so my reading comprehension isn't so good... :geek:

How big are the drives?

Unless they're really big I can't see how this is worth the effort, cost, & electricity unless you're really looking for the experience.
 

samm

What is this storage?
Joined
Apr 7, 2010
Messages
5
Stereodude,
I'm glad you asked.
For reference, each drive is 450GB.

I am currently a student, going for my IT infrastructure degree, so the experience is definitely a plus.

But I also know that a properly configured 7TB RAID array coupled with an HP G5 server and a simple domain name can earn me some side cash.
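For what it's worth, the capacity math roughly checks out. Here's a quick sketch; the 16-drive grouping is my assumption (one parity drive's worth of capacity lost per RAID 5 group), since the real layout depends on the enclosures and controller:

```python
# Back-of-the-envelope usable capacity for 32 x 450GB FC drives.
# Assumption: two 16-drive RAID 5 groups, each losing one drive's
# worth of capacity to parity. Treat these numbers as a rough guide.
drive_gb = 450
drives_per_group = 16
groups = 2

usable_per_group_gb = (drives_per_group - 1) * drive_gb  # RAID 5: n-1 drives usable
total_usable_gb = usable_per_group_gb * groups

print(usable_per_group_gb)  # 6750 GB, i.e. about the "7TB array" per group
print(total_usable_gb)      # 13500 GB across both groups
```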

Pradeep:
Thanks for the info. For reference, I don't need to use HP EVA products, I just thought of them since I have the hp server.

So if a fibre enclosure is advertised as "JBOD", what kind of information does the HBA that's connected to that enclosure see? Does it see one massive logical drive, or does the JBOD simply tell it "hey, I got a bunch of drives here, you decide what to do with them"?

And when I mention multiple PCs being able to access the array at the same time, you're correct, I mean separate files. I don't wanna get into the locking of files mess.
So, does iSCSI allow multiple PCs to access separate files on one target at the same time, much like Windows CIFS (mapped network drive) does?

Thanks again, I'm getting excited to finally get this ball-a-rollin'.
 

samm

What is this storage?
Joined
Apr 7, 2010
Messages
5
...I can't edit messages after 5 mins?...

Also, I should ask: with your IBM experience, is it generally possible to format these enterprise arrays with NTFS? I will be using Windows boxes almost exclusively with the array. One doesn't NEED a database (SQL, etc.) to address these enterprise arrays/controllers, does he?
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,879
Location
USA
So if a fibre enclosure is advertised as "JBOD", what kind of information does the HBA that's connected to that enclosure see? Does it see one massive logical drive, or does the JBOD simply tell it "hey, I got a bunch of drives here, you decide what to do with them"?

If you're connecting an array of FC drives from a unit that presents the disks as JBOD, then yes, you'll see each disk as a separate device. Each device will show up in Windows as a drive that you can format and do whatever you want with.
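As a rough illustration of what that looks like: once the JBOD members are visible, you'd handle each one in diskpart like any local disk. The disk number and drive letter here are made up:

```
DISKPART> list disk                   (each FC drive shows up as its own disk)
DISKPART> select disk 4               (pick one of the JBOD members)
DISKPART> clean
DISKPART> create partition primary
DISKPART> format fs=ntfs quick
DISKPART> assign letter=F
```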

The arrays I've worked with (EMC CLARiiON) don't present the individual disks from the backplane. I have to create RAID groups and LUNs and then assign access to different hosts. With this kind of setup, I typically do fibre channel WWN zoning to assign access to the storage. Since you aren't using an FC switch, you won't have to worry about this, but at the same time, you'll lose some control over access to the drives.

And when I mention multiple PCs being able to access the array at the same time, you're correct, I mean separate files. I don't wanna get into the locking of files mess.
So, does iSCSI allow multiple PCs to access separate files on one target at the same time, much like Windows CIFS (mapped network drive) does?

Thanks again, I'm getting excited to finally get this ball-a-rollin'.

This statement confuses me. If you plan to connect multiple computers/servers to the array using the same FC ports with JBOD, every machine will likely see all the drives unless you do some kind of zoning/masking. That is typically the approach I've seen in production: you zone specific hosts to the array and mask (grant) storage based on the WWN of each HBA in each system. That way you don't risk one system overwriting or corrupting disks when there's no proper mechanism in place for shared access (like Microsoft clustering, VMware ESX Server, etc.).
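For reference, the zoning step on a Brocade switch looks roughly like this; the zone/config names and WWNs below are invented, and masking itself would still happen on the array side:

```
zonecreate "host1_to_array", "10:00:00:05:1e:01:02:03; 50:06:01:60:aa:bb:cc:dd"
cfgcreate "home_san_cfg", "host1_to_array"
cfgenable "home_san_cfg"
cfgsave
```

Not something you need without a switch, but it's the piece you give up by direct-attaching.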

If you go with iSCSI, you can have multiple machines connected to a single iSCSI target, depending on the implementation of the target hardware/software. However, all of those machines will see the exact same structure and have read/write access. This is not something I would recommend unless you have a specific need (again, clustering, a VMware shared datastore, etc.) with a proper mechanism to handle the locking. If you want shared access with CIFS, set up one machine to access the storage (either by iSCSI or direct FC via an HBA) and let the CIFS server handle the locking for multiple accessors.
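A minimal sketch of that last recommendation, with one box owning the array and serving it out over CIFS; the server name, share name, and paths are hypothetical:

```
REM On the server that owns the FC storage (after formatting it as F:):
net share Media=F:\Media /grant:Everyone,CHANGE

REM On each client PC, map the share; SMB handles the file locking:
net use M: \\STORAGESRV\Media
```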
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
...I can't edit messages after 5 mins?...

Also, I should ask: with your IBM experience, is it generally possible to format these enterprise arrays with NTFS? I will be using Windows boxes almost exclusively with the array. One doesn't NEED a database (SQL, etc.) to address these enterprise arrays/controllers, does he?

Yes, NTFS would be almost mandatory ("dynamic" NTFS disks being necessary if you were using Windows software RAID). I would recommend RAID 5 (you would need a Server flavour of Windows for it). Each FC drive should present its unique WWN.
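If the software-RAID route pans out, the setup on Windows Server would look something like this in diskpart; the disk numbers are examples only, and each member disk has to be converted to dynamic first:

```
DISKPART> select disk 1
DISKPART> convert dynamic               (repeat select/convert for each member)
DISKPART> create volume raid disk=1,2,3,4   (RAID-5 volume across the members)
DISKPART> format fs=ntfs quick
DISKPART> assign letter=R
```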
 

samm

What is this storage?
Joined
Apr 7, 2010
Messages
5
Great info, I figured this was the right place to ask this question :smile:

I would most likely decide to do CIFS then, as I want some sort of built in locking mechanism.

Is CIFS a role/feature that I can install with my Server 2008 R2?
 