Mega-Storage

Handruin

Administrator
I saw that post on HardOCP this morning, but I don't see where or how they connected all the drives.
 

Bozo

Storage? I am Storage!
In the very last picture you can see a whole gaggle of blue SATA cables going to the drives.
I get the impression that it is not finished yet.
 

Handruin

Administrator
I saw those cables, but I wanted to see the bus interface used to connect all of those to the motherboard. I'm curious if they used a SAS expander or multiple SATA cards... I just don't see how they would get all those connected very easily.
 

jtr1962

Storage? I am Storage!
I got nervous just looking at the way those hard drives were stacked. And then of course they're also on a carpet with all the attendant static discharge problems. :erm: I'm amazed how many people are unaware of how to properly handle hard drives.

70TB? I couldn't fathom filling that much space. Then again, ask me that in 5 years as 1080p downloads become common.
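
For perspective, a quick back-of-envelope (the 10GB-per-film figure is just my guess for a decent 1080p rip):

Code:
# Rough guess at how many 1080p films would fill 70TB.
# Assumes ~10GB per film, which is only a ballpark figure.
total_gb = 70 * 1000           # 70TB expressed in GB (decimal)
gb_per_film = 10
print(total_gb // gb_per_film)  # -> 7000 films

Call it roughly 7,000 films, so it would take a while.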
 

LunarMist

I can't believe I'm a Fixture
Either the camera has funky distortions or that jerry-rigged box is seriously out of whack.
 

CougTek

Hairy Aussie
All Western Digital Caviar Green. No matter what Mercutio may think about Western Digital, their 2TB drives are A LOT more reliable than the Seagates of similar capacity.
 

Bozo

Storage? I am Storage!
jtr1962 said:
I got nervous just looking at the way those hard drives were stacked. And then of course they're also on a carpet with all the attendant static discharge problems. :erm: I'm amazed how many people are unaware of how to properly handle hard drives.

70TB? I couldn't fathom filling that much space. Then again, ask me that in 5 years as 1080p downloads become common.

The bottom of the case had a ton of fans attached to it. The bottom of the fans also looks to be about 4" off the floor.
 

Mercutio

Fatwah on Western Digital
I hope that guy spent some money on an LTO changer to go with all those WD drives. 'Cause I'm sure he'll be getting some use out of it.
 

jtr1962

Storage? I am Storage!
Bozo said:
The bottom of the case had a ton of fans attached to it. The bottom of the fans also looks to be about 4" off the floor.
I was talking about the way he stacked the drives before he put them in the enclosure. Why didn't he just leave them in whatever container they came in until he was ready to mount them?
 

Chewy509

Wotty wot wot.
I wonder what RAID setup they'll be using. I recently read a paper on RAID recovery and array rebuilds with 2TB disks, and the conclusion was basically that if you run more than 5 disks in an array (with disks of 2TB or larger), in the time it takes to rebuild the array you have about a 10% chance of hitting a hard error, which would bork the rebuild. With those odds it's not a good situation if you're reliant on the storage; there's a rough sketch of the maths at the end of this post. (Hard errors and timeouts occur all the time; the OS just works around them.)

Both hardware and software RAID setups were tested... I can't remember which disks they used. I'll have to hunt down the paper.

They recommended using tiered RAID setups like RAID50 as a workaround.
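
For anyone who wants to play with the numbers, this is the usual back-of-envelope version of that calculation (a sketch only: it assumes the per-bit unrecoverable-read-error rate straight off the spec sheet and treats errors as independent, which the real studies refine):

Code:
# Odds of hitting at least one unrecoverable read error (URE) while
# reading every surviving disk during a rebuild.
# P = 1 - (1 - p)^bits_read, where p is the per-bit URE rate.
import math

def rebuild_error_prob(disks, tb_per_disk, per_bit_ure_rate):
    bits_read = (disks - 1) * tb_per_disk * 1e12 * 8   # surviving disks only
    return 1 - math.exp(bits_read * math.log1p(-per_bit_ure_rate))

for rate in (1e-14, 1e-15):   # typical consumer vs enterprise spec
    print(f"URE rate {rate:g}: {rebuild_error_prob(5, 2, rate):.0%}")

With the consumer 10^-14 spec the naive maths comes out far worse than 10%; an enterprise 10^-15 spec lands in that ballpark, so the quoted figure presumably depends on which drives the paper modelled.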
 

LunarMist

I can't believe I'm a Fixture
Were the ones in the stack on the floor in use? 70TB would be fewer drives, I should think.
 

Mercutio

Fatwah on Western Digital
ZFS is something I would also recommend for large DIY storage arrays. But alas, since it's not built into Linux, it seems like a dirty word these days.

I have a lean, stable configuration for my file servers that's based on Fedora. ZFS is quite tempting, but given the volume of storage I'm dealing with I really don't want to go messing around with my setup, especially with the current uncertainty about the direction of Solaris. I figure that a workable ZFS port will make it to Linux eventually, or at some point I'll have access to enough 2TB+ drives and a spare machine that I can make a serious test environment to play in.
 

Chewy509

Wotty wot wot.
Mercutio said:
especially with the current uncertainty about the direction of Solaris.

That is something everyone outside of Oracle is wondering. Oracle's official stance is that Solaris 10 will be supported, and that Solaris 11 will be coming out supporting both SPARC and x64. IIRC Oracle has also renewed OEM agreements with selected OEMs (HP and Dell come to mind) to supply and sell Solaris licenses on future servers. There is a future for it...

I just hope Solaris doesn't go the way of the other commercial UNIX systems, as it has a lot of very good technologies that support businesses and developers alike.

PS. I'm still waiting to hear back from Oracle regarding the support agreement for my Sol10 license... only 10 emails to them, no responses, and 2 months later...
 

Handruin

Administrator
What if you considered your install of Solaris to be like an appliance? What I mean is: if you can get a storage solution working the way you want with ZFS and then just leave it alone, what does it matter if Oracle no longer supports Solaris in the future? If your storage solution is working, leave it be with the version you have running. I can't imagine security risks are a huge threat if it's in your own home environment.

Fundamentally I understand why people may be hesitant to adopt Solaris, but if you consider it a closed-box solution and it meets your needs, just let it be.
 

Chewy509

Wotty wot wot.
Handruin said:
What if you considered your install of Solaris to be like an appliance? What I mean is: if you can get a storage solution working the way you want with ZFS and then just leave it alone, what does it matter if Oracle no longer supports Solaris in the future? If your storage solution is working, leave it be with the version you have running. I can't imagine security risks are a huge threat if it's in your own home environment.

Fundamentally I understand why people may be hesitant to adopt Solaris, but if you consider it a closed-box solution and it meets your needs, just let it be.

That's the approach I'm taking. I've got my Sol10 installation working exactly how I want, with all the tools I need. I'm not overly bothered by not having security updates, since my box is behind a firewall with nothing forwarded to it.

The one feature I do love about ZFS is the snapshots and the ability to browse through old versions of files, just by using the time slider in the file manager. Great for versioning, and very easy to use, unlike Windows' Shadow Copy feature.
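
The time slider is just a front end, too; ZFS exposes the snapshots themselves under each dataset's hidden .zfs/snapshot directory, so you can script against old versions. A rough Python sketch (the mountpoint and file name are made-up examples):

Code:
# List every snapshot's copy of one file via ZFS's hidden .zfs directory.
# The mountpoint and file path below are hypothetical examples.
from pathlib import Path

mountpoint = Path("/tank/home")   # dataset mountpoint (example)
target = "docs/report.odt"        # file we want old versions of (example)

for snap in sorted((mountpoint / ".zfs" / "snapshot").iterdir()):
    candidate = snap / target
    if candidate.exists():
        st = candidate.stat()
        print(f"{snap.name}: {st.st_size} bytes, modified {st.st_mtime}")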

PS. With Solaris 10 u7, the default installation has only 1 service listening on the LAN adapter; all other services must be explicitly enabled to listen on the LAN adapters. Great for security... How many services listen on the external LAN adapter in a default Win7 installation?
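
If you want to check a box yourself, here's one quick way (a sketch using the third-party psutil package, which is an assumption on my part, not something a stock install ships with; it may also need elevated privileges):

Code:
# Count sockets sitting in LISTEN state (requires the third-party
# psutil package; may need elevated privileges on some systems).
import psutil

listeners = [c for c in psutil.net_connections(kind="inet")
             if c.status == psutil.CONN_LISTEN]
for conn in listeners:
    print(conn.laddr, "pid", conn.pid)
print(len(listeners), "listening sockets")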

PPS. ZFS in Solaris 10 u9 (just released last month) adds triple-parity RAID with dual-redundant file checksums, and the ability to specify any number of duplicate copies of a file to be stored across the zpool. Additionally, block-level data deduplication is nearly at the production-ready stage. Block-level deduplication is really cool: if you have 200 copies of the same file in your filesystem, the data is only stored once and all 200 copies point to that single copy. If you modify any of the 200 instances, copy-on-write automatically creates a new copy, using the original as the template. Another new feature in Sol 10 u9 is support for storage devices with sector sizes of 512, 1024, 2048 and 4096 bytes, and ALL applicable system tools will recognise and support the different sector sizes. That means slice alignment is done automatically for you, for both MBR and EFI (aka GPT) disk labels.
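
To make the copy-on-write behaviour concrete, here's a toy version in Python (everything here, class and names included, is invented for illustration; real ZFS dedup keys its dedup table on block checksums):

Code:
# Toy block store illustrating dedup + copy-on-write semantics.
# Invented for illustration; real ZFS dedup works on checksummed
# blocks tracked in its dedup table (DDT).
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # digest -> data, each unique block stored once
        self.refs = {}     # digest -> how many files point at this block

    def write(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:     # new content: store it once
            self.blocks[digest] = data
        self.refs[digest] = self.refs.get(digest, 0) + 1
        return digest                     # the "pointer" a file keeps

    def overwrite(self, old: str, new_data: bytes) -> str:
        # Copy-on-write: the shared block is never modified in place;
        # the new version is stored separately and one reference dropped.
        self.refs[old] -= 1
        if self.refs[old] == 0:           # last reference gone: free it
            del self.blocks[old]
            del self.refs[old]
        return self.write(new_data)

store = DedupStore()
ptrs = [store.write(b"same payload") for _ in range(200)]  # 200 "copies"
print(len(store.blocks))                  # -> 1 (stored once)
ptrs[0] = store.overwrite(ptrs[0], b"edited copy")
print(len(store.blocks))                  # -> 2 (only the changed one forks)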
 

Handruin

Administrator
I'm definitely sold on the features of ZFS (especially the block-level deduplication); I just need to hunker down and give it a try, and wait for the deduplication to be production-ready. What I do like about my current setup with OpenFiler is the performance and the ease of management through a web-based console.
 

Chewy509

Wotty wot wot.
How does the FreeBSD implementation compare to Solaris's?

It lags behind by about 3 months in feature set, but performance is otherwise very similar. (It lags behind because some small modifications need to be made to the Oracle-released code in order to integrate well with the FreeBSD kernel.)

Oh, I did forget another feature - L2ARC. You can add a disk to the zpool and designate it as a cache disk, e.g. add 2-3 SSDs as cache disks in front of a heap of 15K SAS disks, and you get a nice IOPS boost for commonly read data. Since it's integrated at the filesystem level, all applications see the performance increase immediately.
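
For anyone wondering what the cache tier buys you, here's the idea in miniature (a plain LRU toy with made-up names; the real ARC/L2ARC weighs recency and frequency and spans RAM plus the cache devices):

Code:
# Toy read cache in front of a slow backing store, with LRU eviction.
# Invented for illustration; ZFS's ARC/L2ARC is considerably smarter.
from collections import OrderedDict

class ReadCache:
    def __init__(self, backing, capacity):
        self.backing = backing        # the slow tier (spinning disks)
        self.capacity = capacity      # how many blocks fit on the SSDs
        self.cache = OrderedDict()    # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:            # hit: served from fast tier
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing[block_id]         # miss: go to the slow disks
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:   # evict least-recently-used
            self.cache.popitem(last=False)
        return data

disks = {i: f"block {i}".encode() for i in range(1000)}  # pretend slow tier
cache = ReadCache(disks, capacity=64)
hot = cache.read(42)   # first read goes to "disk"
hot = cache.read(42)   # repeat reads come straight from the "SSD"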

PS. All OpenSolaris code is still available as open source via Project Nevada, which is where FreeBSD gets its source code to implement ZFS. As Oracle continues development of Solaris 11, any code that is CDDL-licensed gets released via Project Nevada**.

** It has a new name now, I just can't think of it.
 