Fastest SCSI System built by German magazine?

Tekunda

What is this storage?
Joined
Mar 18, 2007
Messages
9
The German computer magazine PC-Welt recently built what they called the "computer out of hell", a $25,000 monster.
Of interest is of course the disk array.

They used four Seagate 15k.4 147GB SAS hard drives in RAID 10 on a PCI-X Adaptec 4800SAS/128 controller for the operating system and program data.

They put an additional four 750GB Seagate Barracuda 7200.10 drives in a second RAID 10 array for data storage.

To me that looks like a friggin' fast HD system. My question, though: since the new perpendicular-recording Seagate 15k.5 drives are now available, wouldn't those have been the better choice? And what about the controller?

Does this system finally put to rest the age-old question of which RAID setup is the fastest? Any thoughts on that?

The magazine tried to build a bleeding edge computer, so let me give you the rest of the specs:

MB: Tyan Tempest i5000XL
Processor: 2 x Intel Xeon X5355 quad-core, 8MB L2 cache, 1333MHz FSB
Graphics card: 2 x GeForce 8800 GTX
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Fastest for what? Is this a system for a server or workstation, or playing games? There are certainly faster drives than the 7200.10 for data (IOs are not so great), and one modest array is very limiting for some applications. Do you really need a 4-drive 15K.5 array for OS and applications, when some of the drives may be better used for data?

I'm not sure how it works where you are, but usually we put all the detailed system requirements in the RFQ and ask the vendors to spec the components. Of course you need to know if they are full of crap. Simply building a machine with no purpose is silly IMO.
 

Tekunda

What is this storage?
Joined
Mar 18, 2007
Messages
9
The article does not mention what this computer is being used for. They claim they wanted to build "one heck of a super computer".

BTW I forgot two more specs:

Memory: 4 x 1GB Corsair DDR2-667 FB-DIMM
PSU: Enermax Galaxy 1000 Watts
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
Considering how much they spent on HDDs and CPUs, I'm disappointed by the small amount of RAM; I would have gone with at least 8GB.

Further, I would have gone without the 15K SAS array altogether, getting an SSD instead.
 

Tekunda

What is this storage?
Joined
Mar 18, 2007
Messages
9
Considering how much they spent on HDDs and CPUs, I'm disappointed by the small amount of RAM; I would have gone with at least 8GB.

Further, I would have gone without the 15K SAS array altogether, getting an SSD instead.


Which SSD?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
I know what SSD means; I meant which brand of SSD he was referring to that would top the Seagate SAS array.

Any of them, really. 90% of mainstream stuff is seek limited, and just about any SSD will have much faster seek times. For that kind of money you could likely get a decent-sized one with good STR as well. I'll have a look and get back with some options.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
Looks like those drives are ~$650US each, and the controller is another ~$800US. So 650*4+800= $3400.

These are reportedly $350 each and boast:

32GB capacity
62MB/sec transfer rate
0.12ms access time

4 of those in RAID0 (of course) on one of these

So 350*4+280=$1680

You end up with half the capacity, but with the extra money you could get another 4 750GB drives (with money left over) and call it even.

I'm betting the SSDs will smoke the 15k drives in just about anything.
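A quick sanity check of that math in Python, using only the prices, capacities, and RAID levels quoted in this thread (nothing else assumed):

# Rough cost/capacity comparison of the two boot-array options above.
# Prices and capacities are the figures quoted in this thread, not current quotes.
sas = {"price": 650, "controller": 800, "drives": 4, "gb": 147, "raid": "RAID 10"}
ssd = {"price": 350, "controller": 280, "drives": 4, "gb": 32, "raid": "RAID 0"}

for name, o in (("4 x 15K.4 SAS", sas), ("4 x 32GB SSD", ssd)):
    total = o["price"] * o["drives"] + o["controller"]
    usable = o["gb"] * o["drives"] // (2 if o["raid"] == "RAID 10" else 1)
    print(f'{name} ({o["raid"]}): ${total} total, {usable}GB usable')

# Prints:
# 4 x 15K.4 SAS (RAID 10): $3400 total, 294GB usable
# 4 x 32GB SSD (RAID 0): $1680 total, 128GB usable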
 

Tekunda

What is this storage?
Joined
Mar 18, 2007
Messages
9
Looks like those drives are ~$650US each, and the controller is another ~$800US. So 650*4+800= $3400.

These are reportedly $350 each and boast:

32GB capacity
62MB/sec transfer rate
0.12ms access time

4 of those in RAID0 (of course) on one of these

So 350*4+280=$1680

You end up with half the capacity, but with the extra money you could get another 4 750GB drives (with money left over) and call it even.

I'm betting the SSDs will smoke the 15k drives in just about anything.

Of course SSDs will eventually smoke any known hard drive, but I can't yet see why the SSDs you mentioned would smoke the SCSI array used in that system.
At least the article states that the SCSI array gets close to a 300MB/s transfer rate, and those SSDs only reach a fifth of that. That is not what I call smoking fast.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Go read some articles at Storagereview.com. You'll see that almost anything is I/O-limited, not STR-limited. An SSD will smoke, melt, and sublimate a mechanical HD at the I/O game.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Sorry to disappoint, but for most applications transfer rates mean very little and access times mean everything. That's why RAID 0 disappoints. The time a disk spends actually transferring a block of data is very small compared to the time it takes to move the heads to the proper location.

Let's take an example: moving a 6.4MB file from one spot to another on the same drive. A normal block size for Windows is 64KB, so 100 blocks need to be transferred. Give the SCSI array a typical 8ms access time and, to be generous, an infinite transfer rate, so the actual data transfer takes no time at all. Give the SSD a 0.12ms access time and a 64MB/s transfer rate, so transferring 64KB takes about 1ms.

To move a block requires the following operations: read block, move head, write block, move head.

For the SCSI array that is 0ms read + 8ms move + 0ms write + 8ms move, or 16ms per block; at 100 blocks the total is 1.6 seconds.

For the SSD it is 1ms read + 0.12ms move + 1ms write + 0.12ms move, or about 2.2ms per block; 100 blocks take roughly 0.22 seconds, around 7x faster, even with the SCSI array's transfer rate assumed infinite.

It's the small access times that make SSDs smoke SCSI arrays. The only time access time isn't the overriding characteristic is in truly sequential transfers, such as moving an unfragmented file from one drive to another drive: because the source and destination are on separate drives, the heads stay relatively stationary, only moving when they have to step from the current track to the next one, regardless of block size.
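The same per-block model in a few lines of Python, using only the figures assumed in the example above (8ms and 0.12ms access times, 64MB/s SSD transfer rate, 100 x 64KB blocks):

# Copy-time model from the example above: each block is read, seek, write, seek.
# All numbers are the assumptions stated in the example, not measurements.
BLOCKS = 100      # 6.4MB file split into 64KB blocks
BLOCK_KB = 64

def copy_time_s(access_ms, transfer_mb_s=None):
    # transfer_mb_s=None models an "infinitely fast" transfer rate
    xfer_ms = 0.0 if transfer_mb_s is None else BLOCK_KB / 1024 / transfer_mb_s * 1000
    per_block_ms = xfer_ms + access_ms + xfer_ms + access_ms
    return per_block_ms * BLOCKS / 1000

scsi = copy_time_s(access_ms=8.0)                       # ~1.6 s
ssd = copy_time_s(access_ms=0.12, transfer_mb_s=64)     # ~0.22 s
print(f"SCSI array: {scsi:.2f}s   SSD: {ssd:.2f}s   ratio: {scsi / ssd:.1f}x")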
 

timwhit

Hairy Aussie
Joined
Jan 23, 2002
Messages
5,278
Location
Chicago, IL
Looks like those drives are ~$650US each, and the controller is another ~$800US. So 650*4+800= $3400.

These are reportedly $350 each and boast:

32GB capacity
62MB/sec transfer rate
0.12ms access time

4 of those in RAID0 (of course) on one of these

So 350*4+280=$1680

You end up with half the capacity, but with the extra money you could get another 4 750GB drives (with money left over) and call it even.

I'm betting the SSDs will smoke the 15k drives in just about anything.

Got any idea when I will be able to buy one of those Sandisk SSDs?
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
I believe FemmeT from Tweakers.net has shown benchmarks in the 800MB/sec range using the newest Areca SATA RAID controllers. He also had a system with a combination of hardware and software RAID with very impressive numbers.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
I recently had the pleasure of building three identical servers for an ISP, each with eight 147GB Seagate 15k.4s, 24GB RAM, and two dual-core 3.0GHz Intel Xeon 5160s.

For this application I created a two-drive RAID 1, a five-drive RAID 5, and one global hot spare.

The tests below are on the RAID 5 array...




[root@cobra ~]# time dd if=/dev/zero of=/var/spool/tmp.txt bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 4.32606 seconds, 242 MB/s

real 0m4.392s
user 0m0.001s
sys 0m1.739s
[root@cobra ~]# time cat /var/spool/tmp.txt >> /dev/null

real 0m0.578s
user 0m0.036s
sys 0m0.534s
[root@cobra ~]# time dd if=/dev/zero of=/var/spool/tmp.txt bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 60.6734 seconds, 173 MB/s

real 1m0.911s
user 0m0.009s
sys 0m19.565s
[root@cobra ~]# time cat /var/spool/tmp.txt >> /dev/null

real 0m5.664s
user 0m0.448s
sys 0m5.215s
[root@cobra ~]# time dd if=/dev/zero of=/var/spool/tmp.txt bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 666.229 seconds, 157 MB/s

real 11m6.869s
user 0m0.044s
sys 3m19.615s
[root@cobra ~]# time cat /var/spool/tmp.txt >> /dev/null

real 6m51.881s
user 0m6.803s
sys 1m24.124s
[root@cobra ~]# time rm -f /var/spool/tmp.txt

real 1m16.431s
user 0m0.000s
sys 0m5.523s
[root@cobra ~]#
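Working those numbers back out: the write rate falls from 242MB/s to 157MB/s as the test size grows, presumably because the controller and page caches stop helping, and the 1GB read finishing in 0.58s (roughly 1.8GB/s) is clearly coming from RAM rather than the disks. A few lines of Python reproduce the reported rates (sizes and times copied from the dd output above):

# Effective write throughput of each dd run above (bytes and seconds as reported by dd).
runs = [
    (1_048_576_000, 4.32606),       # 1GB run   -> ~242 MB/s
    (10_485_760_000, 60.6734),      # 10GB run  -> ~173 MB/s
    (104_857_600_000, 666.229),     # 105GB run -> ~157 MB/s, closest to the array's sustained rate
]
for size_bytes, seconds in runs:
    print(f"{size_bytes / 1e9:6.1f} GB in {seconds:8.2f}s -> {size_bytes / seconds / 1e6:5.1f} MB/s")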
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
It's amazing what the price difference is between 12, 24, and 32GB of RAM. With current usage we're lucky to utilize more than 8GB, but the customer wanted extra headroom for future applications and growth, so the cost of 24GB was justifiable for them.

Having three identical servers makes it easy to swap roles in case of failure or other maintenance.

Under normal usage we're seeing a few hundred IOs/sec, though I've seen it get into the thousands.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
I'd be the last one to argue about the failure of hard drives to compete with solid state for access time.

I would take issue with Mark Turner, though, on the difference between RAID 0 and a single drive, and on how much it disappoints.

We have two machines here, one booting from a single 15K Cheetah at about 80MB/sec and mine booting from two in RAID 0 at about 110-120MB/sec, and I will say there is a really considerable difference: the RAID 0 setup is both snappier and better at handling multiple tasks than the single-drive setup.

I originally tried booting off a single SCSI disk on this machine, using a Supermicro 5-drive SCA box, but was disappointed with the boot time. I striped two drives, then tried three, and found two to be the best solution, with the third used for a pagefile. The machine has two gigs of RAM, dual 2.8GHz Xeons, and an LSI single-channel 320-1 RAID card.

For a responsive boot drive, I've found two Seagate Cheetahs to be the best. I've also tested up to four drives and found that access times suffer. I suspect one of the major benefits is the 64MB cache on the LSI card plus the doubled drive cache, giving roughly 80MB of solid-state RAM for the system to store information in; having the pagefile minimized and on its own disk can't hurt, either.

Greg
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
I've found that with Atlas 15K IIs in RAID 0, a single U160 channel is not enough for them.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
U320 seems to handle the throughput pretty well. The last test was around 110-120MB/sec, IIRC.

The LSI MegaRAID was recommended, and tested, by a few folks here; I think it was Splash, but I'm not sure.

The end result is that the card is capable of running enough drives to move about 300MB/sec. That would take a RAID 0 array of something like 16 drives or more.

Practically, the combination of the removable SCA drive box and the single-channel 320 card with its 64MB of RAM is a nice, reasonable point for a workstation RAID setup.

Two drives is about as far as I want to go for an OS RAID 0, due to the reliability factor. I'm already using refurbed drives, and though I've had excellent luck, eventually one will fail.

The dual drives seem to give great throughput, plenty of cache, and a good speed jump over a single drive.

Going to dual-channel would require either two of the 320 cards with two SCA boxes, or going back to the dual-channel setup I used on this motherboard in the first place: the POS Adaptec 2010S card, which limited throughput to about 90MB/sec no matter how many channels (two, actually) or how many drives I used; I did test it with a four-drive RAID 0 array.

If I wanted dual-channel RAID, I could go to two SCA boxes combined with two 320 cards, but then I'm into another box and another machine. No room for that in my Yeow cube.

Also, given when the board was designed and made, I'm not really sure what the actual chipset PCI bus limit is on the Supermicro X5DA8. While the onboard Adaptec channels claim U320, IIRC, that doesn't mean the chips they used, or the pathways on the motherboard, can actually handle that.

Has anyone tried maxing out the X5DA8 for SCSI PCI throughput?

Greg
 