PCI-X SATA 2.0 RAID controller?

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
Hi
I was considering setting up a RAID 0 of X25-M drives in a 64-bit 133/100/66 MHz PCI-X (3.3V) slot.

Probably 4 drives, eventually. Any suggestions?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Greg, if you have a PCI-X slot, you can probably find any number of PCI-X SAS controllers on eBay for under $100.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
Anyone actually used one, tested it, and had good numbers using SSDs?
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
Well, let's see if the 3ware 3.00.04.070 driver for the 9550S is any faster than the 2005 driver I was using...

HDTach results:
Still stuck at 61.4 MB/sec...

So the answer is no...
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
I kind of figured that. 8 × 60 matches the results I saw in reviews. Looks like that's the limit per channel.

So, LSI makes a 4-port, 133 MHz, 64-bit PCI-X SAS card. Wonder if that would work?
Around 150 dollars...
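For what it's worth, that per-channel arithmetic can be sanity-checked against the slot itself (a rough sketch; `bus_peak_mb_s` is a hypothetical helper, and the figures are theoretical peaks before protocol overhead):

```python
# Theoretical peak of a parallel bus: clock (MHz) x width (bytes).
def bus_peak_mb_s(clock_mhz: float, width_bits: int) -> float:
    return clock_mhz * (width_bits / 8)

# A 133 MHz, 64-bit PCI-X slot:
print(bus_peak_mb_s(133, 64))  # 1064.0 MB/sec theoretical peak

# Eight channels at the observed ~60 MB/sec ceiling:
print(8 * 60)  # 480 MB/sec -- still well under the slot's peak
```

So even eight channels pinned at 60 MB/sec sit well under what the slot can move; the ceiling is the controller's per-channel limit, not PCI-X.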
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
As has been pointed out in previous threads, the Dell PERC 5/6 cards are cheap, work well, and are available on eBay as system pulls. The Dell PERC cards are rebadged LSI cards, with perhaps some slightly different firmware; they work fine with LSI drivers/software.
 

MaxBurn

Storage Is My Life
Joined
Jan 20, 2004
Messages
3,245
Location
SC
But the PERC 6/i is PCIe, an x8 I think. It would still likely be better to do that and a new motherboard rather than shop for old stuff.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
The real problem is finding a PCI-X RAID card with channels fast enough to even make it worth RAIDing the SSDs.

Apparently the 9550SXU is 200% faster or more than my 9500. Since the 9500 does 62 MB/sec, that should mean around 190 MB/sec per channel. Problem is, they are 300 dollars, and, with SATA's new standard, that 300 would buy a very nice motherboard that might well get near 1 gig per second, or more, with 4 of those Vertex Turbos...

Anyone have a motherboard with one PCI-X slot and onboard SATA 6Gb/s RAID?
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
No one has onboard SATA 6Gb/s yet. And the odds of that including PCI-X are massively low.

I talked to a guy at Supermicro, and they had a couple of boards with advanced features and PCI-X, but none with SATA 6Gb/s yet.

Despite the current thinking, totally leaving behind stuff that isn't really that bad, like SAS/SCSI 320, doesn't seem like a wise thing to do, since much of that hardware still works very well, and forced upgrades sometimes just don't happen.

While I see the appeal of the SSDs, they aren't THAT much faster than my current SCSI setup. However, with the Vertex Turbo pricing, they are now FAR more cost effective, provided you have a motherboard with onboard SATA 3Gb/s or better. The slow RAID setup I had going was still faster, not hugely, but noticeably, than the dual Cheetah boot array on a MegaRAID 320-1 card.

Compare 150 for a single Vertex Turbo with 1000 bucks for a RAID card, cables, termination, and SCSI drives, and the Vertex is far more reasonable.

I really wonder:

If, for almost 12 years, I've advocated access time over sustained throughput, i.e., the access time of my old Cheetahs was far faster than the IDE drives of the day, and it was clear back then, and showed up the entire time, that access time was FAR more important than sustained data transfer rates, wouldn't folks think that my ultimate solution and goal would be RAM-based storage, or what we have now in SSDs?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
While I see the appeal of the SSDs, they aren't THAT much faster than my current SCSI setup.

Yes they are. They really are. If you think access time is important, you want SSDs. There is no other game in town. Every time we tell you this, you come back with anecdotes about the data transfer rate of your SCSI shit, which has nothing to do with access time.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
I have to go with Merc on this one. If you want fast drives for OS/Apps, nothing compares with SSDs. They are the fastest game in town many times over. Older benchmarks might not show it, but they just are.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
OK. My perception is based on using the RAID 0 Vertex Turbo setup on the 9500S.

The writes are SO slow, and the reads aren't that great, so in the tasks I've done with that setup, they just don't seem THAT much faster than the SCSI setup.

The writes are 20 MB/sec, about 1/5th of the SCSI setup, and when installing or transferring files, much of what I've been doing lately, it's no fun.

Speaking of which, before I get really nasty, I just realized that I'm angry because I'm setting up burial arrangements for my mother, so I'm going to let this one go...

So far, the Vertex does make a great drive for a pagefile...
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
Get a decent mobo. A single X25-E is so much faster on ICH10, in both R/W IOPS and access times.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
I decided to go with a 9550SXU and wait for SATA 6Gb/s motherboards before I do the full monty.

The two to four Vertex Turbos should come in around 300-500 MB/sec on that controller, I hope.

That should be enough to hold me over until SATA 6Gb/s is pretty common.

My goal is a boot array with around 1 gig per second data transfer, and a motherboard and processor that can do it.
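Assuming ideal striping, that 300-500 MB/sec ballpark can be sketched as a min() of drive, channel, and bus limits (`raid0_estimate` is a hypothetical helper; the ~190 MB/sec channel figure and ~250 MB/sec per-drive figure are the estimates floated earlier in the thread):

```python
# Ideal RAID 0 sequential estimate: each drive is capped by its
# controller channel, and the whole array is capped by the bus.
def raid0_estimate(n_drives: int, drive_mb_s: float,
                   channel_cap: float, bus_cap: float) -> float:
    per_channel = min(drive_mb_s, channel_cap)
    return min(n_drives * per_channel, bus_cap)

# ~190 MB/sec per 9550SXU channel, ~250 MB/sec per Vertex Turbo,
# ~1064 MB/sec PCI-X 133/64 bus:
print(raid0_estimate(2, 250, 190, 1064))  # 380 MB/sec, ideal case
print(raid0_estimate(4, 250, 190, 1064))  # 760 MB/sec, ideal case
```

Real arrays fall short of the ideal line, but even the four-drive estimate stays under the PCI-X bus ceiling.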
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
It's real clear with mine that the processors are capable of handling pretty much everything I throw at them, but, despite the SCSI, I still have a huge amount of room to move data more quickly. The 4-port card, if it is 200% faster, should give me at least 300 MB/sec and, hopefully, somewhat decent write speeds.

The 9500 is SO slow in write speeds, and about 50% slower than the SCSI drives in reads, that it takes away the gains made in access times, in perception and overall experience.

I did notice the much faster load times, and everything did happen a bit quicker from command to load, so I know what you both are talking about. In other words, working on the double Vertex Turbos was faster in some ways and MUCH slower in others, and the overall experience leaves me seeing the potential, hence trying the newer card.

Once I have the controller, adding a couple more 30 gig Vertex Turbos should be fun, and interesting...

I also like having the SCSI MegaRAID 320-1 card in the same box, with a backup working OS on it. That means if something doesn't work, or goes south, I just switch the MegaRAID card back into the faster PCI-X slot, and I have a very quick, working system again...

My goal system will be similar to David's, maybe, with a RAID 0 SSD setup in the 1 gig per second area, multiple processors and cores, maybe Xeons, maybe not, PCI-X for the SCSI stuff, and SATA 6Gb/s to connect the SSDs to.

When I started looking at the price of the hardware, I realized now is not the time to upgrade, when the SSDs are capable of exceeding the current SATA standard...

I do like the idea of picking up a high quality motherboard.
I think it's kind of neat that my Supermicro X5DA8 supports USB 2, even though that was not at all commonplace at the time I bought it, and I like how well everything has held up, speed-wise, over time.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
I really believe that you will get a better average performance over the lifetime of the system if you spend half as much as you tend to and upgrade half as often. Of course, this equation depends on your budget. But there is no way that a Supermicro board and enterprise-grade stuff plots anywhere close to the price/performance curve. If you're curious where that curve is, you can plot it from the systems Merc and I have (i7 @ 4GHz, 12GB RAM, SSDs in RAID, ATI 5870) through the (likely, just a guess) average system here (Core2Quad, 4GB RAM, SSD) to the value machines that Coug, Tannin, and Merc build (Core2Duo/AMD, 2-3GB RAM, 7200RPM).

If you are following the price/performance curve up, I assure you, you are in $5k+ land before multiple CPU sockets are involved. Hell, my system cost nearly $5k, and the next grand will be water cooling before I think about adding another CPU.

If you are doing it for the geekery factor, then I completely understand. I have a dual-socket ASUS board with a pair of Xeons and 12GB of RAM on my workbench right now because I just had to play. But even though it would cost me nothing but time to stick it in my workstation, I'm not going to do it, because it would be slower.

When you see processing power left over, I wonder if it is because the apps you are running are single-threaded? Is a single core pegged? I've attached a screenshot of one of my machines that looks like it has plenty of power left, but it is maxed out, and will be for another 2 days.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
David: Thank you for the input. My life kind of seems weird, but, sometimes I have a bunch of money, most of the time very little.

The first upgrade was done at a good time, and it worked well. The system has worked fine, lasted nearly 10 years, and seems to be just fine for what I do with it.

I suspect that with the right RAID card, these Vertex drives will max out the 133 MHz/64-bit slot, but we'll see what that takes.

I've had a long look at your system, and your motherboard is what, 320 bucks? I really like it, but I want something that will fully support a 1 gig RAID 0 SSD setup with the onboard chipset, read SATA 6Gb/s.

If I take that, plus the real value i7, about 300 last time I priced the 920, plus the RAM, video card, etc., I'm looking at 1 grand. Not that I don't have it; it's just that right now I'd rather spend 300, get a great RAID card, and wait for the next generation of motherboards.

That said, to be real, this system is so much faster than the other two in the house, Athlon 3200s, etc. with nice components, that I'm yet to be convinced anything I do requires anything more than faster subsystem stuff.

Once I get 2-4 Vertex Turbos up in RAID 0, we'll have a serious conversation about needing more processor power.

I REALLY value your opinions, but, for what I do, the only program that maxes the processors is DVD Shrink. Other than that, nothing I do really stresses this setup.

Now, if it gets data 5 times faster, which the SSDs should do, that might change...
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,729
Location
Horsens, Denmark
Greg, a Vertex Turbo gets close to SATA 300MB/s, but doesn't beat it. SATA6 won't get you anything using Vertex Turbos. The bottleneck will be in the controller or northbridge. Somewhere around here I posted some graphs of 3 Vertex Turbos on an ICH10R, and the speed was limited by the controller. I've also put bunches of SSDs on top-end $1k+ RAID controllers by Areca and 3ware without going over 1GB/s. Just saying, you may need to go to some expensive lengths to manage that particular number.

Really what we are waiting for is the RAID card mfgrs to catch up with the SSD revolution.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
There's not a subjective difference for me from one X25 to two in RAID0 for anything I'd do on a desktop. It's nice for playing with a local copy of a .vmdk or a .tib or a .vhd, but those things tend to be network-limited anyway, at least in how I normally interact with them.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
Greg, a Vertex Turbo gets close to SATA 300MB/s, but doesn't beat it. SATA6 won't get you anything using Vertex Turbos. The bottleneck will be in the controller or northbridge. Somewhere around here I posted some graphs of 3 Vertex Turbos on an ICH10R, and the speed was limited by the controller. I've also put bunches of SSDs on top-end $1k+ RAID controllers by Areca and 3ware without going over 1GB/s. Just saying, you may need to go to some expensive lengths to manage that particular number.

Really what we are waiting for is the RAID card mfgrs to catch up with the SSD revolution.

To go a bit further, the onboard chipsets and RAID controllers need to be up to the task as well.
One of the issues I have noticed is that with the same drives in the two other machines I have, the user experience just isn't as fast on the 120-dollar-or-so Gigabyte motherboards vs. the Supermicro X5DA8. Since my days with Dell and Apple, I still remember motherboard and RAID card makers using chips that limit throughput, and that being the bottleneck of the system. It's clear that when 3ware made the 9500, 63 MB/sec a channel was all they needed with the then-available drives.

When the Velociraptors started coming around, they came out with the 9550, which has faster throughput and is supposed to be 200% faster.

One technique I've considered is staying one step behind you, David, since you actually test this stuff, and it's not theory, but actual experience. The other that has worked for me is buying more expensive motherboards that have faster throughput.

I am getting the urge to put one of those Vertex Turbos in the HTPC as a boot drive, on the onboard channel of the Gigabyte motherboard. Wonder if that setup is up to a 250 MB/sec drive?
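Whether an onboard SATA port can keep up with a ~250 MB/sec drive is easy to estimate (a sketch; assumes SATA's 8b/10b line encoding, i.e. 10 bits on the wire per data byte, with `sata_effective_mb_s` as a made-up helper):

```python
# SATA effective throughput after 8b/10b encoding:
# each data byte costs 10 line bits.
def sata_effective_mb_s(line_rate_gbit: float) -> float:
    return line_rate_gbit * 1e9 / 10 / 1e6

print(sata_effective_mb_s(1.5))  # 150.0 MB/sec -- a 1.5Gb/s port would choke
print(sata_effective_mb_s(3.0))  # 300.0 MB/sec -- a 3Gb/s port has headroom
```

So a 3Gb/s onboard port should handle a single Vertex Turbo; the question is whether the chipset behind it actually delivers that.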
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
30 gigs is just a bit small for my OS and programs.

That was my main interest in getting two, and finding a SATA 3Gb/s card that gets close to 300 MB/sec in PCI-X seems to be more difficult than one would think.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
The other that has worked for me is buying more expensive motherboards that have faster throughput.

As we keep trying to tell you, the enterprise-grade stuff isn't worth the money for what you're doing. You're far better off buying three generations of commodity hardware than trying to hold on to decrepit workstation-grade equipment, which is what you seem to want to do.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
As we keep trying to tell you, the enterprise-grade stuff isn't worth the money for what you're doing. You're far better off buying three generations of commodity hardware than trying to hold on to decrepit workstation-grade equipment, which is what you seem to want to do.

Yes, that is what we are saying. Even I only keep a system board about 18 months.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
As we keep trying to tell you, the enterprise-grade stuff isn't worth the money for what you're doing. You're far better off buying three generations of commodity hardware than trying to hold on to decrepit workstation-grade equipment, which is what you seem to want to do.

I'm kind of curious about this comment. If you take a home office deduction, which I do, isn't the money you spend on your home office computer money that reduces your tax burden, yet gives you something you really enjoy working on?

IIRC, my system was depreciated over 5 years. I'm not sure how he deducted the additional office components...

Also, considering how fast the entire setup was compared to commodity hardware in the late 90s and around 2000, how can you make the judgment that it wasn't worth the money to me?

Commodity hardware may have gotten faster in the last 10 years, but it still has major flaws. SATA drives are just now approaching the speeds of the 15K SCSI drives, yet still REALLY suck for access time. SSDs are not yet mainstream and cheap enough, but they will clear up a HUGE bottleneck in systems.

For what I do, the ATI 4670 AGP card is just fine.

What I will give you is that the marketing approach to commodity hardware has certainly changed. AMD and Intel seem to come up with a different socket for the same processors each week. It certainly appears that while CPUs and motherboards have progressed, the hard drive systems have been moving at a slower pace. Certainly, if not for SSDs, access time would only be decent in the WD 10K drives, since every other commodity drive is 7200 RPM or slower.

If I were going to balance a system as a line graph, I'd have an inverted bell curve, with the processors and RAM at the top, connection and data exchange at the lower end, and hard drive progress at the other end. I think the more you move to enterprise, the flatter and better balanced the line becomes, since the components are more capable across the board.

I also find it kind of ironic that you tell me my enterprise board is antiquated, yet we still haven't even maxed out the 133 MHz/64-bit PCI-X slot with data transfer.

The 9550SXU is the fastest SATA PCI-X card around, generation 7 as they put it, without going into Davidland, insane controller costs. When I've managed to flood that, and found a limit either in the card or the motherboard, then I'll upgrade, when SATA 6Gb/s is commonplace...
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
I'm kind of curious about this comment. If you take a home office deduction, which I do, isn't the money you spend on your home office computer money that reduces your tax burden, yet gives you something you really enjoy working on?

I don't take a home office deduction because I don't have a home office. For that matter, I don't think it's a moral imperative to reduce my tax bill, because I understand the value of the services I get from my governments and I am willing to pay for them.

I don't want to deal with the aggravation of trying to shoehorn hot, noisy and old computer hardware that was probably made to sit in a datacenter into whatever the current needs are for the state of the art. My file servers and game systems are built with commodity parts. I can easily swap parts in and out. I don't need special chassis, power supplies or disk controllers. When some new state of the art thing (USB 3.0, SATA 6Gbps) comes along, I don't have to think twice about migrating to that new hardware.

You're paying an enormous price premium for something that might be subjectively faster than commodity hardware for a short period of time. The inexorable progress of Moore's Law means that your $2500 worth of workstation hardware will be outclassed by a $400 budget machine two years later. That's fine and well and good if you need $2500 worth of computer right now, but you as an individual just don't.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I don't want to deal with the aggravation of trying to shoehorn hot, noisy and old computer hardware that was probably made to sit in a datacenter into whatever the current needs are for the state of the art. My file servers and game systems are built with commodity parts. I can easily swap parts in and out. I don't need special chassis, power supplies or disk controllers. When some new state of the art thing (USB 3.0, SATA 6Gbps) comes along, I don't have to think twice about migrating to that new hardware.

You're paying an enormous price premium for something that might be subjectively faster than commodity hardware for a short period of time. The inexorable progress of Moore's Law means that your $2500 worth of workstation hardware will be outclassed by a $400 budget machine two years later. That's fine and well and good if you need $2500 worth of computer right now, but you as an individual just don't.

Well stated, as usual. :)
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
Enterprise is subject to careful shopping, as is commodity.
I paid 500 for a Supermicro motherboard: a 300 dollar board plus SCSI onboard. Same as a commodity mobo with a SCSI RAID card? No, the commodity route was 300 plus 300 for the SCSI card. Advantage enterprise.

Xeon 2.8 GHz, about 250 dollars. Comparable to, or cheaper than, some of the fastest commodity processors at the time, since the super fast ones had Intel super pricing, double what the 2.8s went for. Advantage enterprise.

RAM went to commodity, but not by a lot.

SCSI box for 5 drives, hot-swappable: NOT AVAILABLE FOR commodity. Advantage enterprise. 18-drive R2D2 case: near the same price as a good gaming box, but with capacity for 18 drives. Enterprise, clear advantage.

Seasonic is my choice for power supplies, period.

Video cards at the time: same for both, AGP, didn't matter.

USB 2 standard on enterprise, not featured on commodity at the time.

If you want to waste money, you can do it on either commodity or enterprise.

I don't think I wasted a dime, since for 10 years my dual setup has been faster than either of the commodity boxes I built 4-7 years after.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Enterprise is subject to careful shopping, as is commodity.
I paid 500 for a Supermicro motherboard: a 300 dollar board plus SCSI onboard. Same as a commodity mobo with a SCSI RAID card? No, the commodity route was 300 plus 300 for the SCSI card. Advantage enterprise.

Even 10 years ago, you could have purchased a $200 SCSI controller and moved it from $100 motherboard to $100 motherboard and had a series of systems that were cheaper and faster than what you bought.

Over time that 10 year old board has brought about a high opportunity cost, since you've been stuck with SCSI components, AGP graphics and possibly some exotic buffered or registered DRAM while being unable to take advantage of PCI express and lower-latency DDR2 and DDR3. I wouldn't call that an advantage.


Xeon 2.8 GHz, about 250 dollars. Comparable to, or cheaper than, some of the fastest commodity processors at the time

Generally speaking, entry-level Xeon CPUs cost about the same as the upper-middle mainstream chips, and have a high cost barrier to entry in the form of needing a $300+ system board. I don't feel like looking up historical pricing.
Ram went to commodity, but not by a lot.

I can certainly understand the appeal of multiprocessing, if your system was set up to do it. 10 years ago, it was possible and undoubtedly cheaper with Thunderbird Athlons and later with affordable Opteron hardware, even if you didn't want to move to the architecturally more-efficient Athlon X2s and Core-series CPUs, especially compared to NetBurst-era Xeons.

SCSI box for 5 drives, hot-swappable: NOT AVAILABLE FOR commodity.

I've been buying SATA hardware that's capable of hot-swap and external interface with no price premium compared to other commodity hardware since, hm, late 2003?

I will certainly admit that there is an access time advantage in 10k and 15k SCSI hardware, and that it resulted in a subjective performance improvement, but at this point there's no reason to consider high-RPM SCSI unless the number of read-write cycles exceed the expected life of an SSD or array of SSDs.

Video cards at the time, same for both, AGP, didn't matter.

No, but they sure as hell have changed in the time since, and you've missed out on all of it. Modern hardware decodes modern video codecs like H.264 and VC-1, the stuff that's used on Blu-ray discs, so that your CPUs aren't running at 95% utilization to play back a movie, and that's putting aside the nearly exponential improvement in 3D capabilities.

USB 2 standard on enterprise, not featured on commodity at the time.

This is patently false. The Via KT333 chipset could not have had a more humble origin, but it was paired with a southbridge that supported USB 2.0 and was the first shipping hardware to add motherboard support for USB 2.0. Intel might not have had its act together at that point, but if you're using USB2.0 as a justification for not buying commodity Intel, you should have taken a look at what those funny AMD people (who had SMP for Thunderbird Athlons back then) were doing, instead.

If you want to waste money, you can do it on either commodity or enterprise.

Well, yeah, but doing it with Enterprise hardware is generally the way to do it faster.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
Even 10 years ago, you could have purchased a $200 SCSI controller and moved it from $100 motherboard to $100 motherboard and had a series of systems that were cheaper and faster than what you bought.

NO. THAT'S THE THEORY, BUT MY EXPERIENCE SUGGESTS THAT THE QUALITY OF AN ENTERPRISE BOARD IS LIKELY TO INCLUDE FASTER CHIPSETS AND BETTER CONNECTION MATERIALS, RESULTING IN CLOSER TO MAXIMUM THROUGHPUT ON THE STANDARD IN QUESTION. REFERENCE DELL AND APPLE, WHICH DID 75 MB/SEC ON A 133 MB/SEC STANDARD.

Over time that 10 year old board has brought about a high opportunity cost, since you've been stuck with SCSI components, OHHH MY GOD, HOW AWFUL.
YOU SCREAM ABOUT THE MERITS OF SSD ACCESS TIME, YET CAN'T OR WON'T RECOGNIZE THAT 10 YEARS AGO, 15K DRIVES HAD ACCESS TIMES 4-5 TIMES FASTER THAN IDE, NOT TO MENTION THROUGHPUT 4 TIMES FASTER.
AGP graphics and possibly some exotic buffered or registered DRAM while being unable to take advantage of PCI express and lower-latency DDR2 and DDR3. I wouldn't call that an advantage.
MY CURRENT AGP CARD IS A 4670 WITH 1 GIG OF DDR3 RAM. NOW WHAT'S THAT ADVANTAGE?



Generally speaking, entry-level Xeon CPUs cost about the same as the upper-middle mainstream chips, and have a high cost barrier to entry in the form of needing a $300+ system board. I don't feel like looking up historical pricing.
THAT'S OK. I'LL REFRESH YOUR MEMORY. THE 2.8 XEONS, TWO OF 'EM, ARE STILL FASTER THAN THE 3.2 AMDs IN THE OTHER TWO MACHINES, ONE BEING SCSI 10K, THE OTHER SATA.


I can certainly understand the appeal of multiprocessing, if your system was set up to do it. 10 years ago, it was possible and undoubtedly cheaper with Thunderbird Athlons and later with affordable Opteron hardware, even if you didn't want to move to the architecturally more-efficient Athlon X2s and Core-series CPUs, especially compared to NetBurst-era Xeons.
NO, THE MOTHERBOARDS FOR THE OPTERONS WERE NOWHERE NEAR SUPERMICRO QUALITY, AND THEY CERTAINLY JUSTIFIED YOUR POSITION THAT IT'S EASY TO SPEND A LOT OF MONEY ON ENTERPRISE STUFF...


I've been buying SATA hardware that's capable of hot-swap and external interface with no price premium compared to other commodity hardware since, hm, late 2003?

GOOD FOR YOU. YOU EXPOUND THE MERITS OF SSDs, YET DENIGRATE SCSI, WHICH HAS 3-6 TIMES BETTER ACCESS TIMES; THE POINT YOU MAKE IS WHAT MAKES SSDs SO FAST.
SHAME ON YOU.

I will certainly admit that there is an access time advantage in 10k and 15k SCSI hardware, and that it resulted in a subjective performance improvement, but at this point there's no reason to consider high-RPM SCSI unless the number of read-write cycles exceed the expected life of an SSD or array of SSDs.

SCSI DRIVES LAST LONGER, ARE FAR FASTER, AND, THANKS TO REFURBS, ARE TESTED BEFORE BEING SOLD, SOMETHING THAT IS NEVER DONE WITH SLOW SATA.


No, but they sure as hell have changed in the time since, and you've missed out on all of it. Modern hardware decodes modern video codecs like H.264 and VC-1, the stuff that's used on Blu-ray discs, so that your CPUs aren't running at 95% utilization to play back a movie, and that's putting aside the nearly exponential improvement in 3D capabilities.
SORRY. MY AGP CARD, THE 4670, DOES ALL OF THE ABOVE. SAD TO SAY, BLU-RAY DOESN'T NEED THAT MUCH THROUGHPUT; USB 2 IS ENOUGH.


This is patently false. The Via KT333 chipset could not have had a more humble origin, but it was paired with a southbridge that supported USB 2.0 and was the first shipping hardware to add motherboard support for USB 2.0. Intel might not have had its act together at that point, but if you're using USB2.0 as a justification for not buying commodity Intel, you should have taken a look at what those funny AMD people (who had SMP for Thunderbird Athlons back then) were doing, instead.
YOU MAY BE RIGHT, BUT I'M NOT WASTING TIME ARGUING.


Well, yeah, but doing it with Enterprise hardware is generally the way to do it faster.

NOT IF YOU'VE GOT A BRAIN, A COMPUTER, AND YOU SHOP.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
"Even 10 years ago, you could have purchased a $200 SCSI controller and moved it from $100 motherboard to $100 motherboard and had a series of systems that were cheaper and faster than what you bought.

Over time that 10 year old board has brought about a high opportunity cost, since you've been stuck with SCSI components, AGP graphics and possibly some exotic buffered or registered DRAM while being unable to take advantage of PCI express and lower-latency DDR2 and DDR3. I wouldn't call that an advantage."

The illusion of low cost SCSI RAID controllers for high performance drives is just that. At the time I purchased my system, you had to get into the 300-plus-dollar range of SCSI controllers to get full-speed throughput for 15K drives, which, at the time, were as much of an improvement over IDE as SSDs are over current drives.
Recently, cables, cards, and drives have come down so much in price that a single SCSI drive made an excellent boot drive, that is, until SSDs came to town and the prices started dropping.

I only have one program that consistently uses the processing power of dual Xeons, though it might be fun to try some games on this machine, soon.

The reason for considering SCSI is that Seagate takes the time to actually refurbish SCSI drives, and, they are both tested, and cheap.

Sechs:

I'm not in the least bit unhappy with this setup. I've been using RAID 0 15K Seagate Cheetahs, with incredible speed, since around 1997. Merc certainly had a point that at that time 500 dollars for a 4 gig drive was a bit much, but the SCSI setup at least worked, something the cheap Promise RAID never did; in the end, that was a 2 grand waste of time, money, and energy.

I guess that's the final straw for me. The stuff I've wanted to do with computers has usually been either hampered, or impossible to do with commodity stuff, or the commodity stuff required to do it was just as expensive as the enterprise stuff.

I have two other commodity boxes in the house, and they are not 'cheap'. They work, and have PCI-E and Velociraptors, thanks to David, but when you start adding quality components, the kind of stuff I like to use, I start looking at a cost similar to what it would cost me to use enterprise stuff.

Both of those boxes are stuck at Athlon 3200s and Socket 939. That brings up another issue: the quick, constant socket changes that make processor upgrades difficult, if not impossible, with commodity boards. Instead, I've spent 600 dollars on processors instead of 300, I have been delighted with the results for 10 years, and I'm still amazed at what this machine will do. I really don't have anything that even stresses it much, and, so far, no reason to upgrade the motherboard or processor.

As Sam has pointed out, how often do you really need more than 2 gigs of RAM? And 64-bit doesn't appear to be much of an advantage with XP Pro anyway; in fact, I might have driver problems.

If the 9550SXU card sucks with the Vertex Turbos, then I have a solid reason to upgrade.

Buying the 9500 is actually a pretty good example of what happens when I try to go cheap with commodity hardware: I end up with stuff that doesn't do what I need it to do, wasting the first purchase, selling it on eBay, and buying something that does what I want second.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,275
Location
I am omnipresent
Greg, the advantage of SCSI in terms of performance on a desktop system is about 99% due to the lowered access times available to 10k and 15k spindles. Once again, as I have attempted to stress to you repeatedly, improvement in access time = subjective improvement in performance. Data transfer speeds have very little to do with subjective improvement of performance.

If you stuck the right combination of adapters on an X15 and then attached that drive to a 16-year-old Adaptec 2940UW (PCI Ultra Wide SCSI, 40MB/sec), I'm absolutely certain that you would not be able to subjectively tell that the SCSI controller was impacting the performance of the system, because the key benefit of that drive is the 3ms access time, which is not affected by anything the controller is actually doing.

I understand being defensive about the equipment you've bought and the money you've spent, but I am trying to help you understand some things.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,278
Greg, the advantage of SCSI in terms of performance on a desktop system is about 99% due to the lowered access times available to 10k and 15k spindles. Once again, as I have attempted to stress to you repeatedly, improvement in access time = subjective improvement in performance. Data transfer speeds have very little to do with subjective improvement of performance.

If you stuck the right combination of adapters on an X15 and then attached that drive to a 16-year-old Adaptec 2940UW (PCI Ultra Wide SCSI, 40MB/sec), I'm absolutely certain that you would not be able to subjectively tell that the SCSI controller was impacting the performance of the system, because the key benefit of that drive is the 3ms access time, which is not affected by anything the controller is actually doing.

I understand being defensive about the equipment you've bought and the money you've spent, but I am trying to help you understand some things.

Sam: we have been on the net for what, 13 years now?

Please remember, I have done exactly what you are talking about, and I'm the one who is an access time advocate. Remember all those battles I had with Eugene?

Anyway, here's my take.
The 9500 was so slow in data transfer that it managed to offset the access time decrease the SSDs had. 20 MB/sec was so bad it made the system crawl.

I've just spent about 3 hours reinstalling and testing the two Vertex Turbos on the 9550SXU. I'm getting insane numbers, like 290 MB/sec writes with nearly 400 MB/sec reads, with two drives in RAID 0.

I'll get back to you on my take, but so far it's instant with some things and seems the same as SCSI with others.

I'm kind of wishing I'd bought 4 drives instead of two, but it's pretty clear that PCI-X isn't slow, and 400 MB/sec is not slow.

HDTach gives me 180 to 120 MB/sec.

Access is pretty much instant.

Certain programs load pretty much instantly, MS Word 2000 in particular.
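As a rough cross-check of those numbers (plain arithmetic, not a benchmark): 400 MB/sec from two drives implies ~200 MB/sec per drive, and the array is still using well under half of the slot's theoretical bandwidth:

```python
# Per-drive share and bus utilization for the measured RAID 0 reads.
array_read_mb_s = 400            # measured with two Vertex Turbos
per_drive = array_read_mb_s / 2  # MB/sec each drive contributes
bus_peak = 133 * 64 / 8          # PCI-X 133/64 theoretical peak, MB/sec

print(per_drive)                               # 200.0
print(round(array_read_mb_s / bus_peak, 2))    # 0.38 -- ~38% of the bus
```

Which suggests two more drives could roughly double the array before the PCI-X slot itself becomes the wall.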
 