Splash: What are you using for new systems at work?

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
blakerwry said:
::sigh:: remember paying $3000 for a 66MHz 486SX system...

The first system I ever got to play on regularly was a 386/33...It was the one computer in an office of 12 accountants, and all it did was print form letters...

I'd imagine all the work those 12 were doing is being done by one guy with a P90 right now.....
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Hmm, just sold my Dell Insiporn 8200 lappy, and may have the funds to go down the Opteron route. Definitely will be getting me one of those shiny K8W mobos, had good experience with the TigerMP and dual AthlonXPs. Might just rip out the mobo and cram the K8W in this Antec case, if the PSU will fit.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
Pradeep said:
Hmm, just sold my Dell Insiporn 8200 lappy, and may have the funds to go down the Opteron route. Definitely will be getting me one of those shiny K8W mobos, had good experience with the TigerMP and dual AthlonXPs. Might just rip out the mobo and cram the K8W in this Antec case, if the PSU will fit.

I've found some funds and am considering the same thing. Take a look at the "64-bit" thread....
 

Splash

Learning Storage Performance
Joined
Apr 2, 2002
Messages
235
Location
Seaworld
Santilli said:
...It's amazing how slow the clock speed is, yet how fast the results are with the Opteron. Why?...

I believe Monsieur Merc aptly shed some light -- just above -- on why.

The P4 is all about a deep, revved-up instruction pipeline; the P3, Centrino, and AMD64 are not. There's good and bad about the P4 approach. It's the Opteron's on-die memory controller that makes the biggest difference with today's software, as it beats the stuffing out of the current P4 + north bridge architecture. Intel's 90nm production process has also hit a thermal brick wall, and it will likely take them most of 2004 to iron out the kinks so they can get to 4.0 GHz.



64 bit? So I buy a mobo and single Opteron, at 1.4 ghz and it's as fast as a 3.0 ghz, Xeon???
Xeon is currently at 3.2 GHz. No, it's more like a 2.0 GHz or 2.2 GHz Opteron that's required to spank the fastest Xeon effectively.

But there's one other important point to remember about Opterons: I believe DDR-400 support (400 MHz memory channels) was quietly introduced starting with the 2.0 GHz model (246), though it could have been the 2.2 GHz (248). The slower (older) Opterons support 333 MHz memory channel speeds.
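For a sense of what those channel speeds mean, here is a quick sketch (Python) of the peak-bandwidth arithmetic. It assumes the standard 64-bit-wide DDR channel; these are theoretical peaks, not measured figures.

```python
# Peak bandwidth of one 64-bit DDR channel at the effective data rates
# mentioned above (DDR-333 vs DDR-400): transfers/s x 8 bytes per transfer.
# The Opteron's on-die controller runs two such channels, so the per-CPU
# figure doubles.

def ddr_channel_gb_s(effective_mt_s: float, width_bytes: int = 8) -> float:
    return effective_mt_s * 1e6 * width_bytes / 1e9  # GB/s

for rate in (333, 400):
    one = ddr_channel_gb_s(rate)
    print(f"DDR-{rate}: {one:.2f} GB/s per channel, "
          f"{2 * one:.2f} GB/s across both channels")
```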
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,273
Splash: When do you expect the prices to drop to reasonable for the 246 or 248?

Also, I'm kind of reluctant to be on the bleeding edge. How long before they get all the bugs worked out of the new boards, and the new cpu?

Thanks

gs
 


CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Santilli said:
How long before they get all the bugs worked out of the new boards, and the new cpu?
There are no more bugs on the Opteron platform than on any Xeon platform. We're not talking about Korean cars here; we're talking about processors designed with servers in mind.
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Hmm, now to decide between the K8W and K8S. Damn, if only they had the dual gigabit/single 100mbit networking and integrated U320 of the server mobo as an option on the workstation mobo, so I could also use a decent graphics card. My current Tekram U3D won't fit in those fancy 3.3V slots, so it would be crippled on a 32-bit PCI bus *shudders*
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
Pradeep said:
Hmm, now to decide between the K8W and K8S. Damn, if only they had the dual gigabit/single 100mbit networking and integrated U320 of the server mobo as an option on the workstation mobo, so I could also use a decent graphics card. My current Tekram U3D won't fit in those fancy 3.3V slots, so it would be crippled on a 32-bit PCI bus *shudders*

Or include a new controller as part of the upgrade...PCI-X is going to be around for some time to come IMO....
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,273
CougTek:

I understand the advantages, but on the last new release of chips, and mobos, the Tyan Thunder didn't exactly win any reliability contests.

I think a bit of patience always saves money, and headaches, with technology.

gs
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,273
Splash: Thanks for the info on the memory speed support. Always helps...

gs
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Hmm, bank just upped my CC limit. Coincidence, or destiny? Still, the price of the 246 is too high for my blood. Perhaps I'll just get a car and wait for PCI-Express.
 

Splash

Learning Storage Performance
Joined
Apr 2, 2002
Messages
235
Location
Seaworld
Santilli said:
Splash: When do you expect the prices to drop to reasonable for the 246 or 248?

The price of the bleeding-edge dualie Opteron (currently the 248) could go up or down. I wouldn't be a bit surprised if it were to go up US$10 ~ $40 if demand for it skyrockets.

Once the Opteron 250 makes it out of the oven (that should be sometime in the next 10 ~ 12 weeks), the price for the Opteron 250 could be even higher than the Opteron 248's current price, with the 248 staying in the US$900 range. If all goes well for the supply / demand ratio, I would suspect that the price for an Opteron 248 could be down to US$550 ~ $650 by June.



Also, I'm kind of reluctant to be on the bleeding edge. How long before they get all the bugs worked out of the new boards, and the new cpu?

I strongly doubt there are ANY major defects in any of the available Opteron steppings. The most recent stepping was to provide the JEDEC standards-based 400 MHz DDR (ECC) memory interface. Otherwise, the verification process was the likely reason the Opteron was so late getting out the door.



I understand the advantages, but on the last new release of chips, and mobos, the Tyan Thunder didn't exactly win any reliability contests.

I believe you are talking about the earlier (2002) Tyan dual-Athlon mobo -- the K7 Thunder. It was a bit of a mess.

In the midst of some excellent mobos, Tyan and Asus can make the occasional clunker, so a bit of prudence must be exercised when selecting a Tyan or Asus mobo. Supermicro, on the other hand, consistently makes top-notch mobo designs with a wide array of options, from minimal to stuffed-with-features, manufactured at a very high level of quality. Unfortunately, Supermicro's apparently staunch pro-Intel stance has painted them into a corner. So, unless Supermicro sees the light in 2004 and snaps to it, they probably won't get my GH Stamp Of Approval (GH = Good Housekeeping) for building high-end x86 photo / audio / video workstations. I might even use a Tyan K8 Thunder series board for a server at this point, but this would likely be a compute-intensive server such as a database server; otherwise I'm sticking with a Supermicro 2.4~3.0 GHz Xeon box, since I definitely know these exude stability and software / hardware compatibility.

If the Tyan K8W Thunder was a clunker, this fact would've surfaced by now. In my own snotty opinion, I would avoid the other available Opteron mobos (Iwill, Asus, MSI) at this time.



I think a bit of patience always saves money, and headaches, with technology.

Well, I thought you were originally talking about acquiring something before the end of 2003. Yes, you would definitely be a lot better off waiting until at least June 2004 before jumping into the dual Opteron waters. I don't think anything but the prices -- and which Opteron counts as bleeding-edge, speed-wise -- will change by then. I would suspect that DDR-400 memory will be about the same price, maybe a few pesos higher.

Beyond that, there will be the all-new 90nm Opteron microprocessors, which will -- of course -- be faster than ever :p , new mobo designs with PCI Express (probably for both 130nm and 90nm Opteron), and new faster memory (533 MHz or 666 MHz DDR memory channels).

If I were building a new dual Opteron box at this point in time, it would have the following core system specifications:

  • Tyan K8W Thunder mobo
  • 1- or 2-each Opteron 242 or 244 CPUs
  • 1 GB or 2 GB of DDR-400 RAM (ECC) using 512 MB DIMMs; 1GB for single Opteron system, 2GB for dual Opteron system
  • A dual-channel Ultra320 PCI-X SCSI host bus adaptor, such as the LSI Logic LSI21320-R

If you want to hear anything about what I would specify beyond these core specifications, then I can include that, but it may or may not jibe with what you want to use your system for. My needs would be for multiple operating systems, multiple tasks, ultra-flexible storage capability, etc.
 

Computer Generated Baby

Learning Storage Performance
Joined
Dec 16, 2003
Messages
221
Location
Virtualworld
ddrueding said:
...PCI-X is going to be around for some time to come IMO...

PCI-X will be around, but not as long as you might be thinking.

Even though 64-bit PCI-X is supposed to ramp up to 533 MHz over time (currently at 133 MHz), I rather doubt that PCI-X will EVER go beyond what it currently is in the marketplace in any significant numbers. Full-duplex PCI Express will take over BOTH the desktop and the server rather quickly. 1X PCI Express = PCI 64bit/66 MHz, except PCI Express is full-duplex. Server mobos will have 1X, 2X, and 4X PCI Express sockets. PCI Express also does not use a shared bus like PCI / PCI-X.

Conventional PCI and PCI-X will be supported for a while by a PCI Express -to- PCI(X) bridge once PCI Express debuts (late 2004 likely).

So, in the not too distant future, we'll have a full-duplex PCI expansion bus (PCI Express) and a full-duplex storage channel and hard drives (Serial Attached SCSI, or SAS).
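To put the "1X PCI Express = PCI 64-bit/66 MHz" comparison above into numbers, here is a back-of-the-envelope sketch (Python). It uses only the published theoretical rates, not measured throughput.

```python
# Back-of-the-envelope numbers behind "1X PCI Express = PCI 64-bit/66 MHz":
# published theoretical rates only, not measured throughput.

def pci_bandwidth_mb_s(bus_width_bits: int, clock_mhz: float) -> float:
    """Parallel PCI / PCI-X: shared, half-duplex, width x clock."""
    return bus_width_bits / 8 * clock_mhz  # 66.6 MHz gives the oft-quoted ~533 MB/s

def pcie_lane_mb_s(raw_gbps: float = 2.5) -> float:
    """One PCIe 1.0 lane, per direction: 8b/10b encoding eats 20%."""
    return raw_gbps * 1e9 * (8 / 10) / 8 / 1e6

pci_64_66 = pci_bandwidth_mb_s(64, 66)  # ~528 MB/s total, both ways share it
pcie_1x = pcie_lane_mb_s()              # 250 MB/s EACH direction

print(f"PCI 64-bit/66 MHz : {pci_64_66:.0f} MB/s (half-duplex, shared)")
print(f"PCIe 1X           : {pcie_1x:.0f} MB/s per direction "
      f"({2 * pcie_1x:.0f} MB/s aggregate, full-duplex)")
```

So the equivalence is loose but fair: a 1X lane's two directions together land in the same ballpark as the shared 64-bit/66 MHz parallel bus.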
 

Jan Kivar

Learning Storage Performance
Joined
Feb 3, 2003
Messages
410
Computer Generated Baby said:
ddrueding said:
...PCI-X is going to be around for some time to come IMO...

PCI-X will be around, but not as long as you might be thinking.

Even though 64-bit PCI-X is supposed to ramp up to 533 MHz over time (currently at 133 MHz), I rather doubt that PCI-X will EVER go beyond what it currently is in the marketplace in any significant numbers.
...
There is a need for spare parts for the current servers that run on PCI-X. Maybe there will be no more speed increases, but I'd expect at least five more years for PCI-X. But PCI-X will never become a bus for "normal" motherboards. Hopefully we'll see better than 1x PCI Express slots in "normal" motherboards (besides the 16x for the display adapter).

Cheers,

Jan
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,273
Splash: You are right. I changed my mind, since I fixed up my box and it's running OK right now.

It required Granite Digital cables, and I managed to get a Lite-On CD-ROM to install the OS properly.

Since it works right now, I'm going to wait and buy next year.

In another thread, you mention a Supermicro box, with SCA backplane.
Which one, and how do you like it?

Thanks

gs
 

Computer Generated Baby

Learning Storage Performance
Joined
Dec 16, 2003
Messages
221
Location
Virtualworld
Jan Kivar said:
There is the need of spare parts for the current servers that run on PCI-X.
If spare parts = newly-produced PCI expansion bus cards, then yes, those could be around for as long as five years (SCSI, RAID, Fibre-Channel, LAN cards). Otherwise... ... ...

...Maybe there will be no more speed increases, but I'd expect at least five years more for PCI-X...

... ... ...if PCI Express becomes a hit with the server crowd, PCI-X will last maybe about as long as EISA did after PCI was introduced, which may have been about 2 years at best. The benefits of the low-power serial point-to-point full-duplex PCI Express bus architecture over parallel shared-bus half-duplex PCI-X bus architecture are pretty clear.



But PCI-X will never become a bus for "normal" motherboards.

If Normal = "desktop" then, yes, the marketing departments of the vast majority of computer manufacturers never felt the need to have anything more elaborate than a common-as-dirt 32-bit/33 MHz PCI expansion bus in a desktop (a.k.a. -- home/office) computer, especially since the ATA and LAN interfaces were in the south bridge chipsets and graphics used AGP.



Hopefully we'll see better than 1x PCI Express slots in "normal" motherboards (besides for the 16x for the display adapter).

Desktop mobos will only have a single 16X PCI Express slot (for graphics) and some number (probably just 2, 3, or 4) of 1X PCI Express slots. I strongly doubt that this formula will change for YEARS as far as desktop mobos are concerned. A 1X PCI Express slot can handle GbE I/O with ease, so I wouldn't be alarmed at having "just" a 1X PCI Express slot. PCI Express is pretty damned fast and efficient (full-duplex), not to mention you can hot-plug PCI Express cards just like a PCMCIA card.

In fact, PCMCIA / CardBus will also be replaced by PCI Express on portable computers. You will have *direct* access to the notebook's PCI Express bus using plug-in "Express Cards" which are simply little PCI Express expansion cards. No more need for a bus (PCMCIA) talking to another bus (PCI) in a portable computer. Express Cards capability will even be used in some of your higher-end *desktop* computers!

Anyway, if you are in need of more PCI Express bandwidth for a SCSI or 10 GbE LAN, then you will have no choice but to go with a more expensive server or workstation-class mobo, which should be available within a few months after the introduction of PCI Express mobos for desktop computers. These should have a mix of 1X and 2X PCI Express slots, maybe a 4X slot, along with a single 16X slot for graphics. Over a bit of time, there may even be some high-end mobos with 2 16X PCI Express slots for graphics. In addition to the mix of 1X and 2X PCI Express slots, most of these server / workstation mobos will have 1 or 2 "legacy" PCI / PCI-X slots, but probably no more than 4 legacy PCI slots.

Now, just to make things a little bit confusing: well on down the road, when PCI Express is at mid-life (2008/2009?), there will allegedly be a *new* PCI Express clock speed defined at 2X the current clock rate! So, suddenly a "new school" 1X PCI Express slot will have 2X the bandwidth of the "old school" 1X PCI Express slot. But before any of that happens (2005/2006?), there will also supposedly be a 32X PCI Express slot introduced!
 

Computer Generated Baby

Learning Storage Performance
Joined
Dec 16, 2003
Messages
221
Location
Virtualworld
Santilli said:
In another thread, you mention a Supermicro box, with SCA backplane. Which one, and how do you like it?

Well, if you were starting from scratch TODAY (December 2003), you might go with either a 4-bay or 5-bay SCA SCSI mobile drive rack or a 4-bay or 5-bay SCA Serial ATA mobile drive rack (or one SCA SCSI and one SCA SATA) installed in a Supermicro SC-942 series chassis. Each mobile drive rack takes up 3-each 5¼-inch drive slots. The Supermicro SC-942 (comes in black or creme finish) has enough room for 2 mobile racks, along with a couple of 5¼-inch devices in the upper slots, and a provided front panel with a 3½-inch device slot and an external USB connector.

If you were starting from scratch in mid-to-late 2004, you might go ONLY with SATA / SAS mobile drive racks. A SAS (Serial Attached SCSI) host bus adaptor is *fully* compatible with both SAS hard drives and SATA hard drives. The future for ATA and SCSI is the SATA2 bus, with the SAS (full-duplex) controller talking to SATA (half-duplex) drives via its SATA Tunneling Protocol.





Of course, if you have SCA SCSI slots, you will have to procure new SCA SCSI hard drives. Since I believe you want to keep using all of your "old" hard drives, you would either need to mount your existing drives in the typical "fixed" manner into each 5¼-inch drive slot, or go with individual SCSI drive bays for "conventional" SCSI hard drives (SCSI drives with separate power and 68-pin data connectors). If you want the ability to easily remove or swap around a number of SCSI and/or ATA drives, you'll be best off with individual drive bays. If you only have a few drives, then you can go "fixed" like you are now (presumably). With just a few fixed drives, you wouldn't even need this Supermicro SC-942 case! But if you need pluggable drive capabilities AND the drive slot capacity, then read on.

In the case of going with individual SCSI drive bays: there aren't that many brands of SCSI drive bays, but I don't believe you can get as good a deal for the quality with any brand other than the Antec DataSwap KS895A -- which is an LVD / Ultra160-rated drive bay. Pretty much the same goes for Antec's parallel ATA (ATA-100-rated) drive bays.

So, you have 6 or even 8 available (or 9 if you remove the 3½-inch + USB front panel) 5¼-inch drive slots with the SC-942. Do you need 6 (or more) drive slots? You can get a CSE-942i-550B (black) or a CSE-942i-550 (creme) for US$287 (before shipping)...

http://www.thenerds.net/productpage.asp?un=170827&s=1

...which has a 550-watt PS/2 form factor non-redundant power supply (the PWS-046) with SSI-spec power connectors (24-pin main + 8-pin + 4-pin) and redundant cooling fans (only one runs at a time; when it fails, the other comes into service). Dual-Opteron and dual-Xeon mobos all require a power supply like this. A modern desktop mobo needs a 20-pin main power connector and maybe a 4-pin connector. The SC-942 chassis can handle any Extended ATX mobo, such as the Tyan K8W Thunder.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,273
OK:
I currently have 4 160 drives in raid 0. I also have 4 CD/DVD drives, and a floppy.

I would like enough bays so I could mount those drives, and, ideally, have a hotswap/sca/scsi or sca/sata for storage. 2-4 bays would be plenty.

Stupid question, but is it safe to mount drives at 90 degrees?

Looks like the removeable setups are mounted sideways in this case.

Thanks

gs
 

Jan Kivar

Learning Storage Performance
Joined
Feb 3, 2003
Messages
410
Computer Generated Baby said:
...Maybe there will be no more speed increases, but I'd expect at least five years more for PCI-X...
... ... ...if PCI Express becomes a hit with the server crowd, PCI-X will last maybe about as long as EISA did after PCI was introduced, which may have been about 2 years at best. The benefits of the low-power serial point-to-point full-duplex PCI Express bus architecture over parallel shared-bus half-duplex PCI-X bus architecture are pretty clear.
EISA was created because other manufacturers weren't too happy to pay IBM licence fees for MCA; it was basically just an extension of ISA. PCI was great when it was first introduced, but ten years is a long time. AGP was created to keep the GPU from saturating the PCI bus, and after that, device integration into the southbridge helped PCI survive. There are/were faster versions of PCI, but the basic PCI bus timings were so complex that it was very difficult to design cards for 64-bit PCI. So they changed the timings and called it PCI-X.


Computer Generated Baby said:
But PCI-X will never become a bus for "normal" motherboards.
If Normal = "desktop" then, yes, the marketing departments of the vast majority of computer manufacturers never felt the need to have anything more elaborate than a common-as-dirt 32-bit/33 MHz PCI expansion bus in a desktop (a.k.a. -- home/office) computer, especially since the ATA and LAN interfaces were in the south bridge chipsets and graphics used AGP.
The main problem was that if you wanted "better than 32-bit, 33 MHz PCI", you'd need a more expensive MCH, which were designed only for dual-CPU systems, which are more expensive. For Intel, you'd have to use Xeons, which have always been pricey. I'm not sure if there are any PCI-X capable chipsets for AMD platforms; most of them used regular PCI (extended to 64-bit, of course).


Computer Generated Baby said:
Hopefully we'll see better than 1x PCI Express slots in "normal" motherboards (besides for the 16x for the display adapter).
Desktop mobos will only have a single 16X PCI Express slot (for graphics) and some number (probably just 2, 3, or 4) of 1X PCI Express slots. I strongly doubt that this formula will change for YEARS as far as Desktop mobos are concerned. A 1X PCI Express slot can handle GbE I/O with ease, so I wouldn't be alarmed at having "just" a 1X PCI Express slot. PCI Express is pretty damned fast and efficient (full-duplex), not to mention you hot plug PCI Express cards just like a PCMCIA card.

In fact, PCMCIA / CardBus will also be replaced by PCI Express on portable computers. You will have *direct* access to the notebook's PCI Express bus using plug-in "Express Cards" which are simply little PCI Express expansion cards. No more need for a bus (PCMCIA) talking to another bus (PCI) in a portable computer. Express Cards capability will even be used in some of your higher-end *desktop* computers!
True, 2.5 Gbps should suffice for a long time. I haven't checked the PCI Express card sizes, but as Intel seems eager to push smaller form factors, maybe some of the cards will be "half-height" (or whatever it's called...). Another option would be to use those ExpressCards (or NewCards), which would again be pricey. Not to mention a bad idea.

Computer Generated Baby said:
Anyway, if you are in need of more PCI Express bandwidth for a SCSI or 10 GbE LAN, then you will have no choice but to go with a more expensive server or workstation-class mobo, which should be available within a few months after the introduction of PCI Express mobos for desktop computers. These should have a mix of 1X and 2X PCI Express slots, maybe a 4X slot, along with a single 16X slot for graphics. Over a bit of time, there may even be some high-end mobos with 2 16X PCI Express slots for graphics. In addition to the mix of 1X and 2X PCI Express slots, most of these server / workstation mobos will have 1 or 2 "legacy" PCI / PCI-X slots, but probably no more than 4 legacy PCI slots.
AFAIK there is no 2x slot planned, only 1x, 4x, 8x and 16x. IIRC, one MCH can have only a total of 32 lanes, which means there is headroom for faster slots in desktop boards. Even the 16x slot is slow: 8 GBps full duplex (IIRC), when the high-end GPUs already have over 20 GBps access to the card's memory.
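As a sanity check on that 16x figure, here is a rough sketch (Python) of how per-lane bandwidth scales with slot width, assuming the PCI Express 1.0 published rate of 2.5 Gbps raw per lane with 8b/10b encoding. Read "8 GBps full duplex" as both directions of the link counted together.

```python
# Scaling the per-lane PCI Express 1.0 rate across the slot widths named
# above. Assumes the published 2.5 Gbps raw signalling rate per lane and
# 8b/10b encoding (20% overhead); "combined" counts both directions of the
# full-duplex link, which is how the 8 GB/s figure for 16X comes about.

PER_LANE_MB_S = 2.5e9 * (8 / 10) / 8 / 1e6  # 250 MB/s, one direction

for lanes in (1, 4, 8, 16):
    per_dir = lanes * PER_LANE_MB_S
    print(f"{lanes:>2}X: {per_dir:,.0f} MB/s per direction, "
          f"{2 * per_dir:,.0f} MB/s both directions combined")
```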

DISCLAIMER: I'm not quite sure how much bull I just typed. If there are any flaws, please correct. :D

Cheers,

Jan
 

Computer Generated Baby

Learning Storage Performance
Joined
Dec 16, 2003
Messages
221
Location
Virtualworld
Jan Kivar said:
EISA was created because other manufacturers weren't too happy to pay IBM licence fees for MCA. Basically just an extension to ISA. PCI was great when it was first introduced, but ten years is a long time.
Oh, I'm all too familiar with EISA and why it came into existence (think Compaq and the unholy Gang of 8 that invented EISA). I've used EISA in various x86 PCs and SGI RISC boxes -- not to mention VESA Local Bus (VLB), NuBus, VME Bus, S-Bus, S-100 Bus, GPIB, and Micro Channel.



True, 2,5 Gbps should suffice for a long time. I haven't checked the PCI express card sizes, but as Intel seems eager to push smaller form factors, maybe some of the cards will be "half-height" (or what it's called...). Another option would be to use those ExpressCards (or NewCards). Which would again be pricey. Not to mention a bad idea.
"Internal" PCI Express expansion card sizes are DEFINITELY going to shrink. There has already been a concerted effort to reduce the size of "conventional" PCI cards for some time now. Fond memories of PCI cards being as large as they are now will certainly become as dated as fond memories of 5¼-inch or 8-inch hard drive mechanisms!

Express Cards for desktop computers will likely be restricted to workstation class desktop (or deskside) computers. An encryption card based on the Express Card hardware specification would be one application for such a PCI Express device for a non-mobile computer. Otherwise, the lion's share of Express Card marketshare will be aimed at the mobile computer market. ATI is working on a PCI Express graphics card for mobile computers that is based on Express Card. You will be able to upgrade your graphics on your notebook VERY EASILY with Express Card graphics. But, I also suspect that you will pay dearly for such convenience! ...at least for the first year or two.


AFAIK there is no 2x slot planned, only 1x, 4x, 8x and 16x. IIRC, one MCH can have only total of 32x. Which means that there is headroom for faster slots in the desktop boards...

Yep, I believe that's correct the more I think about it (1X, 4X, 8X, 16X). I believe there will be no problem whatsoever in plugging a 1X card into a 4X or 8X connector, since only one TX/RX pair will sense a connection, and the PCI Express hub will probably query the device to confirm that it is of 1X class.

...Even the 16x slot is slow; 8 GBps full duplex (IIRC), when the high-end GPUs have already over 20 GBps access to the card's memory...
The internal bandwidth of a graphics card is one thing, but I don't think the host-to-GPU bandwidth is nearly that demanding. With a 16X PCI Express connection, latency will probably be more critical than 16X bandwidth -- which is a simultaneous 4 GB/s transmit and 4 GB/s receive (8 GB/s aggregate).

And speaking of full-duplex, HyperTransport is full-duplex. So, now we have 1, 2, 4, or 8 Opterons communicating over a full-duplex processor bus, which will be communicating with a full-duplex PCI Express expansion bus, where you will have a full-duplex SAS controller that communicates with full-duplex SAS hard drives! When this is all a reality, the difference should be quite noticeable!
 

Computer Generated Baby

Learning Storage Performance
Joined
Dec 16, 2003
Messages
221
Location
Virtualworld
Santilli said:
I currently have 4 160 drives in raid 0. I also have 4 CD/DVD drives, and a floppy.

I would like enough bays so I could mount those drives, and, ideally, have a hotswap/sca/scsi or sca/sata for storage. 2-4 bays would be plenty.

It's a little hard to tell where you want to go with this "down the road" (so to speak). Are your 4-each Ultra160 hard drives SCA or the typical 68-pin variety of SCSI? Do you want -- or need -- the capability to quickly and easily remove your 4-drive RAID-0 array?

If you could condense your CD and DVD needs to just 2 drives -- say a fast ATAPI CD-R/W and a fast ATAPI DVD-R/W -- and use them for reading and writing, you could save 2 slots. Or, you could take these same fast ATAPI CD-R/W and DVD-R/W drives and install them into external Firewire housings and plug them into your system's Firewire host bus adaptor only when you need them (and use them with other Firewire-equipped computer systems when the need arises).



...Stupid question, but is it safe to mount drives at 90 degrees? Looks like the removeable setups are mounted sideways in this case.
You can mount any modern hard drive any way you wish -- even upside down. You may or may not be able to get away with this on hard drives from about 1996 on back, though.
 

Jan Kivar

Learning Storage Performance
Joined
Feb 3, 2003
Messages
410
Computer Generated Baby said:
"Internal" PCI Express expansion card sizes are DEFINITELY going to shrink. There has already been a concerted effort to reduce the size of "conventional" PCI cards for sometime now. Fond memories of PCI cards being as large as they are now will certainly become as dated as fond memories of 5¼-inch or 8-inch hard drive mechanisms!
I still have a GUS MAX somewhere. That was a big card for a sound card...

Computer Generated Baby said:
I believe there will be no problem whatsoever in plugging a 1X card into a 4X or 8X connector, since only one TX/RX pair will sense a PCI connection and the PCI Express hub probably querying the PCI Express device confirming that it is of 1X class.
This would be nice; hopefully it's in the PCI Express spec. I wonder what the motherboard makers are going to do with the area that is freed up when the PCI slots are replaced with PCI Express slots? Hopefully they will honor the "keep-out" areas for the 4x cards.

Cheers,

Jan
 

Platform

Learning Storage Performance
Joined
May 10, 2002
Messages
234
Location
Rack 294, Pos. 10
Jan Kivar said:
...I wonder what the motherboard makers are going to do with the area that is released when the PCI slots are replaced with PCI Express slots? Hopefully they will honor the "keep out" -areas for the 4x cards...
I suspect the mobos with a 16X slot and 2- to 6-each 1X slots will simply become dinky little things compared to today's ATX mobos.

They might be just big enough for a microprocessor, 2-each DIMMs, a mainboard power connector, some skinny little SATA cables jumping up from a far corner, and those little 1X PCI Express connectors all in a row next to the 16X connector.
 

Jan Kivar

Learning Storage Performance
Joined
Feb 3, 2003
Messages
410
Platform said:
Jan Kivar said:
...I wonder what the motherboard makers are going to do with the area that is released when the PCI slots are replaced with PCI Express slots? Hopefully they will honor the "keep out" -areas for the 4x cards...
I suspect the mobos with a 16X slot and 2- to 6-each 1X slots will simply become dinky little things compared to today's ATX mobos.

They might be just big enough for a microprocessor, 2-each DIMMs, a mainboard power connector, some skinny little SATA cables jumping up from a far corner, and those little 1X PCI Express connectors all in a row next to the 16X connector.
Actually, the BTX form factor requires that the board be exactly 266.70 mm deep (front to back), while the width may vary from 203.20 mm to 325.12 mm depending on the type of board (standard BTX, microBTX, picoBTX). But this is just Intel's perspective...

Cheers,

Jan
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,273
It's a little hard to tell where you want to go with this "down the road" (so to speak). Are your 4-each Ultra160 hard drives SCA or the typical 68-pin variety of SCSI? Do you want -- or need -- the capability to quickly and easily remove your 4-drive RAID-0 array?

The X15s are first-generation, 68-pin drives.

Currently, with the ATTO dual-channel 32-bit/33 MHz card, they pretty much max out the bus on this mobo at 110 MB/s.
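For reference, a quick back-of-the-envelope check (Python) of why ~110 MB/s is about the practical ceiling here; the efficiency percentages are rough rules of thumb, not measurements.

```python
# The ATTO card above sits on a 32-bit/33 MHz PCI bus. Theoretical peak is
# width x clock (33.3 MHz gives the often-quoted ~133 MB/s); protocol and
# arbitration overhead typically leaves ~80-85% of that in practice, which
# lands right around the observed 110 MB/s.

theoretical = 32 / 8 * 33e6 / 1e6  # bytes/transfer x transfers/s -> ~132 MB/s

for efficiency in (0.80, 0.85):
    print(f"{efficiency:.0%} efficient: ~{theoretical * efficiency:.0f} MB/s")
```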

Since I'm booting off the array, and I'm not sure if I want to give that up just yet, I don't think I really need to quickly remove the 4-drive RAID array.

I was thinking an SCA/SCSI setup would allow me to buy a number of SCSI drives cheaply and plug them into the system for daily or weekly backups, then remove them for safety.

Another possibility is to use Granite Digital's removable FireWire external drive setups, using IDE drives, as you suggest, but when you start costing it out, it gets very expensive. I'm wondering if a tower with a backplane setup is cheaper, or whether SATA will be cheaper but still provide a superior interface.

Is it possible to use a FireWire external drive and swap files between Mac OS Extended and NTFS?

As for the drives, I've got one IDE 56x Lite-On that's necessary to boot from if you have problems with the array or startup disk. Can't put that in a FireWire enclosure.

Also one Pioneer SCSI DVD reader, which, when used with an external DVD writer in the near future, would allow burning DVDs and backups. It's SCSI, as are the Plextor 40x UltraPlex CD reader and a Plextor 32x SCSI CD-R writer. Currently I have the SCSI drives and the IDE CD reader installed, with an external FireWire Lite-On CD-RW for backups, etc. I'd really like to be able to mount them all in the case, since they coexist well on the SCSI card.

So, to answer your question about drive slots:

I would like 5 drive slots for peripherals. I'd like to pick up a DVD writer, and another possibility is a hot-swappable FireWire enclosure with removable drives, like this:

http://www.granitedigital.com/

However, if you look at the prices, I'm not sure a backplane SCA/SCSI setup isn't cheaper.

Food for thought for next year...

I really like the case.

Thanks

gs
 

.Nut

Learning Storage Performance
Joined
Jul 30, 2002
Messages
229
Location
.MARS
Santilli said:
...Since I'm booting off the array, and I'm not sure if I want to give that up, just yet, I don't think I really need to quickly remove the 4 drive raid array.
Well, there goes 4 empty bays for fixed drives!

So, if you were presumably using the Supermicro SC-941 chassis we’ve been talking about (which can be set up either as a pedestal case or a rack-mountable case), you would then have 4-each 5¼-inch drive slots available once you mounted your 4-each fixed X-15 drives into the 5¼-inch drive slots (5¼-inch drive slot #9 would stay as the floppy drive + USB front panel). After that, if you add an SCA mobile rack, you will be down to just 1-each available 5¼-inch drive slot. If you go in this direction, you will need either 1-each DVD reader/writer or 1-each CD reader/writer acting as a bootable CD reader.

The other possibility is that after you mount all 4 of your X-15 drives, you add all 4 of your optical drives. You would then have no available 5¼ drive bays.



I was thinking an SCA/scsi setup would allow me to buy a number of scsi drives, cheaply, and plug them into the system for daily, or weekly backups, and remove them for safety.
You’re onto the right idea here about using inexpensive hard drives for doing fast large capacity backups, but an SCA / SCSI solution is not an inexpensive way to do this. It’s no secret that ATA or SATA hard drive technologies will beat the pants off any form of SCSI when it comes to the most storage capacity per peso spent. SCSI technologies simply don’t match up well for such a utilitarian task (occasional use, sitting around offline 99.9% of the time, inexpensive, etc). SCSI would provide poor return on investment (ROI) compared to ATA or SATA technologies. So, ATA or SATA is the best technology for this job.

But there’s still a problem using raw ATA or SCSI drives for doing backups. You are stuck with plugging in the drive and booting (or rebooting) the computer so that the SCSI or ATA system BIOS will recognise the “new” hard drive and allow your operating system to mount the drive volume before you can copy files off to the plugged-in hard drive. The same routine has to be followed for dismounting the drives safely (shut down the system or reboot, then remove the drive before POST).

Currently, the best “affordable” way to do true plug’n’play disc-to-disc backups is to use a hard drive installed in a FireWire or USB enclosure (if USB, preferably “High Speed” USB2). FireWire is faster than USB2. With either a USB or FireWire drive, you can plug in a drive at any time, perform the backup, and remove the drive without having to worry about fiddling about with rebooting just to *safely* mount and dismount a hard drive. You could also use an advanced hardware RAID controller -- set up to support your hard drives as JBOD -- with full hot-swap / hot-plug services to do the same thing with SCSI or ATA/SATA drives, but that’s going to be more expensive than going with a self-contained external FireWire hard drive.

Further, an external 5¼-inch Firewire housing can be setup with removable drive bays. With removable ATA drive bays, you can use a single external Firewire drive housing but also plug in various ATA hard drives pre-mounted in ATA drive bays as needed instead of having each ATA hard drive in a separate closed Firewire drive housing (more expensive).


Another possibility is to use Granite Digitals' removeable firewire external drive setups, using ide drives, as you suggest, but, when you start costing it out, it starts getting very expensive.
If you don’t care about the true hot-swap / hot-plug (plug’n’play) you get from FireWire or USB, need 2 or more hard drives available for use at one time, and just need the ability to plug a boot drive and/or multiple data drives in before you turn the computer on, then a SATA/SCA or SCSI/SCA backplane (a.k.a. “mobile drive rack”) is definitely the least expensive way of achieving this capability. Individual hard drives in separate FireWire housings would easily exceed the US$180 or $160 cost of a Supermicro CSE-M35 (5-drive) or CSE-M34 (4-drive) mobile drive rack.

I'm wondering if a tower with backplane setup is cheaper, or SATA will be cheaper, but still provide a superior interface.

In the “here and now” -- as opposed to 8 ~ 12 months from now -- about $160 will give you a 4-drive SATA plug-in capability, and a bit more will get you parallel SCA SCSI. Approximately 8 ~ 12 months from now, when SAS finally debuts, that would be the hands-down choice, since you could plug SAS or SATA drives into the same 4-bay (or 5-bay) mobile drive rack -- either of which occupies only 3-each 5¼-inch drive slots in the chassis.


Is it possible to use a firewire external drive, and swap files between mac os extended, and ntfs?

Yes, but there aren’t any OSX file system drivers for Win2K / WinXP (that I know of), nor vice versa. So your only choice for a common hard drive file system, for transferring large amounts of data back and forth between OSX and Win NT/2K/XP, is to simply use the well-understood DOS FAT32 file system.



As for the drives, I've got one ide, 56 x Lite-on,that's neccessary to boot from if you have problems with the array, or startup disk. Can't put that in a firewire enclosure.
An ATAPI 52x32x52 LiteON CD-R/W (LTR-5237) will work just as well as your existing ATAPI LiteON CD-ROM reader for booting up on a CD-ROM, not to mention it can write to CD-R and CD-R/W media, and can be bought for a measly $40. In my opinion, it’s easily the best budget CD-R/W around. With a single LiteON 52x CD-R/W mounted fixed in one of the chassis’ 5¼-inch drive slots, you’ll have both your bootable CD reader and your CD writer, thus saving a precious 5¼-inch drive slot. Or, probably even better yet, you can install a Pioneer DVR-105 DVD-R/W and have CD-ROM booting capability, as well as CD and DVD-R writing capabilities. The Pioneer DVR-105 (and the older DVR-104) are really excellent writers. Cost: about US$160. (Note that the Pioneer DVR-A05 and DVR-105 models are the same drives.)



Also one DVD reader, Pioneer, Scsi, which, when used with a DVD/external writer, in the near future, would allow burning DVD's and backups. It's Scsi, as are the plextor 40X UltraPlexCD reader, and, a 32x Plextor scsi CD-r writer, Currently I have the ide, CD reader scsi, and ide installed, with an external firewire CD-RW,Liteon, for backups, etc. I'd really like to be able to just mount them all in the case, since they coexist well, on the scsi card.
I want to make sure what your current optical drive inventory is. Is it ??? :
  • 1-each LiteON 56X ATAPI CD Reader
  • 1-each Plextor 32X SCSI CD-R/W
  • 1-each Plextor 40X SCSI CD Reader
  • 1-each Pioneer SCSI DVD reader
  • 1-each LiteON External Firewire CD-R/W
If this is correct, and you *definitely* intend on keeping all those SCSI drives for a few more years, then you would likely be best off to purchase an external 4-bay SCSI desktop drive tower and attach it to an external SCSI channel.

A trick that I’ve done in a pinch has been to use an empty mini-tower computer chassis (with power supply still inside) as an SCSI drive tower. The SCSI drives are mounted into the chassis as usual, but a SCSI cable goes from the back of the host computer over inside the mini-tower computer chassis. It’s ugly, but it works.

Otherwise, I believe a 3-bay or 4-bay external SCSI drive tower and 40 ~ 60 watt power supply, with front accessible 5¼-inch drive slots, go for around US$160 ~ $240. Undoubtedly, there are likely nowadays plenty of used ones available.



So, to answer your question about drive slots:

I would like 5 drive slots for peripherals. I'd like to pick up a DVD writer, and, another possibility is a hotswapable firewire enclosure, with removeable drives:
like this:

http://www.granitedigital.com/

However, if you look at the prices, I'm not sure a backplane sca/scsi setup isnt' cheaper.
At approximately US$160 ~ $180 for a capacity of 4- or 5-each hard drives, a Supermicro SCA mobile rack is definitely less expensive than 4- or 5-each Granite Digital Firewire drive bays, and either one of these SCA mobile racks occupies only 3-each 5¼-inch drive slots in the chassis.

http://www.granitedigital.com/catalog/pg29_firewireidesmartlcdcasekits.htm


SO...

Summing it up, if you had to do it all today, I would use a Supermicro SC-941 chassis, install your 4-each X-15 hard drives as fixed (non removable), add a new (ATAPI) Pioneer DVR-105 as a boot drive and decide what you want to do with the remaining 3-each 5¼-inch drive slots that would be left. You might consider adding the DVD reader and one ATA drive bay ($40) for removable cheap-o conventional parallel ATA hard drives, or as an alternative, a SCSI drive bay ($65) for removable non-SCA SCSI hard drives.

I don’t know what you have in inventory as far as “spare” parallel ATA hard drives and/or wide (LVD or SE) SCSI hard drives. Do you have a pile of ATA hard drives that aren’t any older than about 1999 vintage, or maybe a pile of “older” wide SCSI (D68 connector, SE or LVD) hard drives?
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,273
Can you use PCI cards in PCI-X slots?
I'm looking at this mobo: the Supermicro X5DP8-G2, and if I can use my old cards in it, I'll probably go with it.

Found a server case for 179 dollars with 17 drive bays. I can run my drives, plus the optical stuff, plus the rackmount SCA setup.

Probably about $330 all up for the case, a 550-watt PS, and cooling fans. Looks a bit like the Supermicro R2D2 model, but half the price.

Plan on buying the mobo, or the

http://www.supermicro.com/PRODUCT/MotherBoards/E7505/X5DA8.htm

depending on whether I can use my existing SCSI cards (read: 32-bit/33 MHz) in the motherboard.

Thanks

gs
 

Jan Kivar

Learning Storage Performance
Joined
Feb 3, 2003
Messages
410
Santilli said:
Can you use pci cards in PCI-X slots?
Yes, but the entire channel (more specifically, the PCI-X bridge segment) will run at the speed of the slowest card. The i7505 you mentioned has two separate PCI-X buses, plus two "regular" PCI slots.
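
A toy sketch of that clock-down effect (the per-card clock figures are illustrative, not from any particular datasheet):

```python
def segment_bandwidth_mb_s(card_clocks_mhz, bus_width_bits=64):
    """Theoretical peak of a PCI/PCI-X bus segment: the whole
    segment clocks down to the slowest card installed on it."""
    clock = min(card_clocks_mhz)          # slowest card sets the pace
    return (bus_width_bits // 8) * clock  # MB/s, theoretical

# A 64-bit / 133 MHz PCI-X segment by itself:
print(segment_bandwidth_mb_s([133]))      # 1064 MB/s
# Drop in a 33 MHz card and the whole segment falls with it:
print(segment_bandwidth_mb_s([133, 33]))  # 264 MB/s
```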

Cheers,

Jan
 

DrunkenBastard

Storage is cool
Joined
Jan 21, 2002
Messages
775
Location
on the floor
You can put a 32-bit PCI card in a PCI-X slot, provided it is capable of 3.3V operation and has the 3.3V keying cutout. If it is a 5V-only card it won't physically fit (no biggie if you have 32-bit slots on the mobo as well).
 

Jake the Dog

Storage is cool
Joined
Jan 27, 2002
Messages
895
Location
melb.vic.au
<excuse my on-topic post>

my main work PC is soon to be replaced with a Dell OptiPlex SX270 configured with a P4'C' 2.6, 875G chipset, 1GB RAM and 80GB Barracuda (yech).

a reasonably capable box which should see me through for 2 years hopefully :)

<carry on>

PCI-X 533 looks to be good!
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,918
Location
USA
On the same topic, I just got my replacement PC at work after 3 years on a Compaq AP250 1GHz / 256MB RDRAM.

My new work PC is a Dell Precision Workstation 650: single 2.8GHz Xeon, 1GB DDR266 SDRAM, 36GB Seagate U320 SCSI hard drive (can't remember which model), 16x DVD-ROM and 48x/32x/48x CD-RW, and an nVidia Quadro FX 500 with 128MB. 8)

I'll probably have it for another 3-4 years...
 

Adcadet

Storage Freak
Joined
Jan 14, 2002
Messages
1,861
Location
44.8, -91.5
Quick question here: why on earth are you guys complaining about plain ol' 33 MHz 32-bit PCI slots? For what we're talking about (a high-end workstation), do you really need to push the >110 MB/s that Santilli was talking about? Are the Photoshop files truly huge, and do they really benefit from a 4-drive RAID 0 config?


Last I heard, most requests to the HD are very, very small, and access time is what becomes very important -- hence the great access times of 10K and 15K SCSI drives are a major plus (and the 8MB-cache 7200 RPM IDE HDs help in that they will often predict reads).

I ask 'cause I'm putting together an Athlon 64 3000+ workstation and selling my Tyan Tiger MP system with dual 1.2 GHz chips. My dual-channel SCSI card with two Cheetah 18XLs and an Atlas IV will be moving from the Tyan's 64-bit slot to a regular 32-bit slot, and it never occurred to me that I might ever notice a difference.


On an unrelated note: I got to shoot a full-auto H&K MP5 over Christmas. Kindof fun. I also got to shoot a family friend's match Colt 45 1911 that he's cleaned up considerably for competition (he's a gunsmith and competes). Much much nicer than the stock match Colt 45 I shot a few years ago IIRC.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,264
Location
I am omnipresent
80 ~ 110 MB/s isn't all that much when you've got a 2-drive RAID 0 and 1000BaseT. 32/33 PCI really is a bottleneck in higher-end performance.
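
Rough numbers behind that (the ~55 MB/s sustained rate per drive is an assumption for drives of that era, not a benchmark result):

```python
pci_32_33_peak = 132        # MB/s, theoretical shared 32-bit/33 MHz PCI peak
raid0_demand = 2 * 55       # MB/s, two drives at ~55 MB/s sustained each
gige_wire_speed = 1000 // 8 # 125 MB/s wire speed for 1000BaseT

# Both the array and the NIC share the one PCI bus, so their combined
# demand can easily exceed what the bus can carry.
total_demand = raid0_demand + gige_wire_speed
print(total_demand)         # 235 MB/s of potential demand vs a 132 MB/s bus
```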
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,918
Location
USA
Where does the average civilian get a full-auto MP5?

Back to the work-related equipment: I also have in my cube another Compaq AP250 and a Sun Ultra 60 running Solaris 8. As sheer luck would have it, a few days ago the Sun box suffered two hardware failures, a bad CPU and a bad hard drive... it was barely usable.
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Adcadet said:
On an unrelated note: I got to shoot a full-auto H&K MP5 over Christmas. Kindof fun. I also got to shoot a family friend's match Colt 45 1911 that he's cleaned up considerably for competition (he's a gunsmith and competes). Much much nicer than the stock match Colt 45 I shot a few years ago IIRC.

Nice :) I just ebayed some mounts for my shotty, looking at a 2.5x Pentax scope for it. No excuses for next season!
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
Handruin said:
Where does the average civilian get a full-auto MP5?

I believe it's pretty much a case of paying the ATF a bunch of fees, and paying the big bucks for a "transferable" gun, auto MP5s going for about $10K IIRC. It helps if you live in the right state too.
 