Building a Storage Server Thread

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,729
Location
Québec, Québec
Do you plan to use any RAID flavor for that workstation, Merc? I'm not sure, but I think RAID 5 performance is quite poor on that Promise (well, on any Promise) controller.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
CougTek said:
Do you plan to use any RAID flavor for that workstation, Merc? I'm not sure, but I think RAID 5 performance is quite poor on that Promise (well, on any Promise) controller.

That's what I've heard as well. A RAID0 for working and a RAID1 for storage would probably be the best way. It depends on whether that controller supports the "striped read" of RAID1 arrays.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,262
Location
I am omnipresent
What it comes down to is talking the guy out of RAID0 for everything.
His OS will be on a WD800JB that he already has.

I'm hoping to convince him to take RAID1 + RAID0 on different volumes but I freely admit that I don't fully understand what he's doing yet - some kind of digital sound mixing, I think. He may actually need 1TB of disk space.

I know he's got a rack with several (DSP-type, not storage) firewire devices on it and some devices that accept 10s of analog inputs (Echo Layla).

Anyway, it's pretty close to what we're describing here, just as a storage server.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
Yesterday I set up a cable modem and router for a multiple computer setup. I used a Linksys (Cisco) router connected to the computers using CAT5 cable. Each computer had a 3Com 3c905c network card installed and was running WinXP Pro.
The computers were PIII 800, 66MHz FSB, and 512MB RAM. Except one.
The odd PC was a P4 2.53GHz, 533MHz FSB, and 512MB RAM. It also was running XP Pro and the same 3Com NIC.
Everyone that tried the new cable connection remarked at how much faster the P4 was at loading pages and downloading files. I don't know of a way to test such a setup, but the P4/533 FSB was faster at moving data through the network card.
If you're building a server using an old and slow MB, CPU, and RAM, you are sacrificing speed. And if the server is connected to more than a couple of workstations, then the slowdown could be aggravated even more.

Bozo :mrgrn:
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Bozo, I think you're a bit off there.

The older PCs are slower at rendering webpages than the P4. However, I doubt that there is much of a difference in their network utilization (not that it would matter, since they are probably on a 100Mbit network with only a <3Mbit internet connection).

Rendering pages with all the crap HTML takes a lot longer (relatively, so it will be felt more on slower computers) than with a clean page... also, showing images involves decompression, which takes CPU time, and any animations/video or the use of plugins like Flash takes considerably more CPU time.

It's gotten to the point where you need a Pentium MMX to surf the web nowadays.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,262
Location
I am omnipresent
Bozo's right. Sort of.

The P4 Bozo is referring to has a 133MHz system bus, which means that data flows from RAM to PCI about a third faster than on those (presumably) 100MHz-based P3s. I've never heard of a 66MHz P3. If the PCs Bozo was using were heavily loaded somehow, the P4 would remain more responsive for longer.

Blake is also right.
The speed difference you're observing comes from the lag between delivering and rendering whatever HTML was received. Every PC qualified to run Windows NT can handle the pathetic amount of data a broadband connection can deliver. Do you really think the 100Mbit NICs in those PCs couldn't keep up with a 1.5Mbit data stream, Bozo?
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
::sigh:: I remember when surfing with a 14.4kbps modem and a 66MHz 486 was considered a dream....

You could visit the most graphics-heavy sites and use Netscape to view animated GIFs...




The reason why I corrected you was that you applied your hypothesis to a server, even though the job of a server is often very unlike that of a workstation.

Sub-100MHz processors acting as routers are known to easily pass 6Mbit/sec sustained data rates, while even the fastest P4 workstation would not be capable of rendering 6Mbit/s worth of webpages for very long.

See what I mean?
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
What's the data path from NIC to hard drive?
The packets need to be stripped of their "to-from" info to be written to the hard drive, right? The info from each packet must be 'assembled' into a file to be stored on the hard drive. This is done in memory(?). Doesn't this take processing power? A fast system bus?
And what about moving all this info from place to place?


Bozo :mrgrn:
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
And there is the magic of DMA and CPU offloading for the NIC.

Bottom line, no. It really doesn't take a lot of processing power for a 100Mbit setup. For a Gig-E setup with jumbo frames I would stick with a PIII or higher, though.

It would be nice to go with something that had the NIC on PCI and the HDDs on the southbridge or another PCI bus, but that is hard to do because the southbridge isn't going to meet our needs as far as drive capacity, and having more than 1 PCI bus requires an expensive "server/workstation" board.


Hmmm... I wonder: the newer KT600 and SiS 748 chipsets both have S-ATA on the southbridge (2 channels) in addition to the normal two P-ATA channels. This could allow for 6 HDDs. We'd only need five 250GB drives in RAID 5 to accomplish our needs. With a simple OS drive, we could use the other five drives to create a soft RAID 5 array.


That's a low-ass budget machine, but it's doable. I'd think that any Athlon XP could keep up with the parity calculations, though.
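
Just to put the math on the table, here's a toy Python sketch of my own (nothing to do with any real RAID tool's code): the usable space of an n-drive RAID 5 set, and the XOR parity a soft-RAID setup would have the CPU crunching on every stripe.

def raid5_usable_gb(drives, drive_gb):
    # One drive's worth of capacity in the array goes to parity.
    return (drives - 1) * drive_gb

def parity(blocks):
    # The parity block is the byte-wise XOR of every data block in a stripe.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

print(raid5_usable_gb(5, 250))  # 1000 -> five 250GB drives leave ~1TB usable

# If a drive dies, XORing the survivors (data + parity) rebuilds the lost block:
stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
p = parity(stripe)
rebuilt = parity([stripe[0], stripe[1], stripe[3], p])  # pretend the third block's drive died
assert rebuilt == stripe[2]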
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Yup, Win2k Server w/ 5-10 CALs is going for about $100-$200 on eBay. Advanced Server maybe $50-$100 more.


*nix is free and you can pretty easily set up software RAID 5 during the installation on any major distribution, last time I checked (Red Hat 6/7 or Mandrake 7/8).


Since most Linux distributions have a great SMB and web server, it makes the choice pretty simple for anyone with either Linux experience or a little time on their hands.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
blakerwry:
"And their is the magic of DMA and CPU offloading for the NIC."

DMA is Direct Memory Access. Not direct NIC access.

Using old and slow sub-systems on even a 100Mbit network slows things down. Maybe not with 5KB files, but start moving 50MB files and things get real slow, real fast. Or have more than a few workstations attached.

More testing is needed...............

Be careful of what NIC you use. I purchased some at a very good price, only to find out later that they had no cache built in. Dog slow. Upgrading to a low-to-mid-priced card made a world of difference. You get what you pay for, I guess.

Bozo :mrgrn:
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Bozo, don't let your name give away too much.

I know what DMA is and what it does. When transferring information to/from main system memory, any device that uses DMA reduces the need for the CPU to be involved.

NIC offloading is less important... and in fact may not even be desirable on a faster processor-based system. From what I have read, it only really helps with IPsec.


I have no idea what you think you're talking about when you say "Maybe not with 5K files, but start moving 50M files and things get real slow, real fast."

I have a feeling you have no clue what you're talking about.


Sure, using a 486 with a PIO4 HDD and 16MB of RAM with an OS that uses 24MB of memory on boot, you're not going to get good performance. But take any properly configured PII/PIII system with a 7200rpm HDD and you're not going to have a problem saturating a 100Mbit network (6-8MB/sec).
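
To put a number on "saturating": 100Mbit is 12.5MB/sec on the wire, and the 6-8MB/sec figure is just that minus typical Ethernet/TCP/SMB overhead. A quick sketch with my own rough efficiency guesses:

wire_mb_per_sec = 100 / 8.0            # 100Mbit = 12.5 MB/sec on paper
for efficiency in (0.5, 0.65):         # rough guesses at Ethernet/TCP/SMB overhead
    print("%.1f MB/sec" % (wire_mb_per_sec * efficiency))  # prints 6.2 and 8.1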
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Sorry if I'm being an ass. I've been up too long.


I agree that a good NIC makes a difference. I've noticed that cheap NICs can get as little as 4-5MB/sec transfer rate while the better ones can get nearly 8.

Having a good switch will help as well.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
Two computers:
#1=PII 300MHz, 320MB RAM, Win2k, 3C905c NIC
#2=P4 2.6GHz, 1GB RAM, Win2k, 3C905c NIC

Transfer Win2K SP4 from server to computers, ~132MB:

#1=20 seconds
#2=12 seconds

Transfer Office 2000 folder from server to computers, ~682MB:

#1=2 minutes, 55 seconds
#2=2 minutes flat

Same server, same switch, same cable, same NIC.

Using old and slow hardware for a server will result in a slow network. Imagine how slow a PII server would be with 2 dozen computers attached to it.

Maybe I should try the file copy in reverse??

Bozo :mrgrn:
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
When copying a large file like that, you have to consider the hard drive. Is it running in DMA mode? Is it defragmented? Is it doing something else simultaneously?

What you're testing is not the network; you are testing the system as a whole. One weak link and you're not going to get the performance you expect.

I believe that PII runs on a 66MHz bus and can have either EDO or PC66 SDRAM, so it's about the slowest Pentium II out there. Additionally, the southbridge is on the PCI bus, so that means both the hard drive and the NIC (along with any other devices) are using the PCI bus.

No matter; even this configuration should yield enough power to saturate 100Mbit with the proper hard drives, if the controller offers UDMA2 (some 66MHz PII chipsets don't).



Your first test shows that the PII is capable of over 6.5MB/sec, which is a respectable score. However, your results show you got a sustained 11MB/sec using the P4... I have to doubt this claim, as even on the best setups I've only seen about 9MB/sec sustained.




Your second test is a very good indicator that your hard drives may be the bottleneck.
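
Here's the quick back-of-the-envelope, in Python, using the sizes and times you posted (my arithmetic, rounded):

def mb_per_sec(megabytes, seconds):
    return megabytes / seconds

tests = {
    "Win2k SP4 (~132MB), PII":   (132, 20),
    "Win2k SP4 (~132MB), P4":    (132, 12),
    "Office 2000 (~682MB), PII": (682, 2 * 60 + 55),
    "Office 2000 (~682MB), P4":  (682, 2 * 60),
}

for name, (size_mb, secs) in tests.items():
    print("%s: %.1f MB/sec" % (name, mb_per_sec(size_mb, secs)))

# PII: 6.6 then 3.9 MB/sec; P4: 11.0 then 5.7 MB/sec.
# Both machines drop off sharply on the bigger, multi-file copy, which is what
# you'd expect if the drives (seeks, fragmentation), not the network, are the limiter.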

Even my Duron 750MHz with a Netgear FA311 (mid-range NIC), 384MB PC133, and a Maxtor DM+9 outperforms your P4. The difference, I'm assuming, is that I have a faster hard drive and it is not fragmented.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
A WD400JB and an IBM 75GXP.

The IBM was in the PII attached to a Promise Ultra controller.

Neither computer was doing anything else during the test. The P4 system was just put together this morning and loaded with Win2k. I was installing software on it from a file server when I decided to time the transfers. The P4 was shut down and the PII installed on the work bench using the same network cable as the P4.

I wouldn't doubt that the difference between your system and the P4 is the network card. I'm not impressed with 3Com's lower-end cards, but that's what the company buys.

In the next week or so, I'll be replacing a Netfinity PIII 800 server with two that I built in-house (two for redundancy, or 'fail-over' protection).
The Netfinity is routinely brought to its knees. The new servers are P4 2.6GHz, 1GB RAM, a 3Ware 8500-4, and four 36GB Raptors in RAID 5.
Should be interesting....

Bozo :mrgrn:
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
If you want, I can set up a PII/III system based on a 440BX and do my own test using a 3Com 905 or other NIC.

It's an easy way to prove my beliefs right or wrong.

I think the only available HDDs I have right now are 5400rpm or slower <10 giggers (miles away from a 75GXP), but I might still be able to achieve >6MB/sec on several-hundred-MB transfers.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
I've done some preliminary testing with a PIII 440BX system vs my 2GHz AthlonXP. Same HDD, same NICs, same cables.

It looks like the AthlonXP system is faster, but not by much... about 10%.

I'll post the results after I finish testing with my other NIC.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Shared:
Fujitsu Desktop 10, model: MPC3102AT (5400rpm, 512k cache, 3.4GB platters)
3Com 905 TX
Linksys NC100
D-link 704 100baseT router w/ 4 port switch



System1: 440BX w/ PIII 650, 192MB PC100.
System2: SiS 748 w/ AthlonXP 2000+, 512MB PC2700.


PIII results said:
3COM 905TX
100MB File:
5,981,837 bytes/sec

1GB File:
5,899,113 bytes/sec

3.5GB across 22 files:
5,868,298 bytes/sec


Linksys NC100
100MB File:
6,813,946 bytes/sec

1GB File:
6,471,776 bytes/sec

3.5GB across 22 files:
6,646,485 bytes/sec



AthlonXP 2000+ results said:
3COM 905TX
100MB File:
6,374,337 bytes/sec

1GB File:
6,426,200 bytes/sec

3.5GB across 22 files:
6,646,485 bytes/sec


Linksys NC100
100MB File:
7,428,738 bytes/sec

1GB File:
7,143,017 bytes/sec

3.5GB across 22 files:
7,477,296 bytes/sec


Overall, this shows the Athlon system is about 10% faster... the HDD light was on about three-quarters of the time during transfers... getting pretty close to its max.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,262
Location
I am omnipresent
I think a better test would be to use, say, an Apollo Pro and a KT400. There are two very different IDE implementations there, after all. Via:Via would probably be a cleaner comparison.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
I could always take my CPU and clock it at 100MHz, 133MHz, and 166MHz bus speeds. That would show a clear comparison between CPU/RAM speed and the effect on transfers.

In fact, for the test I put my CPU back at a 133MHz bus, but I also tested at a 160MHz bus (my standard). There was no difference (less than 0.001%).

Since my SiS900-based onboard NIC is faster than either of the two I benchmarked (~1-2% faster than the Linksys), I assume that I reached the limit of the NIC cards and am approaching the limits of my network or the NIC in my server...

When Fedora Core 2 comes out, I plan to shut down my server and upgrade it to dual NICs and change the Duron 750 out for a T-bred 1800+, along with the upgrade to Core 2, of course.

I am also interested in getting a larger switch. I currently have 2 routers, each with a 4-port switch. I would rather use 1 router and have an additional 8-port switch.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
I should have mentioned, I also used a Promise Ultra controller (the drive runs in UDMA mode 2), Win2k, and a 50-foot length of Cat5e.


It would be interesting to set my bus speed to 66MHz (433MHz CPU speed) to more closely simulate an older/slower Pentium II. Maybe I'll mess with that tomorrow.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,262
Location
I am omnipresent
I don't suppose I should bother to point out that most of us wouldn't notice the difference between 150 seconds and 170 seconds while doing a network file copy. It's like burning a CD: It takes "about five minutes" whether you have a 32x drive or a 52x drive.

My usage pattern between my gigabit-connected machines goes like this:
Start network copy/move
Browse
Check email
Try to remember why I opened wordperfect
Check SF
...

At some point I look over at my open file windows and realize that the copy is done.

For the 100Mbit portions of my network it's more like:
Start network copy/move
Check email
Try to remember why I opened wordperfect, close it when I can't.
Check SF
...
Look up, notice the copy is still going, check ebay for gigabit blades for my switch, again
Check email
Remember why I opened wordperfect, open it again
Check SF
...
Try to remember why I opened wordperfect, close it when I can't.

At some point, the copy actually finishes.

My usage pattern for 802.11 (a/b/g):
Start copy/move
Go to bed (*you* try moving 8GB of crap over a 2MB/s link).
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
blakerwry:
Are you measuring the network traffic? I am measuring the total time to leave one hard drive, go across the network and load on another hard drive. This is where a server running slow hardware will become a problem.

Cougtek:
The two new servers have been running in my cube for a few weeks now (burn-in, software installs, file replication setup, etc.). Another server in my cube is much noisier; it has two Seagate Cheetahs in RAID 1. They are both in Antec cases, but I believe the fan setup in the Cheetah case makes it noisier. Neither of the servers has a lot of hard drive noise.

Hmmmm.....Maybe a test between the IDE server and SCSI server would be interesting.

Mercutio:
As a single user you might not notice the speed difference, but add a couple dozen workstations to the network and things start to go real slow, real fast. The network doesn't usually bog down, but the constant hammering at the server causes it to slow everything down.

When I make backups (using Drive Image) over the network, I prefer to do this on weekends when the file server is not being used by anyone but me. A 12GB backup (compressed to 6GB) takes about 41 minutes on weekends. It's over an hour during the week because of the increased network and server traffic.

Bozo :mrgrn:
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,262
Location
I am omnipresent
We decided to focus on a home-type server some time ago. IT Pros have all the help they need and lots of friendly salesdroids from Dell and HP to figure it out. If there are "a couple dozen people" around your house then you need to be hit over the head with a box of Trojans.

Also, you might want to look at your network design in more detail. If you've got a larger, workgroup-type network and your server isn't trunked off on something better than 100Mbit (or whatever the clients have) and appropriately segmented, of course things will be slow. I had the same problem where I'm working now - PCs would take forever and a day to log in, but the (sparsely used after everything logged in) network was zippy all day - until I shifted the LAN to four distinct network interfaces on my DC. Now the LAN is fast enough to meet everyone's expectations.

Look at the interaction of subsystems here (slowest to fastest):
Hard Disk - Countered by weight of numbers on the...
Controller - 4 channels reading off RAID0 or RAID5 should be able to saturate...
PCI - 32/33 PCI tops out at about 80MB/s. This is where our biggest concern is, with a RAID controller and potentially GigE.
Northbridge - Plain and simple. I'm not talking about FSB here; I'm talking about the interconnect between PCI and everything else. That's the system bus speed, and that's basically all the northbridge that ain't part of the FSB. It's 66, 100, 133, 166 or 200MHz. Obviously, higher is better, but Blake's example shows there's a barely noticeable difference (10%) between 100 and 133MHz systems... and it's something that should only be addressed AFTER we've fixed the bottlenecks at the disk, controller and PCI, if we're dealing with GigE.
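
To hang some rough numbers on that chain, here's a small sketch - ballpark guesses of mine, apart from the ~80MB/s PCI figure above:

PCI_33_PRACTICAL = 80.0        # sustained 32-bit/33MHz PCI, per the estimate above
GIGE_WIRE_SPEED  = 1000 / 8.0  # 125 MB/s before protocol overhead
DISK_SUSTAINED   = 40.0        # ballpark guess for a current 7200rpm IDE drive
DRIVES           = 4

array_read = DRIVES * DISK_SUSTAINED  # RAID0/RAID5 reads scale roughly with spindle count

print("4-drive array read: ~%d MB/s" % array_read)        # ~160
print("GigE wire speed:    ~%d MB/s" % GIGE_WIRE_SPEED)   # ~125
print("Shared 32/33 PCI:   ~%d MB/s" % PCI_33_PRACTICAL)  # ~80

# With the RAID controller and a GigE NIC on the same 32/33 bus, every byte
# served crosses PCI twice (disk -> RAM, then RAM -> NIC), so the bus chokes
# long before the drives or the wire would.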
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,262
Location
I am omnipresent
Oh yeah, more mostly-coherent sample text:

Several Paths to the Same Goal

Since there are a number of different strategies that can lead to the 1TB finish line, let’s look at some of the points of interest in building our storage server:

• Network Performance – In this case, the user experience for desktop use is largely irrelevant. On the other hand, network performance will make or break this PC. If a user needs a half-hour to pull down a 4GB file, then there’s a disincentive for using the storage server over local storage (see the rough transfer-time sketch at the end of this post). Gigabit Ethernet, it seems, is an important factor in planning our server.
In the realm of the wireless user, very little can be done to aid performance, unfortunately.

• CPU and motherboard chipset – Again, CPU performance is not of interest here, but in this case the decision we make with our motherboard is a crucial one. There are a number of interesting candidates in the field: Intel’s i875, which offers a dedicated data bus for gigabit networking; any of the various Serverworks chipsets, all of which bring several high-speed PCI slots; and the appliance-like VIA EPIA-8000A, which is quiet and energy efficient but not so great on the performance front.

o i865/875 – Intel’s current “performance” chipset (i875) is mated to desktop-type Pentium 4 processors. In their current incarnation, these chips use an 800MHz front side-bus that is very helpful for maximizing performance between the CPU, RAM and peripherals. This is essential since Intel’s desktop chipsets don’t support advanced PCI interconnections. i875 does offer Communications Streaming Architecture (CSA), a dedicated, high-bandwidth bus for an onboard gigabit network adaptor. Although high-performance CPUs aren’t exactly the priority that they are in desktop machines, i875 does lose out on one of its big advantages without a new P4 processor; with anything less than a 2.4GHz/800MHz P4, i875’s system bus is a much less dazzling 533MHz.
The i865 is a Intel’s mainstream motherboard chipset. It lacks support for Error correcting RAM and Intel’s Performance Accelerating Technology, which basically consists of a number of esoteric tweaks to improve the memory subsystem. Since error correcting RAM is somewhat important to file servers, i865 is probably less interesting in this context than it might otherwise be.
Obviously, the bonus to either i865 or i875 is the “free” gigabit networking via CSA. Even among motherboards with built-in gigabit Ethernet controllers, it is important to realize that some vendors do not implement CSA.
The real down side to using a modern Pentium 4 chip on a file server is power consumption. New chips can eat nearly 100W of power that might otherwise go to a few more hard disks.

o Serverworks chipsets are industrial strength workhorses. They don’t normally have the bells and whistles of desktop motherboards. No on-board Highpoint controllers or 6-channel audio here! Instead Serverworks-based motherboards are the building-blocks for genuine server hardware (go figure). Here, the benefit is found in the massive I/O capability of 64-bit PCI (and/or PCI-X) and the high standards of engineering found in workstation-class hardware. Serverworks-based motherboards are available for a wide range of Intel processors and in configurations up to four CPUs.

o VIA’s EPIA motherboards don’t seem to fit on the same list with workstation-class parts. EPIA motherboards are non-expandable, single (32bit/33MHz) PCI-slot motherboards in a tiny, tiny form factor. The CPU (a VIA C3) is even soldered to the board. Still, this is a platform with a purpose. With a maximum power dissipation of under 20W and whole-system prices around $125 (including the PC100 RAM they use), these little guys might be the break your wallet needs to keep your terabyte server affordable. There is a big drawback here, however: since the VIA Eden platform offers only one PCI slot, it must be occupied by a disk controller of some kind, so network performance here isn’t going to break any speed records.

• RAM – Depending on Operating System and usage patterns, your server might get away with as little as 64MB of memory. Here, more is obviously better, but there’s little point in overloading a machine with RAM that isn’t going to be used. 128MB is perfectly acceptable for a Linux or Windows 2000-based file server, and for a file-only server, 256MB is probably overkill. Of more interest is the type of RAM. Hobbyists everywhere tend to stick to the fastest mainstream RAM they can afford. In this case, it’s less of an issue (our limit here is probably going to be the hard disks or PCI bus), so PC2100 and even good old PC100 might have a happy home in a fileserver. RAM that is error correcting (ECC) is worth much more here than RAM that is theoretically a little bit faster.

• Disk Controllers – There aren’t many options here. 1TB worth of reasonably redundant disks means passing the standard four IDE devices most motherboards support. Some motherboards add simple IDE RAID controllers, and simple add-on controllers from Promise and Highpoint certainly are available, but in this case, the limited feature set, capacity and management options of those chips really aren’t going to be enough. We need real RAID support. We need to handle at least six disks on a single controller (in the name of expansion, if nothing else), and we’d like some kind of volume management software that’ll let us divide or extend our terabyte of disks as we see fit. Promise does make high-end SuperTrak controllers supporting up to six disks, which do support volume management and do support advanced RAID features. Our other candidate is 3ware, a company with Enterprise-class IDE RAID hardware supporting up to 12 hard disks. Of course, any time the word “enterprise” is used in relation to computer hardware, the other big E-word is also implied; 3ware controllers are not cheap.
Why not Serial ATA? 3ware and Promise do both have high-end SATA controllers, but at this point SATA devices carry a price premium over Parallel ATA. SATA has several advanced features that will hopefully trickle down to computer hobbyists in the near future (Mmmm… Hot swap support) but with the unfortunate lack of SATA power connectors on current power supplies (another cost over PATA) and somewhat unproven nature of SATA controllers, it seems clear that PATA is still the way to go for now. Ask again in another six months.

• Chassis – At minimum, six 3½” internal drive bays. Eight or ten would be even better, for cooling as much as expansion. Not many mid-tower cases are going to make the cut here, even with 5¼” rail kits. We’re probably looking at a large tower or rackmount case as the home of our terabyte.

• Power – 450 Watts is a good start. We’re as concerned about the fidelity of our power as the sheer wattage, so it’s a good idea to pay the extra $30 for something with a name brand attached. Losing 1TB of data because our no-name PSU couldn’t deliver 12V +/-10% reliably is probably the worst nightmare of anyone with 1TB of data to lose. Picking a power supply that can’t handle all our drives is simply an expensive and easily avoidable mistake. Don’t let it happen to you.
An uninterruptible power supply is a must for a storage server. Even a 500VA “Blackout Buster” will keep a server running for the two minutes it takes to do a proper shutdown.

• Video and Display – There’s nothing else to say except that these things are afterthoughts. Telnet, SSH, VNC and Terminal Services can provide as much interaction as a person might need. The local computer shop would probably surrender a 1MB Trident card for less than the cost of a pack of cigarettes, and that’s as much graphical horsepower as a server needs.
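
And the rough transfer-time sketch promised under Network Performance; the effective-throughput numbers are my own guesses, there just to show the scale of the gap:

FILE_GB = 4
links_mbit = {
    "802.11b (call it 5 Mbit effective)":    5,
    "100Mbit Ethernet (call it 70 Mbit)":   70,
    "Gigabit Ethernet (call it 400 Mbit)": 400,
}

for name, mbit in links_mbit.items():
    minutes = FILE_GB * 1024 * 8 / mbit / 60.0  # GB -> megabits, divide by link rate
    print("%s: about %.1f minutes" % (name, minutes))

# Roughly 109, 8 and 1.4 minutes respectively: the difference between "come back
# after lunch", "grab a coffee" and "barely worth noticing".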
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Bozo said:
I am measuring the total time to leave one hard drive, go across the network and load on another hard drive.


That is what I was measuring, and by my tests it doesn't make a whole hoot of a difference. Certainly not worth paying 4 times as much (assume the cost of a $50 PII/III system vs a faster Athlon system) for a <10% performance increase for most home users.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
It is factually correct, just grammatically incorrect. It should read:

The i865 is Intel’s mainstream motherboard chipset.

-or-

The i865 is one of Intel’s mainstream motherboard chipsets.

:wink:
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,918
Location
USA
Don't worry Merc. I missed the "a" after reading it three times... I couldn't figure out what was grammatically incorrect.
 