Gaming off the LAN? Feasible?

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
One Server with the following spec:

1x Tyan Thunder K8W
3x Intel Pro1000MT Quad Server Adapters
1x MegaRAID SCSI 320-2X
28x Seagate Cheetah 10K.6, 73.4 GB
2x 14-drive SCA backplanes (supermicro I think)
2x AMD Opteron 240s
4x 512MB Buffalo PC2700 ECC Reg.
1x Antec TRUE550 EPS12V (and something else as well, I'd imagine)
Custom Chassis

It should still come in at under $15k. The custom chassis will be a wall-mount rig with a big-ass window...should look awesome.

28 drives will provide over a TB of RAID-10 storage. Do you think the access pattern will be light enough on the writes to make it RAID-50?
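For what it's worth, the back-of-the-envelope capacity math, as a quick Python sketch (the four 7-drive spans in the RAID-50 case are just one assumed layout):

```python
# Rough usable-capacity math for 28x 73.4 GB Cheetahs (decimal GB).
drives, size_gb = 28, 73.4
raw = drives * size_gb                               # 2055.2 GB raw

raid10 = raw / 2                                     # mirroring halves it -> ~1028 GB
# One assumed RAID-50 layout: four 7-drive RAID-5 spans striped together.
spans, drives_per_span = 4, 7
raid50 = spans * (drives_per_span - 1) * size_gb     # lose one drive per span -> ~1762 GB

print(f"raw {raw:.0f} GB | RAID-10 {raid10:.0f} GB | RAID-50 {raid50:.0f} GB")
```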
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
Technical first:

There are several single points of failure there. Let's start with the fact that you have a single server, a single NIC and a single SCSI card. You'll see, above, that I recommended multiple machines for this job, both to spread work around and to have a high degree of failover capacity.

Second: You've got too damn many drives on each controller. RAID5 performance doesn't improve in a linear fashion as you add drives. The hardware XOR (parity) engine ramps up to a certain point and then starts to degrade performance. RAID5 calculations across 28 drives will crush those poor little StrongARM CPUs (3x 200MHz).

Raw STR on a 73LP runs 33-56 MB/sec. Even two U320 buses only give you 640-something MB/sec of bus bandwidth between them. RAID5 reads alone would almost certainly exceed the 320 MB/sec a single bus can handle (writes, as I mentioned before, would probably be less inspiring).
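Putting rough numbers on that (a quick sketch using the per-drive STR figures above):

```python
# Aggregate drive STR vs. SCSI bus bandwidth, using the numbers above.
drives = 28
str_low, str_high = 33, 56          # MB/sec per Cheetah 73LP
bus_mb_s, buses = 320, 2            # two Ultra320 buses on the 320-2X

agg_low, agg_high = drives * str_low, drives * str_high    # 924 - 1568 MB/sec
bus_total = buses * bus_mb_s                               # 640 MB/sec

print(f"drives could stream {agg_low}-{agg_high} MB/sec")
print(f"two U320 buses top out at {bus_total} MB/sec, so the buses saturate long before the drives do")
```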

Also, in point of fact: A Hitachi 7k250 can easily exceed a 73LP in terms of STR.

... but STR isn't the problem. Throw more drives into the mix and the problem goes away!

The problem is in seeks, actually. SCSI is famous for allowing disks to operate independently, 'cause, well, they can. Normally. Unless there's a RAID controller in the way, having to do processing to figure out which blocks on which drives make up the 400kb texture file you're looking for. Appreciate that in a RAID5, your data is going to be spread across X drives; each of those X drives has to go after a chunk of that file, wait for the other X-1 drives to send their chunks down the cable, send its own data, then get new instructions for the blocks that make up the next chunk... RAID5 is going to add latency. STR will be very high, but as I understand things, the more drives you have, the more latency there is on a per-drive basis.

This is really bad for someone who needs to store lots of little files.
Especially under a medium or heavy load. I *know* UT2k4 is 5GB. But it's 5GB in 100kb chunks + some biggish texture files, and that's not going to be doing you any favors for performance.
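A toy model of that effect, assuming a 64 KB stripe unit (purely illustrative numbers):

```python
import math

# Toy model of small reads on a wide RAID-5: every stripe chunk lives on a
# different spindle, so each drive touched costs roughly one extra seek.
# The 64 KB stripe unit and 27 data drives are assumptions for illustration.
def drives_touched(file_kb, chunk_kb=64, data_drives=27):
    chunks = math.ceil(file_kb / chunk_kb)
    return min(chunks, data_drives)

print(drives_touched(400))   # a 400 KB texture -> 7 drives, ~7 seeks before it's assembled
print(drives_touched(100))   # a 100 KB game file -> 2 drives, still 2 seeks for 100 KB of data
```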

Caching helps that, but the card you've selected doesn't do it; it just has a little tiny NVSRAM chip for write caching.

Anyway, you'd be better off for several reasons with single channel controllers of some sort. If you're stuck on SCSI I'd suggest sticking with maybe 6 drives per controller (5 in use + hot spare, and these numbers are hot from being pulled out of my ass BTW). Bigger drives would clearly be a bonus, too.

I submit to you that you'd be better off with a moderate number of drives of high areal density. That would provide high STR at the obvious penalty of seek time... but you'd lose the seek time anyway to having so damn many drives in your RAID, and you'd gain expandability, reduced costs and the ability to operate on commodity hardware.

Third: There's NO EARTHLY REASON to have dual Opterons on a fileserver.
On a game server, maybe, but without knowing the specific requirements of your games, I'd say start with one and upgrade if you need to. If you're running multiple game servers on the same PC, I imagine RAM will become a problem before CPU resources anyway.

Fourth: Power, to 28 drives and two Opterons. You need three 500W PSUs for that load. No, I'm not kidding: one hot spare and two to run the PC. High-end big-name OEM fileserver chassis are like that.
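A rough power sketch with assumed wattages (none of these are datasheet numbers):

```python
# Ballpark power budget. Every wattage here is an assumed round number,
# not a datasheet figure -- check the real specs before sizing PSUs.
drives = 28
spinup_w = 25        # per drive at spin-up (assumption, mostly on the 12V rail)
idle_w = 12          # per 10K SCSI drive once spinning (assumption)
cpu_w = 85           # per Opteron 240 (assumption)
misc_w = 75          # board, RAM, fans, HBA, NICs (assumption)

surge = drives * spinup_w + 2 * cpu_w + misc_w     # ~945 W at power-on
steady = drives * idle_w + 2 * cpu_w + misc_w      # ~581 W running

print(f"spin-up surge ~{surge} W, steady state ~{steady} W")
# More than one 500 W supply just to run it, before any headroom or a hot spare.
```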

Fifth: NIC. Single point of failure. Still bad. It does load balancing, which is good, and it can carry ~120 - ~160MB/sec (ideal) in or out of your server, but it's all going to end up on the same network, with a switch that probably can't handle simultaneous full-duplex I/O from four ports, let alone whatever the heck your gigabit client nodes are sending it (a Catalyst 4503 can manage 24Gbps, or half duplex on half of its possible ports, for about $10k. I don't think the $500 Linksys unit will fare as well, even given the reduced data rates of GBoC). Face it, your network WILL be a bottleneck, and given the limitations of your network, we can put an upper limit on your disk subsystem.
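Rough switch math (the 21-workstation count is illustrative, assuming 7 workstations hang off each quad-port card):

```python
# Backplane needed for wire-speed, full-duplex gigabit on every port.
def backplane_gbps(ports):
    return ports * 2          # 1 Gbps each direction per port

server_ports = 12             # 3x quad-port cards from the proposed build
client_ports = 21             # illustrative: 7 workstations per card

need = backplane_gbps(server_ports + client_ports)    # 66 Gbps
print(f"wire-speed full duplex on {server_ports + client_ports} ports needs ~{need} Gbps; a Catalyst 4503 offers 24 Gbps")
```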

Also: You're talking about doing HUGE file transfers over the same LAN your customers are gaming on. With what are probably store and forward switches. Two people start installing/running different games over that LAN and, even though it's switched, the I/O buffer on the switch fills, and suddenly little Timmy's perfect Rail shot ain't so perfect any more.

Is any of this getting through?

Are you wondering why high-end fileservers cost so damn much yet?
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
I'm sorry to break up the über-storage server party, but wouldn't using a key server be a heck of a lot easier?
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
Thanks for the detailed post. I really appreciate the time you've put into this crazy plan of mine. I'll try to do it justice in my response.

First (SPOF):

A single server, though it is a choke point, is also much less complicated than having multiple servers. If I went to multiple servers, I'd have a few options:
1. Have each server cover a few of the games. In this situation each workstation must have a clear path to each server, and the load on any given server will vary based on what game people are playing at the time.
2. Have each server cover a few of the workstations. This is simpler in terms of load balancing, but if all computers need to have access to all the games, I need to increase my storage capacity so each server can have all the games it may need.
3. Clustering of some kind. This sounds like it would be the way to go, except for the fact that it would be REALLY expensive and I know nothing about it.

A single NIC. I was actually looking at putting 3 of the 4-port GBoC NICs in the server, each to a different switch, to different workstations.

A single SCSI adapter. Yup, guilty. It is a bad idea and needs to be re-thought.

SPOF, though a bad thing, is a risk I'm already taking. In the current store, there is a single server with all the CD images on it. They sit in a RAID5 array on a 3Ware 8506 controller, but the server has a single CPU, PSU, and NIC. Though downtime would be bad, the cost would not be in the six-figure range. If I have enough redundancy in the array to keep all the data from being lost, I'm OK with the idea of the server biting the dust for one reason or another.

Second (RAID/SCSI issues and Storage Subsystem Performance):

I know virtually nothing about SCSI, never had it in my own system and have only cursory experience with it in the field. That's another one of the reasons I hadn't considered it before you mentioned it. After your latest post, I recognise it as quite the beast and would prefer to avoid it altogether.

I absolutely recognise that STR is not the biggest concern when dealing with this load. The 5.5GB of UT2004 is composed of 1,410 files, which makes the average size about 4MB. With access time being the primary concern, and 15k drives being out of reach from a cost perspective, I was under the impression that "more spindles" was the best way to address the issue. That, and several GB of RAM (good point on the caching RAID controller, BTW). Taking this a step further to the concept of increasing the number of disks having an adverse effect on the latency of the array, it makes total sense. What would be the best way to address this? Maintaining the capacity I need while using fewer disks drives me to 7200RPM SATA drives?
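To put that in perspective, a quick sketch with assumed seek and STR figures (and the multiple-seeks-per-file effect described above would only make the seek share worse):

```python
# Rough install-time math for UT2004's file mix. Seek time and array STR
# are assumptions, just to show that head positioning rivals raw transfer.
files, total_mb = 1410, 5500
avg_mb = total_mb / files            # ~3.9 MB per file

seek_ms = 8                          # assumed average seek + rotational latency per file
array_str = 300                      # assumed array sequential rate, MB/sec

transfer_s = total_mb / array_str    # ~18 s of pure streaming
seek_s = files * seek_ms / 1000      # ~11 s spent just positioning heads

print(f"avg file {avg_mb:.1f} MB; ~{transfer_s:.0f}s transferring, ~{seek_s:.0f}s seeking")
```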


Third (Duallie):

Again, I agree. The main reason I chose that motherboard is because of its dual PCI-X buses. For a while I was considering using the same machine as the game server as well, but now I am considering parallel networks with dual NICs in each workstation.


Fourth (Power):

As soon as I calculated the number of drives necessary, I thought exactly what you said. I had no idea exactly what would be required, but I knew it was possible.

Fifth (NIC):

I was planning on putting 3 of the 4-port GBoC cards in the server, each going to a switch that handles some of the workstations. The Dell PowerConnect 5224 we are currently using has an internal switching capacity of 48Gbps; if each one of these was only required to serve 7 workstations, I think it would be up to the task, especially if the gaming occurred on another network, making the higher latencies inherent in the main network more acceptable.
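A quick sanity check on that (worst-case line-rate numbers, just a sketch):

```python
# One PowerConnect 5224 segment: quad-port server uplink + 7 workstations.
ports = 4 + 7                 # uplink ports + workstations on this switch
demand_gbps = ports * 2       # worst case: every port at line rate, full duplex -> 22 Gbps
backplane_gbps = 48           # switching capacity from the 5224 spec

print(f"worst-case demand {demand_gbps} Gbps vs. {backplane_gbps} Gbps backplane")
```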
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
sechs said:
I'm sorry to break up the über-storage server party, but wouldn't using a key server be a heck of a lot easier?

If the game writers weren't asses, making their keys incredibly difficult to change, then this would be the easy solution. However, many of the games make it so difficult that re-installing the game is the easiest way to change the key.
 

mubs

Storage? I am Storage!
Joined
Nov 22, 2002
Messages
4,908
Location
Somewhere in time.
$275 * 2 = $550 vs. $600 with the corresponding reduction in number of spindles, meaning less power draw, fewer hot-swap bays. In my view, this would be a worthwhile trade off.

You've got an interesting problem to solve!
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
mubs said:
$275 * 2 = $550 vs. $600 with the corresponding reduction in number of spindles, meaning less power draw, fewer hot-swap bays. In my view, this would be a worthwhile trade off.
Er, em, yes....I guess I need to stop dismissing things without actually looking into them :oops: Sorry about that.

mubs said:
You've got an interesting problem to solve!
Yes, I'm learning quite a bit here. Though I am still quite serious about the whole thing, it is fascinating even at a theoretical level.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
Windows clustering requires the ultra-expensive "enterprise" versions of Server, and it won't work properly for your application anyway: serving games doesn't fit the model of a "stateful" application that can live on a shared data source. And since serving games DOES have an in-game state, it can't be NLB'd either.

Yes, you need to be looking at high capacity 7200rpm disks. It's vastly more affordable - 3 740GDs or 3 Cheetah 73LPs = 1 7k250 at 1/3 the cost. More spindles are only going to help up to a certain point, and that point is the time when your HBA reaches its capacity. More Spindles + more HBAs to spread them across = lower latency (assuming PCI doesn't become a bottleneck), but in this case, that's out of reach from a cost standpoint ($900 for 3ware SATA or LSI SCSI controllers), so latency isn't a problem you're going to solve right now.
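A rough $/GB comparison along those lines; every price here is an assumed 2004-ish street figure, just to show the shape of the trade-off:

```python
# Cost per GB: three small drives vs. one 7K250. All prices are rough assumptions.
options = {
    "3x Cheetah 73LP": (3 * 73.4, 3 * 275),   # assumed ~$275 each
    "3x Raptor 740GD": (3 * 74.0, 3 * 180),   # assumed ~$180 each
    "1x Hitachi 7K250": (250.0, 230),         # assumed ~$230
}

for name, (gb, usd) in options.items():
    print(f"{name}: {gb:.0f} GB for ${usd} -> ${usd / gb:.2f}/GB")
```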

BTW sechs, this is purely academic. If David actually implements any of this instead of doing the smart/cheap thing (putting another drive in his workstations, possibly in a mobile rack), I'm going to drive to California to beat him to death with my schlong. :)
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
... which would be exactly the most horrible way I can think of to beat someone to death. Even worse than the sackful of Micropolis drives I've always wanted to kill Prof. Wizard with.
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
Considering the lengths you guys are taking this to, why don't you just buy 80GB 7K250s for each station and be done with it?

You can use the current Raptors in some kind of server setup, or sell them to recoup your losses.

I know you'd like not to have disks in each workstation, but I think that's just the way it has to be for now.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
sechs said:
ddrueding said:
If the game writers weren't asses

You mean, if game designers were making games for you, rather than everyone else in the world.

No, I mean if they thought these things through, instead of just being a PITA.

They've already come up with a great strategy, preventing multiple people from playing on the LAN/Internet with a key that's already in use. This is really hard to bypass and easy to implement. But then taking steps to make it nigh-impossible to change keys? Why? Other than to make life difficult for people like me.
 

Howell

Storage? I am Storage!
Joined
Feb 24, 2003
Messages
4,740
Location
Chattanooga, TN
Err, not to be an ass, David, but they've probably already addressed the key problem you identified. I'd guess the version that comes with a commercial license would be easier to work with.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,728
Location
Horsens, Denmark
Not necessarily, Howell. The plan that Valve is offering is basically a bulk rate on the retail product. This is the one I had the highest hopes for, considering how difficult Steam is to manage in the gamecenter. But their "easy how-to" is the exact same thing that I'm doing now.

One of the other game companies expressly denied me the rights to use their game and did not offer a commercial licence at all. Needless to say, I'm not offering their game.
 

Bozo

Storage? I am Storage!
Joined
Feb 12, 2002
Messages
4,396
Location
Twilight Zone
How would you get around the problem of having to have the game CD in the machine to be able to play the game?
How do you do that with multiple single machines?

Bozo :mrgrn:
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
Daemon Tools will let you mount an image of a CD. It's a very useful tool under any conditions; I can't see how anyone who plays computer games would want to be without it.
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
The problem with image-mounting tools is that game makers are constantly trying to defeat them (for good reason, of course). You may find that, after an important patch, your mounted image no longer allows the game to start.
 