Technical first:
There are several single points of failure there. Let's start with the fact that you have a single server, a single NIC and a single SCSI card. You'll see, above, that I recommended multiple machines for this job, both to spread work around and to have a high degree of failover capacity.
Second: You've got too damn many drives on each controller. RAID5 performance doesn't scale linearly as you add drives: the hardware parity (XOR) engine ramps up to a point and then starts dragging performance back down. RAID5 calculations across 28 drives will crush those poor little StrongARM CPUs (3x 200MHz).
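If you want a feel for what that parity engine is chewing through, here's a rough sketch in Python. The chunk size and drive counts are illustrative assumptions, not anything from your card's spec sheet:

```python
# Back-of-envelope: RAID5 parity work per full-stripe write (illustrative only).
# The controller XORs one chunk from every data drive to produce the parity chunk,
# so the parity engine's workload grows with the width of the array.

CHUNK_KB = 64  # assumed stripe-unit size; real controllers vary

def parity_chunk(data_chunks):
    """XOR all data chunks together to produce the RAID5 parity chunk."""
    parity = bytearray(len(data_chunks[0]))
    for chunk in data_chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

for drives in (6, 14, 28):
    data_drives = drives - 1  # one chunk's worth of each stripe holds parity
    chunks = [bytes([d]) * CHUNK_KB * 1024 for d in range(data_drives)]
    parity_chunk(chunks)
    print(f"{drives:2d}-drive array: XOR {data_drives * CHUNK_KB} KB of data "
          f"to produce each {CHUNK_KB} KB parity chunk")
```

Real controllers do that XOR in hardware, obviously, but the point stands: the wider the array, the more data the engine has to touch for every stripe it writes.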
Raw STR on a 73LP runs 33 - 56MB/sec, so 28 of them can push on the order of 900 - 1500MB/sec. Even over two U320 busses your ceiling is 600-something MB/sec, and even RAID5 reads would almost certainly exceed the 320MB/sec each bus can handle (writes, as I mentioned before, would probably be less inspiring).
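Don't take my word for it; the arithmetic is trivial. Drive STR figures are the ones quoted above, the bus numbers are theoretical ceilings:

```python
# Back-of-envelope: can two U320 busses even carry what 28 drives can read?
# STR figures are the 73LP range quoted above; the rest are theoretical maxima.

DRIVES = 28
STR_MIN_MB, STR_MAX_MB = 33, 56   # outer-to-inner track sequential transfer rate
BUSSES = 2
U320_MB_PER_BUS = 320             # theoretical ceiling; real-world is lower

raw_min = DRIVES * STR_MIN_MB
raw_max = DRIVES * STR_MAX_MB
bus_ceiling = BUSSES * U320_MB_PER_BUS

print(f"Aggregate raw STR: {raw_min}-{raw_max} MB/sec")
print(f"Bus ceiling:       {bus_ceiling} MB/sec across {BUSSES} busses")
print(f"Per-bus load:      {raw_min // BUSSES}-{raw_max // BUSSES} MB/sec "
      f"vs {U320_MB_PER_BUS} MB/sec available")
```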
Also, in point of fact: A Hitachi 7k250 can easily exceed a 73LP in terms of STR.
... but STR isn't the problem. If it were, throwing more drives into the mix would make it go away!
The problem is in seeks, actually. SCSI is famous for allowing disks to operate independently, 'cause, well, they can. Normally. Unless there's a RAID controller in the way, having to do processing to figure out which blocks on which drives make up the 400kb texture file you're looking for. Appreciate that in a RAID5, your data is going to be spread across X drives, and each of those X drives has to seek for its chunk of that file, wait its turn while the other X-1 drives send their chunks over the shared bus, send its own data, then get new instructions for the blocks that make up the next chunk... RAID5 is going to add latency. STR will be very high, but as I understand things, the more drives you add, the more latency each request picks up.
This is really bad for someone who needs to store lots of little files.
Especially under a medium or heavy load. I *know* UT2k4 is 5GB. But it's 5GB in 100kb chunks plus some biggish texture files, and that's not going to do your performance any favors.
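Here's a crude back-of-envelope model of what serving those little files off a wide stripe looks like. Every number in it is an assumption for illustration, not a measurement:

```python
# Crude model: average time to serve one small file from a wide RAID5 set.
# All figures are assumptions for illustration, not measurements.

import math

AVG_SEEK_MS = 5.0      # assumed average seek + rotational latency per drive
CHUNK_KB = 64          # assumed stripe-unit size
STR_MB = 45            # mid-range 73LP sequential rate, per drive
FILE_KB = 100          # typical game content file

chunks = math.ceil(FILE_KB / CHUNK_KB)          # stripe units touched
transfer_ms = (FILE_KB / 1024) / STR_MB * 1000  # raw media transfer time

# Each chunk lives on a different drive and each of those drives has to seek
# for its piece. Call it one average seek per request; in practice it's worse,
# since the request waits for the slowest drive plus bus arbitration.
service_ms = AVG_SEEK_MS + transfer_ms

print(f"{FILE_KB} KB file touches ~{chunks} stripe unit(s)")
print(f"~{transfer_ms:.1f} ms of transfer vs ~{AVG_SEEK_MS:.1f} ms of seek: "
      f"roughly {AVG_SEEK_MS / service_ms:.0%} of the ~{service_ms:.1f} ms "
      f"service time is positioning, not data")

files = 5 * 1024 * 1024 // FILE_KB
print(f"A 5 GB install in {FILE_KB} KB chunks is ~{files:,} files")
```

Tens of thousands of seek-dominated requests per install, multiplied by however many clients are hitting the box at once. That's where your array dies, not on STR.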
Caching helps that, but the card you've selected doesn't do it; it just has a little tiny NVSRAM chip for write caching.
Anyway, you'd be better off for several reasons with single channel controllers of some sort. If you're stuck on SCSI I'd suggest sticking with maybe 6 drives per controller (5 in use + hot spare, and these numbers are hot from being pulled out of my ass BTW). Bigger drives would clearly be a bonus, too.
I submit to you that you'd be better off with a moderate number of drives of high areal density. That would provide high STR at the obvious penalty of seek time... but you're losing on seek time anyway by having so damn many drives in your RAID, and you'd gain expandability, lower cost and the ability to run on commodity hardware.
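To put rough numbers on that trade (drive capacities and STR figures here are ballpark assumptions; substitute real spec-sheet values):

```python
# Ballpark comparison: many small SCSI drives vs. fewer big SATA drives in RAID5.
# Capacities and STR figures are rough assumptions, not spec-sheet numbers.

def raid5(drives, size_gb, str_mb):
    usable = (drives - 1) * size_gb   # one drive's worth of space goes to parity
    agg_str = (drives - 1) * str_mb   # rough best-case aggregate read STR
    return usable, agg_str

configs = {
    "28 x 73GB Cheetah 73LP":   raid5(28, 73, 45),
    " 8 x 250GB Hitachi 7K250": raid5(8, 250, 60),
}

for name, (usable, agg) in configs.items():
    print(f"{name}: ~{usable} GB usable, ~{agg} MB/sec best-case read STR")
```

Similar usable space, a fraction of the spindles and controllers, and either way far more STR than the network underneath can actually move.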
Third: There's NO EARTHLY REASON to have dual Opterons on a fileserver.
On a game server, maybe, but without knowing the specific requirements of your games, I'd say start with one and upgrade if you need to. If you're running multiple game servers on the same PC, I imagine RAM will become a problem before CPU resources anyway.
Fourth: Power, to 28 drives and two Opterons. You need three 500W PSUs for that load. No, I'm not kidding: two to run the PC and one hot spare. High-end big-name OEM fileserver chassis are built like that.
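A quick, rough power budget, using typical ballpark figures rather than anything off a spec sheet:

```python
# Rough power budget; all wattages are typical ballpark assumptions.

DRIVES = 28
WATTS_PER_DRIVE_ACTIVE = 15      # 10k SCSI drive, seeking
WATTS_PER_DRIVE_SPINUP = 30      # spin-up surge is roughly double
OPTERONS = 2
WATTS_PER_CPU = 90
BASE_SYSTEM = 150                # board, RAM, controllers, fans

running = DRIVES * WATTS_PER_DRIVE_ACTIVE + OPTERONS * WATTS_PER_CPU + BASE_SYSTEM
spinup  = DRIVES * WATTS_PER_DRIVE_SPINUP + OPTERONS * WATTS_PER_CPU + BASE_SYSTEM

print(f"Steady-state draw: ~{running} W")
print(f"Spin-up surge:     ~{spinup} W")
# Two 500 W supplies carry the steady-state load at around 75% each; the third
# is the hot spare, so a dead PSU doesn't take the whole disk subsystem down.
# Simultaneous spin-up would blow past 1000 W, which is why controllers stagger it.
```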
Fifth: NIC. Single Point of Failure. Still bad. It does load balancing, which is good, and it can carry ~120 - ~160MB/sec (ideal) in or out of your server, but it's all going to end up on the same network, behind a switch that probably can't handle simultaneous full-duplex I/O from four ports, let alone whatever the heck your gigabit client nodes are sending it (a Catalyst 4503 can manage 24Gbps, which works out to half duplex on half of its possible ports, for about $10k; I don't think the $500 Linksys unit will fare as well, even given the reduced data rates of GBoC). Face it, your network WILL be a bottleneck, and given its limitations, we can put an upper limit on what your disk subsystem actually needs to deliver.
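Here's the back-of-envelope version, assuming the ~120-160MB/sec figure above for the server NICs and theoretical gigabit for the clients:

```python
# Back-of-envelope: what the network can actually deliver to clients.
# Assumes the ~120-160 MB/sec quoted above for the load-balanced server NICs.

SERVER_NIC_MB = (120, 160)     # ideal aggregate in or out of the server
GIGE_CLIENT_MB = 125           # theoretical 1 Gbps per client, before overhead
CLIENTS_INSTALLING = 10        # pick a busy evening

best_case = min(SERVER_NIC_MB[1], CLIENTS_INSTALLING * GIGE_CLIENT_MB)
per_client = best_case / CLIENTS_INSTALLING

print(f"Best case out of the server: ~{best_case} MB/sec total")
print(f"Split across {CLIENTS_INSTALLING} installing clients: "
      f"~{per_client:.0f} MB/sec each")
# The disk subsystem only ever has to keep ~160 MB/sec fed. A 28-drive RAID5
# that could theoretically read 600+ MB/sec is paying for bandwidth the
# network can never deliver.
```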
Also: you're talking about doing HUGE file transfers over the same LAN your customers are gaming on, with what are probably store-and-forward switches. Two people start installing/running different games over that LAN and, even though it's switched, the I/O buffer on the switch fills, and suddenly little Timmy's perfect rail shot ain't so perfect any more.
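As a crude illustration (the buffer size and rates here are assumptions; check your actual switch):

```python
# Crude illustration of why bulk installs hurt gamers on the same switch.
# Buffer size and traffic rates are assumptions, not specs for any real unit.

BUFFER_MB = 16            # assumed shared packet buffer on a cheap GigE switch
INGRESS_MB_S = 2 * 125    # two clients pulling installs at gigabit wire speed
EGRESS_MB_S = 125         # one gigabit port they both converge on

overload = INGRESS_MB_S - EGRESS_MB_S
fill_ms = BUFFER_MB / overload * 1000
queue_delay_ms = BUFFER_MB / EGRESS_MB_S * 1000

print(f"Buffer fills in ~{fill_ms:.0f} ms, then the switch starts dropping")
print(f"A full buffer adds ~{queue_delay_ms:.0f} ms of queueing delay to every "
      f"packet sharing that port, game traffic included")
```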
Is any of this getting through?
Are you wondering why high-end fileservers cost so damn much yet?