Quad-core Opterons in 2006

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
<Burt Ward voice>
"Holy smokin' XEON's Batman, how will Intel keep up?"
<Adam West voice>
"Exactly Boy Wonder!, By smokin' the evil weed. It's the grown man equivalent of a child putting both hands over their ears and saying Nah, nah, nah. I can't here you.
But just as crime doesn't pay, denying it doesn't make it go away."

</I think I'll take my medication now>

http://www.theinquirer.net/?article=23747
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
I like the news. I'm very curious how large that chip will be, and how much AMD plans to charge. I'd also like to know if one could put 4 of them into a typical quad motherboard. Now that would be nice. :)

I've been contemplating the purchase of a dual core. I've always wanted a dual-CPU system, but without the complexities they normally come with. With a dual core comes the slight convenience of having only one HSF and no need for an expensive Tyan or Supermicro motherboard (OK, so I could probably find a reasonable MB, but you get my point).

Is anyone else considering buying one? Or does anyone have a strong reason why they should be avoided (besides price)? Sure, there is a slight price premium, but when I look at the alternative of buying two CPUs and two HSFs, having one chip isn't looking so bad. I'd consider the Athlon 64 X2 4400+ at a price of about $581. Divide the price by 2, and it isn't looking that bad for a reasonably fast dual-processor machine. When I say divide by two, I'm trying to compare against building an actual dual-CPU system, relating the price to two physical chips.
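
To put rough numbers on it, here's a back-of-the-envelope tally. Only the X2 price above is real; every other figure is a made-up placeholder, not a quote:

[code]
# Rough cost comparison: one dual-core chip vs. a traditional dual-socket
# build. Only the X2 4400+ price comes from the post above; every other
# figure is a hypothetical placeholder.
x2_price = 581.00            # Athlon 64 X2 4400+ (from the post)
hsf_price = 40.00            # heatsink/fan, hypothetical
opteron_price = 300.00       # single-core CPU for a dual board, hypothetical
dual_board_premium = 150.00  # Tyan/Supermicro-class board premium, hypothetical

dual_core_build = x2_price + hsf_price
dual_socket_build = 2 * opteron_price + 2 * hsf_price + dual_board_premium

print(f"Dual-core build:   ${dual_core_build:,.2f} (${x2_price / 2:,.2f} per chip-equivalent)")
print(f"Dual-socket build: ${dual_socket_build:,.2f}")
[/code]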
 

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
AMD have announced a round of price cuts at the end of July, so I will wait for those and then grab a 4400+. The stuff I do a lot of will benefit from dual-core, and it will still be faster than my current s754 Athlon, so single-threaded apps will benefit too. I have been waiting for this upgrade path for a while. By the end of July, the early adopters should have exposed any bugs in motherboards/BIOSes as well.
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
I'm hoping that plopping all of these cores onto the same die will convince software developers to multithread their applications....
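
For what it's worth, the change being asked of developers is conceptually simple. Here's a minimal Python sketch of splitting CPU-bound work across cores with a process pool; checksum() is a made-up stand-in for any parallelizable task, not anyone's real API:

[code]
# Minimal sketch: farm CPU-bound work out to one worker process per core.
# checksum() is a hypothetical stand-in for any parallelizable task.
from concurrent.futures import ProcessPoolExecutor

def checksum(chunk: bytes) -> int:
    """Toy CPU-bound work: sum the bytes of a chunk."""
    return sum(chunk) & 0xFFFFFFFF

if __name__ == "__main__":
    chunks = [bytes(range(256)) * 4096 for _ in range(8)]  # 8 units of work
    with ProcessPoolExecutor() as pool:  # defaults to one worker per core
        results = list(pool.map(checksum, chunks))
    print(results)
[/code]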
 

blakerwry

Storage? I am Storage!
Joined
Oct 12, 2002
Messages
4,203
Location
Kansas City, USA
Website
justblake.com
I'm looking forward to a dual-core board with PCI-e. I'm also waiting at least six months to a year before I upgrade my Athlon XP 2400+. Hopefully that will help lessen the chances of bugs with early PCI-e implementations and the X2 processor/mobo combos.

I can't imagine that a quad-core CPU would be worth the expense to me at home, but it might be worthwhile in a server.
 

Fushigi

Storage Is My Life
Joined
Jan 23, 2002
Messages
2,890
Location
Illinois, USA
Definitely worth it in a server, and for several reasons:
- Blade & 1U servers need as dense a processor complex as you can get. The densest I normally see is 2 CPUs in a blade and 4 in a 1U rack; a quad-core ups that to 8 cores in a blade, 16 in 1U (imagine one of those running F @ H).
- Reduces power requirements & heat output.
- Software licensing. Microsoft has already stated that their software is priced per physical CPU, regardless of # of cores. This is really important when SQL Server Enterprise Edition costs $20K/CPU.
- Ups the ability to consolidate workloads and/or run virtual server instances (VMware, etc.)
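
A quick tally of what that density and the per-socket licensing imply, using the numbers above:

[code]
# Core density and per-core licensing, per the figures in the post above.
cores_per_cpu = 4
print(f"Blade (2 sockets): {2 * cores_per_cpu} cores")  # 8
print(f"1U    (4 sockets): {4 * cores_per_cpu} cores")  # 16

# Licensing is per physical CPU, so a quad-core quarters the per-core cost.
sql_ee_per_cpu = 20_000
print(f"SQL Server EE: ${sql_ee_per_cpu / cores_per_cpu:,.0f} per core")  # $5,000
[/code]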

BTW, Sun is supposedly developing a new generation of UltraSPARC with 8 cores, each core executing 4 threads, so you've got an effective 32-way CPU in one package.
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,726
Location
Québec, Québec
Fushigi said:
...a quad-core ups that to 8 cores in a blade, 16 in 1U (imagine one of those running F@H)...
Exactly what I thought. I had almost stopped my folding contribution because my processors were too old and took up too much space altogether, so I wasn't competitive enough to justify the trouble of keeping 5 boxes running 24x7. But if I could get 8 highly efficient cores sitting in a single box, I would consider resuming my folding effort.
 

Santilli

Hairy Aussie
Joined
Jan 27, 2002
Messages
5,078
Well, since I've got dual 2.8GHz Xeon CPUs, I will say I love having the horsepower, in particular for video editing and being able to multi-task while still having processor power to burn.

Over my 1.4GHz Athlon, the dual Xeons were a very big jump.

I'll compare the single 64-bit 3000+ with 64-bit XP pretty soon, since my final parts just came in.

I guess the real question is: what do you do with your home computer?

s
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
The big problem with multiple dual cores is RAM bandwidth. As you add cores, the need to get data to those cores increases linearly: 2x cores means 2x RAM bandwidth needed, and 8x cores means 8x RAM bandwidth needed to keep them all happy. That's a real problem with Intel's current solution, because there everything shares the same pipe. AMD scales much better because of independent pipes per physical CPU, but it will still choke if you put enough cores per CPU. With enough cores, cache coherency becomes a real problem too, because you will run out of internal HyperTransport bandwidth. There are real scalability limits to the number of cores you can place in a box without a corresponding improvement in RAM/cache technology.

In the end, you can build bigger pipes, but it all comes down to the same RAM. You can cache, but there are scalability limits to that too, and in the end you will be limited by RAM bandwidth and latency. I don't see any large-scale improvements in RAM technology in the near future. When you are putting 8-16x the stress on the RAM subsystem, the 20%-100% improvements I see on the near horizon ain't gonna hack it.
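
To put rough numbers on the scaling argument, here's a sketch. The channel figure is the nominal DDR-400 peak; the per-core demand is a hypothetical assumption:

[code]
# Aggregate demand grows linearly with core count while a shared bus stays
# fixed. DDR-400 peak is nominal; per-core demand is a hypothetical figure.
DDR400_CHANNEL_GBS = 3.2            # 400 MT/s x 8 bytes per 64-bit channel
PER_CORE_DEMAND_GBS = 1.5           # hypothetical sustained demand per core
shared_bus_gbs = 2 * DDR400_CHANNEL_GBS  # one dual-channel bus for everyone

for cores in (2, 4, 8, 16):
    demand = cores * PER_CORE_DEMAND_GBS
    status = "OK" if demand <= shared_bus_gbs else "starved"
    print(f"{cores:2d} cores: need {demand:5.1f} GB/s, have {shared_bus_gbs:.1f} -> {status}")
[/code]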

After several years of boredom, the next few years should be interesting.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
4 more memory controllers would require lots more pins on the socket for the extra data and address lines. I really don't see AMD adding 4 extra 128-bit lines per chip (512+ more pins).
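
The data lines alone get you to that figure, before counting any address, control, or power/ground pins:

[code]
# Pin arithmetic from the post: data lines alone reach 512.
extra_controllers = 4
bits_per_channel = 128
print(f"Extra data pins alone: {extra_controllers * bits_per_channel}")  # 512
[/code]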
 

sechs

Storage? I am Storage!
Joined
Feb 1, 2003
Messages
4,709
Location
Left Coast
I'd think that going to a 256-bit memory bus would be a bit easier. Maybe they're saving up for the move to DDR2, which will necessarily require a socket change.
 

LiamC

Storage Is My Life
Joined
Feb 7, 2002
Messages
2,016
Location
Canberra
Socket F (the new Opteron socket) has 1207 pins. I don't know what the extra 267 pins (over Socket 940) are used for.

Dual-channel DDR2-800 will give the effective bandwidth of quad-channel DDR-400 and have better latencies. DDR2-667 will give much more bandwidth than DDR-400, its latencies will be equivalent or slightly better, and it may be easier to deal with than DDR2-800 in a server environment.
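
The bandwidth claim multiplies through (nominal peak rates, 64-bit channels assumed):

[code]
# Peak bandwidth: transfer rate (MT/s) x 8 bytes per 64-bit channel.
def peak_gbs(mt_per_s: float, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000

print(f"Dual-channel DDR2-800: {peak_gbs(800, 2):.1f} GB/s")  # 12.8
print(f"Quad-channel DDR-400:  {peak_gbs(400, 4):.1f} GB/s")  # 12.8
print(f"Dual-channel DDR2-667: {peak_gbs(667, 2):.1f} GB/s")  # ~10.7
print(f"Dual-channel DDR-400:  {peak_gbs(400, 2):.1f} GB/s")  # 6.4
[/code]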
 

CougTek

Hairy Aussie
Joined
Jan 21, 2002
Messages
8,726
Location
Québec, Québec
AMD should skip DDR2 entirely and jump to DDR3 for its next interface. It is a better memory design.

And I hate to say it, but lots of memory bandwidth with fewer pins could be had with another type of not-yet-dead technology: Rambus. I think they are behind the XDR RAM used in one of the upcoming gaming consoles (I think it's the PS3).
 

Gilbo

Storage is cool
Joined
Aug 19, 2004
Messages
742
Location
Ottawa, ON
XDR really is ideally suited to the K8 chips. The bandwidth-per-pin equation only gets more appealing as more cores are added onto the die.

I don't think it's going to happen though. The market will resist Rambus as much as possible. All the memory manufacturers hate the company, and Intel probably does as well. AMD certainly can't make the memory mainstream on its own.
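
To gauge that bandwidth-per-pin equation, here's a rough sketch of how many data pins each technology needs for the same peak bandwidth. The XDR rate is the roughly 3.2Gbit/s-per-pin figure quoted for the PS3-era parts, so treat it as approximate:

[code]
# Data pins needed to hit a 12.8 GB/s peak at each nominal per-pin rate.
# The XDR figure (~3200 Mbit/s per pin) is approximate.
TARGET_GBS = 12.8
rates_mbit_per_pin = {"DDR-400": 400, "DDR2-800": 800, "XDR": 3200}

for name, rate in rates_mbit_per_pin.items():
    pins = TARGET_GBS * 8 * 1000 / rate  # GB/s -> Mbit/s, then per pin
    print(f"{name:8s} ~{pins:3.0f} data pins")
[/code]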
 

Computer Generated Baby

Learning Storage Performance
Joined
Dec 16, 2003
Messages
221
Location
Virtualworld
If certain organisations out there would acts together, and get Magnetic RAM (or M-RAM) out the door instead of tweaking it incessantly, we could leave the fidgety world of dynamic RAM behind for good, not to mention the fescennine Rambus and its gang of greedy lawyers.


 

Corvair

Learning Storage Performance
Joined
Jan 25, 2002
Messages
231
Location
Desolation Boulevard
mubs said:
You might be a baby, but I need a dictionary handy to understand some words you use. Some baby!

For starters, if I would bother proofreading what the hell I'm spouting off, maybe it would be a bit more readable and a bit less embarrassing. Unfortunately, I often get disrupted by the mister telephono or some other important 5-minute event that takes me away right when I'm in the middle of pecking away on the keyboard.
Computer Generated Baby said:
If certain organisations out there would acts together, and get...
If certain organisations out there would GET THEIR acts together, and get...
sheesh...



Nonetheless, M-RAM will be a m-a-j-o-r paradigm change once it gets out of the labs and into production. We've had magnetic memory in the past, in the form of core memory in the mainframe computers of the 1950s and 1960s, just not in the highly convenient form of an integrated-circuit package. Once M-RAM arrives, we should be able to do wonderful things like shutting down computer systems while leaving the primary memory's contents intact.


 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
Sounds like a great use for cache in a RAID controller. Or better, a solid-state M-RAM drive.
 

LOST6200

Storage is cool
Joined
May 30, 2005
Messages
737
sechs said:
I'm hoping that plopping all of these cores onto the same die will convince software developers to multithread their applications....

Maybe by 2008? ;)
 