According to this source:
http://www.xbitlabs.com/articles/cpu/display/athlon64-e3-mem.html
The E stepping A64s (Venice 512KB, San Diego 1MB) can run 4 double-sided DIMMs @ DDR400 (though I think it defaults to 2T). Single-sided DIMMs can be run in a 4 x 1T config.
The D stepping can support 4 DIMMs @ DDR400, but only at 2T.
The C steppings (C0, CG) can only run 4 DIMMs @ DDR333.
If you go here:
http://www.aceshardware.com/read.jsp?id=65000305
you can see the difference the 2T and 1T command rates (bus turnaround) make.
TR investigate it here:
http://www.techreport.com/etc/2005q4/mem-latency/index.x?pg=1
Basically, in synthetic benchmarks you can see a respectable difference.
In real-world apps, the difference is about 1% to 2%. TR's game tests show up to a 7% difference, but that's at 640 x 480. At 1024 x 768 or higher, i.e. real-world play settings, other system latencies swallow whatever the memory settings give you.
If you spend your time running 3DMark or SuperPi, worry. Otherwise, don't; you'll never see the difference. In fact, if you have a C stepping, you're more likely to see a difference from DDR333 vs DDR400 than from command rate.
<aside>I wrote an article for my site that I never posted going into the whys of the Athlon/A64 cache/memory hierarchy. The design is (deliberately) cache-size agnostic and relatively insensitive to memory access latency because of:
Large L1 (64KB instruction + 64KB data)
16-way set-associative L2
Exclusive L1/L2 (the L2 is a victim cache, so it never duplicates L1)
Large TLBs (translation lookaside buffers)
Low-latency on-die memory controller
You have to find artificially constructed benchmarks to show up the difference between 512KB-cache and 1MB-cache A64s, and in a lot of benchmarks there is no measurable difference.
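If you want to see what I mean by "artificially constructed", here's a minimal sketch (my own illustration, nothing from that unposted article) of the classic pointer-chasing test: walk a randomly shuffled ring of pointers and time the average load. Pick working-set sizes between 512KB and 1MB and the two chips should split apart; below 512KB or well above 1MB they measure the same.

    /* Pointer-chase latency sketch: average ns per load as the working
     * set grows.  Only sizes that fit one chip's L2 but not the other's
     * should show a difference between 512KB and 1MB A64s. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double chase(size_t bytes, size_t steps)
    {
        size_t n = bytes / sizeof(void *);
        void **ring = malloc(n * sizeof(void *));
        size_t *idx = malloc(n * sizeof(size_t));
        size_t i;
        if (!ring || !idx) { fprintf(stderr, "out of memory\n"); exit(1); }

        /* Fisher-Yates shuffle so the chain defeats the prefetcher.
         * Two rand() calls are combined because RAND_MAX may be only
         * 32767, smaller than n. */
        for (i = 0; i < n; i++) idx[i] = i;
        for (i = n - 1; i > 0; i--) {
            size_t j = ((size_t)rand() * ((size_t)RAND_MAX + 1)
                        + (size_t)rand()) % (i + 1);
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }
        /* Link every slot to the next one in shuffled order. */
        for (i = 0; i < n; i++)
            ring[idx[i]] = &ring[idx[(i + 1) % n]];

        void **p = &ring[idx[0]];
        clock_t t0 = clock();
        for (i = 0; i < steps; i++)
            p = (void **)*p;          /* serialized loads: each hop waits */
        clock_t t1 = clock();

        /* Consume p so the loop can't be optimized away. */
        volatile void *sink = p; (void)sink;
        free(ring); free(idx);
        return (double)(t1 - t0) / CLOCKS_PER_SEC / steps * 1e9;
    }

    int main(void)
    {
        size_t kb;
        for (kb = 64; kb <= 4096; kb *= 2)
            printf("%5zu KB: %6.2f ns/load\n",
                   kb, chase(kb * 1024, 10000000));
        return 0;
    }

The shuffled chain is the whole trick: a sequential walk would let the prefetcher hide the L2 misses, and the two chips would look identical again.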
The vast majority of people don't work the way benchmarks do, so whether a 3% difference is measurable or noticeable is moot. And as cache hit rates are in the 90s (percent) for most common code, a 3% speed-up in 5% of cases is, in Australian vernacular, a poofteenth of fuck all.
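To put a number on that (my arithmetic, not anyone's benchmark): treat it as Amdahl's law. If the bigger cache speeds up 5% of your time by 3%, the overall gain is

    1 / (0.95 + 0.05/1.03) ≈ 1.0015

i.e. about 0.15%. You would never pick that out of run-to-run noise.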
I hope you paid attention. There will be a test at 11:30.
:lol: