The coming end of Moore's law due to power dissipation

jtr1962

Storage? I am Storage!
Joined
Jan 25, 2002
Messages
4,372
Location
Flushing, New York
I'm sure everyone here is intimately familiar with Moore's Law, which basically states that computing power will double every 18 months or so. Of course, it was never a law in the strict sense, but merely a means of predicting the evolution of computers which thus far has been fairly reliable.

I've recently come to the conclusion that, short of a breakthrough that makes it out of the lab with record speed, Moore's Law will end by the year 2005, at least with regard to CPUs. Various advances in storage have ensured that Moore's Law should continue in that area for at least another decade before bumping into inherent limits of its own.

My conclusion is not based on the usual "at some point we will reach the physical limits of silicon" but rather on the issue of power dissipation. What led me to start thinking about this was rather interesting. For several years, experimenting with thermoelectric modules has been a hobby of mine. Since large (40mm x 40mm) modules can put out in excess of 100 watts of heat, one of the initial problems I faced was finding suitable heat sinks that would give a low (<10°C) temperature rise above ambient while dissipating a hundred or so watts. Generally, I ended up purchasing fairly large aluminum extrusions of perhaps 5" x 12" x 2" high and using a 120mm, 120 CFM fan, or making my own copper liquid heatsinks. I had never given much thought to using smaller heatsinks designed for microprocessors until recently, and I have Moore's Law to thank for it. When I first began this hobby in 1995 or so, microprocessors such as the Pentium were dissipating ~10 or 20 watts, and got by with fairly small heat sinks that were totally unsuitable for large TE modules. It was only recently that microprocessors like the Athlon and P4 began approaching the power levels of TE modules, and therefore needed similar heatsinks, but in a much smaller package, of course. I recently purchased a P4 heatsink to test for such purposes, and was frankly impressed by its performance, which was suitable for anything up to a 50W TE module. Better yet, the price ($11.67) was half what a large extrusion and fan would cost me, and the size was about one third that of an extrusion with similar thermal performance. Since processors were only going to get more powerful (and thus need even better heatsinks), it seems that my TE cooling requirements for the foreseeable future are satisfied, and at a very reasonable price.

This insight came together with something I read in Electronics Design about high-performance ICs approaching 200W by 2005, and with this article: http://www.electronics-cooling.com/html/2000_jan_a2.html. It then dawned on me that we are probably two or three years from a big roadblock, and no solution is in sight. For years, transistors have been scaled down in size and core voltages have dropped. Both of these tend to decrease power dissipation. However, the trends toward more transistors and ever higher clock rates have conspired to produce a net increase in power consumption despite the lower voltages and smaller features. This will lead to several problems:

1) The ever increasing power is being removed from an ever smaller die area, resulting in an increased power density. Diamond dies and carbon nanotubes can most likely overcome this problem for a while, but sooner or later there will come a time when the power density (W/mm²) is so high that the temperature rise will be unacceptable, even with an infinite heat sink. For an analogy, imagine 100W trying to leave an area of a few square mm. No material can remove that much power from so small an area while keeping its temperature below, say, 65°C, except possibly with huge amounts of active cooling.
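To put a rough number on that analogy, here is a minimal back-of-the-envelope sketch. It assumes simple one-dimensional Fourier conduction (dT = q''·t/k) through a 1 mm slab with textbook conductivity values; the 100W-over-4mm² geometry is purely illustrative, not a real die.

```python
# Temperature rise from conducting a fixed power through 1 mm of material,
# using one-dimensional Fourier conduction: dT = q'' * t / k.
# Conductivities are approximate textbook values (W/m.K).

def delta_t(power_w, area_mm2, thickness_mm, k_w_per_mk):
    q_flux = power_w / (area_mm2 * 1e-6)                 # heat flux, W/m^2
    return q_flux * (thickness_mm * 1e-3) / k_w_per_mk   # rise in degrees C

for name, k in [("aluminum", 205), ("copper", 400), ("diamond", 2000)]:
    print(f"{name}: {delta_t(100, 4, 1, k):.0f} C rise across 1 mm")
```

Even pure copper eats up most of a 65°C budget on conduction alone in this toy geometry; only diamond-class conductivity leaves any headroom, which is why the interface becomes the bottleneck no matter how good the heat sink is.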

2) Besides the power density increasing, the absolute power being dissipated is also increasing. It is clear to me that, for a variety of reasons, 150W or so is the upper limit to what a home PC processor will be allowed to dissipate.

3) The power supply must put out this power, and combined with other peripherals and the M/B, we are likely in the 300W to 400W range, which will affect supply reliability.

4) The effect of millions of new PCs each drawing 200W more than their predecessors is that perhaps the equivalent of twenty or so 1 GW power plants will need to be added in the US alone, at a time when we are under increasing pressure to cut atmospheric emissions and cannot build new nuclear plants due to NIMBYism. Clock throttling can only accomplish so much, and it will be difficult to justify replacing a machine that is basically used for web surfing and word processing (at least for the masses) with another consuming twice as much power.

5) Even assuming 1) thru 4) aren't show stoppers, the fact that the power must be removed from the PC will be. Given that PCs are unlikely to get any larger (probably the opposite), or any louder (again, probably the opposite), there are inherent physical limits to how much heat can be removed with a heat sink of a given volume. First of all, the heat sink is already at a disadvantage since it is using the hotter air inside the case, and with the general increase in power this air will be hotter still unless we have more noisy airflow to increase the air exchange rate inside the case (basic law of physics). Assuming we can keep the air in the case at 35°C, we still need to keep the microprocessor under 50°C for long-term reliability and stability. For a processor dissipating 200W, this implies a heat sink with a thermal resistance of (50°C - 35°C)/200W, or 0.075 °C/W, probably in a package not much larger than the P4 heatsink that I purchased (which incidentally had a manufacturer's rating of 0.26 °C/W). Given that the heat sink I purchased incorporated nearly every trick in the book (thin, closely spaced fins, thick baseplate, powerful fan), I am at a loss to figure out exactly how the performance will be improved by a factor of nearly four. A tip-magnetic fan will get you another 15%, perhaps optimizing the fin spacing another 15%, making the thing out of copper ~30%. As Scotty said to Captain Kirk: "I can't change the laws of physics", and that is the problem I see here.
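The arithmetic in that paragraph can be written down directly; a quick sketch using the same numbers as above:

```python
# Required heatsink thermal resistance for a given die temperature limit,
# case-air temperature, and dissipated power: R = (T_max - T_air) / P.

def required_resistance(t_max_c, t_air_c, power_w):
    return (t_max_c - t_air_c) / power_w   # in degrees C per watt

# 200 W processor, 35 C case air, 50 C die limit:
print(required_resistance(50, 35, 200))    # 0.075 C/W
```

Against the 0.26 °C/W rating of a good retail P4 heatsink, that is the factor-of-nearly-four improvement in question.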

6) Given the cost constraints and mass production problems, liquid cooling is not an option, and even if it were, it would only buy another year or two before the heat problem reared its ugly head again. You can mount a fairly large, highly efficient liquid-to-air heatsink on top of the PC case and use ambient air to cool it, then use the cooled liquid to cool the CPU with a copper liquid heat sink. The total performance of such a setup will be at best around 0.05 °C/W, and since you will be using ambient (25°C) air rather than heated case air, your CPU power dissipation can be (50°C - 25°C)/(0.05 °C/W), or about 500W, while keeping the CPU temperature under 50°C. And that's it! So even with exotic cooling solutions, and putting aside all the other considerations, roughly 500W is the physical limit that any passive cooling system will handle, and at current trends we'll be there within a few years.

7) Active cooling is definitely not an option. Putting aside the usual cost and mass production concerns, the usual device used for active cooling (the Peltier or thermoelectric module) is grossly inefficient, and compressors are too heavy, expensive, and noisy to even consider. A Peltier module is about 10% to 25% as efficient as a compressor, depending upon the temperature differential, but is nevertheless used in many niche applications where simplicity of design and size outweigh increased power consumption. In general, you're lucky to get a COP (coefficient of performance) of even 1 with TE modules, meaning that if you need to remove 100W of heat, you need to power the modules with 100W of power. So for your 200W microprocessor you need to supply another 200W to power the TE modules, and you need to somehow remove all that heat from the computer case. Now we're starting to approach the power of a space heater.
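The bookkeeping behind that "space heater" remark follows directly from the definition of COP; a small sketch (the COP of 1 is the pessimistic figure from the paragraph above):

```python
# Total heat rejected into the case by a Peltier-cooled CPU:
# with COP = Q_cpu / P_tec, the hot side dumps Q_cpu + P_tec,
# i.e. Q_hot = Q_cpu * (1 + 1/COP).

def heat_into_case(q_cpu_w, cop):
    return q_cpu_w * (1 + 1 / cop)

print(heat_into_case(200, 1.0))   # prints 400.0
```

So a 200W CPU behind a COP-1 Peltier stage means 400W of heat to exhaust from the case, before counting the power supply's own losses.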

8) It is not foreseeable that silicon ICs will be able to operate reliably at temperatures higher than 50°C. If they could, then the cooling requirements mentioned above would be immaterial: we could just use a regular P4 heatsink and standard case cooling, turn it on, and watch the internal case temperature rise to ~60°C and the CPU die to ~115°C. Unfortunately, as transistors shrink they become more susceptible to thermal diffusion at higher temperatures, so if anything, newer CPUs will be even more sensitive to temperature than their predecessors, not less. For example, I can happily operate a MOSFET at 125°C but wouldn't dare try it with a CPU.

By now I'm sure you're either beginning to see that the picture looks rather gloomy three years down the road, or you know something I don't. The problem is simple: using current and foreseeable technologies, there is no way to increase computing power without also increasing heat production. Even multiprocessing is not a solution. Each generation of CPU is more efficient than its predecessor in terms of computing power per watt, so using multiple cooler-running older CPUs will consume more power and occupy far more space. Thinking about this, I would need, say, four PII-450s to equal the performance of a P4 2 GHz machine, and those four PII CPUs would consume about 140W in total versus 70W for the P4, and occupy more space to boot.

So that's it, folks. Short of a major breakthrough, which will likely not get out of the lab in time, I think our machines will reach a plateau in a few years, perhaps at around 5 GHz. Oh, did I mention there are also a myriad of other problems in designing circuits that operate at those near-microwave frequencies?
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
I believe Moore's Law relates to transistor density, and not performance per se.

I have a couple of 172W TECs from T.E. Dist. I never got around to using them, mostly because I couldn't find an affordable PSU that could deliver the voltage/amperage. They were to go side by side on a Slot 1 waterblock. There is simply no way to air-cool such a setup. In fact, with the newer CPUs from AMD/Intel, a TEC directly cooling the CPU just doesn't work; the heat output is too high. But you can use a TEC or three to cool down the liquid going into a waterblock. Phase-change is of course much more efficient.
 

Tannin

Storage? I am Storage!
Joined
Jan 15, 2002
Messages
4,448
Location
Huon Valley, Tasmania
Website
www.redhill.net.au
A very interesting post, JTR, thanks for taking the time to spell your thoughts out so clearly. And, from a debating point of view, a perfect topic - "perfect" insofar as most of us should be able to argue one side or the other quite happily! Naturally, seeing as you have started off on the affirmative, I shall bring up some points for the negative.

1: I thought Moore's Law was that computing power doubled every two years. If my memory of it is correct, then that means we have got a little in front of the curve and can afford to slow down a little without violating the law.

2: Storage capacity has increased in a Moorian fashion these last ten years or more, but storage performance has not kept pace. Indeed, while I haven't troubled to bring any figures to support my seat of the pants judgement, I should imagine that most areas of computing are lagging a long way behind CPU development, in terms of mean time to double performance. I'm thinking here of RAM, mass storage, I/O, and above all, software.

This leads to the thought that we are not so much faced with the impending end of processor performance as with the impending irrelevance of processor performance: in this view, our CPUs have become so powerful that further increases gain us very little for most tasks, because they are lost in amongst the limitations of storage, I/O and software.

3: The first of several developments, none of them sufficient on their own to salvage Moore's Law from the bane of heat generation, but each one serving to make a significant contribution: Better silicons. Pure silicon can stand a good deal more heat than impure silicon. Really well-purified silicon wafers - we are talking 99.lots pure here - provide the chip designer with a way to make CPUs that can stand up to very hot conditions without failure. Already AMD have made some big steps in this direction, with quite a few of the Athlon family chips being rated to 95 degrees C, and there is doubtless more to come from this. Perhaps we will see the use of exotic materials too; if not as actual semi-conductors, then as internal heat conduits to help conduct the heat away faster.

4: Liquid cooling is only expensive because it's not mass produced yet. Companies like GM and Ford have become masters of the art of getting rid of 200 kilowatts without too much fuss. Sure, a car radiator is big and clumsy and costs a couple of hundred dollars, but we are talking about a device that deals with one thousand times as much heat as even a 200W CPU. A little skull sweat and some old-fashioned American production engineering could soon come up with a mass-produced water block and radiator system that might sell for perhaps $50 or so. Once we remember that this $50 is a drop in the bucket in the context of a high-end home computer system - about the same price as two CPU speed grades - it starts to seem very affordable.

5: Beyond that, we may need to move to a super-cooling system using refrigeration, or some variation on the "spread the heat around" idea that is behind sodium-cooled exhaust valves. Yes, refrigeration requires yet more power, but it doesn't have to overcome the entire thermal load of the CPU - only that portion of the thermal load that cannot be dealt with by conventional air or liquid cooling. Imagine, for example, that you have 1000ml of water in your radiator and its associated plumbing. You have decided that you need the water flowing into the CPU cooling chamber at -10C in order to keep the CPU itself down to 100C, and the water flows out of the chamber at, say, 75 degrees. (Yes, we can use water at these temperatures - using pressure, or anti-freeze additives, or a combination of both. And we want to use water because it is cheap, clean, easy to work with, and just happens to have one of the highest thermal densities of any liquid around - as we noted in another thread recently.) You don't need to use your refrigeration plant to extract the whole 85 degrees from this water: you can use a conventional radiator to drop the water temperature to something not too far above ambient. Say ambient is 20 degrees; then it's not at all unreasonable to expect the radiator-cooled water to flow back to the refrigeration chamber at 30 or 40 degrees C, which in turn means that we are getting roughly half of our cooling from the passive system and roughly half from the refrigeration coil - and our power consumption for cooling is only a fraction of the CPU's consumption. (Plus the inefficiency of the refrigeration unit, of course - which is not insignificant.)
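The split of the temperature drop can be tallied from the example figures above (note this is the split of degrees, not of power - the power split also depends on the refrigerator's COP):

```python
# Hybrid cooling bookkeeping: the radiator handles the drop from the
# water's exit temperature down to near ambient; the refrigeration coil
# handles the rest, down to the chilled inlet temperature.
# Temperatures (C) are the illustrative ones from the paragraph above.
t_out, t_radiator_return, t_chilled = 75, 30, -10

passive_drop = t_out - t_radiator_return     # degrees handled by radiator
fridge_drop = t_radiator_return - t_chilled  # degrees handled by fridge
total = t_out - t_chilled

print(f"radiator: {passive_drop} of {total} C "
      f"({100 * passive_drop / total:.0f}%)")
print(f"refrigeration: {fridge_drop} of {total} C "
      f"({100 * fridge_drop / total:.0f}%)")
```

With these numbers the passive radiator and the refrigeration coil each carry roughly half the 85-degree span.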

6: Power management. Already, notebook CPUs have become very good at using their full rated power only when absolutely required, and dropping their battery drain to almost nothing when idle. Given that the vast majority of CPUs spend the great bulk of their time doing nothing at all, and most of the rest of their time doing not very much (how hard is it for an Athlon XP to run Outlook, after all?), this is a very significant factor.

7: I am intrigued by the thought that there must be some way to harness all those wasted joules. In cars, we use wasted joules to run our heaters and demisters. Surely there must be something more practical than Tea's use of a wall full of Athlons folding full-tilt to substitute for the gas heater in the office. :) Does anyone remember all those endless jokes about computerised toasters? Maybe it's not so silly as it sounds!
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
Ars technica had an article up over the weekend (I think. Going days without being online is messing up my sense of time) about moving to carbon, rather than silicon, for processor manufacture. I've heard talk of using germanium also. Both of these have vastly different thermal properties.

Tannin, quite a few aspects of computing have exceeded, or FAR exceeded, the predictions of Moore's Law. I don't think magnetic storage is covered, strictly speaking (we're talking about transistor density, after all), but it has done better than double every 18 months. Graphics processing, though, wow. Graphs I've seen suggest a log scale for improvements in graphics processing. Unreal stuff.

... and I think that's my point. We're in the process of moving some of the smarts from the CPU to other points on the PC. We have GPUs with transistor counts exceeding mainline CPUs. We have southbridge chips handling all our external I/O (remember when serial I/O was a different card from parallel?), northbridges handling HOW many different busses? These special-purpose chips are getting better and better. The result isn't really multiprocessing, and eventually all these functions will be moved back into a central chip (at least the motherboard stuff will, I think), but perhaps for the near future, that is a solution.

One final thought is that the desktop PC may very well be near the end of its life. My father now carries a Fujitsu tablet thing around his house and his office, with a wireless connection to a machine not unlike the one sitting on Flagreen's desk for back-end work. He can plug in a keyboard and regular mouse, or just use a stylus as a pointer. The machine isn't that fast by modern standards (700MHz, I think), but it does his Excel and Powerpoint stuff, even CAD, fast enough that anything more would be irrelevant. I think that perhaps we'll hit a point where that sort of setup will be more feasible for more people. To put things another way, if how we use the machines changes, maybe more CPU power will be less relevant.

Finally, rest assured that Intel will *never* stop making faster chips. The company depends on it.
 

cquinn

What is this storage?
Joined
Mar 5, 2002
Messages
74
Location
Colorado
1: I thought Moore's Law was that computing power doubled every two years.

That's why Moore's Law is not a law...

The original observation by Moore was that (at Intel's then-current rate of chip development) transistor density doubled about every 24 months.

He later revised that estimate to 18 months, and the concept got carried away, so that people started applying it to every aspect of technology connected to computers, instead of specifically to Intel's R&D division. You really should not try to apply ML to other areas of computing, as each case comes with its own factors that should be applied for a valid observation of growth/performance. Rather, there should be a different "Moore's Law" measurement for each category.

(If you look at the growth of ICs for the video card industry, their more recent designs far exceed what would be expected if Moore's Law were a consistent growth measure.)
 

Prof.Wizard

Wannabe Storage Freak
Joined
Jan 26, 2002
Messages
1,460
Pradeep said:
I believe Moore's Law relates to transistor density, and not performance per se.
Exactly. I don't understand why people keep conflating these two terms: density and performance.
Poor engineer Moore only claimed the first, which still holds fairly well for a theory more than 20 years old.
 

Buck

Storage? I am Storage!
Joined
Feb 22, 2002
Messages
4,514
Location
Blurry.
Website
www.hlmcompany.com
7: I am intrigued by the thought that there must be some way to harness all those wasted joules. In cars, we use wasted joules to run our heaters and demisters. Surely there must be something more practical than Tea's use of a wall full of Athlons folding full-tilt to substitute for the gas heater in the office. Does anyone remember all those endless jokes about computerised toasters? Maybe it's not so silly as it sounds!

How about a computer system that uses the heat generated by various chips to run a generator that powers the system itself? Have a self-contained unit that takes advantage of its own heat.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
22,269
Location
I am omnipresent
I'm a PC tech, not an HVAC guy! Last thing I really want to do is start measuring the calories/joules or BTUs I need to build a semi-efficient generator. That's WAY too much work.
 

jtr1962

Storage? I am Storage!
Joined
Jan 25, 2002
Messages
4,372
Location
Flushing, New York
Pradeep said:
I have a couple of 172W TECs from T.E. Dist. I never got around to using them, mostly because I couldn't find an affordable PSU that could deliver the voltage/amperage. They were to go side by side on a Slot 1 waterblock. There is simply no way to air-cool such a setup.

I had a similar problem when I first started experimenting with TECs, so I just made my own power supplies. I purchased several large transformers from surplus places, large filter capacitors, and a bunch of MOSFETs, inductors, etc. Eventually I had my own switching power supply that I used to power a thermoelectric temperature chamber that I made. The power supply puts out around 700W and operates at about 95% efficiency. As you said, I couldn't air-cool such a setup, especially since 700W of heat would have made the small room that I work in intolerably warm, so I made my own copper heatsinks and used tap water. Seeing as I only use the chamber occasionally, I wasn't concerned about throwing the tap water away when I was done with it (actually I have it go out to the garden except in winter). Very efficient setup. The TEC modules run only about 3°C above the water inlet temperature. In the winter the tap water is about 5°C, which helps the chamber reach lower temperatures. I've gone a bit under -50°C, and I'm certain I could go another 5 to 10° lower with new modules, since some of my modules were slightly damaged during my experimenting and hence not running at full capability. Of course, this project took me over a year of my spare time, but I learned a great deal, and if I had to make another one, I could do it better and in probably only a few weeks.
 

jtr1962

Storage? I am Storage!
Joined
Jan 25, 2002
Messages
4,372
Location
Flushing, New York
Tannin said:
2: Storage capacity has increased in a Moorian fashion these last ten years or more, but storage performance has not kept pace. Indeed, while I haven't troubled to bring any figures to support my seat of the pants judgement, I should imagine that most areas of computing are lagging a long way behind CPU development, in terms of mean time to double performance. I'm thinking here of RAM, mass storage, I/O, and above all, software.

Very true, and here I think there is some room to improve the overall performance of computers without encountering the heat problem. Storage performance especially is abysmal, but as long as we only have mechanical hard disks there is limited room for improvement. I think the latest iteration of Seagate's 15K drives is about as fast as we're ever going to see access times go. Of course, STRs will continue to rise, but only as a consequence of increasing areal density. When solid state storage becomes mainstream (it will in time), this one thing might very well increase the performance of existing machines by 50% to 100%, especially since the faster processors of that time will waste even more CPU cycles waiting for hard disk I/O than they do now.

Another fact that you touched on is software development. It seems that each new upgrade of software is more bloated, and ends up running at the same speed on a faster machine as its predecessor did on a slower one. Software makers should write more efficient software, but then of course there wouldn't be a driving force to get people to buy new machines. It's a shame, really, that many perfectly usable old machines end up in the garbage because modern bloated software won't run fast enough on them.

Better silicons. Pure silicon can stand a good deal more heat than impure silicon. Really well-purified silicon wafers - we are talking 99.lots pure here - provide the chip designer with a way to make CPUs that can stand up to very hot conditions without failure. Already AMD have made some big steps in this direction, with quite a few of the Athlon family chips being rated to 95 degrees C, and there is doubtless more to come from this. Perhaps we will see the use of exotic materials too; if not as actual semi-conductors, then as internal heat conduits to help conduct the heat away faster.

We're already seeing some of this: use of copper (and soon carbon nanotubes), diamond dies, etc. The point is that fairly soon even these improvements won't be enough at the power levels the chips will reach. Due to the enormous power densities, keeping the copper slug on top of the CPU at 50°C may very well mean that some of the internal silicon is at 125°C, just due to thermal losses at the interface between the CPU die and the slug. And unless materials are developed with thermal conductivity an order of magnitude or more above carbon nanotubes, this will prove to be an insoluble problem.

4: Liquid cooling is only expensive because it's not mass produced yet. Companies like GM and Ford have become masters of the art of getting rid of 200 kilowatts without too much fuss.

I'm sure liquid cooling for the masses will eventually happen, but it will only buy us a year or two, as I pointed out above. Sure, car radiators get rid of 200 kW, but look at the size of the cooling fan and radiator, and they let the coolant reach 90° to 100°C in the process. Our CPU cooling system needs to keep the coolant going to the microprocessor at perhaps 40°C maximum.

5: Beyond that, we may need to move to a super-cooling system using refrigeration, or some variation on the "spread the heat around" idea that is behind sodium-cooled exhaust valves.

Sure, this can be done today, but I'm not sure that we can ever make a system like this at a price people are willing to pay unless we leverage existing cooling systems already in the home, like the central air conditioning system. In a colder climate, we can just put a huge liquid heat sink outdoors, and use the chilled water to cool the CPU, but most of the nation doesn't live in that type of climate.

6: Power management. Already, notebook CPUs have become very good at using their full rated power only when absolutely required, and dropping their battery drain to almost nothing when idle.

A partial solution. The problem is that no matter how you slice it, the CPU must be able to operate at full power continuously when called upon, or else you might as well just use a slower CPU. So the heat still needs to be dealt with. Imagine trying to sell a 5 GHz machine and telling the customer it has a 50% duty cycle.

7: I am intrigued by the thought that there must be some way to harness all those wasted joules. In cars, we use wasted joules to run our heaters and demisters. Surely there must be something more practical than Tea's use of a wall full of Athlons folding full-tilt to substitute for the gas heater in the office. :) Does anyone remember all those endless jokes about computerised toasters? Maybe it's not so silly as it sounds!

You can use some of the heat to perhaps preheat water going to a water heater. The problem is that such low-grade waste heat is difficult to make practical use of, since there is no technology capable of efficiently exploiting a small temperature differential. TE modules can be run in reverse as generators, but they are grossly inefficient at low temperature differentials. There is huge room for improvement in their efficiency, and recently some research has started in that area. The problem is that in the four decades of their existence, the efficiency of TE modules has improved by only 10 or 20%, mostly due to manufacturing enhancements. Bismuth telluride is still the material of choice because nothing better has yet been found. If the efficiency of TE modules can be improved by a factor of three or four, they will replace general refrigeration worldwide, so there is plenty of incentive here, but unfortunately no solutions thus far. If they could be made to operate at close to the Carnot efficiency, then you could cool a 250W CPU down to, say, -25°C using only about 70W of input power to the modules, instead of the thousands of watts that current modules would need, or the several hundred that a compressor would require. I think active cooling will only buy a couple of years, although that might be enough time for a new technology to make it out of the lab. Of course, wishing for a TE cooler that operates at near-Carnot efficiency is like hoping fusion will become a reality. Eventually both will, but not any time soon.
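That 70W figure can be checked against the Carnot limit; a quick sketch, where the 250W load and -25°C cold side are from the paragraph above and the 45°C hot-side temperature is my own assumption for the heat-rejection side:

```python
# Carnot limit for a cooler: COP_max = T_cold / (T_hot - T_cold),
# with temperatures in kelvin. Input power at the limit is Q / COP_max.

def carnot_input_power(q_w, t_cold_c, t_hot_c):
    t_cold = t_cold_c + 273.15
    t_hot = t_hot_c + 273.15
    cop_max = t_cold / (t_hot - t_cold)
    return q_w / cop_max

print(carnot_input_power(250, -25, 45))   # roughly 70 W
```

Real Peltier modules run far below this ideal, which is why they would need thousands of watts for the same job today.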

As I said earlier, I still think the picture is rather gloomy, at least in the near future. Long term, we may very well surpass Moore's Law, but we're not going to do it with silicon or any of its derivatives. I think organic computing is a long-term candidate - look at the computing power of the brain versus the power it consumes. Of course, our brains have their own liquid cooling system. :wink:
 

jtr1962

Storage? I am Storage!
Joined
Jan 25, 2002
Messages
4,372
Location
Flushing, New York
Mercutio said:
The machine isn't that fast by modern standards (700MHz, I think), but it does his Excel and Powerpoint stuff, even CAD, fast enough that anything more would be irrelevant. I think that perhaps we'll hit a point where that sort of setup will be more feasible for more people. To put things another way, if how we use the machines changes, maybe more CPU power will be less relevant.

Yes. I was thinking along the same lines. I would venture to say that given how most people use their machines, perhaps 1 GHz or its equivalent is enough, especially if software bloat can be reined in. An interesting possibility is underclocking existing processors. Say you have a 3 GHz processor that uses 75W. You cut the clock speed to 1 GHz, and the power consumption goes down to 25W. Since it's now running at a lower clock speed, you can also reduce the core voltage - by perhaps 40% - and still maintain stability; since power scales with the square of the voltage, your power consumption is now down to perhaps 8W. A large heatsink will cool this processor just fine with the small airflow from the case fans, or better yet you can sink the processor to the PC's case.

As CPUs get faster, they also get more efficient in terms of computing power per watt, so if you do the same thing with your 7 GHz, 200W processor (an extrapolation to the future) you end up with a 1 GHz processor consuming only 4W. This might very well be where computing is going: fast, hot machines for those who really need them, and derated, very low power ones for those who can get by with 1 GHz or so, which incidentally will be enough for most home or office use. Combine that with an LCD or OE display, and you may very well end up with a complete system that can run off a large (25W) AC adapter.
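The underclocking arithmetic follows the usual dynamic-power relation for CMOS, P ~ C·V²·f (leakage ignored). A small sketch; the 40% voltage cut is an illustrative assumption, not a measured figure:

```python
# Dynamic CMOS power scales roughly as P ~ C * V^2 * f, so scaling the
# clock by f_ratio and the core voltage by v_ratio scales power by
# f_ratio * v_ratio^2.

def scaled_power(p_base_w, f_ratio, v_ratio):
    return p_base_w * f_ratio * v_ratio ** 2

# 75 W at 3 GHz, underclocked to 1 GHz with the core voltage cut ~40%:
print(scaled_power(75, 1 / 3, 0.57))   # roughly 8 W
```

The frequency cut alone gives the 75W-to-25W step; the voltage cut contributes the remaining quadratic factor down to ~8W.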
 

SteveC

Storage is cool
Joined
Jul 5, 2002
Messages
789
Location
NJ, USA
Intel discussed this at the Fall IDF today. Apparently they are looking at making the CPU run at two different voltages, with less critical transistors running at a lower voltage while operating at the same frequency. There's a short bit on it at Anand's.
They also demonstrated a processor running at 4.684 GHz made on a .13 micron process.

Steve
 

e_dawg

Storage Freak
Joined
Jul 19, 2002
Messages
1,903
Location
Toronto-ish, Canada
Tannin said:
6: Power management. Already, notebook CPUs have become very good at using their full rated power only when absolutely required, and dropping their battery drain to almost nothing when idle. Given that the vast majority of CPUs spend the great bulk of their time doing nothing at all, and most of the rest of their time doing not very much (how hard is it for an Athlon XP to run Outlook, after all?), this is a very significant factor.

And just how do you expect power management to work when we are folding 24x7? :)
 