Intel's shift to 90 nanometers could be a little "shaky"
mmonnin
Centreville, VA
"A PRELIMINARY EXAMINATION of Intel's future plans shows that moving to the nano level, a level below .13 microns, might be tougher than we imagined.
According to PC Watch, Prescott is currently consuming 103 watts, the equivalent to the power of a dim light bulb. And even Dothan, Intel's Pentium M processor at the 90 nano shrink, is looking a tad shaky, too, in its words.
It was never going to be easy, and it really is to the credit of these CPU folk at Intel, AMD, Broadcom and elsewhere that they're stiffening their sinews, summoning up their blood, and having a good old stab at the formerly relatively easy "die shrink" problem. This is called in semiconductor engineer jargon "a challenge". That challenge will get more, er challenging, in the future."
http://www.theinquirer.net/?article=11022
========================
Intel document confirms Prescott dissipates 103 W
"DOCUMENTS from Intel that the INQUIRER has seen confirm that the future 90 nano chip will dissipate over 103 watts.
According to the documents, Prescott taped out in late May and it had originally estimated the original power requirements lower than the reality.
But, the documents indicate, there are no changes to Prescott frequencies and Intel is still on track to introduce a 3.4 GHz Prescott in Q4 of this year. It did, however, have to revise its FMB guidance that ended up in revision 1.5, as this was the only way to make all Prescott enabled motherboards support the chip."
http://www.theinquirer.net/?article=11092
"A PRELIMINARY EXAMINATION of Intel's future plans shows that moving to the nano level, a level below .13 microns, might be tougher than we imagined.
According to PC Watch, Prescott is currently consuming 103 watts, the equivalent to the power of a dim light bulb. And even Dothan, Intel's Pentium M processor at the 90 nano shrink, is looking a tad shaky, too, in its words.
It was never going to be easy, and it really is to the credit of these CPU folk at Intel, AMD, Broadcom and elsewhere that they're stiffening their sinews, summoning up their blood, and having a good old stab at the formerly relatively easy "die shrink" problem. This is called in semiconductor engineer jargon "a challenge". That challenge will get more, er challenging, in the future."
http://www.theinquirer.net/?article=11022
========================
Intel document confirms Prescott dissipates 103 W
"DOCUMENTS from Intel that the INQUIRER has seen confirm that the future 90 nano chip will dissipate over 103 watts.
According to the documents, Prescott taped out in late May and it had originally estimated the original power requirements lower than the reality.
But, the documents indicate, there are no changes to Prescott frequencies and Intel is still on track to introduce a 3.4 GHz Prescott in Q4 of this year. It did, however, have to revise its FMB guidance that ended up in revision 1.5, as this was the only way to make all Prescott enabled motherboards support the chip."
http://www.theinquirer.net/?article=11092
Comments
100 watt bulbs aren't dim. And if anyone can shift to 90nm, it'll be Intel. Their process tech is second to none.
It's AMD I'm worried about. It's only been about a year since AMD got its 130nm process working properly
(AMD paper-launched its Tbred 2600+ in Aug 2002 - it wasn't actually available until Nov)
But producing 103 W from the Intel camp, and needing 90 amps to do it... that's just scary. Prescott isn't even expected to top 3.8GHz, which would make it the smallest percentage clock increase a new core has ever delivered in Intel's history.
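Quick back-of-the-envelope on those two numbers (both just figures floating around the thread, not Intel specs): 103 W drawn at roughly 90 A pins down what the core voltage would have to be.

```python
# Back-of-the-envelope: what core voltage do 103 W at ~90 A imply?
# Both figures are the rough numbers quoted above, not from a datasheet.
power_w = 103.0    # reported Prescott dissipation
current_a = 90.0   # current figure quoted in the thread

vcore = power_w / current_a  # V = P / I
print(f"Implied core voltage: {vcore:.2f} V")  # ~1.14 V
```

Which lands right around where 90nm core voltages were expected to be, so the two figures are at least consistent with each other.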
I don't doubt their ability to make the jump, I just doubt their ability to properly implement it on the first go around. Intel has been less than stellar at doing what their theory says they will.
AMD got roasted for their 75 watt Tbird, and an OC'd P4C hits 100 watts easily. Prescott is doing that right out of the gate; it's going to make watercooling mainstream.
AMD is looking better with its 60 watt Opteron x46.
BTW Nvidia has announced that NForce3 based Dual Opteron mobos are coming.
At 2.2GHz, the only thing an Opteron DOESN'T win against a 3.2C is SPECviewperf, and that relies almost entirely on clockspeed and is an irrelevant synthetic anyway!
1GHz slower, and 10-45% faster is nothin' to balk at.
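To put that in perspective, here's what "1GHz slower but 10-45% faster" works out to per clock (rough arithmetic on the thread's own numbers, nothing measured):

```python
# Per-clock work implied by "2.2 GHz Opteron beats a 3.2 GHz P4C by 10-45%".
# Illustrative arithmetic only, using the figures from the thread.
opteron_ghz, p4_ghz = 2.2, 3.2

for adv in (0.10, 0.45):  # 10% and 45% overall performance lead
    per_clock = (1 + adv) * p4_ghz / opteron_ghz
    print(f"{adv:.0%} faster overall -> ~{per_clock:.2f}x the work per clock")
```

So even at the low end of that range, the Opteron would be doing roughly 1.6x the work per cycle.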
Man, dual opteron nforce3. /drool
But at any rate, drawing 90 amps through motherboard traces, wire-fine at that... it's NUTS, I tell ya! 103 W STOCK is just way out there too. I keep wondering what went wrong in their 90nm process. Theorists have always concluded that smaller dies run cooler, and at identical clockspeeds through the .25u, .18u, and .13u processes, they've been right.
Did the theory not pan out, or is Intel just having problems? Who knows, really? But Intel's facing trouble to quell that sweltering heat... If they DO decide to go watercooling, that adds more parts to machines that can break. Consumers will hate them when yet another part in their machine fails, but then again, the repair industry would love them, and the WC industry would surely skyrocket.
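The "smaller die runs cooler" theory rests on dynamic switching power, P ~ C x V^2 x f: each shrink cuts capacitance and voltage, so power drops even as clocks rise. A rough sketch with made-up illustrative numbers (not measured values) shows why the old rule used to work; what it leaves out is leakage current, which blows up below 130nm and is the likely culprit here.

```python
# Why "smaller die = cooler chip" used to hold: dynamic power P ~ C * V^2 * f.
# Numbers below are illustrative only, not measured values for any real chip.

def dynamic_power(c_rel, v, f_ghz):
    """Relative dynamic switching power, proportional to C * V^2 * f."""
    return c_rel * v**2 * f_ghz

p_130 = dynamic_power(1.0, 1.5, 3.0)  # hypothetical 130nm baseline
p_90 = dynamic_power(0.7, 1.3, 3.4)   # idealized shrink: lower C and V, faster clock

print(f"90nm/130nm dynamic power ratio: {p_90 / p_130:.2f}")  # < 1, i.e. cooler
```

By that math the shrink should dissipate less even at a higher clock, which is exactly why a 103 W Prescott is so surprising: the dynamic-power theory didn't break, but leakage isn't in that formula.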
Bah, early-morning hypotheticals. Love 'em and hate 'em.
We'll have to see how the other fabs (IBM, TSMC, UMC & AMD) do at 90nm...
GaAs has held tempting promises for years, and there are a few others on the tip of my tongue that I can't recall.
Diamond. The latest Wired has an article about a company that is growing near-perfect synthetic diamond, soon to be available in wafer form for the chipmaking industry... all for about $5 a carat. Interesting implications for the jewelry industry too...
Read the article.
Loved the article.
Only thing is, it said the diamonds were still a few years away from even reaching 4" square.
Intel uses what, 12" (300 mm) diameter wafers?
Now, perhaps by the time silicon truly becomes the limiting factor, the diamonds will be larger, like 10" or so.
Who knows, by the time we're using diamond substrates for CPUs, a 4" wafer might be all that's needed to make a ton of dies.