NVIDIA’s chief scientist: Moore’s Law is dead, CPUs inefficient

NVIDIA's Bill Dally

Bill Dally, Chief Scientist and SVP of Research at NVIDIA, wrote in a Forbes column that Moore’s Law no longer scales processor performance with transistor count, and that CPUs are no longer capable of fulfilling ever-increasing performance demands.

“[Moore’s Law] predicted the number of transistors on an integrated circuit would double each year (later revised to doubling every 18 months). This prediction laid the groundwork for another prediction: that doubling the number of transistors would also double the performance of CPUs every 18 months,” Dally said.

“[Moore] also projected that the amount of energy consumed by each unit of computing would decrease as the number of transistors increased. This enabled computing performance to scale up while the electrical power consumed remained constant. This power scaling, in addition to transistor scaling, is needed to scale CPU performance. But in a development that’s been largely overlooked, this power scaling has ended. And as a result, the CPU scaling predicted by Moore’s Law is now dead. CPU performance no longer doubles every 18 months.”

Dally was also critical of the industry’s trend toward increasing the number of cores per CPU, comparing it to “[building] an airplane by putting wings on a train,” and lambasted CPUs as inefficient, consuming too much power to handle a given workload.

Instead, Dally offers GPUs as the solution to the growing performance crunch, positing parallel computing as the way to avoid industry stagnation.

“Parallel computing is the only way to maintain the growth in computing performance that has transformed industries, economies, and human welfare throughout the world,” Dally said. “The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers – not on multi-core CPUs. Let’s enable the future of computing to fly – not rumble along on trains with wings.”
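
As a rough illustration of what Dally means by “throughput computers,” the sketch below (ours, not Dally’s, with illustrative names and launch parameters) writes the same element-wise addition two ways: once as a serial loop for a single CPU core, and once as a data-parallel CUDA kernel in which thousands of threads each handle one element.

    // Illustrative sketch only: serial CPU loop vs. data-parallel CUDA kernel.
    #include <cuda_runtime.h>

    // Serial CPU version: one core walks the array one element at a time.
    void add_cpu(const float* a, const float* b, float* c, int n) {
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }

    // Data-parallel GPU version: each thread computes one element, so throughput
    // scales with the number of parallel execution units rather than clock speed.
    __global__ void add_gpu(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    // Hypothetical launch, assuming d_a, d_b, d_c are device buffers of length n:
    //   int threads = 256;
    //   int blocks  = (n + threads - 1) / threads;
    //   add_gpu<<<blocks, threads>>>(d_a, d_b, d_c, n);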

The Icrontic viewpoint

Rob Updegrove

Well, that certainly sounds like something that would come out of a graphics company, doesn’t it? While I don’t disagree with him in theory, in reality it sounds a little biased coming from NVIDIA. Personally, though, I don’t think the market requires CPU performance to double every 18 months any longer, at least not for the average consumer.

Average Joe Consumer doesn’t need anything more powerful than a netbook to experience all that MS Office and the Internet have to offer, at least on the CPU end of things.

Robert Hallock

If Dally is addressing the enterprise/HPC market, then he’s not wrong. The industry is approaching a point where improving compute density and efficiency is best served by courting the massively parallel architectures of the GPU; not only is the cost/benefit ratio out of this world when the task is suitable, but the CPU’s diminishing returns relative to the GPU’s make the case all the more convincing.

It is, however, the point of suitability that continues to serve as the spanner in the works: not every enterprise/HPC app benefits from the GPU. Sometimes it’s simply that the task at hand hasn’t been polished to love the GPU, and other times it’s the case that the application just doesn’t benefit from a slab of Teslas ready to rumble.
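
To make the suitability point concrete, here is a hypothetical contrast (our sketch, not Hallock’s): an element-wise transform parallelizes trivially, while a loop whose every iteration depends on the previous result gives the GPU nothing to work with until the algorithm itself is restructured.

    // Hypothetical illustration of workload suitability.
    // Element-wise work: every iteration is independent, so it maps naturally
    // onto thousands of GPU threads.
    __global__ void scale(float* data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    // Serial recurrence: each step needs the previous result, so throwing more
    // threads at it does not help until the algorithm is reworked (for example,
    // as a parallel prefix scan).
    void running_product(const float* a, float* out, int n) {
        out[0] = a[0];
        for (int i = 1; i < n; ++i)
            out[i] = out[i - 1] * a[i];
    }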

On the desktop, the case for the GPU is even less convincing (for now). Outside of GPU-accelerated user interfaces (Direct2D elements in browsers or Windows Aero, for example), many GPUs will never so much as squirt a bead of sweat at the hands of their Farmville-loving owners. That’s just the way things are, and why Intel’s share of the GPU market is enormous, not to mention forged on the back of lowly integrated GPUs.

It’s understandable that a GPU man would speak ill of the CPU market, which could at any time undermine NVIDIA’s GPU computing gamble. Suffice it to say, rattling the sabre for more GPUs is only part of the picture: CPUs continue to drive virtualization performance, database performance, transaction performance and, yes, even research. Microsoft, VMware, Red Hat, Parallels and Citrix (big names, all) are hot for a future filled with CPUs rocking more and faster cores.

Zeroing in on the GPU as the future of computing also fails to address its major shortcomings: the CPU is entirely more flexible, GPUs are harder to produce in volume (coughFermicough), and the world has 30 years’ experience with CPUs, not GPUs.

There is no doubt in my mind that the GPU will play a critical role in the future of computing, but I doubt it will ever do so at significant expense to the CPU. It also wouldn’t hurt if the industry players x86ified, so we could all get on with growing the industry rather than dragging it down with arguments over how to build the foundation.

(I also think a flying train would be pretty rad. Just sayin’.)

Ryan Wilsey

I agree with [Rob Updegrove], for the most part, on the needs of Average Joe Consumer.

To me, [Moore’s Law] is somewhat like the console. While we’re not quite at true photo-realism in games, they’ve developed to the point that there’s enough horsepower for developers to have plenty of freedom in art style, and disc storage is high enough to house high-fidelity audio.

Console generations don’t have the impressive jumps they once did, such as when we moved from the NES to the SNES. I feel it’s the same thing with CPUs and GPUs. As naïve as that claim may be, I think the average user has a lot going for them already and 18-month doublings aren’t as necessary anymore.

Comments

  1. John Mullinax I think NVIDIA has got a good point on leveraging the power of the GPU. Microsoft DirectCompute exists to make apps that use GPU resources accessible to developers. Where I disagree: It's not a question of whether to leverage CPU or GPU... The GPU is the more specialized component -- use it for everything possible. And use CPU for everything else, but use multi-core as much as possible. (BTW, these are two reasons I think IE9 will be a hit for Microsoft -- it uses GPU and multicore, plus compiled javascript, etc, etc.)

    I definitely disagree that average user just needs a netbook. For example, I think even average users want HD video via their PCs... and 1080P video on an Atom proc without GPU is not a great experience.
  2. Butters I agree with Thrax. A flying train would be quite delicious.
  3. Tim I knew years ago that Moore's Law was going to end someday. Doubling the performance of a Pentium II 366 MHz is one thing; doubling the performance of a new top-end i7 is another. CPUs will get more powerful, just not double every 18 months.
  4. shwaip Moore's law isn't about performance.
  5. ardichoke
    "Moore's law describes a long-term trend in the history of computing hardware, in which the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years. It is often incorrectly quoted as a doubling of transistors every 18 months, as David House, an Intel executive, gave that period to chip performance increase. The actual period was about 20 months."

    You know... just to be precise.
  6. jpparker88 I was just studying this in my IT program book a couple days ago, trippy. I somewhat disagree with the "most users just need a netbook" comment, but it's not far from the truth. Yes, people would like to have 1080p video, but how many of them have 1080p-capable monitors or televisions? Working for an internet company, I see many people with many uses for computers, and a fair lot of them just surf or check email. Nobody needs a quad-core uber machine for that. Technology will continue to soldier on, but I think it will be more in the handheld market, such as smartphones. I mean, my dad's Droid has more computing power and storage space than my first laptop computer. I think we will begin to see a paradigm shift from people having dedicated computers and phones to people using an all-in-one device for most of their needs and having a computer for things their handheld won't do.
  7. Bob I can understand why NVIDIA is so nervous. Fast GPUs are limited by CPUs in gaming, which is, after all, a major market.

    For all practical purposes, real-time interactive 3D rendering is punching a wall with current CPUs. It's difficult to imagine this situation improving.
