NVIDIA Chief Scientist and SVP of Research Bill Dally argued in a Forbes column that Moore’s Law no longer scales processor performance along with transistor counts, and that CPUs can no longer keep up with ever-increasing performance demands.
“[Moore’s Law] predicted the number of transistors on an integrated circuit would double each year (later revised to doubling every 18 months). This prediction laid the groundwork for another prediction: that doubling the number of transistors would also double the performance of CPUs every 18 months,” Dally said.
“[Moore] also projected that the amount of energy consumed by each unit of computing would decrease as the number of transistors increased. This enabled computing performance to scale up while the electrical power consumed remained constant. This power scaling, in addition to transistor scaling, is needed to scale CPU performance. But in a development that’s been largely overlooked, this power scaling has ended. And as a result, the CPU scaling predicted by Moore’s Law is now dead. CPU performance no longer doubles every 18 months.”
Dally was also critical of the industry’s trend toward increasing the number of cores per CPU, comparing it to “[building] an airplane by putting wings on a train.” He also lambasted CPUs as inefficient, consuming too much power to handle a given workload.
Instead, Dally offers GPUs as the solution to the growing performance crunch, positioning parallel computing as the way to avoid industry stagnation.
“Parallel computing is the only way to maintain the growth in computing performance that has transformed industries, economies, and human welfare throughout the world,” Dally said. “The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers – not on multi-core CPUs. Let’s enable the future of computing to fly – not rumble along on trains with wings.”
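For readers wondering what a “throughput computer” actually does differently, the sketch below (ours, not Dally’s, with purely illustrative names and numbers) shows the idea in CUDA: instead of one fast core grinding through a loop, a million lightweight threads each handle a single element.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per element: many simple threads running at once instead of
// one fast core looping over the data.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;            // roughly one million elements
    const size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMallocManaged(&x, bytes);     // unified memory keeps the sketch short
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);      // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Built with nvcc, the launch spreads the work across roughly 4,000 blocks of 256 threads; trading a few fast cores for thousands of slow-but-plentiful ones is the throughput model Dally is pitching.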
The Icrontic viewpoint
Rob Updegrove
Well, that certainly sounds like something that would come out of a graphics company, doesn’t it? While I don’t disagree with him in theory, in reality it sounds a little biased coming from NVIDIA. Personally, though, I don’t think that the market requires CPU performance to double every 18 months any longer, at least not for the average consumer.
Average Joe Consumer doesn’t need anything more powerful than a netbook to experience all that MS Office and the Internet have to offer, at least on the CPU end of things.
Robert Hallock
If Dally is addressing the enterprise/HPC market, then he’s not wrong. The industry is approaching a point where improving compute density and efficiency is best served by courting the massively parallel architecture of the GPU; not only is the cost/benefit ratio out of this world when the task is suitable, but the CPU’s diminishing returns relative to the GPU’s make the case all the more convincing.
It is, however, the point of suitability that continues to serve as the spanner in the works: not every enterprise/HPC app benefits from the GPU. Sometimes the task at hand simply hasn’t been polished to love the GPU, and other times the application just doesn’t benefit from a slab of Teslas ready to rumble.
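A rough sketch of what I mean (purely illustrative, in CUDA only because that’s the hardware under discussion, with made-up routine names): the first routine is a natural fit for the GPU because every element is independent, while the second, as written, is a chain of dependent steps that leaves thousands of GPU threads with nothing to do.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Suits the GPU: every element is independent, so thousands of threads can
// each handle one at the same time.
__global__ void scale(int n, float k, float *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= k;
}

// Doesn't suit the GPU as written: each step depends on the previous one,
// so there's nothing to hand out to parallel threads. (It can be recast as
// a parallel reduction, but that's exactly the kind of polishing noted above.)
float running_sum(int n, const float *data)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc += data[i];
    return acc;
}

int main()
{
    const int n = 1 << 16;
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;

    scale<<<(n + 255) / 256, 256>>>(n, 2.0f, data);   // the parallel-friendly part
    cudaDeviceSynchronize();

    printf("sum = %f\n", running_sum(n, data));       // the serial part stays on the CPU
    cudaFree(data);
    return 0;
}
```

Restructuring that second routine into something GPU-friendly is possible, but it takes exactly the kind of engineering effort not every enterprise shop has the time or appetite for.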
On the desktop, the case for the GPU is even less convincing (for now). Outside of GPU-accelerated user interfaces (Direct2D elements in browsers or Windows Aero, for example), many GPUs will never so much as squirt a bead of sweat at the hands of their Farmville-loving owners. That’s just the way things are, and why Intel’s share of the GPU market is enormous, not to mention forged on the back of lowly integrated GPUs.
It’s understandable that a GPU man would speak ill of the CPU market, which could at any time undermine NVIDIA’s GPU computing gamble. Suffice it to say, rattling the sabre for more GPUs is only part of the picture: CPUs continue to drive virtualization performance, database performance, transaction performance and, yes, even research. Microsoft, VMware, Red Hat, Parallels and Citrix (big names, all) are hot for a future filled with CPUs rocking more and faster cores.
Zeroing in on the GPU as the future of computing also fails to address some major concerns: the CPU is far more flexible, GPUs are harder to produce in volume (coughFermicough), and the world has 30 years’ experience with CPUs, not GPUs.
There is no doubt in my mind that the GPU will play a critical role in the future of computing, but I doubt it will ever do so at significant expense to the CPU. It also wouldn’t hurt if the industry players x86ified, so we could all get on with growing the industry, rather than dragging it down with arguments over how to build the foundation.
(I also think a flying train would be pretty rad. Just sayin’.)
Ryan Wilsey
I agree with [Rob Updegrove], for the most part, on the needs of Average Joe Consumer.
To me, [Moore’s Law] is somewhat like the console market. While we’re not quite at true photo-realism in games, consoles have developed to the point that there’s enough horsepower for developers to have plenty of freedom in art style, and disc storage is high enough to house high-fidelity audio.
Console generations don’t have the impressive jumps they once did, such as when we moved from the NES to the SNES. I feel it’s the same thing with CPUs and GPUs. As naïve as that claim may be, I think the average user already has a lot going for them, and 18-month doublings aren’t as necessary anymore.