I could see servers using this many cores, but I don't understand why they would expand this technology into personal computers. Only a fool would buy an 8-core processor without technology like OpenCL in existence. (Hence little to no market.)
OpenCL is important for addressing one issue with thread concurrency: scaling across hardware and platforms.
With respect to desktop software applications, though, OpenCL does little to address the major concurrency hurdles that developers have been facing for decades now.
Most desktop software applications are inherently synchronous, and this is the biggest problem. There are exceptions, of course: graphics, encoding, algorithm modeling, data manipulation... But by and large, desktop applications are synchronous. They have to be in order to respond properly to unpredictable user input and other messages from external sources. You click a button, and then the software does something.
Typically the something that it does is very brief, or it relies on calling something else and handling the response, which requires synchronicity. Then there are moments when desktop applications truly can benefit from concurrency, such as when processing data. The question is: how do we make it easy enough for developers writing these applications in high-level languages (VC++, C#, VB.NET) to make these brief operations concurrent without introducing an implementation and code-maintenance nightmare?
Microsoft is working on some pretty cool stuff for .NET 4.0 that aims to address just that. This shift towards declarative programming with behind-the-scenes concurrency is what is needed to get concurrent threading into mainstream software development, and that is what OpenCL fails to address.
The thing is that Microsoft doesn't seem to be on board with OpenCL yet. It would be awesome to write a few lines of code in C# that could churn through gigabytes of data on a GPU using a hundred threads. We shall see what happens. If OpenCL becomes widely adopted, and it looks like it will be, then Microsoft may be forced to support it.
In the short term, we can look forward to traditionally threaded software applications improving their performance as they adopt OpenCL. Graphical editing and rendering, data crunching, modeling, encoding... these will all benefit from GPUs being opened up to desktop software through a unified specification.
That is why Apple came up with something called Grand Central Dispatch, and a language feature called Blocks for C and Objective-C.
As far as Microsoft is concerned, they have DirectX 11, which will probably offer the same kind of thing; that is the reason they won't be adopting OpenCL. In a way, OpenCL is the missing feature of OpenGL.
Yeah, without OpenCL even a quad core seems somewhat pointless as far as heavy apps are concerned.