ATI Stream Computing And Folding@Home
Winga
ATI recently showed off several examples of how Stream Computing would fit in real world applications.
Stream Computing uses the GPU as a compute engine, rather than limiting it to graphics. If you look at the architecture of a GPU, it comprises a bunch of shaders, each of which can crunch a lot of heavy sums. The problem facing developers is how to split code between the CPU and the GPU. Get it wrong, and you end up with a lot of data shuffling back and forth with little actual work getting done. Get it right, and data gets streamed in, worked on, and streamed out with extraordinary levels of throughput.
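To make the pattern above concrete, here is a minimal, purely illustrative sketch (not ATI's actual API) of the stream model: data is fed in batches, a small "kernel" runs independently on every element, much as a shader would, and the results are streamed back out.

```python
def kernel(x):
    # Stand-in for the per-element work a single shader would perform.
    return x * x + 1

def stream_compute(data, batch_size=4):
    """Process data in batches, mimicking the CPU-to-GPU streaming flow."""
    results = []
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]            # stream a batch in
        results.extend(kernel(x) for x in batch)  # kernel runs per element
    return results                                # stream results out

print(stream_compute([1, 2, 3, 4, 5]))
```

The key property is that each element is processed independently, which is exactly what lets a GPU's many shaders work in parallel; code that forces elements to depend on one another maps poorly and ends up bouncing data between CPU and GPU instead.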
There are many target markets for this type of functionality. None are what you would call light-duty tasks, and most are very time dependent. Stream computing can speed up applications by a factor of 10 to 40 if the application supports it. If the workload does not map well, or is poorly coded, you can actually lose performance.
ATI showed off four companies that either use the technology right now or will very soon. Among those interviewed was Vijay Pande of Stanford University, one of the people behind Folding@Home.
When running Folding@Home on a GPU, specifically an X1900-class card, they are seeing a 20-40 times speedup, depending on how fast the CPU can feed the card. That means a single GPU can do the work of most of a rack of servers.
While no products are yet available, the general feeling is that Stream Computing techniques and technology are here to stay. Early examples promise speedups of tens of times over the fastest CPU out there, at a fraction of the cost. If your code fits, Stream may be a very good thing.

Source: The Inquirer
Comments
You would get the same problem with Nvidia if there were a client that ran on an Nvidia card.
Tried to run the client on my X850 just for kicks... no go. This is definitely encouraging me to upgrade! Hey, it's for a good cause... literally.
Sounds great, would love to see the practical uses for this!
I still don't regret buying my 7900GTX, even though it can't run the present GPU client. I'd much rather have my quieter, less power-hungry vid card than an X1900XTX.
EDIT: Also, it seems to be pretty buggy still and is having problems with the 1950 series. It also only works with certain versions of the Catalyst drivers, so you can't just automatically upgrade the vid card drivers either.
On another note, I've read that they're talking about adding support for cheaper vid cards such as the X1650 series later on. I haven't heard anything about performance on those, though.