ATI Stream Computing And Folding@Home

Winga Mr South Africa Icrontian
edited October 2006 in Science & Tech
ATI recently showed off several examples of how Stream Computing would fit into real-world applications.

Stream Computing uses the GPU as a compute engine, rather than limiting it to graphics. If you look at the architecture of a GPU, it comprises a bunch of shaders, each of which can crunch a lot of heavy sums. The problem facing developers is how to split code between the CPU and the GPU. Get it wrong, and you end up with a lot of data shuffling back and forth with little actual work getting done. Do it right, and data gets streamed in, worked on, and streamed out with extraordinary levels of throughput.
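To make the split concrete, here is a minimal sketch in plain C (hypothetical names, not ATI's actual stream API) of the kind of map-style workload that streams well: every output element depends only on its own input element, so the per-element kernel is exactly the work the GPU's shaders can run in parallel while the CPU just keeps the card fed.

    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative per-element "kernel": heavy math that depends only on its
     * own input element. On a GPU, every shader unit could run this on a
     * different element at the same time. Names here are hypothetical, not
     * part of ATI's stream API. */
    static float kernel(float x)
    {
        return x * x + 1.0f;
    }

    /* CPU reference of the streaming pattern: data streamed in, the kernel
     * applied, results streamed out. A GPU port would upload in[], run the
     * kernel across all shaders at once, and read out[] back; the CPU's only
     * job is keeping the card fed. */
    static void stream_map(const float *in, float *out, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            out[i] = kernel(in[i]);
    }

    int main(void)
    {
        float in[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
        float out[4];

        stream_map(in, out, 4);
        for (size_t i = 0; i < 4; ++i)
            printf("%f\n", out[i]);
        return 0;
    }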

There are many target markets for this type of functionality. None are what you would call light-duty tasks, and most are very time dependent. Stream computing can boost applications by a factor of 10 to 40 if the application supports it. If the work does not map well, or is poorly coded, you can actually lose performance.
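For contrast, here is an equally hypothetical C sketch of code that maps poorly: a sequential recurrence in which every step needs the previous step's result, so a naive GPU port would serialize and spend its time shuttling intermediate values between CPU and GPU instead of gaining throughput.

    #include <stddef.h>
    #include <stdio.h>

    /* Counter-example (illustrative): each iteration depends on the previous
     * one, so the work cannot simply be spread across shader units. */
    static float recurrence(const float *in, size_t n, float a)
    {
        float y = 0.0f;
        for (size_t i = 0; i < n; ++i)
            y = a * y + in[i];      /* step i needs the result of step i-1 */
        return y;
    }

    int main(void)
    {
        float in[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
        printf("%f\n", recurrence(in, 4, 0.5f));
        return 0;
    }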

ATI showed off four companies that either use the technology right now or will very soon. Among the people interviewed was Vijay Pande of Stanford University, one of the people behind Folding@Home.
When running Folding@Home on a GPU, specifically an X1900-class card, they are seeing a 20-40x speedup, depending on how fast the CPU can feed the card. That would mean one GPU can do what most of a rack of servers can: forty times a single CPU is roughly the combined throughput of twenty dual-processor machines.
While no products are available yet, the general feeling is that Stream Computing techniques and technology are here to stay. Early examples promise speedups of tens of times over the fastest CPUs out there, at a fraction of the cost. If your code fits, Stream may be a very good thing.
Source: The Inquirer

Comments

  • RWB Icrontian
    edited October 2006
    nifty... ;)
  • Garg Purveyor of Lincoln Nightmares Icrontian
    edited October 2006
    Drool...
  • primesuspect Beepin n' Boopin Detroit, MI Icrontian
    edited October 2006
    The only statement I disagree with is the "fraction of the cost" bit - a GPU that is worth doing this stuff on is almost always more expensive than a powerful CPU.
  • Garg Purveyor of Lincoln Nightmares Icrontian
    edited October 2006
    The only statement I disagree with is the "fraction of the cost" bit - a GPU that is worth doing this stuff on is almost always more expensive than a powerful CPU.
    True, but if they can really get that GPU to perform at around 10x the performance of a CPU (and up to 40x), then the savings vs. that many CPUs, motherboards, RAM modules, etc. rack up pretty quickly.
  • airbornflght Houston, TX Icrontian
    edited October 2006
    I want
  • Sledgehammer70 California Icrontian
    edited October 2006
    The main problem is people will go buy an X1900 and throw it in an AMD 3200+ system... and a 3200 cannot feed an X1900 enough data; even the Core 2 Duo Extreme chip can't feed today's GPUs at 100% efficiency. To get optimal performance you need to put a nice X1900 in a dual dual-core system or a quad system to see the huge leaps of 40 times faster they are bragging about.

    You would get the same problem with Nvidia if they had a client that would run on an Nvidia card.
  • RWB Icrontian
    edited October 2006
    Well, after this I am sure Nvidia's upcoming cards may have some of these extras so that folders can get extra boosts.
  • lemonlime Canada Member
    edited October 2006
    I hear ya, Sledge.. Someone on Anandtech forums was mentioning that the GPU client seems to use "a lot of CPU time". You pretty much have to run only the GPU client (or if you have a dual core, keep one core free for the GPU client to use).

    Tried to run the client on my X850 just for kicks.. no go :D This is definitely encouraging me to upgrade! Hey, it's for a good cause.. literally.
  • jhenry California's Wine Country
    edited October 2006
    Yay, more great technology I'll never be able to afford...w00t

    Sounds great, would love to see the practical uses for this!
  • edited October 2006
    The early results I've seen on the OCF team say to hold onto your money for now. One guy is reporting that it looks like an X1900XT will return about 450 points/day, and like lemonlime said, it pretty much kills the processing of one core on a dual-core machine. 450 ppd ain't nothing to sneeze at, but you can do better than that with a $150-$200 dual-core upgrade in your machine, replacing an older single-core proc.

    I still don't regret buying my 7900GTX, even though it can't run the present GPU client. I'd much rather have my quieter, less power-hungry vid card than an X1900XTX.

    EDIT: Also, it seems to be pretty buggy still and is having problems with the X1950 series. It also only works with certain versions of the Catalyst drivers, so you can't just automatically upgrade the vid card drivers either.

    On another note, I've read that they are talking about later adding support for the cheaper vid cards such as the X1650 series. I haven't heard about performance on them, though.
  • airbornflght Houston, TX Icrontian
    edited October 2006
    I've been doing some reading, and it seems to be 2-3 years off yet until you see mainstream use of it, but it looks REALLY promising.