GPU2 folding?
Any of you guys tried out the new GPU2 folding app with a GeForce 8800 or better video card yet? The guys over on the OCForums team have been seeing some really good ppd numbers out of them. I've seen numbers posted in excess of 5,000 ppd with 8800GT cards so far.
If you are running an AMD system and have a GeForce 8800 or better, that would probably be the way to go with it as the vid card will absolutely smoke the ppd of the SMP client on an AMD rig. The only drawback with it is that the new GPU2 client only runs in Windows.
EDIT: Here's a link to their benches thread on the Nvidia GPU2 folding for you to check out.
Comments
I'm not really sure on concrete numbers since I don't fold anymore (it got too labor intensive, with my job taking me away for two weeks at a time, to keep the SMP clients running). But you should be able to hunt up some concrete numbers on simultaneous SMP/GPU2 folding in their folding forum.
EDIT: For now, it looks like the Nvidia cards rule in GPU2; their drivers are better optimized than the ATI drivers right now. But with ATI coming out with the 4850 series forcing Nvidia to drop the price of the 9800GTX cards down to the $200 range, it's looking pretty good for you folding guys.
Yeah, and this would be the ONLY reason I would run dual GPUs in Crossfire (sorry, I have ATI) or in SLI. GPU folding has had good numbers before, just not to this extent. I've avoided it because of power, cooling, and inconsistent driver support issues.
I would presume the ATI cards would hold their own here as well; they've been able to fold for a while longer than the nVidia ones.
It's speculation on my part whether ATI needs a similarly streamlined programming standard for its GPUs to match nVidia's production, and that kind of misses the point here anyway. But I do find it inconsistent that equivalently performing GPUs should show such a discrepancy in folding point output; there has to be a reason.
Still, the die is cast with me: I have ATIs. Sorry for the distraction. It sounds like there is more incentive to buy nVidia for folding purposes now. I can only hope at this point that ATI catches up.
There are SMP clients on both machines, and they have only slowed down a little due to the GPU client (less than 5%).
I was also running the GPU client on an 8500GT card, and was getting about 700 ppd from that card until I replaced it with the 9800.
EDIT: Got the .inf installed, and GPU2 is running. Looks like ~1 minute 15 seconds per frame on Project 5205. I'll let you know the ppd once FAHMon has had enough time to build an accurate average. Right now I've still got 2 SMP clients running on this rig, and it doesn't look like they are slowing down much (they may be slowing the GPU2 client down, however).
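For anyone wondering how FAHMon turns frame times into ppd, the math is simple: GPU2 WUs report 100 frames, so frame time gives you WUs per day, and that times the project's point value gives ppd. Here's a rough Python sketch; the 480 points per WU for Project 5205 is just my assumption, so plug in the real value from the project description:

SECONDS_PER_DAY = 86400
FRAMES_PER_WU = 100  # GPU2 work units report progress in 100 frames

def estimate_ppd(seconds_per_frame, points_per_wu):
    # ppd = (WUs finished per day) * (points per WU)
    seconds_per_wu = seconds_per_frame * FRAMES_PER_WU
    return SECONDS_PER_DAY / seconds_per_wu * points_per_wu

print(estimate_ppd(75, 480))  # 75 s/frame -> roughly 5500 ppd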
Crazy.
To keep this from happening, set the priority in the GPU2 client configuration to "slightly higher." Keep the SMP client at the default.
Yeah I did that the first time I started the GPU2 client.
I did some thinking and reading, and I have now uninstalled Affinity Changer and moved to a single SMP client running alongside the GPU2 client. The SMP clients were definitely suffering. The GPU2 client (FahCore_11.exe) gets all of one CPU core, so the 3 remaining CPU cores were being fought over by 8 folding cores (FahCore_a1.exe), resulting in one SMP client running on 2 CPU cores and the other running on a single CPU core. Bad.
This setup of a single SMP client + GPU2 client looks to be the way to go, at least until I move to an octo-core CPU.
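For anyone who'd rather pin things by hand than trust Affinity Changer, here's a rough psutil sketch of the arrangement above (one core reserved for the GPU2 core, the rest for the SMP client). The process names match what shows in Task Manager, but the core numbers are just my assumption for a quad-core box; run it from an elevated prompt:

import psutil

for proc in psutil.process_iter(['name']):
    name = (proc.info['name'] or '').lower()
    try:
        if name == 'fahcore_11.exe':      # the GPU2 core
            proc.cpu_affinity([0])        # give it core 0 to itself
            # "slightly higher" priority, like the GPU2 client config option
            proc.nice(psutil.ABOVE_NORMAL_PRIORITY_CLASS)
        elif name == 'fahcore_a1.exe':    # the SMP cores
            proc.cpu_affinity([1, 2, 3])  # keep them off core 0
    except psutil.AccessDenied:
        pass  # needs elevation for processes you don't own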
That's how I have it set up, and the SMP client hardly loses anything.
I'm getting it on both systems that have nVidia cards; one is a BFG 8800GTX on Windows XP Pro and the other is a G92 8800GTS on Vista Ultimate 64-bit.
Any ideas? Google didn't come back with anything recent...
Are you using the modded .inf file? Look here
To get it working under x64, start the first half of the install but cancel when it is done extracting to c:\NVIDIA\blah, closing the installer before it moves on to the next step. Then copy the modded .inf file into the directory it just extracted to, replacing the existing .inf file. Run the setup.exe within that folder, this time going through the entire install. Reboot and start GPU2.
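The copy-and-replace step is easy to fat-finger, so here's a small Python sketch of just that part. Both paths and the .inf filename are assumptions on my end; use whatever directory the installer actually extracted to and whatever the modded file is really called:

import shutil
from pathlib import Path

extracted = Path(r"C:\NVIDIA\WinVista64\177.35")       # assumed extraction dir
modded_inf = Path(r"C:\Downloads\nv_disp_modded.inf")  # assumed download location

# overwrite the stock .inf, then run setup.exe from the extracted folder
target = extracted / "nv_disp.inf"  # assumed stock .inf name
shutil.copyfile(modded_inf, target)
print("Replaced", target, "- now run", extracted / "setup.exe")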
- 8800GTX is a go
- 9800 series are a go
- maximum points configuration for a quad-core Intel processor is 1 x GPU client folding simultaneously with 1 x CPU Win SMP client, with Affinity Changer not installed
Correct?
Also, what is the power draw for the 9800 at stock core frequency with F@H engaged? The reason I ask is that it's simply a matter of household economics. I've already got 5 overclocked Q6600s, each churning at 100% utilization. I consider my power bill increase for Folding@Home to be part of my charitable giving (but not for tax purposes). I don't mind at all, but I have to draw the line somewhere with the power bill. Maybe it would be worth my while to part out one of my rigs and use the proceeds for a couple of high-end video cards for two of the remaining computers?
The client also works on other nVidia cards. Basically, anything from the 8-series up should work.
You can get a lot more info on the folding forum site, right here: http://foldingforum.org/viewforum.php?f=43
If you need links to the nVidia drivers, including the mod for 177.35, I can post them.
Power consumption for a 9800GTX at full load is about 235 watts. For that it produces almost 5000 ppd. I think it would make economic sense to eliminate a machine running the SMP client and put the cards in the other machines. You have to have good power supplies in those machines, at least 500 watts, with a PCI-E connector.
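To put actual numbers on the household-economics question: at 235 watts around the clock, one card works out to about 170 kWh a month. A quick sketch, with the electricity rate as an assumption (check your own bill):

card_watts = 235          # 9800GTX at full folding load, from above
rate_per_kwh = 0.10       # assumed $/kWh; your utility will differ
hours_per_month = 24 * 30

kwh = card_watts / 1000 * hours_per_month
print("%.0f kWh/month, $%.2f at $0.10/kWh" % (kwh, kwh * rate_per_kwh))
# ~169 kWh/month, about $17 -- for roughly 150,000 points a month at 5000 ppd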
I just bought my 9800GTX card at Fry's for $227 with a $50 rebate.
Since I started with the GPU clients and two cards, my weekly average has risen from about 22000 points to almost 40000. Today I had over 10000 points from my machines, even though the 8800GT card is not running all the time, because the machine is being used a lot for gaming. Before I was averaging a little over 3000 a day with four machines running SMP.
Really, I am seriously thinking about retiring a system to pay for F@H GPUs. I went to the OCForums link Mudd posted. We're talking serious production from the video cards!
I was wondering the same thing and found this post. Looks like it doesn't make a difference.
I use the 9800GTX, but it was the same price as an 8800GT at the time I bought it because it was on sale. It gets about 10% better ppd than my 8800GT. In practice the gap is bigger, because I've had some stability problems with the 8800GT + Vista (evil Vista) + nVidia drivers + GPU2 combo. I'm not overclocking any of them. I haven't had problems with the 9800, but maybe that's because I'm running it on an XP box.
If you were to overclock the shaders on the 8800GT to match those of the 9800GTX, you would close that 10% gap almost entirely; the remainder could be attributed to Vista being crappy. What is interesting is that the extra 16 SPs that the 9800GTX and 8800GTS have over the 8800GT don't look to be making a single shred of difference, at least not yet.
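The shader-clock explanation checks out on paper, if GPU2 ppd really does track shader clock and the extra SPs sit idle. The stock clocks below are from memory, so verify them against your own cards:

gt_shader_mhz = 1500    # 8800GT stock shader clock (from memory)
gtx_shader_mhz = 1688   # 9800GTX stock shader clock (from memory)

gap = gtx_shader_mhz / gt_shader_mhz - 1
print("shader clock gap: %.1f%%" % (gap * 100))
# ~12.5%, in the same ballpark as the observed ~10% ppd difference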