I think regardless, both ATI & Nvidia cards are put in the same spot in the same benchmark. So in theory the results are the same regardless of whether it is a game or a benchmark.
Once a third party, preferably our own guys, puts the GTX 480 to work against an HD 5870 and throws some real-life applications at it (Crysis, etc.), then we'll see where the hype really lies.
And, ultimately, my wallet is probably going to be the one doing the voting this time around. I'll pay a premium for the better card if it's worth it, but it's going to have to be damn impressive for me to do that.
"I think regardless, both ATI & Nvidia cards are put in the same spot in the same benchmark. So in theory the results are the same regardless of whether it is a game or a benchmark."
Not at all. The 400-series cards have an extensible tessellation size, meaning that the tessellation performance of an NVIDIA card can scale upward if the GPU demand for other routines scales downwards. ATI cards have a fixed-size tessellator.
This means that benchmarks like Heaven, which considerably weight tessellation in the score, will return scores that wildly favor NVIDIA cards even if the actual performance gap for gamers is small.
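To put some toy numbers on that (these are invented for illustration, not measurements from either card), here's a quick sketch of how a score that leans on a tessellation sub-test can show a much bigger gap than a game would:

```python
# Toy model only: invented sub-scores and weights, not real data from any card
# or benchmark. The point is how weighting changes the apparent gap.

def weighted_score(subscores, weights):
    return sum(s * w for s, w in zip(subscores, weights))

# Hypothetical sub-scores (higher is better): [shading, texturing, tessellation]
scalable_tess = [100, 100, 300]   # tessellation can borrow idle shader resources
fixed_tess    = [100, 100, 100]   # fixed-size tessellator

bench_weights = [0.25, 0.25, 0.50]   # a synthetic test that leans on tessellation
game_weights  = [0.45, 0.45, 0.10]   # a frame that spends little time tessellating

print(weighted_score(scalable_tess, bench_weights) / weighted_score(fixed_tess, bench_weights))  # 2.0x
print(weighted_score(scalable_tess, game_weights) / weighted_score(fixed_tess, game_weights))    # ~1.2x
```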
As Thrax knows from prior conversations, I care absolutely zero for synthetic benchmarks, whether for graphics, CPU, or memory. Unless I can relate them to real application performance for the things I want to run, they have absolutely zero value to me.
Icrontic needs to get a hold of one of these cards so we can see the real performance in Crysis, Dirt 2 and Shattered Horizon. Also I want to see how the GPU encodes video vs. AVIVO. Once I see those benchmarks I'll have some practical data to make comparisons.
I see... But in fairness, this benchmark is the only one that truly shows the use of DX11. From the sounds of it, though, Futuremark is about to change that.
But again, I do see why now, and overall that could be a benefit for Nvidia's tech when playing these new games that are not out yet.
"Not at all. The 400-series cards have an extensible tessellation size, meaning that the tessellation performance of an NVIDIA card can scale upward if the GPU demand for other routines scales downwards. ATI cards have a fixed-size tessellator. This means that benchmarks like Heaven, which considerably weight tessellation in the score, will return scores that wildly favor NVIDIA cards even if the actual performance gap for gamers is small."
I care more about tessellation performance than about 100+ fps scores in TF2/L4D2. I don't care which card beats which in fps in current games, since my GTX 260 is sufficient for anything current at 1080p. Tessellation is the feature I want; it's the critical feature for 3D realism, and I can't wait to see it.
I just want to know the power efficiency of Fermi. Many are still talking about the electricity the GPU throws away. That's a secondary cost, and never a good thing. I'd love for the rumors to be squashed.
"The “issue†with NVIDIA’s design, however, is that those SMs are likely to be busy with other tasks when it comes time to render a game"
This is a really good point. Consider PhysX calculations especially. There are also things like AI, and when you're online, even more crap is being thrown around. Solid tessellation performance will be key this generation, but it is hardly the only battleground.
And, of course, no two games are optimized the same. Take Oblivion, for example. For whatever reason, the game seems to run GPU-heavy: my 8800 GTX would always crank its fan to maximum while playing, and the thing barfed out more heat with that game than it did with anything else. Other games with more intensive graphics, like Crysis, did no such thing.
I think you overestimate the impact/limits of tessellation. Yes, it increases object detail, but heavily tessellated scenes also place increased demands on other parts of the GPU pipeline. More polygons means more vertices to shade, bigger textures to map, more lighting to calculate, and so on.
Hardware tessellation is all well and good, but the performance of traditional raster elements practically limits the strength of tessellation. That's why designs like Fermi, which dominate tessellation, may not do much for the 3D realism you want outside of canned demos and carefully-concocted scenes.
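Back-of-envelope version of that argument (the numbers are invented, and it treats the stages as if they ran back to back, which is itself a simplification):

```python
# Invented numbers: if tessellation is only a small slice of the frame,
# speeding it up barely moves the frame rate.

def fps(raster_ms, tess_ms, tess_speedup):
    frame_ms = raster_ms + tess_ms / tess_speedup   # serial-stage simplification
    return 1000.0 / frame_ms

print(fps(raster_ms=14.0, tess_ms=2.0, tess_speedup=1))   # ~62 fps baseline
print(fps(raster_ms=14.0, tess_ms=2.0, tess_speedup=5))   # ~69 fps with a 5x faster tessellator
```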
Yes, I agree that my expectations will probably fall short in this generation. These are the first-generation GPUs doing dynamic solid modeling. I call it solid modeling because that is just what it is. I used Patran a lot in the past to create adaptive meshes combined with ABAQUS (now Simulia), calling the remeshing and FEM libraries back and forth and waiting and waiting. Adaptive triangulation is a very important part of 3D realism, not only for games but also for technical computing.
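For anyone who hasn't worked with adaptive meshing, here's a minimal sketch of the idea, purely illustrative: keep splitting a triangle while some coarseness metric says it isn't fine enough. The metric and threshold here are placeholders, not how any GPU tessellator or FEM remesher actually decides.

```python
# Purely illustrative: split a triangle at edge midpoints until no edge is
# longer than a threshold.

def midpoint(p, q):
    return tuple((a + b) / 2.0 for a, b in zip(p, q))

def edge_len(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def tessellate(tri, max_edge):
    """Recursively split one triangle into four until it is 'fine enough'."""
    a, b, c = tri
    if max(edge_len(a, b), edge_len(b, c), edge_len(c, a)) <= max_edge:
        return [tri]
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for child in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
        out.extend(tessellate(child, max_edge))
    return out

coarse = ((0.0, 0.0), (8.0, 0.0), (0.0, 8.0))
print(len(tessellate(coarse, max_edge=2.0)))  # one coarse triangle -> 64 small ones
```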
I'm not going to argue against tessellation. I have seen what it can do for the water in Dirt 2, and I am completely impressed, and as more games use it I can see where the Fermi architecture may benefit. But how far off is that?
My point is this: if you're looking at a piece of hardware, you have to balance it against what you can do today, right now. Listen, I'm the biggest AMD fanboy I know, and even I will concede that video encode on Fermi might be totally badass, and that's a feature I want and use from time to time. Still, I need to see a real-world, independent benchmark against AVIVO from a source that I trust before I can buy into that hype. And mind you, not all hardware press is truly independent; there are sites that lean toward one camp or the other, and I have evidence that a few I know of are a little tainted. Let's just say direct advertising dollars speak. I know that's not the case with Icrontic, so I genuinely hope Nvidia will send Icrontic a card to run on a test bench.
So in my mind there are a few valid benchmarks...
Convert the same video to a couple formats, one shorter video, say five minutes, and one longer, perhaps sixty minutes, and see where they end up.
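Something as simple as this would do it. This is just a rough sketch assuming an ffmpeg-style command-line encoder, so the clip names, target codecs, and commands are placeholders for illustration, not how AVIVO or Nvidia's encoder is actually driven:

```python
# Rough timing-harness sketch; file names and codecs are placeholder assumptions.
import subprocess
import time

CLIPS = ["clip_5min.mpg", "clip_60min.mpg"]        # one short source, one long
TARGETS = {
    "h264":  ["-c:v", "libx264"],
    "mpeg2": ["-c:v", "mpeg2video"],
}

def time_encode(source, codec_args):
    """Run one encode and return wall-clock seconds."""
    start = time.time()
    subprocess.run(["ffmpeg", "-y", "-i", source, *codec_args, "out.tmp"],
                   check=True, capture_output=True)
    return time.time() - start

for clip in CLIPS:
    for name, args in TARGETS.items():
        print(f"{clip} -> {name}: {time_encode(clip, args):.1f}s")
```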
As far as gaming goes, listen, I want that 60 fps, full-tilt Crysis experience at 1080P. My 5870 gets me so frigging close, but I want to see the single-chip card that finally does it. The rumor is that Fermi won't, but once again, not until Icrontic has a card and tells me one way or the other...
In my mind there are only four real hardcore gaming benchmarks that count when you get into the premium cards; I'm talking about anyone looking to spend more than $300 on a GPU.
Crysis DX 9 and DX 10 at 1080P and beyond
Shattered Horizon DX 10 at 1080P and beyond
Dirt 2 DX 11 at 1080P and beyond
Stalker: Call of Pripyat DX 11 at 1080P and beyond
Some reasonable level of AA and AF enabled in every test.
That's it. You could bench a million other games, resolutions, or settings to get a feel for what they do, but everything else is going to be smooth as butter whether you pick AMD or Nvidia in that price range. Hell, I would even consider a fair nod to Batman: Arkham Asylum at 1080P and up for its use of in-game PhysX, if someone wants to make the argument that it does a great deal to enhance that experience, as a fifth benchmark just to be "fair" to Nvidia...
At the end of the day, that's all I have to go on; those are the best examples of truly demanding real-world applications that can be measured fairly across cards. That would be my test bench for those two. I would not even bother with 3DMark or Heaven; I just don't care. They are for e-peen only, and I don't spend that kind of cash for e-peen. I spend it to get a better real-world experience for the games I play and for the GPU-accelerated applications I use (video encode).
In fact, if Icrontic gets a tester, I'm going on record saying that's the testing methodology I would love to see vs. anything that may be suggested in the infamous Nvidia reviewer's guide.
Computing, especially with Fermi, is going to be 99% dominated by Nvidia. I just came back from a talk; a CFD guy at my place showed a comparison of many platforms, and an Nvidia GTX 260 beats ATI gear (with OpenCL) by 3x in computing!
I agree that GPU compute is the future. It's part of the reason AMD paid a pretty penny for ATI. Today, though, beyond video encode, how do you measure GPU compute performance in some real-world way that matters to the typical home user? Folding, perhaps, but then what is it: a number that says I did X amount to contribute to the world's least efficient supercomputer per watt? My point is, what today can I measure that shows the real-world value to the home graphics consumer? Video encode speed will matter to a niche, and it matters to me, so that's one thing. In-game physics is part of it, but we all know about that pissing match.
What is the best real way to measure what GPU compute offers consumers today?
Alas, you know that ATI (which at this time produces better game scores) is now in the red, while Nvidia is making a profit thanks to Quadro and similar high-end engineering and/or medical imaging and computing cards.
It's not so clear that non-game computing is unimportant in the whole competition.
Nvidia may err a bit in their estimates of how big the supercomputing market is, but they will own it, and they will use it for advertising other products by extension. The first GPU-based clusters (supercomputers) are just being built; I think they'll kill CPU-based supercomputers in this decade (it'll take that long for programmers to switch to GPU computing, but there will be no going back).
"Computing, especially with Fermi, is going to be 99% dominated by Nvidia"
Hmm, Nvidia would have to produce more than 5,000 Fermi-based cards to accomplish that, don't you think?
Rumor has it the Fermi pudding is extremely meager in quantity, horribly difficult to manufacture, takes significant energy to digest, and won't be quite as sweet as the competition's pudding that is available on store shelves right now.
But we'll see. As they say, the proof of the pudding is in the eating.
Update: Fermion, my intent was not really to be rude. I just don't have much confidence that Nvidia will accomplish much of anything with Fermi at all, except perhaps (doubtful) toning down Great Leader's arrogance and insularity. Granted, I do think that Fermi may prove to be a practical study for Nvidia that will help them produce excellent GPGPUs in the future. Fermi itself, though, won't have much of a practical effect on anything for Nvidia if they can't produce it in volume. Things are not looking very good for them in that respect right now.
Mmmm, they did the benchmark with AA disabled. Even then, without anything else to do, Nvidia only maxes out at 40 fps in high-detail areas like the dragon. I don't think either card, the 5870 or Fermi, is ready for this level of tessellation. So both lost this round. Now it is down to price and power usage.
If you look at the video of the Heaven benchmark, it's version 1.1, which runs 30% faster than the 1.0 version. I'm not saying the Radeon card was tested on 1.0; I'm just saying that we have another blurry benchmark of Fermi's performance. It might as well be the same performance. The price of Nvidia's cards will tell which is the better buy. But I, for instance, will buy Nvidia's Fermi card just because 3D Vision only works with Nvidia, that's all. :)
"I think regardless both ATI & Nvidia cards are put in the same spot in the same benchmark. So in theory the results are the same regardless if it is a game or benchmark."
Umm, this is a benchmark, NOT a game. There is a big difference. Or I guess you could say a tessellation benchmark, which doesn't reflect real life.
"Hardware tessellation is all well and good, but the performance of traditional raster elements practically limits the strength of tessellation. That's why designs like Fermi, which dominate tessellation, may not do much for the 3D realism you want outside of canned demos and carefully-concocted scenes."
The same is true for ATI cards.
Cockatu, I choose you!!
blockatiel is overrated... I prefer obstructerigar