AMD comments on NVIDIA dropping PhysX support when ATI hardware is present

Comments

  • Garg Purveyor of Lincoln Nightmares Icrontian
    edited September 2009
    If there were bugs when using PhysX with a combined NVIDIA/ATI setup, I can understand them not wanting to spend time fixing them. But that's only if they existed. Otherwise, it's shens.
  • Cliff_Forster Icrontian
    edited October 2009
    On one hand, I can't blame Nvidia for wanting to hold on to one of its differentiating features; on the other, DirectCompute makes sense for gamers long term, as simple as that.
  • edited October 2009
    Or we can just forget "legacy" support for the proprietary PhysX and go with OpenCL (and DirectCompute-11) which is (reportedly) supported by both GPU makers.

    AMD Announces Open Physics Initiative - http://tinyurl.com/yczjtkt

    OpenCL GPU Computing Support on NVIDIA - http://tinyurl.com/ybtdrd2

    Nvidia said it will support DirectCompute-11 later this year.
  • edited October 2009
    ATI and Nvidia both support OpenCL and DirectCompute (currently ATI only; Nvidia support is en route)
  • Thrax 🐌 Austin, TX Icrontian
    edited October 2009
    Hi, Michael. Sorry that your comment got eaten... Our CMS flagged the links as spam and stuck it in an approval queue.

    We at Icrontic certainly agree that moving away from PhysX (and ATI's Havok) is the way to go. But for now, PhysX is the leading and most widely used physics engine gamers can have. NVIDIA gave that ability to ATI gamers for quite a long time; they even endorsed it. The decision to take it away goes against their policy of supporting gamers. There's just no way they can spin it as being in a gamer's best interests.

    We have sent NVIDIA questions to see what they will do in response to AMD's Bullet Physics.
  • foolkiller Ontario
    edited October 2009
    Well, this has made the decision for me. I have two ATI 4870s and was thinking of adding an Nvidia card for PhysX/CUDA. Now I'll just stick with ATI; they can have all my money.
  • edited October 2009
    I don't believe for a second that either side considers the "benefit" of consumers/gamers before their profit. What NVIDIA is doing by dropping PhysX support for ATI cards is completely normal for a corporation, and AMD is not a non-profit consumer organization either, although AMD has lately been playing the role of consumer advocate against Intel and Nvidia. Maybe they reached nirvana after years of no-profit suffering. PhysX presumably cost Nvidia a large amount, and it is (potentially) a great feature. If ATI wants it, they should either pay for a license or make an alternative technology available. I have not forgotten the days when these two companies were fixing prices. If there is any drama, both ATI and Nvidia deserve the slap.
    http://www.shacknews.com/onearticle.x/54969
  • edited October 2009
    Hey all, just wanted to clear up a few things and add a few of my own comments:

    @Michael Boardman:

    -Nvidia supports OpenCL and released both their driver and SDK some months ago. In fact, AMD's new physics engine du jour, Bullet Physics, was developed on Nvidia hardware using Nvidia's OpenCL SDK.

    -Nvidia also supports DirectCompute; in fact, the only app AnandTech could find to benchmark DirectCompute in their 5870 review was Nvidia's Ocean Demo. You'll notice it's fully functional on Nvidia's DX10 hardware as well.

    Source: http://www.fudzilla.com/content/view/15642/34/
    http://www.anandtech.com/video/showdoc.aspx?i=3643&p=8



    @Thrax:

    -Havok is a wholly owned subsidiary of Intel; AMD/ATI has no ownership interest or any influence over the IP at all. Intel simply used them as a pawn and strung them along with the hope of an OpenCL Havok client (which was demonstrated at GDC, but is still vaporware as of today).

    At the time Havok was being tossed about as an alternative to PhysX, AMD claimed they did not want to support PhysX because they were opposed to "closed and proprietary" standards, yet they threw their support behind Havok, which is, you guessed it, a closed and proprietary standard. The announcement that they're now backing Bullet Physics is clearly backpedaling on their part and a slap in the face to those who supported them.

    I think Icrontic and the various news outlets are asking the wrong questions. The real question should be, why doesn't AMD just support PhysX as it was originally offered? Why don't they just write a CUDA driver for their hardware? It seems this is just another case of AMD preferring to sit on their hands and reap the benefits of the hard work of others.

    PhysX is just middleware with a bunch of backend APIs that help it interface with various hardware. At some point I expect it to be ported to DirectCompute or OpenCL, but instead of waiting for those emerging technologies, Nvidia got it to work with CUDA.

    References and backstory:
    http://www.bit-tech.net/news/hardware/2008/12/11/amd-exec-says-physx-will-die/1
    http://www.tgdaily.com/content/view/38392/118/
    http://www.extremetech.com/article2/0,2845,2324555,00.asp
    http://www.bit-tech.net/custompc/news/602205/nvidia-offers-physx-support-to-amd--ati.html

    In any case, it sounds like you guys have an open dialogue with AMD. I just wish people would ask AMD the real questions instead of just taking their excuses at face value. We have one company producing real solutions and another fighting a war of words in the press without anything to show on their end. Who really cares about physics and PC gamers?

    I don't work for Nvidia, no direct ties to them, just another PC gamer for most of my life hoping hardware physics adoption happens sooner rather than later.
  • Thrax 🐌 Austin, TX Icrontian
    edited October 2009
    Hi, Chizow.

    Yes, your post was getting flagged as spam by our CMS. I deleted the duplicate and approved your original with the links.

    Thank you for commenting. :)

    We agree that the industry needs to start firing on all cylinders when it comes to physics, but I don't think ATI adopting PhysX is the right answer. I don't even think it's the right answer using the OpenCL port. That's a port which would still have licensing restrictions, because NVIDIA ultimately owns the IP.

    I think the right answer is AMD's Bullet Physics. It's open source, there are no licensing attachments, and it uses open, vendor-neutral languages.

    Of course, all of this assumes that I care about physics at all. I've still never played a physics-enabled title, and the whole thing has been around for how long now?
  • lordbean Ontario, Canada
    edited October 2009
    Thrax wrote:
    Of course, all of this assumes that I care about physics at all. I've still never played a physics-enabled title, and the whole thing has been around for how long now?

    ^ This.

    Of course, part of the problem may be that since the standard is proprietary to nvidia (and only really supported in an ATI configuration if you have an nvidia card in there somewhere), most developers have not wanted to seriously pursue physics because it means they are cutting themselves off from half of the gaming market by default. Once the open source standards get rolling, we may see more (and more interesting) physics-based titles.
  • edited October 2009
    Thrax wrote:
    Hi, Chizow.

    Yes, your post was getting flagged as spam by our CMS. I deleted the duplicate and approved your original with the links.

    Thank you for commenting. :)
    Thanks for the welcome. Found my way to the proper subforum so I can format my reply more elegantly. :smiles:
    Thrax wrote:
    We agree that the industry needs to start firing on all cylinders when it comes to physics, but I don't think ATI adopting PhysX is the right answer. I don't even think it's the right answer using the OpenCL port. That's a port which would still have licensing restrictions, because NVIDIA ultimately owns the IP.
    Well again, that raises the question: what is the right answer? The ever-changing, ever-evolving, mercurial answer that produces nothing? Or the tried-and-true one that has kept progressing and provided solutions?

    Every iteration of PhysX is technically a "port". It originated as a software physics API on x86 and has been ported to just about every platform with a different backend solver. It supports all three major consoles (360, PS3, Wii) and even the iPhone. It is truly ubiquitous, and the SDK is easily ported to whatever backend hardware is necessary.

    Nvidia did the same when it bought Ageia and ported PhysX to CUDA to run on their hardware. Also, while PhysX is proprietary, it is free to use and there is no licensing fee; there's only a small fee if you want the source code. However, that's not to say Nvidia wouldn't require AMD to adhere to a PhysX logo certification program if they claimed support for PhysX. Personally, I think this is where the hang-up is. AMD simply does not want to advance a competitor's IP and validate their technology by using it, and they certainly don't want to promote Nvidia's brand on their own product.
    Thrax wrote:
    I think the right answer is AMD's Bullet Physics. It's open source, there are no licensing attachments, and it uses open, vendor-neutral languages.
    It's fine that it's open source, but what does it matter if the tools aren't very good and no one uses them? One of the benefits of the established middleware SDKs (both PhysX and Havok) is that they already have significant exposure at various dev houses and are tried and proven with today's most popular game engines. With flexible SDK tools that solve for both software and hardware, it's much easier for these companies to integrate additional HW effects into their games if they already use the SDK for software effects, especially if they're working on cross-platform titles.

    Havok Titles
    PhysX Titles
    Bullet Titles - That's all I found, had to follow a wiki ref.....
    Thrax wrote:
    Of course, all of this assumes that I care about physics at all. I've still never played a physics-enabled title, and the whole thing has been around for how long now?
    A bit of background on PhysX: before Ageia renamed it, it was called Novodex, which was a pretty popular software physics SDK at the time. Not quite on the level of Havok's adoption rate, but a solid library with good industry backing. Then Ageia introduced their hardware PPU, a PCIe add-in card. The problem was, it was expensive and there was no software support for its additional HW capabilities. Estimates were that it sold around 100K units total; with such a small HW install base, it should come as no surprise that the software support was weak.

    About 20 months ago, Nvidia bought Ageia and stated they planned to leverage CUDA and their programmable architecture to accelerate PhysX on their GPUs. Didn't hear anything for about 6 months...and then suddenly, Nvidia announced PhysX support on all of their 8-series and higher GPUs with a new PhysX driver in July-Aug 2008. Overnight, they went from 100K units to around 70 million hardware units capable of accelerating HW PhysX effects. And the best part about it was that if you were into PC games, you probably already owned the necessary hardware, a fast graphics card.

    So the answer is: about 14 months, for PhysX in its current incarnation. We didn't see any titles at all for about 6 months, just some backports of the handful that were developed for the original Ageia PPU. Then they started trickling in: Mirror's Edge, Cryostasis... but the big one, the killer app for PhysX that we were waiting for, landed a few weeks ago: Batman: Arkham Asylum. It should be no surprise that the grumblings and anti-PhysX rhetoric started up again only recently; my guess is it has a lot to do with Batman's launch along with the 5870 debut (as more Nvidia cards may be taking a back seat).

    I doubt this one video will change your opinion of PhysX overnight, but I think it does a good job of showing why some feel so strongly about hardware physics and would like to see it adopted more quickly, regardless of middleware provider and politics.

    Batman Arkham Asylum Video Comparison

    In any case, if you get another chance to correspond with AMD, I'd urge you to ask them the simple question. Why not just write a CUDA driver for PhysX and run it on their hardware natively? No excuses about closed and proprietary, no need to worry about driver lock-outs, no need to shuffle and push proprietary or vaporware standards, just 100% of the discrete GPU industry supporting the only production SDK capable of accelerating HW physics effects. Their excuses and reasoning for not supporting PhysX have been less than genuine, imo, especially in light of their recent decision to back Bullet instead of Havok after all their press clippings on the matter months ago.
  • edited October 2009
    The only thing that matters to me in a standard is how "free" it is. OpenCL seems to be the highest qualifier, so I'll be buying an AMD card next.
  • lordbean Ontario, Canada
    edited October 2009
    Technically speaking, if the OpenCL standard is truly "free" and widely adopted, it won't matter whether you buy AMD or nvidia. They will both be able to run it.
  • UPSLynx :KAPPA: Redwood City, CA Icrontian
    edited October 2009
    chizow - one would suggest that one of the greater strengths behind Bullet is the fact that it will couple with DirectCompute and DirectX11. As the DirectCompute API builds in popularity, PhysX will most likely become irrelevant.

    For gamers ready to rock DirectX11 as soon as possible, this is greater news.

    And no matter how available PhysX may be, CUDA is still very much an NVIDIA language. Regardless of its ease of use, it's still simple PR for AMD to not want to use it. Bullet and OpenCL are both completely open, and in that respect, every gamer can use them and no one needs to bicker about PR.
  • edited October 2009
    UPSLynx wrote:
    chizow - one would suggest that one of the greater strengths behind Bullet is the fact that it will couple with DirectCompute and DirectX11. As the DirectCompute API builds in popularity, PhysX will most likely become irrelevant.

    For gamers ready to rock DirectX11 as soon as possible, this is greater news.

    And no matter how available PhysX may be, CUDA is still very much an NVIDIA language. Regardless of its ease of use, it's still simple PR for AMD to not want to use it. Bullet and OpenCL are both completely open, and in that respect, every gamer can use them and no one needs to bicker about PR.
    The problem here is that you seem to have fallen into the trap of confusing the issue with all of this PR bickering, like many other tech sites. AMD wants you to think PhysX is somehow tied to a closed and proprietary standard, when that's clearly not the case. As part of their reasoning, they seem to think that if the API becomes irrelevant, the middleware will as well. This is clearly false.

    I'll draw some clear parallels here to help illustrate the issue. DirectX and OpenGL are the current standard APIs for game development on the PC, but rarely are games developed with only the limited tools provided by these API SDKs. On top of each you have numerous game engines like UE3.0, Gamebryo, CryEngine, Anvil, and Rage; content creation and modeling software like 3DS Max, Maya, and Granny3D; sound engines built with the likes of Miles3D, OpenAL, and FMOD; and for physics you have Havok, PhysX, Velocity, etc. All of this middleware sits on top of the API and is portable as necessary.

    So how are games developed with these same middleware tools then used to create console games, given that those consoles use different APIs like XNA, libgcm, or PSGL? Simple: they're compiled for those different APIs. Clearly, changing the API output would not make the middleware irrelevant in any way, provided the middleware is flexible enough to target numerous APIs.

    So to bring this full circle: even if CUDA became irrelevant, that would have no direct impact on PhysX's future viability as middleware, given that it's still supported on all three major consoles, the iPhone, and on the PC with both x86 and GeForce hardware. The only "negative" outcome of CUDA becoming irrelevant is that Nvidia might be compelled to port PhysX to OpenCL or DirectCompute.

    AMD is simply trying to confuse the issue by linking PhysX to CUDA when in reality CUDA support was only necessary for lack of other viable options. DirectCompute and OpenCL change that situation but now the question is what benefit does it serve Nvidia to port PhysX given the very public attacks on their technology by AMD?

    Also, as an FYI, Bullet doesn't currently support DX11 or DirectCompute; it's currently being developed and supported on GeForce hardware with OpenCL only. Given that there's currently no hardware support at all under OpenCL in production titles, it just demonstrates how empty the promises of "open and free standards" are when no one actually makes use of them over supported and established standards.
  • edited October 2009
    lordbean wrote:
    Technically speaking, if the OpenCL standard is truly "free" and widely adopted, it won't matter whether you buy AMD or nvidia. They will both be able to run it.
    Mike Green wrote:
    The only thing that matters to me in a standard is how "free" it is. OpenCL seems to be the highest qualifier, so I'll be buying an AMD card next.

    PhysX is free to use, guys, and so is CUDA. It's not open in the sense that Nvidia still steers its development and future, but if you look at the history and development of OpenCL, you'll see Nvidia has a strong hand in shaping it. In fact, Nvidia's VP of Embedded Content chairs the standard, and OpenCL has been widely described as a less user-friendly, low-level version of CUDA's C debugger and compiler.

    I think it's important to distinguish between an API and the tools necessary to make use of that API. Simply creating a standard API means very little if no one is going to reinvent the wheel to make use of it, especially when superior tools are already in production.

    From a progression standpoint it looks like:

    Hardware > Driver (HAL) > API (HLPL) > Middleware (GUI-based tools)
  • Cliff_Forster Icrontian
    edited October 2009
    Hey Chizow,

    Your responses are obviously very well thought through. It would please me if you would consider registering at the forum and sticking around.

    I won't pretend to be an expert on the programming end, but what I will say as a gamer and consumer is that I would prefer open standards that allow me greater flexibility in the hardware and OS I choose to run while not limiting my experience from title to title. I realize it's a pipe dream, but I long for a world where I can buy a game and load it on whatever OS I choose while running it on any vendor's hardware with a reasonable spec for that title.
  • Ryder Kalamazoo, Mi Icrontian
    edited October 2009
    psst Cliff, he is registered.. that is why it doesn't say "guest" under his name :)
  • Thrax 🐌 Austin, TX Icrontian
    edited October 2009
    Chizow:

    Your CUDA language vs. PhysX API argument is well-understood here, but it's an argument that goes beyond the original thesis: NVIDIA retracted support for PhysX modes it previously supported, and gamers were the victims.

    This is what people are taking issue with. Green reversed endorsement on two PhysX+ATI scenarios, one explicit, one implicit, and gamers suffered for it.

    Now, could the PhysX API be ported to another language? Yes. Could it just as easily run on OpenCL or DirectCompute? Yes. Has there been any momentum for the company to do so? No. And understandably so, because they have a lock on the physics market with PhysX.

    Why endorse an open source language for your API when you can just say you support open languages, and continue on with CUDA?

    That's what is pissing people off.
  • edited October 2009
    Cliff_Forster wrote:
    Hey Chizow,

    Your responses are obviously very well thought through. It would please me if you would consider registering at the forum and sticking around.

    I won't pretend to be an expert on the programming end, but what I will say as a gamer and consumer is that I would prefer open standards that allow me greater flexibility in the hardware and OS I choose to run while not limiting my experience from title to title. I realize it's a pipe dream, but I long for a world where I can buy a game and load it on whatever OS I choose while running it on any vendor's hardware with a reasonable spec for that title.

    Hi Cliff,

    Thanks for the kind words. I did register after my first reply, mainly so I could format my replies properly, but this site certainly has content that fits my interests, so thanks for the welcome!

    I'm no programming guru or designer either, just a lifelong PC gamer with the same viewpoint. I've only taken an exceptional interest in physics development because I saw its potential very early on. To me, the moment Nvidia solved the hardware install-base problem overnight by enabling acceleration on all of their DX10+ unified shader hardware, I felt GPU-accelerated physics had a chance to be the next big thing on the PC.

    Personally, I don't care too much what standard or middleware is used; I only care that the hardware I choose and buy supports it. While "open and free" is certainly desirable from some viewpoints, I'm not so sure that translates well to commercial production environments like video games. These guys are typically on tight budgets and deadlines, so if it isn't easy or familiar, the likelihood it gets implemented decreases significantly.
  • edited October 2009
    Thrax wrote:
    Chizow:

    Your CUDA language vs. PhysX API argument is well-understood here, but it's an argument that goes beyond the original thesis: NVIDIA retracted support for PhysX modes it previously supported, and gamers were the victims.

    This is what people are taking issue with. Green reversed endorsement on two PhysX+ATI scenarios, one explicit, one implicit, and gamers suffered for it.

    Now, could the PhysX API be ported to another language? Yes. Could it just as easily run on OpenCL or DirectCompute? Yes. Has there been any momentum for the company to do so? No. And understandably so, because they have a lock on the physics market with PhysX.

    Why endorse an open source language for your API when you can just say you support open languages, and continue on with CUDA?

    That's what is pissing people off.
    Hi Thrax, I understand the initial scope of this news bit and don't necessarily agree with those decisions by Nvidia. My point is that all of these problems would be moot if AMD had taken a proactive and cooperative stance from the outset and supported PhysX on their hardware natively.

    This again, leads to the question that is never asked, or perhaps never answered:
    Why doesn't AMD just write a CUDA driver for their own hardware and submit to whatever PhysX logo program Nvidia requires and properly support PhysX on their hardware natively?
    The problem I have with AMD's stance throughout is that they say one thing and do another, yet produce nothing in the way of solutions. Going back to earlier comments, my only interest is to have all hardware support as many middleware/API solutions as possible to increase the adoption rate of GPU physics.

    But politics are clearly getting in the way of this. As some of the earlier links I posted allude to, PhysX support was offered to AMD. They declined, and rather than producing their own solution, they offered disingenuous reasons as to why they declined and continued to downplay and criticize PhysX.

    That's what bothers me the most: instead of helping to increase enhanced physics implementation in games, they've clearly done all they can to impede progress while failing to produce results of their own, and in the process they've shifted their position numerous times. First it was Brook+, then it was Stream, then it was OpenCL and Havok, then it was DirectCompute, and now it's Bullet and OpenCL? And still no functional product from AMD, just more excuses and rhetoric.

    As for the original news bit, ATI + Nvidia PhysX was always hit or miss; it's only possible in XP and Win 7 due to WDDM driver restrictions. ATI + Ageia PPU still works, but you have to go back to the last standalone driver. Both solutions still work with pre-190 drivers, and while I don't necessarily agree with a lock-out approach, I think it's understandable from a QA and PR standpoint, given that AMD clearly isn't willing to reciprocate support, coupled with some of their inflammatory comments in the press.
  • Cliff_Forster Icrontian
    edited October 2009
    chizow,

    I think you could understand why AMD would not be too fond of having to license its physics implementation from NVIDIA.

    Just like NVIDIA is not in love with having to re-up their agreement to produce Intel chipsets based on new architectures.

    If you're the guy buying the license, it's never good for you; you only do it because it's required to be in the game. In the case of game physics, I think PhysX has not established itself as a dominating standard to the point where AMD has to cave and license it in order to satisfy its customers. At the same time, nobody can expect NVIDIA to take a differentiating feature that they paid good money to implement and just give it to AMD for free.

    It's just business.
  • lordbeanlordbean Ontario, Canada
    edited October 2009
    Cliff_Forster wrote:
    In the case of game physics, I think PhysX has not established itself as a dominating standard to the point where AMD has to cave and license it in order to satisfy its customers.

    I think this is an excellent point, and I both agree and disagree with it on some levels. My belief is that the reason PhysX has not established itself as a dominating standard is directly because of AMD's share of the graphics market. Because AMD GPUs do not run PhysX (thereby forcing the work to fall back to the CPU), running a PhysX-based (or even PhysX-enabled) game on an AMD graphics configuration is not feasible. This means that all PhysX titles are doomed from birth to sell only to nvidia owners, which is a rather convincing reason not to use PhysX in your title, from a developer's standpoint. If AMD had bit the bullet and licensed the tech from nvidia, we would perhaps have seen a higher adoption rate of PhysX. In this regard, the blame can be made to rest on AMD's shoulders.

    However, consider it from AMD's business standpoint. If they license the technology from nvidia, they are bound to adhere to nvidia's standards for PhysX compliance, which puts AMD in a severely weakened position. If they do not license the tech, PhysX does not become widely adopted, and as a result AMD stays free to develop their own technology as they see fit, and likely holds a greater market share with larger profits as a result.

    The way I see it, AMD had no choice in the matter. In order to remain competitive in the graphics industry, they could not reasonably agree to license and be bound to standards set by their #1 competitor. It would not have made sense from a business standpoint.
  • ardichoke Icrontian
    edited October 2009
    Add to all this the fact that if AMD licenses PhysX and the standard gets widely adopted, there's nothing stopping Nvidia from denying AMD the licensing rights to PhysX in the future. Once PhysX got its claws into the majority of games, Nvidia could simply refuse to renew AMD's license and bingo: now only Nvidia cards can run all the new games efficiently. This would devastate AMD/ATI and give Nvidia a near monopoly on high-end gaming cards. By releasing Bullet under an open source license, AMD makes it so they can't pull a power play like that and screw over their competitor. This makes other graphics manufacturers more likely to adopt the standard, which in turn makes game creators more likely to use it.
  • edited October 2009
    Cliff_Forster wrote:
    chizow,

    I think you could understand why AMD would not be too fond of having to license its physics implementation from NVIDIA.

    Just like NVIDIA is not in love with having to re-up their agreement to produce Intel chipsets based on new architectures.

    If you're the guy buying the license, it's never good for you; you only do it because it's required to be in the game. In the case of game physics, I think PhysX has not established itself as a dominating standard to the point where AMD has to cave and license it in order to satisfy its customers. At the same time, nobody can expect NVIDIA to take a differentiating feature that they paid good money to implement and just give it to AMD for free.

    It's just business.
    Well, the licensing issue is actually unclear. For example, neither PhysX nor Havok currently requires any license for use on AMD or Intel x86 CPUs; it's just software that runs on the CPU. I've never seen any indication from either Nvidia or AMD claiming PhysX licensing would require any fee; again, PhysX as an SDK is free to use and develop games with.

    The only thing that might be required is participation in a logo certification program, with whatever QA and certification that entails. I think this is the problem AMD has with PhysX support on their hardware more than anything, even if it costs them nothing out of pocket. This is similar to the agreements Nvidia has with Microsoft for the 360 and Sony for the PS3, which give all of their developers access to PhysX for free.

    As for a dominant standard: again, PhysX doesn't need to become the dominant standard for its absence to be a glaring omission from AMD's feature set. I posted some links earlier to titles that have used PhysX and Havok over the years with their software SDKs, which show they are both used in a wide variety of quality titles on all platforms, PC and consoles. Bullet clearly isn't even close to that. Even on Bullet's own site you can see Havok and PhysX are the clear leaders when it comes to physics SDKs:

    http://www.bulletphysics.com/wordpress/?p=88

    So really, we should be looking at end-game outcomes. How does not supporting PhysX benefit AMD when it comes to positioning their products? You can clearly see based on the links and points already discussed, Nvidia is by far best poised to take advantage of all standards and SDKs:
    • Nvidia - Supports all relevant APIs, with OpenCL, DirectCompute, and CUDA on the PC. 100% support for PhysX, 100% support for Bullet, and they have taken all the necessary steps to support Havok if/when Intel rolls it out.
    • AMD - Supports OpenCL and DirectCompute, although their efforts lag behind Nvidia's from both a driver and SDK standpoint (neither is available for public consumption as of now). Presumed to support Bullet, and Havok if/when Intel rolls it out. Software support will remain the same on their CPUs.
    • Intel - Supports DirectCompute and OpenCL, but is also pushing its own GPGPU compute API with Ct for Larrabee. They are part of the OpenCL standard consortium, but I think their participation is given begrudgingly, as OpenCL is a clear threat to x86's dominance on Wintel platforms. Don't expect to see a hardware-accelerated Havok client before Larrabee launches (looks like AMD finally figured this out, hence throwing their support to Bullet now), and even then I would not be shocked if Intel limits it to Ct. Software support will remain the same on their CPUs.
    So what you have is this: Nvidia is poised to support every API and middleware out there, meaning their hardware will have the highest probability of supporting everything. AMD hardware will only be able to support Bullet, maybe Havok, and definitely not PhysX; at best they support 2/3 of the prevailing middleware, and the only one that's guaranteed is the weakest one. Intel's plans are unknown, but they'll definitely support Havok in hardware, and with Larrabee's x86 roots, it may be able to adequately accelerate PhysX CPU runtimes if it doesn't support CUDA natively.

    But like I said, no one is asking the right questions point-blank: asking Nvidia if they are expecting money to change hands in a licensing agreement with AMD for PhysX, asking AMD why they don't just support PhysX on their hardware natively, why they don't just write a CUDA driver for their hardware, and whether it would actually cost them anything other than man-hours and support time. Most indications from press bits are that there is no licensing fee involved and that there is no technical reason preventing AMD from supporting PhysX on their hardware natively.

    Here's a relatively obscure piece where the author actually gets the closest to asking the right questions, and you can see the AMD rep, Dave Hoff, is none too pleased:

    http://www.hitechlegion.com/our-news/1411-bullet-physics-ati-sdk-for-gpu-and-open-cl-part-3?start=1

    And another interview with Dave Hoff where he skirts all around the issue and confirms much of what I've already stated about no technical reason preventing AMD from supporting PhysX natively. Hoff is exceptionally qualified to answer these questions btw, unlike most of the other talking heads, as he played a large part in CUDA development at Nvidia:

    http://www.rage3d.com/previews/video/ati_hd5870_performance_preview/index.php?p=4
    Dave Hoff wrote:
    I can't imagine any commercial software company who has tried a GPGPU programming model previously from either graphics company to not switch to OpenCL or Direct Compute. It's very easy to move from CUDA to either of these....

    While it would be easy to convert PhysX from CUDA to OpenCL so it could run on our cards, and I've offered our assistance to do this, I can just imagine PhysX will remain that one app that doesn't ever become open.

    And that's really the crux of it. He confirms it would be easy to port CUDA code to OpenCL and DirectCompute (something Nvidia has said all along, while also noting CUDA made sense because OpenCL and DC did not exist yet). What he doesn't answer is why AMD didn't simply write their own CUDA driver, which should be just as easy to port to emergent APIs like OpenCL and DirectCompute as necessary, as his own statements confirm. Instead, AMD chose to throw out disingenuous excuses and reasons why they wouldn't support PhysX, and in the meantime they've not only backtracked on previous comments, they still have nothing to show for their efforts.
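    To see why both sides treat the CUDA-to-OpenCL port as trivial, compare the kernel languages themselves. The sketch below is purely illustrative (the SAXPY kernel is a made-up example, not PhysX code): it strips the handful of vendor-specific spellings from a CUDA kernel and its OpenCL equivalent and shows that the remaining compute logic is identical.

```python
import re

# A made-up SAXPY kernel, written once in CUDA C and once in OpenCL C.
cuda_kernel = """
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}
"""

opencl_kernel = """
__kernel void saxpy(int n, float a, __global const float *x, __global float *y) {
    int i = get_global_id(0);
    if (i < n) y[i] = a * x[i] + y[i];
}
"""

def normalize(src):
    """Strip the vendor-specific spellings so only the compute logic remains."""
    src = src.replace("__global__ void", "KERNEL")  # CUDA entry-point qualifier
    src = src.replace("__kernel void", "KERNEL")    # OpenCL entry-point qualifier
    src = src.replace("__global ", "")              # OpenCL address-space qualifier
    # Both index expressions compute "my global thread id".
    src = src.replace("blockIdx.x * blockDim.x + threadIdx.x", "GLOBAL_ID")
    src = src.replace("get_global_id(0)", "GLOBAL_ID")
    return re.sub(r"\s+", " ", src).strip()

# After renaming a few vendor keywords, the two kernels are identical.
print(normalize(cuda_kernel) == normalize(opencl_kernel))  # → True
```

    The renamed keywords are essentially the whole porting job at the kernel level; the host-side API calls differ more, but they map one-to-one, which is what Hoff means by "very easy to move from CUDA".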

    Sorry, lots to read I know, but the gist of it should read: AMD's decisions have led to a lose-lose situation for everyone involved, both their own customers and those who actually want to see GPU physics succeed, because they've done everything they can to hinder development. Nvidia, on the other hand, has not only produced their own solutions but is clearly chairing and pioneering all emergent technologies. Everything they've done is proactive with regard to GPU physics, and that extends far beyond their own proprietary technology: their efforts chairing OpenCL, providing the working SDKs upon which Bullet was built, producing the only functional DirectCompute demos months in advance, etc.
  • edited October 2009
    lordbean wrote:
    My belief is that the reason PhysX has not established itself as a dominating standard is directly because of AMD's share of the graphics market. Because AMD GPUs do not run PhysX (thereby forcing the work to fall back to the CPU), running a PhysX-based (or even enabled) game on an AMD graphics configuration is not feasible.
    As a rather significant point in this discussion, Nvidia dominates the discrete graphics card market by a 2:1 ratio on just about any metric. Market share? Check (see Jon Peddie or Mercury Research numbers). User surveys? Check (see the Steam survey or FutureMark/YouGamers). Revenue? Check (50% of Nvidia's $4 billion is $2 billion, compared to ATI's $1 billion total). TSMC wafer supply? Check (see iSuppli figures). It's fluctuated somewhat over the last 3-4 years, but hasn't gone below ~60% for Nvidia and 30% for ATI, and was as high as 70% for Nvidia and 20% for ATI at G80/G92's peak in 2007.

    So clearly Nvidia has the vast majority of relevant hardware in this case, which is DX10+ unified programmable shader hardware (required for GPGPU physics acceleration). Not only do they have the majority of hardware, their Developer Relations program TWIMTBP is also better funded so it allows them to work with more titles to implement hardware specific features, like PhysX. You might have seen this value-add program has also come under fire from AMD recently.....

    Anyways, AMD and Intel CPUs will still be able to run limited software-accelerated PhysX effects; it'll just be the same as the console versions, and similar to any other physics effects seen on the PC over the last 7-8 years. But without GPU acceleration, the effects will undoubtedly be inferior to the GPU-accelerated version. In the past this hasn't been much of a problem, because the titles with advanced PhysX were met with mixed reception. The most recent ruckus with regard to PhysX is undoubtedly a result of PhysX's "killer app", Batman: Arkham Asylum. All those who were indifferent or openly critical of PhysX suddenly care enough to make a massive stink over it now......
  • lordbeanlordbean Ontario, Canada
    edited October 2009
    chizow wrote:
    Well, the licensing issue is unclear actually; for example, neither PhysX nor Havok currently requires any licensing for use on either AMD or Intel x86 CPUs. It's just software that runs on the CPU. I've never seen any indication from either Nvidia or AMD claiming PhysX licensing would require any fee; again, PhysX as an SDK is free to use and develop games with.

    The SDK is free, this is 100% true. However, the SDK only makes it easier to write code that interacts with the PhysX core, whose source is owned by nvidia, and has to be licensed (even if it's licensed for free) by other companies in order to run at the hardware level.
    chizow wrote:
    The only thing that might be required is participation in a logo certification program that would involve QA and perhaps certification. I think this is the problem AMD has with PhysX support on their hardware more than anything, even if it costs them nothing out of pocket. This is similar to the agreements Nvidia has with Microsoft for the 360 and Sony for the PS3, which give all of their developers access to PhysX for free.

    It's not difficult to see why AMD would have a problem with this. By licensing PhysX from nvidia, even if it costs nothing out of AMD's pocket, they are agreeing that their tech has to be OKed by nvidia before it can even go on the market. This puts AMD in a compromised position at best (nvidia would then be able to scrutinize all their graphics technology), and at worst, nvidia could use it as a way to undermine AMD's profits through unfair QA reports, and AMD couldn't do a thing about it because they'd have signed the license.
    chizow wrote:
    As for a dominant standard: again, there's no need for PhysX to become the dominant standard for its absence to be a glaring omission from AMD's feature set. I posted some links earlier to titles that have used PhysX and Havok over the years with their software SDKs, which show they both are used in a wide variety of quality titles on all platforms, PC and consoles. Bullet clearly isn't even close to that. Even on Bullet's site you can see Havok and PhysX are the clear leaders when it comes to physics SDKs:

    http://www.bulletphysics.com/wordpress/?p=88

    It may be a glaring omission from AMD's graphics cards, but the fault in this does not lie with AMD. By keeping the source code for PhysX proprietary (and don't try to argue that it's not; the SDK may be free, but that's not the same thing as PhysX's actual source), nvidia will effectively have a stranglehold on AMD in the graphics sector if PhysX becomes the dominant standard. Pretty clear why nvidia is trying to establish PhysX in this regard.
    chizow wrote:
    So really, we should be looking at end-game outcomes. How does not supporting PhysX benefit AMD when it comes to positioning their products? You can clearly see based on the links and points already discussed, Nvidia is by far best poised to take advantage of all standards and SDKs:

    Not supporting PhysX benefits AMD because if they did support it, it would mean they had licensed it from nvidia, and they would then be essentially under nvidia's control if PhysX becomes the dominant standard.
    chizow wrote:
    • Nvidia - Supports all relevant APIs with OpenCL, DirectCompute, and CUDA on the PC. 100% support for PhysX, 100% support for Bullet, and has taken all necessary steps to support Havok if/when Intel rolls it out.
    • AMD - Supports OpenCL and DirectCompute, although their efforts are lagging behind Nvidia's from both a driver and SDK standpoint (neither is available for public consumption as of now). Presumed to support Bullet, and Havok if/when Intel rolls it out. Software support will remain the same on their CPUs.
    • Intel - Supports DirectCompute and OpenCL, but is also pushing its own GPGPU compute API with Ct for Larrabee. They are part of the OpenCL standards consortium, but I think their participation is given begrudgingly, as OpenCL is a clear threat to x86's dominance on Wintel platforms. Don't expect to see a hardware-accelerated Havok client before Larrabee launches (looks like AMD finally figured this out, hence throwing their support to Bullet now), and even then I would not be shocked if Intel limits it to Ct. Software support will remain the same on their CPUs.

    Nvidia: You conveniently forgot to mention CAL in this list, AMD's solution to GPU computing. CUDA was built by nvidia, for nvidia graphics hardware. AMD's CAL was built by AMD, for AMD's graphics hardware. CUDA was never intended to be an industry standard, nor was CAL.
    AMD: Supports OpenCL, DirectCompute, and CAL. See above point. Also, if AMD is "lagging behind Nvidia's from both a driver and SDK standpoint (neither are available for public consumption as of now)", please explain why the Radeon HD5850 and HD5870 are available right now on store shelves with full DX11 support, and nvidia's GT300 is not due until at least Q1 2010.
    Intel: Honestly doesn't matter in the scope of this argument. If OpenCL really is a threat to the x86 platform, why is AMD still trying hard to compete in the CPU market? The central processing unit is not going anywhere in a hurry.
    chizow wrote:
    So what you have is this: Nvidia is poised to support every API and middleware out there, meaning their hardware will have the highest probability of supporting everything. AMD hardware will only be able to support Bullet, maybe Havok, and definitely not PhysX. At best they support 2/3 of the prevailing middleware, and the only one that's guaranteed is the weakest one. Intel's full plans are unknown, but they'll definitely support Havok in hardware, and with Larrabee's x86 roots, it may be able to adequately accelerate PhysX CPU runtimes even if it doesn't support CUDA natively.

    Of course nvidia supports PhysX and AMD doesn't. If PhysX becomes the standard, AMD's graphics become moot, since nvidia will have control over AMD's products. AMD is pushing for a fully vendor-neutral solution (Bullet), which nvidia realizes would be suicide not to support, as it is based on DirectX 11 code, and thus required for DirectX 11 compliance. Both Havok and PhysX will become obsolete once the open-source standard is in place, and nvidia is trying their best to stop that from happening because PhysX would make them fully dominant in the graphics sector.
    chizow wrote:
    But like I said, no one is asking the right questions, point-blank, asking Nvidia if they are expecting money to change hands in a licensing agreement with AMD for PhysX. No one is asking AMD why they don't just support PhysX on their hardware natively, why they don't just write a CUDA driver for their hardware, and if it would actually cost them anything other than man hours and support time. Most indicators from press bits indicate there is no licensing fee involved and that there is no technical reason preventing AMD from supporting PhysX on their hardware natively.

    CAL is AMD's CUDA driver. As I mentioned above, CUDA was developed by nvidia specifically for nvidia's hardware. It was never intended to be used on any other GPUs. DirectCompute is the next generation of the idea behind CAL and CUDA - it is the vendor-neutral implementation of GPU processing. Asking AMD why they don't license PhysX from nvidia would be like asking a banker why he won't just give you his money rather than loan it to you. Supporting PhysX natively on AMD GPUs would require that AMD license it from nvidia, which as I mentioned more than once above, places AMD in a very compromised position.
    chizow wrote:
    Here's a relatively obscure piece where the author actually gets the closest to asking the right questions, and you can see the AMD rep, Dave Hoff is none too pleased:

    http://www.hitechlegion.com/our-news/1411-bullet-physics-ati-sdk-for-gpu-and-open-cl-part-3?start=1

    And another interview with Dave Hoff where he skirts all around the issue and confirms much of what I've already stated about no technical reason preventing AMD from supporting PhysX natively. Hoff is exceptionally qualified to answer these questions btw, unlike most of the other talking heads, as he played a large part of CUDA development at Nvidia:

    http://www.rage3d.com/previews/video/ati_hd5870_performance_preview/index.php?p=4
    Dave Hoff wrote:
    The contrast should be fairly stark here: we're intentionally enabling physics to run on all platforms - this is all about developer adoption. Of course we're confident enough in our ability to bring compelling new GPUs to market that we don't need to try to lock anyone in. As I mentioned last week, if the competition altered their drivers to not work with our Radeon HD 4800 series cards, I can't imagine them embracing our huge new leap with the HD 5800 series.

    While it would be easy to convert PhysX from CUDA to OpenCL so it could run on our cards, and I've offered our assistance to do this, I can just imagine PhysX will remain that one app that doesn't ever become open.

    You missed a vital bit of context here. Read this again carefully, between the lines, and this is what he was actually saying:

    "While it would be easy for nvidia to convert PhysX from CUDA to OpenCL so it could run on our cards, and I've offered our assistance to do this, I can just imagine PhysX will remain that one app that doesn't ever become open."

    He's pointing out that nvidia could change PhysX to OpenCL quite easily, yet they don't want to do it. The reasons for this are the points I've already made above - if PhysX is adopted as the standard while it is still proprietary to nvidia, they get a stranglehold on AMD. Case in point.

    chizow wrote:
    And that's really the crux of it. He confirms it would be easy to port CUDA code to OpenCL and DirectCompute (something Nvidia has said all along, while also noting CUDA made sense because OpenCL and DC did not exist yet). What he doesn't answer is why AMD didn't simply write their own CUDA driver, which should be just as easy to port to emergent APIs like OpenCL and DirectCompute as necessary, as his own statements confirm. Instead, AMD chose to throw out disingenuous excuses and reasons why they wouldn't support PhysX, and in the meantime they've not only backtracked on previous comments, they still have nothing to show for their efforts.

    Sorry, lots to read I know, but the gist of it should read: AMD's decisions have led to a lose-lose situation for everyone involved, both their own customers and those who actually want to see GPU physics succeed, because they've done everything they can to hinder development. Nvidia, on the other hand, has not only produced their own solutions but is clearly chairing and pioneering all emergent technologies. Everything they've done is proactive with regard to GPU physics, and that extends far beyond their own proprietary technology: their efforts chairing OpenCL, providing the working SDKs upon which Bullet was built, producing the only functional DirectCompute demos months in advance, etc.

    Nvidia's decision to keep PhysX's source proprietary instead of porting it to OpenCL is what's hurting customers, not AMD's decisions. AMD's decisions have been entirely in the interest of keeping themselves afloat in the market. They are not trying to hurt their customers at all; in fact, they are trying to help their customers by pushing for open-source physics (which, if I may point out, will not hurt nvidia or its customers, since Bullet and OpenCL are vendor-neutral and will thus run on nvidia hardware). Nvidia is holding out on the off chance that AMD caves and licenses PhysX from them, but it's not going to happen. If AMD caves on this, they hand the graphics sector to nvidia on a golden platter.
  • edited October 2009
    lordbean wrote:
    The SDK is free, this is 100% true. However, the SDK only makes it easier to write code that interacts with the PhysX core, whose source is owned by nvidia, and has to be licensed (even if it's licensed for free) by other companies in order to run at the hardware level.
    Yep, the source is $50K, as I'm pretty sure I've already mentioned, but you would only need the source if you wanted to integrate it into your own tools or recompile it for whatever reason, perhaps for target hardware.
    lordbean wrote:
    It's not difficult to see why AMD would have a problem with this. By licensing PhysX from nvidia, even if it costs nothing out of AMD's pocket, they are agreeing that their tech has to be OKed by nvidia before it can even go on the market. This puts AMD in a compromised position at best (nvidia would then be able to scrutinize all their graphics technology), and at worst, nvidia could use it as a way to undermine AMD's profits through unfair QA reports, and AMD couldn't do a thing about it because they'd have signed the license.
    Oh please, are these tin-foil-hat what-ifs better alternatives to the situation playing out in the media now? Where AMD is crying about their hardware being unsupported for a solution they never supported? Their users being forced to download hacks off torrents as workarounds to get a half-baked PhysX solution running on their CPU instead of natively on their GPU? Needing to rely on the power of community workarounds to get a patch that intercepts driver calls so ATI 3D + Nvidia PhysX work in the same ecosystem? That is what "open" and "free" get you: unsupported garbage solutions and workarounds.
    lordbean wrote:
    It may be a glaring omission from AMD's graphics cards, but the fault in this does not lie with AMD. By keeping the source code for PhysX proprietary (and don't try to argue that it's not, the SDK may be free but that's not the same thing as PhysX's actual source), nvidia will effectively have a stranglehold of AMD in the graphics sector if PhysX becomes the dominant standard. Pretty clear why nvidia is trying to establish PhysX in this regard.
    Again, who does the fault lie with? I've already stated the source costs $50K, a point I've never concealed and one that's stated very clearly on their own FAQ page. I'm also not sure how the PhysX source is even relevant, given they'd have to write a driver for their own hardware before worrying about auditing the source code to make sure Nvidia isn't cheating them somehow.

    Also, the only way Nvidia would gain any advantage or stranglehold over AMD is if PhysX gained widespread adoption and Nvidia supported it while AMD hardware did NOT. Which is the case now. If AMD and Nvidia both support PhysX, how is it an advantage for either of them? It's not.
    lordbean wrote:
    Not supporting PhysX benefits AMD because if they did, it means they licensed it from nvidia, and essentially are then completely under nvidia's control when PhysX becomes the dominant standard.
    Yes, PhysX is under Nvidia's control, but AMD's hardware and drivers are under their control. If there are concerns about artificial limitation of performance, then obviously AMD could invest in purchasing the source code and optimizing as necessary; the cost of paranoia in this case would be $50K.
    lordbean wrote:
    Nvidia: You conveniently forgot to mention CAL in this list, AMD's solution to GPU computing. CUDA was built by nvidia, for nvidia graphics hardware. AMD's CAL was built by AMD, for AMD's graphics hardware. CUDA was never intended to be an industry standard, nor was CAL.
    CAL is dead; it never even took off. AMD abandoned support for it along with Stream and Brook+ and whatever other synonym for "unsupported" they decided to go with. If you can find an app for CAL/Stream/Brook+ outside of Stanford's campus, I'd be genuinely shocked. Pretty sure one or both of those Dave Hoff interviews clearly get the point across when he ducks the question and says they're going with OpenCL and DirectCompute instead.
    lordbean wrote:
    AMD: Supports OpenCL, DirectCompute, and CAL. See above point. Also, if AMD is "lagging behind Nvidia's from both a driver and SDK standpoint (neither are available for public consumption as of now)", please explain why the Radeon HD5850 and HD5870 are available right now on store shelves with full DX11 support, and nvidia's GT300 is not due until at least Q1 2010.
    Again, CAL is irrelevant. For the bolded portion: you can't download an official OpenCL driver or SDK for AMD as of today; they're still going through validation. I'm not sure what the point of bringing up the 5850 and 5870 is when I was clearly referring to software and drivers, not hardware, not to mention Nvidia's DX10 hardware currently supports OpenCL and DirectCompute just fine.
    lordbean wrote:
    Intel: Honestly doesn't matter in the scope of this argument. If OpenCL really is a threat to the x86 platform, why is AMD still trying hard to compete in the CPU market? The central processing unit is not going anywhere in a hurry.
    AMD's x86 license is constantly being challenged by Intel and is perpetually tied up in litigation. Obviously their interests are tied to both the CPU and GPU, so they will continue to compete as best they can in both markets, but if they saw a chance to break away and reduce their reliance on x86 and its exorbitant licensing fees, I'm sure they'd be interested.

    As for the bolded portion, I guess that can be interpreted as a double entendre, perhaps? I'd say how much of a hurry depends on what computing requirements you're looking at, as traditional CPUs aren't always the best for every task. I'd agree, though, that the CPU isn't going anywhere in a hurry from a performance standpoint; its gains have stagnated in recent years to the point that it's only adhering to Moore's Law in size and transistor count.
    lordbean wrote:
    Of course nvidia supports PhysX and AMD doesn't. If PhysX becomes the standard, AMD's graphics become moot, since nvidia will have control over AMD's products. AMD is pushing for a fully vendor-neutral solution (Bullet), which nvidia realizes would be suicide not to support, as it is based on DirectX 11 code, and thus required for DirectX 11 compliance. Both Havok and PhysX will become obsolete once the open-source standard is in place, and nvidia is trying their best to stop that from happening because PhysX would make them fully dominant in the graphics sector.
    I'm about to go into a Prisoner's Dilemma explanation, but I really shouldn't have to; this is just common sense. If AMD and Nvidia both cooperate and support PhysX, they both win. There's only a loser if one, the other, or both defect. AMD has chosen to defect. Obviously it's in everyone's best interest to cooperate: the outcomes from cooperating are clearly better than the outcomes if one or both defect.
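    The Prisoner's Dilemma framing above can be sketched with a toy payoff matrix. The numbers below are made-up stand-ins chosen only to reproduce the argument's structure, not real market figures:

```python
# Illustrative payoffs (higher is better) for each (AMD, Nvidia) strategy pair.
# "cooperate" = support PhysX; "defect" = refuse to support it.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # both support PhysX: gamers and both vendors win
    ("cooperate", "defect"):    (0, 5),  # one-sided support: the cooperator is exploited
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # nobody supports it: GPU physics stagnates
}

def total_welfare(amd, nvidia):
    """Combined payoff for a pair of strategies."""
    a, n = payoffs[(amd, nvidia)]
    return a + n

# Mutual cooperation maximizes the combined outcome...
best = max(payoffs, key=lambda pair: total_welfare(*pair))
print(best)  # → ('cooperate', 'cooperate')

# ...even though each side, taken alone, is tempted to defect,
# which is exactly the tension the post is pointing at.
print(payoffs[("defect", "cooperate")][0] > payoffs[("cooperate", "cooperate")][0])  # → True
```

    With these stand-in numbers, mutual support is the best joint outcome, while unilateral defection pays off for the defector, which is the classic dilemma shape the post is invoking.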

    Also, I'm not sure how you come to the conclusion that AMD's graphics become irrelevant if PhysX becomes dominant. Not only does that put far too much emphasis on physics as a feature set of graphics cards, that scenario would only have a chance of occurring IF AMD chooses not to support PhysX. If they supported it, they wouldn't be at any disadvantage with regard to that feature, outside of conspiracy theories about Nvidia somehow crippling AMD hardware performance...

    As for Bullet... I've already provided the link. It's OpenCL-based, not DirectCompute, and Bullet's OpenCL implementation was wholly developed on Nvidia hardware using Nvidia's OpenCL SDK. I'm sure they would've considered using AMD's hardware and SDK, but it would've been difficult to do so with no AMD OpenCL driver and no AMD OpenCL SDK. Again, the relevant quotes are in that HiTechLegion link.
    lordbean wrote:
    CAL is AMD's CUDA driver. As I mentioned above, CUDA was developed by nvidia specifically for nvidia's hardware. It was never intended to be used on any other GPUs. DirectCompute is the next generation of the idea behind CAL and CUDA - it is the vendor-neutral implementation of GPU processing. Asking AMD why they don't license PhysX from nvidia would be like asking a banker why he won't just give you his money rather than loan it to you. Supporting PhysX natively on AMD GPUs would require that AMD license it from nvidia, which as I mentioned more than once above, places AMD in a very compromised position.
    No, CAL is their low-level C-based language that compiles to whatever their machine code is. It's not directly compatible with CUDA, which is Nvidia's C-based language architecture that also has a low-level API and driver to mirror CAL. AMD would need to write a driver for their hardware to work with CUDA, just as Nvidia would need to write a driver for their hardware to work with CAL. It's obviously possible, given they're all C-based languages and have already been ported to other C-based languages like OpenCL. The difference is, there's a reason to write a CUDA driver; there is none, and never has been one, for Stream/CAL/Brook+.

    http://developer.amd.com/gpu/ATIStreamSDK/Pages/default.aspx

    lordbean wrote:
    You missed a vital bit of context here. Read this again carefully, between the lines, and this is what he was actually saying:

    "While it would be easy for nvidia to convert PhysX from CUDA to OpenCL so it could run on our cards, and I've offered our assistance to do this, I can just imagine PhysX will remain that one app that doesn't ever become open."

    He's pointing out that nvidia could change PhysX to OpenCL quite easily, yet they don't want to do it. The reasons for this are the points I've already made above - if PhysX is adopted as the standard while it is still proprietary to nvidia, they get a stranglehold on AMD. Case in point.
    No, I didn't miss that bit of context. My point was that it's obviously possible to go one way, from CUDA to OpenCL/DirectCompute; he acknowledges it as trivial. The question that is not asked is why they don't get off their asses and write a driver for their own hardware instead of expecting everything to be handed to them for free while doing nothing. It's the typical free-rider problem; AMD simply doesn't realize free-rider economics don't apply well to technology. They could just as easily write a CUDA driver from their OpenCL driver today, just as they could've 14 months ago. It's trivial, remember?

    It's not a simple question of Nvidia not wanting to. As I've already mentioned: 1) there was no OpenCL when PhysX rolled out; they created the API for lack of an alternative, and 2) they have no incentive to port it, and even less reason to do so now, when the only beneficiary is a freeloading AMD that has done nothing but publicly criticize their technology.
    lordbean wrote:
    Nvidia's decision to keep PhysX's source proprietary instead of porting it to OpenCL is what's hurting customers, not AMD's decisions. AMD's decisions have been entirely in the interest of AMD keeping themselves afloat in the market. They are not trying to hurt their customers at all, in fact they are trying to help their customers by pushing for open source physics (which, if I may point out, will not hurt nvidia or its customers since bullet and openCL are vendor-neutral, and will thus run on nvidia hardware). In fact, it is nvidia that is hurting their customers by keeping PhysX proprietary instead of porting it to OpenCL. Nvidia is holding out in the off chance that AMD caves and licenses PhysX from them, but it's not going to happen. If AMD caves on this, they hand the graphics sector to nvidia on a golden platter.
    It's obvious you're not familiar with the timeline and specifics of this argument, so I'll leave it at this: Nvidia isn't hurting customers. They've developed a value-add feature that did not exist previously and provided it for free to all customers using their hardware, which, again, is the overwhelming percentage of the discrete GPU market. They've also fully supported all emerging APIs and physics technologies and have stated they would do so from the start.

    The only people hurt by AMD's unwillingness to cooperate are AMD's customers. That should be plainly obvious. It's the point of this original news bit, and it's further emphasized by the fact that AMD users are the ones searching high and low for workarounds and hacks to get PhysX working, even at reduced capability, on their hardware. And you want to claim this is a better solution than AMD just sucking it up, making whatever apologies are required, and supporting PhysX natively on their own hardware? They can't win for losing, really, since once again, Nvidia will support every API and physics middleware AMD can, with the addition of PhysX. The only way AMD can hope to reach parity and feature-match Nvidia is if PhysX dies and becomes irrelevant, which makes the motivation for their rhetoric quite obvious and transparent.

    Sorry if I come off as a bit short or irritated in some of my replies; the basic argument you seem to be presenting (that AMD, their customers, and everyone else are somehow better off for AMD not supporting PhysX) just makes no sense whatsoever.
  • lordbeanlordbean Ontario, Canada
    edited October 2009
    chizow wrote:
    Yep, the source is $50K as I'm pretty sure I've already mentioned, but you would only need the source if you wanted to integrate it into your own tools, recompile it for whatever reason for perhaps target hardware.

    Please give me a link to where I can purchase the source code for the PhysX core for $50,000. All I can find is http://developer.nvidia.com/object/physx.html and it's not there.
    chizow wrote:
    Oh please, these tin-foil hat what-ifs are better alternatives to the situation bearing out in the media now? Where AMD is crying about their hardware being unsupported for a solution they never supported? Their users being forced to download hacks off torrents for workarounds to get a half-baked PhysX solution to run on their CPU instead of natively on their GPU? Needing to rely on the power of community and workaround+ to get a patch that intercepts driver calls so ATI 3D + Nvidia PhysX work in the same ecosystem? That is what "open" and "free" get you, unsupported garbage solutions and workarounds.

    Tell that to DirectX. Last I checked, anyone can develop on it, and it's free.

    chizow wrote:
    Again, who does fault lie with? I've already stated the source costs $50K, a point I've never concealed and stated very clearly on their own FAQ page. Also not sure how the PhysX source is even relevant given they'd have to write a driver for their own hardware before worrying about optimizing the source code to make sure Nvidia isn't cheating them somehow.

    Again, please give me a link to the page that allows me to buy the PhysX source code for $50,000. I can't find it. The PhysX source code is VERY relevant here - it is the heart of PhysX itself, and without it, how is AMD supposed to port it?
    chizow wrote:
    Also, the only way Nvidia would gain any advantage or stranglehold over AMD is if PhysX gained widespread adoption and Nvidia supported it and AMD hardware did NOT. Which is the case now. If AMD and Nvidia both support PhysX, how is it an advantage for either of them? It's not.

    Currently, nvidia does support it (they purchased it from Ageia) and AMD does not, yet I do not see widespread acceptance of PhysX. If AMD supported PhysX tomorrow, it would mean they had licensed the proprietary source code from nvidia to port it, meaning their hardware becomes subject to nvidia's scrutiny to make sure their port is fully compatible with the original code. As well, there's nothing stopping nvidia from revoking the license with AMD once PhysX does become widely accepted.

    chizow wrote:
    Yes, PhysX is under Nvidia's control, but AMD's hardware and drivers are under their control. If there are concerns about artificial limitation of performance, then obviously AMD could invest in purchasing the source code and optimizing as necessary, but the cost of paranoia in this case would be $50K.

    Between two corporations as major as AMD and nvidia, the source for something as potentially groundbreaking as PhysX is not going to be something nvidia just hands out. If they DO in fact offer to sell it with no future obligations to other companies, it would be for a lot more than $50,000. I'd be surprised if they do this at all, and if they do, it's much more likely that the $50,000 is for a license to access the PhysX source code, not to purchase it.

    chizow wrote:
    CAL is dead; it never even took off, and AMD abandoned support of it along with Stream and Brook+ and whatever other synonym for "unsupported" they decided to go with. If you can find an app for CAL/Stream/Brook+ outside of Stanford's campus I'd be genuinely shocked. Pretty sure one or both of those Dave Hoff interviews clearly get the point across when he ducks the question and says they're going with OpenCL and DirectCompute instead.

    If AMD has abandoned support for CAL, it was only in preparation for DirectCompute, which is logical because DirectCompute is a vendor-neutral standard. It's completely logical to abandon the proprietary standard in favor of the open source one... it represents that the company is thinking of its consumers and willing to play fair with its competition, two things nvidia seems to be unwilling to do at the moment. In all likelihood, they're grasping at straws trying to find a way to slow AMD, because AMD released their DX11 GPUs a full 6 months before the nvidia DX11 GPUs are even expected.

    chizow wrote:
    Again, CAL is irrelevant. For the bolded portion, you can't download an official OpenCL driver or SDK for AMD as of today; they're still going through validation. Not sure what the point of bringing up the 5850 and 5870 was when I was clearly referring to software and drivers, not hardware, not to mention Nvidia's DX10 hardware currently supports OpenCL and DirectCompute just fine.

    CAL is irrelevant. So is CUDA. That's the whole point of DirectCompute. Saying that CUDA is good and should be a standard and that CAL is somehow bad, when both standards are proprietary and made obsolete by DirectX 11, looks dangerously close to fanboy-ism. Also, I'd like to point out that AMD has full OpenCL support on Mac OS X Snow Leopard, and they are using that OS to develop Bullet as per Dave Hoff. Anything that works in OS X isn't far from being released in Windows, and at the moment, it doesn't even matter that OpenCL isn't fully supported in Windows yet. There are no applications out there to really take advantage of it yet.

    chizow wrote:
    AMD's x86 license is constantly being challenged by Intel and is perpetually tied up in litigation. Obviously their interests are tied to both the CPU and GPU, so they will continue to try and compete as best they can in both markets, but if they saw a chance to break away and reduce their reliance on x86 and its exorbitant licensing fees, I'm sure they'd be interested.

    Intel holds by far the dominant share of the CPU market. To be competitive, AMD must match intel's standards, or else they will simply cease to be a CPU developer. Just because you can execute C++ code on your GPU does not mean that the CPU is going to be made obsolete. At the very worst, some hardware techniques used in building GPUs may be adopted for CPUs. The core of the computer will remain the CPU for a long time to come.
    chizow wrote:
    As for the bolded portion, I guess that can be interpreted as a double entendre, perhaps? I'd say how much of a hurry depends on what computing requirements you're looking at, as traditional CPUs aren't always the best for every task. I'd agree, though, that the CPU isn't going anywhere in a hurry from a performance standpoint; its gains have stagnated in recent years to the point it's only adhering to Moore's Law in size and transistor count.

    You misinterpreted my statement, although that's possibly my fault for not being clear enough. What I meant was that the CPU will not be replaced or become an obsolete part of the computer system as we know it for a long time to come.

    chizow wrote:
    I'm about to go into a Prisoner's Dilemma explanation, but I really shouldn't have to; this is just common sense. If AMD and Nvidia both cooperate and support PhysX, they both win. There's only a loser if one or the other, or both, defect. AMD has chosen to defect. Obviously it's in everyone's best interest to cooperate. The outcomes from cooperating are clearly better than the outcomes if one or both defect.

    AMD is willing to cooperate on an open source standard, which PhysX currently is not. It is proprietary code, owned by nvidia. If AMD has to license the source code from nvidia in order to cooperate on it, their hardware becomes subject to nvidia's scrutiny and guidelines to ensure compliance. If I were AMD's board of directors, this is not a position I'd want to be in.
    chizow wrote:
    Also, I'm not sure how you come to the conclusion that AMD's graphics become irrelevant if PhysX becomes dominant. Not only does it put far too much emphasis on physics as a feature set of graphics cards; that scenario would only have a chance of occurring IF AMD doesn't choose to support PhysX. If they supported it, they wouldn't be at any disadvantage with regard to that feature outside of the conspiracy theories of Nvidia somehow crippling AMD hardware performance...

    AMD's graphics become a moot point if PhysX becomes the dominant standard while the code is still proprietary to nvidia. Let's say that two years down the road, 50% of all new games use PhysX as part of their execution. That means that in order to be a viable choice for 50% of the gaming sector, AMD has to support PhysX. To support PhysX, they have to license it from nvidia, and that means their graphics cards must be inspected and passed by nvidia before they can be released to the market. There's nothing stopping nvidia from preventing the release of AMD graphics cards which are more powerful than nvidia graphics cards, or even simply revoking AMD's PhysX license once PhysX is established as a standard. When 50%+ of current games use PhysX, this would be devastating to the point of knocking AMD right off the graphics market.
    chizow wrote:
    As for Bullet... I've already provided the link. It's OpenCL-based, not DirectCompute, and Bullet's OpenCL implementation was wholly developed on Nvidia hardware using Nvidia's OpenCL SDK. I'm sure they would've considered using AMD's hardware and SDK, but it would've been difficult to do so with no AMD OpenCL driver and no AMD OpenCL SDK. Again, relevant quotes are in that TechLegion link.

    The point of an open source standard is that no matter whose development tools you use, the finished product will run fine on any hardware. Both AMD and nvidia support OpenCL, which means that it really doesn't matter whose graphics card it is developed on. AMD also fully supports OpenCL on Mac OS X Snow Leopard, meaning that support on Windows cannot be far from release. As for the SDK, why would AMD bother to make one when nvidia has already created a good one? I've already made this point at the beginning of the paragraph.

    chizow wrote:
    No, CAL is their low-level C-based language that compiles for whatever their machine code is. It's not directly compatible with CUDA, as that's Nvidia's C-based language architecture that also has a low-level API and driver to mirror CAL. AMD would need to write a driver for their hardware to work with CUDA, just as Nvidia would need to write a driver for their hardware to work with CAL. It's obviously possible given they're all C-based languages and have already been ported to other C-based languages like OpenCL. The difference is, there's a reason to write a CUDA driver; there is none, and never has been, for Stream/CAL/Brook+.

    http://developer.amd.com/gpu/ATIStreamSDK/Pages/default.aspx

    If neither one are currently cross-compatible without needing to be ported, why bother to port them at all? Both DirectX 11 and OpenCL offer alternatives that are free to develop on and guaranteed to run on any modern graphics hardware.


    chizow wrote:
    No, I didn't miss that bit of context. My point was that it's obviously possible to go one way from CUDA to OpenCL/DirectCompute, and he acknowledges it as trivial; the question that is not asked is why they don't get off their asses and write a driver for their own hardware instead of expecting everything to be handed to them for free while doing nothing. It's the typical free-loader problem; AMD simply doesn't realize free-rider economics don't apply well to technology. They could just as easily write a CUDA driver from their OpenCL driver today, just as they could've 14 months ago. It's trivial, remember?

    I've made this point repeatedly, but I guess it bears making again. Nvidia owns the source code to PhysX. AMD cannot simply write their own driver for it without licensing the source to port it, which makes them responsible to nvidia both for producing a port that works properly with all the commands the original code had, and also for producing graphics hardware compliant to the standard. It is simply not a position that would be good for AMD to be in.
    chizow wrote:
    It's not a simple question of Nvidia not wanting to. As I've already mentioned: 1) there was no OpenCL when PhysX rolled out; they created the API for lack of an alternative, and 2) they have no incentive to port it, and even less reason to do so now when the only beneficiary is a free-loading AMD that has done nothing but publicly criticize their technology.

    Nvidia did not create PhysX. They purchased Ageia and all Ageia's assets so that they could make the PhysX code proprietary to themselves. Their hope was that PhysX would become widely accepted off the bat (due to the fact it was the only hardware-accelerated physics solution at the time), and that they'd be able to license it to other GPU making companies. This would create a situation where any other GPU-producing company would need nvidia's stamp of acceptance on any hardware design which would accelerate PhysX. Essentially, this would give nvidia a monopoly on the graphics market, because they could cut support for PhysX from all other companies on a whim.

    chizow wrote:
    It's obvious you're not familiar with the timeline and specifics of this argument, so I'll leave it at this. Nvidia isn't hurting customers; they've developed a value-add feature that did not exist previously and provided it for free to all customers using their hardware, which again, is the overwhelming percentage of the discrete GPU market. They've also fully supported all emerging API and physics technologies and have stated they would do so from the start.

    I'm not familiar with the timeline of my argument? I'm fully aware of how PhysX came to be, and also how it came to be accelerated by nvidia hardware only. Ageia developed the software, and they even developed an expansion card designed to work in tandem with the graphics card to accelerate PhysX. Nvidia did not develop the PhysX technology, they purchased Ageia and converted the PhysX source into an application that would only be accelerated by nvidia's GPUs. If nvidia truly cared about the consumer and the future adoption of PhysX as a standard, they would have provided the necessary code and tools to other hardware corporations free of charge and without obligation, or else would not have purchased Ageia in the first place. By trying to keep PhysX as a proprietary standard, they are hurting their consumers in the long-run because if PhysX becomes the accepted standard, nvidia will end up with a monopoly on the graphics market, and without competition, technology does not advance nearly as quickly, prices are not competitive, and one corporation can control the supply for the entire graphics industry.

    If nvidia were to port PhysX to OpenCL or DirectCompute and make the core open-source, they would be demonstrating that their intentions are honorable, and that they desire advancement in the field of GPU-accelerated physics as much as AMD seems to. By keeping the PhysX core source proprietary, they are attempting to maneuver into a position that allows them to influence AMD's graphics development.
    chizow wrote:
    The only people hurt by AMD's unwillingness to cooperate are AMD's customers. That should be plainly obvious. It's the point of this original news bit, and it's further emphasized by the fact that AMD users are the ones searching high and low for workarounds and hacks to get PhysX working, even at reduced capabilities, on their hardware. And you want to claim this is a better solution than AMD just sucking it up, making whatever apologies are required, and supporting PhysX natively on their own hardware? They can't win for losing, really, since once again, Nvidia will support every API and physics middleware AMD can, with the addition of PhysX. The only way AMD can hope to reach parity and feature-match Nvidia is if PhysX does die and becomes irrelevant, which makes the motivation for their rhetoric quite obvious and transparent.

    AMD is not trying to force people to run PhysX through a hack or workaround. They are trying to promote open-source solutions for Physics acceleration that are not necessarily limited to Bullet physics. By endorsing OpenCL and DirectX11 compliance, AMD is showing that whatever standard for GPU physics is used, they're willing to treat their competition fairly, as it will also be accelerated just the same on nvidia's hardware. Nvidia, on the other hand, is attempting to force other companies to license hardware PhysX acceleration from them, and by doing so, place themselves in a position to control the entire graphics industry. That will hurt consumers in the long run for all the reasons I stated above. You're confusing what's better for consumers right now vs. what's better for consumers in the big picture. It'd be great if we could run PhysX on AMD hardware, but if they have to license it from nvidia to accomplish that, then AMD's graphics cards are simply going to disappear from the market one day. Nvidia will revoke the license and give themselves the monopoly on the GPU.
    chizow wrote:
    Sorry if I come off as a bit short or irritated in some of my replies; the basic argument you seem to be presenting, that AMD, their customers, and everyone else are somehow better off for them not supporting PhysX, just makes no sense whatsoever.

    I don't know what you'd be irritated about. Personally, I enjoy a good debate. :)
  • edited October 2009
    lordbean wrote:
    Please give me a link to where I can purchase the source code for the PhysX core for $50,000. All I can find is http://developer.nvidia.com/object/physx.html and it's not there.
    http://http.download.nvidia.com/developer/cuda/seminar/TDCI_PhysX.pdf

    Slide 23 of 26. I've seen it elsewhere, but this is one of the many links that popped up by simply searching "PhysX source $50". Glancing over some of your other replies, this takes care of most of the irrelevance.
    lordbean wrote:
    Tell that to DirectX. Last I checked, anyone can develop on it, and it's free.
    You could substitute PhysX for DirectX in that sentence verbatim; thanks for proving my point. The difference, of course, is that Microsoft would just laugh at you if you asked for their source code.
    lordbean wrote:
    Again, please give me a link to the page that allows me to buy the PhysX source code for $50,000. I can't find it. The PhysX source code is VERY relevant here - it is the heart of PhysX itself, and without it, how is AMD supposed to port it?
    Again, the source code is irrelevant unless AMD was planning to integrate it into an engine or their own API, but we all know they don't have anything on either of those fronts. They're not porting anything because they have nothing to port; all they need to do is write a driver for their hardware for the existing API and piggy-back on the efforts of others. Again, from an earlier post, the progression from hardware to software would look like:

    Hardware > Driver (HAL) > API (HLPL) > Middleware (GUI-based tools)

    Everything else is in place; all AMD needs to do is make their hardware compatible with the prevailing API and everything else should take care of itself. In this case, the PhysX source would only be relevant for those dealing directly with the API and middleware, and AMD has no control over any of those factors, so the source is irrelevant.
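    A rough sketch of that layering, with every class and method name invented purely for illustration: the middleware codes against the API alone, and the API dispatches to whichever vendor driver has registered itself, so a vendor joins the ecosystem simply by shipping a driver.

```python
# Hypothetical sketch of the Hardware > Driver (HAL) > API > Middleware
# progression. Every name here is invented for illustration only.

class PhysicsAPI:
    """API layer: middleware codes against this, never against a driver."""

    def __init__(self):
        self._driver = None

    def register_driver(self, driver):
        # The one piece a vendor must supply: a driver for this API.
        self._driver = driver

    def step(self, positions, velocities, dt):
        return self._driver.step(positions, velocities, dt)


class VendorDriver:
    """Driver layer: stands in for one vendor's hardware backend."""

    def step(self, positions, velocities, dt):
        # Trivial CPU integration as a stand-in for real GPU work.
        return [p + v * dt for p, v in zip(positions, velocities)]


# "Middleware" layer: sees only the API, works with any registered driver.
api = PhysicsAPI()
api.register_driver(VendorDriver())
print(api.step([0.0, 1.0], [2.0, 2.0], 0.5))  # [1.0, 2.0]
```

    Swapping in a different vendor's driver changes nothing above the `register_driver` call, which is the whole point.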
    lordbean wrote:
    Currently, nvidia does support it (they purchased it from Ageia) and AMD does not, yet I do not see widespread acceptance of PhysX. If AMD supported PhysX tomorrow, it would mean they had licensed the proprietary source code from nvidia to port it, meaning their hardware becomes subject to nvidia's scrutiny to make sure their port is fully compatible with the original code. As well, there's nothing stopping nvidia from revoking the license with AMD once PhysX does become widely accepted.
    PhysX is the #1 SDK in production, but GPU acceleration is obviously going to be an uphill struggle due to the strong industry focus on consoles. As such, features like GPU PhysX are going to be value-add for the PC only, which means any dev house would have to weigh the pros and cons of adding such a feature. Most will not, due to the additional development cost, which is why Nvidia's TWIMTBP program helps with development, but as each title that uses PhysX releases, that increases the chance more will in the future as the tech gains momentum.

    As for the nonsensical scenario you brought up about revoking licenses and hardware coming under scrutiny... I think you'd have to cross that bridge of AMD supporting PhysX before donning the tin-foil hat. Not to mention licensing agreements are put in place for just that reason: to prevent any arbitrary revocation of license.
    lordbean wrote:
    Between two corporations as major as AMD and nvidia, the source for something as potentially groundbreaking as PhysX is not going to be something nvidia just hands out. If they DO in fact offer to sell it with no future obligations to other companies, it would be for a lot more than $50,000. I'd be surprised if they do this at all, and if they do, it's much more likely that the $50,000 is for a license to access the PhysX source code, not to purchase it.
    Yes, apparently AMD is going to come through with a groundbreaking revelation that redefines what we know about physics if they ever get their hands on that source code. Much easier than just providing a driver for their hardware.
    lordbean wrote:
    If AMD has abandoned support for CAL, it was only in preparation for DirectCompute, which is logical because DirectCompute is a vendor-neutral standard. It's completely logical to abandon the proprietary standard in favor of the open source one... it represents that the company is thinking of its consumers and willing to play fair with its competition, two things nvidia seems to be unwilling to do at the moment. In all likelihood, they're grasping at straws trying to find a way to slow AMD, because AMD released their DX11 GPUs a full 6 months before the nvidia DX11 GPUs are even expected.
    That might actually make sense if DirectCompute were open source in any way, but it's not; it's a standard proprietary to Microsoft. Nvidia is simply supporting all standards and APIs unapologetically, without disingenuous excuses. Some companies produce solutions, some produce excuses.
    lordbean wrote:
    CAL is irrelevant. So is CUDA. That's the whole point of DirectCompute. Saying that CUDA is good and should be a standard and that CAL is somehow bad, when both standards are proprietary and made obsolete by DirectX 11, looks dangerously close to fanboy-ism. Also, I'd like to point out that AMD has full OpenCL support on Mac OS X Snow Leopard, and they are using that OS to develop Bullet as per Dave Hoff. Anything that works in OS X isn't far from being released in Windows, and at the moment, it doesn't even matter that OpenCL isn't fully supported in Windows yet. There are no applications out there to really take advantage of it yet.
    Actually, comparing CAL to CUDA doesn't look like fanboy-ism; it reeks of it. Honestly, I haven't seen CAL mentioned in at least a year when referring to GPGPU. Go to any GPGPU developer forum, compare the two, and see how people who actually use the tools react to the comparison.

    CUDA isn't dead; in fact, it's growing, adapting and improving. It was never just an API; it was Nvidia's top-to-bottom GPGPU compute architecture. The progression I detailed is ALL encompassed within CUDA, from the hardware to the middleware. What's next for CUDA? How about integration into one of the most popular production IDEs with Visual Studio, which will provide a one-stop debugger and compiler for Nvidia hardware for all relevant APIs: CUDA C, OpenCL, DirectCompute, Direct3D, and OpenGL.

    http://developer.nvidia.com/object/nexus.html

    Once again, Nvidia is providing solutions for their hardware that interested parties actually want and will put to good use. What's AMD doing? Oh right, doing another interview criticizing Nvidia...
    lordbean wrote:
    Intel holds by far the dominant share of the CPU market. To be competitive, AMD must match intel's standards, or else they will simply cease to be a CPU developer. Just because you can execute C++ code on your GPU does not mean that the CPU is going to be made obsolete. At the very worst, some hardware techniques used in building GPUs may be adopted for CPUs. The core of the computer will remain the CPU for a long time to come.
    Yeah, it's the heart of Nvidia's heterogeneous computing model, except they don't plan to have much use for more than a few x86 CPU cores if all goes according to their plans.
    lordbean wrote:
    You misinterpreted my statement, although that's possibly my fault for not being clear enough. What I meant was that the CPU will not be replaced or become an obsolete part of the computer system as we know it for a long time to come.
    No, it won't be obsolete, but its role will be vastly diminished to the point it's just a tiny beating heart in a vastly undersized body feeding a massive GPU for a brain (again, according to Nvidia's heterogeneous computing model).
    lordbean wrote:
    AMD is willing to cooperate on an open source standard, which PhysX currently is not. It is proprietary code, owned by nvidia. If AMD has to license the source code from nvidia in order to cooperate on it, their hardware becomes subject to nvidia's scrutiny and guidelines to ensure compliance. If I were AMD's board of directors, this is not a position I'd want to be in.
    Again, we've already been down this path of hypocrisy. AMD clearly has no problem supporting closed and proprietary standards (see: DirectX, DirectCompute and Havok). These lies only go so far, especially when AMD embarrassingly backpedaled on their endorsement of Havok, probably after coming to the revelation that Intel has no interest whatsoever in providing GPU acceleration to anyone before Larrabee is ready (and maybe never for competitors).

    lordbean wrote:
    AMD's graphics become a moot point if PhysX becomes the dominant standard while the code is still proprietary to nvidia. Let's say that two years down the road, 50% of all new games use PhysX as part of their execution. That means that in order to be a viable choice for 50% of the gaming sector, AMD has to support PhysX. To support PhysX, they have to license it from nvidia, and that means their graphics cards must be inspected and passed by nvidia before they can be released to the market. There's nothing stopping nvidia from preventing the release of AMD graphics cards which are more powerful than nvidia graphics cards, or even simply revoking AMD's PhysX license once PhysX is established as a standard. When 50%+ of current games use PhysX, this would be devastating to the point of knocking AMD right off the graphics market.
    Again, completely unsubstantiated fearmongering. Not only do you put far too much importance on physics over 3D capability in driving sales; all AMD would have to do to avoid any such fictitious release roadblock would be to simply pull PhysX support and launch their product. Or, more than likely, just launch their product, claim support for said feature, then play catch-up at some point down the line with hastily applied driver updates.
    lordbean wrote:
    The point of an open source standard is that no matter whose development tools you use, the finished product will run fine on any hardware. Both AMD and nvidia support OpenCL, which means that it really doesn't matter whose graphics card it is developed on. AMD also fully supports OpenCL on Mac OS X Snow Leopard, meaning that support on Windows cannot be far from release. As for the SDK, why would AMD bother to make one when nvidia has already created a good one? I've already made this point at the beginning of the paragraph.
    No, the point of open source is that you have some control over the content of the standard, so that you're not at an arbitrarily imposed disadvantage. After that, provided they're running the same API, the faster hardware wins.
    lordbean wrote:
    If neither one are currently cross-compatible without needing to be ported, why bother to port them at all? Both DirectX 11 and OpenCL offer alternatives that are free to develop on and guaranteed to run on any modern graphics hardware.
    For CUDA the reason is obvious: people actually used it, so the API libraries have built up and evolved over time with numerous apps developed for it. Porting CUDA and its runtimes to OpenCL and DirectCompute makes a lot more sense than re-inventing the wheel. In fact, with tools like Nexus, it shouldn't be much more difficult than simply debugging and recompiling the output to whatever target API you choose.
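    To illustrate how mechanical that kind of port can be at the kernel level, here's a toy sketch. The SAXPY kernel below is a made-up example, not code from any SDK; for a kernel this simple, a handful of textual substitutions is enough to turn the CUDA C version into its OpenCL C counterpart.

```python
# Toy illustration: a hypothetical CUDA C SAXPY kernel and the handful
# of mechanical rewrites that turn it into OpenCL C.

cuda_kernel = """\
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}
"""

# The substitutions a port of this particular kernel needs, in order:
substitutions = [
    ("__global__ void", "__kernel void"),           # entry-point qualifier
    ("const float *x", "__global const float *x"),  # address-space qualifiers
    ("float *y", "__global float *y"),
    ("blockIdx.x * blockDim.x + threadIdx.x", "get_global_id(0)"),  # index
]

opencl_kernel = cuda_kernel
for cuda_form, opencl_form in substitutions:
    opencl_kernel = opencl_kernel.replace(cuda_form, opencl_form)

print(opencl_kernel)
```

    A real port obviously involves more than string replacement (host-side memory management and launch code differ between the APIs), but for straightforward kernels the device-side changes really are close to this mechanical.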

    As for the guarantee... no, there is no guarantee, especially if a vendor doesn't provide a driver for their own hardware for that API. <--- ***hint: important point, hint***
    lordbean wrote:
    I've made this point repeatedly, but I guess it bears making again. Nvidia owns the source code to PhysX. AMD cannot simply write their own driver for it without licensing the source to port it, which makes them responsible to nvidia both for producing a port that works properly with all the commands the original code had, and also for producing graphics hardware compliant to the standard. It is simply not a position that would be good for AMD to be in.
    You can make the point again and it wouldn't make you any more correct. They don't need the PhysX source code; all they need to do is write a driver for their own hardware for the API backend needed for PhysX acceleration, CUDA. Would they potentially need Nvidia's support to write that driver? Maybe, but again, there's no need to cross that hypothetical bridge because it's clearly a question that has not been posed. That's the problem: it's the question that's never asked.
    lordbean wrote:
    Nvidia did not create PhysX. They purchased Ageia and all Ageia's assets so that they could make the PhysX code proprietary to themselves. Their hope was that PhysX would become widely accepted off the bat (due to the fact it was the only hardware-accelerated physics solution at the time), and that they'd be able to license it to other GPU making companies. This would create a situation where any other GPU-producing company would need nvidia's stamp of acceptance on any hardware design which would accelerate PhysX. Essentially, this would give nvidia a monopoly on the graphics market, because they could cut support for PhysX from all other companies on a whim.
    Yes, I'm well aware of the history and already detailed it in an earlier post. They acquired the IP to further their GPGPU efforts and sell their hardware; all that other irrelevance you wrote makes no sense. The only way PhysX gains traction is if its install base increases, meaning more hardware supports it. In the long term, I see Nvidia leveraging this technology to get their hardware into the next-gen consoles, at which point the technology will be truly ubiquitous in games and we'll see it integrated seamlessly across multiple platforms.
    lordbean wrote:
    I'm not familiar with the timeline of my argument? I'm fully aware of how PhysX came to be, and also how it came to be accelerated by nvidia hardware only. Ageia developed the software, and they even developed an expansion card designed to work in tandem with the graphics card to accelerate PhysX. Nvidia did not develop the PhysX technology, they purchased Ageia and converted the PhysX source into an application that would only be accelerated by nvidia's GPUs. If nvidia truly cared about the consumer and the future adoption of PhysX as a standard, they would have provided the necessary code and tools to other hardware corporations free of charge and without obligation, or else would not have purchased Ageia in the first place. By trying to keep PhysX as a proprietary standard, they are hurting their consumers in the long-run because if PhysX becomes the accepted standard, nvidia will end up with a monopoly on the graphics market, and without competition, technology does not advance nearly as quickly, prices are not competitive, and one corporation can control the supply for the entire graphics industry.
    Yes, it's obvious you're unfamiliar with the timeline when you ask why Nvidia chose to develop on CUDA, or why they didn't just conveniently port PhysX to APIs and standards that didn't exist yet. I'm not even going to bother with some of the common fallacies I glanced over in there.

    lordbean wrote:
    If nvidia were to port PhysX to OpenCL or DirectCompute and make the core open-source, they would be demonstrating that their intentions are honorable, and that they desire advancement in the field of GPU-accelerated physics as much as AMD seems to. By keeping the PhysX core source proprietary, they are attempting to maneuver into a position that allows them to influence AMD's graphics development.
    Why would Nvidia need to demonstrate anything when all of their actions up until the driver lockout have been more than honorable? Again, follow the progressions in the links from my first post and compare what Nvidia said and did to what AMD said and did. Nvidia holds all the cards now; they have no reason to offer PhysX on a platter in light of the negative publicity generated by AMD in the press.
    lordbean wrote:
    AMD is not trying to force people to run PhysX through a hack or workaround. They are trying to promote open-source solutions for Physics acceleration that are not necessarily limited to Bullet physics. By endorsing OpenCL and DirectX11 compliance, AMD is showing that whatever standard for GPU physics is used, they're willing to treat their competition fairly, as it will also be accelerated just the same on nvidia's hardware. Nvidia, on the other hand, is attempting to force other companies to license hardware PhysX acceleration from them, and by doing so, place themselves in a position to control the entire graphics industry. That will hurt consumers in the long run for all the reasons I stated above. You're confusing what's better for consumers right now vs. what's better for consumers in the big picture. It'd be great if we could run PhysX on AMD hardware, but if they have to license it from nvidia to accomplish that, then AMD's graphics cards are simply going to disappear from the market one day. Nvidia will revoke the license and give themselves the monopoly on the GPU.
    Again, you seem to be ignoring what's occurring in reality. You also conveniently ignore the fact that Nvidia supports the same standards, and currently better than AMD, mind you. You keep claiming Nvidia's introduction of innovative, value-add features somehow hurts the consumer, but that's clearly false, as it benefits anyone who purchases their hardware, which is, again, the overwhelming majority by any metric.
    lordbean wrote:
    I don't know what you'd be irritated about. Personally, I enjoy a good debate. :)
    I'm sure I'd enjoy it more if I were actually debating someone familiar with the material being discussed. As it is now, it's more like fact-checking and correcting a lot of assumptions and misinformation.