AMD comments on NVIDIA dropping PhysX support when ATI hardware is present

Could shenanigans be afoot with NVIDIA’s PhysX technology?

Just days ago it was reported on Geeks3D that those who use an ATI GPU for graphics while offloading physics to a PhysX-enabled NVIDIA GPU will no longer be able to use such a setup. From NVIDIA driver version 186.x onward, whenever ATI hardware is detected in a system, PhysX is disabled. If you want PhysX, you’re going to have to use all-NVIDIA hardware to get it. PhysX technology has been the property of NVIDIA for some time now, so you may be wondering why this is a problem.

Well, about a year ago, a group of developers began working on porting the CUDA-based PhysX API to work on ATI’s Radeon cards. NVIDIA gave those developers official support. Also, up until the ForceWare 186.x driver, NVIDIA cards would happily handle physics no matter what GPU was powering graphics. NVIDIA has now disallowed both solutions, and that suggests they’ve had a big change of heart.

The whole fiasco began on the NGOHQ forums, where user darthcyclonis discovered PhysX was being disabled and emailed NVIDIA about it. He received the following response:

Hello JC,

I’ll explain why this function was disabled.

PhysX is an open software standard. Any company can freely develop hardware or software that supports it. NVIDIA supports GPU accelerated PhysX on NVIDIA GPUs while using NVIDIA GPUs for graphics. NVIDIA performs extensive Engineering, Development, and QA work that makes PhysX a great experience for customers. For a variety of reasons–some development expense, some quality assurance, and some business reasons–NVIDIA will not support GPU accelerated PhysX with NVIDIA GPUs while GPU rendering is happening on non-NVIDIA GPUs. I’m sorry for any inconvenience caused but I hope you can understand.

Best Regards,
Troy
NVIDIA Customer Care

This is a very intriguing situation. It surely will not affect a great number of users, as the particular hardware setup in question is rather uncommon. It does raise the question, though: why has NVIDIA changed their stance? One also cannot help but notice the timing of this change and how close it came to the release of ATI’s Radeon HD 5870 GPU.

To get ATI’s perspective on the situation, we contacted Neal Robison, Director of Global ISV for AMD. He had this to say in regard to the timing of the change:

The timing isn’t really important. Instead, I would point out that there’s a real discrepancy between what NVIDIA says, and what they do. They “say” that they are looking out for gamers’ best interests. However, decisions like this are the exact opposite of gamers’ best interests.

We asked Neal for his thoughts on the loss of PhysX support on ATI hardware, and why any of it even mattered, considering that DirectX 11’s DirectCompute API offers a vendor-neutral path for physics offloading. Neal had this to say in response:

I think decent physics can be a good thing for gamers… but it should be for ALL gamers. When it’s available for everyone, game developers will be able to make physics an integral part of gameplay, rather than just extra eye candy. This requires a physics solution built on industry standards. That’s why DirectX 11 is such a great inflection point for our industry–DirectCompute allows game physics that can be enjoyed by everyone. There are several initiatives (some open-source) that will deliver awesome GPU-based physics for everyone, using either DirectCompute or OpenCL. You’re absolutely right–industry standards will make any proprietary standard irrelevant.

At press time, NVIDIA could not be reached for comment. We will update this story as soon as we hear their side of it.

Ed note: AMD has put its money where Neal Robison’s mouth is. The company announced today that it is backing development of the open source Bullet Physics engine. Based on the vendor-neutral OpenCL and DirectCompute languages, it can run on any recent Radeon or GeForce product. We have requested commentary from NVIDIA on this initiative as well. Oh, how the plot thickens!



Comments

  1. Garg
    If there were bugs when using PhysX with a combined NVIDIA/ATI setup, I can understand them not wanting to spend time fixing them. But that's only if they existed. Otherwise, it's shens.
  2. Cliff_Forster
    On one hand, I can't blame Nvidia for wanting to hold on to one of its differentiating features; on the other... DirectCompute makes sense for gamers long term, as simple as that.
  3. Michael Boardman Or we can just forget "legacy" support for the proprietary PhysX and go with OpenCL (and DirectCompute-11) which is (reportedly) supported by both GPU makers.

    AMD Announces Open Physics Initiative - http://tinyurl.com/yczjtkt

    OpenCL GPU Computing Support on NVIDIA - http://tinyurl.com/ybtdrd2

    Nvidia said it will support DirectCompute-11 later this year.
  4. Michael Boardman ATI and Nvidia both support OpenCL and DirectCompute (currently ATI only, Nvidia support en route)
  5. Thrax
    Hi, Michael. Sorry that your comment got eaten... Our CMS flagged the links as spam and stuck it in an approval queue.

    We at Icrontic certainly agree that moving away from PhysX (and ATI's Havok) is the way to go. But for now, PhysX is the leading and most widely adopted physics engine gamers can have. NVIDIA gave that ability to ATI gamers for quite a long time--endorsed it, even. The decision to take it away goes against their policy of supporting gamers. There's just no way they can spin that it's in a gamer's best interests.

    We have sent NVIDIA questions to see what they will do in response to AMD's Bullet Physics.
  6. foolkiller
    Well, this has made the decision for me. I have two ATI 4870s, and was thinking of an Nvidia card for physics/CUDA. Now I'll just stick with ATI; they can have all my money.
  7. mirage
    I don't believe even a bit that either side considers the "benefit" of consumers/gamers before their profit. What NVIDIA is doing by dropping PhysX support when ATI cards are present is completely normal for a corporation, and AMD is not a non-profit consumer organization either, although AMD has lately been playing the role of consumer advocate against Intel and Nvidia. Maybe they reached nirvana after years of no-profit suffering. The cost of PhysX is presumably a large amount for Nvidia and it is (potentially) a great feature. If ATI wants it, they should either pay the license or make the alternative technology available. I have not forgotten the days these two companies were fixing prices. If there is any drama, both ATI and Nvidia deserve the slap.
    http://www.shacknews.com/onearticle.x/54969
  8. chizow
    Hey all, just wanted to clear up a few things and add a few of my own comments:

    @Michael Boardman:

    -Nvidia supports OpenCL and released both their driver and SDK some months ago. In fact, AMD's new physics engine du jour, Bullet Physics, was developed on Nvidia hardware using Nvidia's OpenCL SDK.

    -Nvidia also supports DirectCompute; the only app AnandTech could find to benchmark DirectCompute in their 5870 review was Nvidia's Ocean Demo. You'll notice it's fully functional on Nvidia's DX10 hardware as well.

    Source: http://www.fudzilla.com/content/view/15642/34/
    http://www.anandtech.com/video/showdoc.aspx?i=3643&p=8



    @Thrax:

    -Havok is a wholly owned subsidiary of Intel; AMD/ATI has no ownership interest or any influence over the IP at all. Intel simply used them as a pawn and strung them along with the hopes of an OpenCL Havok client (which was demonstrated at GDC, but is still vaporware as of today).

    At the time Havok was being tossed about as an alternative to PhysX, AMD claimed they did not want to support PhysX because they were opposed to "closed and proprietary" standards, yet they threw their support behind Havok, which is, you guessed it, a closed and proprietary standard. The announcement that they're now backing Bullet Physics is clearly backpedaling on their part and a slap in the face to those who supported them.

    I think Icrontic and the various news outlets are asking the wrong questions. The real question should be, why doesn't AMD just support PhysX as it was originally offered? Why don't they just write a CUDA driver for their hardware? It seems this is just another case of AMD preferring to sit on their hands and reap the benefits of the hard work of others.

    PhysX is just middleware with a bunch of backend APIs to help it interface with various hardware. At some point I expect it to be ported to DirectCompute or OpenCL, but instead of waiting for these emerging technologies, Nvidia got it to work with CUDA.

    References and backstory:
    http://www.bit-tech.net/news/hardware/2008/12/11/amd-exec-says-physx-will-die/1
    http://www.tgdaily.com/content/view/38392/118/
    http://www.extremetech.com/article2/0,2845,2324555,00.asp
    http://www.bit-tech.net/custompc/news/602205/nvidia-offers-physx-support-to-amd--ati.html

    In any case, it sounds like you guys have an open dialogue with AMD; I just wish people would ask AMD the real questions instead of just taking their excuses at face value. We have one company producing real solutions and we have another company fighting a war of words in the press without anything to show on their end. Who really cares about physics and PC gamers?

    I don't work for Nvidia, no direct ties to them, just another PC gamer for most of my life hoping hardware physics adoption happens sooner rather than later.
  9. Thrax
    Hi, Chizow.

    Yes, your post was getting flagged as spam by our CMS. I deleted the duplicate and approved your original with the links.

    Thank you for commenting. :)

    We agree that the industry needs to start firing on all cylinders when it comes to physics, but I don't think ATI adopting PhysX is the right answer. I don't even think it's the right answer using the OpenCL port. That's a port which would still have licensing restrictions, because NVIDIA ultimately owns the IP.

    I think the right answer is AMD's Bullet Physics. It's open source, there are no licensing attachments, and it uses open, vendor-neutral languages.

    Of course, all of this assumes that I care about physics at all. I've still never played a physics-enabled title, and the whole thing has been around for how long now?
  10. lordbean
    Thrax wrote:
    Of course, all of this assumes that I care about physics at all. I've still never played a physics-enabled title, and the whole thing has been around for how long now?

    ^ This.

    Of course, part of the problem may be that since the standard is proprietary to nvidia (and only really supported in an ATI configuration if you have an nvidia card in there somewhere), most developers have not wanted to seriously pursue physics because it means they are cutting themselves off from half of the gaming market by default. Once the open source standards get rolling, we may see more (and more interesting) physics-based titles.
  11. chizow
    Thrax wrote:
    Hi, Chizow.

    Yes, your post was getting flagged as spam by our CMS. I deleted the duplicate and approved your original with the links.

    Thank you for commenting. :)
    Thanks for the welcome. Found my way to the proper subforum so I can format my reply more elegantly. :smiles:
    Thrax wrote:
    We agree that the industry needs to start firing on all cylinders when it comes to physics, but I don't think ATI adopting PhysX is the right answer. I don't even think it's the right answer using the OpenCL port. That's a port which would still have licensing restrictions, because NVIDIA ultimately owns the IP.
    Well again, that raises the question: what is the right answer? The ever-changing, ever-evolving, mercurial answer that produces nothing? Or the tried-and-true one that has kept progressing and provided solutions?

    Every iteration of PhysX is technically a "port". It originated as a software physics API on x86 and has been ported to just about every platform with a different backend solver. It supports all three major consoles (360, PS3, Wii) and even the iPhone. It is truly ubiquitous and the SDK is easily ported to whatever backend hardware is necessary.

    Nvidia did the same when it bought Ageia and ported PhysX to CUDA to run on their hardware. Also, while PhysX is proprietary, it is free to use and there is no licensing fee. There's only a small fee if you want the source code. However, that's not to say Nvidia wouldn't require AMD to adhere to a PhysX logo certification program if they claimed support for PhysX. Personally, I think this is where the hang-up is. AMD simply does not want to advance a competitor's IP and validate their technology by using it, and they certainly don't want to promote Nvidia's brand on their own product.
    Thrax wrote:
    I think the right answer is AMD's Bullet Physics. It's open source, there are no licensing attachments, and it uses open, vendor-neutral languages.
    It's fine that it's open source, but what does it matter if the tools aren't very good and no one uses it? One of the benefits of the established middleware SDKs (both PhysX and Havok) is that they already have significant exposure to various dev houses and are tried and proven with today's most popular game engines. With flexible SDK tools that solve for both software and hardware, it's much easier for these companies to integrate additional HW effects into their games if they already use the SDK for software effects, especially if they're working on cross-platform titles.

    Havok Titles
    PhysX Titles
    Bullet Titles - That's all I found, had to follow a wiki ref.....
    Thrax wrote:
    Of course, all of this assumes that I care about physics at all. I've still never played a physics-enabled title, and the whole thing has been around for how long now?
    A bit of background on PhysX: before Ageia renamed it, it was called NovodeX, which was a pretty popular software physics SDK at the time. Not quite on the level of Havok's adoption rate, but a solid library with good industry backing. Then Ageia introduced their hardware PPU, which was a PCIe add-in card. The problem was, it was expensive and there was no software support for its additional HW capabilities. Estimates were it sold around 100K units total; with such a small HW install-base it should come as no surprise the software support was weak.

    About 20 months ago, Nvidia bought Ageia and stated they planned to leverage CUDA and their programmable architecture to accelerate PhysX on their GPUs. Didn't hear anything for about 6 months...and then suddenly, Nvidia announced PhysX support on all of their 8-series and higher GPUs with a new PhysX driver in July-Aug 2008. Overnight, they went from 100K units to around 70 million hardware units capable of accelerating HW PhysX effects. And the best part about it was that if you were into PC games, you probably already owned the necessary hardware, a fast graphics card.

    So the answer is: PhysX has existed in its current incarnation for about 14 months. We didn't see any titles at all for about 6 months, just some backports of the handful that were developed for the original Ageia PPU. Then they started trickling in: Mirror's Edge, Cryostasis... but the big one, the killer app for PhysX we were waiting for, landed a few weeks ago: Batman: Arkham Asylum. It should be no surprise that the grumblings and anti-PhysX rhetoric started up again only recently; my guess is it has a lot to do with Batman's launch along with the 5870 debut (as more Nvidia cards may be taking a back seat).

    I doubt this one video will change your opinion of PhysX overnight, but I think it does a good job of showing why some feel so strongly about hardware physics and would like to see it adopted more quickly, regardless of middleware provider and politics.

    Batman Arkham Asylum Video Comparison

    In any case, if you get another chance to correspond with AMD, I'd urge you to ask them the simple question. Why not just write a CUDA driver for PhysX and run it on their hardware natively? No excuses about closed and proprietary, no need to worry about driver lock-outs, no need to shuffle and push proprietary or vaporware standards, just 100% of the discrete GPU industry supporting the only production SDK capable of accelerating HW physics effects. Their excuses and reasoning for not supporting PhysX have been less than genuine, imo, especially in light of their recent decision to back Bullet instead of Havok after all their press clippings on the matter months ago.
  12. Mike Green The only thing that matters to me in a standard is how "free" it is. OpenCL seems to be the highest qualifier, so I'll be buying an AMD card next.
  13. lordbean
    Technically speaking, if the OpenCL standard is truly "free" and widely adopted, it won't matter whether you buy AMD or nvidia. They will both be able to run it.
  14. UPSLynx
    chizow - one would suggest that one of the greater strengths behind Bullet is the fact that it will couple with DirectCompute and DirectX11. As the DirectCompute API builds in popularity, PhysX will most likely become irrelevant.

    For gamers ready to rock DirectX11 as soon as possible, this is great news.

    And no matter how available PhysX may be, CUDA is still very much an NVIDIA language. Regardless of its ease of use, it's still simple PR for AMD to not want to use it. Bullet and OpenCL are both completely open, and in that respect, every gamer can use them and no one needs to bicker about PR.
  15. chizow
    UPSLynx wrote:
    chizow - one would suggest that one of the greater strengths behind Bullet is the fact that it will couple with DirectCompute and DirectX11. As the DirectCompute API builds in popularity, PhysX will most likely become irrelevant.

    For gamers ready to rock DirectX11 as soon as possible, this is great news.

    And no matter how available PhysX may be, CUDA is still very much an NVIDIA language. Regardless of its ease of use, it's still simple PR for AMD to not want to use it. Bullet and OpenCL are both completely open, and in that respect, every gamer can use them and no one needs to bicker about PR.
    The problem here is it seems you have fallen into the trap of confusing the issue with all of this PR bickering like many other tech sites. AMD wants you to think PhysX is somehow tied to a closed and proprietary standard, when that's clearly not the case. As part of their reasoning, they seem to think that if the API becomes irrelevant, the middleware will as well. This is clearly false.

    I'll draw some clear parallels here to help illustrate the issue better. DirectX and OpenGL are the current standard APIs for game development on the PC, but games are rarely developed with only the limited tools provided by these API SDKs. On top of each you have numerous game engines like UE3.0, Gamebryo, CryEngine, Anvil, and Rage; content creation and modeling software like 3DS Max, Maya, Granny3D, etc.; sound engines designed with the likes of Miles3D, OpenAL, FMOD, etc. For physics you have Havok, PhysX, Velocity, etc. All of this middleware sits on top of the API and is portable as necessary.

    So how are games developed with these same middleware tools then used to create console games, given those consoles use different APIs like XNA, libgcm, PSGL, or whatever other API is native to those consoles? Simple: the middleware is compiled for those different APIs, so clearly changing the API output would not make the middleware irrelevant in any way, provided the middleware is flexible enough to target numerous APIs.
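    To make that concrete, here's a rough C++ sketch of my own (class names like IPhysicsBackend and PhysicsWorld are made up for illustration; this is not from the actual PhysX or Havok source) of how a middleware front-end can stay the same while the solver backend underneath is swapped out:

    // Toy illustration: the game talks to the middleware, and the middleware talks to
    // whichever backend (CPU, CUDA, OpenCL, DirectCompute...) happens to be present.
    struct RigidBody { float mass; float pos[3]; float vel[3]; };

    class IPhysicsBackend {
    public:
        virtual ~IPhysicsBackend() {}
        virtual void simulate(RigidBody* bodies, int count, float dt) = 0;
    };

    class CpuBackend : public IPhysicsBackend {   // a GPU backend would be just another subclass
    public:
        void simulate(RigidBody* bodies, int count, float dt) {
            for (int i = 0; i < count; ++i) {
                bodies[i].vel[1] -= 9.81f * dt;                 // toy gravity
                for (int k = 0; k < 3; ++k)
                    bodies[i].pos[k] += bodies[i].vel[k] * dt;  // integrate position
            }
        }
    };

    class PhysicsWorld {                            // the game-facing "SDK" surface
        IPhysicsBackend* backend_;
    public:
        explicit PhysicsWorld(IPhysicsBackend* b) : backend_(b) {}
        void step(RigidBody* bodies, int count, float dt) { backend_->simulate(bodies, count, dt); }
    };

    Swap in a CUDA, OpenCL, or DirectCompute subclass and the game-facing PhysicsWorld interface, and therefore the games built on it, doesn't have to change at all.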

    So to bring this full circle, even if CUDA became irrelevant, that would have no direct impact on PhysX's future viability as middleware, given that it's still supported on all three major consoles, the iPhone, and the PC with both x86 and GeForce hardware. The only "negative" outcome of CUDA becoming irrelevant is that Nvidia might be compelled to port PhysX to OpenCL or DirectCompute.

    AMD is simply trying to confuse the issue by linking PhysX to CUDA when in reality CUDA support was only necessary for lack of other viable options. DirectCompute and OpenCL change that situation, but now the question is: what benefit does Nvidia get from porting PhysX, given AMD's very public attacks on their technology?

    Also, as an FYI, Bullet doesn't currently support DX11 or DirectCompute; it's currently being developed and supported on GeForce hardware with OpenCL only. Given there's currently no OpenCL hardware support at all in production titles, it just demonstrates how empty promises of "open and free standards" are when no one actually makes use of them, preferring supported and established standards instead.
  16. chizow
    lordbean wrote:
    Technically speaking, if the OpenCL standard is truly "free" and widely adopted, it won't matter whether you buy AMD or nvidia. They will both be able to run it.
    Mike Green wrote:
    The only thing that matters to me in a standard is how "free" it is. OpenCL seems to be the highest qualifier, so I'll be buying an AMD card next.

    PhysX is free to use, guys, and so is CUDA. It's not open in the sense that Nvidia still steers its development and future, but if you look at the history and development of OpenCL, you'll see Nvidia has a strong hand in shaping it. In fact, Nvidia's VP of Embedded Content chairs the standard, and OpenCL has been widely described as a less user-friendly, low-level version of CUDA's C debugger and compiler.

    I think it's important to distinguish between an API and the tools necessary to make use of that API. Simply creating the standard API means very little if no one is going to try and "reinvent the wheel" to make use of it, especially when superior tools are already in production.

    From a progression standpoint it looks like:

    Hardware > Driver (HAL) > API (HLPL) > Middleware (GUI-based tools)
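    To put that progression in concrete (if grossly simplified) terms, here's a toy C++ example of my own; every name in it is invented purely for illustration. The point is only that each layer wraps the one below it, and game code only ever programs against the top layer:

    #include <cstdio>
    #include <vector>

    // "Driver"/HAL level: a raw routine working directly on memory.
    static void hal_integrate(float* pos, const float* vel, int n, float dt) {
        for (int i = 0; i < n; ++i) pos[i] += vel[i] * dt;
    }

    // "API" level (the CUDA/OpenCL/DirectCompute slot): generic work dispatch.
    static void api_dispatch(float* pos, const float* vel, int n, float dt) {
        hal_integrate(pos, vel, n, dt);   // a real API would enqueue this on the GPU
    }

    // "Middleware" level (the PhysX/Havok/Bullet slot): the game-facing SDK call.
    struct PhysicsScene {
        std::vector<float> pos, vel;
        void simulate(float dt) { api_dispatch(pos.data(), vel.data(), (int)pos.size(), dt); }
    };

    int main() {
        PhysicsScene scene;
        scene.pos = {0.0f, 0.0f};
        scene.vel = {1.0f, 2.0f};
        scene.simulate(0.016f);           // this is the only layer game code sees
        std::printf("%.3f %.3f\n", scene.pos[0], scene.pos[1]);
        return 0;
    }

    Replace the middle layer and nothing above it has to change, which is why the tools sitting on top of the API matter more to developers than the API underneath.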
  17. Cliff_Forster
    Hey Chizow,

    Your responses are obviously very well thought through. It would please me if you would consider registering at the forum and sticking around.

    I won't pretend to be an expert on the programming end, but what I will say as a gamer and consumer is that I would prefer open standards that allow me greater flexibility in the hardware and OS I choose to run while not limiting my experience from title to title. I realize it's a pipe dream, but I long for a world where I can buy a game, load it on whatever OS I choose, and run it on any vendor's hardware with a reasonable spec for that title.
  18. Ryder
    psst Cliff, he is registered... that is why it doesn't say "guest" under his name :)
  19. Thrax
    Chizow:

    Your CUDA language vs. PhysX API argument is well-understood here, but it's an argument that goes beyond the original thesis: NVIDIA retracted support for PhysX modes it previously supported, and gamers were the victims.

    This is what people are taking issue with. Green reversed endorsement on two PhysX+ATI scenarios, one explicit, one implicit, and gamers suffered for it.

    Now, could the PhysX API be ported to another language? Yes. Could it just as easily run on OpenCL or DirectCompute? Yes. Has there been any momentum for the company to do so? No. And understandably so, because they have a lock on the physics market with PhysX.

    Why endorse an open source language for your API when you can just say you support open languages, and continue on with CUDA?

    That's what is pissing people off.
  20. chizow
    Cliff_Forster wrote:
    Hey Chizow,

    Your responses are obviously very well thought through. It would please me if you would consider registering at the forum and sticking around.

    I won't pretend to be an expert on the programming end, but what I will say as a gamer and consumer is that I would prefer open standards that allow me greater flexibility in the hardware and OS I choose to run while not limiting my experience from title to title. I realize it's a pipe dream, but I long for a world where I can buy a game, load it on whatever OS I choose, and run it on any vendor's hardware with a reasonable spec for that title.

    Hi Cliff,

    Thanks for the kind words. I did register after my first reply, mainly so I could format my replies properly, but this site certainly has content that fits my interests, so thanks for the welcome!

    I'm no programming guru or designer either, just a lifelong PC gamer with the same viewpoint as yours. I've only taken exceptional interest in physics development because I saw its potential very early on. To me, the moment Nvidia solved the hardware install-base problem overnight by enabling acceleration on all of their DX10+ unified shader hardware, I felt GPU accelerated physics had a chance to be the next big thing on the PC.

    Personally I don't care too much what standard or middleware is used, I only care that the hardware I choose and buy supports it. While "open and free" is certainly desirable from some viewpoints, I'm not so sure that translates so well to commercial production environments like video games. These guys are typically on tight budgets and deadlines, so if it isn't easy or familiar, the likelihood it's implemented decreases significantly.
  21. chizow
    Thrax wrote:
    Chizow:

    Your CUDA language vs. PhysX API argument is well-understood here, but it's an argument that goes beyond the original thesis: NVIDIA retracted support for PhysX modes it previously supported, and gamers were the victims.

    This is what people are taking issue with. Green reversed endorsement on two PhysX+ATI scenarios, one explicit, one implicit, and gamers suffered for it.

    Now, could the PhysX API be ported to another language? Yes. Could it just as easily run on OpenCL or DirectCompute? Yes. Has there been any momentum for the company to do so? No. And understandably so, because they have a lock on the physics market with PhysX.

    Why endorse an open source language for your API when you can just say you support open languages, and continue on with CUDA?

    That's what is pissing people off.
    Hi Thrax, I understand the initial scope of this news bit and don't necessarily agree with those decisions by Nvidia. My point is that all of these problems would be moot if AMD had taken a proactive and cooperative stance from the outset and supported PhysX on their hardware natively.

    This, again, leads to the question that is never asked, or perhaps never answered:
    Why doesn't AMD just write a CUDA driver for their own hardware, submit to whatever PhysX logo program Nvidia requires, and properly support PhysX on their hardware natively?
    The problem I have with AMD's stance throughout is that they say one thing and do another, yet produce nothing in the way of solutions. Going back to earlier comments, my only interest is to have all hardware support as many middleware/API solutions as possible to increase the adoption rate of GPU physics.

    But politics are clearly getting in the way of this. As some of the earlier links I posted allude to, PhysX support was offered to AMD. They declined, and rather than producing their own solution, they offered disingenuous reasons as to why they declined and continued to downplay and criticize PhysX.

    That's what bothers me the most: instead of helping to increase enhanced physics implementation in games, they've clearly done all they can to impede progress while failing to produce results of their own, and in the process they've shifted their position numerous times. First it was Brook+, then it was Stream, then it was OpenCL and Havok, then it was DirectCompute, now it's Bullet and OpenCL? And still, no functional product from AMD, just more excuses and rhetoric.

    As for the original news bit, ATI + Nvidia PhysX was always hit or miss; it's only possible in XP and Win 7 due to WDDM driver restrictions. ATI + Ageia PPU still works, but you have to go back to the last standalone driver. Both solutions still work with pre-190 drivers, and while I don't necessarily agree with a lock-out approach, I think it's understandable from a QA and PR standpoint, given AMD clearly isn't willing to reciprocate support, coupled with some of their inflammatory comments in the press.
  22. Cliff_Forster
    chizow,

    I think you could understand why AMD would not be too fond of having to license its physics implementation from NVIDIA.

    Just like NVIDIA is not in love with having to re-up their agreement to produce Intel chipsets based on new architectures.

    If you're the guy buying the license, it's never good for you; you only do it because it's required to be in the game. In the case of game physics, I think PhysX has not established itself as a dominating standard to the point where AMD has to cave and license it in order to satisfy its customers. At the same time, nobody can expect NVIDIA to take a differentiating feature that they paid good money to implement and just give it to AMD for free.

    It's just business.
  23. lordbean
    Cliff_Forster wrote:
    In the case of game physics, I think PhysX has not established itself as a dominating standard to the point where AMD has to cave and license it in order to satisfy its customers.

    I think this is an excellent point, and I both agree with it and disagree with it on some levels. My belief is that the reason PhysX has not established itself as a dominating standard is directly because of AMD's share of the graphics market. Because AMD GPUs do not run PhysX (thereby forcing the work to fall back to the CPU), running a PhysX-based (or even enabled) game on an AMD graphics configuration is not feasible. This means that all PhysX titles are doomed from birth to sell only to nvidia owners. This is a rather convincing reason not to use PhysX in your title, from a developer's standpoint.

    If AMD had bitten the bullet and licensed the tech from nvidia, we would perhaps have seen a higher adoption rate of PhysX. In this regard, the blame can be made to rest on AMD's shoulders. However, consider it from AMD's business standpoint. If they license the technology from nvidia, they are bound to adhere to nvidia's standards for PhysX compliance, which puts AMD in a severely weakened position. If they do not license the tech from nvidia, PhysX does not become widely adopted, and as a result AMD stays free to develop their own technology as they see fit, and likely holds a greater market share with larger profits as a result.

    The way I see it, AMD had no choice in the matter. In order to remain competitive in the graphics industry, they could not reasonably agree to license and be bound to adhere to standards set by their #1 competitor. It would not have made sense from a business standpoint.
  24. ardichoke
    Add to all this the fact that if AMD licenses PhysX and the standard gets widely adopted, there's nothing stopping Nvidia from denying AMD the licensing rights to PhysX in the future. Once PhysX got its claws into the majority of games, Nvidia could simply refuse to renew AMD's license and bingo, now only Nvidia cards can run all the new games efficiently. This would devastate AMD/ATI and give Nvidia a near monopoly on high end gaming cards. With Bullet released under an open source license, nobody can pull a power play like that to screw over a competitor. This makes other graphics manufacturers more likely to adopt the standard, which in turn makes game creators more likely to use it.
  25. chizow
    Cliff_Forster wrote:
    chizow,

    I think you could understand why AMD would not be too fond of having to license its physics implementation from NVIDIA.

    Just like NVIDIA is not in love with having to re-up their agreement to produce Intel chipsets based on new architectures.

    If you're the guy buying the license, it's never good for you; you only do it because it's required to be in the game. In the case of game physics, I think PhysX has not established itself as a dominating standard to the point where AMD has to cave and license it in order to satisfy its customers. At the same time, nobody can expect NVIDIA to take a differentiating feature that they paid good money to implement and just give it to AMD for free.

    It's just business.
    Well, the licensing issue is actually unclear. For example, neither PhysX nor Havok currently requires any licensing for use on either AMD or Intel x86 CPUs; it's just software that runs on the CPU. I've never seen any indication from either Nvidia or AMD claiming PhysX licensing would require any fee. Again, PhysX as an SDK is free to use and develop games with.

    The only thing that might be required is participation in a logo certification program that would involve QA and certification. I think this is the problem AMD has with PhysX support on their hardware more than anything, even if it costs them nothing out of pocket. This is similar to the agreements Nvidia has with Microsoft for the 360 and Sony for the PS3 that give all of their developers access to PhysX for free.

    As for being the dominant standard: again, PhysX doesn't need to become the dominant standard for its absence to be a glaring omission from AMD's feature list. I posted some links earlier to titles that have used PhysX and Havok over the years with their software SDKs that show they are both used in a wide variety of quality titles on all platforms, PC and consoles. Bullet clearly isn't even close to that. Even on Bullet's site you can see Havok and PhysX are the clear leaders when it comes to physics SDKs:

    http://www.bulletphysics.com/wordpress/?p=88

    So really, we should be looking at end-game outcomes. How does not supporting PhysX benefit AMD when it comes to positioning their products? You can clearly see, based on the links and points already discussed, that Nvidia is by far the best poised to take advantage of all standards and SDKs:
    • Nvidia - Supports all relevant APIs on the PC: OpenCL, DirectCompute, and CUDA. 100% support for PhysX, 100% support for Bullet, and they have taken all the necessary steps to support Havok if/when Intel rolls it out.
    • AMD - Supports OpenCL and DirectCompute, although their efforts are lagging behind Nvidia's from both a driver and SDK standpoint (neither are available for public consumption as of now). Presumed to support Bullet and Havok if/when Intel rolls it out. Software support will remain the same on their CPUs.
    • Intel - Support for DirectCompute and OpenCL, but are also pushing their own GPGPU compute API with Ct for Larrabee. They are part of the OpenCL standard consortium, but I think their participation is given begrudgingly as OpenCL is a clear threat to x86's dominance on Wintel platforms. Don't expect to see a hardware accelerated Havok client before Larrabee launches (looks like AMD finally figured this out, hence throwing their support to Bullet now), and even then I would not be shocked if Intel limits it to Ct. Software support will remain the same on their CPUs.
    So what you have is this: Nvidia is poised to support every API and middleware out there, meaning their hardware will have the highest probability of supporting everything. AMD hardware will only be able to support Bullet, maybe Havok, and definitely not PhysX. At best they support 2/3 of the prevailing middleware, and the only one that's guaranteed is the weakest one. Intel's plans are unknown, but they'll definitely support Havok in hardware, and with Larrabee's x86 roots, it may be able to adequately accelerate PhysX CPU runtimes if it doesn't support CUDA natively.

    But like I said, no one is asking the right questions. Point-blank: ask Nvidia if they are expecting money to change hands in a licensing agreement with AMD for PhysX. No one is asking AMD why they don't just support PhysX on their hardware natively, why they don't just write a CUDA driver for their hardware, and whether it would actually cost them anything other than man hours and support time. Most indicators from press bits indicate there is no licensing fee involved and that there is no technical reason preventing AMD from supporting PhysX on their hardware natively.

    Here's a relatively obscure piece where the author actually gets the closest to asking the right questions, and you can see the AMD rep, Dave Hoff, is none too pleased:

    http://www.hitechlegion.com/our-news/1411-bullet-physics-ati-sdk-for-gpu-and-open-cl-part-3?start=1

    And another interview with Dave Hoff where he skirts all around the issue and confirms much of what I've already stated about there being no technical reason preventing AMD from supporting PhysX natively. Hoff is exceptionally qualified to answer these questions btw, unlike most of the other talking heads, as he played a large part in CUDA development at Nvidia:

    http://www.rage3d.com/previews/video/ati_hd5870_performance_preview/index.php?p=4
    Dave Hoff wrote:
    I can't imagine any commercial software company who has tried a GPGPU programming model previously from either graphics company to not switch to OpenCL or Direct Compute. It's very easy to move from CUDA to either of these....

    While it would be easy to convert PhysX from CUDA to OpenCL so it could run on our cards, and I've offered our assistance to do this, I can just imagine PhysX will remain that one app that doesn't ever become open.

    And that's really the crux of it. He confirms it would be easy to port CUDA to OpenCL and DirectCompute (something Nvidia said all along, while also saying CUDA made sense because OpenCL and DC did not exist yet). What he doesn't answer is why AMD didn't simply write their own CUDA driver, which should/would be just as easy to port to emergent APIs like OpenCL and DirectCompute as necessary, as his own statements confirm. Instead AMD chose to throw out disingenuous excuses and reasons why they wouldn't support PhysX, and in the meantime they've not only backtracked on previous comments, they still have nothing to show for their efforts.

    Sorry, lots to read I know, but the gist of it should read: AMD's decisions have basically led to a lose-lose situation for all involved, their own customers and those who actually want to see GPU physics succeed, because AMD has done just about everything they can to hinder development. Nvidia on the other hand has not only produced their own solutions, but is clearly chairing and pioneering all emergent technologies. Everything they've done is proactive with regard to GPU physics, and that extends far beyond their own proprietary technology: their efforts chairing OpenCL, providing the working SDKs Bullet was built upon, producing the only functional DirectCompute demos months in advance, etc.
  26. chizow
    lordbean wrote:
    My belief is that the reason PhysX has not established itself as a dominating standard is directly because of AMD's share of the graphics market. Because AMD GPUs do not run PhysX (thereby forcing the work to fall back to the CPU), running a PhysX-based (or even enabled) game on an AMD graphics configuration is not feasible.
    As a rather significant point in this discussion, Nvidia dominates the discrete graphics card market by a 2:1 ratio using just about any metric. Market share, check (check Peddie or Mercury Research #s). User surveys, check (check the Steam survey or FutureMark/Yougamers). Revenue, check (50% of Nvidia's 4 bill = 2 bill compared to ATI's 1 billion total). TSMC wafer supply, check (check iSuppli figures). It's fluctuated somewhat over the last 3-4 years, but hasn't gone below ~60% for Nvidia, 30% for ATI, and was as high as 70% for Nvidia and 20% for ATI at G80/G92's peak in 2007.

    So clearly Nvidia has the vast majority of relevant hardware in this case, which is DX10+ unified programmable shader hardware (required for GPGPU physics acceleration). Not only do they have the majority of hardware, their Developer Relations program TWIMTBP is also better funded so it allows them to work with more titles to implement hardware specific features, like PhysX. You might have seen this value-add program has also come under fire from AMD recently.....

    Anyways, AMD and Intel CPUs will still be able to run limited software-accelerated PhysX effects; it'll just be the same as the console versions and similar to any other physics effects seen on the PC over the last 7-8 years. But without GPU acceleration, the effects will undoubtedly be inferior to the GPU accelerated version. In the past this hasn't been so much of a problem because the titles with advanced PhysX were met with mixed reception; the most recent ruckus with regard to PhysX is undoubtedly a result of PhysX's "killer app", Batman: Arkham Asylum. Suddenly all those who were indifferent or openly critical of PhysX care enough to make a massive stink over it now......
  27. lordbean
    chizow wrote:
    Well, the licensing issue is actually unclear. For example, neither PhysX nor Havok currently requires any licensing for use on either AMD or Intel x86 CPUs; it's just software that runs on the CPU. I've never seen any indication from either Nvidia or AMD claiming PhysX licensing would require any fee. Again, PhysX as an SDK is free to use and develop games with.

    The SDK is free, this is 100% true. However, the SDK only makes it easier to write code that interacts with the PhysX core, whose source is owned by nvidia, and has to be licensed (even if it's licensed for free) by other companies in order to run at the hardware level.
    chizow wrote:
    The only thing that might be required is participation in a logo certification program that would involve QA and certification. I think this is the problem AMD has with PhysX support on their hardware more than anything, even if it costs them nothing out of pocket. This is similar to the agreements Nvidia has with Microsoft for the 360 and Sony for the PS3 that give all of their developers access to PhysX for free.

    It's not difficult to see why AMD would have a problem with this. By licensing PhysX from nvidia, even if it costs nothing out of AMD's pocket, they are agreeing that their tech has to be OKed by nvidia before it can even go on the market. This puts AMD in a compromised position at best (nvidia would then be able to scrutinize all their graphics technology), and at worst, nvidia could use it as a way to undermine AMD's profits through unfair QA reports, and AMD couldn't do a thing about it because they'd have signed the license.
    chizow wrote:
    As for being the dominant standard: again, PhysX doesn't need to become the dominant standard for its absence to be a glaring omission from AMD's feature list. I posted some links earlier to titles that have used PhysX and Havok over the years with their software SDKs that show they are both used in a wide variety of quality titles on all platforms, PC and consoles. Bullet clearly isn't even close to that. Even on Bullet's site you can see Havok and PhysX are the clear leaders when it comes to physics SDKs:

    http://www.bulletphysics.com/wordpress/?p=88

    It may be a glaring omission from AMD's graphics cards, but the fault in this does not lie with AMD. By keeping the source code for PhysX proprietary (and don't try to argue that it's not; the SDK may be free, but that's not the same thing as PhysX's actual source), nvidia will effectively have a stranglehold on AMD in the graphics sector if PhysX becomes the dominant standard. Pretty clear why nvidia is trying to establish PhysX in this regard.
    chizow wrote:
    So really, we should be looking at end-game outcomes. How does not supporting PhysX benefit AMD when it comes to positioning their products? You can clearly see, based on the links and points already discussed, that Nvidia is by far the best poised to take advantage of all standards and SDKs:

    Not supporting PhysX benefits AMD because supporting it would mean licensing it from nvidia, and essentially being completely under nvidia's control if PhysX becomes the dominant standard.
    chizow wrote:
    • Nvidia - Supports all relevant APIs on the PC: OpenCL, DirectCompute, and CUDA. 100% support for PhysX, 100% support for Bullet, and they have taken all the necessary steps to support Havok if/when Intel rolls it out.
    • AMD - Supports OpenCL and DirectCompute, although their efforts are lagging behind Nvidia's from both a driver and SDK standpoint (neither are available for public consumption as of now). Presumed to support Bullet and Havok if/when Intel rolls it out. Software support will remain the same on their CPUs.
    • Intel - Support for DirectCompute and OpenCL, but are also pushing their own GPGPU compute API with Ct for Larrabee. They are part of the OpenCL standard consortium, but I think their participation is given begrudgingly as OpenCL is a clear threat to x86's dominance on Wintel platforms. Don't expect to see a hardware accelerated Havok client before Larrabee launches (looks like AMD finally figured this out, hence throwing their support to Bullet now), and even then I would not be shocked if Intel limits it to Ct. Software support will remain the same on their CPUs.

    Nvidia: You conveniently forgot to mention CAL in this list, AMD's solution to GPU computing. CUDA was built by nvidia, for nvidia graphics hardware. AMD's CAL was built by AMD, for AMD's graphics hardware. CUDA was never intended to be an industry standard, nor was CAL.
    AMD: Supports OpenCL, DirectCompute, and CAL. See above point. Also, if AMD is "lagging behind Nvidia's from both a driver and SDK standpoint (neither are available for public consumption as of now)", please explain why the Radeon HD5850 and HD5870 are available right now on store shelves with full DX11 support, and nvidia's GT300 is not due until at least Q1 2010.
    Intel: Honestly doesn't matter in the scope of this argument. If OpenCL really is a threat to the x86 platform, why is AMD still trying hard to compete in the CPU market? The central processing unit is not going anywhere in a hurry.
    chizow wrote:
    So what you have is this: Nvidia is poised to support every API and middleware out there, meaning their hardware will have the highest probability of supporting everything. AMD hardware will only be able to support Bullet, maybe Havok, and definitely not PhysX. At best they support 2/3 of the prevailing middleware, and the only one that's guaranteed is the weakest one. Intel's plans are unknown, but they'll definitely support Havok in hardware, and with Larrabee's x86 roots, it may be able to adequately accelerate PhysX CPU runtimes if it doesn't support CUDA natively.

    Of course nvidia supports PhysX and AMD doesn't. If PhysX becomes the standard, AMD's graphics become moot, since nvidia will have control over AMD's products. AMD is pushing for a fully vendor-neutral solution (Bullet), which nvidia realizes would be suicide not to support, as it is based on DirectX 11 code, and thus required for DirectX 11 compliance. Both Havok and PhysX will become obsolete once the open-source standard is in place, and nvidia is trying their best to stop that from happening because PhysX would make them fully dominant in the graphics sector.
    chizow wrote:
    But like I said, no one is asking the right questions. Point-blank: ask Nvidia if they are expecting money to change hands in a licensing agreement with AMD for PhysX. No one is asking AMD why they don't just support PhysX on their hardware natively, why they don't just write a CUDA driver for their hardware, and whether it would actually cost them anything other than man hours and support time. Most indicators from press bits indicate there is no licensing fee involved and that there is no technical reason preventing AMD from supporting PhysX on their hardware natively.

    CAL is AMD's CUDA driver. As I mentioned above, CUDA was developed by nvidia specifically for nvidia's hardware. It was never intended to be used on any other GPUs. DirectCompute is the next generation of the idea behind CAL and CUDA - it is the vendor-neutral implementation of GPU processing. Asking AMD why they don't license PhysX from nvidia would be like asking a banker why he won't just give you his money rather than loan it to you. Supporting PhysX natively on AMD GPUs would require that AMD license it from nvidia, which as I mentioned more than once above, places AMD in a very compromised position.
    chizow wrote:
    Here's a relatively obscure piece where the author actually gets the closest to asking the right questions, and you can see the AMD rep, Dave Hoff, is none too pleased:

    http://www.hitechlegion.com/our-news/1411-bullet-physics-ati-sdk-for-gpu-and-open-cl-part-3?start=1

    And another interview with Dave Hoff where he skirts all around the issue and confirms much of what I've already stated about there being no technical reason preventing AMD from supporting PhysX natively. Hoff is exceptionally qualified to answer these questions btw, unlike most of the other talking heads, as he played a large part in CUDA development at Nvidia:

    http://www.rage3d.com/previews/video/ati_hd5870_performance_preview/index.php?p=4
    Dave Hoff wrote:
    The contrast should be fairly stark here: we're intentionally enabling physics to run on all platforms - this is all about developer adoption. Of course we're confident enough in our ability to bring compelling new GPUs to market that we don't need to try to lock anyone in. As I mentioned last week, if the competition altered their drivers to not work with our Radeon HD 4800 series cards, I can't imagine them embracing our huge new leap with the HD 5800 series.

    While it would be easy to convert PhysX from CUDA to OpenCL so it could run on our cards, and I've offered our assistance to do this, I can just imagine PhysX will remain that one app that doesn't ever become open.

    You missed a vital bit of context here. Read this again carefully, between the lines, and this is what he was actually saying:

    "While it would be easy for nvidia to convert PhysX from CUDA to OpenCL so it could run on our cards, and I've offered our assistance to do this, I can just imagine PhysX will remain that one app that doesn't ever become open."

    He's pointing out that nvidia could change PhysX to OpenCL quite easily, yet they don't want to do it. The reasons for this are the points I've already made above - if PhysX is adopted as the standard while it is still proprietary to nvidia, they get a stranglehold on AMD. Case in point.

    chizow wrote:
    And that's really the crux of it. He confirms it would be easy to port CUDA to OpenCL and DirectCompute (something Nvidia said all along, while also saying CUDA made sense because OpenCL and DC did not exist yet). What he doesn't answer is why AMD didn't simply write their own CUDA driver, which should/would be just as easy to port to emergent APIs like OpenCL and DirectCompute as necessary, as his own statements confirm. Instead AMD chose to throw out disingenuous excuses and reasons why they wouldn't support PhysX, and in the meantime they've not only backtracked on previous comments, they still have nothing to show for their efforts.

    Sorry, lots to read I know, but the gist of it should read: AMD's decisions have basically led to a lose-lose situation for all involved, their own customers and those who actually want to see GPU physics succeed, because AMD has done just about everything they can to hinder development. Nvidia on the other hand has not only produced their own solutions, but is clearly chairing and pioneering all emergent technologies. Everything they've done is proactive with regard to GPU physics, and that extends far beyond their own proprietary technology: their efforts chairing OpenCL, providing the working SDKs Bullet was built upon, producing the only functional DirectCompute demos months in advance, etc.

    Nvidia's decision to keep PhysX's source proprietary instead of porting it to OpenCL is what's hurting customers, not AMD's decisions. AMD's decisions have been entirely in the interest of keeping themselves afloat in the market. They are not trying to hurt their customers at all; in fact they are trying to help their customers by pushing for open source physics (which, if I may point out, will not hurt nvidia or its customers, since Bullet and OpenCL are vendor-neutral and will thus run on nvidia hardware). In fact, it is nvidia that is hurting their customers by keeping PhysX proprietary instead of porting it to OpenCL. Nvidia is holding out on the off chance that AMD caves and licenses PhysX from them, but it's not going to happen. If AMD caves on this, they hand the graphics sector to nvidia on a golden platter.
  28. chizow
    lordbean wrote:
    The SDK is free, this is 100% true. However, the SDK only makes it easier to write code that interacts with the PhysX core, whose source is owned by nvidia, and has to be licensed (even if it's licensed for free) by other companies in order to run at the hardware level.
    Yep, the source is $50K as I'm pretty sure I've already mentioned, but you would only need the source if you wanted to integrate it into your own tools or recompile it for whatever reason, perhaps for your target hardware.
    lordbean wrote:
    It's not difficult to see why AMD would have a problem with this. By licensing PhysX from nvidia, even if it costs nothing out of AMD's pocket, they are agreeing that their tech has to be OKed by nvidia before it can even go on the market. This puts AMD in a compromised position at best (nvidia would then be able to scrutinize all their graphics technology), and at worst, nvidia could use it as a way to undermine AMD's profits through unfair QA reports, and AMD couldn't do a thing about it because they'd have signed the license.
    Oh please, are these tin-foil hat what-ifs better alternatives to the situation bearing out in the media now? Where AMD is crying about their hardware being unsupported for a solution they never supported? Their users being forced to download hacks off torrents for workarounds to get a half-baked PhysX solution to run on their CPU instead of natively on their GPU? Needing to rely on the power of community workarounds to get a patch that intercepts driver calls so ATI 3D + Nvidia PhysX work in the same ecosystem? That is what "open" and "free" get you: unsupported garbage solutions and workarounds.
    lordbean wrote:
    It may be a glaring omission from AMD's graphics cards, but the fault in this does not lie with AMD. By keeping the source code for PhysX proprietary (and don't try to argue that it's not; the SDK may be free, but that's not the same thing as PhysX's actual source), nvidia will effectively have a stranglehold on AMD in the graphics sector if PhysX becomes the dominant standard. Pretty clear why nvidia is trying to establish PhysX in this regard.
    Again, who does the fault lie with? I've already stated the source costs $50K, a point I've never concealed and which is stated very clearly on their own FAQ page. Also, I'm not sure how the PhysX source is even relevant, given they'd have to write a driver for their own hardware before worrying about optimizing the source code to make sure Nvidia isn't cheating them somehow.

    Also, the only way Nvidia would gain any advantage or stranglehold over AMD is if PhysX gained widespread adoption and Nvidia supported it and AMD hardware did NOT. Which is the case now. If AMD and Nvidia both support PhysX, how is it an advantage for either of them? It's not.
    lordbean wrote:
    Not supporting PhysX benefits AMD because supporting it would mean licensing it from nvidia, and essentially being completely under nvidia's control if PhysX becomes the dominant standard.
    Yes, PhysX is under Nvidia's control, but AMD's hardware and drivers are under their control. If there are concerns about artificial limitation of performance, then obviously AMD could invest in purchasing the source code and optimizing it as necessary, but the cost of paranoia in this case would be $50K.
    lordbean wrote:
    Nvidia: You conveniently forgot to mention CAL in this list, AMD's solution to GPU computing. CUDA was built by nvidia, for nvidia graphics hardware. AMD's CAL was built by AMD, for AMD's graphics hardware. CUDA was never intended to be an industry standard, nor was CAL.
    CAL is dead; it never even took off. AMD abandoned support of it along with Stream and Brook+ and whatever other synonym for "unsupported" they decided to go with. If you can find an app for CAL/Stream/Brook+ outside of Stanford's campus I'd be genuinely shocked. Pretty sure one or both of those Dave Hoff interviews clearly get the point across when he ducks the question and says they're going with OpenCL and DirectCompute instead.
    lordbean wrote:
    AMD: Supports OpenCL, DirectCompute, and CAL. See above point. Also, if AMD is "lagging behind Nvidia's from both a driver and SDK standpoint (neither are available for public consumption as of now)", please explain why the Radeon HD5850 and HD5870 are available right now on store shelves with full DX11 support, and nvidia's GT300 is not due until at least Q1 2010.
    Again, CAL is irrelevant. For the bolded portion, you can't download an official OpenCL driver or SDK for AMD as of today; they're still going through validation. I'm not sure what the point of bringing up the 5850 and 5870 was when I was clearly referring to software and drivers, not hardware, not to mention Nvidia's DX10 hardware currently supports OpenCL and DirectCompute just fine.
    lordbean wrote:
    Intel: Honestly doesn't matter in the scope of this argument. If OpenCL really is a threat to the x86 platform, why is AMD still trying hard to compete in the CPU market? The central processing unit is not going anywhere in a hurry.
    AMD's x86 license is constantly being challenged by Intel and is perpetually tied up in litigation. Obviously their interests are tied to both the CPU and GPU, so they will continue to compete as best they can in both markets, but if they saw a chance to break away and reduce their reliance on x86 and its exorbitant licensing fees, I'm sure they'd be interested.

    As for the bolded portion, I guess that can be interpreted as a double entendre perhaps? I'd say how much of a hurry depends on what computing requirements you're looking at, as traditional CPUs aren't always the best tool for every task. I'd agree, though, the CPU isn't going anywhere in a hurry from a performance standpoint; its gains have stagnated in recent years to the point it's only adhering to Moore's Law in size and transistor count.
    lordbean wrote:
    Of course nvidia supports PhysX and AMD doesn't. If PhysX becomes the standard, AMD's graphics become moot, since nvidia will have control over AMD's products. AMD is pushing for a fully vendor-neutral solution (Bullet), which nvidia realizes would be suicide not to support, as it is based on DirectX 11 code, and thus required for DirectX 11 compliance. Both Havok and PhysX will become obsolete once the open-source standard is in place, and nvidia is trying their best to stop that from happening because PhysX would make them fully dominant in the graphics sector.
    I'm about to go into a Prisoner's Dilemma explanation but I really shouldn't have to; this is just common sense. If AMD and Nvidia both cooperate and support PhysX, they both win. There's only a loser if one or the other, or both, defect. AMD has chosen to defect. Obviously it's in everyone's best interest to cooperate; the outcomes from cooperating are clearly better than the outcomes if one or both defect.

    Also, I'm not sure how you come to the conclusion AMD's graphics become irrelevant if PhysX becomes dominant. Not only does that put far too much emphasis on physics as a selling point for graphics cards, but that scenario would only have a chance of occurring IF AMD chooses not to support PhysX. If they supported it, they wouldn't be at any disadvantage with regard to that feature outside of the conspiracy theories about Nvidia somehow crippling AMD hardware performance...

    As for Bullet......I've already provided the link. It's OpenCL based, not DirectCompute, and Bullet's OpenCL implementation was wholly developed on Nvidia hardware using Nvidia's OpenCL SDK. I'm sure they would've considered using AMD's hardware and SDK, but it would've been difficult to do so with no AMD OpenCL driver and no AMD OpenCL SDK. Again, the relevant quotes are in that TechLegion link.
    lordbean wrote:
    CAL is AMD's CUDA driver. As I mentioned above, CUDA was developed by nvidia specifically for nvidia's hardware. It was never intended to be used on any other GPUs. DirectCompute is the next generation of the idea behind CAL and CUDA - it is the vendor-neutral implementation of GPU processing. Asking AMD why they don't license PhysX from nvidia would be like asking a banker why he won't just give you his money rather than loan it to you. Supporting PhysX natively on AMD GPUs would require that AMD license it from nvidia, which as I mentioned more than once above, places AMD in a very compromised position.
    No, CAL is AMD's low-level C-based language that compiles down to their own machine code. It's not directly compatible with CUDA, which is Nvidia's C-based language and architecture, with a low-level API and driver that mirror CAL's. AMD would need to write a driver for their hardware to work with CUDA, just as Nvidia would need to write a driver for their hardware to work with CAL. It's obviously possible given they're all C-based languages and have already been ported to other C-based languages like OpenCL. The difference is, there's a reason to write a CUDA driver; there is none, and never has been one, for Stream/CAL/Brook+.

    http://developer.amd.com/gpu/ATIStreamSDK/Pages/default.aspx
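    Just to make the "all C-based" point concrete, here's roughly what a trivial vector-add kernel looks like in CUDA C, with the OpenCL C equivalent sketched in a comment underneath. I'm typing this from memory purely as an illustration, not lifting it from either SDK's samples, but it should give you an idea of how mechanical a port between these languages really is:

        // CUDA C: launch one thread per array element
        __global__ void vecAdd(const float* a, const float* b, float* c, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
            if (i < n)
                c[i] = a[i] + b[i];
        }

        /* The same kernel in OpenCL C is close to a line-for-line translation:

           __kernel void vecAdd(__global const float* a, __global const float* b,
                                __global float* c, int n)
           {
               int i = get_global_id(0);                    // global work-item index
               if (i < n)
                   c[i] = a[i] + b[i];
           }
        */

    The host-side setup differs more than the kernels do, but the kernel languages themselves are close cousins, which is why moving between them is mostly grunt work rather than research.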

    lordbean wrote:
    You missed a vital bit of context here. Read this again carefully, between the lines, and this is what he was actually saying:

    "While it would be easy for nvidia to convert PhysX from CUDA to OpenCL so it could run on our cards, and I've offered our assistance to do this, I can just imagine PhysX will remain that one app that doesn't ever become open."

    He's pointing out that nvidia could change PhysX to OpenCL quite easily, yet they don't want to do it. The reasons for this are the points I've already made above - if PhysX is adopted as the standard while it is still proprietary to nvidia, they get a stranglehold on AMD. Case in point.
    No, I didn't miss that bit of context. My point was that it's obviously possible to go one way from CUDA to OpenCL/DirectCompute, and he acknowledges it as trivial; the question that never gets asked is why they don't get off their asses and write a driver for their own hardware instead of expecting everything to be handed to them for free while doing nothing. It's the typical freeloader problem; AMD simply doesn't realize free-rider economics don't apply well to technology. They could just as easily write a CUDA driver from their OpenCL driver today, just as they could've 14 months ago. It's trivial, remember?

    It's not a simple question of Nvidia not wanting to. As I've already mentioned: 1) there was no OpenCL when PhysX rolled out, so they created the API for lack of an alternative, and 2) they have no incentive to port it, and even less reason to do so now when the only beneficiary is a freeloading AMD that has done nothing but publicly criticize their technology.
    lordbean wrote:
    Nvidia's decision to keep PhysX's source proprietary instead of porting it to OpenCL is what's hurting customers, not AMD's decisions. AMD's decisions have been entirely in the interest of AMD keeping themselves afloat in the market. They are not trying to hurt their customers at all, in fact they are trying to help their customers by pushing for open source physics (which, if I may point out, will not hurt nvidia or its customers since bullet and openCL are vendor-neutral, and will thus run on nvidia hardware). In fact, it is nvidia that is hurting their customers by keeping PhysX proprietary instead of porting it to OpenCL. Nvidia is holding out in the off chance that AMD caves and licenses PhysX from them, but it's not going to happen. If AMD caves on this, they hand the graphics sector to nvidia on a golden platter.
    It's obvious you're not familiar with the timeline and specifics of this argument, so I'll leave it at this: Nvidia isn't hurting customers. They've developed a value-add feature that did not exist previously and provided it for free to all customers using their hardware, which, again, is the overwhelming majority of the discrete GPU market. They've also fully supported all emerging API and physics technologies and have stated they would do so from the start.

    The only people hurt by AMD's unwillingness to cooperate are AMD's customers. That should be plainly obvious. It's the point of this original news bit, and it's further emphasized by the fact AMD users are the ones searching high and low for workarounds and hacks to get PhysX working, even at reduced capabilities, on their hardware. And you want to claim this is a better solution than AMD just sucking it up, making whatever apologies are required, and supporting PhysX natively on their own hardware? They can't win for losing, really, since once again Nvidia will support every API and physics middleware AMD can, with the addition of PhysX. The only way AMD can hope to reach parity and match Nvidia feature-for-feature is if PhysX dies and becomes irrelevant, which makes the motivation for their rhetoric quite obvious and transparent.

    Sorry if I come off as a bit short or irritated in some of my replies, but the basic argument you seem to be presenting, that AMD, their customers, and everyone else are somehow better off for AMD not supporting PhysX, just makes no sense whatsoever.
  29. lordbean
    lordbean
    chizow wrote:
    Yep, the source is $50K as I'm pretty sure I've already mentioned, but you would only need the source if you wanted to integrate it into your own tools or recompile it for whatever reason, perhaps for specific target hardware.

    Please give me a link to where I can purchase the source code for the PhysX core for $50,000. All I can find is http://developer.nvidia.com/object/physx.html and it's not there.
    chizow wrote:
    Oh please, these tin-foil hat what-ifs are better alternatives to the situation playing out in the media now? Where AMD is crying about their hardware being unsupported for a solution they never supported? Their users being forced to download hacks off torrents for workarounds to get a half-baked PhysX solution to run on their CPU instead of natively on their GPU? Needing to rely on the power of the community and yet more workarounds to get a patch that intercepts driver calls so ATI 3D + Nvidia PhysX work in the same ecosystem? That is what "open" and "free" get you, unsupported garbage solutions and workarounds.

    Tell that to DirectX. Last I checked, anyone can develop on it, and it's free.

    chizow wrote:
    Again, who does the fault lie with? I've already stated the source costs $50K, a point I've never concealed and one that's stated very clearly on Nvidia's own FAQ page. Also, I'm not sure how the PhysX source is even relevant, given they'd have to write a driver for their own hardware before worrying about combing through the source code to make sure Nvidia isn't cheating them somehow.

    Again, please give me a link to the page that allows me to buy the PhysX source code for $50,000. I can't find it. The PhysX source code is VERY relevant here - it is the heart of PhysX itself, and without it, how is AMD supposed to port it?
    chizow wrote:
    Also, the only way Nvidia would gain any advantage or stranglehold over AMD is if PhysX gained widespread adoption and Nvidia supported it while AMD hardware did NOT, which is the case now. If AMD and Nvidia both support PhysX, how is it an advantage for either of them? It's not.

    Currently, nvidia does support it (they purchased it from Ageia) and AMD does not, yet I do not see widespread acceptance of PhysX. If AMD supported PhysX tomorrow, then they licensed the proprietary source code from nvidia to port, meaning their hardware becomes subject to nvidia's scrutiny to make sure their port is fully compatible with the original code. As well, there's nothing stopping nvidia from revoking the license with AMD once physX does become widely accepted.

    chizow wrote:
    Yes, PhysX is under Nvidia's control, but AMD's hardware and drivers are under AMD's control. If there are concerns about artificial limitation of performance, then obviously AMD could invest in purchasing the source code and optimizing as necessary, but the cost of that paranoia would be $50K.

    Between two corporations as major as AMD and nvidia, the source for something as potentially groundbreaking as PhysX is not going to be something nvidia just hands out. If they DO in fact offer to sell it with no future obligations to other companies, it would be for a lot more than $50,000. I'd be surprised if they do this at all, and if they do, it's much more likely that the $50,000 is for a license to access the PhysX source code, not to purchase it.

    chizow wrote:
    CAL is dead, never even took off, AMD abandoned support of it along with Stream and Brook+ and whatever other synonym for "unsupported" they decided to go with. If you can find an app for CAL/Stream/Brook+ outside of Stanford's campus I'd be genuinely shocked. Pretty sure one or both of those Dave Hoff interviews clearly get the point across when he ducks the question and says they're going with OpenCL and DirectCompute instead.

    If AMD has abandoned support for CAL, it was only in preparation for DirectCompute, which is logical because DirectCompute is a vendor-neutral standard. It's completely logical to abandon the proprietary standard in favor of the open source one... it represents that the company is thinking of its consumers and willing to play fair with its competition, two things nvidia seems to be unwilling to do at the moment. In all likelihood, they're grasping at straws trying to find a way to slow AMD, because AMD released their DX11 GPUs a full 6 months before the nvidia DX11 GPUs are even expected.

    chizow wrote:
    Again, CAL is irrelevant. For the bolded portion, you can't download an official OpenCL driver or SDK for AMD as of today; they're still going through validation. I'm not sure what the point of bringing up the 5850 and 5870 was when I was clearly referring to software and drivers, not hardware, not to mention Nvidia's DX10 hardware currently supports OpenCL and DirectCompute just fine.

    CAL is irrelevant. So is CUDA. That's the whole point of DirectCompute. Saying that CUDA is good and should be a standard and that CAL is somehow bad when both standards are proprietary and made obsolete by DirectX 11 looks dangerously close to fanboy-ism. Also, I'd like to point out that AMD has full OpenCL support on Mac OSX snow leopard, and they are using that OS to develop Bullet as per Dave Hoff. Anything that works in OSX isn't far from being released in windows, and at the moment, it doesn't even matter that OpenCL isn't fully supported in windows yet. There's no applications out there to really take advantage of it yet.

    chizow wrote:
    AMD's x86 license is constantly being challenged by Intel and is perpetually tied up in litigation. Obviously their interests are tied to both the CPU and GPU, so they will continue to compete as best they can in both markets, but if they saw a chance to break away and reduce their reliance on x86 and its exorbitant licensing fees, I'm sure they'd be interested.

    Intel holds by far the dominant share of the CPU market. To be competitive, AMD must match Intel's standards, or else they will simply cease to be a CPU developer. Just because you can execute C++ code on your GPU does not mean that the CPU is going to be made obsolete. At the very worst, some hardware techniques used in building GPUs may be adopted in CPUs. The core of the computer will remain the CPU for a long time to come.
    chizow wrote:
    As for the bolded portion, I guess that can be interpreted as a double entendre perhaps? I'd say how much of a hurry depends on what computing requirements you're looking at, as traditional CPUs aren't always the best tool for every task. I'd agree, though, the CPU isn't going anywhere in a hurry from a performance standpoint; its gains have stagnated in recent years to the point it's only adhering to Moore's Law in size and transistor count.

    You misinterpreted my statement, although that's possibly my fault for not being clear enough. What I meant was, the CPU will not be a replaced or obsolete part of the computer system as we know it for a long time to come.

    chizow wrote:
    I'm about to go into a Prisoner's Dilemma explanation but I really shouldn't have to; this is just common sense. If AMD and Nvidia both cooperate and support PhysX, they both win. There's only a loser if one or the other, or both, defect. AMD has chosen to defect. Obviously it's in everyone's best interest to cooperate; the outcomes from cooperating are clearly better than the outcomes if one or both defect.

    AMD is willing to cooperate on an open source standard, which PhysX currently is not. It is proprietary code, owned by nvidia. If AMD has to license the source code from nvidia in order to cooperate on it, their hardware becomes subject to nvidia's scrutiny and guidelines to ensure compliance. If I were AMD's board of directors, this is not a position I'd want to be in.
    chizow wrote:
    Also, I'm not sure how you come to the conclusion AMD's graphics become irrelevant if PhysX becomes dominant. Not only does that put far too much emphasis on physics as a selling point for graphics cards, but that scenario would only have a chance of occurring IF AMD chooses not to support PhysX. If they supported it, they wouldn't be at any disadvantage with regard to that feature outside of the conspiracy theories about Nvidia somehow crippling AMD hardware performance...

    AMD's graphics become a moot point if PhysX becomes the dominant standard while the code is still proprietary to nvidia. Let's say that two years down the road, 50% of all new games use PhysX as part of their execution. That means that in order to be a viable choice for 50% of the gaming sector, AMD has to support PhysX. To support PhysX, they have to license it from nvidia, and that means their graphics cards must be inspected and passed by nvidia before they can be released to the market. There's nothing stopping nvidia from preventing the release of AMD graphics cards which are more powerful than nvidia graphics cards, or even simply revoking AMD's PhysX license once PhysX is established as a standard. When 50%+ of current games use PhysX, this would be devastating to the point of knocking AMD right off the graphics market.
    chizow wrote:
    As for Bullet......I've already provided the link. It's OpenCL based, not DirectCompute, and Bullet's OpenCL implementation was wholly developed on Nvidia hardware using Nvidia's OpenCL SDK. I'm sure they would've considered using AMD's hardware and SDK, but it would've been difficult to do so with no AMD OpenCL driver and no AMD OpenCL SDK. Again, the relevant quotes are in that TechLegion link.

    The point of an open source standard is that no matter whose development tools you use, the finished product will run fine on any hardware. Both AMD and nvidia support OpenCL, which means that it really doesn't matter whose graphics card it is developed on. AMD also fully supports OpenCL on Mac OSX snow leopard, meaning that support on windows cannot be far from release. As for the SDK, why would AMD bother to make one when nvidia has already created a good one? I've already made this point at the beginning of the paragraph.

    chizow wrote:
    No, CAL is AMD's low-level C-based language that compiles down to their own machine code. It's not directly compatible with CUDA, which is Nvidia's C-based language and architecture, with a low-level API and driver that mirror CAL's. AMD would need to write a driver for their hardware to work with CUDA, just as Nvidia would need to write a driver for their hardware to work with CAL. It's obviously possible given they're all C-based languages and have already been ported to other C-based languages like OpenCL. The difference is, there's a reason to write a CUDA driver; there is none, and never has been one, for Stream/CAL/Brook+.

    http://developer.amd.com/gpu/ATIStreamSDK/Pages/default.aspx

    If neither one are currently cross-compatible without needing to be ported, why bother to port them at all? Both DirectX 11 and OpenCL offer alternatives that are free to develop on and guaranteed to run on any modern graphics hardware.


    chizow wrote:
    No, I didn't miss that bit of context. My point was that it's obviously possible to go one way from CUDA to OpenCL/DirectCompute, and he acknowledges it as trivial; the question that never gets asked is why they don't get off their asses and write a driver for their own hardware instead of expecting everything to be handed to them for free while doing nothing. It's the typical freeloader problem; AMD simply doesn't realize free-rider economics don't apply well to technology. They could just as easily write a CUDA driver from their OpenCL driver today, just as they could've 14 months ago. It's trivial, remember?

    I've made this point repeatedly, but I guess it bears making again. Nvidia owns the source code to PhysX. AMD cannot simply write their own driver for it without licensing the source to port it, which makes them responsible to nvidia both for producing a port that works properly with all the commands the original code had, and also for producing graphics hardware compliant to the standard. It is simply not a position that would be good for AMD to be in.
    chizow wrote:
    It's not a simple question of Nvidia not wanting to. As I've already mentioned: 1) there was no OpenCL when PhysX rolled out, so they created the API for lack of an alternative, and 2) they have no incentive to port it, and even less reason to do so now when the only beneficiary is a freeloading AMD that has done nothing but publicly criticize their technology.

    Nvidia did not create PhysX. They purchased Ageia and all Ageia's assets so that they could make the PhysX code proprietary to themselves. Their hope was that PhysX would become widely accepted off the bat (due to the fact it was the only hardware-accelerated physics solution at the time), and that they'd be able to license it to other GPU making companies. This would create a situation where any other GPU-producing company would need nvidia's stamp of acceptance on any hardware design which would accelerate PhysX. Essentially, this would give nvidia a monopoly on the graphics market, because they could cut support for PhysX from all other companies on a whim.

    chizow wrote:
    It's obvious you're not familiar with the timeline and specifics of this argument, so I'll leave it at this: Nvidia isn't hurting customers. They've developed a value-add feature that did not exist previously and provided it for free to all customers using their hardware, which, again, is the overwhelming majority of the discrete GPU market. They've also fully supported all emerging API and physics technologies and have stated they would do so from the start.

    I'm not familiar with the timeline of my argument? I'm fully aware of how PhysX came to be, and also how it came to be accelerated by nvidia hardware only. Ageia developed the software, and they even developed an expansion card designed to work in tandem with the graphics card to accelerate PhysX. Nvidia did not develop the PhysX technology, they purchased Ageia and converted the PhysX source into an application that would only be accelerated by nvidia's GPUs. If nvidia truly cared about the consumer and the future adoption of PhysX as a standard, they would have provided the necessary code and tools to other hardware corporations free of charge and without obligation, or else would not have purchased Ageia in the first place. By trying to keep PhysX as a proprietary standard, they are hurting their consumers in the long-run because if PhysX becomes the accepted standard, nvidia will end up with a monopoly on the graphics market, and without competition, technology does not advance nearly as quickly, prices are not competitive, and one corporation can control the supply for the entire graphics industry.

    If nvidia were to port PhysX to OpenCL or DirectCompute and make the core open-source, they would be demonstrating that their intentions are honorable, and that they desire advancement in the field of GPU-accelerated physics as much as AMD seems to. By keeping the PhysX core source proprietary, they are attempting to maneuver into a position that allows them to influence AMD's graphics development.
    chizow wrote:
    The only people hurt by AMD's unwillingness to cooperate are AMD's customers. That should be plainly obvious. It's the point of this original news bit, and it's further emphasized by the fact AMD users are the ones searching high and low for workarounds and hacks to get PhysX working, even at reduced capabilities, on their hardware. And you want to claim this is a better solution than AMD just sucking it up, making whatever apologies are required, and supporting PhysX natively on their own hardware? They can't win for losing, really, since once again Nvidia will support every API and physics middleware AMD can, with the addition of PhysX. The only way AMD can hope to reach parity and match Nvidia feature-for-feature is if PhysX dies and becomes irrelevant, which makes the motivation for their rhetoric quite obvious and transparent.

    AMD is not trying to force people to run PhysX through a hack or workaround. They are trying to promote open-source solutions for Physics acceleration that are not necessarily limited to Bullet physics. By endorsing OpenCL and DirectX11 compliance, AMD is showing that whatever standard for GPU physics is used, they're willing to treat their competition fairly, as it will also be accelerated just the same on nvidia's hardware. Nvidia, on the other hand, is attempting to force other companies to license hardware PhysX acceleration from them, and by doing so, place themselves in a position to control the entire graphics industry. That will hurt consumers in the long run for all the reasons I stated above. You're confusing what's better for consumers right now vs. what's better for consumers in the big picture. It'd be great if we could run PhysX on AMD hardware, but if they have to license it from nvidia to accomplish that, then AMD's graphics cards are simply going to disappear from the market one day. Nvidia will revoke the license and give themselves the monopoly on the GPU.
    chizow wrote:
    Sorry if I come off as a bit short or irritated in some of my replies, but the basic argument you seem to be presenting, that AMD, their customers, and everyone else are somehow better off for AMD not supporting PhysX, just makes no sense whatsoever.

    I don't know what you'd be irritated about. Personally, I enjoy a good debate. :)
  30. chizow
    chizow
    lordbean wrote:
    Please give me a link to where I can purchase the source code for the PhysX core for $50,000. All I can find is http://developer.nvidia.com/object/physx.html and it's not there.
    http://http.download.nvidia.com/developer/cuda/seminar/TDCI_PhysX.pdf

    Slide 23 of 26, I've seen it elsewhere but this is one of the many links that popped up by simply searching "PhysX source $50". Glancing over some of your other replies, this takes care of most of the irrelevance.
    lordbean wrote:
    Tell that to DirectX. Last I checked, anyone can develop on it, and it's free.
    You could substitute PhysX for DirectX in that sentence verbatim, thanks for proving my point. The difference is of course Microsoft would just laugh at you if you asked for their source code.
    lordbean wrote:
    Again, please give me a link to the page that allows me to buy the PhysX source code for $50,000. I can't find it. The PhysX source code is VERY relevant here - it is the heart of PhysX itself, and without it, how is AMD supposed to port it?
    Again, the source code is irrelevant unless AMD was planning to integrate it into an engine or their own API, but we all know they don't have anything on either of those fronts. They're not porting anything because they have nothing to port; all they need to do is write a driver for their hardware for the existing API and piggy-back on the efforts of others. Again, from an earlier post, the progression from hardware to software would look like:

    Hardware > Driver (HAL) > API (HLPL) > Middleware (GUI-based tools)

    Everything else is in place; all AMD needs to do is make their hardware compatible with the prevailing APIs and everything above should take care of itself. In this case, PhysX source would only be relevant for those dealing directly with the API and middleware, and AMD has no control over either of those layers, so the source is irrelevant.
    lordbean wrote:
    Currently, nvidia does support it (they purchased it from Ageia) and AMD does not, yet I do not see widespread acceptance of PhysX. If AMD supported PhysX tomorrow, then they licensed the proprietary source code from nvidia to port, meaning their hardware becomes subject to nvidia's scrutiny to make sure their port is fully compatible with the original code. As well, there's nothing stopping nvidia from revoking the license with AMD once physX does become widely accepted.
    PhysX is the #1 SDK in production, but GPU acceleration is obviously going to be an uphill struggle due to the strong industry focus on consoles. As such, features like GPU PhysX are going to be value-add for the PC only which means any dev house would have to weigh the pros and cons of adding such a feature. Most will not due to the additional development cost, which is why Nvidia's TWIMTBP program helps with development, but as each title that uses PhysX releases, that increases the chance more will in the future as the tech gains momentum.

    As for the nonsensical scenario you brought up about revoking licenses and hardware coming under scrutiny....I think you'd have to cross that bridge of AMD supporting PhysX before donning the tin foil hat. Not to mention licensing agreements are put in place for just that reason, to prevent any arbitrary revocation of license.
    lordbean wrote:
    Between two corporations as major as AMD and nvidia, the source for something as potentially groundbreaking as PhysX is not going to be something nvidia just hands out. If they DO in fact offer to sell it with no future obligations to other companies, it would be for a lot more than $50,000. I'd be surprised if they do this at all, and if they do, it's much more likely that the $50,000 is for a license to access the PhysX source code, not to purchase it.
    Yes apparently AMD is going to come through with a groundbreaking revelation that redefines what we know about physics if they ever get their hands on that source code. Much easier than just providing a driver for their hardware.
    lordbean wrote:
    If AMD has abandoned support for CAL, it was only in preparation for DirectCompute, which is logical because DirectCompute is a vendor-neutral standard. It's completely logical to abandon the proprietary standard in favor of the open source one... it represents that the company is thinking of its consumers and willing to play fair with its competition, two things nvidia seems to be unwilling to do at the moment. In all likelihood, they're grasping at straws trying to find a way to slow AMD, because AMD released their DX11 GPUs a full 6 months before the nvidia DX11 GPUs are even expected.
    That might actually make sense if DirectCompute were open source in any way, but it's not; it's a standard proprietary to Microsoft. Nvidia is simply supporting all standards and APIs unapologetically, without disingenuous excuses. Some companies produce solutions, some produce excuses.
    lordbean wrote:
    CAL is irrelevant. So is CUDA. That's the whole point of DirectCompute. Saying that CUDA is good and should be a standard and that CAL is somehow bad when both standards are proprietary and made obsolete by DirectX 11 looks dangerously close to fanboy-ism. Also, I'd like to point out that AMD has full OpenCL support on Mac OSX snow leopard, and they are using that OS to develop Bullet as per Dave Hoff. Anything that works in OSX isn't far from being released in windows, and at the moment, it doesn't even matter that OpenCL isn't fully supported in windows yet. There's no applications out there to really take advantage of it yet.
    Actually comparing CAL to CUDA doesn't look like fanboy-ism, it reeks of it. Honestly I haven't seen CAL mentioned in at least a year when referring to GPGPU. Go to any GPGPU developer forum and compare the two and see how people who actually use the tools react to the comparison.

    CUDA isn't dead; in fact it's growing, adapting and improving. It was never just an API, it was Nvidia's top-to-bottom GPGPU compute architecture. The progression I detailed is ALL encompassed within CUDA, from the hardware to the middleware. What's next for CUDA? How about integration into one of the most popular production IDEs, Visual Studio, which will provide a one-stop debugger and compiler for Nvidia hardware across all the relevant APIs: CUDA C, OpenCL, DirectCompute, Direct3D, and OpenGL.

    http://developer.nvidia.com/object/nexus.html

    Once again, Nvidia is providing solutions for their hardware that interested parties actually want and will put to good use. What's AMD doing? Oh right, doing another interview criticizing Nvidia.....
    lordbean wrote:
    Intel holds by far the dominant share of the CPU market. To be competitive, AMD must match intel's standards, or else they will simply cease to be a CPU developer. Just because you can execute C++ code on your GPU, does not mean that the CPU is going to be made obsolete. At the very worst, some hardware techniques used in building GPUs may be adopted to CPUs. The core of the computer will always be the CPU for a long time to come.
    Ya, it's the heart of Nvidia's heterogeneous computing model, except they don't plan to have much use for more than a few x86 CPU cores if all goes according to plan.
    lordbean wrote:
    You misinterpreted my statement, although that's possibly my fault for not being clear enough. What I meant was, the CPU will not be a replaced or obsolete part of the computer system as we know it for a long time to come.
    No, it won't be obsolete, but its role will be vastly diminished to the point it's just a tiny beating heart in a vastly undersized body feeding a massive GPU for a brain (again, according to Nvidia's heterogeneous computing model).
    lordbean wrote:
    AMD is willing to cooperate on an open source standard, which PhysX currently is not. It is proprietary code, owned by nvidia. If AMD has to license the source code from nvidia in order to cooperate on it, their hardware becomes subject to nvidia's scrutiny and guidelines to ensure compliance. If I were AMD's board of directors, this is not a position I'd want to be in.
    Again, we've already been down this path of hypocrisy. AMD clearly has no problem supporting closed and proprietary standards (see: DirectX, DirectCompute and Havok). These lies only go so far, especially when AMD embarrassingly backpedaled on their endorsement of Havok, probably after coming to the realization that Intel has no interest whatsoever in providing GPU acceleration to anyone before Larrabee is ready (and maybe never for competitors).

    lordbean wrote:
    AMD's graphics become a moot point if PhysX becomes the dominant standard while the code is still proprietary to nvidia. Let's say that two years down the road, 50% of all new games use PhysX as part of their execution. That means that in order to be a viable choice for 50% of the gaming sector, AMD has to support PhysX. To support PhysX, they have to license it from nvidia, and that means their graphics cards must be inspected and passed by nvidia before they can be released to the market. There's nothing stopping nvidia from preventing the release of AMD graphics cards which are more powerful than nvidia graphics cards, or even simply revoking AMD's PhysX license once PhysX is established as a standard. When 50%+ of current games use PhysX, this would be devastating to the point of knocking AMD right off the graphics market.
    Again, completely unsubstantiated fearmongering. Not only do you put far too much importance on physics over 3D capability as a driver of sales, but all AMD would have to do to avoid any such fictitious release roadblock would be to simply pull PhysX support and launch their product. Or, more likely, just launch their product, claim support for said feature, then play catch-up at some point down the line with hastily applied driver updates.
    lordbean wrote:
    The point of an open source standard is that no matter whose development tools you use, the finished product will run fine on any hardware. Both AMD and nvidia support OpenCL, which means that it really doesn't matter whose graphics card is it developed on. AMD also fully supports OpenCL on Mac OSX snow leopard, meaning that support on windows cannot be far from release. As for the SDK, why would AMD bother to make one when nvidia has already created a good one? I've already made this point at the beginning of the paragraph.
    No, the point of open source is that you have some control over the content of the standard, so that you're not at an arbitrarily imposed disadvantage. After that, provided they're running the same API, the faster hardware wins.
    lordbean wrote:
    If neither one are currently cross-compatible without needing to be ported, why bother to port them at all? Both DirectX 11 and OpenCL offer alternatives that are free to develop on and guaranteed to run on any modern graphics hardware.
    For CUDA the reason is obvious: people actually used it, so the API libraries have built up and evolved over time with numerous apps developed for it. Porting CUDA and its runtimes to OpenCL and DirectCompute makes a lot more sense than re-inventing the wheel. In fact, with tools like Nexus, it shouldn't be much more difficult than debugging and recompiling the output for whatever target API you choose.

    As for the guarantee....no there is no guarantee, especially if a vendor doesn't provide a driver for their own hardware for that API. <----***hint important point hint ****
    lordbean wrote:
    I've made this point repeatedly, but I guess it bears making again. Nvidia owns the source code to PhysX. AMD cannot simply write their own driver for it without licensing the source to port it, which makes them responsible to nvidia both for producing a port that works properly with all the commands the original code had, and also for producing graphics hardware compliant to the standard. It is simply not a position that would be good for AMD to be in.
    You can make the point again and it wouldn't make you any more correct. They don't need the PhysX source code; all they need to do is write a driver for their own hardware for the API back-end needed for PhysX acceleration, CUDA. Would they potentially need Nvidia's support to write that driver? Maybe, but again, there's no need to cross that hypothetical bridge because it's clearly a question that has not been posed. That's the problem: it's the question that's never asked.
    lordbean wrote:
    Nvidia did not create PhysX. They purchased Ageia and all Ageia's assets so that they could make the PhysX code proprietary to themselves. Their hope was that PhysX would become widely accepted off the bat (due to the fact it was the only hardware-accelerated physics solution at the time), and that they'd be able to license it to other GPU making companies. This would create a situation where any other GPU-producing company would need nvidia's stamp of acceptance on any hardware design which would accelerate PhysX. Essentially, this would give nvidia a monopoly on the graphics market, because they could cut support for PhysX from all other companies on a whim.
    Yes I'm well aware of the history and already detailed it in an earlier post. They acquired the IP to further their GPGPU efforts and sell their hardware, all that other irrelevance you wrote makes no sense. The only way PhysX gains traction is if its install-base increases, meaning more hardware supports it. In the long-term I see Nvidia leveraging this technology to get their hardware into the next-gen consoles, at which point the technology will be truly ubiquitous in games and we'll truly see it integrated seamlessly across multiple platforms.
    lordbean wrote:
    I'm not familiar with the timeline of my argument? I'm fully aware of how PhysX came to be, and also how it came to be accelerated by nvidia hardware only. Ageia developed the software, and they even developed an expansion card designed to work in tandem with the graphics card to accelerate PhysX. Nvidia did not develop the PhysX technology, they purchased Ageia and converted the PhysX source into an application that would only be accelerated by nvidia's GPUs. If nvidia truly cared about the consumer and the future adoption of PhysX as a standard, they would have provided the necessary code and tools to other hardware corporations free of charge and without obligation, or else would not have purchased Ageia in the first place. By trying to keep PhysX as a proprietary standard, they are hurting their consumers in the long-run because if PhysX becomes the accepted standard, nvidia will end up with a monopoly on the graphics market, and without competition, technology does not advance nearly as quickly, prices are not competitive, and one corporation can control the supply for the entire graphics industry.
    Yes, it's obvious you're unfamiliar with the timeline when you ask why Nvidia chose to develop on CUDA, or why they didn't just conveniently port it to APIs and standards that didn't exist yet. I'm not even going to bother with some of the common fallacies I glanced over in there.

    lordbean wrote:
    If nvidia were to port PhysX to OpenCL or DirectCompute and make the core open-source, they would be demonstrating that their intentions are honorable, and that they desire advancement in the field of GPU-accelerated physics as much as AMD seems to. By keeping the PhysX core source proprietary, they are attempting to maneuver into a position that allows them to influence AMD's graphics development.
    Why would Nvidia need to demonstrate anything when all of their actions up until the driver lockout have been more than honorable? Again, follow the progressions in the links from my first post and compare what Nvidia said and did to what AMD said and did. Nvidia holds all the cards now; they have no reason to offer PhysX up on a platter in light of the negative publicity AMD has been generating in the press.
    lordbean wrote:
    AMD is not trying to force people to run PhysX through a hack or workaround. They are trying to promote open-source solutions for Physics acceleration that are not necessarily limited to Bullet physics. By endorsing OpenCL and DirectX11 compliance, AMD is showing that whatever standard for GPU physics is used, they're willing to treat their competition fairly, as it will also be accelerated just the same on nvidia's hardware. Nvidia, on the other hand, is attempting to force other companies to license hardware PhysX acceleration from them, and by doing so, place themselves in a position to control the entire graphics industry. That will hurt consumers in the long run for all the reasons I stated above. You're confusing what's better for consumers right now vs. what's better for consumers in the big picture. It'd be great if we could run PhysX on AMD hardware, but if they have to license it from nvidia to accomplish that, then AMD's graphics cards are simply going to disappear from the market one day. Nvidia will revoke the license and give themselves the monopoly on the GPU.
    Again, you seem to be ignoring what's occurring in reality. You also conveniently ignore the fact that Nvidia also supports those same standards, and currently better than AMD does, mind you. You keep claiming Nvidia's introduction of innovative, value-add features somehow hurts the consumer, but that's clearly false, as it benefits anyone who purchases their hardware, which is, again, the overwhelming majority by any metric.
    lordbean wrote:
    I don't know what you'd be irritated about. Personally, I enjoy a good debate. :)
    I'm sure I'd enjoy it more if I were actually debating someone familiar with the material being discussed. As it is now, it's more like fact-checking and correcting a lot of assumptions and misinformation.
  31. lordbean
    lordbean
    chizow wrote:
    http://http.download.nvidia.com/developer/cuda/seminar/TDCI_PhysX.pdf

    Slide 23 of 26, I've seen it elsewhere but this is one of the many links that popped up by simply searching "PhysX source $50". Glancing over some of your other replies, this takes care of most of the irrelevance.

    In fact, no it does not. Read the slide a little more carefully. The listed prices are for the PhysX SDK, not the PhysX core's source code. In other words, the slide shows that you can get compiled tools to develop on PhysX for free, or you can purchase the source-code version of the SDK for $50,000. It does not give the buyer the source code to PhysX itself.

    chizow wrote:
    You could substitute PhysX for DirectX in that sentence verbatim, thanks for proving my point. The difference is of course Microsoft would just laugh at you if you asked for their source code.

    Actually, no, you cannot. Applications developed on DirectX will run on ANY modern hardware. Applications developed on PhysX will ONLY run on nvidia hardware. It's a subtle difference, I know, but it is the defining difference of this point. Also, developing on PhysX is only free if you do not intend to make use of the source-code version of the SDK. If you do, it's $50,000 to license it.

    chizow wrote:
    Again, the source code is irrelevant unless AMD was planning to integrate it into an engine or their own API, but we all know they don't have anything on either of those fronts. They're not porting anything because they have nothing to port, all they need to do is write a driver for their hardware for the existing API and piggy-back the efforts of others. Again, from an earlier post the progression from hardware to software would look like:

    Hardware > Driver (HAL) > API (HLPL) > Middleware (GUI-based tools)

    Everything else is in-place, all AMD needs to do is make their hardware compatible with prevaling API and everything else should take care of itself. In this case, PhysX source would only be relevant for those dealing directly with the API and middleware, and AMD has no control over any of those factors so the source is irrelevant.

    How do you propose AMD make their hardware compatible with nvidia's proprietary API? You can't install the nvidia driver for use with an AMD graphics card; it doesn't work. Nvidia has even gone so far as to disable their own GPUs from accelerating PhysX if an AMD GPU is even detected in the system. In order to accelerate PhysX, the PhysX application core would need to be written into AMD's drivers. Short of reverse engineering the nvidia drivers (which I'm fairly sure is illegal), they would have to port nvidia's PhysX core, and in order to port something, you need the original source code. Nvidia owns the source code to the PhysX core, therefore AMD would have to license it, et cetera. I believe we've gone through this point before.

    chizow wrote:
    PhysX is the #1 SDK in production, but GPU acceleration is obviously going to be an uphill struggle due to the strong industry focus on consoles. As such, features like GPU PhysX are going to be value-add for the PC only which means any dev house would have to weigh the pros and cons of adding such a feature. Most will not due to the additional development cost, which is why Nvidia's TWIMTBP program helps with development, but as each title that uses PhysX releases, that increases the chance more will in the future as the tech gains momentum.

    The whole point of AMD's push to make physics acceleration an open source matter is not only to promote compatibility, but also to avoid sinking an extreme amount of time into a proprietary physics engine for just this reason. With open source standards available, the graphics companies do not have to worry about developing their own hardware physics engines - DirectCompute or OpenCL will take care of it through any number of potential physics APIs in the future. Bullet is only the beginning.

    I also disagree with your notion that GPU-accelerated physics will be a value-add feature only. I believe that as open-source physics processing solutions become more available, we are going to see a large increase in the number of games released with advanced physics. At the moment, PhysX is still a turn-off to developers for the simple reason that it is still proprietary to nvidia.
    chizow wrote:
    As for the nonsensical scenario you brought up about revoking licenses and hardware coming under scrutiny....I think you'd have to cross that bridge of AMD supporting PhysX before donning the tin foil hat. Not to mention licensing agreements are put in place for just that reason, to prevent any arbitrary revocation of license.

    A license agreement is a document drafted by the licenser for the licensee, to be agreed to by the licensee. If AMD licenses PhysX from nvidia, nvidia can potentially put anything they want in that document, including reserving the right to break the agreement at any time. If, as you say, all nvidia wants is to further the physics acceleration market with PhysX, then why have they not ported it to DX11 or OpenCL and made the core open source?

    chizow wrote:
    Yes apparently AMD is going to come through with a groundbreaking revelation that redefines what we know about physics if they ever get their hands on that source code. Much easier than just providing a driver for their hardware.

    Your sarcasm is lost, as it comes from a misinterpretation. Not only does nvidia not sell the PhysX core source code, but even if they did, they would charge more than $50,000 for it. When I said groundbreaking, I meant simply that PhysX has the potential to be a groundbreaking piece of hardware/software cooperation (as seen in the few games that actually do use it), if nvidia would simply get their claws out of its stomach and allow it to become an open source standard.

    chizow wrote:
    That might actually make sense if DirectCompute was open source in any way, but its not, its a standard proprietary to Microsoft. Nvidia is simply supporting all standards and API unapologetically without disingenuous excuses. Some companies produce solutions, some produce excuses.

    Microsoft does not make GPUs. The DirectCompute source was built by Microsoft, that is true, but it is designed to allow any graphics card with compute shaders to perform tasks other than simple graphics. There's a fine difference with PhysX here. The tools to develop for PhysX are free, but even the source code for those tools is not - nvidia is charging $50,000 for it. As well, PhysX acceleration was specifically written to operate only on nvidia GPUs - the point being, AMD does not have a choice here. They cannot support PhysX because they do not have access to the API source code, and nvidia is not going to write a driver that accelerates PhysX on AMD's GPUs just because they can.

    chizow wrote:
    Actually comparing CAL to CUDA doesn't look like fanboy-ism, it reeks of it. Honestly I haven't seen CAL mentioned in at least a year when referring to GPGPU. Go to any GPGPU developer forum and compare the two and see how people who actually use the tools react to the comparison.

    CUDA isn't dead, in fact its growing, adapting and improving. It was never just an API, it was Nvidia's top-to-bottom GPGPU compute architecture. The progression I detailed is ALL encompassed within CUDA, from the hardware to the middleware. What's next for CUDA? How about integration into one of the most popular production IDE with Visual Studio, which will provide a one-stop debugger and compiler for Nvidia hardware for all relevant API: CUDA C, OpenCL, DirectCompute, Direct3D, and OpenGL.

    No company is perfect. CAL was a flop in terms of performance. I have a GTX 285 in my gaming rig, and I know how many more PPD I get in GPU2 folding than I do with any 4000 series Radeon. However, to promote healthy competition in hardware and in the spirit of growing the industry, standards need to be in place, whether they are open source or not. DirectX is an excellent example of a standard which is not open source, and yet still encourages the industry to grow. In this way, DirectX encourages the graphics market to improve while favoring neither AMD nor nvidia, or even Microsoft for that matter.
    chizow wrote:
    http://developer.nvidia.com/object/nexus.html

    Once again, Nvidia is providing solutions for their hardware that interested parties actually want and will put to good use. What's AMD doing? Oh right, doing another interview criticizing Nvidia.....

    Any criticism coming from AMD is warranted, until nvidia shows that they are willing to move toward standardizing their APIs without requiring AMD (or other companies) to pay for or license them.

    chizow wrote:
    Ya its the heart of Nvidia's heterogenous computing model, except they don't plan to have much use for more than a few x86 CPU cores if all goes according to their plans.


    No it won't be obsolete, but its role will be vastly diminished to the point its just a tiny beating heart in a vastly undersized body feeding a massive GPU for a brain (again according to Nvidia's heterogenous computing model).

    I seem to recall a certain statement about tin-foil hats... this is much further in the future than physics acceleration APIs are.

    chizow wrote:
    Again already been down this path of hypocrisy. AMD clearly has no problems supporting closed and proprietary standards (see: DirectX, Direct Compute and Havok). These lies only go so far, especially when AMD embarassingly backpedaled on their endorsement of Havok, probably after coming to the revelation Intel has no interest whatsoever in providing GPU acceleration to anyone before Larrabee is ready (and maybe never for competitors).

    You don't see anything potentially unsettling for AMD in this scenario? Maybe nvidia really wouldn't use the licensing of PhysX to AMD in a manner that would undermine AMD's efforts, but if you were on AMD's board of directors, would you take that chance? By licensing PhysX from nvidia, they are allowing PhysX to become the standard, and when it does, AMD's GPU production is entirely at nvidia's mercy.

    chizow wrote:
    Again, completely unsubstantiated fearmongering. Not only do you put far too much importance on physics over 3D capability driving sales, all AMD would have to do to avoid any such fictitious release roadblock would be to simply pull PhysX support and launch their product. Or more than likely, just launch their product, claim support for said feature, then play catch up some point down the line with hastily applied driver updates.

    I don't even know where to begin with this paragraph.

    If AMD adopts PhysX now, PhysX could very well become the standard developers adopt for physics acceleration, hence the assumption that in 2 years' time 50% of all new releases would use the PhysX library. AMD's graphics cards could have the rendering power of God himself, but if they can't do PhysX, they're sunk.

    They can't break the license agreement with nvidia, and then patch PhysX support in afterward. That would quite literally be breaking the law. They signed an agreement, broke it, and then continued to implement a proprietary standard anyway? Nvidia would sue them so fast you wouldn't even know what happened.
    chizow wrote:
    No the point of open source is so that you have some control over the content of the standard so that you're not at an arbitrarily imposed disadvantage. After that, provided they're running the same API, the faster hardware wins.

    Provided they're running the same API, the faster hardware wins?

    You just summed up why PhysX is hurting customers. AMD can't run the same API unless nvidia licenses the PhysX source code to them, or nvidia actually writes a PhysX API for AMD's GPUs themselves. Either way, AMD loses.

    chizow wrote:
    For CUDA the reason is obvious, people actually used it so the API libraries have built up and evolved over time with numerous apps developed for it. Porting CUDA and its runtimes to OpenCL and DirectCompute make a lot more sense than re-inventing the wheel. In fact, with tools like Nexus, it shouldn't be much more difficult than simply debugging and recompiling the output to whatever target API you choose.

    I never said anywhere that PhysX or CUDA should be outright abolished. What I said is that the release of vendor-neutral tools makes proprietary APIs obsolete. If nvidia were to port PhysX to OpenCL, there would be no problem whatsoever, as the API would no longer be proprietary. AMD could support it without licensing it, and everything moves forward as normal. Unfortunately, that is exactly what nvidia is not doing - they are continuing to develop PhysX as a proprietary standard.
    chizow wrote:
    As for the guarantee....no there is no guarantee, especially if a vendor doesn't provide a driver for their own hardware for that API. <----***hint important point hint ****

    You seem to be referring to PhysX as though it is a standard. AMD can't write a driver for an API they don't have access to. Your "important point" is, in fact, gibberish in this regard.

    chizow wrote:
    You can make the point again and it wouldn't make you any more correct. They don't need the PhysX source code, all they need to do is write a driver for their own hardware for the API backend needed for PhysX acceleration, CUDA. Would they potentially need Nvidia's support to write that driver? Maybe, but again there's no need to cross that hypothetical bridge because its clearly a question that has not been posed. That's the problem, its the question that's never asked.

    You just conceded my argument with this paragraph. AMD would need nvidia's help to write a driver for PhysX, meaning they would need to license the PhysX core in order to understand exactly how it works and reproduce it on their hardware.

    chizow wrote:
    Yes I'm well aware of the history and already detailed it in an earlier post. They acquired the IP to further their GPGPU efforts and sell their hardware, all that other irrelevance you wrote makes no sense. The only way PhysX gains traction is if its install-base increases, meaning more hardware supports it. In the long-term I see Nvidia leveraging this technology to get their hardware into the next-gen consoles, at which point the technology will be truly ubiquitous in games and we'll truly see it integrated seamlessly across multiple platforms.

    Sony had to license PhysX from nvidia to put it in the PS3. They couldn't simply write a driver for it and say "oh look, we have hardware PhysX acceleration!" Why do you think it's going to be any different for AMD?

    The only way the technology becomes supported by more hardware at present is when other companies license the tech from nvidia. In order to support it, AMD would have to do the same thing, and I think you know where this train of thought is going.

    chizow wrote:
    Yes its obvious you're unfamiliar with the timeline when you ask questions why Nvidia chose to develop on CUDA or why they didn't just conveniently port it to API and standards that didn't exist yet. Not even going to bother with some of the common fallacies I glanced over in there.

    Where did I ask why nvidia chose to develop CUDA? I only asked why, instead of making PhysX open source to promote growth, they kept it proprietary and accelerated only on their own GPUs unless another company chose to license it. I also said nothing about porting PhysX or CUDA to OpenCL immediately after they acquired the PhysX technology. The point I was (and am) making is that now that free standards are available, in the form of DirectX 11 and OpenCL, nvidia is still keeping their technology proprietary and getting other companies to license it from them.


    chizow wrote:
    Why would Nvidia need to demonstrate anything when all of their actions up until the driver lockout have been more than honorable? Again, follow the progressions in the links from my first post and compare what Nvidia said and did to what AMD said and did. Nvidia holds all the cards now, they have no reason to offer PhysX on a platter in light of the negative press and publicity generated by AMD in the press.

    Nvidia does not hold all the cards, and that is precisely why they are holding onto PhysX as tightly as they can (and going against the open source initiative in doing so). They may be assisting in the development of OpenCL, but they are also devoting most of their resources to the furthering of PhysX, even when there are vendor-neutral APIs available.

    Consider also how far their GT300 is going to lag behind the Radeon 5000 series. I believe the motivating factor behind nvidia not wanting to port PhysX to a vendor-neutral API is that nvidia's R&D screwed up and is way behind AMD. They are therefore looking for as many ways to stall AMD's sales as they can.

    chizow wrote:
    Again you seem to be ignoring what's occurring in reality. You also conveniently ignore the fact Nvidia also supports the same standards, and currently better than AMD mind you. You keep claiming Nvidia's introduction of innovative, value-add features somehow hurts the consumer, but that's clearly false as it benefits anyone that purchases their hardware, which is again, the overwhelming majority by any metric.
    lordbean wrote:
    You're confusing what's better for consumers right now vs. what's better for consumers in the big picture.
    If you bothered to actually read my points, you'd learn why.

    chizow wrote:
    I'm sure I'd enjoy it more if I was actually debating someone familiar with the material being discussed. As it is now its more like fact checking and correcting a lot of assumptions and misinformation.

    We're discussing the ramifications of PhysX acceleration in the near- to medium-term future, and you're calling your points "fact"? It hasn't happened yet; therefore everything said by both you and me is opinion. You're entitled to your opinion, but please, don't try to push it as fact. That's frankly an insult to my intelligence.
  32. ardichoke
    ardichoke Chizow,

    The big thing I'm seeing here that you don't seem to understand is what an SDK is. SDK stands for Standard Development Kit. It is a set of tools used to develop software for a platform. The PhysX SDK is for game developers to develop and compile code that will run on PhysX compatible cards. Of course Nvidia is going to make that available; if they didn't, no game manufacturer could actually utilize the technology.

    The thing is, though, the SDK does not in any way, shape, or form allow AMD to develop a card that will run PhysX. PhysX is more than just software; it's also in the hardware. Could AMD reverse-engineer it and make their own cards compatible? Probably, yes. However, that would open them up to a whole host of legal issues, and Nvidia would sue them to death, so their only option would be to license the rights to produce PhysX hardware from Nvidia.

    This would make it so that, as bean has rightly stated many times before, Nvidia would be able to inspect AMD hardware before release, potentially delay the release of AMD hardware, and refuse to renew the license at any time, thus screwing AMD if PhysX ever became dominant in games. For AMD to put their product in a position of being reliant on their main competitor would be a very stupid business move, and the only people it would benefit would be Nvidia itself, as they would essentially have control of all future AMD products.
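
    (To make the SDK-versus-source distinction concrete: what a binary physics SDK hands a game developer is essentially a set of public declarations plus a precompiled library to link against. The sketch below uses entirely invented names - it is not the real PhysX API - but the shape is typical of physics middleware.)

        #include <cstdio>

        // Hypothetical middleware interface (names invented for illustration;
        // NOT the actual PhysX API). A binary SDK ships roughly this much:
        // public headers plus a precompiled library. The game only ever sees
        // the declarations.
        struct PhysicsWorld { float gravityY; float elapsed; };  // normally opaque

        PhysicsWorld* physicsCreateWorld(float gravityY);
        void          physicsStep(PhysicsWorld* w, float dt);
        void          physicsDestroyWorld(PhysicsWorld* w);

        // Game-side code: an SDK user just calls the published API. Nothing
        // here reveals how, or on which hardware, the solver actually runs.
        int main()
        {
            PhysicsWorld* world = physicsCreateWorld(-9.81f);
            for (int frame = 0; frame < 60; ++frame)
                physicsStep(world, 1.0f / 60.0f);        // one 60 Hz frame
            std::printf("simulated one second of physics\n");
            physicsDestroyWorld(world);
            return 0;
        }

        // Stub bodies so this sketch compiles on its own; in the real product
        // they live inside the vendor's closed, precompiled library.
        PhysicsWorld* physicsCreateWorld(float g) { return new PhysicsWorld{g, 0.0f}; }
        void physicsStep(PhysicsWorld* w, float dt) { w->elapsed += dt; }
        void physicsDestroyWorld(PhysicsWorld* w) { delete w; }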
  33. chizow
    chizow
    lordbean wrote:
    In fact, no it does not. Read the slide a little more carefully. The listed prices are for PhysX SDK, not the physX core's source code. In other words, the slide shows that you can get compiled tools to develop on PhysX for free, or you can purchase the source code SDK for $50,000. It does not give the buyer the source code to PhysX itself.
    Unbelievable. Give me a single reason I should continue on with someone who argues so dishonestly? Your failure to acknowledge such a simple fact, that the PhysX source is available for $50K, can only be taken as extreme dishonesty or profound ignorance.

    Just in case you weren't sure which slide I was referring to and the relevant portions I went ahead and highlighted them for you.....

    http://rz3ovw.bay.livefilestore.com/y1pGf-TVoa3yRsHCFlp_GMGHVRUh17FnDNcFAYG0IRRc4-9M57F-OmES0ze7C76oy7-GspbYmZld03vyOu6Tj4U0nXfVLhnz_hN/PhysX%20pricing.jpg

    That slide is directly from Nvidia, but google the exact parameters I stated, "PhysX Source $50", and see what other hits you get. You'll find many more from interested parties who either know from experience or inquiry that the SDK source code costs $50K.

    As for semantics, there is a compiled Binary SDK with GUI tools that is free to use, and there is the source code SDK (includes compiler, debugger, whatever else) that costs $50K. There isn't some extra super sekret handshake SDK, that's all there is lol.

    Anyways, until we can get by this hurdle there's really no reason for me to continue arguing with you. As I stated in my last reply, debating with someone who clearly isn't familiar with the material and is unwilling to acknowledge simple facts is about as gratifying as banging your head against the wall lol. :banghead:
  34. chizow
    chizow
    ardichoke wrote:
    Chizow,

    The big thing I'm seeing here that you don't seem to understand is what an SDK is. SDK stands for Standard Development Kit. It is a set of tools used to develop software for a platform. The PhysX SDK is for game developers to develop and compile code that will run on PhysX compatible cards.
    The acronym "SDK" is actually commonly acknowledged as an abbreviation for "Software Development Kit". You got the bolded portion right, but that's about the extent of it. SDK is just software used to create more software. By that definition alone, you're clearly incorrect in trying to limit its scope to a single application along the hardware to software progression I've already listed a few times.
  35. lordbean
    lordbean
    chizow wrote:
    Unbelievable. Give me a single reason I should continue on with someone who argues so dishonestly? Your failure to acknowledge such a simple fact, that the PhysX source is available for $50K, can only be taken as extreme dishonesty or profound ignorance.

    Just in case you weren't sure which slide I was referring to and the relevant portions I went ahead and highlighted them for you.....

    http://rz3ovw.bay.livefilestore.com/y1pGf-TVoa3yRsHCFlp_GMGHVRUh17FnDNcFAYG0IRRc4-9M57F-OmES0ze7C76oy7-GspbYmZld03vyOu6Tj4U0nXfVLhnz_hN/PhysX%20pricing.jpg

    That slide is directly from Nvidia, but google the exact parameters I stated, "PhysX Source $50", and see what other hits you get. You'll find many more from interested parties who either know from experience or inquiry that the SDK source code costs $50K.

    As for semantics, there is a compiled Binary SDK with GUI tools that is free to use, and there is the source code SDK (includes compiler, debugger, whatever else) that costs $50K. There isn't some extra super sekret handshake SDK, that's all there is lol.

    Anyways, until we can get by this hurdle there's really no reason for me to continue arguing with you. As I stated in my last reply, debating with someone who clearly isn't familiar with the material and is unwilling to acknowledge simple facts is about as gratifying as banging your head against the wall lol. :banghead:

    On that very page you linked, the acronym "SDK" appears so many times that I simply cannot put polite words to this. If you haven't noticed that that slide is giving information on the PhysX SDK (not PhysX itself), then I simply don't know what more I can tell you. You're convinced of all your arguments based on a false belief which you are refusing to acknowledge, even though Ardichoke and I have both pointed it out.
  36. chizow
    chizow
    lordbean wrote:
    On that very page you linked, the acronym "SDK" appears so many times that I simply cannot put polite words to this. If you haven't noticed that that slide is giving information on the PhysX SDK (not PhysX itself), then I simply don't know what more I can tell you. You're convinced of all your arguments based on a false belief which you are refusing to acknowledge, even though Ardichoke and I have both pointed it out.
    HAHAHAH. All you two have proven is that there's no point in arguing with people who choose food-related pseudonyms. It seems you're both hung up on the acronym SDK, which isn't surprising since your definition is conflicted and inaccurate to begin with. It's simply software that is used to create more software. Apply that to the context of the source code SDK and it still applies and holds true.
  37. Thrax
    Thrax This is quickly devolving into personal attacks. Let's be a little more gentlemanly, eh?
  38. lordbean
    lordbean
    chizow wrote:
    HAHAHAH. All you two have proven is that there's no point in arguing with people who choose food-related pseudonyms. It seems you're both hung up on the acronym SDK, which isn't surprising since your definition is conflicted and inaccurate to begin with. It's simply software that is used to create more software. Apply that to the context of the source code SDK and it still applies and holds true.

    That is exactly the point we are making - an SDK is simply software to facilitate making more software. It is not the source code to the PhysX engine itself. This means that other companies cannot purchase the PhysX core in order to port it or tinker with it.

    Your argument is in conflict with itself. I suggest you re-read Ardichoke's post describing what an SDK is and is not - it might clarify some things for you.
  39. lordbean
    lordbean
    Thrax wrote:
    This is quickly devolving into personal attacks. Let's be a little more gentlemanly, eh?

    I'm doing my best to be nothing but. :)
  40. chizow
    chizow
    Thrax wrote:
    This is quickly devolving into personal attacks. Let's be a little more gentlemanly, eh?
    There's nothing personal from my standpoint, his unwillingness to acknowledge such simple (and trivial) facts like the PhysX Source SDK being available for $50K can only be interpreted as dishonesty or profound ignorance.
    lordbean wrote:
    That is exactly the point we are making - an SDK is simply software to facilitate making more software. It is not the source code to the PhysX engine itself. This means that other companies cannot purchase the PhysX core in order to port it or tinker with it.
    Wrong again; the source code SDK isn't just the HL (high-level) source code, it also includes additional software tools to facilitate debugging and compiling, which again satisfies the requirements of an SDK by the correct definition.....
  41. Ryder
    Ryder Chizow, et al.

    Chizow, you sound like you are defending the definition of an SDK, is that the case? Everybody seems to know that definition, I don't think that is in question.

    What is under debate is this: can AMD use that to make PhysX run on their cards? The answer is no.
  42. lordbean
    lordbean
    chizow wrote:
    Yep, the source is $50K as I'm pretty sure I've already mentioned, but you would only need the source if you wanted to integrate it into your own tools, recompile it for whatever reason for perhaps target hardware.

    This was the crux of your argument against AMD. You said they could purchase the source code to PhysX itself for $50,000 and then make their hardware accelerate it using the code. I believe I have shown that the source code to the PhysX core is, in fact, not available for sale from nvidia. I never contested that the SDK is there:
    lordbean wrote:
    The SDK is free, this is 100% true. However, the SDK only makes it easier to write code that interacts with the PhysX core, whose source is owned by nvidia, and has to be licensed (even if it's licensed for free) by other companies in order to run at the hardware level.
  43. chizow
    chizow
    RyderOCZ wrote:
    Chizow, et al.

    Chizow, you sound like you are defending the definition of an SDK, is that the case? Everybody seems to know that definition, I don't think that is in question.

    What is under debate is this: can AMD use that to make PhysX run on their cards? The answer is no.
    No I'm pointing out the selective application of SDK for the PhysX binary SDK vs. the PhysX source code SDK. They're both "SDK", which was a continuous point of contention that's clearly inaccurate.

    As for whether AMD can use it to make PhysX run on their cards, again, they wouldn't have to, the API is there, the binaries are there, they would just need to write a driver for their own hardware to interface the API. This is no different than them writing a driver for OpenCL or DirectCompute.
    lordbean wrote:
    This was the crux of your argument against AMD. You said they could purchase the source code to PhysX itself for $50,000 and then make their hardware accelerate it using the code. I believe I have shown that the source code to the PhysX core is, in fact, not available for sale from nvidia.
    And I've shown time and again you're clearly wrong, the source code is available for purchase. Why you continue to cling to this bit of misinformation is beyond me.

    http://rz3ovw.bay.livefilestore.com/y1pGf-TVoa3yRsHCFlp_GMGHVRUh17FnDNcFAYG0IRRc4-9M57F-OmES0ze7C76oy7-GspbYmZld03vyOu6Tj4U0nXfVLhnz_hN/PhysX%20pricing.jpg

    The only reason you would need it is if you wanted to port or compile to a different target API or hardware. But AMD wouldn't even need to go that far unless they wanted to port PhysX to OpenCL or DirectCompute for their own hardware, all they would have to do to circumvent those steps is to write a CUDA driver for their own hardware. Yes it might involve help from Nvidia or the aid of an additional SDK, but it'd be CUDA's SDK, which again brings me back to the point the emphasis on PhysX's source is irrelevant to begin with.
  44. lordbean
    lordbean
    chizow wrote:
    No I'm pointing out the selective application of SDK for the PhysX binary SDK vs. the PhysX source code SDK. They're both "SDK", which was a continuous point of contention that's clearly inaccurate.

    As for whether AMD can use it to make PhysX run on their cards, again, they wouldn't have to, the API is there, the binaries are there, they would just need to write a driver for their own hardware to interface the API. This is no different than them writing a driver for OpenCL or DirectCompute.


    And I've shown time and again you're clearly wrong, the source code is available for purchase. Why you continue to cling to this bit of misinformation is beyond me.

    http://rz3ovw.bay.livefilestore.com/y1pGf-TVoa3yRsHCFlp_GMGHVRUh17FnDNcFAYG0IRRc4-9M57F-OmES0ze7C76oy7-GspbYmZld03vyOu6Tj4U0nXfVLhnz_hN/PhysX%20pricing.jpg

    The only reason you would need it is if you wanted to port or compile to a different target API or hardware. But AMD wouldn't even need to go that far unless they wanted to port PhysX to OpenCL or DirectCompute for their own hardware, all they would have to do to circumvent those steps is to write a CUDA driver for their own hardware. Yes it might involve help from Nvidia or the aid of an additional SDK, but it'd be CUDA's SDK, which again brings me back to the point the emphasis on PhysX's source is irrelevant to begin with.

    It doesn't matter which source code AMD would need from nvidia, the point here is that they can't simply write a PhysX driver without having at least some of nvidia's proprietary code. AMD would need to license it from nvidia, which would make them responsible to nvidia for all future products they make which accelerate PhysX. This isn't a position AMD wants to be in, which is a point I've made again and again.
  45. chizow
    chizow
    lordbean wrote:
    It doesn't matter which source code AMD would need from nvidia, the point here is that they can't simply write a PhysX driver without having at least some of nvidia's proprietary code. AMD would need to license it from nvidia, which would make them responsible to nvidia for all future products they make which accelerate PhysX. This isn't a position AMD wants to be in, which is a point I've made again and again.
    Once again, I don't see how this can be made any more obvious. If AMD wanted to support PhysX on their hardware natively, they could take 2 approaches:

    1) Use the existing PhysX binaries, runtimes and API (CUDA) that are already in place on the PC and just write their own CUDA driver for their hardware. As Dave Hoff said, porting runtimes and drivers between the API in question, DC, OpenCL, CUDA, is "trivial" given how similar they are. Given their minimalistic approach, this would make the most sense, IF they wanted to support PhysX of course.

    2) Use the PhysX source code SDK and whatever API SDK of their choosing (OpenCL, DirectCompute, CAL lol) to port and compile PhysX for that API and their own hardware. This is no different than the process used to port PhysX from the PC to the 360, PS3, Wii, iPhone...and of course GeForce GPUs.... In this case, any licensing concerns would be agreed upon according to the licensing agreement and AMD would be in control of their own PhysX implementation.

    Both are viable paths that would work from different ends of the problem to come to a similar resolution, but obviously it takes a willing party to get there. But as we all know, AMD would rather feed the press with disingenuous reasons for why they won't support PhysX while leaving their customers to fend for themselves with hacks and workarounds.
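
    (On the claim that porting runtimes between DirectCompute, OpenCL and CUDA is "trivial" given how similar they are: the sketch below shows a toy elementwise-add kernel, once as CUDA and once as the OpenCL C string a host program would hand to the driver's runtime compiler. It is not PhysX code, just an illustration of how closely the two kernel dialects line up; it says nothing about who writes and ships the driver underneath, which is the actual point of contention in this thread.)

        #include <cuda_runtime.h>

        // Toy vector-add, first as a CUDA kernel...
        __global__ void addKernel(const float* a, const float* b, float* out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                out[i] = a[i] + b[i];
        }

        // ...and the same kernel in OpenCL C, as the string a host program would
        // pass to the driver for compilation. The differences are essentially the
        // address-space qualifiers and how a thread finds its global index.
        static const char* addKernelCL =
            "__kernel void addKernel(__global const float* a,\n"
            "                        __global const float* b,\n"
            "                        __global float* out, int n)\n"
            "{\n"
            "    int i = get_global_id(0);\n"
            "    if (i < n)\n"
            "        out[i] = a[i] + b[i];\n"
            "}\n";

        int main()
        {
            // Launch with n = 0 purely to show the CUDA fragment compiles and
            // runs; the bounds check means no memory is ever touched.
            addKernel<<<1, 1>>>(nullptr, nullptr, nullptr, 0);
            cudaDeviceSynchronize();
            (void)addKernelCL;  // the OpenCL string is shown only for comparison
            return 0;
        }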
  46. lordbean
    lordbean
    chizow wrote:
    Once again, I don't see how this can be made any more obvious. If AMD wanted to support PhysX on their hardware natively, they could take 2 approaches:

    1) Use the existing PhysX binaries, runtimes and API (CUDA) that are already in place on the PC and just write their own CUDA driver for their hardware. As Dave Hoff said, porting runtimes and drivers between the API in question, DC, OpenCL, CUDA, is "trivial" given how similar they are. Given their minimalistic approach, this would make the most sense, IF they wanted to support PhysX of course.

    Dave Hoff has pointed out that it would be easy for nvidia to port their existing APIs to OpenCL. AMD cannot just do it on their own, as they do not have access to the PhysX or CUDA source codes.
    chizow wrote:
    2) Use the PhysX source code SDK and whatever API SDK of their choosing (OpenCL, DirectCompute, CAL lol) to port and compile PhysX for that API and their own hardware. This is no different than the process used to port PhysX from the PC to the 360, PS3, Wii, iPhone...and of course GeForce GPUs.... In this case, any licensing concerns would be agreed upon according to the licensing agreement and AMD would be in control of their own PhysX implementation.

    The PhysX source code SDK is not the same thing as the source code to the PhysX core. The source code SDK is, simply put, the developer tools for PhysX, in pre-compiled form, to allow portability to any development environment. It does not contain any usable code toward porting PhysX itself to other hardware.
    chizow wrote:
    Both are viable paths that would work from different ends of the problem to come to a similar resolution, but obviously it takes a willing party to get there. But as we all know, AMD would rather feed the press with disingenuous reasons for why they won't support PhysX while leaving their customers to fend for themselves with hacks and workarounds.

    In fact, neither way is viable because in either scenario, nvidia needs to port it themselves or give AMD their source code. This would require that they either make their sources open, or license it to AMD. The latter is not desirable to AMD, which is why they currently don't support PhysX.
  47. chizow
    chizow
    lordbean wrote:
    Dave Hoff has pointed out that it would be easy for nvidia to port their existing APIs to OpenCL. AMD cannot just do it on their own, as they do not have access to the PhysX or CUDA source codes.
    Wrong, he's simply ducking a question that wasn't explicitly asked; it would be just as easy to port it from CUDA as it would be to port it to CUDA. That's the beauty of software and reverse engineering. Erwin Coumans affirms this in his interview with TechLegion, also saying porting between the 3 APIs would be "trivial".
    The PhysX source code SDK is not the same thing as the source code to the PhysX core. The source code SDK is, simply put, the developer tools for PhysX, in pre-compiled form, to allow portability to any development environment. It does not contain any usable code toward porting PhysX itself to other hardware.
    LMAO, I see the problem here. You're inventing some additional abstraction layer in your mind. As the slide I linked clearly stated and as I've repeated numerous times, the PhysX Source Code SDK includes the High-Level source code which is written in any one of the prevailing high-level programming languages in use today, most commonly C or some derivative.

    This code is then compiled to output in binary, which is then translated into machine code by the driver for target hardware. There is no additional step, there's the compiled free PhysX SDK binaries and there's the $50K PhysX Source Code SDK that includes the HL source code and debugging tools and software. That's it, there's no super sekret Nvidia martian source code language covered in green anti-AMD kryptonite.....

    http://rz3ovw.bay.livefilestore.com/y1pGf-TVoa3yRsHCFlp_GMGHVRUh17FnDNcFAYG0IRRc4-9M57F-OmES0ze7C76oy7-GspbYmZld03vyOu6Tj4U0nXfVLhnz_hN/PhysX%20pricing.jpg
    In fact, neither way is viable because in either scenario, nvidia needs to port it themselves or give AMD their source code. This would require that they either make their sources open, or license it to AMD. The latter is not desirable to AMD, which is why they currently don't support PhysX.
    Nvidia really doesn't need to do either; at some point AMD needs to be responsible for support of their own hardware. I've outlined the two ways they could make it happen. The onus isn't on Nvidia; they've created a value-add feature on their hardware and fully support it free of charge.
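
    (For the compile path described above - high-level source compiled to an intermediate form, which the driver then turns into machine code for whatever GPU is installed - a minimal sketch on NVIDIA's stack might look like the following. NVRTC is used here only as a convenient way to produce PTX in one self-contained file and is not part of the PhysX toolchain being argued about; the kernel is a throwaway example and error handling is omitted for brevity.)

        #include <cuda.h>
        #include <nvrtc.h>
        #include <cstdio>

        // Toy kernel source, kept as a string so it can be compiled at runtime.
        static const char* kSource =
            "extern \"C\" __global__ void addOne(float* data, int n) {\n"
            "    int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
            "    if (i < n) data[i] += 1.0f;\n"
            "}\n";

        int main()
        {
            // Step 1: compile the high-level source down to PTX, NVIDIA's
            // intermediate representation for device code.
            nvrtcProgram prog;
            nvrtcCreateProgram(&prog, kSource, "addOne.cu", 0, NULL, NULL);
            nvrtcCompileProgram(prog, 0, NULL);

            size_t ptxSize = 0;
            nvrtcGetPTXSize(prog, &ptxSize);
            char* ptx = new char[ptxSize];
            nvrtcGetPTX(prog, ptx);
            nvrtcDestroyProgram(&prog);

            // Step 2: hand the PTX to the driver, which JIT-compiles it into
            // machine code for the GPU that is actually installed.
            CUdevice   dev;
            CUcontext  ctx;
            CUmodule   mod;
            CUfunction fn;
            cuInit(0);
            cuDeviceGet(&dev, 0);
            cuCtxCreate(&ctx, 0, dev);
            cuModuleLoadData(&mod, ptx);
            cuModuleGetFunction(&fn, mod, "addOne");
            (void)fn;  // a real program would now launch it with cuLaunchKernel()

            std::printf("driver built machine code for device 0 from PTX\n");

            cuModuleUnload(mod);
            cuCtxDestroy(ctx);
            delete[] ptx;
            return 0;
        }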
  48. lordbean
    lordbean
    chizow wrote:
    Wrong, he's simply ducking a question that wasn't explicitly asked; it would be just as easy to port it from CUDA as it would be to port it to CUDA. That's the beauty of software and reverse engineering. Erwin Coumans affirms this in his interview with TechLegion, also saying porting between the 3 APIs would be "trivial".


    LMAO, I see the problem here. You're inventing some additional abstraction layer in your mind. As the slide I linked clearly stated and as I've repeated numerous times, the PhysX Source Code SDK includes the High-Level source code which is written in any one of the prevailing high-level programming languages in use today, most commonly C or some derivative.

    This code is then compiled to output in binary, which is then translated into machine code by the driver for target hardware. There is no additional step, there's the compiled free PhysX SDK binaries and there's the $50K PhysX Source Code SDK that includes the HL source code and debugging tools and software. That's it, there's no super sekret Nvidia martian source code language covered in green anti-AMD kryptonite.....

    http://rz3ovw.bay.livefilestore.com/y1pGf-TVoa3yRsHCFlp_GMGHVRUh17FnDNcFAYG0IRRc4-9M57F-OmES0ze7C76oy7-GspbYmZld03vyOu6Tj4U0nXfVLhnz_hN/PhysX%20pricing.jpg


    Nvidia really doesn't need to do either; at some point AMD needs to be responsible for support of their own hardware. I've outlined the two ways they could make it happen. The onus isn't on Nvidia; they've created a value-add feature on their hardware and fully support it free of charge.

    At this point, it's clear to me you're not making the distinction properly between the Software Developer's Kit and the actual product itself. There's no point in continuing this debate until you do a little more research to educate yourself properly in the aspects of software development.
  49. ardichoke
    ardichoke
    chizow wrote:
    Once again, I don't see how this can be made any more obvious. If AMD wanted to support PhysX on their hardware natively, they could take 2 approaches:

    1) Use the existing PhysX binaries, runtimes and API (CUDA) that are already in place on the PC and just write their own CUDA driver for their hardware. As Dave Hoff said, porting runtimes and drivers between the API in question, DC, OpenCL, CUDA, is "trivial" given how similar they are. Given their minimalistic approach, this would make the most sense, IF they wanted to support PhysX of course.

    2) Use the PhysX source code SDK and whatever API SDK of their choosing (OpenCL, DirectCompute, CAL lol) to port and compile PhysX for that API and their own hardware. This is no different than the process used to port PhysX from the PC to the 360, PS3, Wii, iPhone...and of course GeForce GPUs.... In this case, any licensing concerns would be agreed upon according to the licensing agreement and AMD would be in control of their own PhysX implementation.

    Both are viable paths that would work from different ends of the problem to come to a similar resolution, but obviously it takes a willing party to get there. But as we all know, AMD would rather feed the press with disingenuous reasons for why they won't support PhysX while leaving their customers to fend for themselves with hacks and workarounds.

    You're making the same debunked arguments over and over again going round and round in circles. AMD cannot just up and use PhysX because Nvidia owns the patent on it and would sue them into the dirt if they did. You seem to not understand how patents work. The basis of this whole thing boils down to this one simple FACT.

    PhysX is a patented middleware program to simulate physics - the key word there being patented. This means that even to make a compatible clone of it, AMD has to LICENSE THE RIGHTS TO DO SO from the owner, aka Nvidia. This would mean that Nvidia could exert power over AMD's products and drivers: they could refuse to license them, they could slow down releases by requiring that they inspect all new cards or drivers before launch, or they could do a number of other things to screw AMD over using the license for their proprietary, patented technology. That would be a monumentally stupid business decision on AMD's part, whereas developing an open-source physics API, while it won't let them put a stranglehold on the market like a proprietary API would, will advance physics acceleration on any graphics card without the need for licensing. In the end, PhysX screws everyone except Nvidia, whereas Bullet provides a level playing field that can move the industry forward.

    As for the SDK argument: buying an SDK, or even the source for an SDK, does NOT mean you have the RIGHTS to make a clone of a PATENTED system. Just because you buy the SDK for .NET doesn't mean you have the rights to create your own API that does exactly what .NET does, but on OSX, and then distribute it. If you did, the patent holder (Microsoft) would have legal justification to sue you. Not only that, but they would HAVE to sue you to maintain their patent, as the US legal system, in its infinite wisdom, has determined that if patent holders don't maintain and defend their patents, it's grounds for losing said patent. PhysX is no different than .NET in this regard. Buying the SDK only gives you tools to develop for the platform, not the RIGHTS to develop your own system to run PhysX or to put PhysX into any driver you haven't been licensed to put it into. If AMD did so, Nvidia would be forced to C&D them and/or sue them, or else risk losing their patent on PhysX.

    You can continue to try and argue against the aforementioned facts if you want; your arguments are getting stale, though, and have been thoroughly debunked by both bean and myself. I, for one, am sick of running around in circles with you, pointing out the flaws in the same couple of arguments you keep making. Additionally, there are no winners in Internet arguments. Ever. So how about just leaving it at that and moving on to something we can have a constructive discussion on?
  50. chizow
    chizow
    lordbean wrote:
    At this point, it's clear to me you're not making the distinction properly between the Software Developer's Kit and the actual product itself. There's no point in continuing this debate until you do a little more research to educate yourself properly in the aspects of software development.
    Nah, it's obvious you're hiding behind an incorrect interpretation or limitation of what you think an SDK is or should be. You seem to think the inclusion of "SDK" in reference to "Source Code" means it's not the actual, fo'real source code and it's somehow the fakie Bizarro source code. Got it.

    I've already clearly shown simply satisfying the requirements of "SDK" by no means disqualifies an SDK from also including the source code. They're clearly not mutually exclusive, as the Source Code SDK includes the HL source code along with additional software tools to facilitate debugging and integration for whatever target API intended.

    But you're right, it's probably best to leave it at that until you've updated your frame of reference and familiarized yourself better with the material. Your inability to digest and get past these trivial details and facts makes it pointless to continue.

    In any case, if you're interested in better understanding the "basics of software development", you can probably start by posting such a simple inquiry here:

    http://www.gamedev.net/community/forums/forum.asp?forum_id=31

    Those guys are actually knowledgeable and mostly friendly, until you repeatedly demonstrate an unwillingness to acknowledge simple facts and realities.....
  51. chizow
    chizow
    ardichoke wrote:
    snip

    I glanced over your reply and it seems to focus on licensing and PhysX's proprietary nature. I haven't ignored any of it; there are undoubtedly agreements and concessions that would need to be made by AMD, and it's very possible that ship has sailed given AMD's very public criticisms of PhysX in the press. I simply haven't covered them in detail because I already mentioned it in earlier posts:

    http://www.bit-tech.net/news/hardware/2008/12/11/amd-exec-says-physx-will-die/1

    http://www.tgdaily.com/content/view/38392/118/

    http://www.extremetech.com/article2/0,2845,2324555,00.asp

    http://www.bit-tech.net/custompc/news/602205/nvidia-offers-physx-support-to-amd--ati.html

    The gist of those links is that Nvidia has offered to support efforts to get PhysX running on AMD hardware, but AMD declined for clearly disingenuous reasons, and here we are today. Obviously any attempt at porting PhysX to AMD hardware would be accompanied by the appropriate licensing concerns, but these obstacles clearly aren't prohibitive given that PhysX is fully supported on a wide variety of hardware platforms, including all 3 major consoles.
  52. primesuspect
    primesuspect We could print this out on paper in 10pt and kill 25 trees.

    This is like a novel. :D
  53. ardichoke
    ardichoke Yes, I don't know what an SDK is. I only have a degree in computer science. Clearly I'm ignorant as to the business of software development. I usually refrain from making the following statement as I generally find that both sides of an argument usually have some merit to them. In this case though I can find none in yours. You're wrong and I have better things to do with my time than to try and point out all the ways in which your arguments are flawed, devoid of facts and proof or just plain bullshit. If lordbean wants to continue trying to point out the errors in your logic to you, more power to him, but I'm through beating my head against the brick wall of ignorance you have built around yourself.
  54. chizow
    chizow
    ardichoke wrote:
    Yes, I don't know what an SDK is. I only have a degree in computer science. Clearly I'm ignorant as to the business of software development.
    Thanks for the background info; that doesn't change the fact that your expansion of the acronym "SDK" is incorrect. It just would've been -2 points or whatever on your way to that CS degree, in perhaps Intro to Software Development 100. Not to say it isn't a noble or grandiose aspiration, if we could all hope to create new standards each time we touched an SDK!
  55. mondi
    mondi For what it's worth - chizow is correct here.
  56. ardichoke
    ardichoke
    mondi wrote:
    For what it's worth - chizow is correct here.
    My what an informed and factually supported statement you've provided. Oh wait...
  57. Linc
    Linc
    ardichoke wrote:
    My what an informed and factually supported statement you've provided. Oh wait...
    You don't even know what you just stepped in. :bigggrin:

    mondi once manually edited my video driver to fix the support for my obscure motherboard. ;D
  58. ardichoke
    ardichoke He manually edited a binary file? I'd be amazed to see someone actually do that.
  59. Mt_Goat
    Mt_Goat
    ardichoke wrote:
    He manually edited a binary file? I'd be amazed to see someone actually do that.

    Mondi may not post much as of late but he has been around a long time. He is also extremely knowledgeable as well as creative. I have no doubt that he manually edited a binary file. We have had several talented people here who have done such things.
  60. ardichoke
    ardichoke
    Mt_Goat wrote:
    Mondi may not post much as of late but he has been around a long time. He is also extremely knowledgeable as well as creative. I have no doubt that he manually edited a binary file. We have had several talented people here who have done such things.
    I'm not questioning his creativity or how knowledgeable he is. There were people I went to college with whom I saw pull off amazing feats of coding genius that made me question my worth as a human being. I've never seen anyone edit a binary executable by hand, though. Forgive me, but I'm skeptical that he did this without breaking anything while actually expanding functionality, especially on a graphics driver, which tends to be in the tens-to-hundreds-of-megabytes range. I find it far more believable that he tweaked an inf or ini file to get Windows to load the driver properly. An inf/ini file != driver.
  61. Linc
    Linc Heh, no it wasn't a binary.

    My point was simply that I'd personally trust mondi's brevity on this one and, since you don't, I'd at least have the foresight not to take a sarcastic jab at him.
  62. ardichoke
    ardichoke Because no one has ever taken a sarcastic jab at anyone on this site before EVER. I've been the recipient of many a sarcastic jab myself and taken it (mostly) in stride. If you're going to jump in to defend one person from sarcasm then start doing it for everyone.
  63. Thrax
    Thrax There's a difference between a sarcastic jab, and jabbing at an internationally-recognized visual artist with a deep knowledge of GPUs, their many SDKs, and whose last project was writing his own 3D engine. ;) Look before you leap.
  64. ardichoke
    ardichoke Look at what exactly? His profile says he's from Stockholm and naught else. Am I supposed to run background checks on everyone before I post anything? Did any of you do a background check on me before posting snarky comments to/about me? Furthermore, were you not agreeing with the fact that it would be stupid for AMD to adopt PhysX because they would have to license it from Nvidia in IRC the other day? Or have I been on hallucinogens for the past week?
  65. chizow
    chizow
    ardichoke wrote:
    Look at what exactly? His profile says he's from Stockholm and naught else. Am I supposed to run background checks on everyone before I post anything? Did any of you do a background check on me before posting snarky comments to/about me? Furthermore, were you not agreeing with the fact that it would be stupid for AMD to adopt PhysX because they would have to license it from Nvidia in IRC the other day? Or have I been on hallucinogens for the past week?
    lol, I just found that extremely funny given your self-admitted proficiency with Google intarweb detective work.
  66. ardichoke
    ardichoke Doesn't take much to plug "chizow" and "nvidia" into a browser and see the pages of rabid "lolomg nvidia can do no rong u guise. Ur all dumb for saying anything bad about my lord and master" you spread everywhere you go. Considering that every post you've made here has either been attacking someone that disagrees with you or trying to convince everyone that Nvidia is god and AMD is a bunch of evil little goblins it's not all that surprising that someone would eventually Google that. I really have nothing else to say to you besides you have no credibility with me and never will, oh and your misguided diatribes have done nothing to redeem Nvidia in my eyes.

    I've never used an ATI/AMD video card up until this point, but I'll be making the switch at next upgrade due to NVidia's stupidity and stubborn insistence that people should just plug their nose and swallow their closed standards instead of developing open ones which allow a level playing field for the competition.
  67. chizow
    chizow
    ardichoke wrote:
    Doesn't take much to plug "chizow" and "nvidia" into a browser and see the pages of rabid "lolomg nvidia can do no rong u guise. Ur all dumb for saying anything bad about my lord and master" you spread everywhere you go. Considering that every post you've made here has either been attacking someone that disagrees with you or trying to convince everyone that Nvidia is god and AMD is a bunch of evil little goblins it's not all that surprising that someone would eventually Google that. I really have nothing else to say to you besides you have no credibility with me and never will, oh and your misguided diatribes have done nothing to redeem Nvidia in my eyes.
    I really don't have any problems with you or anyone else disagreeing with me or my own opinions, provided they back them with actual facts and coherent arguments. You've repeatedly failed in this regard, which is why it's so easy to dismiss your arguments as nothing short of comedy. Your shot at mondi was just another example where you clearly assume far too much for someone who knows so little.
    ardichoke wrote:
    I've never used an ATI/AMD video card up until this point, but I'll be making the switch at next upgrade due to NVidia's stupidity and stubborn insistence that people should just plug their nose and swallow their closed standards instead of developing open ones which allow a level playing field for the competition.
    Yay! Rage buys are always the best way to show what's what and who's boss. Vote with your wallet! You'll really show me then! But not to sound cliche or rain on your parade....what does it accomplish when it's all said and done? You still won't have these features and basic levels of support for your hardware. And when that frustrates you, will you complain to AMD? Or will you blame it on Nvidia?
  68. Thrax
    Thrax I think this thread has pretty much run its course. Don't you agree, gentlemen? At this point, it's devolved into back-handed baiting.
  69. ardichoke
    ardichoke I agreed to that point at least a half dozen posts ago. However as I've pointed out before I'm rather easy to troll.
