
NVIDIA’s take on AMD’s open source Bullet Physics

Last Wednesday, AMD announced a partnership with Pixelux Entertainment to advance an open source physics initiative built around Bullet Physics. The engine’s GPU acceleration is being written in the vendor-neutral OpenCL and DirectCompute languages, which means games that use Bullet could run physics on ATI and NVIDIA cards alike.
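
To make that vendor neutrality concrete, here is a minimal sketch of our own (illustrative C++ using the standard OpenCL host API, not code from AMD, NVIDIA, or the Bullet project) that lists every OpenCL-capable GPU on a system. The same host code runs against AMD’s or NVIDIA’s OpenCL driver, which is exactly why an engine whose GPU kernels are written in OpenCL can target both vendors without separate code paths.

// Minimal sketch: enumerate every OpenCL-capable GPU on the system.
// The identical code works against AMD's or NVIDIA's OpenCL driver.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);               // count installed OpenCL platforms
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms) {
        char vendor[256] = {0};
        clGetPlatformInfo(platform, CL_PLATFORM_VENDOR, sizeof(vendor), vendor, nullptr);

        cl_uint numDevices = 0;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices);
        if (numDevices == 0) continue;                         // this platform exposes no GPUs

        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, numDevices, devices.data(), nullptr);

        for (cl_device_id device : devices) {
            char name[256] = {0};
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("%s: %s\n", vendor, name);             // e.g. "NVIDIA Corporation: GeForce ..."
        }
    }
    return 0;
}

An OpenCL-based engine would go a step further, handing its simulation kernels to the driver at runtime via clCreateProgramWithSource and clBuildProgram, so they are compiled for whichever GPU is actually present.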

We already know AMD’s perspective on the new initiative, so we turned to NVIDIA to get their thoughts on what Bullet means to them and to the market at large. We heard from NVIDIA’s Director of Technical Marketing, Tom Petersen.

Icrontic: Does NVIDIA intend to support Bullet Physics, as it is based on open industry standards which NVIDIA supports?

Tom Petersen: NVIDIA does support Bullet (we met with Erwin at [The GPU Technology Conference]). We like any software or API that makes it easier for anyone to use GPUs more effectively. As a matter of fact, according to Erwin (the creator of Bullet), he uses NVIDIA GPUs to develop his code. He even provided a quote for us to that effect:

“Bullet’s GPU acceleration via OpenCL will work with any compliant drivers, we use NVIDIA GeForce cards for our development and even use code from their OpenCL SDK, they are a great technology partner.”

Erwin Coumans,
Creator of the Bullet Physics Engine

IC: What contributions, if any, does NVIDIA intend to make to the Bullet Physics project?

TP: We will continue to provide any support we can to the Bullet team. Right now they are leveraging the OpenCL drivers provided by NVIDIA.

IC: How does NVIDIA feel consumers will be impacted by the creation of a third physics engine?

TP: NVIDIA supports the use of GPUs to enhance PC gaming. Since Bullet can do that for some developers on some games, we are supportive. In parallel, NVIDIA will continue to innovate with PhysX on our GPUs. We will provide a complete solution, including performance tuning, development tools, content tools, and developer relations, and it delivers huge benefits to our customers in games like Batman: Arkham Asylum.

IC: Is Bullet Physics the “right answer” to the incompatibilities between the PhysX and Havok engines?

TP: I don’t think there is one answer. Each physics implementation has its own strengths. NVIDIA’s belief is that great performance and a complete content solution are required to be compelling. We have focused much of our efforts on APEX to make use of PhysX easy for game designers.

In short, NVIDIA is investing in innovation and we support independent efforts that do the same.

Petersen’s answers reveal that the ecosystem surrounding Bullet is already healthy and multi-vendor. More than that, they speak to the industry’s slow migration towards hardware-agnostic physics engines that compete on their features rather than on the cards they run on.

In many ways, physics is now headed down the trail once blazed by graphics. Vendor-neutral APIs like DirectX and OpenGL have allowed games to proliferate without the nasty divide of hardware exclusivity. Imagine what gaming might be like today if the industry hadn’t recognized the value of open APIs; industry-defining titles like Crysis could easily have been limited to one brand of GPU, if there had been financial incentive to develop the engine at all.

While the growth of GPU physics has been stunted by this brand of divisiveness, it’s clear that the rift is beginning to close. Perhaps one day, like Crysis for its engine, we might revere a game for the accuracy of its physics. That time is coming, and it’s coming soon.

Comments

  1. ardichoke
    The only question I'd have for Nvidia would be this: given the past trend for APIs to move away from closed standards and towards open ones, why in the hell are you wasting time continuing to develop the closed standard PhysX instead of contributing to open standards that have a better chance, historically speaking, of being widely adopted?

    Good article though, Thrax.
  2. Anon Y Mous The real question is, why isn't nVidia working to port PhysX over to OpenCL so it could be hardware agnostic and any gamer could enjoy the benefits it provides to a given title?
  3. lordbean
    I don’t think there is one answer. Each physics implementation has its own strengths. NVIDIA’s belief is that great performance and a complete content solution are required to be compelling. We have focused much of our efforts on APEX to make use of PhysX easy for game designers.

    If you ask me, "Complete Content Solution" implies that content created using the medium should run on any available hardware, including AMD. Tom Petersen's entire statement is in conflict with itself: "oh, we support OpenCL, we think it's a great idea, but we're gonna keep pushing PhysX hoping against hope that it'll knock AMD out of the market." Hypocrite.
  4. Thrax
    Factoid: NVIDIA has openly been discussing an OpenCL port of PhysX since May.
  5. lordbean
    Discussing and doing are two very different things.
  6. name It would be nice to see games come out requiring today's video cards.
    Crysis came out in 2006 and almost no one had a powerful enough PC to play that game on max settings.
    Now in 2009, PC games are coming out and almost everyone can run them on max settings, and what's even more sad is that a 2006 game has better graphics than a 2009 game.
    My PC has an Nvidia 9800GTX+ and I can run anything, even Crysis Warhead, on max settings.
    If one Nvidia 9800GTX+ can give me Crysis graphics, I want to see what two Nvidia GTX 285s will give me.
  7. James Wang [NVIDIA] We want great features to come to games and GPUs as fast as possible. CUDA C and PhysX do just that. Whether innovation comes through DirectX, OpenCL, CUDA C, Bullet, or PhysX does not matter to NVIDIA.

    Take GPU computing and CUDA as an example: we preferred to make that happen three years ago with CUDA, as opposed to waiting for OpenCL and DirectCompute. Look at the results that GTC highlighted or on the CUDA Zone website: speed-ups of hundreds of times in critical applications. Maybe not in the consumer space, but we had some pretty important breakthroughs in medical research, financial services, and lots of other areas thanks to moving the work to the GPU via CUDA C.

    PhysX is here TODAY; OpenCL physics solutions are not. GeForce users have cool PhysX effects in Batman: Arkham Asylum TODAY. AMD users are waiting. I would be mad, too.

    AMD does not support PhysX for their customers, and we don’t do QA for AMD. Supporting something that has never been QAed is not a good idea. Adding AMD GPUs to our QA stack would increase work and cost. Again, AMD does not support PhysX for their customers. AMD needs to get serious about their own specific investments in GPU accelerated physics and in GPU computing technologies in general.

    On supporting OpenCL, you could argue that no company has done more for OpenCL adoption:
    • NVIDIA has the only OpenCL driver for the GPU today.
    • NVIDIA’s Neil Trevett is the chairman of the Khronos Group that oversees OpenCL.
    • We were first to submit a driver to the Khronos Group.
    • We were first to give a driver to developers.
    • We were first to demo OpenCL on a GPU.
    • NVIDIA is the first and only company to provide a visual profiler for development of OpenCL programs.
    • NVIDIA is the first and only company to provide an OpenCL SDK for GPUs.
    • NVIDIA is the first and only company to provide a best practices guide for OpenCL programmers.

    We want to innovate as fast as possible. NVIDIA supports open standards. We also support standards that allow us to innovate quickly.
  8. blackpawn It'd be great if AMD would implement CUDA support for their GPUs. Any chance of this? :)
  9. ardichoke
    blackpawn wrote:
    It'd be great if AMD would implement CUDA support for their GPUs. Any chance of this? :)
    Not if the people running AMD have half a brain. As was thoroughly discussed in another thread earlier this week, for AMD to implement CUDA or PhysX in their drivers/cards they would have to license it from NVidia. This would put their technology at the mercy of their biggest competitor and would be about the stupidest business decision they could make at this point. Especially since their DX11 cards are 6 months ahead of NVidia's and DX11 pretty much makes CUDA moot.
  10. chizow
    ardichoke wrote:
    The only question I'd have for Nvidia would be this. Given the past trend for APIs to move away from closed standards and towards open ones, why in the hell are you wasting time continuing to develop the closed standard PhysX instead of contributing to open standards that have a better chance, historically speaking, of being widely adopted?

    Good article though Thrax.
    What past trends are you referring to? Surely not the mass exodus from OpenGL to overwhelming support for DirectX in the space of a few years? Or the inability of Linux to gain any traction over Windows despite it being "free and open" for years? Past trends show both users and developers gravitate towards the best supported standards, whether closed or open, price notwithstanding.

    Anyway, nice article Thrax; it echoes many of the same points I made in that "other" PhysX thread. :wink: Glad to get clarification on the topic from not one but two Nvidia reps on their stance when it comes to GPU-accelerated physics, and it once again confirms what we've seen so far. One company is dedicated to producing solutions for its hardware, the other is busy producing excuses.
  11. ardichoke
    Chizow, you don't understand the difference between an open standard and open source. DirectX is NOT open source, but it IS an open standard in that it will work with any graphics hardware, anyone can develop for it, etc. Linux is open source. Maybe you should make sure you know what you're talking about before you open your maw?

    Also, of course the article echoes your sentiment; the article is just Nvidia answering some questions. Doesn't change the fact that for AMD to do what Nvidia says they should would be monumentally stupid on their part. It's funny: a quick Google search on your user name shows that you go around to a lot of forums singing the praises of Nvidia and arguing against anyone who criticizes them at all. So the question is, do you work for Nvidia or one of their subsidiaries, or are you just delusional enough to think that Nvidia is a perfect company that never makes mistakes or puts profits ahead of its consumers?
  12. chizow
    ardichoke wrote:
    Chizow, you don't understand the difference between an open standard and open source. DirectX is NOT open source but it IS an open standard in that it will work with any graphics hardware, anyone can develop for it, etc. Linux is open source. Maybe you should make sure you know what you're talking about before you open your maw?
    Where did I make any mention of open source? I was simply giving clear examples where your claim "Given the past trend for APIs to move away from closed standards and towards open ones" simply doesn't hold true. I gave two examples of open standards getting slaughtered by proprietary ones; surely you can give one that backs your claim?
    ardichoke wrote:
    Also, of course the article echoes your sentiment, the article is just Nvidia answering some questions. Doesn't change the fact that for AMD to do what Nvidia says they should would be monumentally stupid on their part.
    What has Nvidia told AMD to do, other than perhaps suggest they should spend more time supporting their own hardware and less time crying about how other parties are somehow indirectly influencing their hardware?
    ardichoke wrote:
    It's funny, a quick google search on your user name shows that you go around to a lot of forums to sing the praises of Nvidia and arguing against anyone that criticizes them at all. So the question is, do you work for Nvidia/one of their subsidiaries or are you just delusional enough to think that Nvidia is a perfect company that never makes mistakes or puts profits ahead of its consumers?
    Oh how cute, I got myself an internet puppy dog. :) As I said in my very first post here, I'm genuinely interested in physics in games and have followed the developments since the beginning, so I know how dishonest AMD is on the topic. That's the extent of it, I want to see physics in games sooner rather than later so when AMD continually lies about what they've said and done in the press, I'm going to call them on it.

    Typically the news is filtered through the usual anti-Nvidia fansites who somehow bastardize AMD's unwillingness or inability to properly support their own hardware into wrongdoing by Nvidia. I simply followed the bread crumbs to the source (from Xbit in this case), and figured I would try to give some feedback to the authors so they could ask the questions that mattered rather than the questions AMD wants to answer.

    I do hope that next time Thrax, UPSLynx, or anyone else who has covered the topic recently gets Dave Hoff or anyone else at AMD on the horn, they ask the simple question that is never asked, or never answered: why won't AMD just support PhysX natively on their hardware to avoid all these problems? They could answer all the hypothetical tin-foil-hat licensing concerns you brought up as well. Surely AMD fans would also like the answers to these questions instead of being strung along with more questions and no real answers or solutions?

    As for your laughable claim about me being an Nvidia employee, lol... well, over time I've found there's really no faster way to admit defeat in an argument than to accuse someone of being an employee of some company in an attempt to discredit them just because you don't agree with what they have to say. It really should be a law, right up there with Godwin's Law, on how to lose at intarweb trolling. The reality of it is, if you can't discredit what I've said on the merits of your own arguments, it's probably because your arguments have no merit.
  13. Thrax
    There's no need to ask that question, because everyone knows the answer. It's not fiscally intelligent or responsible for AMD to do so, nor is propping up CUDA for any longer than it's needed.

    Soon PhysX, or any other physics engine, will be a commodity. Buy it, implement it, run it on any GPU.
  14. chizow
    Thrax wrote:
    There's no need to ask that question, because everyone knows the answer. It's not fiscally intelligent or responsible for AMD to do so, nor is propping up CUDA any longer than it is needed.
    But everyone doesn't know the answer; there are a lot of assumptions about licensing fees with no direct or definitive answer from anyone knowledgeable or authoritative. Hell, there was rampant misinformation and a widespread belief on these forums (and perhaps still is) that the PhysX source SDK wasn't available for purchase, as if it actually mattered.

    The links I provided were also ambiguous on the topic and when asked about it, AMD claimed they didn't know because they never actually contacted Nvidia about PhysX support. So if AMD never bothered to ask if it would cost them anything to license PhysX, how would you or anyone else know for sure?
    Thrax wrote:
    Soon PhysX, or any other physics engine, will be a commodity. Buy it, implement it, run it on any GPU.
    Perhaps; that is certainly the goal. My point is that soon could have been sooner, if it wasn't already a reality today. ;)
