Revisited: The NVIDIA Quadro FX 3800 unleashed

Oftentimes, the hardest part of any gig is having to admit when you’re wrong. This is my mea culpa.

When I initially evaluated the NVIDIA Quadro FX 3800, I ran my tests without realizing that vertical synchronization, better known as vsync, was enabled. Leaving vsync on artificially limited the performance of the NVIDIA board while AMD’s FirePro boards advanced uninhibited.

The noteworthy detail in this story is that NVIDIA’s Quadro drivers install with vsync enabled by default, whereas AMD’s FirePro drivers leave the feature disabled by default. It only takes some simple control panel tweaks to disable vsync on the Quadro, but the performance difference is remarkable.

You may wonder: if vsync limits the performance of the GPU, why bother using it? Vsync limits the framerate of 3D applications to match the display’s refresh rate, which prevents screen tearing. For example, if vsync is enabled and the display’s refresh rate is set to 60Hz, the output will be limited to 60 frames per second. In the world of benchmarking, such a limitation is not welcome. In a professional setting, however, preventing screen tearing preserves image quality where it counts, and in a studio environment it would not be uncommon to see IQ chosen over raw performance.
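
For readers curious what the setting actually does at the application level, here is a minimal sketch using GLFW and OpenGL (my own illustration, not part of the benchmark suite): the swap interval is the knob vsync ultimately turns, and a driver control panel default like NVIDIA’s simply overrides whatever the application requests.

```c
/* Minimal sketch (GLFW assumed purely for illustration): how an OpenGL
 * application requests vsync via the swap interval. Driver control panels
 * can override this request either way. */
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    GLFWwindow *win = glfwCreateWindow(1280, 720, "vsync demo", NULL, NULL);
    if (!win) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);

    /* 1 = wait for one vertical refresh per buffer swap (vsync on: ~60 FPS
     * on a 60Hz panel); 0 = swap immediately (vsync off: uncapped). */
    glfwSwapInterval(1);

    while (!glfwWindowShouldClose(win)) {
        /* ...render the frame here... */
        glfwSwapBuffers(win);   /* waits on the display refresh when vsync is on */
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
```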

Before going further, I would like to apologize to NVIDIA for the mistake in my methodology, as well as apologize to our readers for inadvertently providing misleading figures. I especially appreciate the scores of readers who took the time to make me aware of my mistake. To make matters right, today’s evaluation of the NVIDIA Quadro FX 3800 will reexamine the card free of any artificial framerate limitations.

NVIDIA Quadro FX 3800

The NVIDIA Quadro FX 3800 is based on the 55nm GT200GL architecture, which is exclusive to NVIDIA’s workstation boards. This 1GB single-slot card contains 192 CUDA cores and sports impressive specifications besides:

  • 55nm process node
  • 600MHz core clock
  • 192 CUDA cores
  • 1GB 800MHz GDDR3
  • 256-bit memory bus
  • 51.2GB/s memory bandwidth
  • 1 x 6-pin PCIe power connector
  • 108W power consumption
  • 2 x DisplayPort, 1 x DL-DVI outputs
  • 3-pin stereoscopic (3D) support (optional)
  • Serial Digital Interface (optional)
  • SLI Multi-OS support
  • DirectX 10 support
  • $899 USD MSRP
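
For the curious, the 51.2GB/s figure falls straight out of the list above, assuming GDDR3’s usual double-data-rate behavior: a 256-bit bus moves 32 bytes per transfer, and 32 bytes × 800MHz × 2 transfers per clock works out to 51.2GB/s.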

For more information about the FX 3800’s features and specifications, see my original review, which covers them in greater detail.

Performance

The NVIDIA Quadro FX 3800 is intended to go head to head with AMD’s FirePro V7750. The FirePro V7750 is priced competitively with the Quadro FX 3800, but offers slightly fewer features in exchange for slightly faster specs. The main focus of the new benchmarks is the FX 3800 against the V7750, but the FirePro V8750 was also added for additional comparison.

Though the FX 3800 took a commanding lead in SolidWorks, besting even the beefy FirePro V8750, the major animation packages continued to show (sometimes significant) favor to the ATI cards. That continued lead can be attributed to AMD’s recent 8.702 FirePro drivers: in my original review, the Quadro FX 3800 without vsync would have bested the FirePro V7750, but the newest AMD driver posted performance gains averaging 20 percent, a very impressive jump for a simple driver update.

The Quadro FX 3800 also displayed some very impressive performance in real-time OpenGL tests. In the Cinebench R11.5 OpenGL benchmark, the Quadro FX 3800 not only dominated the FirePro V7750, but it even turned in better performance than the significantly more expensive FirePro V8750. This kind of OpenGL performance is quite dramatic coming from a high-end workstation solution.

Reconsidered

Benchmark numbers aside, the expanded feature set of the Quadro FX 3800 makes it a very attractive product. SLI Multi-OS, in particular, will be especially appealing to smaller studios that don’t have copious workstation real estate.

With a new perspective on price, features and performance, the NVIDIA Quadro FX 3800 is a stellar product that most 3D artists should easily find compelling. I was pleasantly surprised before, but now I’m impressed with the NVIDIA Quadro FX 3800, and I firmly recommend it as an outstanding workstation graphics solution.

Comments

  1. Sledgehammer70
    It sucks to find that one setting that held it back, but it's good to see a wrap-up review giving it the proper dues. Game on :)
  2. primesuspect
    Surprising that such an important setting is not highly publicized. I know I for one don't ever think to check default driver settings. That's not something I would have checked, and I'm probably an "average" user.
  3. UPSLynx
    I'm curious whether big studios prefer to operate with such a setting enabled or not. Knowing that might explain why it defaults to enabled.
  4. photodude
    I wonder what the importance of v-sync is, and why you might want it on, especially considering that having it on lowers performance.
  5. drasnor
    Enabling Vsync prevents tearing as the image in the graphics card framebuffer is updated in sync with the display's vertical refresh rate. When disabled, the image in the graphics card framebuffer can be updated while the screen is refreshing, which produces artifacts.
  6. UPSLynx
    If you play an FPS on a system that can render the game at a rate faster than 60 frames per second, you can spin the mouse around really quickly and, if you look closely, you'll notice tears and artifacts in the images.

    vsync prevents that; as Drasnor said, it locks the card to the display's refresh rate. It's all about throwing away overkill performance for clarity of visuals.
  7. Thrax
    Which makes one wonder, as I long have, why it's not the default for AMD.
  8. photodude
    Sounds like vsync is a valid and needed feature for games, to limit artifacts and tears. But in a workstation environment, for CAD/CAM/BIM or DCC, is vsync still a valid setting needed to keep "clarity of visuals" at the sacrifice of additional performance?

    If vsync is primarily a setting needed to avoid artifacts in game rendering, what logic is there in making it a default setting for a workstation graphics card?

    Overall, it seems vsync could be reimagined. Since it's about "throwing away overkill performance," would it not be better to program vsync not to throw away the extra performance, but to limit what portion of the GPU is being used and make the "extra performance" available for other tasks?
  9. Thrax
    The point to consider about vsync is that any FPS above and beyond the refresh rate is already thrown away. Sixty hertz is sixty screen refreshes per second, just as 60 FPS is sixty frames per second; it's not possible to show 61, 65 or even 65,000 FPS on a 60Hz display.

    Performance dips work just the same with vsync. If a scene that runs at 80 FPS uncapped dips to 60 FPS, a PC without vsync drops to 60; the same PC with vsync wouldn't budge from 60 FPS, because it was already pegged there. If that uncapped PC dropped 30 FPS down to 50, the PC with vsync would only lose 10 FPS. Enabling vsync doesn't mean the program or game automatically subtracts the same chunk of performance regardless of the cap; it just means that any dip that isn't big enough to pull an uncapped frame rate below 60 won't move the FPS at all with vsync.

    Vsync doesn't "sacrifice" or "throw away" performance; it just throws away the frames that get thrown away anyhow.
  10. UPSLynx
    And to add to what Thrax just said, viewports in 3D applications experience the exact same tearing/artifacting effects that a video game does.

    The GPU is still pulling its full weight with vsync enabled; it's simply taking extra steps to ensure that the user doesn't have an unwelcome experience.
  11. drasnor
    Thrax wrote:
    The point to consider about vsync is that any FPS above and beyond the refresh rate is already thrown away. Sixty hertz is sixty screen refreshes per second, just as 60 FPS is sixty frames per second; it's not possible to show 61, 65 or even 65,000 FPS on a 60Hz display.
    ...
    Vsync doesn't "sacrifice" or "throw away" performance; it just throws away the frames that get thrown away anyhow.
    That's not correct due to a misconception about how the picture is drawn on the screen. Imagine if you will two painters sitting side-by-side. One painter is rendering an original image (the graphics card) and the second painter is attempting to copy the original painter's image (the monitor). These two painters are painting the same way a computer draws: rasterizing the painting one pixel at a time, left to right top to bottom. When each painter reaches the bottom right corner of their canvas they go back to the top left corner and start over on a new image. In the case that your graphics card can render the image faster than your monitor, the original painter will be painting faster than the second painter and eventually we reach a point where portions of the second painter's image come from different paintings. This is fine for image quality if both paintings are more or less the same but if they're different you get artifacts. The exact same phenomenon occurs if the original painter is painting more slowly than the second painter for the same reasons.

    The only real difference between this analogy and the way it really works is that the graphics card refreshes its entire image at once instead of pixel-by-pixel the way the monitor does. The end effect is the same, though. Functionally, it's not that Thrax's "extra frames" are thrown away; it's that only parts of them get drawn.

    Unfortunately, while enabling Vsync will improve the image quality, it also introduces some latency penalties. When the graphics card is drawing frames faster than the monitor, it must begin drawing a frame after some delay so that both it and the monitor finish drawing at the same time. Anything that "happens" between the beginning of that delay and the end of the image getting drawn is lost. It's even worse if the graphics card is slower than the monitor; you may see the same image drawn multiple times while the graphics card renders the next frame. That sort of behavior is usually alright for viewports, but it sucks for video games since lots of things happen between frames getting rendered (you getting shot at, for instance).

    Bottom line, it's a trade-off.
  12. photodude
    Considering what has been said... it would be nice to see benchmarks for ATI and NVIDIA with and without vsync so they can be compared better and more equally.
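
If you want to put rough numbers to the thread above, here is a small, purely illustrative C sketch: the first part follows Thrax's point that frames beyond the refresh rate never reach the screen, using a simple min() model, and the second mimics drasnor's two-painters analogy by letting the framebuffer swap in the middle of a scanout, which is where a tear line comes from. The frame rates and scanline counts are invented for illustration, and real double-buffered vsync adds the latency and quantization drasnor describes.

```c
/* Rough, illustrative model of the discussion above -- not benchmark code. */
#include <stdio.h>

#define REFRESH_HZ 60.0
#define SCANLINES  1080                 /* scanlines per refresh */

/* Thrax's point: frames beyond the refresh rate never reach the screen. */
static double displayed_fps(double rendered_fps)
{
    return rendered_fps < REFRESH_HZ ? rendered_fps : REFRESH_HZ;
}

int main(void)
{
    /* Part 1: a dip from 80 to 65 FPS is invisible with vsync on;
     * a dip to 50 FPS is not. */
    double rendered[] = { 80.0, 65.0, 50.0 };
    for (int i = 0; i < 3; i++)
        printf("uncapped %5.1f FPS -> vsync shows %5.1f FPS\n",
               rendered[i], displayed_fps(rendered[i]));

    /* Part 2: with vsync off, a card rendering at 90 FPS replaces the
     * framebuffer while a 60Hz monitor is still scanning out, so part of
     * the visible image comes from the next frame -- that seam is a tear. */
    double line_time  = 1.0 / (REFRESH_HZ * SCANLINES); /* seconds per scanline */
    double frame_time = 1.0 / 90.0;                     /* card renders at 90 FPS */
    double next_swap  = frame_time;                     /* when the next frame lands */
    int    frame_id   = 0;                              /* frame currently in the buffer */

    for (int line = 0; line < SCANLINES; line++) {
        if (line * line_time >= next_swap) {            /* swap landed mid-scanout */
            frame_id++;
            next_swap += frame_time;
            printf("tear: scanline %d onward comes from frame %d\n",
                   line, frame_id);
        }
    }
    return 0;
}
```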
