NVidia vs ATI

edited October 2003 in Hardware
Is it just me, or does this seem like a lie?
I thought the Radeon 9800XT outperformed the 5950 Ultra, but then again, this could all be set up.
http://php.technobabble.com.au/gate.html?name=Content&pa=showpage&pid=23&page=11

Comments

  • croc_ New
    edited October 2003
    I don't really have much to say about this, just wanted to mess around with my sig.

    But from the other sites I have read regarding the 9800XT and 5950 Ultra, the XT usually outperforms. *shrug* I wish stores let you build systems and find out for yourself which really performs better ;)
  • edited October 2003
    I run a 5900U at close to 5950U speeds and I'm pretty happy with it, although I have no idea what a 9800 Pro or XT would do in here, so I have no point of reference.
  • Thrax 🐌 Austin, TX Icrontian
    edited October 2003
    Quake 3 is currently the only known exception to the 9x00 series being faster than its equivalent in the GeForce FX series.
  • edited October 2003
    Besides the fact that nvidia does very well in dx7 and dx8 games, and q3 is dx8. Now go take that same fx card and xt card and use something that uses ps2.0 and dx9, then come back and say the xt isn't faster :).
  • edited October 2003
    I get 48,000 in AquaMark3, 6800+ in 3DMark '03, and 18,900+ in '01 with a P4 2.4C at 3.04 GHz, so I really don't see the XT as being earth-shatteringly faster than my 5900U.
  • edited October 2003
    I might be mistaken, but don't the 9800s support 6xAA and 16xAF, while the 5900s support only 8xAA and 8xAF? So the Radeon cards will have better image quality, won't they?

    And the Radeons still manage to beat the 5900s even with these higher-quality settings...

    I never owned an Nvidia card, but I must say I loved my Radeon 9800 Pro.
  • edited October 2003
    Loved? What happened to it?
    And at 8xAF and 4xQAA I'm perfectly happy with the image quality in my games and never have any slowdowns.
    I'm just tired of everyone that's never seen a 5900U out of the box in person, let alone run one, capping on them.
    I've seen 9700 Pros, benched them, and benched 9800s soft-modded to Pros as well, and I like my 5900 better.
  • edited October 2003
    Cat 3.8s fried my 9800 Pro :bawling: and it will be missed; it died in action very bravely...

    But now I'm getting a Radeon 9800XT :D so I'm fine now.

    Oh, and I must say I love your sig; it makes me laugh every time I read it :)
  • edited October 2003
    Sorry to hear about your loss dude...I hate it when good parts go bad.
    You ought to send the card to ATI and let them do an autopsy on it to see exactly what happened to it.
    I like the new 9800XTs; they look like smokin' cards... what brand are you going to get?
    I can say I highly recommend Asus's cards.
  • edited October 2003
    Gimme a break... Kyle's anti-nvidia and everyone knows it.
    I give a crap less what he thinks about any part he reviews and that's an honest fact.
    I've built enough computers to form my own opinions about what works and what doesn't... I did it for a living until recently and plan on starting my own business soon.
    Al, have you ever tested a 5900U for yourself? Or Thrax? For that matter, has anyone else who keeps telling people how much they suck?
    I've built systems with high-end ATI cards and high-end Nvidia cards and I prefer Nvidia's drivers, stability and overclockability to ATI's, so I'll recommend Nvidia's cards over ATI's unless you're just anal retentive about having the "bleeding edge" of performance. I'm not even going to drag the whole Cat 3.8 mess into this, because there's a cure for that: just don't use them.
    I don't like ATI's cards and that's it, but I'm not going to sit here and try to assassinate them.
  • SimGuy Ottawa, Canada
    edited October 2003
    madmat had this to say
    Gimme a break... Kyle's anti-nvidia and everyone knows it.
    I give a crap less what he thinks about any part he reviews and that's an honest fact.
    I've built enough computers to form my own opinions about what works and what doesn't... I did it for a living until recently and plan on starting my own business soon.
    Al, have you ever tested a 5900U for yourself? Or Thrax? For that matter, has anyone else who keeps telling people how much they suck?
    I've built systems with high-end ATI cards and high-end Nvidia cards and I prefer Nvidia's drivers, stability and overclockability to ATI's, so I'll recommend Nvidia's cards over ATI's unless you're just anal retentive about having the "bleeding edge" of performance. I'm not even going to drag the whole Cat 3.8 mess into this, because there's a cure for that: just don't use them.
    I don't like ATI's cards and that's it, but I'm not going to sit here and try to assassinate them.
    MadMat, I own both a Sapphire Atlantis 9800 Pro and a BFG Asylum GeForce FX 5900 Ultra.

    I've always been happy with the performance of my 9800 Pro in real-world games. Synthetic benchmarks don't really serve any purpose to me anymore and I applaud the way that Anandtech has gone with their video card benchmarking schemes: Real-world performance. I reserve judgement on all high-end cards until I can see how they perform @ 1024x768 with 2xAA & 4xAF, or @ 1280x1024 with 2xAA & 2xAF. Nobody plays just for pure frame-rate anymore.

    The days of NVidia, 3DFX & ATI "pushing the framerate through the roof" are over. 500 FPS in Quake 3 is useless when your maximum refresh rate is 75 Hz or whatever. I want it to look good: no jaggies, no faded textures, no degraded quality. Paying almost $600.00 for the 5900U and $350.00 for the 9800 Pro, I expect amazing performance and quality from each one of them.

    Since there are so many "quality" and "performance" issues with AquaMark & 3DMark on DirectX 9.0 enabled hardware, and the very real possibility that all major video card manufacturers are adding "optimizations" to their video drivers for the sole purpose of raising benchmarking numbers, you can't utilize these programs as adequate benchmarks of DirectX 9.0 enabled hardware.

    As for DirectX 8.0 games, they are optimized for the extreme performance of the GeForce 4 Ti & Radeon 8500/9X00 series, not the GeForce FX.

    The GeForce FX was a radical departure from the GeForce 4 Ti architecture. The change is almost akin to how Intel moved from their P6 architecture on the Pentium 3 to the NetBurst microarchitecture on the Pentium 4: a longer pipeline that is less efficient at generic computations, but when software is optimized for that specific brand of CPU, performance flies.

    easyrider had this to say
    Besides the fact that nvidia does very well in dx7 and dx8 games, and q3 is dx8. Now go take that same fx card and xt card and use something that uses ps2.0 and dx9, then come back and say the xt isn't faster :).
    Quake 3 is an OpenGL game title, not DirectX 8.0, and it doesn't utilize programmable pixel shaders. Quake 3 was designed for the hardware of 1999-2000: GeForce 256 DDR/GeForce 2 GTS, Voodoo 5 5500/6000 & ATI Radeon 7000/7200/7500. The sheer rendering power (Clock Speed x TMUs x Pipelines) of each of these cards determines how it performs in Quake 3, not shader performance. This is why the NVidia FX series keeps up with ATI's performance in this race. Look at what the 5900U is clocked at:

    450 MHz on the core x 8 Pipelines x 1 TMU = 3600 Megapixels per second theoretical fill rate.

    Whereas the 9800 Pro is only clocked at 380 MHz on the core x 8 Pipelines x 1 TMU = 3040 Megapixels per second theoretical fill rate.

    We don't need to compare memory bandwidth because neither of these cards is memory-bandwidth limited anymore.

    When DirectX 8.0 or 9.0 shaders do not play a factor, the inherent raw texture rendering speed advantage that the 5900 Ultra has over the 9800 Pro allows the 5900 Ultra to outperform all other competition. This is what happens in all Quake 3-engine based games.
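
    If you want to sanity-check those fill-rate numbers, here's a quick back-of-the-envelope sketch in Python. It's purely illustrative: the helper function is my own, and the clocks and pipeline counts are just the figures quoted above; real-world fill rate will always come in lower than the theoretical peak.

        # Theoretical pixel fill rate = core clock (MHz) x pixel pipelines x TMUs per pipe
        def fill_rate_mpixels(core_mhz, pipelines, tmus_per_pipe=1):
            return core_mhz * pipelines * tmus_per_pipe

        # Figures quoted above -- treat them as ballpark, not official specs
        cards = {
            "GeForce FX 5900 Ultra": (450, 8),
            "Radeon 9800 Pro": (380, 8),
        }
        for name, (clock, pipes) in cards.items():
            print(f"{name}: {fill_rate_mpixels(clock, pipes)} Mpixels/sec theoretical")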
  • croc_ New
    edited October 2003
    I remember playing Counter-Strike on my Voodoo 3 PCI at 640x480 with low-quality models etc., just to squeeze more fps out of it. Now it's not how much more fps I can get, it's how much more FSAA and AF I can get and still have playable frame rates. All these new games have amazing graphics; what fun is it if I have to play them at low res with no FSAA or AF? IMO it seems nvidia is trying a more brute-force speed approach, whereas ATI is leaning more towards image quality and compatibility.

    When's this new dual-GPU card coming out? I want to see how that performs/looks.
  • SimGuy Ottawa, Canada
    edited October 2003
    The XGI Volari Duo V8 Ultra & V5 Ultra are due out either in time for the Christmas holidays or just after. I'm not 100% sure on that though.

    Performance is supposedly at R9800 level, but I'll reserve judgement on that until I can see real-world performance (and not just a 3DMark03 number). :)

    Price is my concern on this one though.
  • edited October 2003
    The point is I haven't seen you hammering Nvidia when someone suggests it, unlike others I won't name here.
    And as far as IQ goes, I personally never see any difference while I'm playing a game, and IMHO if you're stopping to admire the scenery then you're going to get your butt fragged anyway, so what's the point?
    I play DFBHD as a sniper quite a bit, and all I can say is that when I get a chance to stop running from one camping spot to the next and actually look around, the IQ I get with all the sliders maxed and 8xAF and 4xQAA at 1024x768 is breathtaking, especially since on my old Ti4400 I had been running at that resolution with details lowered and no AF or AA just to get the frame rates I am getting now. (If that makes sense to you)
    When I bought my 5900U it went for about 90 bucks more than the 256MB 9800 Pro, so the price difference wasn't as great as you make it sound.
    And you basically said everything I did about why I am not interested in the 9800: to me it only offers higher frame rates and that's it... I don't see any difference in IQ, and no matter how much people sit there and say "look at this still picture in the upper right hand corner, for this frame the 9800 looks better," I couldn't care less... I don't game a frame at a time.
  • SimGuy Ottawa, Canada
    edited October 2003
    Well, I don't know how prices work in the US, but up here in Canada, you get gouged when purchasing an NVidia-based performance card.

    The prices I listed were retail purchase prices when I purchased them brand new off the shelf in Canadian dollars.

    Before I owned the 9800 Pro & 5900 Ultra, I myself had an Asus Ti4400, which was quite the step up from the TNT2 M64 I had in my old P3-550E system at the time. I know exactly what you mean when you witness the power of both the 5900 Ultra & 9800 Pro running insane resolutions with AA & AF enabled. Games look great and perform great.

    As long as the video card can put out admirable performance at the resolution & refresh rate you run your monitor at, that's all that counts.

    Anyone would be happy with either card, and both are a worthy upgrade to yesterday's cards; however, most enthusiasts choose ATI-based cards because they offer higher levels of performance, allowing the resolution & AA/AF levels to be cranked up while sustaining a playable frame-rate. Brand loyalty means nothing to me. If I can play 1600x1200 @ 4xAA & 8xAF in UT2003 on my system with a 9800 Pro but only 1600x1200 @ 2xAA & 4xAF on a 5900 U at approximately the same frame rates, I'll take the 9800 Pro (a case in point, which is exactly what happened).

    Why did I choose the 9800 Pro over the 5900 Ultra? Simply put, the 9800 Pro's DirectX 9.0-compliant shader design is inherently built around Microsoft's standard for DirectX 9.0 shaders (HLSL), whereas the 5900 Ultra's DX9 shaders are built around NVidia's Cg shader design.

    What's the difference, you ask? Quite simply, games that are not optimized for either platform will run faster on the 9800 Pro than on the 5900 Ultra, because game developers who utilize DirectX 9.0 extensions write them according to Microsoft's HLSL DirectX 9.0 standard, not NVidia's Cg.

    Look at Half-Life 2, a predominantly DirectX 9.0 shader-enabled game title. Preliminary performance numbers with Anti-Aliasing & Anisotropic Filtering enabled show the 9800 Pro/XT & 9600 Pro kicking the ass of the 5900 Ultra. Why, you ask? DirectX 9.0 HLSL compatibility is what HL2 is written for, not Cg.

    Now, to be fair, Valve is going back and optimizing the code for the game to be more "NV3x" friendly by adding a new graphics processing mode, which will help NV3x performance. However, the inherent design of the NV3x series of cards doesn't allow it to perform nearly as well on HLSL-written DirectX 9.0 games as ATI's DirectX 9.0-compliant offerings.

    That's why I recommend ATI cards and so many others swear by them. Remember 3DFX & Glide? Look where that got them. To survive, cards must support industry-standard, widely supported rendering modes. DirectX 9.0 & HLSL are that standard. Cg is not.
  • Jengo Pasco, WA | USA
    edited October 2003
    I hate it when people argue about which company is better; it's dumb. Just buy whatever you think is better. I have had many good experiences with both companies; if I like what I see, I buy it, regardless of company, brand, etc. Although I used to be an nVidia fanboy, my ways have changed.

    BUT I STILL LOVE MY GeForce FX 5200U!!! (I dunno why, but I just have this attachment to it. I know it sucks, but I still love it... I guess I'm weird...)

    :D
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited October 2003
    My understanding is that the 9800 series is still faster than the 5900 series FXs (9800 Pro/128 > 5900, 9800 Pro/256 > 5900 Ultra, 9800XT > 5950 Ultra) but I may be wrong. Both series of chips are very, very fast though, so it all comes down to personal preference in this case. My preference has always been for ATi, so I'd go with one of the 9800s, but if you like nVidia, there's nothing necessarily wrong with the 5900 series either...
  • edited October 2003
    I'll be getting an ASUS Radeon 9800XT, just because I've owned Radeon cards in the past and was very impressed with their quality and performance. I myself have never actually used an Nvidia card, but feel no real need to. :) But it's really all about preference.
  • croc_ New
    edited October 2003
    I had the following cards:

    Nvidia 4MB (or 8MB?) Permedia2 chip (I think)
    Voodoo 3 PCI
    Voodoo 5 AGP
    Nvidia GeForce 3 (original, non-Ti version)
    Nvidia GeForce 4 MX (onboard Shuttle)
    Nvidia GeForce 4 4200
    Nvidia GeForce 4 4800SE (Asus)
    Radeon 9700 Pro
    Radeon 9800 Pro (current)

    Every upgrade has been GREAT! I want my voodoo 5 back =(
  • SimGuy Ottawa, Canada
    edited October 2003
    I was always a die-hard NVidia fan up until recently and have utilized a plethora of video cards in my lifetime, ranging from the #9 GXE 1 MB ISA display adapter to the ATI Mach64 to the GeForce FX 5900 Ultra.

    NVidia's NV10 (GeForce 256 GPU) technology blew away everything when it was introduced on August 29, 1999. 15 million triangles/sec was unheard-of performance, and the ability to lift the T&L calculations off of the CPU made it even more impressive.

    At that point, ATI and 3DFX didn't have a suitable competitive product, so there was nothing else to choose from. Oh sure, you could pop in a Rage 128 Pro or a Voodoo 4 4500, but you couldn't touch the 32-bit rendering power of the NV10.

    NVidia then releases the NV15 (GeForce 2 GTS/Pro/Ultra) and again shows the world it's king when it comes to the power of the GPU: doubling the performance of the NV10 @ 31 million triangles/sec, utilising DDR memory for much higher bandwidth and introducing the first fixed-function per-pixel shading engine the world had seen, NVidia's Shading Rasteriser (NSR). Although not programmable, it did make performance increase substantially in games that called upon the NSR. 3DFX released their 5500 & 6000 series of Voodoo cards, but they could not match the super high frame-rates, in either 16-bit or 32-bit color, of the pure NV15 architecture. ATI's answer was the Radeon 7000 series cards. Shoddy driver support and abysmal performance put the Radeon 7000 series out of touch with the NV15.

    Fast forward to Q1 2001: NVidia releases the NV20, the GeForce 3, the world's first programmable pixel & vertex shader GPU, fully supporting DirectX 8.0 shader extensions and moving more and more of the rendering work off the CPU and onto the GPU. ATI's only response? The 7500 series to begin with. Then, the big one: ATI's Radeon 8500. Showing up the performance of the NV20 across the board and introducing Anti-Aliasing & Anisotropic Filtering techniques never seen before, the Radeon 8500 blasted NVidia's NV20.

    To counteract, NVidia launches the NV25, the beloved GeForce 4 Ti series. Sporting DirectX 8.1 shader compatibility, increased clock rates, improved AA & AF features, the cross-bar memory controller, Nvidia's improved LightSpeed Memory Architecture & Quincunx Anti-Aliasing, the NV25 put NVidia back on top of the 3D world. Instead of merely trying to catch up to NVidia, ATI decided to put their engineers to work designing a high-performance, truly next-generation DirectX 9.0 HLSL-compatible part. Enter the R300 series.

    Sporting 100% DirectX 9.0 HLSL compatibility, a 256-bit memory interface to reduce memory access latency & improve memory transfer speeds to the VPU, the world's first GPU support for 6xFSAA & 16xAF levels, and ATI's new Catalyst driver software, the ATI Radeon 9700 series (R300) was (and still is) an outstanding piece of 3D technology.

    Here's where we run into problems. NVidia, in an attempt to keep their 6-month product cycle, attempted to design a DirectX 9.0 compliant GPU before Microsoft finalized the DirectX 9.0 model. With NVidia wanting the 3D crown so badly again, they designed their own shader language (Cg) to be used by their next-generation NV30 part (the FX 5800/5800 Ultra). Not only were industry standards not part of the NV30 architecture, it was to suffer an even worse design flaw: a 128-bit memory interface, severely limiting the GPU's ability to transfer data from the DDR-II memory at a rate fast enough to satisfy the data-hungry NV30. TSMC's 0.13 micron process was not 100% reliable yet either. The NV30 was to be pushed out the door on this process, but delays there pushed the NV30 back even further.

    When the NV30 was released, performance was abysmal. Why? The 128-bit interface, a noisy cooler and piss-poor AA & AF performance thanks to that narrow interface. NVidia goes back to the drawing board. NV35.

    The NV35 now has a 256-bit interface, and AA & AF are no longer memory bandwidth limited because of the DDR memory and nice, wide 256-bit memory interface. However, the NV35 still suffers from the original shader-design mistakes made with the NV30.

    What am I getting at with all this? Yes, at one time NVidia did have the performance crown and was the top of the 3D world. ATI took that away from them and possesses it to this day.

    Personal preference or not, I don't understand why someone would invest nearly half a THOUSAND dollars into a piece of technology whose shader performance can be matched by a part made by another company for nearly 35% less. New games will use more and more shaders, moving away from the static textures utilized in all of today's and yesterday's games. Pixel Shader 2.0 performance will determine who has the card capable of higher performance in tomorrow's DX9-enabled games.

    If the NV40 finally utilizes the industry-standard architecture (DX9 HLSL), it will be a 100% matchup with the R350/360 cards and I will consider purchasing an NVidia card. Until then, the NV3x architecture is an inefficient, non-industry-supported design that is technically inferior to ATI's R300/350/360 DX9 HLSL-compatible design.
  • edited October 2003
    I never said that ATI's architecture was worse than Nvidia's... I just don't think that the performance increases with the ATI cards are worth the flaws that I see with their drivers (which are improving but still there), and from a user-friendliness point of view I'd rather deal with the hardware quirks of the Nvidia-based card than the software quirks of the ATI cards.
    I bought my card from www.newegg.com and at the time I bought it, it was $520.00 U.S. and the 256MB 9800 Pro (which had just been released by a few companies) was $499.00 U.S., so there wasn't much difference. Besides, I'm really tired of red video cards, mobos and parts in general, and the blue of my Asus 9950U looks a lot better in my computer.
    I took a lot of things into consideration when I put my system together such as DVD playback on a TV screen, driver interface, Overclocking potential as well as color (or colour if you will) of the component itself.
    As I said in another post I've had an ATI card in one of my boxes and I just didn't like it.
    When ATI comes out with an easier interface and a more precisely controlled driver, as well as learning that all cards don't need to be red, I might buy one, but until that happens or Nvidia closes its doors I will continue to choose Nvidia.
    Yeah, I know that some companies that partner with ATI make blue and black cards, but not in the 256MB 9800 Pro; I looked.
  • SimGuy Ottawa, Canada
    edited October 2003
    Both the Tyan Tachyon G9800 Pro and the Hercules 3D Prophet 9800 Pro are blue. Yes, they don't offer 256 MB Radeon 9800 Pros, and I have no idea why they don't, unless it is related to the fact that ATI only offers 256 MB XT boards and has discontinued their 9800 Pro 256 MB boards.

    NVidia's reference cards are always green for one sole purpose: their corporate colors are green. ATI's reference cards are always red for one sole purpose: their corporate colors are red.

    ATI's TV-out feature (SmartVision) works wonderfully well. I can output to 2 different monitors & an HDTV-compatible 65" Toshiba HDX HDTV at the same time with independent refresh rates & resolutions on each. As well, ATI's cards have always supported native hardware DVD decoding, long before it was introduced to NVidia's cards.

    Each driver series has its own "quirks," especially when it comes to game compatibility. ATI has problems with NeverWinter Nights, Blade of Darkness, Morrowind, Jedi Academy & ChameleonMark. NVidia has rendering issues with WarCraft III, HomeWorld 2, BF1942, X2: The Threat and Far Cry. The point is that each software suite is just as stable as the other. NVidia's drivers were always strong. ATI's drivers, at one point, did suck. However, those issues are long gone and they are now on par with NVidia's.

    NVidia's NV3x hardware design cannot be changed with a software update. The Catalyst driver series can always be updated and installed.
  • Thrax 🐌 Austin, TX Icrontian
    edited October 2003
    ATI's hardware has been superior to nVidia's both in terms of speed and quality of production since the introduction of the Radeon 8500 approximately 2 years ago.

    Prior to the introduction of the Catalyst-series drivers, nVidia had sure-fire wins with their cards and the detonator series. This was the summer of 2002. Upon the introduction of the Catalyst drivers with the Radeon 8500 in late summer of '02, the GeForce3 TI500, the 8500's competing card, started losing benchmarks. Gradually the situation with nVidia grew worse.

    The introduction of the GeForce4 came slightly too late for nVidia, not quickly enough after the introduction of the GeForce3 series (NV2x). ATI had been working on manufacturing the Radeon 9700 series cards (R300) since the introduction of the Radeon 8500 in early 2002. Meanwhile, nVidia had focused on releasing six cards in that same amount of time (the GF3 series and the GF4 Ti4x00 series). The existing development cycle allowed ATI to introduce their GeForce-killer a full seven months before the release of the GeForceFX 5800. This was in very late 2002, roughly in December.

    The FX 5800, based on the NV30 core, was to be nVidia's ace in the hole. nVidia was caught by surprise when the R300 (Radeon 9700 Pro) was released approximately six months before the 5800 hit the shelves. nVidia didn't adequately predict ATI's movements, and figured that ATI's card would've been slower than the 5800 due to driver issues and a few remaining video hardware issues. Mistake.

    The 9700 Pro schooled the 5800. The 5800 also suffered from the same fate that ATI once carried; the drivers for it were terrible. The fact that the 5800 could not use Detonator drivers based off the old Detonator architecture (as nVidia had done successfully in the past) crippled it in the preview and retail markets. The 5800 also suffered from a pathetic move in engineering -- nVidia tried to release their card with a 128-bit memory bus and ramp the memory speed up to compensate, while ATI's R300 cards had already moved to a 256-bit memory bus. With a 256-bit bus and a slower memory speed, you can process far more data at the SAME TIME than you can with a 128-bit bus and a faster memory speed. The 128-bit/fast memory combination forces data to be calculated in succession, as opposed to parallel data processing with 256/slow. The real-world memory bandwidth of 128-bit/fast memory was FAR less than 256-bit/slow memory, as the 9700 Pro had proven.
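
    To put rough numbers on that bus-width argument, here's a little Python sketch. The helper is mine, and the effective memory clocks are ballpark figures from memory, so treat them as approximations rather than official specs:

        # Peak memory bandwidth in GB/s = (bus width in bytes) x effective data rate
        def bandwidth_gbs(bus_bits, effective_mhz):
            return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

        # Approximate effective data rates -- ballpark, not exact specs
        print("FX 5800 Ultra, 128-bit @ ~1000 MHz:", bandwidth_gbs(128, 1000), "GB/s")  # ~16 GB/s
        print("Radeon 9700 Pro, 256-bit @ ~620 MHz:", bandwidth_gbs(256, 620), "GB/s")  # ~19.8 GB/s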

    The 5800 Ultra also suffered from an immature .13 micron fabrication technology at TSMC. The result was a VERY hot GPU, needing extravagant OTES cooling on the OEM level. This made the cards EXPENSIVE to produce and buy. Additionally, nVidia's drivers which controlled the GPU fan were buggy as hell. The drivers responded to function calls from applications and told the video card when to enable/disable the high-speed fan... but nVidia missed close to five dozen popular applications, and 5800 cores were cooking themselves alive left and right.

    Reports from all over the internet, reviews and previews alike, heralded the Goliath's fall to ATI's David. The 5800 sucked balls, and continues to do so. In two months' time, nVidia was forced to produce a revamp of the GeForceFX series cards. This development cycle is three times quicker than any cycle nVidia has ever maintained. The 5900 Ultra was introduced as the fix to the 5800's lacklustre performance. The 5800 was removed from reference on nVidia's website, and 5900 Ultras were shipped out en masse.

    The 5900 Ultra suffered this time not from poor core hardware, but from a poor rendering engine. Colors were muddy, textures were poor, anti-aliasing was broken on first runs of the driver, etcetera. The image quality of the 5900 Ultra cards (NV35) is as poor as the hardware was previously on the 5800. This immediately turned several million potential customers away. nVidia failed to follow a paradigm shift in the desires of the community. People no longer wanted high-speed video cards and nothing more, but cards that were slightly slower than they could be, with considerably improved image quality and much-improved special-feature engines (anti-aliasing and anisotropic filtering). ATI served these needs and nVidia continues to fail in this regard.

    nVidia's introduction of the GeForceFX 5900 Ultra was once again too late for it to hold a reasonable market share. The introduction of the Radeon 9800 Pro as a faster DirectX 9 capable GPU effectively killed the GeForceFX 5900's chances of gaining respect in the community. Furthermore, the GeForceFX 5900 Ultra featured the same DirectX 9 inadequacies and shortcomings that the 5800 came "complete" with. The 5900 Ultra is INCAPABLE of accurately/quickly rendering high-level shader languages. High-level shaders comprise a large portion of the rendering quality and wonderful imagery that fill today's and tomorrow's DirectX 9 games. The 5900 Ultra is approximately 6 times slower than even the 9700 Pro when rendering Pixel Shader 2.0 code. The fixes potentially offered by the DirectX dev team are specifically catered to nVidia simply because nVidia failed to follow DX9 specs, but MS refuses to cut the major player from their development cycles. Furthermore, the per-pixel color rendering of the NV3x series cards is so phenomenally slow that programmers are forced to compile separate binaries within their executables to force DirectX 8/9 hybrid code, which is extremely slow (5x slower than ATI parts) and extremely inaccurate. Floating point calculations, the basic principle for all calculations in computers, are also a difficult task for the 5800/5900 Ultra cards. It seems nVidia even failed to get the basics right.

    Subsequently, nVidia's effort to beat their main rival is an effort they have not yet succeeded in. They were caught cheating twice, fabricating superior scores for their cards in various applications to appear better than the competition, and the cheats were uncovered by even the sloppiest of amateur hardware sites. All respect that remained for and in nVidia ebbed quickly away, several dozen times faster than they had earned it since their start-up days in 1993. Moreover, nVidia's chief executive officer, Jen-Hsun Huang, has stated that nVidia's focus in the computer industry will shift away from the production of graphics cards, which remains one of the most lucrative segments of the business. This is a final recognition of their fate as a "has-been."

    nVidia is the tragic story that accurately parallels the old saying: "Oh, how the mighty have fallen."

    Inferior hardware and inferior drivers. nVidia GPUs are objectively worse than their ATI competitors.
  • edited October 2003
    Never mind the fact that the GF4 Tis stomped everything ATI made up until the 9700.
    And I don't care how much preaching is done about why I should buy another ATI product because it's not going to happen until I see what I said earlier happen.
    You run what you want and be happy and leave me to run what I want and let me be happy. And I don't see anything wrong in BF'42.
    'Nuff said.
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited October 2003
    The one and only time I use nVidia cards is in office machines. Not because they're more reliable than anything else (I haven't had problems with ANY graphics adapters in an office setting), but because the centralized driver architecture makes my (or the sysadmin's) life so much easier. Admittedly, ATi's Catalysts are similar, but not as helpful, since they don't support ancient cards (like the R128) while nVidia's drivers still support the TNT2 and stuff...

    Other than that, there's no compelling reason to go with nVidia's cards anymore. Having used cards with chips from both companies, the image quality in 2d and 3d is noticeably better on ATi's cards. The bottom line is that while the GeForce FX is a capable card, nVidia screwed it up royally.
  • SimGuy Ottawa, Canada
    edited October 2003
    I know that no matter what I say, you aren't going to change your ways or your happiness with an NVidia product. Yelling across the forum really doesn't do a damned thing. My point is that when you look at the R3x0 vs. NV3x architecture from a pure shader performance angle, the R3x0 wins hands down for one reason and one reason only: the R3x0 is DX9 HLSL compatible. The NV3x is not.

    I can't justify paying $520.00 USD for the FX when the R350 does the same for 35% less.

    The BF1942 issue affects the Ti4400 & 5950 only, but was fixed in the 52.16 ForceWare drivers.
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited October 2003
    You run what you want and be happy and leave me to run what I want and let me be happy

    Always a good policy. Like the people that come up to me all excited because they got a "great deal" on a new $500 HP/Compaq/Dell/Gateway that came with a monitor, a printer, speakers, a keyboard, and a mouse, as well as the computer, all for $500!!! :rolleyes:

    I simply tell them that I think they got ripped off, but if they're happy with it, then that's fine. And I'll say the same thing here... I think the 9800 is a better choice. However, it's your computer, you're the one that uses it, not me, so if you like the 5900, then by all means, enjoy it!

    However, I would like to point out that while the stock 8500 couldn't compete with the GF4Ti series, my overclocked 8500/64MB (275/275 stock --> 325 core/300 ram) is on par with, or better than, a Ti4400, and in some cases, a Ti4600 (depending on the benchmark).
  • TheLostSwede Trondheim, Norway Icrontian
    edited October 2003
    I have owned 4 Nvidia cards, all Ti4600s. 3 of them went to RMA and I sold the last. I have owned 3 ATI 9x00 series cards, and I am still using a 9800. From a hardware point of view, the ATIs are FAR better built than the Nvidias I have had. The Tis that died went out with the classic artifacts-at-boot problem. It never happened with the ATIs, and I can be pretty brutal on the cores. As I haven't seen or tried a 59xx card, I can't compare it with the ATIs, but I can tell you that I have never seen any better PQ at full blast and 1600x1200 in games.
  • Al_Capown Indiana
    edited October 2003
    Jengo had this to say
    I hate it when people argue about which company is better; it's dumb. Just buy whatever you think is better. I have had many good experiences with both companies; if I like what I see, I buy it, regardless of company, brand, etc. Although I used to be an nVidia fanboy, my ways have changed.

    BUT I STILL LOVE MY GeForce FX 5200U!!! (I dunno why, but I just have this attachment to it. I know it sucks, but I still love it... I guess I'm weird...)

    :D

    There's nothing wrong with arguing about which company is better, just as long as the conversation does not digress into utter crap with everyone getting mad at each other.

    From what I've seen in this thread, Simguy and Thrax have presented a wonderful argument and have certainly enriched the left side of my brain (the graphics card side) with specific reasons why a purchase of an R3xx series card is recommended over NV3x. In the past I would show a link to some benchmarks, or a link to The Register showing that nvidia's FX hardware will have problems with DX9 games. Now I know what I am talking about and can back it up whenever asked.

    I definitely appreciate the lengthy posts describing the past 4 years of graphics card advancements, since I did not get into computer hardware until approximately a year ago.

    Anyways, here are the video cards I have owned:
    PNY Nvidia TNT2 M64 32MB
    Crucial Radeon 7500 32MB
    Nvidia GeForce2 MX200 32MB
    XTasy GeForce3 Ti200 64MB
    Radeon 9700 Non Pro
    Radeon 9800 Pro 256 MB

    I can say every video card I have owned besides the 7500 has given me what I expected out of it. However, the 9800 256's inability to overclock is going to land it in my doghouse and eventually in my FS thread.

    In my time in the gaming/computer world (post-Catalyst), I have not yet noticed a difference between the Detonators and the Catalysts.

    On a final note, I'm hoping the NV40 will be a hell of a card, to once again increase competition. We, as consumers, can't afford for such a large market-share holder to drop from the rankings, just like in the CPU industry, but that's assuming the XGI cards will be another dud.