The Voodoo5 returns... with DX9!

Geeky1 University of the Pacific (Stockton, CA, USA)
edited September 2003 in Hardware
Well, kind of anyhow.

Saw this @ Icrontic:
http://www.xgitech.com/index.htm

It's a "new" company, made up primarily of SiS staff from what I understand. It promises a lot, but knowing SiS, it'll probably under-deliver.

However, it's still a DX9 capable, dual-GPU graphics board.

Comments

  • GHoosdum Icrontian
    edited September 2003
    Neat. I don't quite know what to make of it, but, neat.
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited September 2003
    I don't either... yet. If SiS manages to live up to their promises (for once), this thing should kick @ss. But they don't give any kind of detail on its specs, other than a 350MHz core clock and support for DDR-II on one model... But it could (and knowing SiS, probably does) have like a 4-bit memory bus too... :rolleyes:
  • GHoosdum Icrontian
    edited September 2003
    ;D;D
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited September 2003
    found some info on the company:
    XGI Technology. Inc brings together the best and brightest individuals in the visual computing industry with a singular focus in mind: To design, manufacture and distribute a family of innovative graphics solutions for personal and professional applications that well exceed our customer’s expectations for performance, features and value.

    Founded in May of 2003 and headquartered in Taipei, Taiwan, XGI pulls from a deep reservoir of engineering and design talent stemming from its acquisitions of Trident Microsystems, and Silicon Integrated Systems’ graphics divisions. Guided by industry veterans with a deep understanding of the worldwide graphics processor market, XGI embodies a total commitment to providing a full range of high quality, cost effective products created for Professional Graphics, Multimedia Applications, 3D Gaming and Mobile Computing.

    At XGI, creating a revolutionary visual computing experience for our customers is not simply a one-time goal, but an ongoing passion. As graphics technology races into the future, visual computing has become more than just a means of viewing data. It has evolved into a medium of personal expression, empowering people to share ideas in a way that transcends the boundaries of culture and language. We are firmly committed to the creation of outstanding graphics components that will enrich the visual experience of our customers, and the world around them.

    They can't be serious. There's just no way they're serious. Why? They used "engineering talent" and "high quality" in the same sentence as "SiS" and "Trident Microsystems."
  • WuGgaRoO Not in the shower Icrontian
    edited September 2003
    HAH...trident
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited September 2003
    I foresee them building a TNT2 killer. Oh wait... that's been done. :rolleyes:
  • Thrax 🐌 Austin, TX Icrontian
    edited September 2003
    Who knows?

    The SIS735 chipset was a killer, and the SIS648DX would've been a killer too had Intel not jimmy-whacked it from existence.
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited September 2003
    Apparently the company bought SiS's and Trident's graphics divisions in May... So they're a separate company with former SiS staff, basically...

    We'll have to see, but the odds are against it being anything impressive, since neither SiS nor Trident has ever made a single semi-decent GPU...
  • Thrax 🐌 Austin, TX Icrontian
    edited September 2003
    Untrue.

    Your idea of decent is the next OEM's idea of the perfect card. Trident and SiS hold a CONSIDERABLE share of the market for cheap OEM graphics in laptops and cheap-as-hell computers.

    Are the cards "good"? Yes. Good at what they're designed to do.
    Are the cards good? No. They suck for anything more demanding than Starcraft.

    That said, I think they have the potential.
  • profdlp The Holy City Of Westlake, Ohio
    edited September 2003
    Geeky1 said
    ...neither SiS nor Trident have ever made a single semi-decent GPU...
    What, you didn't like that S3 Trio 64? ;D
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited September 2003
    You're right, "decent" depends on the use. However, there are some old SiS and Trident cards kicking around in some of the work machines... My experience with both has always been that they have:

    Crap-tastic image quality (2D & 3D)
    Crap-tastic resolution/refresh rate settings
    Great 3D-Deceleration

    Do they have potential? Absolutely. This thing might have two 256-bit memory buses (one per GPU) or something equally insane, but I doubt it. I really, really doubt it.
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited September 2003
    No, I didn't like the S3 Trio 64. Why? BECAUSE I'VE HAD TO FIND DRIVERS FOR THE GODD@MN THINGS, THAT'S WHY!!! :rant:

    SiS and Trident are possibly the only two companies in the world that I dislike more than Apple.
  • danball1976 Wichita Falls, TX
    edited September 2003
    The only SiS video card I had was a SiS 6326 AGP 2x with 8MB RAM, back in 1999. Not a card to be playing games on. I then switched to nVidia (a TNT2 32MB in mid-1999) and have used nVidia since. That Creative Labs TNT2 32MB was in my parents' computer until April this year, when I replaced it with an EVGA GeForce4 MX 440 64MB DDR AGP 8x.
  • GHoosdum Icrontian
    edited September 2003
    I'm about to upgrade the video in two of the PCs in my sig to ATi... the last one has a built-in SiS card...
  • MediaMan Powered by loose parts.
    edited September 2003
    Indeed this company is a little brother to SIS. Originally they were talking about using the XABRE chipset. Glad they didn't. There are about 4 companies involved in this and it will be interesting.

    I've already been hounding them for product. :)
  • Thrax 🐌 Austin, TX Icrontian
    edited September 2003
    What of SiS and the DeltaChrome?
  • SimGuy Ottawa, Canada
    edited September 2003
    The XGI Volari Duo V8 Ultra sounds impressive on paper.

    • Dual 256-bit DX 9.0 GPUs
    • 128-bit DDR/DDR-II memory interface
    • 4 Vertex Shader 2.0 units (DX 9 compatible)
    • 8 pairs of Pixel Shader 2.0 units (DX 9 compatible)
    • 16 sets of pixel rendering pipelines (2 for each pair of PS units)
    • AGP 1.0 to 3.0 support
    • Integrated thermal diode
    • 2x & 4x FSAA
    • Support for all DX mapping techniques, such as bump mapping and mipmapped cubic mapping

    The only thing I'm worried about is how XGI solved the problem of dividing the rendering work between the two GPUs. Is each GPU going to render alternate whole frames (i.e. split the number of frames required per second in half between the GPUs), or is each GPU going to render half of each frame (i.e. GPU 1 renders the ODD scanlines while GPU 2 renders the EVEN ones)? Maybe a custom, onboard SLI-style interface? Who knows. (The two schemes are sketched at the end of this post.)

    Both ATI & 3DFX tried both technologies back in the late 90's, with 3DFX the clear winner.

    If it lives up to the hype and is priced right (not à la Parhelia), it may push DX9 hardware prices down, which would be kind of nice. :)
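
    (A rough sketch of the two work-splitting schemes speculated about above, in Python. XGI hadn't said which, if either, it uses, so the function names and the 50/50 split are purely illustrative assumptions, not anything from XGI:)

    ```python
    # Two speculative ways a dual-GPU board could divide the work.
    # Nothing here reflects XGI's actual design; it just sketches the two
    # schemes mentioned above (alternate-frame rendering vs. 3dfx-style
    # scanline interleaving).

    def alternate_frame_rendering(frame_numbers):
        """Each GPU renders every other whole frame."""
        gpu0 = [f for f in frame_numbers if f % 2 == 0]
        gpu1 = [f for f in frame_numbers if f % 2 == 1]
        return gpu0, gpu1

    def scanline_interleaving(height):
        """Each GPU renders every other scanline of the same frame."""
        gpu0 = list(range(0, height, 2))   # even lines
        gpu1 = list(range(1, height, 2))   # odd lines
        return gpu0, gpu1

    if __name__ == "__main__":
        even_frames, odd_frames = alternate_frame_rendering(range(8))
        print("AFR:", even_frames, "vs", odd_frames)
        even_lines, odd_lines = scanline_interleaving(768)
        print("Interleaved: each GPU draws", len(even_lines), "of 768 lines")
    ```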
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited September 2003
    Key words in your post SimGuy:

    128-bit DDR/DDR-II Memory Interface

    One word: Eeeeeeeeewwwwwwww.
  • SimGuy Ottawa, Canada
    edited September 2003
    But is that 128-bit DDR/DDR-II for each chip (256-bit interface effectively)?

    Think about it... if each chip is only rendering 50% of the available data, each chip only needs a 128-bit memory interface to local VRAM.

    Does a Single VPU & 256-bit interface = the same as a Dual VPU & 2 x 128-bit interface when all other factors are constant? Theoretically, it makes sense.

    Again, that's all on paper :)

    We've seen the pure graphics performance that SiS can pull off :shakehead
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited September 2003
    But why only 128-bit?!?!? I clicked through the whitepapers real fast, and I believe it uses the GPUs the same way the V5k did (if I remember right): it basically interlaces the image and gives one chip the odd lines and the other the evens...

    Think about this for a second. If each chip had a 256-bit bus and was running the same 500MHz DDR-II the 5800 Ultra was, each chip would have 32 GB/s of memory bandwidth, for a total of 64 GB/s. The highest resolution most consumers are even remotely likely to run is 2048x1536... which means each chip would have to handle at MOST a 1024x768 image. With 32 gigs/second of memory bandwidth. Once again: 1024x768 image, 32 GB/s of memory bandwidth. With those kinds of resources, you could run 2048x1536 @ 8x FSAA w/ 16x ANISO (or whatever the highest quality settings are now) in 32-bit color and it should still be damn fast. (The arithmetic is sketched at the end of this post.)

    Can you imagine running it through Q3 at 640x480 in 16-bit color at minimum detail settings? It'd DECIMATE the benchmark... you'd probably end up bottlenecked by the GPUs themselves because they couldn't keep up with the memory! 400 FPS? With those kinds of resources, I wouldn't be all that surprised if it hit more like, oh, say... 800.

    That's why I don't like the 128-bit bus thing, regardless of whether it's over 1 GPU or both.

    //Edit

    AAAAGH! I meant 1024x1536, not 1024x768. :banghead:
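
    (A back-of-the-envelope version of the numbers above, in Python. The 256-bit bus and 500 MHz DDR-II are the hypothetical figures from this post, borrowed from the 5800 Ultra, not anything XGI has announced:)

    ```python
    # Hypothetical per-chip figures from the post above (256-bit bus,
    # 500 MHz DDR-II as on the GeForce FX 5800 Ultra) -- not XGI's specs.

    BUS_BITS = 256
    MEM_MHZ = 500        # DDR-II: data moves twice per clock
    DATA_RATE = 2

    per_chip_gbs = BUS_BITS / 8 * MEM_MHZ * DATA_RATE / 1000   # 32.0 GB/s
    total_gbs = per_chip_gbs * 2                                # 64.0 GB/s

    # With scanline interleaving, each chip draws half the pixels of a frame.
    pixels_per_chip = (2048 * 1536) // 2                        # 1,572,864

    print(f"{per_chip_gbs:.0f} GB/s per chip, {total_gbs:.0f} GB/s total")
    print(f"each chip covers {pixels_per_chip:,} pixels of a 2048x1536 frame")
    ```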
  • SimGuy Ottawa, Canada
    edited September 2003
    Why only 128-bit? To keep the cost down for one thing.

    Don't get me wrong, 32 GB/s of memory bandwidth with a GPU/VPU powerful enough to push the polygons required to saturate the memory bus would kick ass; however, that product would come at a premium even higher than the overpriced CrapForce FX 5900 Ultra.

    The Volari Duo V8 Ultra pushes a core clock speed of 340 MHz with a 16-unit texture pipeline. We can calculate the maximum theoretical fill-rate of each GPU/VPU unit as:

    340 MHz core speed x 16 texture units x 1 texture per unit = 5,440 MegaTexels/s, or 5.44 GigaTexels per second.

    That's more than DOUBLE the theoretical fill-rate of the GeForce 4 Ti4600 GPU, which is 2.4 GigaTexels/s (300 x 4 x 2). It's also nearly DOUBLE the theoretical fill-rate of the ATI Radeon 9800 Pro GPU, which is 3.04 GigaTexels/s (380 x 8 x 1).

    Looks like the memory on the Volari Duo V8 Ultra will be either 256 MB or 512 MB in DDR-1 & DDR-II configurations (HOLY ****). Running at a clock speed of 375 MHz (or faster), we can calculate the maximum memory throughput available for the card as:

    128-bit interface x [2 x (375 MHz memory speed)] / 8 = 12,000 MB/s, or 12 GB/s of memory bandwidth.

    That's approximately 16% more than the available memory bandwidth of a GeForce 4 Ti4600-based card, which is 10.4 GB/s (128 x [2 x 325] / 8). However, it's almost 50% LESS than the available memory bandwidth of the ATI Radeon 9800 Pro, which is 21.76 GB/s (256 x [2 x 340] / 8).

    As we can see, even though the GPU has some serious power behind it (16 pipelines with a 5.44 GigaTexel/s theoretical fill-rate), the memory subsystem of this card has effectively castrated its performance to sub-FX 5800 non-Ultra level (the arithmetic is recapped in the sketch at the end of this post).

    Although it may be able to play DX9 games @ 1024x768, you will be hard-pressed to turn ANY quality features on (i.e. AA & AF), as doing so will completely saturate the memory subsystem of this card, severely degrading performance.

    Nevertheless, it should be interesting to see how it actually performs in real life, as theoretical and real-life numbers seldom mix. :)

    //Edit: Spelling, punctuation, grammar and a little math error I made :D
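
    (The arithmetic above, wrapped in a small Python helper for anyone who wants to plug in other cards. The Volari figures, 340 MHz, 16 texture units and a 128-bit bus at 375 MHz DDR, are the ones quoted in this thread, not confirmed specs:)

    ```python
    # Recap of the theoretical fill-rate and bandwidth arithmetic above.

    def fillrate_gtexels(core_mhz, pipes, textures_per_pipe):
        """Theoretical fill-rate in GigaTexels/s."""
        return core_mhz * pipes * textures_per_pipe / 1000

    def bandwidth_gbs(bus_bits, mem_mhz, pumps_per_clock=2):
        """Theoretical bandwidth in GB/s; DDR moves data twice per clock."""
        return bus_bits / 8 * mem_mhz * pumps_per_clock / 1000

    cards = {
        "Volari Duo V8 Ultra (per chip)": (fillrate_gtexels(340, 16, 1),
                                           bandwidth_gbs(128, 375)),
        "GeForce 4 Ti4600":               (fillrate_gtexels(300, 4, 2),
                                           bandwidth_gbs(128, 325)),
        "Radeon 9800 Pro":                (fillrate_gtexels(380, 8, 1),
                                           bandwidth_gbs(256, 340)),
    }

    for name, (gtexels, gbs) in cards.items():
        print(f"{name}: {gtexels:.2f} GTexels/s, {gbs:.2f} GB/s")
    # -> 5.44 / 12.00, 2.40 / 10.40 and 3.04 / 21.76 respectively
    ```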
  • SimGuy Ottawa, Canada
    edited September 2003
    I should also note that the performance of this card can be directly attributed to XGI's implementation of the Dual GPU/VPU system.

    If each VPU is allocated 128-bit access to its own dedicated 128 MB area of VRAM, we can effectively DOUBLE the memory bandwidth of the card to 24.0 GB/s. This works simply because each VPU only has to process 50% of the frames required of a normal single-VPU graphics card. It's like the Voodoo 2s all over again, except on 1 PCB instead of 2 separate cards.

    With a 5.44 GigaTexel/s fill-rate for EACH VPU and 12.0 GB/s of available memory bandwidth for each VPU, the card is still memory-bandwidth limited, but not as severely as I suggested before, since I neglected to factor in that each VPU only processes 50% of the normal data load. Hence, each VPU only needs 50% of the memory bandwidth a single-VPU design would.

    Actually, this card may prove to be a challenge to the Radeon 9800 Pro, provided it actually delivers what it promises.

    God damn, time for me to brush up on my math skills. :)
  • Geeky1 University of the Pacific (Stockton, CA, USA)
    edited September 2003
    Seems illogical to me that you'd run 2 GPUs on something that you're trying to keep relatively cheap... Sounds to me like this is going to be another SiScrewup... but we'll have to see...
  • SimGuy Ottawa, Canada
    edited September 2003
    Geeky1 said
    Seems illogical to me that you'd run 2 GPUs on something that you're trying to keep relatively cheap... Sounds to me like this is going to be another SiScrewup... but we'll have to see...

    If they implemented a 256-bit memory interface for EACH VPU to 128 MB of local VRAM, it would cost substantially more. Yes, you would have 24 GB/s of memory bandwidth per VPU, essentially making it a card with 48 GB/s of memory bandwidth. With the theoretical fill-rate so high, you WOULD be able to run at insanely high AA & AF levels with the resolution cranked through the roof. :) (Both configurations are compared in the sketch at the end of this post.)

    Maybe a future idea for a Volari Duo V8 Ultra Super-Duper Edition :D
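
    (Same bandwidth arithmetic for the two configurations compared in this post, in Python. The 2 x 256-bit case is purely the hypothetical "Super-Duper Edition" above, not a real product:)

    ```python
    # Aggregate bandwidth if each VPU gets its own memory interface and
    # handles roughly 50% of the workload. 375 MHz DDR assumed throughout.

    def bandwidth_gbs(bus_bits, mem_mhz, pumps_per_clock=2):
        return bus_bits / 8 * mem_mhz * pumps_per_clock / 1000

    MEM_MHZ = 375
    for bus_bits in (128, 256):
        per_vpu = bandwidth_gbs(bus_bits, MEM_MHZ)
        aggregate = per_vpu * 2      # two VPUs, each with its own VRAM
        print(f"2 x {bus_bits}-bit: {per_vpu:.0f} GB/s per VPU, "
              f"{aggregate:.0f} GB/s aggregate")
    # -> 2 x 128-bit: 12 GB/s per VPU, 24 GB/s aggregate
    # -> 2 x 256-bit: 24 GB/s per VPU, 48 GB/s aggregate
    ```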
  • MediaMan Powered by loose parts.
    edited September 2003
    We won't know for the time being. Sample boards are not yet ready. At least that is what XGI tells me.
  • kanezfan sunny south florida Icrontian
    edited September 2003
    when it comes to companies like sis, i'll wait until the hardware arrives to pass judgment; in other words, they've never delivered anything but crap before. btw, i've never ever seen a xabre anywhere, not in action, not in a box at best buy, nowhere.
  • SimGuy Ottawa, Canada
    edited September 2003
    Results of the XGI lineup of video cards in 3DMark2003, on a 3.0 GHz Intel P4 platform with 512 MB DDR SDRAM:

    Volari Duo V8 Ultra - 5600+
    Volari Duo V5 Ultra - 4000+
    Volari V8 Series - 3000+
    Volari V5 Series - 2000+
    Volari V3 Series - 1000+

    Considering an ATI Radeon 9800 Pro pushes 5800+ on the same system, maybe XGI won't be one of those SiScrewups everyone keeps suggesting... :)
  • primesuspect Beepin n' Boopin Detroit, MI Icrontian
    edited September 2003
    I wonder how they got those benchies without sample cards being ready....
  • Thrax 🐌 Austin, TX Icrontian
    edited September 2003
    Sample boards means those for reviewers. That does not preclude alpha silicon and in-house boards.
  • primesuspect Beepin n' Boopin Detroit, MI Icrontian
    edited September 2003
    I would never trust an in-house benchmark.