The Voodoo5 returns... with DX9!
Well, kind of anyhow.
Saw this @ Icrontic:
http://www.xgitech.com/index.htm
It's a "new" company, made up primarily of SiS staff from what I understand. It promises a lot, but knowing SiS, it'll probably under-deliver.
However, it's still a DX9 capable, dual-GPU graphics board.
Comments
They can't be serious. There's just no way they're serious. Why? They used "engineering talent" and "high quality" in the same sentence as "SiS" and "Trident Microsystems".
The SIS735 chipset was a killer, and the SIS648DX would've been a killer too had Intel not jimmy-whacked it from existence.
We'll have to see, but the odds are against it being anything impressive, since neither SiS nor Trident has ever made a single semi-decent GPU...
Your idea of decent is the next OEM's idea of the perfect card. Trident and SiS hold a CONSIDERABLE market in cheap OEM graphics for laptops and cheap-as-hell computers.
Are the cards "Good?" Yes. Good at what they're designed to do.
Are the cards good? No. They suck for anything more demanding than Starcraft.
That said, I think they have the potential for:
Crap-tastic image quality (2D & 3D)
Crap-tastic resolution/refresh rate settings
Great 3D-Deceleration
Do they have potential? Absolutely. This thing might have two 256-bit memory busses (one per GPU) or something equally insane, but I doubt it. I really, really doubt it.
SiS and Trident are possibly the only two companies in the world that I dislike more than Apple.
I've already been hounding them for product.
Dual 256-bit DX 9.0 GPUs
128-bit DDR/DDR-II Memory Interface
4 Vertex Shader 2.0 Units (DX 9 compatible)
8 Pairs of Pixel Shader 2.0 Units (DX 9 compatible)
16 sets of Pixel Rendering Pipelines (2 for each pair of PS units)
AGP 1.0 to 3.0 support
Integrated thermal diode
2x & 4x FSAA
Support for all DX mapping techniques such as Bump Mapping and Mipmapped Cubic Mapping
The only thing I'm worried about is how XGI solved the problem of getting both GPUs to render the image. Does each GPU render a complete frame (i.e. the number of frames required per second is split in half between the two GPUs), or does each GPU render half of every frame (i.e. GPU 1 renders the odd scanlines while GPU 2 renders the even ones)? Maybe a custom, onboard SLI interface? Who knows.
Both ATI & 3DFX tried both technologies back in the late 90's, with 3DFX the clear winner.
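Here's a rough sketch (plain Python, purely illustrative and not based on anything XGI has published) of the two load-splitting schemes being described, alternate-frame rendering versus 3dfx-style scan-line interleave:

    # Two ways of splitting work between a pair of GPUs -- illustrative only,
    # no claim that this is how XGI actually wired the Volari Duo.

    def alternate_frame(frame_number):
        """Alternate Frame Rendering: each GPU takes every other whole frame."""
        return "GPU 1" if frame_number % 2 == 0 else "GPU 2"

    def scanline_interleave(scanline):
        """Scan-Line Interleave (the old 3dfx trick): each GPU takes every other line."""
        return "GPU 1" if scanline % 2 == 0 else "GPU 2"

    for frame in range(4):
        print("Frame", frame, "->", alternate_frame(frame))
    for line in range(4):
        print("Scanline", line, "->", scanline_interleave(line))

Either way each chip ends up with roughly half the work; the difference is how much the two have to share (geometry, textures) to get there.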
If it lives up to the hype and is priced right (not A LA Parhelia), it may push DX9 hardware prices down, which would be kind of nice.
128-bit DDR/DDR-II Memory Interface
One word: Eeeeeeeeewwwwwwww.
Think about it... if each chip is only rendering 50% of the available data, each chip only needs a 128-bit memory interface to local VRAM.
Does a Single VPU & 256-bit interface = the same as a Dual VPU & 2 x 128-bit interface when all other factors are constant? Theoretically, it makes sense.
Again, that's all on paper
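To put actual numbers on the "on paper" part, here's a quick Python check (the 375 MHz memory clock is just an assumption so both designs use the same memory):

    # Does 1 VPU on a 256-bit bus equal 2 VPUs on 128-bit busses?
    # Assumes the same effective DDR memory clock on both designs.
    mem_clock_mhz = 375  # assumed, purely for the comparison

    def bandwidth_gb_s(bus_width_bits, ddr_clock_mhz):
        return bus_width_bits * 2 * ddr_clock_mhz / 8 / 1000  # GB/s

    single_vpu = bandwidth_gb_s(256, mem_clock_mhz)        # one chip draws the whole frame
    dual_vpu = 2 * bandwidth_gb_s(128, mem_clock_mhz)      # two chips draw half a frame each

    print(single_vpu, dual_vpu)  # 24.0 24.0 -- identical totals, on paper

The totals match on paper; in practice the dual-chip setup still has to keep textures in each chip's local VRAM and keep the two chips in sync, which is where the "on paper" caveat bites.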
We've seen the pure graphics performance that SiS can pull off :shakehead
Think about this for a second. If each chip had a 256-bit bus and was running the same 500 MHz DDR-II the 5800 Ultra was, each chip would have 32 GB/s of memory bandwidth, for a total of 64 GB/s. The highest resolution most consumers are remotely likely to be able to run is 2048x1536... which means each chip would have to handle at MOST a 1024x768 image. With 32 gigs/second of memory bandwidth. Once again: a 1024x768 image, with 32 GB/s of memory bandwidth. With those kinds of resources, you could run 2048x1536 @ 8x FSAA w/ 16x aniso (or whatever the highest quality settings are these days) in 32-bit color and it should still be damn fast.
Can you imagine running it thru Q3 at 640x480 in 16-bit color at minimum detail settings? It'd DECIMATE the benchmark... you'd probably end up bottlenecked because the CPU couldn't keep the GPUs fed! 400 FPS? With those kinds of resources, I wouldn't be all that surprised if it hit more like, oh, say... 800.
That's why I don't like the 128-bit bus thing, regardless of whether it's over 1 GPU or both.
//Edit
AAAAGH! I meant 1024x1536, not 1024x768. :banghead:
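For what it's worth, here's that hypothetical worked through in Python (the 256-bit bus per chip and 500 MHz DDR-II are the assumptions from the post above, not anything XGI has announced):

    # Hypothetical: 256-bit bus per chip at the 5800 Ultra's 500 MHz DDR-II.
    bus_bits, ddr_mhz = 256, 500
    per_chip_gb_s = bus_bits * 2 * ddr_mhz / 8 / 1000   # 32.0 GB/s per chip
    total_gb_s = 2 * per_chip_gb_s                      # 64.0 GB/s for the whole card

    # At 2048x1536, splitting the frame in half leaves each chip a 1024x1536 slice.
    pixels_per_chip = (2048 * 1536) // 2                # 1,572,864 pixels per chip

    print(per_chip_gb_s, total_gb_s, pixels_per_chip)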
Don't get me wrong, 32 GB/s of memory bandwidth paired with a GPU/VPU powerful enough to push the polygons needed to saturate the memory bus would kick ass. However, that product would come at a premium even higher than the overpriced CrapForce FX 5900 Ultra.
The Volari Duo V8 Ultra pushes a core clock speed of 340 MHz with a 16-unit texture pipeline. We can calculate the maximum theoretical fill-rate of each GPU/VPU unit as:
340 MHz core speed x 16 texture units x 1 texture per unit = 5,440 MegaTexels/s, or 5.44 GigaTexels per second.
That's more than DOUBLE the theoretical fill-rate of the GeForce 4 Ti4600 GPU, which is 2.4 GigaTexels/s (300 x 4 x 2). It's also nearly DOUBLE the theoretical fill-rate of the ATI Radeon 9800 Pro GPU, which is 3.04 GigaTexels/s (380 x 8 x 1).
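If you want to check the arithmetic yourself, here's the same fill-rate formula in Python (the clocks and pipe counts are the ones quoted above):

    # Theoretical fill-rate = core MHz x pipelines x textures per pipeline.
    def fillrate_gtexels(core_mhz, pipes, textures_per_pipe):
        return core_mhz * pipes * textures_per_pipe / 1000  # GigaTexels/s

    print(fillrate_gtexels(340, 16, 1))  # Volari Duo V8 Ultra, per VPU: 5.44
    print(fillrate_gtexels(300, 4, 2))   # GeForce 4 Ti4600:            2.4
    print(fillrate_gtexels(380, 8, 1))   # Radeon 9800 Pro:             3.04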
Looks like the memory on the Volari Duo V8 Ultra will be either 256 MB or 512 MB in DDR-1 & DDR-II configurations (HOLY ****). Running at a clock speed of 375 MHz (or faster), we can calculate the maximum memory throughput available for the card as:
128-bit Interface x [2 x (375 MHz Memory Speed)] / 8 = 12,000 MB/s, or 12 GB/s of memory bandwidth.
That's approximately 15% more than the available memory bandwidth on a GeForce 4 Ti4600-based card, which is 10.4 GB/s (128 x [2 x 325] / 8). However, it's about 45% LESS than the available memory bandwidth on the ATI Radeon 9800 Pro, which is 21.76 GB/s (256 x [2 x 340] / 8).
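Same deal for the memory bandwidth numbers, using the formula above (bus width x double-data-rate clock / 8):

    # Memory bandwidth = bus width (bits) x 2 x memory clock (MHz) / 8, in MB/s.
    def bandwidth_gb_s(bus_bits, mem_mhz):
        return bus_bits * 2 * mem_mhz / 8 / 1000  # GB/s

    print(bandwidth_gb_s(128, 375))  # Volari Duo V8 Ultra (per VPU): 12.0
    print(bandwidth_gb_s(128, 325))  # GeForce 4 Ti4600:              10.4
    print(bandwidth_gb_s(256, 340))  # Radeon 9800 Pro:               21.76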
As we can see, even though the GPU has some serious power behind it (16 pipelines with a 5.44 GigaTexel/s theoretical fill-rate), the memory subsystem of this card has effectively castrated its performance to sub-FX 5800 non-Ultra levels.
Although it may be able to play DX9 games @ 1024x768, you will be hard-pressed to turn ANY quality features on (i.e. AA & AF), as doing so will completely saturate the memory subsystem of this card, severely degrading performance.
Nevertheless, it should be interesting to see how it actually performs in real life, as theoretical and real-world numbers seldom line up.
//Edit: Spelling, punctuation, grammar and a little math error I made
If each VPU is allocated 128-bit access to its own dedicated 128 MB area of VRAM, we can effectively DOUBLE the usable memory bandwidth of the card to 24.0 GB/s. This works because each VPU only has to process 50% of the frames a normal single-VPU graphics card would. It's like the Voodoo 2s all over again, except on one PCB instead of two separate cards.
With a 5.44 GigaTexel/s fill-rate for EACH VPU and 12.0 GB/s of available memory bandwidth for each VPU, the card is still memory-bandwidth limited, but not as severely as I calculated before, since I neglected to factor in that each VPU only processes 50% of the normal data load. Hence, each VPU only needs 50% of the memory bandwidth a single-VPU design would.
Actually, this card may prove to be a challenge to the Radeon 9800 Pro, provided it actually delivers what it promises.
God damn, time for me to brush up on my math skills.
If they implemented a 256-bit memory interface for EACH VPU to 128 MB of local VRAM, it would cost substantially more. Yes, you would have 24 GB/s of memory bandwidth per VPU, essentially making it a card with 48 GB/s of memory bandwidth. With the theoretical fill-rate so high, you WOULD be able to run at insanely high AA & AF levels with the resolution cranked through the roof.
Maybe a future idea for a Volari Duo V8 Ultra Super-Duper Edition
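Just to bolt numbers onto that "Super-Duper Edition" daydream (same 375 MHz memory clock assumed, widened to a 256-bit interface per VPU):

    # Hypothetical 256-bit interface per VPU at the same 375 MHz DDR clock.
    per_vpu_gb_s = 256 * 2 * 375 / 8 / 1000   # 24.0 GB/s per VPU
    card_total_gb_s = 2 * per_vpu_gb_s        # 48.0 GB/s across both VPUs
    print(per_vpu_gb_s, card_total_gb_s)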
Volari Duo V8 Ultra - 5600+
Volari Duo V5 Ultra - 4000+
Volari V8 Series - 3000+
Volari V5 Series - 2000+
Volari V3 Series - 1000+
Considering an ATI Radeon 9800 Pro pushes 5800+ on the same system, maybe XGI won't be one of those SiScrewups everyone keeps suggesting...