PowerGamerz has conducted an interesting interview with nVidia. Among other things, they discuss the 3dfx T-Buffer, which nVidia casually brushes aside:
PG: How will the advent of T-buffering affect the future of PC gaming?

Viet-Tam: The information released by 3dfx suggests to me that T-buffering is really nothing new under the sun. Basically, when you want to do things like true full-scene antialiasing, motion blur, or depth of field, you need to render several slightly different images and combine, average, or blend (same thing) them into a single image. From the beginning, OpenGL (for example) supported this by providing an "accumulation buffer"; an application can render several images into it, each image adding to the existing contents.
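For reference, this is what the accumulation-buffer approach he describes looks like in classic OpenGL. This is a minimal sketch of the well-known multi-pass antialiasing pattern; draw_scene() and the jitter offsets are hypothetical placeholders, not code from the interview:

```c
/* Minimal sketch: full-scene antialiasing via the OpenGL accumulation
 * buffer. Each pass renders the scene with a slightly different
 * sub-pixel offset and adds it, weighted, to the accumulation buffer. */
#include <GL/gl.h>

#define PASSES 4

/* Hypothetical per-pass sub-pixel offsets (in pixels). */
static const float jitter[PASSES][2] = {
    { 0.25f, 0.25f }, { 0.75f, 0.25f },
    { 0.25f, 0.75f }, { 0.75f, 0.75f }
};

/* Placeholder: renders one image of the scene, shifted by (dx, dy). */
extern void draw_scene(float dx, float dy);

void render_antialiased(void)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < PASSES; i++) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene(jitter[i][0], jitter[i][1]);
        /* Add this pass, weighted so the PASSES images average out. */
        glAccum(GL_ACCUM, 1.0f / PASSES);
    }
    /* Copy the averaged result back into the color buffer for display. */
    glAccum(GL_RETURN, 1.0f);
}
```

Note that the scene is rendered PASSES times in sequence, which is exactly the slowdown he points out next.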
The downside to this is that it takes longer to render a single frame of animation, because you have to render the same geometry several times into the accumulation buffer to get the final displayed frame. What T-buffering seems to do is allow these multiple images to be rendered in parallel (at the same time) into separate buffers (collectively called the "T-buffer"?). Then some hardware reads these buffers and combines them together to produce the displayed image.
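The final combine step he describes amounts to averaging the sub-buffers. A conceptual sketch (this is an assumption about the combine logic, not 3dfx's actual hardware; buffer layout and names are made up for illustration):

```c
/* Conceptual sketch: merge N parallel sub-buffers into one displayed
 * image by averaging, sample by sample. */
#include <stdint.h>
#include <stddef.h>

void combine_buffers(const uint8_t *sub[], int n_buffers,
                     uint8_t *out, size_t n_bytes)
{
    for (size_t i = 0; i < n_bytes; i++) {
        unsigned sum = 0;
        for (int b = 0; b < n_buffers; b++)
            sum += sub[b][i];                 /* add each sub-image's sample */
        out[i] = (uint8_t)(sum / n_buffers);  /* average into the output */
    }
}
```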
T-buffering is a slightly clever idea, but it doesn't mean that you get antialiasing or motion blur (etc.) for free. Except in the case of antialiasing, each image requires a different transformation. Unless you have transformation and lighting (T&L) in hardware in each of your parallel pipelines, your CPU is going to have to spend a lot of extra time doing that work, to say nothing of the extra bandwidth eaten up by sending all that extra geometry across the CPU <--> graphics chip bus (AGP, PCI, etc.).
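His point about the CPU cost can be made concrete with motion blur: each sub-image samples the scene at a slightly different time, so without hardware T&L the CPU re-transforms and re-sends the full geometry every pass. A hypothetical sketch; transform_vertices() and submit_to_card() are placeholders, not a real API:

```c
/* Sketch: motion blur without hardware T&L. Every pass re-does the
 * transform on the CPU and pushes the geometry over the bus again. */
#define PASSES 4

typedef struct { float x, y, z; } vec3;

/* Placeholders: CPU-side transform at time t, and geometry upload. */
extern void transform_vertices(const vec3 *in, vec3 *out, int count, float t);
extern void submit_to_card(const vec3 *verts, int count, int pass);

void render_motion_blur(const vec3 *mesh, vec3 *scratch, int count,
                        float t0, float shutter)
{
    for (int pass = 0; pass < PASSES; pass++) {
        float t = t0 + shutter * pass / (PASSES - 1); /* time sample   */
        transform_vertices(mesh, scratch, count, t);  /* CPU does T&L  */
        submit_to_card(scratch, count, pass);         /* extra traffic */
    }
}
```

With hardware T&L in each pipeline, the transform step would move off the CPU and the per-pass cost he describes largely disappears.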
For more info, read on.