Rick Johnson of Raven Software has posted some info about the GeForce and Soldier of Fortune in his .plan file:
Doing some tests on the GeForce 256 we have, using SoF, I saw a considerable speed improvement. I ran a few quick tests on two machines:
#1: W2K, PIII 550, TNT2
#2: NT4, PII 300, GeForce 256
Running at 1280x960, turning detail texturing fully on (i.e. every world polygon has detail on it) would result in a frame rendering difference of ~15 ms for #1. On #2, it was only a difference of ~6 ms. In other words, the slower machine with the GeForce 256 could render detail texturing with much less of a performance hit (i.e. better fill rate). Before you go quoting these numbers too much, note that one machine is running Windows 2000 (which is my main dev machine), and that both systems are using "alpha" drivers.
SoF also has a hardware lighting mode for ghoul models (as of this morning), which takes full advantage of the hardware lighting of the GeForce 256. This resulted in a significant improvement in framerate, and also allows us to throw many more light sources at the models.
Once the drivers are finalized, I'll put out some accurate numbers from tests on the same computer. [break] Also interesting is an earlier .plan update from Jake Simpson: [/break] About this whole T&L vs. fill rate thing. I'm gonna add my $0.02 here, since there's a lot of info and opinions flying around about this, not all of them backed up by any facts and/or experience. From our experience with the new GeForce 256 card, we are seeing some fairly decent results. SoF ran a full 25% faster with the card, with no new coding at all. Now there's a lot that affects this, so let's talk about those factors.
With an OpenGL game, you only get acceleration if you pass your geometry transforms through the OpenGL driver. This in itself is a weak spot, since there are more than a few OpenGL drivers out there that, quite frankly, suck when it comes to doing the transforms in a timely manner. So a lot of game developers do the transforms/lighting themselves before handing the geometry off to the card. This way, they know it's being done in the fastest manner possible, they can take lots of shortcuts with lighting and so on, and *they know exactly how it's being done every time*. The problem with this is that although you get a more standard response across video cards, it's still main-CPU-bound rather than being offloaded to a separate T&L processor when one is available. The real solution is to have both: detect whether there is a T&L processor around, and if there is, use that; otherwise use your own transform pipeline. More work, but a better situation for the gamer, which is what it's all about, right?
Now, faster fill rates over T&L? Well, it all really depends on what you are doing with the card in the first place. If your game puts out a standard number of polys but has a ton of procedural textures (textures that are changed in some way every frame, like the water in Unreal Tournament), then you are still going to be whacking those textures across the bus to the card every frame, and T&L is not going to make any difference there at all. Also, if you are doing a spell game and whacking out a ton of magical effects with a ton of overdraw going on, then faster fill rates make all the difference. There is an argument that having T&L means you require a heavier fill rate, since you are pushing more polys through a scene: more polys = more pixels drawn on screen = more fill rate required. While this is an argument, experience shows it's really not a good one, and here's why. More polys do not always equal more pixels. Depending on what the extra polys are used for, it's entirely possible that more polys can equal fewer pixels.
This one is a bit too hefty to post in full; read the rest here.