I'm making models for my game, been doing it on and off for a while. Tend to make a model, end up hating it, and rebuild it from scratch. Since I'm a "programmer artist," this is a painful process. Made the mistake of going too low-poly in the beginning, since I'm ancient and was mentally mired in the old ways. Been upping the polycount each modelling iteration since. But how much is too much?
I actually focus more on vertex count than polygon count. Fragment shader cost is mostly about fill rate and how much of the screen a model covers, rather than how many triangles it has. Vertex shader cost, on the other hand, scales directly with vertex count.
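To put rough numbers on that distinction (back-of-envelope only, the pixel coverage figure is made up for illustration):

Code:
// Illustrative arithmetic only, not measured data:
// vertex shader work scales with vertices submitted,
// fragment shader work scales with pixels covered (times overdraw).
const long long verticesPerMonster  = 5300;
const long long monstersOnScreen    = 126;
const long long vertexInvocations   = verticesPerMonster * monstersOnScreen;   // ~668k

// A monster covering roughly 100 x 200 pixels costs ~20k fragment invocations
// whether it was built from 2,650 or 9,300 vertices.
const long long pixelsPerMonster    = 100 * 200;
const long long fragmentInvocations = pixelsPerMonster * monstersOnScreen;     // ~2.5M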
My system is a GeForce GTX 560 SLI setup, so not too bad. My test model has 5300 vertices per monster (6000 polygons), as read from the mesh.xml file. There are a lot of sharp edges and UV splits; I could optimize away about 1000 vertices if I wanted by using fewer sharp edges and fewer UV splits, keeping the same polycount, but possibly losing a little bit of quality... maybe not. There are 7 monsters per mesh to cut down on batch count, so with a limit of 65536 verts per mesh, the most verts per monster would be 9362. (Models can have more than 65536 verts, but then they need 32-bit indices instead of 16-bit indices.)
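For what it's worth, the same numbers can be pulled at runtime instead of from the .xml. A minimal sketch, assuming the Ogre 1.x mesh API (sharedVertexData, SubMesh::vertexData, and the index buffer type) and that the mesh is already loaded:

Code:
#include <Ogre.h>
#include <iostream>

// Count vertices and report the index size of a loaded mesh.
// Assumes Ogre 1.x and indexed submeshes; purely a diagnostic sketch.
void reportMeshStats(const Ogre::MeshPtr& mesh)
{
    size_t totalVerts = mesh->sharedVertexData ? mesh->sharedVertexData->vertexCount : 0;

    for (unsigned short i = 0; i < mesh->getNumSubMeshes(); ++i)
    {
        Ogre::SubMesh* sub = mesh->getSubMesh(i);
        if (!sub->useSharedVertices)
            totalVerts += sub->vertexData->vertexCount;

        bool wideIndices = sub->indexData->indexBuffer->getType()
                           == Ogre::HardwareIndexBuffer::IT_32BIT;
        std::cout << "SubMesh " << i << ": "
                  << (wideIndices ? "32-bit" : "16-bit") << " indices\n";
    }
    std::cout << "Total vertices: " << totalVerts << "\n";
}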
I wondered if I'd gone too far in the other direction, with too many verts and polygons. Decided to test. Set up a scene with 126 monsters and a fixed camera position looking at them. This is far more than I'd use in a real game, where there'd typically be maybe 16 monsters on screen, or maybe 32-48 in a really heavy battle scene. The frames per second with 126 monsters drops to about 16 fps, but most of that is spent on CPU physics; I check the GPU render time separately.
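For anyone wanting to isolate GPU time the same way: a GL timer query around the frame's draw calls is one way to get it. This is only a sketch, assuming an OpenGL 3.3+ context and a placeholder renderScene() callback, not necessarily how my own profiling is wired up:

Code:
#include <GL/glew.h>

// Wrap a frame's draw calls in a timer query to measure GPU-side time only.
// Assumes an OpenGL 3.3+ context is current; renderScene() is a placeholder
// for whatever issues the draw calls.
double measureGpuMilliseconds(void (*renderScene)())
{
    GLuint query = 0;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    renderScene();                       // draw the 126 monsters, etc.
    glEndQuery(GL_TIME_ELAPSED);

    // Blocks until the GPU has finished this query; fine for benchmarking,
    // not something you'd leave in a shipping frame loop.
    GLuint64 nanoseconds = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &nanoseconds);
    glDeleteQueries(1, &query);

    return nanoseconds / 1.0e6;          // convert ns -> milliseconds
}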
Made 4 versions of the model with different numbers of vertices, and then checked the GPU rendering times. Note that the only difference here is vertex count. Batching and materials are the same, and screen coverage is virtually identical. (Different models mean marginally different area covered, but practically speaking it's the same.)
Code:
50% (2650 vertices per cyborg) = 4.4 milliseconds total GPU time.
75% (3975 vertices per cyborg) = 4.5 milliseconds total GPU time.
100% (5300 vertices per cyborg) = 4.5 to 4.6 milliseconds total GPU time.
175% (9300 vertices per cyborg) = 4.5 to 4.6 milliseconds total GPU time.
Edit: Did some more tests out of curiosity, to see at what point vertex count made a difference. I altered the meshes so that there was only one monster per mesh instead of the usual seven, which means worse batching but a higher possible maximum vertex count per monster. I then made versions with 16000, 32000, 40000, 48000 and 64000 vertices.
Note that the numbers below aren't directly comparable to those above, because there are roughly 7 times more monster batches even though there are still 126 monsters.
Code:
100% ( 5300 vertices per cyborg) = 7.9 to 8.0 milliseconds total GPU time.
300% (16000 vertices per cyborg) = 7.8 to 7.9 milliseconds total GPU time.
600% (32000 vertices per cyborg) = 7.9 to 8.0 milliseconds total GPU time.
750% (40000 vertices per cyborg) = 9.7 to 9.8 milliseconds total GPU time.
900% (48000 vertices per cyborg) = 15.9 to 16.0 milliseconds total GPU time.
1200% (64000 vertices per cyborg) = 33.8 milliseconds total GPU time.
This also underscores the importance of batching. The 5300-vertex models, rendered in groups of 7 monsters per batch, took 4.5 milliseconds; the same models rendered at only 1 monster per batch took 7.8 milliseconds.
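To make concrete what "monsters per batch" means here: the copies share one vertex and index buffer, so they go out in fewer draw calls. A rough illustrative sketch of the idea (hypothetical types, not Ogre code), assuming 16-bit indices:

Code:
#include <cstdint>
#include <vector>

// Hypothetical minimal vertex layout, for illustration only.
struct Vertex { float px, py, pz; float nx, ny, nz; float u, v; };

struct MeshData {
    std::vector<Vertex>   vertices;
    std::vector<uint16_t> indices;   // 16-bit indices => at most 65536 vertices
};

// Append 'copies' instances of 'source' into one combined mesh so they can be
// drawn in a single batch. Each copy's indices are offset by the number of
// vertices already in the buffer. Returns false if the result would overflow
// the 16-bit index range.
bool buildBatch(const MeshData& source, int copies, MeshData& out)
{
    if (out.vertices.size() + source.vertices.size() * static_cast<size_t>(copies) > 65536)
        return false;                    // would need 32-bit indices

    for (int c = 0; c < copies; ++c) {
        const uint16_t base = static_cast<uint16_t>(out.vertices.size());
        out.vertices.insert(out.vertices.end(),
                            source.vertices.begin(), source.vertices.end());
        for (uint16_t idx : source.indices)
            out.indices.push_back(static_cast<uint16_t>(base + idx));
        // In practice each copy would also get its own transform, e.g. via a
        // per-instance bone or an offset baked into the positions here.
    }
    return true;
}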
As a side note, it isn't just framerate that matters. The model file is obviously a lot larger at higher vertex counts, which means longer loading times, both in the resource loading phase and in uploading to the graphics card the first time the player sees the model. These are one-time costs rather than per-frame costs, but they can still matter.