Hello,
I'm having a lot of problems with the material scripts' depth_bias option. To start with, I tested this option on different graphics cards and never got the same result twice. I'm working on an X700: under DirectX, depth_bias values below 12 make no difference at all, and under OpenGL I have to go above 160 (!) before I see any effect at all (note that the near and far clipping planes are well placed in my app - there are only 1000 units between them).
Running the same piece of code on other machines was even stranger. I knew that different cards might differ in their implementations, but the differences I encountered make the depth_bias option completely unusable.
That's what I observed:
X600 - no problems at all. Under both OpenGL and DirectX, a depth_bias of 1 clearly renders the coplanar polygons the way they should be rendered.
FX5900XT - the strangest behaviour of all. Under OpenGL, no frames are rendered at all while an object using depth_bias is visible, so the user thinks the app has frozen completely (the previously rendered image keeps being shown). If you keep turning the camera (even though you can't see it happening, since there are no more frame updates) until the objects using depth_bias are considered invisible, the frames get updated again.
Under DirectX, everything in the scene is rendered all the time, but some overlays are not rendered whenever a depth_bias object becomes visible.
In both modes, DX and OpenGL, z-fighting occurs, and changing the depth_bias value doesn't change anything at all. I checked the card's DX raster caps, and depth bias and slope-scaled depth bias are both reported as available, yet the results are strange nonetheless.
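Checking those caps looks roughly like this under D3D9 - just a sketch, device creation omitted, and the function name is made up:

    #include <d3d9.h>

    // Sketch: query the D3D9 raster caps for depth-bias support.
    // 'device' is assumed to be an already created IDirect3DDevice9*.
    bool supportsDepthBias(IDirect3DDevice9* device)
    {
        D3DCAPS9 caps;
        if (FAILED(device->GetDeviceCaps(&caps)))
            return false;

        // On the FX5900XT both flags come back set, yet the bias has no visible effect.
        return (caps.RasterCaps & D3DPRASTERCAPS_DEPTHBIAS) != 0
            && (caps.RasterCaps & D3DPRASTERCAPS_SLOPESCALEDEPTHBIAS) != 0;
    }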
FX6600XT - using DX, this card behaved exactly the same way as described for the FX5900XT (vanishing overlays). Using OpenGL, there were no frame freezes anymore. However, under both DX and OGL the depth_bias values worked and everything was rendered nicely (except for the overlays under DX).
Well, I hope this information is useful. Does anyone have an idea how to achieve the same functionality as depth_bias, but in a portable way? The only way I know of is to push the clip planes away a bit for every object that needs biasing towards the viewer (see the sketch below). That would be portable, but the problem is that I would have to modify the Ogre sources in order to keep using depth_bias in the materials, which in turn complicates maintaining the project. On the one hand this would be nice, but on the other hand there might be users who want to access the hardware's depth bias features, and that support wouldn't be accessible anymore....
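To illustrate the idea, here is a rough sketch in plain fixed-function OpenGL (the function name and the epsilon value are made up, and it assumes the default GL_LESS depth test): the projection matrix term that produces clip-space z gets squashed a tiny bit, so geometry drawn afterwards ends up slightly closer to the viewer.

    #include <GL/gl.h>
    #include <GL/glu.h>

    // Rough sketch of the "pushed clip planes" trick: scale the projection
    // matrix term that maps eye-space z to clip-space z, so that everything
    // rendered with this projection is pulled slightly towards the camera.
    void setBiasedProjection(double fovY, double aspect,
                             double zNear, double zFar,
                             double epsilon /* e.g. 1.0e-4, needs tuning */)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(fovY, aspect, zNear, zFar);

        GLdouble m[16];
        glGetDoublev(GL_PROJECTION_MATRIX, m);
        m[10] *= (1.0 - epsilon);   // m[10] is the (2,2) entry in column-major order
        glLoadMatrixd(m);

        glMatrixMode(GL_MODELVIEW);
    }

Doing that inside Ogre would mean touching the render system code that sets up the projection matrices, which is exactly the maintenance problem I mentioned above.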
Except for the problems with the FX5900XT, the DirectX DepthBias works (and the result is roughly the same on all the cards).
The latter should obviously be true, because D3D's depth bias just uses the projection matrix "trick" (if I'm not wrong about that), but since portability is quite important to me, it needs to work with OpenGL, too. Am I right in assuming that glPolygonOffset is just about the same technique as the old DX zBias?
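As far as I understand it (please correct me if I'm wrong), the two map onto each other roughly like this - just a sketch, the function names and bias values are made up, and the signs/magnitudes depend on the depth convention:

    #include <windows.h>
    #include <GL/gl.h>
    #include <d3d9.h>

    // OpenGL: constant + slope-scaled offset for filled polygons.
    void setGLDepthBias(float slopeFactor, float constantUnits)
    {
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(slopeFactor, constantUnits);
    }

    // D3D9: the matching render states. The old D3DRS_ZBIAS only had a
    // constant part (an integer in the range 0..16), which is roughly what
    // glPolygonOffset's 'units' parameter corresponds to.
    void setD3DDepthBias(IDirect3DDevice9* device, float constantBias, float slopeBias)
    {
        device->SetRenderState(D3DRS_DEPTHBIAS,           *(DWORD*)&constantBias);
        device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, *(DWORD*)&slopeBias);
    }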
In any case, it would be nice to get the same result on every card and with both APIs.
Suggestions are welcome.
Thanks