We have an issue with Ogre 1.8 using the wrong functions to bind vertex attributes. The bug has existed for a long time, but only became visible after an NVIDIA driver update.
The problem centers on this line of code in OgreGLRenderSystem.cpp:

Code: Select all
isCustomAttrib = mCurrentVertexProgram->isAttributeValid(sem, elem.getIndex());

In our application we have legacy shaders written in Cg, which Ogre loads as GLArbGpuProgram instances. The issue is that GLArbGpuProgram provides no override of isAttributeValid, so the call falls back to the implementation in GLGpuProgram, which returns false for anything bound to VES_TEXTURE_COORDINATES.
In turn, when isCustomAttrib is false, the buffer is bound with glTexCoordPointer and friends instead of glVertexAttribPointer. That usage is no longer accepted as of NVIDIA driver version 361.42, and causes garbage to appear in the given attribute in the shader.
The same problem appears with GLSL shaders, but only when attributes use names other than those hard-coded in OgreGLSLLinkProgram.cpp (uv0, uv1, uv2, etc.). In that case the GLSL implementation of isAttributeValid also returns false, and the wrong code path is taken when binding the attributes.
1. Is there any downside to always setting isCustomAttrib to true, as long as a vertex program is bound? This workaround works well in our application, but I don't know whether there are corner cases that would fail.
2. Is there a reason why isAttributeValid looks up attributes by name instead of by index when deciding what counts as a custom attribute?
3. What is the recommended solution to this problem?