I am adding support for depth textures to Ogre 2.1, though HW PCF shadows are already supported thanks to DX11's SampleCmp function.
sparkprime wrote:Any reason why changing it to always use a texture would be a bad idea?
edit: looks like this can be done by using GlTextureBuffer instead of GlRenderBuffer, perhaps with a few additions.
I've researched the available information and finally know the answer: it comes down to hardware limitations.
When you create a depth buffer using the "GenFrameBuffer" method (the GlRenderBuffer path), you will usually get what you ask for. If you request "GL_DEPTH_COMPONENT32F" (that is, a 32-bit float depth buffer with no stencil), you will probably get a 32-bit float depth buffer without stencil.
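For reference, a minimal GL sketch of that renderbuffer path (sizes and variable names are just placeholders; a GL 3.0+ context and loader are assumed):
[code]
// Minimal sketch, renderbuffer path: the depth buffer can never be sampled.
GLuint fbo, depthRbo;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );

glGenRenderbuffers( 1, &depthRbo );
glBindRenderbuffer( GL_RENDERBUFFER, depthRbo );
// Request a 32-bit float depth buffer, no stencil.
glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, 1024, 1024 );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_RENDERBUFFER, depthRbo );
[/code]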
However, when using the "GenTexture" method, you're telling the API that you may sample the depth buffer as a texture. The GPU may not support a 32-bit float depth buffer for sampling as a texture (even though it supports the format when it isn't used as one), so you may silently get, for example, a 32-bit integer depth buffer that gets converted to float on the fly when you read from it as a texture; or the depth buffer creation may fail outright, whereas it wouldn't have with the other method.
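And the equivalent sketch for the texture path, which is where the hardware may not be able to honour the request (again, sizes are placeholders; the comparison-mode parameters are only needed for HW PCF):
[code]
// Minimal sketch, texture path: same request, but now it must also be sampleable.
GLuint fbo, depthTex;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );

glGenTextures( 1, &depthTex );
glBindTexture( GL_TEXTURE_2D, depthTex );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, 1024, 1024, 0,
              GL_DEPTH_COMPONENT, GL_FLOAT, 0 );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
// Only needed for HW PCF: sample via a comparison (ref-to-texture) mode.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL );

glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                        GL_TEXTURE_2D, depthTex, 0 );
// Depth-only FBO: no colour attachments.
glDrawBuffer( GL_NONE );
glReadBuffer( GL_NONE );
[/code]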
In short, it's due to hardware limits. D3D11 is a bit more explicit about this (although a bit cryptic at the same time) with its "TYPELESS" formats and the "views" created from them, which reinterpret or convert the data on the fly, separately for use as a depth buffer and as a sampled texture.
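A hedged D3D11 sketch of that pattern (error handling omitted; device, width and height are assumed to already exist):
[code]
// Sketch only: a 32-bit depth buffer that can also be sampled in a shader.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = width;
texDesc.Height           = height;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R32_TYPELESS;   // typeless storage
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D *depthTex = 0;
device->CreateTexture2D( &texDesc, 0, &depthTex );

// Depth-stencil view: interpret the data as D32_FLOAT when writing depth.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D32_FLOAT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
ID3D11DepthStencilView *dsv = 0;
device->CreateDepthStencilView( depthTex, &dsvDesc, &dsv );

// Shader resource view: interpret the same data as R32_FLOAT when sampling.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format              = DXGI_FORMAT_R32_FLOAT;
srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView *srv = 0;
device->CreateShaderResourceView( depthTex, &srvDesc, &srv );
[/code]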
For example, DX10 makes support for 32-bit float textures mandatory, and sampling them through a comparison sampler (i.e. PCF) mandatory as well. However, sampling the direct value from a 32-bit float texture is optional. This means I can use a 32-bit float depth buffer for PCF shadow mapping, but I can't assume it can be used for SSAO.
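So a renderer can't hard-code that assumption; one way to check it at runtime is ID3D11Device::CheckFormatSupport, roughly like this (sketch only, device assumed to exist):
[code]
// Sketch: query whether R32_FLOAT can be sampled directly, and with comparison.
UINT support = 0;
device->CheckFormatSupport( DXGI_FORMAT_R32_FLOAT, &support );

// Per the DX10 note above, the comparison bit should always be set on
// DX10-level hardware, while plain sampling (e.g. for SSAO) may be missing.
const bool canSampleDirectly =
        (support & D3D11_FORMAT_SUPPORT_SHADER_SAMPLE) != 0;
const bool canSampleComparison =
        (support & D3D11_FORMAT_SUPPORT_SHADER_SAMPLE_COMPARISON) != 0;
[/code]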