Hardware PCF shadows


Re: Hardware PCF shadows

Post by dark_sylinc »

sparkprime wrote: * We could give a different TexturePtr to the RenderSystem, one that encapsulates the depth instead of the colour. This would be linked from the existing TexturePtr. We would chase the pointer in _setPass if the TUS flag was set.
I think that's the approach I had in mind.

My idea was (roughly):

Code: Select all

void TextureUnitState::setTextureName( const String& name, TextureType texType, bool useDepth )
{
    if( useDepth )
    {
        //OK this is rough, refactor as needed, you'll get the idea
        TexturePtr tex = TextureManager::getSingleton().getByName( name );
        RenderTexture *rtt = tex->getBuffer()->getRenderTarget();
        mFramePtr[0] = rtt->getDepthTexture(); //getDepthTexture() is the proposed new method
    }
}
No messing with the SceneManager or RenderSystem at all; everything is contained in TextureUnitState. Of course, some additional code in the D3D9/GL render systems might be needed at the exact moment the specific API resource is bound (or it could be encapsulated in D3D9DepthRenderTexture & GLDepthRenderTexture).
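
For illustration, a hypothetical usage sketch of that proposed useDepth parameter (assuming an existing MaterialPtr material and an RTT named "MyShadowRtt"; the name and setup are made up, only the third parameter is new):

Code: Select all

// Bind the depth side of an existing RTT into a texture unit (useDepth is the proposed flag)
Ogre::Pass *pass = material->getTechnique(0)->getPass(0);
Ogre::TextureUnitState *tus = pass->createTextureUnitState();
tus->setTextureName( "MyShadowRtt", Ogre::TEX_TYPE_2D, /*useDepth=*/true );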

Re: Hardware PCF shadows

Post by sparkprime »

Ah yes, I remember now. I was reading this thread again looking for that bit of discussion, and I read right over the part where you suggested it earlier.

Code: Select all

    TexturePtr depthTexture = renderTarget->getDepthBuffer()->getAsTexture(); //Returns null if this isn't a depth texture
Anyway, that's good enough for named textures. For shadow and compositor targets it'd be necessary to modify TUS::_setTexturePtr to chase down to the right Texture instance. Changing it in SceneManager would require more work, as it's called from two places. However, I'm already getting distracted; as I said earlier, I don't care about that yet :)
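
For reference, the chasing in TUS::_setTexturePtr could look roughly like this (mUseDepthTexture and getAsTexture() are part of the proposal, not existing Ogre API; the frame array follows the naming from the earlier sketch):

Code: Select all

void TextureUnitState::_setTexturePtr( const TexturePtr& texptr, size_t frame )
{
    TexturePtr actual = texptr;
    if( mUseDepthTexture ) //hypothetical flag set on this TUS
    {
        RenderTarget *rt = texptr->getBuffer()->getRenderTarget();
        TexturePtr depthTex = rt->getDepthBuffer()->getAsTexture(); //proposed API
        if( !depthTex.isNull() )
            actual = depthTex; //bind the depth side instead of the colour side
    }
    mFramePtr[frame] = actual;
}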

So now, we are looking at a few things:

* We have a Texture instance, let's call it Ogre::TexturePtr tex, created by the user with TU_RENDERTARGET.
* We have the RenderTexture instance (also an instance of RenderTarget), effectively owned by the Texture instance, i.e. Ogre::RenderTexture *rtex = tex->getBuffer()->getRenderTarget();
* We have the DepthBuffer object being used by the RenderTarget; however, it is actually owned by the RenderSystem's pool and may be shared: Ogre::DepthBuffer *dbuf = rtex->getDepthBuffer();
* We have another Texture instance that is owned by the DepthBuffer and should be bound in the pass to allow reading of depth values from the shader, i.e. Ogre::TexturePtr dtex = dbuf->getAsTexture();
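
Putting those four objects together, a rough end-to-end sketch (getAsTexture() is still the proposed API, not something Ogre exposes today; the texture name and size are arbitrary):

Code: Select all

// Walk from a user-created colour RTT down to the shareable depth texture
Ogre::TexturePtr tex = Ogre::TextureManager::getSingleton().createManual(
    "MyRtt", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
    Ogre::TEX_TYPE_2D, 1024, 1024, 0, Ogre::PF_R8G8B8, Ogre::TU_RENDERTARGET );

Ogre::RenderTexture *rtex = tex->getBuffer()->getRenderTarget(); // owned by the Texture
Ogre::DepthBuffer   *dbuf = rtex->getDepthBuffer();              // owned by the RenderSystem's depth buffer pool
Ogre::TexturePtr     dtex = dbuf->getAsTexture();                // proposed API; owned by the DepthBuffer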

Re: Hardware PCF shadows

Post by sparkprime »

It looks like GLDepthBuffer is a wrapper around GLRenderBuffer, which is a wrapper around the GL renderbuffer concept from the FBO extension.

This won't do for binding into texture units; it would have to be a depth texture, created with glGenTextures rather than glGenRenderbuffers.

Illustration code taken from: http://www.opengl.org/wiki/GL_EXT_framebuffer_object

Code: Select all

 glGenTextures(1, &depth_tex);
 glBindTexture(GL_TEXTURE_2D, depth_tex);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
 glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
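 // The compare mode/func below enable depth comparison on sampling, which is the basis for hardware PCF (use GL_LINEAR filtering to get the filtered 2x2 result)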
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, 256, 256, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
Any reason why changing it to always use a texture would be a bad idea?

edit: looks like this can be done by using GLTextureBuffer instead of GLRenderBuffer, perhaps with a few additions.

Re: Hardware PCF shadows

Post by DimA »

Hi guys!

I would like to follow up on this great idea!
Why not allow such depth textures for FBOs? They're really helpful for deferred lighting/shading techniques.

I think it could be implemented by extending RTT to support depth/stencil pixel formats: PF_DEPTH16, PF_DEPTH32, PF_DEPTH24_STENCIL8.
The created depth texture should then be attachable to the FBO RenderTarget as a depth texture through the corresponding attachment points: GL_DEPTH_ATTACHMENT or GL_DEPTH_STENCIL_ATTACHMENT.

The latest D3D render systems allow this as well.
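
A minimal sketch of the GL side of that attachment, using the core FBO entry points (the EXT-suffixed equivalents work the same way); depth_tex and fbo are assumed to have been created already, e.g. as in the earlier snippet:

Code: Select all

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Use GL_DEPTH_STENCIL_ATTACHMENT instead for packed formats such as PF_DEPTH24_STENCIL8
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depth_tex, 0);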

Re: Hardware PCF shadows

Post by dark_sylinc »

I am adding support for depth textures to Ogre 2.1, though HW PCF shadows are already supported thanks to DX11's SampleCmp functions.
sparkprime wrote: Any reason why changing it to always use a texture would be a bad idea?
edit: looks like this can be done by using GLTextureBuffer instead of GLRenderBuffer, perhaps with a few additions.
I've researched the available information, and I finally know the answer:
It comes down to hardware limitations.
When you create a depth buffer through the renderbuffer path (glGenRenderbuffers), you will usually get what you ask for. If you request GL_DEPTH_COMPONENT32F, that is, a 32-bit float depth buffer with no stencil, you will probably get a 32-bit float depth buffer without stencil.
However, when you create it through the texture path (glGenTextures), you're telling the API you may use the depth buffer as a texture. The GPU may not support a 32-bit float depth buffer for sampling as a texture (even though it supports the format when it isn't used as one), so you may silently get, for example, a 32-bit integer depth buffer that gets converted to float on the fly when you read from it as a texture; or the depth buffer creation may simply fail, whereas it wouldn't have with the other method.

In short, it's due to hardware limits. D3D11 is a bit more explicit about this (although a bit cryptic at the same time) with its "TYPELESS" formats, letting you create separate "views" that reinterpret or convert the data on the fly when the resource is used as a depth buffer and when it is sampled as a texture.
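
As a reference for what that looks like outside Ogre, a minimal D3D11 sketch of the typeless-resource/two-views pattern (assumes an existing ID3D11Device *device; error handling omitted):

Code: Select all

#include <d3d11.h>

// One typeless 2D resource...
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = 1024;
texDesc.Height           = 1024;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R32_TYPELESS; // typeless storage
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D *tex = 0;
device->CreateTexture2D( &texDesc, 0, &tex );

// ...viewed as a depth buffer for rendering...
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D32_FLOAT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
ID3D11DepthStencilView *dsv = 0;
device->CreateDepthStencilView( tex, &dsvDesc, &dsv );

// ...and viewed as a regular float texture for sampling (PCF, SSAO, etc.)
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format              = DXGI_FORMAT_R32_FLOAT;
srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView *srv = 0;
device->CreateShaderResourceView( tex, &srvDesc, &srv );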

For example, DX10 makes support for 32-bit float depth textures mandatory, and sampling them with a comparison sampler (i.e. PCF) is also mandatory; however, sampling the raw value from a 32-bit float depth texture is optional. This means I can use a 32-bit float depth buffer for PCF shadow mapping, but I can't assume it can also be used for SSAO.