This thread is becoming large, so first a quick summary of latest changes:
TODO (besides polishing and testing):
* [s]Implement DepthSharing base system[/s] (DONE)
* [s]Implement D3D9 & OGL FBO systems[/s] (DONE)
* [s]Compositor script tokens for setting the pool ID (should be easy)[/s] (DONE)
* [s]D3D10 Render System (if not too different from D3D9, also easy. Must get Vista or 7 machine w/ DX10 HW)[/s] (DONE)
* [s]Write documentation for the compositor manual[/s] (DONE)
* [s]Check multiple monitor support for D3D9 isn't broken[/s] (DONE)
* [s]Check OGL support in Linux isn't broken[/s] (DONE)
* Check OGL support in Mac isn't broken (I don't have a Mac)
* Write wiki example showing how to use it in C++ code
Downloads:
Patch from SourceForge.Net
Patch from a mirror.
D3D10 & D3D11 patch from mirror.
Apply this patch against Rev 9725
Simple test to verify it works here.
Note about D3D10/11 patch:
The D3D10/11 patch also fixes very relevant issues:
Code: Select all
* Going to fullscreen could cause a switch to a different resolution due to a wrong refresh rate parameter. Symptoms varied from wrong resolutions and stretched images to crashes.
* FSAA was not working at all. The code was all screwed up. Fixed.
* Depth buffers weren't being released, causing occasional crashes when switching from fullscreen to windowed mode.
* Depth buffers weren't being shared at all (very different behavior from D3D9 & OGL). Fixed with the new system.
Render systems that aren't using this new system yet (e.g. OGL ES) won't compile, because _createDepthBufferFor is pure virtual. Just override it and return null to work around this problem.
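For example, a minimal stub in a not-yet-ported render system could look like this (just a sketch; the exact signature of _createDepthBufferFor should be checked against the patch):
Code: Select all
// Minimal stub for render systems that don't use the new system yet (e.g. GL ES).
// Assumes _createDepthBufferFor receives the RenderTarget it would create a depth buffer for.
DepthBuffer* GLESRenderSystem::_createDepthBufferFor( RenderTarget *renderTarget )
{
    return 0;   // no shared depth buffers for this render system (yet)
}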
Original thread:
Hi all!
Following the recent issues regarding depth sharing, I thought it would be ideal to start a forum thread where the depth sharing design could be discussed, to solve the root problem once and for all.
Right now we're patching over patches, and we're just losing consistency between OpenGL & Direct3D implementations.
I come from a D3D background, so please feel free to criticize my approach if you feel it's going to cause trouble with OGL.
My proposal is that DepthBuffers should be able to attach to RenderTargets.
DepthBuffers would be RenderSystem-agnostic, and would have to be subclassed by each RenderSystem implementation to store the actual depth buffer (e.g. the IDirect3DSurface9 in D3D9).
DepthBuffers will have 4 boolean flags:
Code: Select all
bool mShareable
bool mForRenderWindow
bool mForRTT
bool mForShadowTexture
* When mShareable is false, the DepthBuffer can only be used when the user manually assigns it to a RenderTarget.
* When mForRenderWindow is true, this depth buffer can be used with the RenderWindow (commonly, the backbuffer).
* When mForRTT is true, this depth buffer can be used with any Render Texture Target (excluding the render window). I have the compositor in mind here, and this covers MRTs too.
* When mForShadowTexture is true, this buffer can be used with a RTT that is used for shadowing.
When any of the last 3 flags is false, the DepthBuffer will fail to attach to its corresponding RenderTarget. This ensures the DepthBuffer is not accidentally used where it shouldn't be.
By default, the main depth buffer that is created (if created) with the Render Window should have all of its flags set to true. This behavior could be overridden by the application.
Note: it doesn't have to be 4 boolean values; an 8-bit value and a simple bitwise AND ought to be enough.
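For illustration, the bitmask version could look something like this (just a sketch, all names are made up):
Code: Select all
// Sketch of the bitmask alternative; names are made up for illustration.
enum DepthBufferUsage
{
    DBU_SHAREABLE      = 1 << 0,
    DBU_RENDER_WINDOW  = 1 << 1,
    DBU_RTT            = 1 << 2,
    DBU_SHADOW_TEXTURE = 1 << 3
};

// Inside DepthBuffer the 4 bools collapse into a single byte:
//     uint8 mUsageFlags;
// and the attach-time check becomes a simple AND:
//     if( !(mUsageFlags & DBU_RTT) )
//         ; // refuse to attach to a regular RTT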
Compositor Script:
I thought it would be great to see this from a high-level perspective, and what better way than seeing it in Compositor script code? This is how a depth buffer could be created and handled in a compositor:
Code: Select all
compositor DeferredShading/GBuffer
{
    technique
    {
        // temporary textures
        texture myRTT target_width target_height PF_R8G8B8A8
        depth_buffer myDepth <myRTT|target_output|none> <target_width|#width> <target_height|#height> <default|#msaa> <default|D24S8|D24X8|D16> <Shareable true|false> <RWindow true|false> <RTT true|false> <ShadowTexture true|false>

        target myRTT
        {
            input none
            depth_buffer <myDepth|target_depth_buffer>
            shadows_depth_buffer <myDepth|target_depth_buffer>

            pass clear
            {
            }

            pass render_scene
            {
            }
        }
    }
}
depth_buffer declares a new depth buffer (Mr. Obvious).
Parameters:
<myRTT|target_output|none>
Specifies an existing RTT (or the render window) that this depth buffer will be based upon as a reference (width, height, bit depth, MSAA setting, etc.). These values can be overridden by the optional parameters that follow.
Default: target_output
<target_width|#width> <target_height|#height>
Specifies the width & height for this depth buffer. The same values as the "texture" keyword should be accepted (i.e. target_height_scaled, etc.). target_width & co. are based on the "<myRTT|target_output>" parameter specified at the beginning, and are invalid when the reference RTT is 'none'.
Default: target_width target_height
<default|#msaa>
Specifies the MSAA setting for this depth buffer, either taken from the RTT reference or controlled manually. The number of MSAA samples has to be explicitly specified if the RTT reference is 'none'.
Default: default
<default|D24S8|D24X8|D16>
The depth buffer format. I'm just putting it here for maximum flexibility, but I don't know why someone would want something other than default (which lets Ogre decide).
The format has to be explicitly specified if the RTT reference is 'none'.
Default: default
Shareable
See mShareable above.
Default: true
RWindow
See mForRenderWindow above.
Default: true
RTT
See mForRTT above.
Default: true
ShadowTexture
See mForShadowTexture above.
Default: true
depth_buffer <myDepth|target_depth_buffer>
Specified inside the target. It can be either the depth buffer that the output has, or a manually specified one.
Default: target_depth_buffer
shadows_depth_buffer <myDepth|target_depth_buffer>
Same as depth_buffer, but for the shadow textures.
As you can see, the following compositor would cause a fatal exception (or a log error), even though it parses as completely valid:
Code: Select all
compositor DeferredShading/GBuffer
{
    technique
    {
        // temporary textures
        texture myRTT target_width target_height PF_R8G8B8A8
        depth_buffer myDepth myRTT target_width target_height target_msaa false false false false

        target myRTT
        {
            input none
            depth_buffer myDepth            // Error: mForRTT is false
            shadows_depth_buffer myDepth    // Error: mForShadowTexture is false

            pass render_scene
            {
            }
        }
    }
}
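In C++ terms that would be roughly equivalent to the following (none of these calls exist yet; setUsage() and attachDepthBuffer() are placeholders just to show where the check would kick in):
Code: Select all
// Hypothetical C++ equivalent; setUsage() and attachDepthBuffer() are placeholders.
DepthBuffer *myDepth = renderSystem->_createDepthBufferFor( myRTT );
myDepth->setUsage( false, false, false, false ); // shareable, rwindow, rtt, shadowtexture

myRTT->attachDepthBuffer( myDepth );             // fails: mForRTT is false
// Shadow textures rendered for this target would fail too: mForShadowTexture is false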
DepthBuffer as a class derived from Texture
I'm leaving this for discussion:
The DepthBuffer class should ideally derive from Ogre::Texture (or similar) for the APIs that support using the depth buffer as a texture.
OpenGL & Direct3D10 support this; Direct3D9 requires a hack (which is different for ATI & NVIDIA drivers) in order to work. Maybe another boolean flag could be added to tell that we want this depth_buffer to be usable as a texture too.
I'm not very experienced with HW depth textures, so I'll leave that feedback to someone else who wants to join in.
I'll be waiting for feedback.
What do you all think?
Cheers
Dark Sylinc
Edit: Oops, I somehow forgot to show how the class should look (big overview):
Code: Select all
class DepthBuffer // Derived classes skipped for now
{
    // Note: no IDirect3DSurface9 here, or similar
    int         width;
    int         height;
    DBFormat    format;             // Assuming this is a valid enumeration of bit depths
    bool        mShareable;
    bool        mForRenderWindow;
    bool        mForRTT;
    bool        mForShadowTexture;

public:
    virtual void baseTexture( RenderTarget *renderTarget );        // RenderTarget to base our depth from
    virtual bool isCompatible( RenderTarget *renderTarget ) = 0;   // Must be implemented by the API-specific code
};
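And to give an idea of what a derived class would add on top of that (D3D9 shown, purely as a sketch):
Code: Select all
class D3D9DepthBuffer : public DepthBuffer
{
    // The actual API object lives only in the derived class
    IDirect3DSurface9   *mDepthStencilSurface;

public:
    // Implemented with API-specific checks: resolution, MSAA setting, format...
    virtual bool isCompatible( RenderTarget *renderTarget );
};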