Light Volumes in deferred rendering / compositor

A place for users of OGRE to discuss ideas and experiences of utilising OGRE in their games / demos / applications.
bharling
Gremlin
Posts: 166
Joined: Fri Jun 30, 2006 1:04 pm

Light Volumes in deferred rendering / compositor

Post by bharling »

Hi,

I'm trying to implement a deferred render scheme ( all the rage these days doncherknow .. ) and am pretty sure I understand the concepts well enough, but have a question regarding implementing this in Ogre.

All the information I've seen says that you should render lights as meshes to get the volume information and apply it in the final lighting pass of a deferred scheme. This minimizes overdraw by updating only the portion of the render target that the light volume covers in screen space. My question is: how can you do this using Ogre's compositor framework? As far as I understand it, a compositor pass can only draw the whole fullscreen quad; it can't be restricted to only drawing certain geometry ( ie: the light volume ). As far as I can tell, the only way to do it in Ogre would be a 'once per light' pass in the compositor material ( after filling the depth / normals / diffuse targets etc ), then using 'discard' in the shader for all pixels outside the light volume. The trouble is that I'd still be processing every pixel of the fullscreen quad for each light, when as I understand it the whole point is to process only the pixels inside each light's volume.

Just to be clear - I'm not trying to do volumetric lighting here, just deferred lighting using point lights for now.

Secondly -

I'd like to try the instant radiosity technique mentioned in the latest dev journal for Infinity ( so far it's the only thing I've seen on there that I think I might be able to do .. ). Basically it involves doing a limited number of raycasts from the active lights in the scene to generate a number of ambient point lights that simulate GI ( this is not pre-calculated but updated every frame ). I can do the raycasting part to get the positions of the ambient lights, but how would I sample the texture of any mesh that a ray hits, to get the correct colour for the light bounce?

can anyone give me some pointers?

thanks :)
Was here
Praetor
OGRE Retired Team Member
Posts: 3335
Joined: Tue Jun 21, 2005 8:26 pm
Location: Rochester, New York, US

Re: Light Volumes in deferred rendering / compositor

Post by Praetor »

I actually recently did exactly what that journal article discusses: deferred, instant radiosity.

How to do it in Ogre? Take a look at the deferred shading demo that comes with Ogre. It's not the only way to do it, but it gives you an idea of what you need. A pure compositor system is not possible right now - that's something Noman is working on for SoC. Also, nullsquared can probably come in here and talk a bit about the deferred system he's set up. My deferred instant radiosity was done in XNA 3.0 ( and it was a pain, just for the record ).
Game Development, Engine Development, Porting
http://www.darkwindmedia.com

Re: Light Volumes in deferred rendering / compositor

Post by bharling »

thanks Praetor - I took your advice and had a look at the deferred demo, and my first question arose quite quickly :) :

Code:

compositor DeferredShading/ShowLit
{
	technique
	{
		// temporary textures
		texture mrt_output target_width target_height PF_FLOAT16_RGBA PF_FLOAT16_RGBA		
		
		target mrt_output
        {
            input none
        	pass clear
			{
			}
			
			// everything but the lights and their meshes
			// could do this with something like a visibility mask too
			pass render_scene
			{
				first_render_queue 22
				last_render_queue  90	
			}
        }
	
        target_output
        {
			input none
			
			// render skies
			pass render_scene
			{
				first_render_queue 5
				last_render_queue  5			
			}
						
			// just the lights and their meshes
			pass render_scene
			{
				first_render_queue 20
				last_render_queue 21
			}
		}
	}
}
I understand the first target well enough, but how does this compositor ( deferred.compositor from the Ogre sample media ) resolve to a final image? I'm probably missing something about Ogre's compositor framework, but as far as I can tell only the sky and the lights will be rendered in the output target, and not the rest of the scene?
ebol
Halfling
Posts: 67
Joined: Sun Sep 24, 2006 8:49 pm

Re: Light Volumes in deferred rendering / compositor

Post by ebol »

bharling wrote: render lights as meshes to get the volume information and apply to the final lighting pass
Well, it's not the only way to do it. Depending on what you are doing ( what your actual pipeline looks like ) you could do this in 2D screen space only. It's up to you to decide what will work best for you, since both approaches have their pros and cons ( consider simplicity, flexibility, overdraw etc ).

So, if you are interested, here's what you could do ( 3 optimized steps ):

1. Render the light as a fullscreen quad, but calculate a scissor rectangle from the light's bounding sphere and turn on the scissor test for this pass. This way you will only update the parts of the screen affected by the light ( approximately, of course, since you are projecting a 3D sphere onto a 2D rectangle ).

2. Now, in your shader, the first thing you should calculate is the distance between the pixel you are currently processing and the light position. If this value is greater than the light's attenuation range, discard the pixel / finish processing it. This way you will only calculate pixels that are inside the light sphere.

3. Another possible optimization is to calculate the diffuse lighting term NdotL and, after saturating it, check whether it's non-zero. If NdotL is zero you can discard the pixel as well.
Again, I'm not saying it's the best or better solution, but if you are having trouble creating and rendering lights using meshes, you could try it this way.
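If it's useful, here's a rough sketch of how step 1's scissor rectangle could be computed on the CPU ( plain Python, hypothetical names and conventions - it projects the view-space bounding box of the light sphere, so it over-covers the true projected shape a little ):

```python
import math

def light_scissor_rect(center_view, radius, fov_y, aspect, viewport_w, viewport_h):
    # center_view: light position in *view* space ( x right, y up, -z forward ).
    # Returns ( x, y, w, h ) in pixels, or None if the light cannot touch the screen.
    x, y, z = center_view
    if z - radius >= 0.0:
        return None                                  # sphere entirely behind the eye
    near_z = z + radius                              # closest point to the camera
    if near_z >= 0.0:
        return (0, 0, viewport_w, viewport_h)        # straddles the eye plane: cover all
    f = 1.0 / math.tan(fov_y * 0.5)                  # perspective projection scale

    def to_ndc(px, py, pz):
        # project a view-space point to normalised device coords at depth pz
        return ((f / aspect) * px / -pz, f * py / -pz)

    nx0, ny0 = to_ndc(x - radius, y - radius, near_z)
    nx1, ny1 = to_ndc(x + radius, y + radius, near_z)
    # NDC [-1,1] -> pixel coords ( y flipped for raster space ), clamped to viewport
    x0 = max(0.0, (nx0 * 0.5 + 0.5) * viewport_w)
    x1 = min(float(viewport_w), (nx1 * 0.5 + 0.5) * viewport_w)
    y0 = max(0.0, (1.0 - (ny1 * 0.5 + 0.5)) * viewport_h)
    y1 = min(float(viewport_h), (1.0 - (ny0 * 0.5 + 0.5)) * viewport_h)
    if x1 <= x0 or y1 <= y0:
        return None                                  # off-screen
    return (int(x0), int(y0), int(x1 - x0), int(y1 - y0))
```

You'd then set that rectangle as the scissor state before drawing the fullscreen quad for the light.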

Re: Light Volumes in deferred rendering / compositor

Post by bharling »

Hi ebol,

thanks for the input, that's along the lines of what I was thinking - I've been looking at light scissoring in the Ogre manual and it looks promising. Correct me if I'm wrong, but that method would allow me to use regular Ogre lights instead of meshes?

I think first though that I will try the method outlined here: http://www.gamedev.net/community/forums ... _id=482654

I think I can do all the culling operations mentioned with a combination of stencil operations over two compositor passes, combined with depth operations defined in the light mesh material. As I understand it, the downside is an extra pass to fill the stencil buffer, but the upside is very efficient culling before the pixel processing stage is even reached ( if I can get my head around how to properly fill the stencil ). That way I shouldn't have to discard any pixels, as the stencil / depth comparisons will do that for me. That tutorial also outlines how to get the projective texture coords to sample the G-buffer, but I'm still not entirely sure how to pipe those textures into the light materials - will refer to the deferred rendering demo for that, methinks.

If and when I get a working version I will post everything on the forums ( though it will be python-ogre rather than pure ogre, but should be easy to translate )

cheers!

Re: Light Volumes in deferred rendering / compositor

Post by ebol »

bharling wrote: Correct me if I'm wrong but that method would allow me to use regular ogre lights instead of meshes?
Exactly - no meshes involved. Just pass all the needed light info to your shader and calculate the scissor rectangle from the light's bounding sphere, which is simply defined by the light's position and attenuation ( center and radius ).

Re: Light Volumes in deferred rendering / compositor

Post by bharling »

Well, I've got a little further with this.

Now I have the MRT set up correctly, and light meshes that can use the results of the MRT ( albedo, normals, depth ) and project those textures back into screen space. The next problem is how to properly reconstruct the light position...

I'm trying out this technique used in Infinity to cull pixels, but it's not really making much sense to me at the moment:
For each light, a Z range is determined on the CPU. For point lights, it is simply the distance between the camera and the light center, plus or minus the light radius. When the depth is sampled in the shader, the pixel is discarded if the depth is outside this Z range. This is the very first operation done by the shader. Here's a snippet:

Code:

    vec4 ColDist = texture2DRect(ColDistTex, gl_FragCoord.xy);
    if (ColDist.w < LightRange.x || ColDist.w > LightRange.y)
        discard;
Using this I just end up with a band of lit pixels stretching horizontally across the screen - which I suppose is what you might expect from using the camera's distance to the light?
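For reference ( and in case I've misread the article ), I think the CPU side of that Z range is just this - a quick Python sketch with hypothetical names, with depth stored as linear distance / farClip the way my G-buffer does it:

```python
import math

def light_z_range(cam_pos, light_pos, radius, far_clip):
    # Z range for a point light: camera-to-light distance plus / minus the
    # radius, normalised the same way the G-buffer depth is stored
    # ( linear distance / farClip ).
    d = math.dist(cam_pos, light_pos)
    z_min = max(0.0, (d - radius) / far_clip)
    z_max = min(1.0, (d + radius) / far_clip)
    return z_min, z_max
```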

Re: Light Volumes in deferred rendering / compositor

Post by bharling »

Am getting very close with this now :( I just can't work out my view transforms for the light position, I believe.

I think all I need to do is get the arbitrary light centre position into view space ( currently it is defined directly in Ogre coordinates, which I'm guessing equates to world space? )

I'm using meshes as lights, and passing in the far top-right frustum corner to get an eye-to-pixel ray, to obtain the view-space position of the fragment - I think this part is correct now. I'm also generating projective texture coords and successfully sampling the G-buffers, which again seems to be working fine, so it's really only the light position in view space that I'm struggling with now!
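For what it's worth, the fix I suspect I need is just transforming the light's world position by the view matrix on the CPU before uploading it, so it lives in the same space as the reconstructed fragment position. An untested sketch in plain Python ( column-vector 4x4 convention, no Ogre ):

```python
def mat_vec4(m, v):
    # m: 4 rows of 4 floats, v: (x, y, z, w); returns m * v ( column-vector convention )
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def light_pos_view_space(view_matrix, light_pos_world):
    # append w = 1 so the view matrix's translation applies to the point
    x, y, z, _ = mat_vec4(view_matrix, (*light_pos_world, 1.0))
    return (x, y, z)
```

The key point being that lightPos and viewPos must be in the same space before taking their difference in the pixel shader.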

anyway here's a screenshot and the shaders in case anyone can help!

[ screenshot ]

note: in the screenshot I'm shading any pixel that falls outside the supposed light radius ( but still inside the light volume mesh ) in red instead of discarding it for now, so I can see what might be happening. Also, the depth buffer is a linear view-space float packed into a 32-bit target ( thanks nullsquared ), which is why it looks a bit funny.

G-buffer shader ( outputs to 3 MRTs, all PF_R8G8B8A8 ):

Code:

// Pack a [0,1) float into four 8-bit channels ( one base-256 digit each )
float4 packFloat( float val ) {
   float4 shift = float4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
   float4 mask = float4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
   float4 ret = frac(val * shift);
   ret -= ret.xxyz * mask;
   return ret.wzyx;
}

// Reassemble the packed float from the four channels
float unpack(float4 value)
{
    float4 shift = float4(
        1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
    return dot(value.wzyx, shift);
}


void geom_vs (
    float4 inPos: POSITION,
    uniform float4x4 worldViewProjMat,
    uniform float4x4 v,
    uniform float4x4 worldViewMat,
    uniform float farClip,
    float3 inNormal: NORMAL,
    float2 inTexCoord: TEXCOORD0,

    out float4 oPos: POSITION,
    out float2 oDiffuseTexCoords: TEXCOORD0,
    out float depth: TEXCOORD1,
    out float3 oNormal: TEXCOORD2)
{
    oPos = mul(worldViewProjMat, inPos);
    float4 vPos = mul( worldViewMat, inPos );
    oDiffuseTexCoords = inTexCoord;
    depth = length( vPos.xyz ) / farClip;
    oNormal = normalize( mul( (float3x3)v, inNormal ) );
}

void geom_ps (
    float3 texCoords: TEXCOORD0,
    float depth: TEXCOORD1,
    float3 normal: TEXCOORD2,

    uniform sampler2D normalTexture: register(s0),
    uniform sampler2D diffuseTexture: register(s1),

    out float4 diffuseTarget: COLOR0,
    out float4 normalTarget: COLOR1,
    out float4 depthTarget: COLOR2)
{
    diffuseTarget = tex2D( diffuseTexture, texCoords );
    normalTarget = float4( normalize( tex2D( normalTexture, texCoords ).xyz * normal ), 1.0 );
    depthTarget = packFloat( depth );
}
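As a sanity check on that pack / unpack pair, here's a CPU-side Python equivalent of the same frac / shift trick ( ignoring the 8-bit quantisation of the actual render target, which costs some precision ):

```python
import math

def frac(x):
    # fractional part, matching HLSL's frac()
    return x - math.floor(x)

def pack_float(val):
    # mirror of packFloat: split val ( in [0,1) ) across four channels
    shift = (256.0 ** 3, 256.0 ** 2, 256.0, 1.0)
    r = [frac(val * s) for s in shift]          # ret = (x, y, z, w)
    # ret -= ret.xxyz * mask, then return ret.wzyx
    return (r[3] - r[2] / 256.0,
            r[2] - r[1] / 256.0,
            r[1] - r[0] / 256.0,
            r[0])

def unpack(value):
    # dot(value.wzyx, 1 / shift)
    inv = (1.0 / 256.0 ** 3, 1.0 / 256.0 ** 2, 1.0 / 256.0, 1.0)
    wzyx = (value[3], value[2], value[1], value[0])
    return sum(a * b for a, b in zip(wzyx, inv))
```

The subtraction terms telescope, so unpack( pack_float( d ) ) gives d back exactly in full precision.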
Mesh light shader:

Code:

// Reassemble the packed float from the four channels
float unpack(float4 value)
{
    float4 shift = float4(
        1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
    return dot(value.wzyx, shift);
}


void deferredLight_VS (
    float4 inPos : POSITION,

    uniform float4x4 worldViewProjMat,
    uniform float3 farCorner,
    uniform float farClip,
    uniform float2 screen,
    uniform float2 targetRes,

    out float4 oPos : POSITION,
    out float2 oTex : TEXCOORD0,
    out float3 ray : TEXCOORD1,
    out float far : TEXCOORD2,
    out float4 projCoords : TEXCOORD3
)
{
    float4 oPoz = mul( worldViewProjMat, inPos );
    oPos = oPoz;
    oPoz.x = ((oPoz.x + oPoz.w) * screen.x + oPoz.w) * targetRes.x;
    oPoz.y = ((oPoz.w - oPoz.y) * screen.y + oPoz.w) * targetRes.y;

    projCoords = oPoz;
    // clean up inaccuracies for the UV coords
    float2 uv = sign(oPos.xy);
    // convert to image space
    oTex = (float2(uv.x, -uv.y) + 1.0) * 0.5;
    ray = farCorner * float3(sign(inPos.xy), 1);
    far = farClip;
}

void deferredLight_PS (
    float2 inTex : TEXCOORD0,
    float3 ray : TEXCOORD1,
    float far : TEXCOORD2,
    float4 projCoords : TEXCOORD3,

    uniform sampler2D diffuseRT: register(s0),
    uniform sampler2D normalsrt: register(s1),
    uniform sampler2D depthRT: register(s2),

    uniform float3 camPos,
    uniform float3 lightPos,
    uniform float radius,

    out float4 oCol:COLOR
)
{
    float pDepth = unpack(tex2D( depthRT, inTex ));
    float3 viewPos = ray * pDepth;

    float3 lightVec = lightPos - viewPos;
    oCol = tex2Dproj( diffuseRT, projCoords.xyz ) * (length( lightVec ) / radius);
    if (length( lightVec ) > radius )
        oCol = float4(1,0,0,1);
}