World View Projection Matrix Range?

Post by SpaceDude »

What is the expected range you get when multiplying a point by the world-view-projection matrix?

To put it another way, given the following typical line of code you would see in a vertex shader:

Code: Select all

outPos = mul(worldViewProj, position);
and supposing the point is visible on-screen, is the range of the x, y and z components [-1 to 1] or [0 to 1]?

I seem to have observed some differences between OpenGL and D3D. From my experiments, D3D gives me a range of [0 to 1] while OpenGL gives me [-1 to 1], for the z component at least; I haven't verified the x and y components. Is this normal?

I ask this because I'm working on my depth shadow mapping shaders and this point is rather important.

Post by xavier »

This is normal and expected. For the post-projection z (depth) range, GL deals in [-1..1] while D3D deals in [0..1] (x and y are [-1..1] in both); it's one of the major differences between the two APIs.

Ogre, IIRC, handles the ranges abstractly internally and uses [-1..1].

Post by SpaceDude »

OK, good. It took me a while to figure that out. Now for my second question :). I want my depth shadow mapping code to be the same in both OpenGL and Direct3D. My Cg shadow caster vertex program looks like this (a modified version of the one from the shadow demo):

Code: Select all

// Shadow caster vertex program.
void casterVP(
	float4 position			: POSITION,
	out float4 outPos		: POSITION,
	out float2 outDepth		: TEXCOORD0,

	uniform float4x4 worldViewProj,
	uniform float4 texelOffsets
	)
{
	outPos = mul(worldViewProj, position);

	// fix pixel / texel alignment (texel_offsets is zero on GL and a
	// half-texel offset on D3D9, so it is safe to apply on both APIs)
	outPos.xy += texelOffsets.zw * outPos.w;

	#if OPENGL
		outDepth.x = (outPos.z+1)*0.5;	// OpenGL's z ranges from -1 to 1, instead of the wanted 0 to 1
	#else
		outDepth.x = outPos.z;			// D3D's z already ranges from the desired 0 to 1
	#endif
	outDepth.y = outPos.w;
}
I think I raised this issue before: I can't find a neat way to do one thing on OpenGL and something else on D3D, other than putting in these #if statements and using delegates in the .material file like this:

Code: Select all

vertex_program Ogre/DepthShadowmap/CasterVPD3D cg
{
	source DepthShadowmap.cg
	entry_point casterVP
	profiles vs_2_0
	
	compile_arguments -DOPENGL=0

	default_params
	{
		param_named_auto worldViewProj worldviewproj_matrix
		param_named_auto texelOffsets texel_offsets
	}
}

vertex_program Ogre/DepthShadowmap/CasterVPOpenGL cg
{
	source DepthShadowmap.cg
	entry_point casterVP
	profiles arbvp1
	
	compile_arguments -DOPENGL=1

	default_params
	{
		param_named_auto worldViewProj worldviewproj_matrix
		param_named_auto texelOffsets texel_offsets
	}
}

vertex_program Ogre/DepthShadowmap/CasterVP unified
{
	delegate Ogre/DepthShadowmap/CasterVPOpenGL
	delegate Ogre/DepthShadowmap/CasterVPD3D
}
It works, but it's ugly and more than doubles the size of my .material files.

Post by sinbad »

Rather than using the projection matrix, you can use the viewspace depth and the scene_depth_range / shadow_scene_depth_range parameters to normalise it.
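For illustration, here is a minimal sketch of what sinbad is suggesting (not code from this thread). It assumes shadow_scene_depth_range is bound to a uniform (here called depthRange) providing float4(minDepth, maxDepth, range, 1/range), and that worldView is bound to worldview_matrix; the sign of the view-space z depends on your convention (Ogre's camera looks down -Z), so adjust if needed:

Code: Select all

// Sketch only: store linear view-space depth normalised by the
// scene/shadow depth range, which behaves identically on GL and D3D.
void casterLinearVP(
	float4 position			: POSITION,
	out float4 outPos		: POSITION,
	out float outDepth		: TEXCOORD0,

	uniform float4x4 worldViewProj,
	uniform float4x4 worldView,		// param_named_auto worldView worldview_matrix
	uniform float4 depthRange		// param_named_auto depthRange shadow_scene_depth_range 0
	)
{
	outPos = mul(worldViewProj, position);

	// Distance along the view axis (negated because the camera looks down -Z)
	float viewDepth = -mul(worldView, position).z;

	// Remap into [0..1] across the depth range: (d - min) / range
	outDepth = (viewDepth - depthRange.x) * depthRange.w;
}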

Post by SpaceDude »

sinbad wrote:Rather than using the projection matrix, you can use the viewspace depth and the scene_depth_range / shadow_scene_depth_range parameters to normalise it.
Ah yes, I thought about that. But then, for the shadow receiver, I need a way to get the shadow camera's view matrix on its own, and the only auto parameter I can find relating to the shadow's matrices is:

texture_viewproj_matrix

There doesn't seem to be a:

texture_view_matrix

So I can't do the same calculation with the shadow receiver as I did with the shadow caster :cry:

Never mind, I went with the delegate approach in the end and it's working nicely. I'm now thinking about creating the vertex/fragment program definitions dynamically, either directly in my C++ code or with a Python script that generates a .program file for me, because it gets difficult to define all the different shader combinations once you have a lot of #if/#endif statements in your shaders. It's not just the D3D/OpenGL thing: when you start supporting slightly different shadow algorithms for different hardware, you basically double the size of your .program file each time you add another variation.

I read about some plans to improve the script parsing in Shoggoth; does this try to address some of these issues?

Post by sinbad »

Fair point. You could also pass the rescaling parameters as uniforms perhaps.

The new script compilers don't make delegation any different really, but you are able to 'import' scripts and inherit anything from anything else, which can improve reuse.
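As a rough illustration of the reuse sinbad mentions (the exact Shoggoth syntax here is assumed; the base script name and base program name below are hypothetical), a shared definition could be imported and specialised per render system:

Code: Select all

// Sketch only: import a shared base definition and override just the
// bits that differ per render system (names and syntax assumed).
import * from "DepthShadowmapBase.program"

vertex_program Ogre/DepthShadowmap/CasterVPOpenGL : Ogre/DepthShadowmap/CasterVPBase
{
	profiles arbvp1
	compile_arguments -DOPENGL=1
}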

Post by SpaceDude »

sinbad wrote:Fair point. You could also pass the rescaling parameters as uniforms perhaps.

The new script compilers don't make delegation any different really, but you are able to 'import' scripts and inherit anything from anything else, which can improve reuse.
I was just thinking of something along the lines of what was implemented in Crysis (http://delivery.acm.org/10.1145/1290000 ... EN=6184618):
Very late in the project our renderer programmer introduced a new render path that was based on some über-shader approach. That was basically one pixel shader and vertex shader written in CG/HLSL with a lot of #ifdef. That turned out to be much simpler and faster for development as we completely avoided the hand optimization step. The early shader compilers were not always able to create shaders as optimal as humans could do but it was a good solution for shader model 2.0 graphics cards.
The über-shader had so many variations that compiling all of them was simply not possible. We accepted a noticeable stall due to compilation during development (when shader compilation was necessary) but we wanted to ship the game with a shader cache that had all shaders precompiled. We ended up playing the game on NVIDIA and on ATI till the cache wasn’t getting new entries. We shipped Far Cry with that but clearly that wasn’t a good solution and we had to improve that. We describe a lot more details about our first engine in [Wenzel05].
As he mentioned, there were too many variations to compile them all, and certainly too many to specify manually in a .program file. I'm also getting to the point where defining each variation manually is too much. I'm not sure what the solution is yet, but it seems to me there must be an easier way. Is this something that you have thought about at all?
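To make the über-shader idea concrete, it boils down to one source file full of preprocessor switches, with the variants selected at compile time (a sketch only; the defines and names below are illustrative, fed in the same way as the -DOPENGL flag above):

Code: Select all

// Sketch of an über-shader: one vertex program, many compiled variants
// chosen via defines such as -DRECEIVE_SHADOWS=1 (names illustrative).
void uberVP(
	float4 position			: POSITION,
	float3 normal			: NORMAL,
	out float4 outPos		: POSITION,
	out float3 outNormal	: TEXCOORD0,
#if RECEIVE_SHADOWS
	out float4 outShadowUV	: TEXCOORD1,
	uniform float4x4 texViewProj,
#endif
	uniform float4x4 worldViewProj,
	uniform float4x4 world
	)
{
	outPos = mul(worldViewProj, position);
	outNormal = mul((float3x3)world, normal);
#if RECEIVE_SHADOWS
	outShadowUV = mul(texViewProj, mul(world, position));
#endif
	// Each extra #if (skinning, fog, instancing, ...) multiplies the
	// number of possible compiled variants.
}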

Post by sinbad »

Well, all of that is possible already; in fact, in the current project I'm on we do exactly this, compiling shader variants with a ton of preprocessor options as needed, based on the material properties used. But obviously you can't use scripts to instantiate the variants in this case, unless you want to be explicit about them all. We do it in code.

Post by SpaceDude »

sinbad wrote:Well, all of that is possible already; in fact, in the current project I'm on we do exactly this, compiling shader variants with a ton of preprocessor options as needed, based on the material properties used. But obviously you can't use scripts to instantiate the variants in this case, unless you want to be explicit about them all. We do it in code.
OK, gotcha! I was just concerned there might be some issues with loading order, i.e. materials referencing shader programs that haven't been defined yet. But I guess it's just a question of making sure they are generated in code before the .material files are parsed. Thanks.

Post by nullsquared »

sinbad wrote:Fair point. You could also pass the rescaling parameters as uniforms perhaps.

The new script compilers don't make delegation any different really, but you are able to 'import' scripts and inherit anything from anything else, which can improve reuse.
Is it just me, or is the view space vector length equal to the distance between the light and the current pixel ;)?

I use the mentioned approach, and get my (linear) distance by length(lightPos - worldPos).

Post by SpaceDude »

nullsquared wrote:Is it just me, or is the view space vector length equal to the distance between the light and the current pixel ;)?

I use the mentioned approach, and get my (linear) distance by length(lightPos - worldPos).
For my case I'm using directional lights, so there isn't really a lightPos. nullsquared, I saw your shadow demo; it's very nice, and in fact it's what motivated me to finish working on shadows in my game. But I wonder if certain things couldn't be optimised: the length function is quite expensive because it involves taking a square root, so I think it's best to just use the z value. Anyway, this discussion should probably be moved to the appropriate thread.
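For reference, the two depth metrics being compared look like this (a sketch; the parameter names are illustrative):

Code: Select all

// Radial distance from the light: what length() gives; costs a sqrt per pixel.
float radialDepth(float3 lightPos, float3 worldPos)
{
	return length(lightPos - worldPos);
}

// Depth along the light's view axis: no sqrt, and the natural metric
// for a directional light's orthographic shadow projection.
float axialDepth(float4 lightSpacePos)
{
	return lightSpacePos.z;
}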

Post by nullsquared »

SpaceDude wrote:
nullsquared wrote:Is it just me, or is the view space vector length equal to the distance between the light and the current pixel ;)?

I use the mentioned approach, and get my (linear) distance by length(lightPos - worldPos).
For my case I'm using directional lights, so there isn't really a lightPos.
Good point. Aren't directional lights just rendered as "offset" spotlights (to cover the whole scene) with an ortho projection? Just a thought, but I bet it's possible to get a virtual light position by moving by this "offset" in the negative direction of the light...
SpaceDude wrote:nullsquared, I saw your shadow demo; it's very nice, and in fact it's what motivated me to finish working on shadows in my game. But I wonder if certain things couldn't be optimised: the length function is quite expensive because it involves taking a square root, so I think it's best to just use the z value. Anyway, this discussion should probably be moved to the appropriate thread.
I didn't really work on "optimizing" it, TBH; it was a pretty quick code-up where I just slapped some things together. Surely there are optimizations to be done (and, mainly, better techniques, such as the new layered variance shadow maps and so on).