Clip to world space using depth (solved)


Clip to world space using depth

Post by rpgplayerrobin » Thu Feb 07, 2019 4:07 am

Ogre Version: 1.11.2
Operating System: Windows 10
Render System: Direct3D11

Hello!

I have been having trouble converting a depth value to a world position in a shader; I have attempted it for 30+ hours now without success.
This problem literally haunts my dreams.

The depth buffer itself is fine, since I am using it for many other shaders. Visualizing it shows nothing wrong with it.

The compositor shader I am using is trying to convert a UV coordinate to a world position using the depth of the UV.
I want to be able to sample any pixel on the screen and get that pixel's world position, even if that pixel is not the current one.

All solutions I have found on this forum rely on a vertex texcoord that holds the corner of the quad, but that only works for the current pixel in the pixel shader (because the interpolated texcoord is specific to that pixel).
Correct me if I am wrong, but that means this clever vertex shader technique cannot compute a world-space position for any pixel other than the one whose UV was interpolated in.

The named constants I am sending into the shader do get updated. I set them both on the base material and on the copy of the material that the compositor creates internally.
I know they are getting updated because on half of the frames I multiply the output pixel color by 0 instead of 1, which makes the entire screen flash, and that works.

Since you cannot get matrices through "param_named_auto" for compositors, I compute them myself in code, doing something like this (I have tried more than 50 different ways of calculating the matrices):

Code:

Affine3 tmpView = m_Camera->getViewMatrix(true);
if (m_SceneManager->getCameraRelativeRendering()) // I toggle this each attempt to see if it works or not
	tmpView.setTrans(Vector3::ZERO);
Affine3 tmpInvView = tmpView.inverse();
gpuParam->setNamedConstant("invView", tmpInvView);
I render objects into the depth buffer by doing this in the pixel shader:

Code:

float clipDistance = cFarClipDistance - cNearClipDistance;
return float4((length(iViewPos) - cNearClipDistance) / clipDistance, 1, 1, 1);
So, how does one calculate the world position of a pixel using only a depth value and a UV?
From what I have read, it seems to be something like this:

Code:

float3 WSPositionFromDepth(float2 uv, float depth)
{
    float4 clipSpacePosition = float4(uv.x * 2.0 - 1.0,
				(1.0 - uv.y) * 2.0 - 1.0,
				depth, // Sending in "sSceneDepthSampler.Sample(sSceneDepthSampler_state, uv).r"
				1.0); // I have tried the depth with and without "* 2.0 - 1.0" also

    float4 viewSpacePosition = mul(invProj, clipSpacePosition);

    //viewSpacePosition.xyz /= viewSpacePosition.w; // I tried it with and without this in many attempts

    float4 worldSpacePosition = mul(invView, viewSpacePosition);

    return worldSpacePosition.xyz;
}
But of course that does not work. I have tried hundreds of different versions of that function, with no success.

After a while I tried to avoid matrices as far as I could and came up with this (inspired by the vertex texcoord solution):

Code:

float3 ray = lerp(topLeftCorner, bottomLeftCorner, fragmentTC.y);
ray = lerp(ray, float3(topRightCorner.x, ray.y, ray.z), fragmentTC.x);
ray = normalize(ray);
//ray = mul(view, float4(ray, 1)); // I tried it with and without this in many attempts
float3 worldSpacePosition = cameraPosition + // cameraPosition is sent in manually in world space, no auto param
		(ray * (sSceneDepthSampler.Sample(sSceneDepthSampler_state, fragmentTC).r * clipDepth));
But this only works when the camera looks along the Z/Y plane; any rotation towards X distorts the result, and I tried 50+ different ways of calculating the corners and the final world-space position with that code.

I converted my shader code to Direct3D11 to be able to use the Visual Studio 2017 Graphics Debugging, and I have been debugging the shader code quite a lot, which helps, but I have still not succeeded even with that help.

I realize that the easy way would be to use a position buffer, but I really do not want to do that as I already have the depth buffer which should (?) be sufficient to get world positions.

I also realize that I can make my calculations in view-space instead, but at this point I just do not understand why it is so complex to convert a clip space/screen space position to a world position.

Has anyone else encountered this problem and fixed it?

Any input would be helpful at this point.

Some posts that have similar problems but still did not help me:
viewtopic.php?t=92173
viewtopic.php?t=42835
viewtopic.php?f=2&t=78348
viewtopic.php?f=2&t=72294
https://stackoverflow.com/questions/322 ... ffer-value
https://www.gamedev.net/forums/topic/47 ... rom-depth/
https://mynameismjp.wordpress.com/2009/ ... rom-depth/

Re: Clip to world space using depth

Post by rpgplayerrobin » Fri Feb 08, 2019 12:41 am

I finally fixed it...
For anyone else going through this trouble, here is the code:

C++:

Code:

Vector3 tmpCameraPosition = m_Camera->getDerivedPosition();

Matrix4 tmpInvView = m_Camera->getViewMatrix();
tmpInvView.setTrans(Vector3::ZERO);
tmpInvView = tmpInvView.inverse();

Matrix4 tmpInvProj = m_Camera->getProjectionMatrix().inverse();

// Remember to gather the materials you need, do not only set the base material,
// set all materials. See CompositorInstance::Listener to attach it to a compositor,
// then see notifyMaterialSetup to fetch those materials and use them here
for(int i = 0; i < materials.Size(); i++)
{
	GpuProgramParametersSharedPtr tmpGPUParam = materials[i]->
		getTechnique(0)->getPass(0)->getFragmentProgramParameters();
	tmpGPUParam->setNamedConstant("cameraPosition", tmpCameraPosition);
	tmpGPUParam->setNamedConstant("invView", tmpInvView);
	tmpGPUParam->setNamedConstant("invProj", tmpInvProj);
}
HLSL (this is for Direct3D9):

Code:

float3 WSPositionFromDepth(float2 uv, float depth)
{
    float4 clipSpacePosition = float4((uv.x * 2.0) - 1.0,
		((1.0 - uv.y) * 2.0) - 1.0,
		farClipDistance, // I first set "depth" here, but that caused errors when coming close to a surface
		1.0);

	float4 worldSpacePosition = mul(invProj, clipSpacePosition);
	worldSpacePosition = mul(invView, worldSpacePosition);
	worldSpacePosition /= worldSpacePosition.w;

	return cameraPosition - (normalize(worldSpacePosition.xyz) * depth);
}

// Then place this in your fragment shader:
float clipDepth = farClipDistance - nearClipDistance;
float fragmentWorldDepth = tex2D(depthSampler, uv).r * clipDepth;
float3 position = WSPositionFromDepth(uv, fragmentWorldDepth);

// You can visualize it by doing any of these methods:
color.xyz = length(position) / 50;
color.xyz = sin(position);
color.xyz = position / 1;


I wrote some code to find the correct way of transforming the position to world space.
I made a scene where the camera is static at a position and a direction.
In that scene I placed two objects (buckets, as entities) at two specific positions: one placed in the 3D world so that it appears at the middle right of the screen, and one at the far bottom right.
Then I wrote their positions down, after adjusting them to be close to the UV positions I was going to test, into the variables "tmpBucketRight" and "tmpBucketBottomRight".
The first bucket (x = 1, y = 0.5) is tested first, and I used Caps Lock to switch to testing the second bucket (x = 1, y = 1).
When the scene is loaded and the camera is in the right spot, I set a breakpoint at the end of the code at "tmpAsd = "";" and then press "L".
The tmpAsd2 variable shows all tests that came close to the target position.
After testing both of them, I just extracted what I needed from the code, used it, and it worked.

Code:

if (app->m_Keyboard->isKeyDown(OIS::KC_L))
{
	CString tmpAsd = "";
	CString tmpAsd2 = "";

	Vector3 tmpBucketRight = Vector3(7.479527f, 0.0f, -0.032144f);
	Vector3 tmpBucketBottomRight = Vector3(6.248f, 0.0f, 3.748356f);

	float tmpBucketRight_Distance = (tmpCameraPosition - tmpBucketRight).length();
	float tmpBucketBottomRight_Distance = (tmpCameraPosition - tmpBucketBottomRight).length();



	Vector3 targetPosition = tmpBucketRight;
	float targetDistance = tmpBucketRight_Distance;

	float texCoordx = 1.0f;
	float texCoordy = 0.5f;
	float depth = targetDistance;

	if (CGeneric::IsCapslockToggled())
	{
		targetPosition = tmpBucketBottomRight;
		targetDistance = tmpBucketBottomRight_Distance;

		texCoordx = 1.0f;
		texCoordy = 1.0f;
		depth = targetDistance;
	}



	Ogre::Real remappedX = (texCoordx * 2) - 1;
	Ogre::Real remappedY = ((1.0f - texCoordy) * 2) - 1;
	Ogre::Real depthZ = depth;

	Ogre::Vector4 clipSpaceValue(remappedX, remappedY, depthZ, 1);

	for (int n = 0; n < 2; n++)
	{
		for (int y = 0; y < 2; y++)
		{
			for (int x = 0; x < 2; x++)
			{
				for (int i = 0; i <= 3; i++)
				{
					for (int m = 0; m < 3; m++)
					{
						Ogre::Matrix4 projectionMat = app->m_Camera->getProjectionMatrixRS();
						if (i == 1)
							projectionMat = app->m_Camera->getProjectionMatrix();
						if (i == 2)
							projectionMat = app->m_Camera->getProjectionMatrixWithRSDepth();
						if (i == 3)
						{
							Ogre::Matrix4 projMatrix = app->m_Camera->getProjectionMatrix();
							Ogre::Root::getSingleton().getRenderSystem()->_convertProjectionMatrix(projMatrix, projectionMat, true);
						}

						Ogre::Matrix4 asdviewMatrix = app->m_Camera->getViewMatrix();
						if (y == 1)
							asdviewMatrix = app->m_Camera->getViewMatrix(true);
						if (n == 1)
							asdviewMatrix.setTrans(Vector3::ZERO);

						Ogre::Vector4 position = projectionMat.inverse() * clipSpaceValue;
						if (x == 1)
							position /= position.w;
						position = asdviewMatrix.inverse() * position;
						position /= position.w;

						Vector3 tmp3DPosition = Vector3(position.x, position.y, position.z);
						if (m == 1)
							tmp3DPosition = tmpCameraPosition + (tmp3DPosition.normalisedCopy() * depthZ);
						if (m == 2)
							tmp3DPosition = tmpCameraPosition - (tmp3DPosition.normalisedCopy() * depthZ);

						CString tmpStr = CGeneric::ToString(tmp3DPosition) +
							"   n: " + CGeneric::ToString(n) +
							", y: " + CGeneric::ToString(y) +
							", x: " + CGeneric::ToString(x) +
							", i: " + CGeneric::ToString(i) +
							", m: " + CGeneric::ToString(m);
						if ((targetPosition - tmp3DPosition).length() < 1.5f)
							tmpAsd2 += tmpStr.m_string + " dist: " + CGeneric::ToString((targetPosition - tmp3DPosition).length()) + "\n";
						else
							tmpAsd += tmpStr.m_string + "\n";
					}
				}
			}
		}
	}

	tmpAsd = ""; // Set a breakpoint here to see all the different choices, tmpAsd2 contains all the ones that came close to the position you wanted
	tmpAsd2 = "";
}