[2.3] Combining the Tutorials OpenVR and Sky_Postprocess


Post by Slamy »

Hello everyone. As I'm new to this board, I think I should introduce myself first.
My name is Slamy and I like to program stuff in my free time, for fun. In January I got my hands on a VR headset, as I thought it was time to try one out. After playing some games I thought it would be cool to develop something for it myself. I tried Unity and Unreal, but they were all too much, and since I'm already familiar with SDL and a little bit of OpenGL, I searched for 3D game engines; ogre-next seemed like a good choice, as it already comes with a working example.
After tinkering with it for quite some time, I wanted to get rid of the blue sky and replace it with a skybox. Luckily there was already a working example of a skybox using the compositor. I tried to combine the two, thinking that adding a render_quad to main_stereo_render might do the trick.

Code: Select all

abstract target rt_renderwindow
{
	//Render opaque stuff
	pass render_scene
	{
		load
		{
			all				clear
			clear_colour	0.2 0.4 0.6 1
		}
		store
		{
			depth	dont_care
			stencil	dont_care
		}

		overlays	off

		shadows		ShadowMapDebuggingShadowNode
	}
	
	//Render sky after opaque stuff (performance optimization)
	pass render_quad
	{
		quad_normals	camera_direction
		material SkyPostprocess
	}
}

abstract target main_stereo_render
{
	//Eye render
	pass render_scene
	{
		load
		{
			all				clear
			clear_colour	0.2 0.4 0.6 1
		}
		store
		{
			depth	dont_care
			stencil	dont_care
		}

		//0x01234567
		identifier 19088743

		overlays	off

		cull_camera VrCullCamera

		shadows		ShadowMapDebuggingShadowNode

		instanced_stereo true
		viewport 0 0.0 0.0 0.5 1.0
		viewport 1 0.5 0.0 0.5 1.0
		
	}

	pass render_quad
	{
		quad_normals	camera_direction
		material SkyPostprocess
		viewport 0.0 0.0 0.5 1.0
	}
	
	pass render_quad
	{
		quad_normals	camera_direction
		material SkyPostprocess
		viewport 0 0.5 0.0 0.5 1.0
	}
}



compositor_node Tutorial_OpenVRNodeNoRDM
{
	in 0 stereo_output

	target stereo_output : main_stereo_render {}
}

compositor_node Tutorial_OpenVRMirrorWindowNode
{
	in 0 rt_renderwindow

	target rt_renderwindow : rt_renderwindow {}

}


workspace Tutorial_OpenVRWorkspaceNoRDM
{
	connect_output Tutorial_OpenVRNodeNoRDM 0
}

workspace Tutorial_OpenVRMirrorWindowWorkspace
{
	//connect_output Tutorial_OpenVRMirrorWindowNode 0
	connect_output Tutorial_OpenVRNodeNoRDM 0
}
Sadly though, it only looks as expected if the define USE_OPEN_VR is inactive. It then uses the NullCompositorListener and I can see both eyes; if I rotate, the scene and the skybox match up. But if I activate USE_OPEN_VR, it seems to alter the projection, and if I rotate the view it looks weird. It almost seems that the FOV of the rendered scene doesn't match that of the skybox.
I had some thoughts, but I can't put my finger on it. Maybe it's because the eye positions are equal on the NullCompositorListener but not on the OpenVRCompositorListener?
Is "pass render_quad" even considered a good option for something which also uses "instanced_stereo true" in the rendered scene?
To my shame, I'm new to Ogre and also new to OpenVR, so maybe educating myself in both topics at the same time was a bad way to learn. :?

I hope I haven't broken any forum rules by asking this. Maybe someone has an idea for me. :wink:

Slamy

Post by dark_sylinc »

Hi!

As you found out, running a render_quad pass in VR is not trivial, because you need to figure out the math of the reprojections and make sure each half of the image gets treated correctly.

We tried to get the sky-as-a-postprocess sample working in VR, but there was an issue with aspect ratio and tilting which caused a slight distortion of the sky (and that makes you really dizzy in VR), so it's better and easier to use SceneManager::setSky, which will work in VR.

Just call that function and you're set.

Cheers
Matias

Post by Slamy »

Hi!
First of all, thank you for your fast answer.
I didn't know there was a function like setSky. An example is surely lacking there. :D
But it seems there is a bug in it.
I've added this to Tutorial_OpenVRGameState::createScene01(void), to be sure that I'm basing this on the example and not on a faulty project of mine:

Code: Select all

sceneManager->setSky(true, Ogre::SceneManager::SkyCubemap, "SaintPetersBasilica.dds",
						 Ogre::ResourceGroupManager::AUTODETECT_RESOURCE_GROUP_NAME);
But it seems that only the left eye gets the sky rendered, while the right eye is cleared to plain blue. Maybe I'm still missing something?

Post by dark_sylinc »

Mmmm that's weird. Unlit isn't using instanced stereo.

Arrghh!!! It uses low level materials. I forgot!

QuadCameraDirNoUV_vs would have to be edited to use gl_ViewportIndex and gl_InstanceID & 0x02 to determine which eye you're rendering to.

This was quite the oversight.

Something like:

Code: Select all

gl_Position.xy = (worldViewProj * vec4( vertex.xy, 0, 1.0f )).xy;
if( (gl_InstanceID & 0x02) != 0 )
    gl_Position.x += 0.5f;
gl_ViewportIndex = gl_InstanceID & 0x02;
May just work, but needs testing

Post by Slamy »

I feel like I just took a math test.
Something like this at least got the sky showing on the right eye as well.

Code: Select all

#version ogre_glsl_ver_330
#extension GL_ARB_shader_viewport_layer_array : require

vulkan_layout( OGRE_POSITION )	in vec2 vertex;
vulkan_layout( OGRE_NORMAL )	in vec3 normal;

in uint drawId;

vulkan( layout( ogre_P0 ) uniform Params { )
	uniform vec2 rsDepthRange;
	uniform mat4 worldViewProj;
vulkan( }; )

out gl_PerVertex
{
	vec4 gl_Position;
};

vulkan_layout( location = 0 )
out block
{
	vec3 cameraDir;
} outVs;

void main()
{
	gl_Position.xy = (worldViewProj * vec4( vertex.xy, 0, 1.0f )).xy;

	gl_Position.z = rsDepthRange.y;
	gl_Position.w = 1.0f;
	
	outVs.cameraDir.xyz	= normal.xyz;
	gl_ViewportIndex	= int( drawId & 0x01u );
}
But something is still wrong, and it seems to be the FOV.

When I remove

Code: Select all

//mRenderSystem->_convertOpenVrProjectionMatrix(projectionMatrix[i], projectionMatrixRS[i]);
and provide a fixed matrix like the one below, I of course get the wrong transformation for my VR headset, but at least the FOV of the rendered scene and of QuadCameraDirNoUV_vs match up, and when I rotate the camera the scene lines up with my skybox.

Code: Select all

		Ogre::Matrix4 projectionMatrixRS[2] = {mCamera->getProjectionMatrixWithRSDepth(),
											   mCamera->getProjectionMatrixWithRSDepth()};
So there is something missing. It's almost as if the FOV is not taken from the OpenVR projection matrix but from a standard one instead.
These photos show this.

[attached images]

The skybox looks like a direct copy, with no dependence on the VR projection matrix.
In my defense, I don't know anything about GLSL. This is all very new to me.

Post by Slamy »

dark_sylinc wrote: Mon Mar 29, 2021 7:05 am Arrghh!!! It uses low level materials. I forgot!
Ok, this is something I now understand. In OgreHlmsPbs.cpp there are these lines:

Code: Select all

if( !isInstancedStereo )
{
    // some non vr stuff :-D
}
else
{
    //float4x4 viewProj[2];
    Matrix4 vrViewMat[2];
    for( size_t eyeIdx=0u; eyeIdx<2u; ++eyeIdx )
    {
        vrViewMat[eyeIdx] = cameras.renderingCamera->getVrViewMatrix( eyeIdx );
        Matrix4 vrProjMat = cameras.renderingCamera->getVrProjectionMatrix( eyeIdx );
        if( renderPassDesc->requiresTextureFlipping() )
        {
            vrProjMat[1][0] = -vrProjMat[1][0];
            vrProjMat[1][1] = -vrProjMat[1][1];
            vrProjMat[1][2] = -vrProjMat[1][2];
            vrProjMat[1][3] = -vrProjMat[1][3];
        }
        Matrix4 viewProjMatrix = vrProjMat * vrViewMat[eyeIdx];
        for( size_t i=0; i<16; ++i )
            *passBufferPtr++ = (float)viewProjMatrix[0][i];
    }

    //float4x4 leftEyeViewSpaceToCullCamClipSpace
    if( forwardPlus )
    {
        Matrix4 cullViewMat = cameras.cullingCamera->getViewMatrix( true );
        Matrix4 cullProjMat = cameras.cullingCamera->getProjectionMatrix();
        if( renderPassDesc->requiresTextureFlipping() )
        {
            cullProjMat[1][0] = -cullProjMat[1][0];
            cullProjMat[1][1] = -cullProjMat[1][1];
            cullProjMat[1][2] = -cullProjMat[1][2];
            cullProjMat[1][3] = -cullProjMat[1][3];
        }
        Matrix4 leftEyeViewSpaceToCullCamClipSpace;
        leftEyeViewSpaceToCullCamClipSpace = cullProjMat * cullViewMat *
                                             vrViewMat[0].inverseAffine();
        for( size_t i=0u; i<16u; ++i )
            *passBufferPtr++ = (float)leftEyeViewSpaceToCullCamClipSpace[0][i];
    }
}
It fills the PassBuffer of the Pbs material, which contains:

Code: Select all

layout( std140, binding = 0 ) uniform PassBuffer
{
	mat4 viewProj[2];
	vec4 leftToRightView;
	// ...
};
Now, QuadCameraDirNoUV_vs.glsl only has

Code: Select all

vulkan( layout( ogre_P0 ) uniform Params { )
	uniform vec2 rsDepthRange;
	uniform mat4 worldViewProj;
vulkan( }; )
So this means that for the low level materials, the dependency on the isInstancedStereo mode is missing; it only exists for Pbs and Unlit.
I think I would be close to a solution if I knew the place where the worldViewProj matrix is provided to the vertex shader. I'm currently unable to find it.

Post by dark_sylinc »

Yes! The code responsible for handling this for low level materials is in AutoParamDataSource (see AutoParamDataSource::getWorldViewProjMatrix).

IMHO this would be better handled by adding a new auto param for VR matrices. See ACT_WORLDVIEWPROJ_MATRIX.
You'd basically have to add a new ACT_*_MATRIX and a new function to AutoParamDataSource for handling the stereo matrices.

Since you need to send two matrices, you can look at ACT_WORLD_MATRIX_ARRAY_3x4 / ACT_WORLD_MATRIX_ARRAY for reference; they send more than one matrix to the shader.
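
Something along these lines might be a starting point (untested sketch; the names and the exact plumbing are just suggestions, mirroring how ACT_WORLDVIEWPROJ_MATRIX is wired up):

Code: Select all

// Untested sketch; ACT_VR_WORLDVIEWPROJ_MATRIX and getVrWorldViewProjMatrix
// are suggested names, not existing API.

// OgreGpuProgramParams.h: new entry in the AutoConstantType enum
ACT_VR_WORLDVIEWPROJ_MATRIX,

// OgreAutoParamDataSource.h: per-eye cache, analogous to mWorldViewProjMatrix
mutable Matrix4 mVrWorldViewProjMatrix[2];

// OgreAutoParamDataSource.cpp: combine the eye's VR projection with the
// world-view matrix, the way getWorldViewProjMatrix() does for the mono path
const Matrix4& AutoParamDataSource::getVrWorldViewProjMatrix( size_t eye ) const
{
    mVrWorldViewProjMatrix[eye] =
        mCurrentCamera->getVrProjectionMatrix( eye ) * getWorldViewMatrix();
    return mVrWorldViewProjMatrix[eye];
}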

Post by Slamy »

It seems I'm not up to this task.
I've added an ACT_VR_WORLDVIEWPROJ_MATRIX to the auto params and also a
const Matrix4& AutoParamDataSource::getVrWorldViewProjMatrix(size_t eye) const
to accompany it.
I've integrated getVrProjectionMatrix() into it and first thought that getVrViewMatrix would be needed as well. But this wasn't the case, as the sky seems to use the identity view, and therefore I guess the view matrix shall stay identity even for VR.

But even after applying all this and making the shader use the newly added matrix pair (by adding it to Quad.program), there is still something off.
The view of the sky is squashed horizontally, and I need to apply a 2x scale on X to get this out. Then there was also a mysterious 0.43 scaling factor which needed to be applied to all axes to get the FOV right. I got this number empirically, through visual experimentation.
Then - as you also mentioned - there is an offset applied to both eyes in the shader, depending on the IPD, I guess?

The result is a small window in the view which allows a peek into the skybox without much eye damage. But this can't be the solution, and it still feels wrong.
[attached image]

I've forked the main repo and put everything I did into this branch. It should be compilable out of the box, as only small things were changed.
https://github.com/OGRECave/ogre-next/c ... re/skyTest

I think I'll just go with 6 planes and Hlms Unlit to avoid any further headaches.

Post by Slamy »

After tinkering with other stuff, I wasn't really satisfied with the 6-plane approach. The issue here is that I need more textures than usually required: not only the cubemap for the environmental reflections, but also textures for the 6 planes. (I currently don't know whether a cubemap texture can be used as six 2D textures...)
So I went back to this topic, as I kind of want to understand this and have a clean solution, and gave it a fresh start.
We discussed that adding something to AutoParamDataSource might be the best option. But it actually didn't take effect. And how could it? Rectangle2D, which is used for the sky, uses IdentityProjection and IdentityView, so having getVrProjectionMatrix(...) instead of getProjectionMatrixWithRSDepth() won't resolve this. There is still something in the path which I don't understand. If the Rectangle2D is not influenced by the camera, why does it change when I call

Code: Select all

mCamera->setFOVy(Ogre::Degree(10));
in the OpenVRCompositorListener?
This changes the FOV of the skybox, but not of the Pbs-rendered shapes. It's almost like there is yet another data path, apart from AutoParamDataSource, that still uses some data from mCamera and which I haven't spotted yet.

I'm also sorry for putting this in the wrong forum. If you like, please move this to the 2.x developer area; this isn't "Using Ogre in practice" anymore.

Post by dark_sylinc »

Hi!

Indeed, we use an identity view matrix. However, that's to ensure the 4 vertices of the fullscreen quad end up covering the screen in 2D (otherwise, when we rotate the camera, the 2D plane would end up somewhere else).

What's confusing you is that we send camera data through the 'normals' input attribute.

In C++, this can be found in OgreSceneManager.cpp:

Code: Select all

const Vector3 *corners = camera->getWorldSpaceCorners();
const Vector3 &cameraPos = camera->getDerivedPosition();

const Real invFarPlane = 1.0f / camera->getFarClipDistance();
Vector3 cameraDirs[4];
cameraDirs[0] = ( corners[5] - cameraPos ) * invFarPlane;
cameraDirs[1] = ( corners[6] - cameraPos ) * invFarPlane;
cameraDirs[2] = ( corners[4] - cameraPos ) * invFarPlane;
cameraDirs[3] = ( corners[7] - cameraPos ) * invFarPlane;

mSky->setNormals( cameraDirs[0], cameraDirs[1], cameraDirs[2], cameraDirs[3] );
Those camera directions are unrelated to the worldViewProjMatrix and are not in identity space. This is why the FOV change affects the sky.

If you're confused about what we're doing: this technique is derived from Reconstructing Position from Depth (parts 1, 2 & 3). In the original technique, the goal is to send the 4 corners of the view frustum and then reconstruct the view-space position using only the depth buffer.

However, here we're doing something much simpler: only drawing the sky.
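
In rough pseudo-C++ terms (an illustration of the idea, not actual Ogre code):

Code: Select all

// Illustration only, not actual Ogre code.
// cameraDir was built as ( corner - cameraPos ) / farClipDistance, so a point
// on the ray at view-space depth z sits exactly at cameraDir * z:
Ogre::Vector3 viewSpacePos = cameraDir * viewSpaceDepth; // the original technique

// For the sky nothing is reconstructed; the interpolated direction is used
// as-is to sample the cubemap "at infinity":
Ogre::Vector3 skyLookupDir = cameraDir;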

Post by Slamy »

Howdy! I've made great progress!

My first success was getting the correct FOV for each eye separately. I first thought that Frustum::setCustomProjectionMatrix was the way to go, but I was wrong: it was Frustum::setFrustumExtents. With this function I was able to get the right FOV, at least for one of the two eyes; the other was still wrong.
So I thought that all I need would be the eye frustum extents for each eye and also the world space corners for each eye.
I've changed OgreCamera a bit, allowing it to manage what I call VrWorldSpaceCorners. The scene manager uses this in tandem with Rectangle2D's capability to work in stereo rendering. I've added a second set of normals to the Rectangle2D and changed the shader a bit.
My current work allows - as it seems - a correct display of something infinitely far away for both eyes. Roughly, the frustum extents part looks like the sketch below.
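
(A sketch only; vrSystem is the vr::IVRSystem instance, and the sign conventions between OpenVR and Ogre may need adjusting.)

Code: Select all

// Sketch only; sign conventions between OpenVR and Ogre may need flipping.
// GetProjectionRaw() returns the tangents of the frustum half-angles per eye.
float left, right, top, bottom;
vrSystem->GetProjectionRaw( vr::Eye_Left, &left, &right, &top, &bottom );

// Feed them to the camera as tangent half-angle extents
camera->setFrustumExtents( left, right, top, bottom, Ogre::FET_TAN_HALF_ANGLES );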

Now remaining are some very specific issues I still need to resolve:

1) Very shady shader
The shader of this custom Rectangle2D works similarly to the shader of the RadialDensityMask, but is ugly:

Code: Select all

#version ogre_glsl_ver_330
#extension GL_ARB_shader_viewport_layer_array : require

vulkan_layout( OGRE_POSITION )	in vec2 vertex;
vulkan_layout( OGRE_NORMAL )	in vec3 normal;

in int gl_InstanceID;

vulkan( layout( ogre_P0 ) uniform Params { )
	uniform float ogreBaseVertex;
	uniform vec2 rsDepthRange;
	uniform mat4 worldViewProj;
vulkan( }; )

out gl_PerVertex
{
	vec4 gl_Position;
};

vulkan_layout( location = 0 )
out block
{
	vec3 cameraDir;
} outVs;

void main()
{
	gl_Position.xy = (worldViewProj * vec4( vertex.xy, 0, 1.0f )).xy;
	
	gl_Position.x += 3.5f * gl_InstanceID;
    
	gl_Position.z = rsDepthRange.y;
	gl_Position.w = 1.0f;
	outVs.cameraDir.xyz	= normal.xyz;
	
	gl_ViewportIndex = (gl_VertexID - ogreBaseVertex) >=  2 ? 1 : 0;
}
My problem here is that I have to move the second instance aside. I currently don't know how to prevent that second instance from being drawn. I want to control the destination viewport according to the vertex indices; in stereo rendering, the Rectangle2D seems to have double the amount of vertices, which I would like to use here.
Maybe you have a suggestion for a cleaner solution.

2) The eyes are not exactly at equal height.
I don't know how to describe it. It's almost like the horizon is off by a few pixels in height. This has a very weird effect on the horizontal edges, as both eyes see a different Hlms Pbs to skybox edge. Very distracting and certainly weird. I've printed the mLeftToRight vector from mVrData. The weird thing here is that both eyes are not at the same height: there is a vertical translation and one in depth. My expectation was that only X would be non-zero, with X being the IPD.

Code: Select all

mLeftToRight 0.064026 -0.000561 -0.002325
This is no bug in Ogre, as GetProjectionRaw delivers similar results.

Code: Select all

eyeFrustumExtents
          x         y        w         z
left  -1.162964 0.988530 1.059385 -1.048484
right -0.991386 1.167313 1.056323 -1.050955
x and y have to be different between left and right. But I would have expected the top and bottom extents to be the same. Maaaaaybe I'm still missing something here concerning the way OpenVR communicates these values.
This issue is so small that it can't be seen in the mirror window, only in the headset. :?

Alright. Time to go to bed.
See ya. :D

Post by Hilarius86 »

I don't use the OpenVR sample, but the SkyPostprocess one with our own VR implementation. I have also spotted the discrepancy, but I've postponed working on it for now. I'm looking forward to a solution and hope you can feed it back to us via a PR.

Post by Slamy »

Hilarius86 wrote: Thu Apr 29, 2021 10:49 am I don't use the OpenVR sample, but the SkyPostprocess one with our own VR implementation. I have also spotted the discrepancy, but I postponed working on it for now. Looking forward to a solution and hope you can feed it back to us via PR.
I really would like to contribute here. But it has to work, and the Ogre developers need to accept my solution.

The offset between both eyes is bothering me. I've searched online, and this is actually a design decision in some VR subsystems. I don't yet fully understand why this is a thing, but I have to assume that I can't get my SteamVR to do it differently.

I've tried to move my setup to something with less influence from the headset. The camera is fixed in place now, and the cubemap is replaced by something generated in software; the example SaintPetersBasilica.dds was not up to this task. I've attached the result. With a cubemap like this, it's far easier to see the problem. I've also moved the spheres and animated cubes of the example 100 units to the side, so I have something very distant for debugging.

[attached image]

To scale up the error, I've decided to squash the image horizontally, as this amplifies it:

[attached image]

Through this I've realized something scary: the view of the eyes is rotated for the Hlms-rendered plane, while the grid pattern is perfectly aligned to the horizon. What does this mean? Which of the two is incorrect? With a perspective projection, I would assume that the horizon is aligned to the coordinate system.

I got curious: is this something only present in Ogre's rendering? I went off to SteamVR Home and loaded something with a structure which I would expect to be parallel to the ground. I tried to hold the headset so that I got a pitch angle close to 0°.
This is the result, again squashed:

[attached image]

It turns out that the rendering of the left eye is rotated clockwise, while the right eye is rotated counter-clockwise. I don't know why this is the case, but I currently assume my problem comes from the fact that the world space corners derived from the frustum extents do not take this into account.

Post by Slamy »

I did it !!!
My assumption was correct: the frustum extents shall not be used. Instead, the corner coordinates must be calculated directly from the identity corners, not from the edges. The result must then be multiplied with the headToEye matrix coming from OpenVR, but with its translation component removed. Simplified, it looks like the sketch below.
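(Names in the sketch are made up; the real code lives on my branch.)

Code: Select all

// Sketch of the idea only; names are made up, the real code is on the branch.
// headToEye is the inverse of OpenVR's GetEyeToHeadTransform(); strip its
// translation so only the per-eye rotation is applied to the corners.
Ogre::Matrix4 eyeRot = headToEye;
eyeRot.setTrans( Ogre::Vector3::ZERO );

for( size_t i = 0; i < 4; ++i )
{
    // identityCorners[i]: frustum corner direction built directly from the
    // GetProjectionRaw() tangents, e.g. ( leftOrRight, topOrBottom, -1 )
    vrCameraDirs[i] = eyeRot * identityCorners[i];
}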
I now have a working example, which I've also pushed (as a messy something) to the branch feature/skyTest3 on my fork of ogre-next.

In the process I've also learned how to make equirectangular environment maps using Blender. :roll: lol

I will continue cleaning up my mess for a PR, so we can make this a permanent part of Tutorial_OpenVR.

Post by Slamy »

The code is cleaned up and a pull request has been created.
I've tested it against a non-instancing scenario too, so I currently don't expect regressions if someone uses the sky without instanced rendering.

One thing though: there is a TODO in the shader code, but I don't understand Ogre's internals at this point. I need to prevent a second drawing of the sky, as I perform the instancing without the InstanceId. The RadialDensityMask must also do this, at least I hope so; if not, it's currently rendered twice per eye and none of us knew, because you can't see it. It would be nice if someone from the core team could take a glimpse at this.

Have a nice day.